PLOS Computational Biology. 2021 Apr 2;17(4):e1008893. doi: 10.1371/journal.pcbi.1008893

Computation of the electroencephalogram (EEG) from network models of point neurons

Pablo Martínez-Cañada 1,2,3, Torbjørn V Ness 4, Gaute T Einevoll 4,5, Tommaso Fellin 1,3, Stefano Panzeri 1,2,*
Editor: Daniele Marinazzo6
PMCID: PMC8046357  PMID: 33798190

Abstract

The electroencephalogram (EEG) is a major tool for non-invasively studying brain function and dysfunction. Comparing experimentally recorded EEGs with neural network models is important to better interpret EEGs in terms of neural mechanisms. Most current neural network models use networks of simple point neurons. They capture important properties of cortical dynamics, and are numerically or analytically tractable. However, point neurons cannot generate an EEG, as EEG generation requires spatially separated transmembrane currents. Here, we explored how to compute an accurate approximation of a rodent’s EEG with quantities defined in point-neuron network models. We constructed different approximations (or proxies) of the EEG signal that can be computed from networks of leaky integrate-and-fire (LIF) point neurons, such as firing rates, membrane potentials, and combinations of synaptic currents. We then evaluated how well each proxy reconstructed a ground-truth EEG obtained when the synaptic currents of the LIF model network were fed into a three-dimensional network model of multicompartmental neurons with realistic morphologies. Proxies based on linear combinations of AMPA and GABA currents performed better than proxies based on firing rates or membrane potentials. A new class of proxies, based on an optimized linear combination of time-shifted AMPA and GABA currents, provided the most accurate estimate of the EEG over a wide range of network states. The new linear proxies explained 85–95% of the variance of the ground-truth EEG for a wide range of network configurations including different cell morphologies, distributions of presynaptic inputs, positions of the recording electrode, and spatial extensions of the network. Non-linear EEG proxies using a convolutional neural network (CNN) on synaptic currents increased proxy performance by a further 2–8%. Our proxies can be used to easily calculate a biologically realistic EEG signal directly from point-neuron simulations thus facilitating a quantitative comparison between computational models and experimental EEG recordings.

Author summary

Networks of point neurons are widely used to model neural dynamics. Their output, however, cannot be directly compared to the electroencephalogram (EEG), which is one of the most used tools to non-invasively measure brain activity. To allow a direct integration between neural network theory and empirical EEG data, here we derived a new mathematical expression, termed EEG proxy, which estimates with high accuracy the EEG based simply on the variables available from simulations of point-neuron network models. To compare and validate these EEG proxies, we computed a realistic ground-truth EEG produced by a network of simulated neurons with realistic 3D morphologies that receive the same synaptic input as the simpler network of point neurons. The newly obtained EEG proxies outperformed previous approaches and worked well under a wide range of network configurations with different cell morphologies, distributions of presynaptic inputs, positions of the recording electrode and spatial extensions of the network. The new proxies approximated well both EEG spectra and EEG evoked potentials. Our work provides important mathematical tools that allow a better interpretation of experimentally measured EEGs in terms of neural models of brain function.

Introduction

Electroencephalography is a powerful and widely used technique for non-invasively measuring neural activity, with important applications both in scientific research and in the clinic [1]. Electroencephalography has played a key role in the study of how both neural oscillations and stimulus-evoked activity relate to sensation, perception, cognitive and motor functions [2–4]. The electroencephalogram (EEG), like its intracranial counterpart, the local field potential (LFP), originates from the aggregation of all the electric fields generated by transmembrane currents across the surfaces of all neurons sufficiently close to the electrode [5–8]. The physics of how electromagnetic fields are generated from transmembrane currents is well understood, and mathematically described by forward models [6]. Yet, how to interpret changes in EEG across experimental conditions or diagnostic categories in terms of underlying neural processes remains challenging [1].

One way to better understand the EEG in terms of neural circuit mechanisms and to link theoretical models of brain functions to empirical EEG recordings is to compare EEG data with quantitative predictions obtained from network models. Network models of recurrently connected leaky-integrate-and-fire (LIF) point neurons are a current major tool in modelling brain function [9–11]. These models reduce the morphology of neurons to a single point in space and describe the neuron dynamics by a tractable set of coupled differential equations. These models are sufficiently simple to be understood thoroughly, either with simulations that are relatively light to implement, or by analytical approaches [12,13]. Despite their simplicity, they generate a wide range of network states and dynamics that resemble those observed in cortical recordings. They have been employed to satisfactorily explain a broad spectrum of different cortical mechanisms and cortical functions, such as sensory information coding [14,15], working memory [16,17], attention [18], propagating waves [19,20], non-rhythmic waking states [21,22], or the emergence of up and down states [23]. It remains an open question how to realistically compute EEGs from such widely used network models of simple point neurons.

A major problem in achieving the above goal is that in such LIF point neurons all transmembrane currents collapse into a single point in space and the resulting extracellular potential is, therefore, zero [6]. Previous studies comparing the simulation output of networks of simple model neurons without a spatial structure with measures of graded extracellular potentials such as EEGs or LFPs have used ad-hoc approaches to estimate the EEG from variables available from simulation of the network, including the average membrane potential [23–28], the average firing rate [29–31], the sum of all synaptic currents [13,32,33], or the sum of absolute values of synaptic currents [14,34]. However, the limitations and caveats of using such ad-hoc simplifications to compute the EEG have been rarely considered and tested. As a result, it is still unclear how best to compute EEGs directly from the output of point-like neuron network models [35,36].

In order to generate extracellular potentials, spatially extended neuron models, i.e., multicompartment neuron models, are required [37,38]. Previous studies have numerically computed the compound extracellular potential as the linear superposition of all single-cell distance-weighted transmembrane currents within a network of multicompartment neurons [39–41]. This approach is however computationally cumbersome, and it does not allow an easily tractable and exhaustive analysis of the dynamics of such networks. One alternative could be using a hybrid scheme [30,35,42,43] that projects the spike times generated by the point-neuron network onto morphologically detailed three-dimensional (3D) neuron models, and then computing the electric field generated by the currents flowing through these 3D neuron models. This scheme provides a simplification by separating the study of the network dynamics (described by the point-neuron network model) from that of field generation (described by the multicompartment neuron model), but still requires running cumbersome multicompartment model simulations for each simulation of the LIF network.

In this article, we implemented a much simpler and lighter method to predict the EEG based simply on the variables available directly from simulation of a point-neuron network model (e.g., membrane potentials, spike times or synaptic currents of the neuron models). We constructed several different candidate approximations (termed proxies) of the EEG that can be computed from networks of LIF point neurons. We then evaluated how well each proxy reconstructed a ground-truth EEG obtained when the synaptic input currents of the LIF model network were injected into an analogous 3D network model of multicompartmental neurons with realistic cell morphologies. This approach was shown to perform remarkably well in predicting the LFP [42], based on a specific weighted sum of synaptic currents from the point-neuron network model, for a specific network state (i.e., asynchronous irregular) of the LIF model network. However, the previously obtained LFP proxy did not include a head model that approximated the different geometries and electrical conductivities of the head necessary for computing a realistic EEG signal recorded by scalp electrodes. In this study, to compute the EEG, we chose a simple head model suitable for rodents. These animal species are the ones most commonly used in laboratories to invasively record neural activity. Studying the rodent EEG is thus directly relevant for interpreting many available neuroscientific data, and it facilitates comparison of simulation results with empirical data. Additionally, we performed a proof-of-concept simulation of the ground-truth EEG and proxies on a complex human head model.

We derived a new proxy for the EEG that was validated against detailed simulations of the multicompartment model network, investigating different cell morphologies, variations of distribution of presynaptic inputs and changes in position of the recording electrode and in the spatial extension of the network model. Unlike previous studies which focused on approximations valid in specific network states [42], we also validated our proxies across the repertoire of network states displayed by recurrent network models, namely the asynchronous irregular (AI), synchronous irregular (SI), and synchronous regular (SR) [12] states, with different patterns of oscillations and individual cell activity. We found that a new class of simple EEG proxies, based on a weighted sum of synaptic currents, outperformed previous approaches, including those optimized for predicting LFPs [14,42]. The new EEG proxies closely captured both the temporal and spectral features of the EEG. We also provided a non-linear refinement using a convolutional neural network to estimate the EEG from synaptic currents, which yielded moderate improvements over the linear proxy at the expense of increasing complexity of the EEG estimation model.

Results

Computing the ground-truth EEG and EEG proxies

We investigated how to compute a simple but accurate approximation of the EEG (“EEG proxy” hereafter) that would be generated by the activity of a LIF point-neuron network if its neurons had a realistic spatial structure. We therefore first simulated a well-established model of a recurrent network of LIF point neurons. We then fed the spiking activity generated by the LIF point-neuron network into a realistic 3D multicompartmental model network of a cortical layer and computed the EEG generated by this activity. We finally studied how to approximate this EEG simply by using the variables directly available from the simulation of the point-neuron network model.

The LIF point-neuron network was constructed using a well-established two-population (one excitatory and one inhibitory) model of a recurrent cortical circuit [12], illustrated in Fig 1A. The network was composed of 5000 neurons: 4000 were excitatory (i.e., their projections onto other neurons formed AMPA-like excitatory synapses) and 1000 inhibitory (i.e., their projections formed GABA-like synapses). The neurons were randomly connected with a connection probability between each pair of neurons of 0.2. This means that, on average, the number of incoming excitatory and inhibitory connections onto each neuron was 800 and 200, respectively. The network received thalamic synaptic input carrying sensory information, as well as stimulus-unrelated inputs representing slow ongoing fluctuations of cortical activity. This type of network can generate a repertoire of different network states that map well onto empirical observations of cortical dynamics [12,44]. Fig 1B shows, as an example, the asynchronous irregular spiking activity generated by a subset of the excitatory and inhibitory populations in response to a low firing rate of the thalamic input. We have shown in previous work that this model captures well (explaining more than 90% of the variance of empirical data) the dynamics of primary visual cortex under naturalistic stimulation [14,34,45].
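For illustration, a minimal sketch of such a two-population LIF network is given below, written in Brian2. The parameter values (time constants, synaptic efficacies, external rate) are placeholder assumptions and not the values of the model in [12]; the exponentially decaying synaptic variables are recorded as stand-ins for the AMPA and GABA currents that the EEG proxies use later.

```python
# Minimal two-population LIF network sketch (Brian2). Parameters are illustrative
# placeholders, not the values of the Brunel-type model used in the paper [12].
from brian2 import (NeuronGroup, Synapses, PoissonInput, StateMonitor,
                    SpikeMonitor, run, ms, mV, Hz)

N_E, N_I = 4000, 1000                    # excitatory and inhibitory population sizes
p_conn = 0.2                             # pairwise connection probability
tau_m, tau_ampa, tau_gaba = 20*ms, 2*ms, 5*ms
V_thr, V_reset, E_L = -50*mV, -70*mV, -70*mV

eqs = '''
dv/dt = (-(v - E_L) + v_ampa + v_gaba) / tau_m : volt (unless refractory)
dv_ampa/dt = -v_ampa / tau_ampa : volt   # stand-in for the AMPA synaptic current
dv_gaba/dt = -v_gaba / tau_gaba : volt   # stand-in for the GABA synaptic current
'''
neurons = NeuronGroup(N_E + N_I, eqs, threshold='v > V_thr', reset='v = V_reset',
                      refractory=2*ms, method='euler')
neurons.v = E_L
exc, inh = neurons[:N_E], neurons[N_E:]

J_E, J_I = 0.5*mV, -2.0*mV               # illustrative efficacies (g = |J_I| / J_E = 4)
syn_exc = Synapses(exc, neurons, on_pre='v_ampa_post += J_E')
syn_exc.connect(p=p_conn)
syn_inh = Synapses(inh, neurons, on_pre='v_gaba_post += J_I')
syn_inh.connect(p=p_conn)

# Thalamic plus ongoing cortico-cortical drive, modelled as Poisson input
ext_drive = PoissonInput(neurons, 'v_ampa', N=800, rate=25*Hz, weight=J_E)

# Record the synaptic variables of the excitatory cells (used to build the proxies)
syn_mon = StateMonitor(exc, ['v_ampa', 'v_gaba'], record=True, dt=0.5*ms)
spk_mon = SpikeMonitor(neurons)
run(1000*ms)
```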

Fig 1. Overview of the network models and computation of proxies and EEG.


(A) Sketch of the point-neuron network with recurrent connections between two types of populations: excitatory cells (pyramidal cells, PY) and inhibitory cells (interneurons, IN). Each population receives two kinds of external inputs: global ongoing cortico-cortical activity and thalamic stimulation. (B) Raster plot of spiking activity from a subset of cells in each population. (C) Sketch of the multicompartment neuron models used for generation of the EEG. Two representative model neurons are depicted, a pyramidal cell on the left and an interneuron on the right, positioned within a cylinder of radius r = 0.5 mm. While AMPA synapses are homogeneously distributed over all compartments of both types of cells, GABA synapses on pyramidal cells are located only below Z = 8.5 mm. The EEG recording electrode is situated on the surface of the scalp layer. (D) Comparison between example proxies calculated from the point-neuron network and the ground-truth EEG computed from the multicompartment neuron model network. (E) EEG generated in the multicompartment neuron network by all neurons (dotted black), only pyramidal neurons (dashed red) or only interneurons (solid blue). (F) Corresponding power spectra for the three sets depicted in (E).

We then computed a “ground-truth” EEG (referred to simply as “EEG” in the paper), following the hybrid modelling scheme [30,35,42,43], and used this ground-truth EEG to compare the performance of the different proxies. To do so, we created a network of unconnected multicompartment neuron models with realistic morphologies and homogeneous spatial distribution within the circular section of a cylinder of radius r = 0.5 mm (Fig 1C), which roughly approximates the spatial extension of a layer in a cortical column. We focused on computing the EEG generated by neurons with somas positioned in a single cortical layer, layer 2/3 (L2/3), so that somas of multicompartment neurons are aligned along the same Z-axis coordinate (150 μm below the reference point Z = 8.5 mm). We chose to position somas in L2/3 based on previous computational work suggesting that this layer gives a large contribution to extracellular potentials [30,35]. The reference point Z = 8.5 mm was chosen to approximate the radial distance between the center of a spherical rodent head model and the brain tissue [46]. In this specific set of simulations performed for optimizing the proxies, we used the reconstructed morphology of a broad-tuft layer-2/3 pyramidal cell from rat somatosensory cortex available in the Neocortical Microcircuitry (NMC) portal [47,48], referenced as dend-C250500A-P3_axon-C260897C-P2-Clone_9 (see “Methods”). We chose this pyramidal-cell morphology because its open-field geometry is expected to generate large extracellular potentials. Inhibitory cells of the model were implemented using the morphology of L2/3 large basket cell interneurons (the most numerous class in L2/3 [47]).

AMPA synapses were homogeneously positioned along the Z-axis in both cell types, representing uniformly distributed excitatory input. In our default setting, we assumed that all inhibitory synapses are made by the large basket cell interneurons of the model, which, based on their morphology, would be principally located below the reference point Z = 8.5 mm. Thus, all dendrites of inhibitory cells receive GABA synapses, while only those dendrites of excitatory cells below Z = 8.5 mm receive GABA synapses, representing perisomatic inhibition.

EEGs were then generated from transmembrane currents of multicompartment neurons in combination with a forward-modelling scheme based on volume conduction theory [6]. To approximate the different geometries and electrical conductivities of the head, we computed the EEG using the four-layered spherical head model described in [35,49]. In this model, the different layers represent the brain tissue, cerebrospinal fluid (CSF), skull, and scalp, with radii of 9, 9.5, 10 and 10.5 mm, respectively, which approximate the dimensions of a rodent head [46]. The conductivities were set to the default values of 0.3, 1.5, 0.015 and 0.3 S/m, respectively. The simulated EEG electrode was placed on the scalp surface, at the top of the head model (Fig 1C).
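As a sketch of this forward-modelling step, the quantity that the four-sphere model needs from the multicompartment simulation is the population current dipole moment; a minimal implementation is shown below. Array names and units are illustrative assumptions, and the four-sphere mapping itself is provided by existing tools such as those used in [35] (e.g., LFPy/lfpykit).

```python
# Sketch: current dipole moment from compartmental transmembrane currents. In the
# volume-conduction framework, the scalp EEG is obtained by passing this dipole moment
# through the four-sphere head model (available, e.g., in the LFPy/lfpykit tools used
# in [35]). Array names and units are illustrative assumptions.
import numpy as np

def current_dipole_moment(imem, positions):
    """
    imem:      (n_compartments, n_timesteps) transmembrane currents of one cell (nA)
    positions: (n_compartments, 3) compartment midpoints (um)
    returns:   (3, n_timesteps) current dipole moment p(t) = sum_k r_k * I_k(t)
    """
    return positions.T @ imem

# The population dipole is the sum of single-cell dipoles over all pyramidal cells;
# the four-sphere model then maps it to the potential at the scalp electrode:
# p_total = sum(current_dipole_moment(im, pos) for im, pos in pyramidal_cells)
```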

The time series of spikes of individual point neurons were finally mapped to synapse activation times on corresponding postsynaptic multicompartment neurons. Each multicompartment neuron was randomly assigned to a unique neuron in the point-neuron network and received the same input spikes as the equivalent point neuron. Since the multicompartment neurons were not connected to each other, they were not involved in the network dynamics and their only role was to transform the spiking activity of the point-neuron network into a realistic estimate of the EEG. The EEG computed from the multicompartment neuron model network was then used as benchmark ground-truth data against which we compared different candidate proxies (Fig 1D).

Dynamic states of network activity of the point-neuron network model

The LIF point-neuron network model we chose is known to generate a number of qualitatively different activity states [12,44] with patterns of variability of spike activity and network oscillations observed in cortical data. Here we recapitulate the different network states we generated for the LIF point-neuron network and that were used to evaluate the different proxies. The states generated by the LIF neuron network can be mapped by systematically varying across simulations the thalamic input (ν0) and the relative strength of inhibitory synapses (g). We then used three different measures to describe the network dynamics: synchrony, irregularity, and mean firing rate [12,44].

In Fig 2A, we plot these three descriptors as a function of g and ν0. We identified three different regions of the parameter space, each corresponding to a qualitatively different network state, according to the criteria employed by Kumar and collaborators [44]. The asynchronous irregular (AI) state is characterized by a low value of network synchrony (< 0.01), an irregularity level close to the value of a Poisson generator (> 0.8) and a very low firing rate, below 2 spikes/s. The synchronous irregular (SI) state has a level of network synchrony higher than that of the AI state (between 0.01 and 0.1), but with highly irregular firing of individual neurons (irregularity above 0.8). In the SI state, neurons spike at a low rate (< 5 spikes/s). In the synchronous regular (SR) state, the network exhibits highly synchronous activity (> 0.1), more regular single-cell spiking (irregularity below 0.8) and a high spiking rate (> 60 spikes/s). Spike raster plots of excitatory and inhibitory cell populations of representative samples selected for each network state are shown in Fig 2B.
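The exact definitions of the three descriptors follow [12,44]; the sketch below uses common conventions (synchrony as the variance of the binned population rate normalized by its squared mean, irregularity as the mean coefficient of variation of single-neuron inter-spike intervals), which are stated here as assumptions for illustration.

```python
# Sketch of the three network-state descriptors. The precise definitions used in the
# paper follow [12,44]; the formulas below are common conventions, assumed here.
import numpy as np

def network_descriptors(spike_times, spike_ids, n_neurons, t_max_ms, bin_ms=3.0):
    """spike_times (ms) and spike_ids (integer neuron indices) as NumPy arrays."""
    # Mean firing rate, in spikes/s per neuron
    mean_rate = 1e3 * len(spike_times) / (n_neurons * t_max_ms)

    # Synchrony: variance of the binned population rate over its squared mean
    bins = np.arange(0.0, t_max_ms + bin_ms, bin_ms)
    pop_count, _ = np.histogram(spike_times, bins=bins)
    synchrony = pop_count.var() / (pop_count.mean() ** 2 + 1e-12)

    # Irregularity: mean coefficient of variation of inter-spike intervals
    cvs = []
    for i in range(n_neurons):
        t_i = np.sort(spike_times[spike_ids == i])
        if t_i.size > 2:
            isi = np.diff(t_i)
            cvs.append(isi.std() / isi.mean())
    irregularity = float(np.mean(cvs)) if cvs else np.nan

    return synchrony, irregularity, mean_rate
```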

Fig 2. Optimization and validation of proxies for different sets of network parameters (ν0, g).


(A) Dynamic states of network activity defined by the control parameters g and ν0. The labels AI (asynchronous irregular), SI (synchronous irregular) and SR (synchronous regular) indicate the combinations of parameters that have been selected as representative samples of each network state. The synchrony and irregularity are unitless, the mean firing rate (FR) is measured in spikes/s. (B) Spiking activity from a subset of cells of the excitatory and inhibitory populations for the same samples shown in (A). (C) Optimized parameters of ERWS1 and ERWS2 (Eqs 7–9) as a function of the thalamic firing rate ν0. We considered two alternative scenarios. In the causal version of the proxy, the output depends only on present and past inputs so that the time delay parameters (τAMPA and τGABA) are constrained to be positive. In contrast, non-causal proxies can be assigned positive or negative time delays. (D) Outputs of non-causal ERWS1 (bottom row) and non-causal ERWS2 (top row) proxies for different network states compared to ground-truth EEGs. (E) Spiking activity for the same simulation cases of panel D. (F) Average performance, evaluated by using the coefficient of determination R2, of |I|, LRWS, ERWS1 (non-causal) and ERWS2 (non-causal) calculated on the validation dataset as a function of ν0 (same colors as shown in (G)). The dotted line R2 = 0.9 was chosen arbitrarily as a reference value of good performance and was used only for visual inspection of results. (G) Average R2 of every proxy across all network instantiations i of the validation dataset (c is causal, n is non-causal). The same colors shown in this legend are used throughout the article to identify the different proxies. Tests for statistical significance are computed only for the pair ERWS1 (non-causal) and ERWS2 (non-causal) and for the pair ERWS1 (causal) and ERWS2 (causal). (H) R2 across network states. (I) Power spectral density (PSD) of the proxies and the EEG (in black). (J) Average R2 computed across the 5–200 Hz frequency range of the log10(PSDs) of all network instantiations i of the validation dataset.

Optimization and validation of proxies across different network states

We investigated how best to compute the proxy that combines the variables available directly from the simulation of a LIF point-neuron network model for accurately predicting the EEG over a wide range of network activity states. We explored different proxies that have been commonly used in previous literature for estimating the extracellular signal from point-neuron networks: (i) the average firing rate (FR), (ii) the average membrane potential (Vm), (iii) the average sum of AMPA currents (AMPA), (iv) the average sum of GABA currents (GABA), (v) the average sum of synaptic currents (∑I) and (vi) the average sum of their absolute values (|I|). Furthermore, we propose here a new class of current-based proxies, (vii) the EEG reference weighted sum 1 (ERWS1) and (viii) the EEG reference weighted sum 2 (ERWS2), which are optimized linear combinations of time-delayed measures of AMPA and GABA currents. Indeed, an optimized weighted sum of synaptic currents (defined here as LRWS) was previously shown to be a robust proxy for the LFP [42]. The difference between ERWS1 and ERWS2 is that the parameters of ERWS2 adapt their values as a function of the strength of the external thalamic input ν0, whereas the parameters of ERWS1 do not depend on ν0 (see “Methods”).
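Given the population variables of the excitatory cells, the six parameter-free proxies reduce to one-line expressions, as in the sketch below. Array names and shapes are illustrative assumptions; note that averaging instead of summing across neurons only rescales a proxy and does not affect R2.

```python
# Parameter-free proxies (i)-(vi), computed from the excitatory point neurons only.
# Inputs are illustrative arrays of shape (n_exc_neurons, n_timesteps); summing vs
# averaging across neurons only rescales a proxy and leaves R2 unchanged.
import numpy as np

def simple_proxies(firing_rate, v_m, i_ampa, i_gaba):
    return {
        'FR':   firing_rate.mean(axis=0),                        # (i) firing rate
        'Vm':   v_m.mean(axis=0),                                # (ii) membrane potential
        'AMPA': i_ampa.sum(axis=0),                              # (iii) AMPA currents
        'GABA': i_gaba.sum(axis=0),                              # (iv) GABA currents
        'sumI': (i_ampa + i_gaba).sum(axis=0),                   # (v) sum of currents
        'absI': (np.abs(i_ampa) + np.abs(i_gaba)).sum(axis=0),   # (vi) sum of |currents|
    }
```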

We only considered the transmembrane currents of pyramidal cells to generate the EEG (in the multicompartment neuron network) because the contribution of transmembrane currents of interneurons to the EEG was shown to be negligible (Fig 1E and 1F), in line with the findings of Ref. [35] for the EEG and Ref. [42] for the LFP. Accordingly, we computed proxies of the LIF neuron network only using excitatory neurons. It is important to bear in mind, though, that interneurons play an indirect role in generating the EEG in our models, because GABAergic currents in pyramidal cells depend on synaptic input from interneurons.

The firing rate of inhibitory neurons might be expected to contribute as well to the FR proxy and, as a consequence, to the EEG, as observed in Ref. [30]. To keep consistency with the definition of the other proxies, we decided to compute the FR proxy based only on firing rates of excitatory cells. We checked that using a proxy computed on firing rates of both excitatory and inhibitory cells gave an EEG reconstruction accuracy considerably poorer than that of the proxies based on synaptic currents (proxies iii to viii above).

The first six proxies, taken from previous literature, are parameter-free. The two new ones, ERWS1 and ERWS2, have 3 and 9 free parameters, respectively, which need to be optimized (Eqs 7–9). Following previous work [42], these parameters are the factor α describing the relative ratio between the two currents and a specific delay for each type of current (τAMPA, τGABA). We computed the values of these parameters by a cross-validated optimization of the predicted EEG across the different network states seen for the LIF model network.

For optimization (i.e., parameter training) of the proxies, we generated a large set of numerical simulations (522 simulations) by systematically varying the values of g and ν0 over a wide range of states. In each simulation instantiation, we set given values of g and ν0 and used different random initial conditions (e.g., recurrent connections of the point-neuron network or soma positions of multicompartment neurons). The best-fit values of ERWS1 and ERWS2 were calculated by minimizing the sum of square errors between the ground-truth EEG and the proxy for all network instantiations of the optimization dataset (see “Methods”, Eq 11).
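A simplified sketch of this fitting step for ERWS1 is shown below; the grid search, the 0.5 ms time step, the edge padding of the shifted currents and the z-scoring of the signals are assumptions standing in for the exact procedure of Eq 11 (see “Methods”).

```python
# Sketch of fitting the ERWS1 parameters (alpha, tau_AMPA, tau_GABA) by minimizing the
# summed squared error between proxy and ground-truth EEG over all instantiations of
# the optimization dataset (a simplified stand-in for the paper's Eq 11).
import numpy as np

DT = 0.5  # ms, assumed time step

def shift_ms(x, tau_ms):
    """Delay a series by tau_ms (negative values advance it), padding at the edges."""
    n = int(round(tau_ms / DT))
    if n > 0:
        return np.concatenate([np.full(n, x[0]), x[:-n]])
    if n < 0:
        return np.concatenate([x[-n:], np.full(-n, x[-1])])
    return x

def zscore(x):
    return (x - x.mean()) / (x.std() + 1e-12)

def erws1(i_ampa, i_gaba, alpha, tau_a, tau_g):
    return shift_ms(i_ampa, tau_a) - alpha * shift_ms(i_gaba, tau_g)

def fit_erws1(dataset, causal=True):
    """dataset: list of (summed AMPA, summed GABA, ground-truth EEG) per instantiation."""
    tau_grid = np.arange(0.0, 10.5, 0.5) if causal else np.arange(-5.0, 5.5, 0.5)
    alpha_grid = np.arange(0.0, 2.05, 0.05)
    best, best_err = None, np.inf
    for tau_a in tau_grid:
        for tau_g in tau_grid:
            for alpha in alpha_grid:
                err = sum(np.sum((zscore(eeg) -
                                  zscore(erws1(a, g, alpha, tau_a, tau_g))) ** 2)
                          for a, g, eeg in dataset)
                if err < best_err:
                    best, best_err = (alpha, tau_a, tau_g), err
    return best  # (alpha, tau_AMPA, tau_GABA)
```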

We then cross-validated performance of the proxies by first computing how well they approximated EEGs generated by networks with the same properties that were used during proxy training, and then evaluating the proxies in networks in which we changed some of their features (e.g., cell morphologies or network size). Our reasoning was that if a proxy trained on some specific network features approximated well the EEG simulated across different network configurations, then we could hypothesize that our EEG proxies captured important and general properties of the relationship between the EEG and neural activity and thus could be used under a wide range of conditions.

Fig 2C shows the best-fit parameters (α, τAMPA and τGABA) found by the optimization algorithm for the two alternative scenarios considered here: causal and non-causal proxies (see also Table 1). For causal proxies, the predicted EEG depended only on present and past values of AMPA and GABA currents. Thus, the time delay parameters τAMPA and τGABA (quantifying the delay by which the synaptic current contributes to the EEG) were constrained during optimization to be non-negative. For non-causal proxies, time delay parameters can take positive and negative values. Non-causal relationships between measured extracellular potentials and neural activity at multiple sites may emerge because of closed-loop recurrent interactions within the network [6]. The mathematical expressions of the optimized causal proxies (ΦERWS1(t) and ΦERWS2(t, ν0)) are:

Φ_ERWS1(t) = Σ_exc I_AMPA(t) − 0.1 · (Σ_exc I_GABA(t − 3.1 ms)),   (1)
Φ_ERWS2(t, ν0) = Σ_exc I_AMPA(t) − (0.5 ν0^0.5) · (Σ_exc I_GABA(t + 1.5 ν0^0.2 ms − 4 ms)).   (2)

Table 1. Parameters of ERWS1 and ERWS2.

Proxy Optimized values
ERWS1 (causal) τAMPA = 0 ms, τGABA = 3.1 ms, α = 0.1
ERWS2 (causal) a1 = 0, b1 = 0, c1 = 0, a2 = -1.5, b2 = 0.2, c2 = 4, a3 = 0.5, b3 = 0.5, c3 = 0
ERWS1 (non-causal) τAMPA = -0.9 ms, τGABA = 2.3 ms, α = 0.3
ERWS2 (non-causal) a1 = -0.6, b1 = 0.1, c1 = -0.4, a2 = -1.9, b2 = 0.6, c2 = 3, a3 = 1.4, b3 = 1.7, c3 = 0.2

Expressions of the optimized non-causal proxies (where ν0 is unitless) are:

Φ_ERWS1(t) = Σ_exc I_AMPA(t + 0.9 ms) − 0.3 · (Σ_exc I_GABA(t − 2.3 ms)),   (3)
Φ_ERWS2(t, ν0) = Σ_exc I_AMPA(t + 0.6 ν0^0.1 ms + 0.4 ms) − (1.4 ν0^1.7 + 0.2) · (Σ_exc I_GABA(t + 1.9 ν0^0.6 ms − 3 ms)).   (4)
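Eqs 1–4 translate directly into code. The sketch below implements the four optimized expressions, using the convention that a positive delay looks into the past; the 0.5 ms time step and the edge handling of the shifted currents are assumptions.

```python
# Optimized ERWS proxies (Eqs 1-4), computed from the summed AMPA and GABA currents of
# the excitatory population. Positive tau = delay (past values), negative = advance.
import numpy as np

DT = 0.5  # ms, assumed time step

def shift_ms(x, tau_ms):
    n = int(round(tau_ms / DT))
    if n > 0:
        return np.concatenate([np.full(n, x[0]), x[:-n]])
    if n < 0:
        return np.concatenate([x[-n:], np.full(-n, x[-1])])
    return x

def erws1_causal(i_ampa, i_gaba):
    return i_ampa - 0.1 * shift_ms(i_gaba, 3.1)                          # Eq 1

def erws2_causal(i_ampa, i_gaba, nu0):
    tau_gaba = -1.5 * nu0**0.2 + 4.0
    alpha = 0.5 * nu0**0.5
    return i_ampa - alpha * shift_ms(i_gaba, tau_gaba)                   # Eq 2

def erws1_noncausal(i_ampa, i_gaba):
    return shift_ms(i_ampa, -0.9) - 0.3 * shift_ms(i_gaba, 2.3)          # Eq 3

def erws2_noncausal(i_ampa, i_gaba, nu0):
    tau_ampa = -0.6 * nu0**0.1 - 0.4
    tau_gaba = -1.9 * nu0**0.6 + 3.0
    alpha = 1.4 * nu0**1.7 + 0.2
    return shift_ms(i_ampa, tau_ampa) - alpha * shift_ms(i_gaba, tau_gaba)  # Eq 4
```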

For both ERWS1 and ERWS2, in the non-causal versions, the time delay parameters were small (few milliseconds) but had opposite signs, τGABA was positive while τAMPA was negative (Fig 2C). In the causal version of both proxies, we observed a similar trend but τAMPA was constrained to 0 by the optimization. Thus, the best EEG proxies depend on past values of GABA synaptic currents and on current and future values of AMPA synaptic currents. These values are different from the optimal time delays (τGABA = 0 ms and τAMPA = 6 ms) found for the LFP in Ref. [42]. One reason for the observed difference between the previous LFP proxy and our new EEG proxies may relate to differences in spatial integration properties of the EEG signal and the LFP signal. Another probable cause of this difference is that in Ref. [42] the LFP proxy was optimized over a much smaller range of network states and external input rates (ν0 < 6 spikes/s). Indeed, our results for ERWS2 show that optimal values of τGABA exhibit strong adaptation towards τGABA = 0 ms within the low regime of the external rate ν0. The parameter α, which expresses the ratio of the contribution to the EEG of GABA relative to AMPA synaptic currents, also exhibits a strong adaptation. The dependence of α on the value of input rate ν0 in Fig 2C is particularly relevant because it reflects a larger weight of GABA currents for low values of ν0 and the opposite effect, stronger weight of AMPA currents, as the external rate increases.

We next explored the performance of proxies on networks with the same properties as those used for training (i.e., same network size and cell morphologies). To quantitatively evaluate the performance of all proxies, we computed for each proxy the coefficient of determination R2, which represents the fraction of the EEG variance explained. The average R2 calculated on the validation dataset (Fig 2G) shows a clear superiority of the new class of proxies. Both the causal and non-causal versions of ERWS1 and ERWS2 outperform all the other proxies, and the non-causal versions reach the best overall performance (ERWS1: R2 = 0.94 and ERWS2: R2 = 0.95). In agreement with previous results for the LFP [42], the three proxies that gave the worst fits were FR, ∑I and Vm.

To understand if the performance of proxies depended on the specific state of network activity, we first examined the performance of the most interesting proxies (|I|, LRWS, ERWS1 (non-causal) and ERWS2 (non-causal)) separately for different values of the input rate ν0. We found that while LRWS performed well for low input rates (the range of external rates for which it was optimized [42]), its performance rapidly dropped with ν0 (Fig 2F). The other three proxies maintained a high R2 for the whole spectrum of firing rates studied here, with ERWS1 and ERWS2 performing notably better than |I|. Note also that ERWS2 is the only proxy that yields a value of R2 above 0.9 for all firing rates. We then computed the performance of these proxies separately for different types of network states. We found that the new proxies developed here, ERWS1 and ERWS2, produced accurate fits of the EEG for all network states (Fig 2H), while accuracy of EEG approximations made by the other proxies was less uniform across network states.

The above analyses quantified how well the proxies approximated the actual values of the EEG in the time domain. We next examined how well the proxies approximated the overall power spectrum of the EEG rather than all variations of the EEG time series. In Fig 2I we show power spectral density (PSD) functions of all the proxies for the AI and SI states, compared to spectral responses of the EEG. In the whole frequency range considered (5–200 Hz), all proxies provided a qualitatively good fit of the EEG power spectrum, except ∑I, which attenuated low frequencies and amplified high frequencies. In Fig 2J we report the average R2 computed for the log10(PSD) across all data points of the validation dataset. We logarithmically weighted the spectra to prevent R2 from being dominated by low frequencies, which would neglect errors at high frequencies. The performance obtained for power spectra confirmed the superiority of ERWS1 and ERWS2 also in the spectral domain. Interestingly, the average membrane potential (Vm), whose performance in the time domain was poor, performed better in the spectral domain. In contrast, the firing rate remained a poorly performing proxy also in the spectral domain.
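Both performance measures can be sketched as follows; the Welch parameters, sampling rate and the z-scoring of the signals before computing the time-domain R2 are assumptions.

```python
# Sketch of the two performance measures: R^2 between proxy and ground-truth EEG in the
# time domain, and R^2 between their log10 power spectra over 5-200 Hz.
import numpy as np
from scipy.signal import welch

FS = 2000.0  # Hz, assumed sampling rate for a 0.5 ms time step

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def zscore(x):
    return (x - x.mean()) / x.std()

def spectral_r2(eeg, proxy, fmin=5.0, fmax=200.0):
    f, psd_eeg = welch(eeg, fs=FS, nperseg=1024)
    _, psd_proxy = welch(proxy, fs=FS, nperseg=1024)
    band = (f >= fmin) & (f <= fmax)
    return r_squared(np.log10(psd_eeg[band]), np.log10(psd_proxy[band]))

# Time-domain performance is computed here on z-scored signals, so that only the shape
# of the proxy, and not its absolute scale, matters:
# r2_time = r_squared(zscore(eeg), zscore(proxy))
```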

Time-shifted variants of proxies

The ERWS proxies were optimized for EEG prediction by choosing optimal values for the time shifts between neural activity and the EEG. It is thus possible that the superior performance of the ERWS proxies over all others may have been due to the fact that the other proxies were not optimally time shifted. To investigate this hypothesis, we generated optimized time-shifted versions of all the other proxies by computing the cross-correlation between the ground-truth EEG and each of the other proxies and choosing the optimum time shift of each proxy as the lag of the cross-correlation peak. We then compared the performance of the time-shifted versions of the proxies in predicting the EEG with the performance of the ERWS proxies.
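A sketch of this procedure is given below: the lag of the cross-correlation peak is found by scanning a window of lags, and the proxy is then realigned by that lag. The 0.5 ms time step, the lag window and the circular shift used for realignment are assumptions.

```python
# Sketch: optimum time shift of a proxy as the lag of the peak cross-correlation with
# the ground-truth EEG (0.5 ms step; lag window and circular realignment are assumptions).
import numpy as np

DT = 0.5  # ms

def best_lag_ms(eeg, proxy, max_lag_ms=20.0):
    """Return the lag (ms) at which eeg(t) and proxy(t + lag) are maximally correlated."""
    max_lag = int(max_lag_ms / DT)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.corrcoef(eeg[max_lag:-max_lag],
                      proxy[max_lag + k: len(proxy) - max_lag + k])[0, 1] for k in lags]
    return lags[int(np.argmax(cc))] * DT

def realign(proxy, lag_ms):
    """Shift the proxy so that its best-correlated samples line up with the EEG."""
    return np.roll(proxy, -int(round(lag_ms / DT)))
```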

In this analysis, we recomputed the optimum time shift of every proxy separately for each network state, whereas the parameters of the ERWS proxies were jointly optimized (see previous section) over the entire simulated EEG dataset spanning all possible network states. Thus, this comparison was clearly favorable to the other proxies. Nevertheless, we still found that the ERWS proxies outperformed all previous proxies for the majority of network states. Only in the AI state, we observed that the LRWS proxy slightly outperformed ERWS1 and ERWS2. The ERWS2 proxy was the only one providing remarkably good performance across all states (R2 > 0.9 over all states).

Further results emerged from this analysis. Two proxies, FR and Vm, clearly improved the quality of their fits after time shifting, but they required opposite time shifts: while FR was delayed, Vm was moved forward in time. A spike is a much faster event than the dipoles generated by synaptic activity and, as a result, a firing-rate proxy is expected to exhibit faster temporal changes than the EEG signal. By contrast, somatic integration of the postsynaptic membrane potentials following presynaptic spiking is a slower process that might lead to a signal that is more low-pass filtered than the EEG.

When comparing AMPA and GABA proxies, we observed that, in the AI state (Fig 3A), the temporal dynamics of the EEG signal was better approximated by the GABA proxy, whereas AMPA currents showed a faster response. Indeed, the performance of the AMPA proxy was improved after applying the corresponding time shift. As the firing rate of the external input increased and switched the network state from AI to SI (Fig 3B), the temporal evolution of the EEG began to diverge from GABA currents and, instead, AMPA currents were seen to better approximate the EEG.

Fig 3. Optimum time shift of proxies that maximizes cross-correlation with the EEG.


Comparison of the outputs of proxies and the ground-truth EEG before (left) and after (right) applying the optimum time shift, with the optimum time shift for each proxy and network state indicated on the right. Note that some proxies have positive time shifts for all network states (e.g., FR), while others (e.g., GABA) change the sign of the time shift when passing from the AI to the SR state. The network states shown are the following: AI in panel A, SI in panel B and SR in panel C. On the right: R2 before (color bars) and after (black bars) applying the optimum time shift. ERWS1 and ERWS2 are not time shifted.

The performance of EEG proxies depends on the neuron morphology and distribution of synapses

Modelling studies have demonstrated that extracellular potentials generated by synaptic input currents vary with the neurons’ dendritic morphology and the positions of individual synaptic inputs [6,50]. For example, morphological types that display a so-called open-field structure, such as pyramidal cells, have spatially separated current sources and current sinks that generate a sizable current dipole. Synaptic inputs onto neurons that have a closed-field configuration, such as interneurons, largely cancel out when they are superimposed so that the net contribution to the current dipole is weak [35]. The hybrid modelling scheme [30,35,42,43] gives us the opportunity to study, independently from the spiking dynamics of the point-neuron network, how different parameters of the multicompartment neuron network (e.g., distribution of synapses or dendritic morphology) affect the EEG signal and, as a consequence, modify the prediction capabilities of the proxies.

The above results (Figs 2 and 3) were computed using a specific multicompartmental model type of L2/3 pyramidal cell from rat somatosensory cortex (taken from the NMC database [47,48]) and referred to as “NMC L2/3 PY, clone 9” (Fig 4A). Here, we studied whether the proxies derived for this morphology provided good approximations to the EEG generated by different cell morphologies. We thus quantified how well our proxies approximate the EEG generated by a different pyramidal-cell morphology, also taken from rat somatosensory cortex (“NMC L2/3 PY, clone 0”), and by a third morphology (“ABA L2/3 PY”), which is a L2/3 pyramidal cell from the mouse primary visual area [51]. It is important to note that the parameter values of proxies optimized for the morphology “NMC L2/3 PY, clone 9” were applied unchanged to the other morphologies across network states.

Fig 4. Performance of proxies for different morphologies.


(A) Neuron reconstructions of L2/3 pyramidal cells acquired from the Neocortical Microcircuitry (NMC) portal [47,48] and the Allen Brain Atlas (ABA) [51]. For visualization purposes, in the synaptic distribution of each morphology, only a subset of AMPA and GABA synapses are shown, drawn randomly from all presynaptic connections. (B) R2 computed for each morphology (columns) and network state (rows). The label “All” indicates the average R2 across the three network states.

We found that ERWS2 was the proxy with the highest prediction accuracy (Fig 4). It approximated extremely well the EEG across all three types of morphology and across all network states. The performance of both ERWS proxies in predicting the EEG generated by the mouse pyramidal neuron morphology (“ABA L2/3 PY”, Fig 4, right column) was as good as the performance for the “NMC L2/3 PY, clone 9” morphology (probably because they have a similar broad-tuft dendritic morphology, although of different size). This suggests that the model generalizes reasonably well across species (at least for EEGs generated by broad-tuft dendritic morphologies). ERWS proxies also performed well, though less well than for the morphology they were optimized for, on the EEGs generated by the other rat somatosensory cortex morphology (“NMC L2/3 PY, clone 0”, Fig 4, middle column). The small decrease in performance was probably due to the fact that, unlike the broad dendritic tuft morphology used to optimize the proxy, this morphology incorporates long apical dendrites that separate AMPA synapses located in the tuft from GABA synapses by more than 200 μm. Analogously, the fact that the performance of LRWS did not decrease for the “NMC L2/3 PY, clone 0” morphology can be understood in terms of the similarity between the pyramidal-cell morphology used to develop the LRWS proxy [42] and this morphology. The LRWS proxy [42] performed well across all morphologies in the AI state but its performance decreased across other states and morphologies. Other proxies performed poorly across different morphologies and/or states.

We also evaluated performance of proxies on the EEG generated by a heterogeneous population of pyramidal cells that had different morphologies (S1 Fig). In the same simulation, we randomly assigned the “NMC L2/3 PY, clone 9” morphology to half of the pyramidal-cell population and the “NMC L2/3 PY, clone 0” morphology to the other half. We found that performance scores of proxies were between the values obtained independently for each morphology. This finding suggests that a mixed population of pyramidal cells, which includes a larger set of pyramidal-cell morphologies, could be well approximated by our proxies with an accuracy limited by the worst- and best-performing morphologies.

We next investigated how different spatial distributions of synapses on excitatory cells affect the performance of proxies (Fig 5). More specifically, GABA synapses were distributed on excitatory cells following two alternative approaches: located only on the lower part of the cell, primarily on the soma and basal dendrites (“Asymmetric”), or homogeneously distributed across all dendrites (“Homogeneous”). Note that the “Asymmetric” case (Fig 5, left column) corresponds to the default configuration shown in Fig 4A, left column (“NMC L2/3 PY, clone 9” morphology). The most significant change observed when distributing GABA synapses homogeneously on excitatory cells was an overall decrease of the performance of all proxies (but see ∑I), most prominently for the AI state. These findings are in agreement with previous results obtained for the LFP proxy [42], in which a homogeneous distribution of AMPA and GABA synapses on pyramidal cells resulted in the worst approximation of LFPs. In all scenarios, except for the AI state, ERWS1 and ERWS2 provided the best performance, and their average R2 values across network states reflect their superiority in both the asymmetric and homogeneous distributions.

Fig 5. Influence of synaptic distributions on performance of proxies.


Outline of the two different distributions of GABA synapses on excitatory cells: distributed only below the reference point Z = 8.5 mm (“Asymmetric”) or distributed homogeneously across all dendrites (“Homogeneous”). Each row below the diagram of model cells shows the corresponding R2 for a different network state. The label “All” in the last row displays the average R2 across the three network states.

Effects of the position of the electrode over the head model on the EEG and proxies

To investigate how the position of the electrode affects the EEG and performance of proxies, we simulated the EEG at four different locations over the head (Fig 6A). Simulation results, shown in Fig 6, are reported as a function of the angle between the electrode location and the Z-axis (angle Theta). We studied the effect of electrode position for the three different network states: AI, SI and SR. We first explored how properties of the EEG signal changed with the location of the electrode. As expected, the EEG amplitude, defined as the standard deviation of the EEG signal over time, decreased steeply when the electrode was moved away from the top of the head (Fig 6B). This decrease in EEG amplitude is consistent with previous simulation results of the four-sphere head model [35,39], in which a moderate attenuation of the EEG scalp potentials was observed when increasing the lateral distance from the center position along the head surface. Although the EEG amplitude is larger in the SR state, the relative variations of amplitude as a function of angle Theta were similar across network states. In contrast, we found (Fig 6C) sizeable differences in the normalized time courses of the EEG at different network states: an increase of angle Theta led to a delay of the EEG signal that was larger for the AI and SI states, but much weaker for the SR state. These results indicate that as the measurement point moves toward the zero-region of the current dipole, where the EEG power is much smaller, the signal-to-noise ratio is reduced and the influence of high-frequency noise becomes more pronounced. Since the signal power is significantly larger for the SR state, the effects of the high-frequency noise are less evident for the SR state.

Fig 6. EEG and proxies as a function of the position of the electrode over the head model.


(A) Illustration of the scalp layer in the four-sphere head model and locations where the EEG is computed. Location of the center of soma positions of the multicompartment neurons is marked as “Neuron population”. (B) EEG amplitude, (C) normalized EEG and (D) performance of |I|, LRWS, ERWS1 and ERWS2 as a function of the angle between the electrode location and the Z-axis (angle Theta), computed for the three different network states: AI, SI and SR.

Variations of properties of the EEG signal when the electrode was shifted from the top of the head affected the performance of proxies. As depicted in Fig 6D, in the AI and SI states the performance of |I|, LRWS, ERWS1 and ERWS2 decreased when the angle Theta increased. However, in the SR state the performance of proxies was largely independent of the position of the electrode, or it even increased with angle Theta in the case of the LRWS proxy. In any case, ERWS1 and ERWS2 gave the best performance in most scenarios, particularly ERWS2, whose R2 value was above 0.9 provided that Theta was smaller than 36 degrees.

EEG estimation with a Convolutional Neural Network (CNN) proxy

The proxies considered above are all simple linear functions of the neural parameters of the LIF point-neuron network model. Linear proxies have the advantage of simplicity and interpretability. However, an alternative strategy for constructing an EEG proxy is training a convolutional neural network (CNN) to learn complex and possibly non-linear relationships between variables of the LIF point-neuron network model, such as AMPA and GABA currents, and the EEG. This could potentially improve the estimation of linear proxies, at the expense of increasing computational complexity and obscuring interpretation. Instead of using a deep neural network with many hidden layers, which could largely increase complexity and prevent us from making any type of analogy with results of linear proxies, we opted for a simpler, shallow CNN architecture, with just one convolutional layer (Fig 7A). This CNN architecture was found to be sufficiently robust, achieving an R2 value of 0.99 on the test dataset (see Table 2). The network consists of one 1D convolutional layer (‘Conv1D’) with 50 filters and a kernel of size 20, followed by a max pooling layer (‘MaxPooling1D’) of pool size 2, a flatten layer and two fully connected layers of 200 units each (marked as ‘Dense’ and ‘Output’, respectively). The input of the CNN is constructed by stacking data chunks of 100 ms (0.5 ms time resolution) extracted from the time series of AMPA and GABA currents, giving a 2 x 200 input layer.
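The architecture can be written in a few lines of Keras, as sketched below. The layer sizes reproduce the stated configuration and total parameter count (942,450 with a 200 x 2 time-by-channel input), while the activation functions and the compile settings (which follow the training description given later: Adam, MSE loss, MAE metric) are assumptions.

```python
# Sketch of the shallow CNN proxy in Keras. Layer sizes follow the text; the ReLU
# activations, the (time, channel) input ordering and the compile settings are
# assumptions. With this configuration the total parameter count is 942,450.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(200, 2)),              # 100 ms of AMPA and GABA currents at 0.5 ms
    layers.Conv1D(50, kernel_size=20, activation='relu', name='Conv1D'),
    layers.MaxPooling1D(pool_size=2, name='MaxPooling1D'),
    layers.Flatten(name='Flatten'),
    layers.Dense(200, activation='relu', name='Dense'),
    layers.Dense(200, name='Output'),          # presumably the 200 EEG samples of the window
])
model.compile(optimizer=tf.keras.optimizers.Adam(), loss='mse', metrics=['mae'])
model.summary()
```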

Fig 7. Overview of the convolutional neural network, train errors and accuracy of EEG predictions.


(A) Illustration of the different types of layers included in the processing pipeline of the CNN architecture as well as the output shapes of each layer. Note that the 1D convolutional layer (‘Conv1D’) uses 50 filters and a 1D convolutional window (kernel) of size = 20. The total number of parameters of the entire CNN is 942450. (B) Metrics collected during training: R2, Mean Absolute Error (MAE) and Mean Squared Error (MSE). (C) Probability density function of the prediction error calculated on the test dataset. The error is expressed in Standard Deviation Units (SDU). (D) Predictions vs true values. Each dot of the scatter plot corresponds to amplitude values of the predicted and real EEG signals at a specific time step of the simulation. The continuous line represents a perfect EEG estimator. (E) Examples of predictions of the CNN compared to the ground-truth EEGs for different network states.

Table 2. Performance (computed as R2) of the CNN in comparison with |I|, LRWS, ERWS1 and ERWS2 proxies.

The performance values shown for the test dataset (A) are averaged over all samples of the test dataset, while performance values in panels B, C and D are averaged over the samples of the different network states, i.e., AI, SI and SR.

A: Performance on the test dataset
|I| LRWS ERWS1 ERWS2 CNN
0.86 0.74 0.94 0.95 0.99
B: Morphologies
Cell model |I| LRWS ERWS1 ERWS2 CNN
NMC L2/3 PY, c. 9 0.87 0.74 0.92 0.94 0.97
NMC L2/3 PY, c. 0 0.70 0.76 0.77 0.77 0.87
ABA L2/3 PY 0.85 0.67 0.90 0.92 0.94
C: Distribution of synapses
Distribution type |I| LRWS ERWS1 ERWS2 CNN
Asymmetric 0.87 0.74 0.92 0.94 0.97
Homogeneous 0.77 0.65 0.83 0.87 0.89
D: Position of the EEG electrode
Theta (rad) |I| LRWS ERWS1 ERWS2 CNN
0 0.87 0.74 0.92 0.94 0.97
0.31 0.86 0.74 0.91 0.93 0.97
0.63 0.82 0.72 0.90 0.91 0.96
0.94 0.69 0.68 0.80 0.81 0.87

The network was trained and tested on the same datasets (one for the training of the proxies, the other for the validation/testing of their accuracy) generated for the optimization of the parameters of the ERWS1 and ERWS2 proxies, using a first-order gradient descent method (Adam optimizer [52]) over 100 epochs (see “Methods”). In Fig 7B, we observe a quick convergence of the three metrics used to monitor training (R2, MAE and MSE) towards optimal values (R2 ≈ 1, MAE < 0.1 and MSE < 0.01). The accuracy of the predictions of the trained network, calculated on the test dataset, is shown in Fig 7. We computed the prediction error as the difference between amplitude values of the CNN-predicted and the true EEG signals. The probability distribution of the prediction error (Fig 7C) and the scatter plot of true versus predicted values (Fig 7D) both show a very accurate estimation of the EEG values. In Fig 7E, we illustrate some examples of predictions of the EEG signal compared to the ground-truth EEG for different network states. The best match between predicted and true EEG traces is seen for the SI state, although estimation performance remains high across all states.

The performance of the CNN was evaluated, like for the other proxies, as the average value of R2 computed over all samples of the test dataset. As shown in Table 2 line A, the CNN clearly outperformed all other proxies on the test dataset and reached a very high performance score (R2 = 0.99). We next assessed the performance of the CNN for the different configurations of the multicompartment neuron network, i.e., cell morphologies, distribution of presynaptic inputs and position of the recording electrode (Table 2 lines B, C and D). Compared to the best performing linear proxy, ERWS2, the CNN provided an increase of performance between 2 and 8% in most scenarios.

To gain insight into how AMPA and GABA inputs interact with layers of the network, we inspected the weights learned by different filters of the convolutional layer, as illustrated in S2 Fig for some examples of representative filters, depicted both in the time domain (panel A) and frequency domain (panel B). We observed that the majority of filters perform a band-pass and high-pass filtering of AMPA and GABA inputs and their peak frequencies are within the range [10^2, 10^3] Hz. This indicates that the CNN primarily uses the fast dynamics of the current inputs to construct an estimate of the EEG signal. We then asked whether we could disentangle the different transformation functions applied by the CNN to each type of input current. In signal processing, the impulse response of a linear system is typically used to understand the type of transfer function implemented by the system. The CNN included layers that were non-linear after the first convolutional layer. However, we could use a similar methodology to characterize the transformation function of the CNN by collecting the network responses to all possible combinations of unit impulses applied either to the AMPA or GABA inputs (S2C Fig). To extract a measure of the time shift applied by the network to AMPA and GABA inputs, we computed, for each unit impulse, the difference between the time when the impulse is applied and the time at which the absolute response of the network reaches its maximum. The histogram of time shifts applied to AMPA and GABA inputs (S2D Fig) shows that the CNN generally estimated the EEG signal by time shifting AMPA and GABA currents within the range [–2, 2] ms and the time shift could be either positive or negative.
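A sketch of this impulse-probing step is given below, assuming the Keras model of the previous section and a 0.5 ms bin; it feeds a unit impulse at each time index of either input channel and records the signed difference between the time of the peak absolute response and the impulse time.

```python
# Sketch of probing the trained CNN with unit impulses on the AMPA or GABA channel
# (assumes the Keras model defined above, a 0.5 ms time step and channel order
# 0 = AMPA, 1 = GABA, all of which are assumptions).
import numpy as np

DT = 0.5            # ms
N_T, N_CH = 200, 2  # input window: 200 time steps, 2 current channels

def impulse_time_shifts(model, channel):
    shifts = []
    for t_idx in range(N_T):
        x = np.zeros((1, N_T, N_CH), dtype=np.float32)
        x[0, t_idx, channel] = 1.0                   # unit impulse at time index t_idx
        response = model.predict(x, verbose=0)[0]    # predicted EEG samples
        t_peak = int(np.argmax(np.abs(response)))    # time index of max absolute response
        shifts.append((t_peak - t_idx) * DT)         # signed shift, in ms
    return np.array(shifts)

ampa_shifts = impulse_time_shifts(model, channel=0)
gaba_shifts = impulse_time_shifts(model, channel=1)
```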

The new EEG proxies are robust to the addition of oscillatory power in canonical low-frequency bands

Magnetoencephalography (MEG), EEG and LFP studies consistently report that brain activity contains oscillations in canonical frequency bands that are superimposed upon an aperiodic broadband power-law spectrum [53–57]. An important question is how our proxies would perform in the presence of both oscillatory activity across different canonical EEG bands and of a broadband power-law spectrum.

Our spatially-unstructured local model of reciprocally connected excitatory and inhibitory populations could, when selecting appropriate parameters, intrinsically generate gamma rhythms (30–100 Hz) by the alternation of fast excitation and delayed feedback inhibition [31,58]. As an example, Fig 2I reports EEG power spectra computed from the activity of our network model exhibiting gamma oscillations. Generation of gamma oscillations is an important model property because these oscillations are thought to underlie important cognitive functions [59–61] and their disruption is associated with several cognitive disorders [62]. Additionally, we previously showed that our recurrent network model could also generate broadband power-law spectra that resembled cortical spectra [14,42]. However, it could not intrinsically generate narrow-band oscillations in the canonical low-frequency bands, including delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–12 Hz) and beta (15–30 Hz), all frequencies which are thought to arise from more complex loops (e.g. cortico-cortical and thalamocortical loops [63,64]).

In previous models, this type of oscillation in lower-frequency bands was generated by entrainment to an external lower-frequency input [14]. Therefore, to study the effects of low-frequency oscillations on the performance of the EEG proxies, here we added band-limited components to the thalamic input, in either the delta, theta, alpha, or beta bands. As expected from our previous studies [14], the use of band-limited signals increased the oscillatory power of network activity in each of these canonical low-frequency bands (Fig 8A).

Fig 8. Effect of adding oscillatory inputs in the canonical low-frequency bands.


(A) Power spectral density (PSD) functions of the ground-truth EEG for each of the band-limited signals (delta, theta, alpha, and beta) added to the thalamic input of the model. The mean rate of the thalamic input was ν0 = 1.5 spikes/s. The aperiodic power-law component and the periodic components of the simulated EEG PSD were fitted using the FOOOF algorithm [53]. In the “aperiodic fit”, the periodic band-limited component (obtained using a multi-Gaussian fit of the band-limited spectral peaks) was removed, leaving only the aperiodic component. The blue dashed line indicates the position of the maximum value of the PSD used to compute the magnitude of the oscillatory peak. (B) Performance of |I|, LRWS, ERWS1, ERWS2 and CNN for the different band-limited inputs and network states (AI, SI and SR).

The amount of power to be added to the input in each canonical band was a free parameter in these simulations. To obtain reasonably realistic scenarios, we decided to set the added power such that the relative increase of oscillatory power in each frequency band with respect to the power of the aperiodic signal was similar to that observed in real data. To separate the narrow band oscillations from the aperiodic power-law spectral component, we applied the FOOOF algorithm [53] to our simulated EEG spectra. Using this algorithm, we computed the aperiodic-adjusted power (or flattened spectrum)—i.e., the magnitude of the oscillatory peak above the aperiodic component [53,54]. The aperiodic-adjusted power was computed by subtracting the aperiodic fit from the power spectrum (see “Methods”). We set parameters of the added power such that in all simulations the aperiodic-adjusted power values of the simulated EEG across the different frequency bands were in the range 0.05 to 0.2 (we manually explored different amplitude values of power to be added, until the peak was at the desired amplitude in the EEG power spectrum). These values were in agreement with ranges of aperiodic-adjusted power values observed in recent studies in older adults that separate aperiodic and periodic features of power spectra in resting-state EEG [53] and MEG [54].
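The aperiodic-adjusted peak power can be sketched as follows. Instead of the full FOOOF fit used in the paper, the aperiodic component is approximated here by a straight-line fit in log-log coordinates outside the band of interest, and the Welch settings are assumptions; this is a simplification for illustration only.

```python
# Sketch: aperiodic-adjusted (flattened) power of a band-limited peak. The paper uses
# FOOOF [53]; here the aperiodic 1/f component is approximated by a linear fit in
# log-log space outside the band of interest, which is a simplification.
import numpy as np
from scipy.signal import welch

FS = 2000.0  # Hz, assumed sampling rate

def aperiodic_adjusted_peak(eeg, band, fit_range=(5.0, 200.0)):
    f, psd = welch(eeg, fs=FS, nperseg=4096)
    sel = (f >= fit_range[0]) & (f <= fit_range[1])
    f_sel, log_psd = f[sel], np.log10(psd[sel])

    # Fit the aperiodic component (offset + slope*log10(f)) excluding the band itself
    outside = (f_sel < band[0]) | (f_sel > band[1])
    slope, offset = np.polyfit(np.log10(f_sel[outside]), log_psd[outside], 1)
    aperiodic_fit = offset + slope * np.log10(f_sel)

    # Flattened spectrum and magnitude of the oscillatory peak above the aperiodic fit
    flattened = log_psd - aperiodic_fit
    in_band = (f_sel >= band[0]) & (f_sel <= band[1])
    return flattened[in_band].max()

# Example: peak magnitude in the alpha band (8-12 Hz); the target range in the text
# for all bands is 0.05-0.2:
# peak_alpha = aperiodic_adjusted_peak(eeg_trace, band=(8.0, 12.0))
```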

We computed the performance of the best EEG proxies (∑|I|, LRWS, ERWS1, ERWS2, and the CNN) across the different network states when adding power to the thalamic input in the canonical low-frequency bands (Fig 8B). Interestingly, the prediction performance of the EEG proxies remained very similar to those we reported above when simulating the case in which the thalamic input was not modulated by a low-frequency oscillatory input (Fig 2 and Table 2). Overall, we found that ERWS1, ERWS2 and the CNN produced accurate fits of the EEG for all network states, while accuracy of EEG approximations made by the other proxies was good for only one or two network states, but not for all. This suggests that our new proxies can work well to describe EEGs that show narrow-band peaks from the delta to the gamma range.

Upscaling of the network model and integration of EEG by proxies calculated on local subnetworks of cells separated across space

The new EEG proxies were trained on EEGs simulated by a small localized population (5000 neurons within a radius of 0.5 mm). We then investigated whether our proxies could approximate well the EEG of a larger network model, and how the proxies could integrate neural activity across spatial locations to generate an accurate EEG.

We simulated an upscaled version of the network model that contained 20000 neurons (four times more than the model used for proxy training). To preserve the density of neurons, we increased the area where cells were placed to cover a circular section of radius r = 1 mm. We also preserved the number of synapses received by each neuron by decreasing the connection probability fourfold. In this way, we expected the dynamics of the upscaled network to remain largely unchanged. We divided this larger network into four spatially distinct subnetworks, each made of 5000 neurons. We then studied the performance of the EEG proxies in approximating the EEG generated by the extended network for different cases of connectivity between the subnetworks.

We first considered the simplest scenario of recurrent connectivity, where cells were all-to-all connected both within and across subnetworks (Fig 9A). The ground-truth EEG was computed by integrating the EEG signal from all 20000 cells in the multicompartment model network, whereas proxies were computed in two different ways: either by using only the activity of a local subnetwork of cells (S1, S2, S3 or S4), or by summing the proxies of all subnetworks (S1+S2+S3+S4). In either case, we observed that proxies locally calculated on subnetworks of cells (e.g., ERWS1 and ERWS2 in Fig 9B) could approximate the ground-truth EEG very well, as intuitively expected in an all-to-all connectivity scenario. Indeed, the performance scores of the global proxies (Fig 9C) were essentially identical to those of the proxies computed in individual subnetworks (Fig 9D). Another key observation was that the prediction performance of both global and local proxies followed the same trend as in the 5000-neuron network used for training the proxies, with ERWS1, ERWS2 and the CNN giving accurate EEG approximations in all network states. These results indicated that upscaling the network model preserved prediction performance, and that in an all-to-all connectivity configuration, proxies computed from local subnetworks of cells separated across space could be used to produce reliable estimates of the EEG.

Fig 9. Spatially extended network model.

Fig 9

(A) The network model was upscaled to cover a circular section of radius r = 1 mm. The number of cells of the original model was increased fourfold, while the upscaling preserved the densities of neurons and local synapses. The population of cells was divided into four subnetworks (S1, S2, S3 and S4) according to their position in each quadrant of the circular section. As an example, postsynaptic connections (red lines) of a randomly selected cell in S2 (blue spot) are depicted to illustrate the all-to-all connectivity pattern. Local proxies were computed for each subnetwork, and the global proxy was calculated as the sum of the proxies of all subnetworks (S1+S2+S3+S4). The ground-truth EEG was generated by summing contributions from all cells in the multicompartment model network. (B) Outputs of the ERWS1 (bottom row) and ERWS2 (top row) proxies for subnetwork S1 and of the global proxy, compared with the ground-truth EEG. (C) Performance of the global proxies. (D) Performance of the proxies calculated for the different subnetworks of cells. (E) In the second scenario, connectivity of each subnetwork is local, as exemplified by the postsynaptic connections of the selected cell in S2. (F) Outputs of the ERWS1 (bottom row) and ERWS2 (top row) proxies for subnetwork S1 and of the global proxy, compared with the ground-truth EEG. Performance of the global proxies (G) and of the proxies calculated for the different subnetworks of cells (H).

We then simulated a second network configuration in which recurrent connections of cells were constrained to target only cells of the same subnetwork (Fig 9E). To produce fully uncorrelated cortical dynamics between subnetworks, the external inputs to each subnetwork were also independently generated. Unlike the model with all-to-all connectivity, we found that local proxies (S1, S2, S3 and S4) in the unconnected network could only partially approximate the EEG (Fig 9F and 9H), showing a significant decrease in performance in particular for faster network dynamics (i.e., SI and SR states). However, the linear sum of proxies computed on all subnetworks (S1+S2+S3+S4) approximated very well the EEG (Fig 9G), with a pattern of performance for all proxies similar to the one reported above for the smaller network (Fig 2) and to that observed in the upscaled model with all-to-all connectivity (Fig 9C).

Together, these results suggest that our new proxies can be successfully used to integrate neural signals across spatial locations to generate an accurate EEG in more spatially extended and larger network models.

Prediction of the stimulus-evoked EEG

Evoked potentials measure the transient response of the brain following the presentation of a stimulus. Although our proxies were optimized on long stretches of steady-state network activity, we investigated how well they approximate an EEG evoked potential produced by a transient input. Fig 10 shows the spiking activity of the point-neuron network (panel A) and the ground-truth EEG (panel B) in response to a transient spike volley with a Gaussian rate profile applied to the thalamic input. This transient input simulates the thalamic input that reaches cortex when an external sensory stimulus is presented. A comparison of the performance obtained for all proxies is shown in panel C, while the outputs of ERWS1, ERWS2 and the CNN are depicted in panel D, as an example, overlapped with the ground-truth EEG. We found that most of the current-based proxies approximated the EEG well when the transient burst of thalamic spikes was applied; in particular, |I|, ERWS1 and ERWS2 reached a performance of R2 = 0.9. These results suggest that these types of proxies could also be employed to predict the type of transient response seen in evoked potentials.
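For concreteness, the following is a minimal Python sketch of how such a transient thalamic volley can be generated as an inhomogeneous Poisson process whose rate follows a Gaussian profile added to a constant baseline. The peak amplitude and simulation step used here are illustrative placeholders, not the exact values used in our simulations.

```python
import numpy as np

def gaussian_pulse_rate(t_ms, baseline_hz, peak_hz, center_ms=1000.0, sigma_ms=30.0):
    """Baseline thalamic rate plus a Gaussian bump (spikes/s)."""
    return baseline_hz + peak_hz * np.exp(-0.5 * ((t_ms - center_ms) / sigma_ms) ** 2)

def poisson_spikes(rate_hz, dt_ms, rng):
    """Inhomogeneous Poisson spike train: one Bernoulli draw per time bin."""
    p_spike = rate_hz * dt_ms * 1e-3           # spike probability in each bin
    return rng.random(rate_hz.shape) < p_spike

rng = np.random.default_rng(0)
dt = 0.1                                        # ms (illustrative)
t = np.arange(0.0, 2000.0, dt)                  # 2-s segment around the pulse
rate = gaussian_pulse_rate(t, baseline_hz=1.5, peak_hz=30.0)   # peak is a placeholder
spikes = poisson_spikes(rate, dt, rng)          # boolean array, one entry per bin
spike_times = t[spikes]                         # spike times of one thalamic input
```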

Fig 10. Effect of transient activation of thalamic input with a Gaussian pulse packet.

Fig 10

(A) Raster plot of spiking activity from a subset of cells in each population in response to a transient spike volley with a Gaussian rate profile (σ = 30 ms) centered at 1000 ms. (B) Ground-truth EEG at the top of the head model. (C) Performance of proxies calculated between 850 and 1150 ms. (D) Outputs of ERWS1, ERWS2 and the CNN compared to the ground-truth EEG.

Evaluation of proxies in a human head model

Throughout this study, we have so far used a simple and analytically tractable head model (the four-sphere head model) to calculate EEGs. An important question is whether our proxies can work well when combined with more complex head models and, in particular, whether they can also be applied to the case of the human head. There are many high-resolution, anatomically detailed, and potentially personalized head models available, which for example take into account the folded cortical surface of the human brain [65–67]. As an initial exploration of the applicability of our proxies to more realistic head models, in this section we report a study of how our proxies approximate EEGs computed using the New York head model [66], a very detailed model of the human head that incorporates 231 electrode locations (S4 Fig).

Head models for calculating EEG signals typically take current dipoles as input. When calculating the EEG signal with the four-sphere head model, we first calculated the current dipole moment of each individual cell, and then calculated all single-cell EEG contributions resulting from these current dipoles at their respective locations. However, it has been previously demonstrated [35] that for populations that are small compared to the distance to the EEG electrodes (~10–20 mm for human EEG recordings), the individual cell locations are relatively unimportant, and a negligible error is introduced by first summing all single-cell current dipoles into a population current dipole and then calculating the EEG signal from it. We computed the population current dipole moments of two different subnetworks (S4D Fig) from the unconnected network model shown in Fig 9E. We then placed these current dipole moments at two spatially separated, orthogonally oriented locations on the folded cortical surface of the New York head (S4B Fig). Topographic maps generated from the EEG electrodes for the two dipoles are shown in S4C Fig, left column. The total EEG map was calculated as the sum of the individual EEG maps generated by each current dipole.

We then investigated if our EEG proxies could be used to predict current dipole moments. We found that the EEG signal calculated at the top of the head in the four-sphere head model, resulting from a current dipole directly below the electrode, was in fact just a scaling of the dominant component (the component aligned with the depth axis of the cortex) of the original current dipole (S3 Fig). This means that the proxies developed for the EEG signal at the top of the four-sphere head model are in fact equally valid as proxies for the (normalized) dominant component of the population current dipole moment. Based on this observation, we generated EEG maps from our proxies (in our example, we used ERWS2) by plugging their outputs into the dominant component of the current dipole moment and recalculating EEGs on the New York head model. The EEG topographic maps resulting from the ERWS2 proxy are depicted in S4C Fig, right column, and approximate qualitatively well the topographic maps of the ground-truth EEG data (S4C Fig, left column). The ERWS2 proxy also predicted well the time traces of the EEG signal at different electrode positions (S4 Fig, panels E and F).

Together, these results suggest that the EEG proxies developed here can be effectively and easily used in combination with complex models of the human head.

Discussion

Interpreting experimental EEGs in terms of neural processes ultimately requires being able to compute realistic EEGs from simple and tractable neural network models, and then comparing the predictions of such models with data. Here we contributed to the first goal by developing simple yet robust and accurate proxies to compute EEGs from recurrent networks of LIF point neurons, a model widely used to study cortical dynamics. The new proxies give very accurate reconstructions of both steady-state and transient EEGs over an extensive range of network states, different morphologies and synaptic distributions, varying positions of the EEG electrode and different spatial extensions of the network. These proxies thus provide a well validated and computationally efficient way for calculating a realistic EEG by using simple simulations of point-neuron networks.

Simulations of EEGs using average membrane potentials or average firing rates are less accurate than those using synaptic currents

In many neural models, such as neural mass models [26,68], spiking network models [23,29,30] or dynamic causal models [25], EEGs or LFPs are simply modeled as the average firing rate or average membrane potential of excitatory neurons. While these assumptions are often reasonable, their effectiveness in describing the EEG has not been systematically validated. Here we found that these two established ways of computing the EEG worked reasonably well only under specific conditions. However, in agreement with previous results obtained for the LFP [14,42], we found that, for the EEG, proxies based on combinations of synaptic currents work much better and in more general conditions than proxies based on firing rates or membrane potentials. We found that, in several situations, membrane potential or firing rate proxies performed poorly. This was due to several factors. First, our work shows that the contribution of different proxies to the EEG is often time shifted and also state dependent (see below). Second, firing rates and membrane potentials have time scales different from those of transmembrane currents. Third, a proxy based only on the firing rates of excitatory neurons does not capture the contribution of inhibitory neurons, which is indirectly reflected in the EEG through the synaptic inputs of inhibitory neurons onto pyramidal cells.

Our results suggest that approximations of EEGs based on firing rates or membrane potentials of excitatory neurons should be discouraged, and replaced with the use of synaptic currents, whenever possible.

State-dependent relationships between predicted EEG and neural activity

One important finding of our work was that the parameters of the best EEG proxy (ERWS2) depended on the cortical state of the network. This, in turn, implies that the relationship between the predicted EEG and the neural elements that originate it is state dependent. The state dependence of the ERWS2 parameters may be due to properties of the synaptic currents. We used realistic synaptic models, based on synaptic conductances rather than on currents (as often implemented in simpler and less realistic models [12,31]). Unlike in the current-based synapse model, the AMPA and GABA currents of the conductance-based model depend on the membrane potential, which was shown in previous work to change with the external rate [69]. Consequently, the change of the membrane potential modifies the relative strength of AMPA and GABA currents and can modify, in turn, their contribution to the EEG. The optimization of the ERWS2 parameters found a larger weight of GABA currents for low values of the external input rate and, conversely, a stronger weight of AMPA currents for higher values of the external input rate. This is in agreement with the relative increase in the amplitude of AMPA with respect to GABA postsynaptic currents observed in previous studies [69].

Our results suggest that the contribution of neural activity to the EEG is a highly dynamic process, and they highlight the importance of developing EEG proxies, such as those developed here, that capture these variations.

Robustness and generality of the EEG proxies across network states, cell morphologies, synaptic distributions and electrode locations

We found that, unlike all previously published proxies, our new optimized EEG proxies work remarkably well for a whole range of network states which capture many patterns of oscillations, synchronization, and firing regimes observed in neocortex [12]. Predicting well the EEG over a wide range of states is important because, in many cases, EEGs are experimentally used to monitor changes in brain states, and thus models used to interpret EEGs must be able to work well over multiple states. Our proxies were optimized using a specific pyramidal broad-dendritic-tuft morphology that generates large electric dipoles. We, however, showed that our proxies still had high accuracy when changing cell morphologies and distributions of presynaptic inputs. This suggests that our work, even though it could be improved using larger datasets of morphologies and synaptic distribution configurations, is already sufficiently general to capture the contribution to the EEG of some major types of pyramidal neurons. We also validated the performance of EEG proxies against changes in position of the recording electrode, with respect to the position chosen to train the proxies. The performance of proxies experienced only a moderate decrease as the position of the EEG electrode was shifted from the top of the head because of the progressive reduction in EEG amplitude. We finally demonstrated that our proxies, although trained on steady-state activity, can approximate well EEG evoked potentials, suggesting that our work could be relevant to model transient brain computations such as coding of individual stimuli or attentional modulations.

Previous work [42] used a similar approach based on optimizing a linear proxy to predict the LFP. We extended this work by computing the EEG, rather than the LFP. We used a head model that approximates the different geometries and electrical conductivities of the head, which was not necessary for the LFP proxy. Unlike this previous work, which considered only a reduced regime of network dynamics within the asynchronous or weakly synchronous states, we generated proxies trained and validated on a wider range of network states. Our EEG proxies were also validated on different pyramidal-cell morphologies reconstructed from experimental recordings, whereas the LFP proxy was validated on synthetically generated morphologies. As a result, our new optimized EEG proxies predict well the EEG over a wide range of states and different morphologies, unlike the LFP proxy, which was found in our study to work well only for a low-input-rate state and a specific morphology of pyramidal cells.

In sum, our new optimized EEG proxies provide a simple way to compute EEGs from point-neuron networks that is highly accurate across network states, variations of biophysical assumptions, and electrode position.

Possible applications and impact of the new EEG proxies

Our work provides a key computational tool that enables applying tractable network models to EEG data with significant implications in two main directions.

First, when studying computational models of brain function, our work allows quantitative rather than qualitative comparison of how different models match EEG data, thereby leading to more objective validations of hypotheses about neural computations.

Second, our work represents a crucial step in enabling a reliable inference, from real EEG data, of how different neural circuit parameters affect brain functions and brain pathologies. Since the EEG conflates many circuit-level aggregate neural phenomena organized over a wide range of frequencies, it is difficult to infer from its measurement the value of key neural parameters, such as the ratio between excitation and inhibition [1,70]. Our proxies could be used to develop tractable LIF neural networks that generate realistic EEG predictions from each set of neural network parameters. By fitting, in future work, such models to real EEG data, estimates of neural network parameters (such as the ratio between excitation and inhibition or properties of network connectivity) could then be obtained from EEG spectra or evoked potentials. This approach could be used, for example, to test the role of excitation-inhibition balance in certain disorders [70–72], or to identify neural correlates of diseases that show alterations of EEG activity [73–77]. Thus, our EEG proxies hold clear promise for connecting EEG in human experiments to cellular and network data in health and disease. These future developments could complement other EEG modelling frameworks [68,78].

Although more work is needed before empirical EEGs can be interpreted in terms of network models, several observations indicate that our proxies can be useful. Recent attempts to infer neural parameters from EEGs or other non-invasive signals, based on network models that use less accurate proxies than the ones developed here, are nevertheless beginning to provide credible estimates of key parameters of the underlying neural circuits, such as excitation-inhibition ratios [70,79], as well as accurate descriptions of cortical dynamics. For example, previous theoretical studies have modeled the LFP/EEG as the sum of the absolute values of synaptic currents [14,15,34,45]. This type of proxy, though less accurate than those developed here, was sufficient to explain quantitatively several important properties of cortical field potentials, including the relationship between sensory stimuli and the spectral coding of LFPs [14], cross-frequency and spike-field relationships [34], and the information carried by the LFP phase of firing [15]. We thus expect that the new EEG proxies can build on these encouraging results and further improve the biological plausibility and robustness of neural parameter estimation from EEGs.

Linear vs non-linear proxies

We developed both linear and non-linear EEG proxies based on synaptic currents. The linear proxies (ERWS1 and ERWS2) were based on an optimized linear combination of time-shifted AMPA and GABA currents. Alternatively, we investigated the application of a shallow CNN that could capture more complex interactions between synaptic currents to estimate the EEG. Compared to the best-performing linear proxy, ERWS2, the non-linear EEG proxy based on a convolutional network provided a sizeable increase in performance and achieved very high accuracy in all conditions. The convolutional weights that we provide (Section "Data and Code Availability") can be used to compute these non-linear EEG proxies with a computational cost similar to that of the linear proxies. A drawback of CNNs, however, is that it is harder to infer direct relationships between synaptic currents and the EEG, whereas these relationships are apparent and immediately interpretable with linear proxies. We showed that this problem can be partly attenuated by using tools to visualize the transformation function implemented by the CNN, which provide an understanding of how synaptic currents are transformed by the non-linear proxy.

Limitations and future work

In our work, we made several simplifying assumptions regarding (i) the architecture and connectivity of the local recurrent network and of the thalamocortical and cortico-cortical loops, (ii) how to combine point-neuron and multicompartment networks, and (iii) the types of synaptic currents considered.

Our proxies have been extensively validated for a model with one class of pyramidal cells and are expected to apply to models of any brain area in which the EEG is likely to be generated by one dominant population. We chose to model a single cortical layer, L2/3, based on previous computational work suggesting that this layer gives a large contribution to extracellular potentials [30,35]. Although we have shown that our proxies generalize well across different L2/3 pyramidal-cell morphologies, it will be important to extend our work to quantify the contributions of other cortical laminae and cell morphologies to the EEG. Electrical potentials in the brain tissue add linearly, and the superposition of individual contributions to the EEG is in principle straightforward to compute if the amplitude of each laminar contribution is known. Thus, it should be possible to approximate the total EEG by a suitable linear combination of individual proxies computed for each population. We envisage future studies that couple multi-layer spiking models of cortical circuits [30,80,81] with multi-layer multicompartment neuron models within the hybrid modelling scheme.

Our model of reciprocally connected excitatory and inhibitory populations can produce internally generated gamma oscillations. However, our simple recurrent network model cannot produce internally generated oscillations in the canonical lower-frequency bands (delta, theta, alpha, and beta) commonly observed in EEG and MEG. These slower oscillations likely arise from complex cortico-cortical and thalamocortical loops [63,64] which were not modeled in this study. We investigated the effect of such lower frequencies by superimposing low-frequency band-limited components to the synaptic input to the network. This simplified approach, however, cannot capture important aspects of cortico-cortical and thalamocortical loops, such as temporal synchronization and coupling between different brain areas, which could influence temporal properties of the EEG. Further studies are needed to test the validity of our proxies to approximate the EEG dynamics in realistic models of cortico-cortical and thalamocortical loops.

The hybrid modelling approach [30] offers the advantage that we can vary parameters of the EEG-generating model, e.g., cell morphologies or synaptic distributions, without affecting the spiking dynamics. The disadvantage of this approach is, however, that the multicompartment network does not match the point-neuron network in every respect. For instance, even though the synaptic input conductances were identical in the two models, the resulting soma potentials of the multicompartment neurons were not identical to those of the point neurons, because of passive dendritic filtering or the lack of a membrane-voltage reset mechanism following a spike, among other effects. This inconsistency could, at least partially, be resolved by extracting the effective synaptic weight distributions from the multicompartment neurons and using them in the point-neuron network, in order to make the two simulation environments even more similar [82].

Another limitation of our work is that it modelled only AMPA and GABA synapses and did not include NMDA synapses. An interesting topic for a future study would be to extend the network models to include NMDA synapses and to analyze their impact on the formulation of the current-based proxies.

The connectivity of the recurrent network model used to train the proxies was random and distance-independent. The majority of local cortical connections are found within 500 μm [83]. The spatial scale of the decay of connection probabilities with distance is typically larger than a cortical column [80,81,84–86], which justified our simplified choice of distance-independent local network connectivity. When testing the proxies on spatially extended network models, we found our proxies to be accurate in two cases of distance-dependent connectivity (all-to-all connected or unconnected subnetworks). Further studies that use models with realistic distance-dependent connectivity are needed to provide accurate measures of the relative contribution of local proxies to the EEG as a function of cortical distance.

Methods

Overview of the approach for computing the proxies and the ground-truth EEG

Our focus is on computing an accurate prediction of the EEG (denoted as "proxy" in the following) based simply on the variables available directly from the simulation of a point-neuron network model. The point-neuron network was constructed following a well-established configuration based on two populations of LIF point neurons, one excitatory and one inhibitory, with recurrent connections within and between populations [12], as illustrated in Fig 1A. The network receives two types of external inputs: a thalamic synaptic input that carries the sensory information and a stimulus-unrelated input representing slow ongoing fluctuations of cortical activity.

The ground-truth EEG (referred to simply as "EEG" in the paper) with which to compare the performance of the different proxies is here computed using the hybrid modelling scheme [30,35,42,43]. We created a network of unconnected multicompartment neuron models with realistic morphologies and distributed them within a cylinder of radius r = 0.5 mm (Fig 1C). We focused on computing the EEG generated by neurons with somas positioned in one cortical layer, so that the soma compartments of each cell were aligned along the Z-axis, 150 μm below the reference point Z = 8.5 mm, and homogeneously distributed within the circular section of the cylinder. In our default setting, all dendrites of inhibitory cells receive GABA synapses, while only the dendrites of excitatory cells below Z = 8.5 mm receive GABA synapses. AMPA synapses are homogeneously positioned along the Z-axis in both cell types.

EEGs were generated from the multicompartment neurons in combination with a forward-modelling scheme based on volume conduction theory [6]. From each multicompartment neuron simulation, the current dipole moment of the cell was extracted with LFPy [39]. Next, these current dipole moments and the locations of the cells were used as input to the four-sphere head model to calculate all single-cell EEG contributions. The ground-truth EEG signal is the sum of all such single-cell EEG contributions. To approximate the different geometries and electrical conductivities of the head, we computed the EEG using the four-layered spherical head model described in [49]. In this model, the different layers represent the brain tissue, cerebrospinal fluid (CSF), skull, and scalp, with radii of 9, 9.5, 10 and 10.5 mm, respectively, which approximate the dimensions of a rodent head [46]. The conductivities were set to the default values of 0.3, 1.5, 0.015 and 0.3 S/m. The EEG electrode was located on the scalp surface, at the top of the head model (Fig 1C).
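As an illustration of this forward step, the sketch below computes the EEG at a single scalp electrode from a population current dipole moment using LFPy's four-sphere model, with the radii and conductivities listed above (converted to μm). The dipole time series here is a placeholder, and the exact constructor signature and argument order differ slightly between LFPy releases, so this should be checked against the installed version; it is a sketch, not the released implementation.

```python
import numpy as np
import LFPy

# Four concentric shells: brain, CSF, skull, scalp (rodent dimensions, in um)
radii = [9000., 9500., 10000., 10500.]         # um
sigmas = [0.3, 1.5, 0.015, 0.3]                # S/m
electrode = np.array([[0., 0., 10500.]])       # single EEG electrode at the top of the scalp

# Placeholder current dipole moment (n_timesteps x 3, nA*um) located below the electrode
p = np.zeros((1000, 3))
p[:, 2] = np.sin(np.linspace(0.0, 10.0 * np.pi, 1000))
dipole_location = np.array([0., 0., 8500.])    # um, inside the brain sphere

# NOTE (assumption): argument order follows LFPy 2.0 (radii, sigmas, electrode positions);
# newer lfpykit releases take (electrode positions, radii, sigmas) instead.
sphere_model = LFPy.FourSphereVolumeConductor(radii, sigmas, electrode)
eeg = sphere_model.calc_potential(p, dipole_location)   # (n_electrodes, n_timesteps), mV
```

In the actual pipeline this calculation is repeated for every cell (or for the summed population dipole) and the single-cell contributions are added to obtain the ground-truth EEG.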

The time series of spikes of individual point neurons were mapped to synapse activation times on corresponding postsynaptic multicompartment neurons. Each multicompartment neuron was randomly assigned to a unique neuron in the point-neuron network and received the same input spikes of the equivalent point neuron. Since the multicompartment neurons were not interconnected, they were not involved in the LIF network dynamics and their only role was to transform the spiking activity of the point-neuron network into a realistic estimate of the EEG. The EEG computed from the multicompartment neuron model network was then used as benchmark ground-truth data against which we compare different candidate proxies (Fig 1D).

Definition and computation of the proxies that approximate the ground-truth EEG

A proxy (Φ) is defined as an estimate of the EEG computed from the variables available in the point-neuron model over all excitatory neurons. Unless otherwise stated, we only considered the contributions of pyramidal cells to the generation of the EEG (in both the point-neuron and multicompartment neuron networks). The first six proxies that we tested were those used in previous literature for predicting the EEG or the LFP from point-neuron networks. These were: the average firing rate (FR), the average membrane potential (Vm), the average sum of AMPA currents (AMPA), the average sum of GABA currents (GABA), the average sum of synaptic currents (∑I) and the average sum of their absolute values (|I|). Note that ∑I and |I| are defined over both AMPA and GABA currents. Because of the opposite signs assigned to the AMPA and GABA currents, |I| is equivalent to the difference between these currents. The average FR was computed with a temporal bin width of 1 ms and then filtered with a 5-ms rectangular window to produce a smoother output.
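For concreteness, these simple proxies can be computed directly from the per-neuron variables recorded during the point-neuron simulation. The following is a minimal sketch assuming arrays of shape (n_excitatory_neurons, n_timesteps) and GABA currents of opposite sign to the AMPA currents, as in the model above; variable names are illustrative.

```python
import numpy as np

def simple_eeg_proxies(spike_counts, vm, i_ampa, i_gaba, bin_ms=1.0, smooth_ms=5):
    """Classic EEG/LFP proxies from point-neuron variables.

    spike_counts: (n_exc, n_bins) spike counts of excitatory cells in 1-ms bins
    vm, i_ampa, i_gaba: (n_exc, n_timesteps) membrane potentials and synaptic currents
    """
    # FR proxy: population rate in spikes/s, smoothed with a 5-ms rectangular window
    pop_rate = spike_counts.mean(axis=0) / (bin_ms * 1e-3)
    kernel = np.ones(smooth_ms) / smooth_ms
    fr = np.convolve(pop_rate, kernel, mode='same')

    vm_proxy = vm.mean(axis=0)                  # average membrane potential (Vm)
    ampa = i_ampa.sum(axis=0)                   # summed AMPA currents
    gaba = i_gaba.sum(axis=0)                   # summed GABA currents
    sum_i = ampa + gaba                         # sum of synaptic currents (ΣI)
    abs_i = np.abs(i_ampa).sum(axis=0) + np.abs(i_gaba).sum(axis=0)   # |I|

    return fr, vm_proxy, ampa, gaba, sum_i, abs_i
```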

For several reasons (e.g., different rise and decay time constants or different peak conductances), we expect that AMPA and GABA currents contribute differently to the EEG and that the optimal combination of both types of currents could involve different time delays between them. Following Mazzoni and colleagues [42], the new class of current-based proxies, the weighted sum of currents (WS), was based on a linear combination of AMPA and GABA currents, with a factor α describing the relative ratio between the two currents and a specific delay for each type of current (τAMPA, τGABA):

Φ_WS(t) = ∑_exc. I_AMPA(t − τ_AMPA) − α (∑_exc. I_GABA(t − τ_GABA)). (5)

The optimal values of α, τAMPA and τGABA were found to be 1.65, 6 ms and 0 ms for the LFP, respectively [42]. As a result, the LFP reference weighted sum (LRWS) proxy was defined as

Φ_LRWS(t) = ∑_exc. I_AMPA(t − 6 ms) − 1.65 (∑_exc. I_GABA(t)). (6)

Here we also introduced two new proxies derived from the WS formulation: the EEG reference weighted sum 1 (ERWS1) and the EEG reference weighted sum 2 (ERWS2), whose parameters were optimized to fit the EEG under the different network states of the point-neuron network. While the concept of ERWS1 is similar to that of LRWS, with fixed optimal values of α, τAMPA and τGABA, the parameters of ERWS2 were defined as power functions of the firing rate of the thalamic input (ν0, unitless) to account for possible dependencies of the EEG on the external rate:

Φ_ERWS1(t) = ∑_exc. I_AMPA(t − τ_AMPA^(ERWS1)) − α_ERWS1 (∑_exc. I_GABA(t − τ_GABA^(ERWS1))), (7)
Φ_ERWS2(t, ν0) = ∑_exc. I_AMPA(t − τ_AMPA^(ERWS2)(ν0)) − α_ERWS2(ν0) (∑_exc. I_GABA(t − τ_GABA^(ERWS2)(ν0))), (8)
τ_AMPA^(ERWS2)(ν0) = a1·ν0^b1 + c1,  τ_GABA^(ERWS2)(ν0) = a2·ν0^b2 + c2,  α_ERWS2(ν0) = a3·ν0^b3 + c3. (9)

The total number of parameters to optimize was 3 for ERWS1 (αERWS1, τAMPA(ERWS1) and τGABA(ERWS1)) and 9 for ERWS2 (a1, b1, c1, a2, b2, c2, a3, b3 and c3). We experimented with other classes of functions (e.g., exponential and polynomial functions) to describe the dependency of the ERWS2 parameters on ν0, but the best performance was obtained with power functions.
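As an illustration, a minimal sketch of how ERWS2 (Eqs 8–9) can be evaluated from the summed AMPA and GABA current time series is given below. The dictionary of fitted constants a1–c3 is a placeholder for the values released with the paper, and the wrap-around introduced by np.roll only affects the startup transient, which is discarded in practice.

```python
import numpy as np

def erws2_proxy(i_ampa, i_gaba, nu0, params, dt_ms=0.1):
    """ERWS2 proxy: delayed AMPA minus weighted, delayed GABA currents (Eqs 8-9).

    i_ampa, i_gaba: currents summed over excitatory cells (1-D arrays)
    nu0: thalamic input rate of the simulation (treated as unitless)
    params: dict with the fitted constants a1..c3 of Eq 9 (placeholders here)
    """
    tau_ampa = params['a1'] * nu0 ** params['b1'] + params['c1']   # ms
    tau_gaba = params['a2'] * nu0 ** params['b2'] + params['c2']   # ms
    alpha    = params['a3'] * nu0 ** params['b3'] + params['c3']

    # delays expressed as an integer number of simulation steps
    d_ampa = int(round(tau_ampa / dt_ms))
    d_gaba = int(round(tau_gaba / dt_ms))

    # np.roll shifts the signal so that proxy(t) uses I(t - tau); the wrapped
    # initial samples fall inside the discarded startup transient
    proxy = np.roll(i_ampa, d_ampa) - alpha * np.roll(i_gaba, d_gaba)

    # z-score, as done before comparing proxies with the ground-truth EEG
    return (proxy - proxy.mean()) / proxy.std()
```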

Leaky integrate-and-fire point-neuron network

We implemented a recurrent network model of LIF point neurons based on the Brunel model [31] and on the modified versions developed in subsequent publications [14,15,34,42,45,69]. These models have been shown to explain and capture a large fraction of the variance of the dynamics of neural activity in primary visual cortex during naturalistic stimulation, including a wide range of cortical oscillations such as low-frequency (1–12 Hz) and gamma (30–100 Hz) oscillations. In particular, the network structure and model parameters are the same as those used in [69] with conductance-based synapses (we refer the reader to this publication for an in-depth technical description of the implementation). Briefly, the network was composed of 5000 neurons, of which 4000 were excitatory (i.e., their projections onto other neurons form AMPA-like excitatory synapses) and 1000 inhibitory (i.e., their projections form GABA-like synapses). Neurons were randomly connected, with a connection probability between each pair of neurons of 0.2. This means that, on average, the numbers of incoming excitatory and inhibitory connections onto each neuron were 800 and 200, respectively. Both populations received two different types of excitatory external input: a thalamic input intended to carry the information about external stimuli and a stimulus-unrelated input representing slow ongoing fluctuations of activity. Spike trains of the external inputs were generated by independent Poisson processes. While the firing rate of every individual Poisson process of the thalamic input was kept constant within each simulation (in the range [1.5, 30] spikes/s), the firing rate of the cortico-cortical input varied over time with slow dynamics, according to an Ornstein-Uhlenbeck (OU) process with zero mean:

τ_n dn(t)/dt = −n(t) + σ_n √(2τ_n) η(t) (10)

Here σ_n² (0.16 spikes/s) is the variance of the noise, η(t) is Gaussian white noise, and τ_n (16 ms) is the time constant. The full network description is given in Tables 3 and 4, following the guidelines indicated in [87].
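A minimal Euler-Maruyama sketch of this OU rate modulation is shown below, assuming the parameters listed above (σ_n = 0.4, so that σ_n² = 0.16, and τ_n = 16 ms); the time step is illustrative.

```python
import numpy as np

def ou_rate(duration_ms=3000.0, dt_ms=0.1, tau_n=16.0, sigma_n=0.4, seed=1):
    """Euler-Maruyama integration of the zero-mean OU process of Eq 10."""
    rng = np.random.default_rng(seed)
    n_steps = int(duration_ms / dt_ms)
    n = np.zeros(n_steps)
    for k in range(1, n_steps):
        # deterministic relaxation towards zero plus scaled white-noise increment;
        # the stationary variance of n equals sigma_n**2
        n[k] = n[k - 1] - dt_ms * n[k - 1] / tau_n \
               + sigma_n * np.sqrt(2.0 * dt_ms / tau_n) * rng.standard_normal()
    return n
```

The resulting trace is added to the baseline rate of the cortico-cortical Poisson generators at each time step.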

Table 3. Description of the point-neuron network.

A: Model summary
Structure Excitatory-inhibitory (E-I) network
Populations Two: excitatory and inhibitory
Input 2 independent Poisson spike trains, one with a fixed rate and the other with a time-varying rate generated by an OU process
Measurement Spikes, membrane potential, AMPA and GABA currents
Neuron model Cortex: leaky integrate-and-fire (LIF) with fixed threshold and fixed absolute refractory time; external inputs: point process
Synapse model Difference of exponential functions; conductance-based synapses
Topology None
Connectivity Random and sparse
B: Populations
Type | Elements | Size
Pyramidal cells | LIF neurons | 4000
Interneurons | LIF neurons | 1000
Thalamic input | Poisson generator | 1
Cortico-cortical input | Poisson generator | 1
C: Connectivity
Name | Source | Target | Pattern
AMPAPyr_Pyr | Pyramidal | Pyramidal | Random convergent (p = 0.2), weight gPyr_Pyr
AMPAPyr_Int | Pyramidal | Interneuron | Random convergent (p = 0.2), weight gPyr_Int
GABAInt_Pyr | Interneuron | Pyramidal | Random convergent (p = 0.2), weight gInt_Pyr
GABAInt_Int | Interneuron | Interneuron | Random convergent (p = 0.2), weight gInt_Int
AMPAtha_Pyr | Thalamic | Pyramidal | Fixed in-degree (800), weight gtha_Pyr
AMPAtha_Int | Thalamic | Interneuron | Fixed in-degree (800), weight gtha_Int
AMPAcort_Pyr | Cortical | Pyramidal | Fixed in-degree (800), weight gcort_Pyr
AMPAcort_Int | Cortical | Interneuron | Fixed in-degree (800), weight gcort_Int
D: Neuron model
Type Leaky integrate-and-fire
Description τ_m dV(t)/dt = −V(t) + V_leak − I_tot(t)/g_leak,
I_tot(t) = ∑_{N_AMPA^rec} I_AMPA^rec(t) + ∑_{N_GABA^rec} I_GABA^rec(t) + I_AMPA^ext(t),
E: Synapse model
Type Conductance-based synapse, difference of exponentials [31]
Description I_syn(t) = g_syn s_syn(t) (V(t) − E_syn),
if a presynaptic spike occurs:
s_syn(t) = (τ_m / (τ_d − τ_r)) [exp(−(t − τ_l)/τ_d) − exp(−(t − τ_l)/τ_r)]
F: Input
Type Description
Poisson generator Thalamic input, time-constant input with rate ν0; each neuron receives 800 independent thalamic inputs
Poisson generator Cortico-cortical input, OU process with zero mean; each neuron receives 800 independent cortico-cortical inputs
G: Global simulation parameters
Simulation duration 3000 or 10000 (only for Fig 8) ms
Temporal resolution 0.05 or 0.1 (Figs 8 and 9) ms
Startup transient 500 ms

Table 4. Parameters of the neuron models used in the point-neuron network.

A: Neuron model
Parameter | Pyramidal cells | Interneurons
V_leak (mV) | -70 | -70
V_threshold (mV) | -52 | -52
V_reset (mV) | -59 | -59
τ_refractory (ms) | 2 | 1
g_leak (nS) | 25 | 20
C_m (pF) | 500 | 200
τ_m (ms) | 20 | 10
B: Connection parameters
Parameter | Pyramidal cells | Interneurons
E_AMPA (mV) | 0 | 0
E_GABA (mV) | -80 | -80
τ_r(AMPA) (ms) | 0.4 | 0.2
τ_d(AMPA) (ms) | 2 | 1
τ_r(GABA) (ms) | 0.25 | 0.25
τ_d(GABA) (ms) | 5 | 5
τ_l (ms) | 1 | 1
g_AMPA(rec.) (nS) | 0.178 | 0.233
g_AMPA(tha.) (nS) | 0.234 | 0.317
g_AMPA(cort.) (nS) | 0.187 | 0.254
g_GABA (nS) | 2.01 | 2.7

Multicompartment-neuron network

The EEG was computed by projecting the spiking activity of the point-neuron network onto a network of multicompartment neuron models in which every multicompartment neuron was assigned a unique corresponding point neuron. A key factor for a successful representation of the EEG is the selection of proper morphologies of multicompartment neurons with detailed and realistic dendritic compartments. Since our focus was on computing the EEG for cortical layer 2/3, we acquired representative morphological reconstructions of L2/3 pyramidal cells and interneurons from publicly available repositories: the Neocortical Microcircuitry (NMC) portal [47,48], based predominantly on the data released by Markram and collaborators [47], and the Allen Brain Atlas (ABA) [51]. We also restricted our selection to rodent morphologies. In our simulations, we evaluated three different morphologies of L2/3 pyramidal cells and one morphology of a specific type of L2/3 interneuron, the large basket cell (the most numerous interneuron class in L2/3 [47]); these are denoted PY and LBC, respectively, in Table 5. Unless otherwise stated, the default morphology file used for pyramidal cells in our simulations is dend-C250500A-P3_axon-C260897C-P2-Clone_9.

Table 5. Morphology types and file identifiers used in the multicompartment neuron network model.

Cell type | Animal species | File identifier | Source
L2/3 PY | Rat | dend-C250500A-P3_axon-C260897C-P2-Clone_9 | NMC
L2/3 PY | Rat | dend-C260897C-P3_axon-C220797A-P3-Clone_0 | NMC
L2/3 PY | Mouse | Cux2-CreERT2, ID:486262299 | ABA
L2/3 LBC | Rat | C250500A-I4_Clone_0 | NMC

Soma compartments of pyramidal cells and interneurons were randomly placed in a cylindrical section of radius 0.5 mm, at Z = 8.35 mm. We assumed that GABA presynaptic inputs could only be located on dendritic compartments below the reference point Z = 8.5 mm. AMPA synapses were homogeneously distributed along the Z-axis in both cell types, with placement probability proportional to the membrane area of each segment. This configuration resulted in an asymmetric distribution of AMPA and GABA synapses onto pyramidal cells, creating a stronger current dipole moment for this cell type. Each multicompartment neuron was modeled as a non-spiking neuron with a passive membrane [38]. Tables 6 and 7 summarize the properties of the multicompartment neuron network.
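The area-weighted synapse placement described above can be sketched as follows, assuming arrays of per-segment membrane areas and depths obtained from the reconstructed morphology; the variable names and the seed are illustrative.

```python
import numpy as np

def place_synapses(seg_area, seg_z, n_ampa, n_gaba, z_cut_um=8500.0, seed=2):
    """Draw synapse locations (segment indices) with probability proportional to
    each segment's membrane area; GABA synapses are restricted to segments below z_cut_um.

    seg_area: (n_segments,) membrane area of each dendritic segment
    seg_z:    (n_segments,) depth (Z position, um) of each segment
    """
    rng = np.random.default_rng(seed)

    # AMPA: any segment, weighted by membrane area
    p_ampa = seg_area / seg_area.sum()
    ampa_idx = rng.choice(seg_area.size, size=n_ampa, p=p_ampa)

    # GABA: only segments below the reference depth, still area-weighted
    p_gaba = np.where(seg_z < z_cut_um, seg_area, 0.0)
    p_gaba /= p_gaba.sum()
    gaba_idx = rng.choice(seg_area.size, size=n_gaba, p=p_gaba)

    return ampa_idx, gaba_idx
```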

Table 6. Description of the multicompartment neuron network.

A: Model summary
Structure Unconnected populations of multicompartment neurons
Populations Two: pyramidal cells and interneurons
Input Presynaptic spiking activity as modeled by the point-neuron network
Measurement EEG, current dipole moment
Neuron model Multicompartment neuron model based on the passive cable formalism
Synapse model Difference of exponential functions; conductance-based synapses
Topology Cylindrical volume with radius r = 0.5 mm
Connectivity None
B: Populations
Type Populations of 4000 pyramidal cells and 1000 interneurons
Cell positions Soma compartments located at Z = 8.35 mm and randomly distributed within the circular section of the cylinder
Cell orientations Fixed orientation with apical dendrites oriented along the Z-axis
Morphologies Reconstructed morphologies from the NMC and ABA (Table 5); axons removed if present
C: Connectivity
No network connectivity, synaptic inputs are generated by the point-neuron network with the same synaptic parameters (Table 4)
D: Neuron model
Type Multicompartment reconstructed morphologies
Description Non-spiking neurons based on the passive cable formalism (except in subsection "The performance of EEG proxies depends on the neuron morphology, distribution of synapses and the type of dendritic conductances"), with membrane capacitance c_m, membrane resistivity r_m, axial resistivity r_a and leak reversal potential E_L.
E: Synapse model
Type Conductance-based synapse, difference of exponentials
Description I_syn(t) = g_syn s_syn(t) (V(t) − E_syn),
s_syn(t) = A [exp(−(t − τ_l)/τ_d) − exp(−(t − τ_l)/τ_r)],
where A is a normalization factor that gives a peak conductance of g_syn
F: Input
Type Spike times of spiking neuron network (including thalamic and cortico-cortical input spikes), no recurrent input
Description All dendrites of interneurons receive GABA synapses, while only the dendrites of pyramidal cells below Z = 8.5 mm receive GABA synapses; AMPA synapses are homogeneously positioned along the Z-axis in both cell types; synapse locations are randomly assigned to cell compartments with a probability proportional to the compartment's surface area divided by the total surface area of the cell
G: Global simulation parameters
Simulation duration 3000 or 10000 (only for Fig 8) ms
Temporal resolution 0.05 or 0.1 (Figs 8 and 9) ms
Startup transient 500 ms

Table 7. Parameters of multicompartment neurons.

Parameter | Pyramidal cells | Interneurons
c_m (μF/cm²) | 1 | 1
r_m (kΩ cm²) | 30 | 20
r_a (Ω cm) | 100 | 100
E_L (mV) | -70 | -70

Optimization and validation of EEG proxies

We created two different simulated datasets, one for optimization of the parameters of ERWS1 and ERWS2 (Eqs 7–9), and the other for validation of the performance of all proxies. The datasets were generated by varying the two parameters of the point-neuron network commonly used to explore different network states [31,44]: the rate of the external input, ν0, and the relative strength of inhibitory synapses, defined here as g = gInt_Pyr/gPyr_Pyr. We selected 58 values of ν0 within the range [1.5, 30] spikes/s and 3 values of g (5.65, 8.5 and 11.3), which encompass the different network states: asynchronous irregular, synchronous irregular and synchronous regular [12]. For every pair (ν0, g), we generated three simulations of the point-neuron and multicompartment-neuron networks with different random initial conditions (e.g., recurrent connections of the point-neuron network or soma positions of multicompartment neurons). The simulated outputs of two of these network instantiations were used for the optimization dataset and the third for the validation dataset.

Prior to comparing the EEG traces with the point-neuron model predictions, we z-scored the proxies and the EEG signal by subtracting their mean value and dividing by their standard deviation. The best parameters of ERWS1 and ERWS2 were found by minimizing the sum of squared errors (SSE) between the ground-truth EEG and the proxy over all network instantiations i of the optimization dataset:

SSE = ∑_i ∑_t (EEG_i(t) − Φ_i(t))² (11)

Time constants of the proxies (Eqs 7–9) were restricted to be discrete variables, since simulation time is discrete. This turns the optimization into a discrete optimization problem, which is harder to solve than a continuous one. However, the limited number of parameters to optimize allowed us to run a simple brute-force parameter search.
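The brute-force search can be sketched as follows for the three ERWS1 parameters, minimizing the SSE of Eq 11 over a grid of α values and discrete delays; the grid bounds shown here are illustrative, not the bounds used in the actual optimization.

```python
import itertools
import numpy as np

def zscore(x):
    return (x - x.mean()) / x.std()

def fit_erws1(eeg_list, ampa_list, gaba_list, dt_ms=0.1,
              alphas=np.arange(0.2, 4.0, 0.05), max_delay_ms=10.0):
    """Brute-force search for (alpha, tau_AMPA, tau_GABA) minimizing Eq 11.

    eeg_list, ampa_list, gaba_list: lists over the network instantiations of the
    optimization dataset (ground-truth EEG and summed AMPA/GABA currents).
    """
    delays = range(int(max_delay_ms / dt_ms) + 1)      # delays in simulation steps
    best_sse, best_params = np.inf, None
    for alpha, d_a, d_g in itertools.product(alphas, delays, delays):
        sse = 0.0
        for eeg, ampa, gaba in zip(eeg_list, ampa_list, gaba_list):
            proxy = np.roll(ampa, d_a) - alpha * np.roll(gaba, d_g)
            sse += np.sum((zscore(eeg) - zscore(proxy)) ** 2)
        if sse < best_sse:
            best_sse, best_params = sse, (alpha, d_a * dt_ms, d_g * dt_ms)
    return best_params          # (alpha, tau_AMPA in ms, tau_GABA in ms)
```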

The performance of each proxy was evaluated using the coefficient of determination R2, that is, the fraction of the EEG variance explained by the proxy, computed as the squared correlation coefficient between the proxy and the ground-truth EEG. The validation results were calculated as the average R2 of each proxy across all network instantiations i of the validation dataset.

Implementation of the convolutional neural network

The processing pipeline of the CNN architecture, illustrated in Fig 7A, was implemented with the machine-learning library Keras running on top of TensorFlow [88]. The CNN consists of a one-dimensional (1D) convolutional layer with 50 filters and a kernel of size 20, followed by a max pooling layer of pool size 2, a flatten layer and two fully connected layers of 200 units each (the second of which is the output layer). The rectified linear unit (ReLU) was used as the activation function for all layers except the output layer. To reduce overfitting, we applied L2 activity regularization (λ = 0.001) to the convolutional layer. The stride (the amount by which filters shift) was set to 1 for the convolutional layer and 2 for the max pooling layer. The input layer was formed by two channels of 1D data corresponding to the AMPA and GABA current time series simulated by the point-neuron network. Instead of using the whole simulation (3000 ms) as a single input, we split the time series into multiple chunks (i.e., samples) of 100 ms, a window size that we found improved the estimation accuracy of the CNN. The nodes of the output layer predict the corresponding 100-ms segment of the EEG signal.

The CNN was trained with first-order gradient descent (Adam optimizer [52]) using the default parameters provided in the original paper. The loss function for training was the mean squared error (MSE) between the predicted and true values of the EEG. To monitor training, we tracked the MSE as well as the mean absolute error (MAE) and the coefficient of determination R2. The CNN was trained for 100 epochs, a number sufficient to ensure convergence of the error metrics. To train and test the CNN, we used the same datasets generated for optimizing the parameters of the current-based proxies, as described above.
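A minimal Keras sketch of the architecture and training setup described above is given below. It assumes an input window of 200 samples per 100-ms chunk (i.e., a 0.5-ms step after downsampling); the exact window length depends on the temporal resolution used, and the trained weights released with the paper should be loaded for actual use.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

WINDOW = 200   # samples per 100-ms chunk (assumes 0.5-ms steps after resampling)

model = tf.keras.Sequential([
    # two input channels: summed AMPA and GABA currents of the excitatory population
    layers.Conv1D(filters=50, kernel_size=20, strides=1, activation='relu',
                  activity_regularizer=regularizers.l2(1e-3),
                  input_shape=(WINDOW, 2)),
    layers.MaxPooling1D(pool_size=2, strides=2),
    layers.Flatten(),
    layers.Dense(200, activation='relu'),
    layers.Dense(WINDOW)          # linear output: the EEG segment for this window
])

model.compile(optimizer=tf.keras.optimizers.Adam(),   # default Adam parameters
              loss='mse', metrics=['mae'])

# x_train: (n_samples, WINDOW, 2) current chunks; y_train: (n_samples, WINDOW) EEG chunks
# model.fit(x_train, y_train, epochs=100, validation_data=(x_val, y_val))
```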

Band-limited inputs across lower frequency bands

In Fig 8, we simulated an increase of low-frequency power in the thalamic input by superimposing a band-limited signal s(t) onto the constant baseline rate υ0. The spike trains activating the thalamocortical synapses were thus generated by a Poisson process with the rectified time-varying rate νext(t) = [υ0 + s(t)]+. To generate the band-limited signal s(t), we created white Gaussian noise with zero mean and adjusted its variance to produce a relative increase of low-frequency power within the empirical range observed in real EEG/MEG data. Frequencies in the selected low-frequency band were then extracted with a second-order Butterworth bandpass filter. To separate the narrow-band oscillations from the aperiodic power-law spectral component, we applied the FOOOF algorithm [53] to the EEG spectra. The aperiodic-adjusted power was computed by subtracting the aperiodic fit from the power spectrum [53,54] as log10 PSDosc − log10 PSD1/f, where PSDosc and PSD1/f are the PSD values of the oscillatory peak and the aperiodic power-law component, respectively, evaluated at the center frequency of each band.
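A minimal sketch of the band-limited rate modulation is given below; the noise amplitude passed to the function is a placeholder that, in practice, is tuned by hand until the aperiodic-adjusted power of the resulting EEG falls in the 0.05–0.2 range described above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_limited_rate(nu0, band_hz, noise_std, duration_ms=10000.0, dt_ms=0.1, seed=3):
    """Thalamic rate nu_ext(t) = [nu0 + s(t)]+ with s(t) a band-limited Gaussian noise."""
    rng = np.random.default_rng(seed)
    fs = 1000.0 / dt_ms                                    # sampling rate in Hz
    n_steps = int(duration_ms / dt_ms)

    white = rng.normal(0.0, noise_std, n_steps)            # zero-mean white Gaussian noise
    b, a = butter(2, band_hz, btype='bandpass', fs=fs)     # 2nd-order Butterworth bandpass
    s = filtfilt(b, a, white)

    return np.clip(nu0 + s, 0.0, None)                     # rectified time-varying rate

# example: an alpha-band (8-12 Hz) modulation around a 1.5 spikes/s baseline;
# noise_std = 2.0 is an illustrative placeholder
rate = band_limited_rate(nu0=1.5, band_hz=(8.0, 12.0), noise_std=2.0)
```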

Computation of ground-truth EEGs and proxies on the New York head model

To compute the ground-truth EEG with the New York head model [66], we used the current dipole moments generated by multicompartment neurons of two different subnetworks (S4D Fig) from the unconnected network model shown in Fig 9E (for a more detailed description of this approach, see [35]). First, all single-cell current dipole moments of a given subnetwork were summed into a single population current dipole moment. We placed these population current dipole moments at two spatially separated, orthogonally oriented locations on the folded cortical surface of the New York head (S4B Fig). Topographic maps were computed from the EEG electrodes for each current dipole moment individually and then summed to produce the total EEG map.

We then checked whether our EEG proxies could be used to predict current dipole moments. We found that the EEG signal calculated at the top of the head in the four-sphere head model, resulting from a current dipole directly below the electrode, was in fact just a scaling of the dominant component (the component aligned with the depth axis of the cortex) of the original current dipole (S3 Fig). This meant that the proxies developed for the EEG signal at the top of the four-sphere head model were in fact equally valid as proxies for the (normalized) dominant component of the population current dipole moment. Based on this observation, we generated EEG maps from our proxies (in our example, we used ERWS2) by plugging their outputs into the dominant component of the current dipole moment and recalculating EEGs on the New York head model.
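The two-step procedure can be sketched as follows: the single-cell dipoles are summed into a population dipole, its dominant (depth-axis) component is replaced by the z-scored proxy rescaled to the original component's mean and standard deviation, and the result is mapped to an electrode through a lead-field (gain) vector. The lead-field vector here is a hypothetical placeholder standing in for the gain of the New York head model at the chosen dipole location and electrode.

```python
import numpy as np

def eeg_from_proxy(cell_dipoles, proxy, lead_field_row):
    """Approximate the EEG at one electrode from a point-neuron proxy.

    cell_dipoles: (n_cells, n_timesteps, 3) single-cell current dipole moments
    proxy: (n_timesteps,) z-scored proxy output (e.g., ERWS2)
    lead_field_row: (3,) gain vector mapping a dipole at the population location
                    to the potential at the chosen electrode (placeholder)
    """
    # 1. population current dipole moment
    p_pop = cell_dipoles.sum(axis=0)                 # (n_timesteps, 3)

    # 2. replace the dominant (z) component by the proxy, rescaled to the
    #    mean and standard deviation of the original component
    pz = p_pop[:, 2]
    p_pop[:, 2] = proxy * pz.std() + pz.mean()

    # 3. forward-model step: dot product with the electrode's lead-field row
    return p_pop @ lead_field_row
```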

Analysis of network states

To characterize the different network states of activity in the point-neuron network at the level of both single neurons and populations, we employed the descriptors developed by Kumar and collaborators for conductance-based point-neuron networks [44].

Synchrony

We quantified the synchrony of the population activity in the network as the average pairwise spike-train correlation from a randomly selected subpopulation of 1000 excitatory neurons. The spike trains were binned in non-overlapping time windows of 2 ms.

Irregularity

Irregularity of individual spike trains was measured by the coefficient of variation (the ratio of the biased standard deviation to the mean) of the corresponding interspike interval (ISI) distribution. Low values indicate regular spiking; a value of 1 reflects Poisson-type behavior. The irregularity index was computed for all excitatory neurons.

Mean firing rate

The mean firing rate was estimated by averaging the firing rates of all excitatory cells, and was calculated with a bin width of 1 ms.
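The three descriptors above can be computed from the recorded spike trains as in the sketch below; array shapes and the handling of silent neurons are illustrative assumptions.

```python
import numpy as np

def network_state_descriptors(spike_trains, spike_counts_2ms, duration_s):
    """Synchrony, irregularity and mean rate of the excitatory population.

    spike_trains: list of spike-time arrays (ms), one per excitatory neuron
    spike_counts_2ms: (n_neurons, n_bins) counts in 2-ms bins for a random
                      subset of 1000 excitatory neurons
    duration_s: simulated time in seconds (excluding the startup transient)
    """
    # Synchrony: average pairwise correlation of the 2-ms binned spike trains
    corr = np.corrcoef(spike_counts_2ms)
    n = corr.shape[0]
    synchrony = (np.nansum(corr) - n) / (n * (n - 1))    # mean off-diagonal entry

    # Irregularity: mean coefficient of variation of the interspike intervals
    cvs = [np.std(np.diff(st)) / np.mean(np.diff(st))
           for st in spike_trains if st.size > 2]
    irregularity = np.mean(cvs)

    # Mean firing rate averaged over all excitatory cells
    mean_rate = np.mean([st.size for st in spike_trains]) / duration_s

    return synchrony, irregularity, mean_rate
```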

Post-processing and spectral analysis

The z-scored EEG signals and proxies were resampled by applying a fourth-order Chebyshev type I low-pass filter with critical frequency fc = 800 Hz and 0.05 dB passband ripple, using a forward-backward (zero-phase) filter operation, and then selecting every 10th time sample. The normalized power spectral density (PSD) was estimated with Welch's method (based on the Fast Fourier Transform), dividing the z-scored EEG data into eight segments with 50% overlap.
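A minimal SciPy sketch of this post-processing is shown below. The final normalization of the PSD (division by its sum) is an assumption made here for illustration; the segment length is chosen so that roughly eight 50%-overlapping segments cover the signal.

```python
import numpy as np
from scipy.signal import cheby1, filtfilt, welch

def postprocess_and_psd(signal, fs_hz, downsample=10):
    """Low-pass filter, downsample and estimate the PSD of a z-scored signal."""
    # 4th-order Chebyshev type I low-pass, 0.05 dB ripple, 800 Hz cutoff,
    # applied forward and backward (zero-phase), then keep every 10th sample
    b, a = cheby1(4, 0.05, 800.0, btype='low', fs=fs_hz)
    filtered = filtfilt(b, a, signal)[::downsample]
    fs_new = fs_hz / downsample

    # Welch PSD with approximately eight 50%-overlapping segments
    nperseg = 2 * filtered.size // 9
    freqs, psd = welch(filtered, fs=fs_new, nperseg=nperseg)  # noverlap defaults to 50%
    return freqs, psd / psd.sum()     # normalization convention is an assumption
```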

Numerical implementation

Here we summarize the details of the software and hardware used to generate the results presented in this study. Point-neuron network simulations were implemented using NEST v2.16.0 [89]. EEG signals were computed using LFPy v2.0 [39] for the four-sphere head model and the LFPykit module [90] for the New York head model. Simulations of multicompartment model neurons were performed using NEURON v7.6.5 [91]. The CNN was constructed with the machine-learning library Keras v2.3. The source code relies on the freely available, object-oriented programming language Python (v3.6.9). Every simulation was parallelized using the following high-performance computing infrastructures: two dedicated servers (60 CPUs with 256 GB of RAM, and 48 CPUs with 128 GB of RAM) and the Franklin HPC cluster, all three at the Istituto Italiano di Tecnologia (IIT). We also used the Stallo high-performance computing facilities (NOTUR, No. NN4661K, the Norwegian Metacenter for Computational Science). Simulations of the point-neuron network used thread parallelism implemented with the OpenMP library. Network simulations with NEURON used distributed computing built on the MPI interface. Computation time for completing the simulations of both network models and the post-processing of results was 2 hours on average for each experimental condition.

Supporting information

S1 Fig. Performance of proxies for a heterogeneous population of pyramidal cells.

In the same simulation, the "NMC L2/3 PY, clone 9" morphology was randomly assigned to half of the pyramidal-cell population and the "NMC L2/3 PY, clone 0" morphology to the other half. Colors used for proxies are the same as those used in Fig 4.

(TIF)

S2 Fig. Learned filters of the convolutional layer and illustration of time shifts applied by the CNN to AMPA and GABA input currents.

Examples of weights learned by four filters of the convolutional layer, depicted both in the time (A) and frequency domains (B) for the AMPA and GABA inputs. (C) Examples of the CNN outputs in response to unit impulses applied either to the AMPA or GABA inputs. (D) Histograms of time shifts applied to the AMPA and GABA inputs for all combinations of impulses. Each time shift is computed as the difference between the time when the impulse is applied and the time in which the absolute response of the CNN reaches its maximum.

(TIF)

S3 Fig. Comparison between EEG and the dominant component of the current dipole moment at the top of the head.

Example time traces of the EEG (black line) and of the dominant component of the current dipole moment, Pz (red dashed line), at the top of the four-sphere head model, both computed from the multicompartment model network.

(TIF)

S4 Fig. Computation of EEGs on the New York head model.

(A) Distribution of EEG electrodes and brain surface of the New York head model [66]. The black line represents the cortical cross-section where the current dipoles were placed. (B) Current dipoles of the two subnetworks, S0 and S1 (represented by arrows of different colors), positioned in the cortical cross-section. (C) Topographic maps generated from the EEG electrodes, projected onto two-dimensional simplified plots of the head model. The ground-truth EEG maps are plotted in the left column and the EEG estimates of the ERWS2 proxy in the right column. The closest EEG electrode to each current dipole is plotted as a spot in the same color as the corresponding current dipole. The EEG electrode selected to show time traces of the EEG signal is depicted as a gray spot. (D) Dominant component of the current dipole moment (Pz) for the two subnetworks. (E) Time traces of the ground-truth EEG and the ERWS2 proxy registered at the electrode shown in panel C. (F) Time traces of the ground-truth EEG and the ERWS2 proxy registered at different electrodes located at the positions indicated by the gray spots.

(TIF)

Acknowledgments

We are grateful to R. Gao for useful suggestions and M. Libera for technical support.

Data Availability

The source code to simulate the LIF and multicompartment model networks can be downloaded at https://github.com/pablomc88/EEG_proxy_from_network_point_neurons. Datasets generated in this study, scripts to plot the figures and weights of the CNN are available from http://doi.org/10.5281/zenodo.4506494.

Funding Statement

This work was supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie (grant agreement No 893825 to P.M.C), the NIH Brain Initiative (grants U19NS107464 to S.P. and T.F., and NS108410 to S.P.), the Simons Foundation (SFARI Explorer 602849 to S.P.), the European Union Horizon 2020 Research and Innovation Programme under Grant Agreement No. 785907 and No. 945539 [Human Brain Project (HBP) SGA2 and SGA3 to G.T.E.], and the Norwegian Research Council (NFR) through NOTUR - NN4661K to G.T.E. The funders did not play any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Cohen MX. Where Does EEG Come From and What Does It Mean? Trends in Neurosciences. 2017;40(4):208–18. 10.1016/j.tins.2017.02.004
2. Buzsaki G. Neuronal Oscillations in Cortical Networks. Science. 2004;304(5679):1926–9. 10.1126/science.1099745
3. Hood DC, Zhang X. Multifocal ERG and VEP responses and visual fields: comparing disease-related changes. Doc Ophthalmol. 2000;100(2–3):115–37. 10.1023/a:1002727602212
4. Siegel M, Donner TH, Engel AK. Spectral fingerprints of large-scale neuronal interactions. Nature Reviews Neuroscience. 2012;13(2):121–34. 10.1038/nrn3137
5. Buzsáki G, Anastassiou CA, Koch C. The origin of extracellular fields and currents—EEG, ECoG, LFP and spikes. Nature Reviews Neuroscience. 2012;13(6):407–20. 10.1038/nrn3241
6. Einevoll GT, Kayser C, Logothetis NK, Panzeri S. Modelling and analysis of local field potentials for studying the function of cortical circuits. Nature Reviews Neuroscience. 2013;14(11):770–85. 10.1038/nrn3599
7. Lopes da Silva F. EEG and MEG: Relevance to Neuroscience. Neuron. 2013;80(5):1112–28. 10.1016/j.neuron.2013.10.017
8. Pesaran B, Vinck M, Einevoll GT, Sirota A, Fries P, Siegel M, et al. Investigating large-scale brain dynamics using field potential recordings: analysis and interpretation. Nature Neuroscience. 2018;21(7):903–19. 10.1038/s41593-018-0171-8
9. Brette R, Rudolph M, Carnevale T, Hines M, Beeman D, Bower JM, et al. Simulation of networks of spiking neurons: A review of tools and strategies. Journal of Computational Neuroscience. 2007;23(3):349–98. 10.1007/s10827-007-0038-6
10. Einevoll GT, Destexhe A, Diesmann M, Grün S, Jirsa V, de Kamps M, et al. The Scientific Case for Brain Simulations. Neuron. 2019;102(4):735–44. 10.1016/j.neuron.2019.03.027
11. Plesser HE, Eppler JM, Morrison A, Diesmann M, Gewaltig M-O. Efficient Parallel Simulation of Large-Scale Neuronal Networks on Clusters of Multiprocessor Computers. Euro-Par 2007 Parallel Processing. Lecture Notes in Computer Science; 2007. p. 672–81.
12. Brunel N. Phase diagrams of sparsely connected networks of excitatory and inhibitory spiking neurons. Neurocomputing. 2000;32–33:307–12.
13. Deco G, Jirsa VK, Robinson PA, Breakspear M, Friston K. The dynamic brain: from spiking neurons to neural masses and cortical fields. PLoS Comput Biol. 2008;4(8):e1000092. 10.1371/journal.pcbi.1000092
14. Mazzoni A, Panzeri S, Logothetis NK, Brunel N. Encoding of naturalistic stimuli by local field potential spectra in networks of excitatory and inhibitory neurons. PLoS Comput Biol. 2008;4(12):e1000239. 10.1371/journal.pcbi.1000239
15. Mazzoni A, Brunel N, Cavallari S, Logothetis NK, Panzeri S. Cortical dynamics during naturalistic sensory stimulations: experiments and models. J Physiol Paris. 2011;105(1–3):2–15. 10.1016/j.jphysparis.2011.07.014
16. Compte A. Synaptic Mechanisms and Network Dynamics Underlying Spatial Working Memory in a Cortical Network Model. Cerebral Cortex. 2000;10(9):910–23. 10.1093/cercor/10.9.910
17. Mongillo G, Barak O, Tsodyks M. Synaptic Theory of Working Memory. Science. 2008;319(5869):1543–6. 10.1126/science.1150769
18. Deco G, Thiele A. Cholinergic control of cortical network interactions enables feedback-mediated attentional modulation. European Journal of Neuroscience. 2011;34(1):146–57. 10.1111/j.1460-9568.2011.07749.x
19. Muller L, Reynaud A, Chavane F, Destexhe A. The stimulus-evoked population response in visual cortex of awake monkey is a propagating wave. Nature Communications. 2014;5(1). 10.1038/ncomms4675
20. Muller L, Chavane F, Reynolds J, Sejnowski TJ. Cortical travelling waves: mechanisms and computational principles. Nature Reviews Neuroscience. 2018;19(5):255–68. 10.1038/nrn.2018.20
21. Ostojic S. Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nature Neuroscience. 2014;17(4):594–600. 10.1038/nn.3658
22. Zerlaut Y, Zucca S, Panzeri S, Fellin T. The Spectrum of Asynchronous Dynamics in Spiking Networks as a Model for the Diversity of Non-rhythmic Waking States in the Neocortex. Cell Reports. 2019;27(4):1119–32.e7. 10.1016/j.celrep.2019.03.102
23. Hill S, Tononi G. Modeling Sleep and Wakefulness in the Thalamocortical System. Journal of Neurophysiology. 2005;93(3):1671–98. 10.1152/jn.00915.2004
24. Bazhenov M, Stopfer M, Rabinovich M, Huerta R, Abarbanel HDI, Sejnowski TJ, et al. Model of Transient Oscillatory Synchronization in the Locust Antennal Lobe. Neuron. 2001;30(2):553–67. 10.1016/s0896-6273(01)00284-7
25. David O, Kiebel SJ, Harrison LM, Mattout J, Kilner JM, Friston KJ. Dynamic causal modeling of evoked responses in EEG and MEG. NeuroImage. 2006;30(4):1255–72. 10.1016/j.neuroimage.2005.10.045
26. David O, Friston KJ. A neural mass model for MEG/EEG. NeuroImage. 2003;20(3):1743–55. 10.1016/j.neuroimage.2003.07.015
27. Ursino M, La Cara G-E. Travelling waves and EEG patterns during epileptic seizure: Analysis with an integrate-and-fire neural network. Journal of Theoretical Biology. 2006;242(1):171–87. 10.1016/j.jtbi.2006.02.012
28. Jansen BH, Rit VG. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol Cybern. 1995;73(4):357–66. 10.1007/BF00199471
29. Buehlmann A, Deco G. Optimal information transfer in the cortex through synchronization. PLoS Comput Biol. 2010;6(9). 10.1371/journal.pcbi.1000934
30. Hagen E, Dahmen D, Stavrinou ML, Lindén H, Tetzlaff T, van Albada SJ, et al. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks. Cerebral Cortex. 2016;26(12):4461–96. 10.1093/cercor/bhw237
31. Brunel N, Wang X-J. What Determines the Frequency of Fast Network Oscillations With Irregular Neural Discharges? I. Synaptic Dynamics and Excitation-Inhibition Balance. Journal of Neurophysiology. 2003;90(1):415–30. 10.1152/jn.01095.2002
32. Compte A, Sanchez-Vives MV, McCormick DA, Wang X-J. Cellular and Network Mechanisms of Slow Oscillatory Activity (<1 Hz) and Wave Propagations in a Cortical Network Model. Journal of Neurophysiology. 2003;89(5):2707–25. 10.1152/jn.00845.2002
33. Touboul J, Destexhe A. Can power-law scaling and neuronal avalanches arise from stochastic dynamics? PLoS One. 2010;5(2):e8982. 10.1371/journal.pone.0008982
34. Mazzoni A, Whittingstall K, Brunel N, Logothetis NK, Panzeri S. Understanding the relationships between spike rate and delta/gamma frequency bands of LFPs and EEGs using a local cortical network model. NeuroImage. 2010;52(3):956–72. 10.1016/j.neuroimage.2009.12.040
35. Næss S, Halnes G, Hagen E, Hagler DJ, Dale AM, Einevoll GT, et al. Biophysically detailed forward modeling of the neural origin of EEG and MEG signals. NeuroImage. 2021;225. 10.1016/j.neuroimage.2020.117467
36. Daunizeau J, David O, Stephan KE. Dynamic causal modelling: a critical review of the biophysical and statistical foundations. NeuroImage. 2011;58(2):312–22. 10.1016/j.neuroimage.2009.11.062
37. Ness TV, Halnes G, Næss S, Pettersen KH, Einevoll GT. Computing extracellular electric potentials from neuronal simulations. arXiv preprint arXiv:2006.16630. 2020.
38. De Schutter E, Van Geit W. Modeling complex neurons. In: Computational modeling methods for neuroscientists. Cambridge: MIT Press; 2009. p. 259–84.
39. Hagen E, Næss S, Ness TV, Einevoll GT. Multimodal Modeling of Neural Network Activity: Computing LFP, ECoG, EEG, and MEG Signals With LFPy 2.0. Frontiers in Neuroinformatics. 2018;12. 10.3389/fninf.2018.00012
40. Lindén H, Hagen E, Łęski S, Norheim ES, Pettersen KH, Einevoll GT. LFPy: a tool for biophysical simulation of extracellular potentials generated by detailed model neurons. Frontiers in Neuroinformatics. 2014;7. 10.3389/fninf.2014.00007
41. Pettersen KH, Lindén H, Dale AM, Einevoll GT. Extracellular spikes and CSD. In: Destexhe A, Brette R, editors. Handbook of Neural Activity Measurement. Cambridge: Cambridge University Press; 2012. p. 92–135.
42. Mazzoni A, Linden H, Cuntz H, Lansner A, Panzeri S, Einevoll GT. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models. PLoS Comput Biol. 2015;11(12):e1004584. 10.1371/journal.pcbi.1004584
43. Skaar JW, Stasik AJ, Hagen E, Ness TV, Einevoll GT. Estimation of neural network model parameters from local field potentials (LFPs). PLoS Comput Biol. 2020;16(3):e1007725. 10.1371/journal.pcbi.1007725
44. Kumar A, Schrader S, Aertsen A, Rotter S. The High-Conductance State of Cortical Networks. Neural Computation. 2008;20(1):1–43. 10.1162/neco.2008.20.1.1
45. Barbieri F, Mazzoni A, Logothetis NK, Panzeri S, Brunel N. Stimulus Dependence of Local Field Potential Spectra: Experiment versus Theory. Journal of Neuroscience. 2014;34(44):14589–605. 10.1523/JNEUROSCI.5365-13.2014
46. Nowak K, Mix E, Gimsa J, Strauss U, Sriperumbudur KK, Benecke R, et al. Optimizing a Rodent Model of Parkinson’s Disease for Exploring the Effects and Mechanisms of Deep Brain Stimulation. Parkinson’s Disease. 2011;2011:1–19. 10.4061/2011/414682
47. Markram H, Muller E, Ramaswamy S, Reimann MW, Abdellah M, Sanchez CA, et al. Reconstruction and Simulation of Neocortical Microcircuitry. Cell. 2015;163(2):456–92. 10.1016/j.cell.2015.09.029
48. Ramaswamy S, Courcol J-D, Abdellah M, Adaszewski SR, Antille N, Arsever S, et al. The neocortical microcircuit collaboration portal: a resource for rat somatosensory cortex. Frontiers in Neural Circuits. 2015;9. 10.3389/fncir.2015.00009
49. Næss S, Chintaluri C, Ness TV, Dale AM, Einevoll GT, Wójcik DK. Corrected Four-Sphere Head Model for EEG Signals. Frontiers in Human Neuroscience. 2017;11. 10.3389/fnhum.2017.00011
50. Murakami S, Okada Y. Contributions of principal neocortical neurons to magnetoencephalography and electroencephalography signals. The Journal of Physiology. 2006;575(3):925–36. 10.1113/jphysiol.2006.105379
51. Lein ES, Hawrylycz MJ, Ao N, Ayres M, Bensinger A, Bernard A, et al. Genome-wide atlas of gene expression in the adult mouse brain. Nature. 2006;445(7124):168–76. 10.1038/nature05453
52. Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 2014.
53. Donoghue T, Haller M, Peterson EJ, Varma P, Sebastian P, Gao R, et al. Parameterizing neural power spectra into periodic and aperiodic components. Nat Neurosci. 2020;23(12):1655–65. 10.1038/s41593-020-00744-x
54. He W, Donoghue T, Sowman PF, Seymour RA, Brock J, Crain S, et al. Co-Increasing Neuronal Noise and Beta Power in the Developing Brain. bioRxiv. 2019:839258.
55. He BJ. Scale-free brain activity: past, present, and future. Trends Cogn Sci. 2014;18(9):480–7. 10.1016/j.tics.2014.04.003
56. Voytek B, Kramer MA, Case J, Lepage KQ, Tempesta ZR, Knight RT, et al. Age-Related Changes in 1/f Neural Electrophysiological Noise. J Neurosci. 2015;35(38):13257–65. 10.1523/JNEUROSCI.2332-14.2015
57. Belitski A, Gretton A, Magri C, Murayama Y, Montemurro MA, Logothetis NK, et al. Low-Frequency Local Field Potentials and Spikes in Primary Visual Cortex Convey Independent Visual Information. Journal of Neuroscience. 2008;28(22):5696–709. 10.1523/JNEUROSCI.0009-08.2008
58. Buzsaki G, Wang XJ. Mechanisms of gamma oscillations. Annu Rev Neurosci. 2012;35:203–25. 10.1146/annurev-neuro-062111-150444
59. Fries P, Reynolds JH, Rorie AE, Desimone R. Modulation of oscillatory neuronal synchronization by selective visual attention. Science. 2001;291(5508):1560–3. 10.1126/science.1055465
60. Colgin LL, Denninger T, Fyhn M, Hafting T, Bonnevie T, Jensen O, et al. Frequency of gamma oscillations routes flow of information in the hippocampus. Nature. 2009;462(7271):353–7. 10.1038/nature08573
61. Melloni L, Molina C, Pena M, Torres D, Singer W, Rodriguez E. Synchronization of neural activity across cortical areas correlates with conscious perception. J Neurosci. 2007;27(11):2858–65. 10.1523/JNEUROSCI.4623-06.2007
62. Mably AJ, Colgin LL. Gamma oscillations in cognitive disorders. Curr Opin Neurobiol. 2018;52:182–7. 10.1016/j.conb.2018.07.009
63. Scheeringa R, Fries P. Cortical layers, rhythms and BOLD signals. NeuroImage. 2019;197:689–98. 10.1016/j.neuroimage.2017.11.002
64. Zucca S, Pasquale V, Lagomarsino de Leon Roig P, Panzeri S, Fellin T. Thalamic Drive of Cortical Parvalbumin-Positive Interneurons during Down States in Anesthetized Mice. Curr Biol. 2019;29(9):1481–90.e6. 10.1016/j.cub.2019.04.007
65. Dale AM, Fischl B, Sereno MI. Cortical Surface-Based Analysis. NeuroImage. 1999;9(2):179–94. 10.1006/nimg.1998.0395
66. Huang Y, Parra LC, Haufe S. The New York Head—A precise standardized volume conductor model for EEG source localization and tES targeting. NeuroImage. 2016;140:150–62. 10.1016/j.neuroimage.2015.12.019
67. Vorwerk J, Cho J-H, Rampp S, Hamer H, Knösche TR, Wolters CH. A guideline for head volume conductor modeling in EEG and MEG. NeuroImage. 2014;100:590–607. 10.1016/j.neuroimage.2014.06.040
68. Sanz Leon P, Knock SA, Woodman MM, Domide L, Mersmann J, McIntosh AR, et al. The Virtual Brain: a simulator of primate brain network dynamics. Front Neuroinform. 2013;7:10. 10.3389/fninf.2013.00010
69. Cavallari S, Panzeri S, Mazzoni A. Comparison of the dynamics of neural interactions between current-based and conductance-based integrate-and-fire recurrent networks. Front Neural Circuits. 2014;8:12. 10.3389/fncir.2014.00012
70. Trakoshis S, Martínez-Cañada P, Rocchi F, Canella C, You W, Chakrabarti B, et al. Intrinsic excitation-inhibition imbalance affects medial prefrontal cortex differently in autistic men versus women. eLife. 2020;9:e55684. 10.7554/eLife.55684
71. Rubenstein JLR, Merzenich MM. Model of autism: increased ratio of excitation/inhibition in key neural systems. Genes, Brain and Behavior. 2003;2(5):255–67. 10.1034/j.1601-183x.2003.00037.x
72. Sohal VS, Rubenstein JLR. Excitation-inhibition balance as a framework for investigating mechanisms in neuropsychiatric disorders. Molecular Psychiatry. 2019;24(9):1248–57. 10.1038/s41380-019-0426-0
73. Bosl W, Tierney A, Tager-Flusberg H, Nelson C. EEG complexity as a biomarker for autism spectrum disorder risk. BMC Medicine. 2011;9(1). 10.1186/1741-7015-9-18
74. Bosl WJ, Loddenkemper T, Nelson CA. Nonlinear EEG biomarker profiles for autism and absence epilepsy. Neuropsychiatric Electrophysiology. 2017;3(1).
75. Gogolla N, LeBlanc JJ, Quast KB, Südhof TC, Fagiolini M, Hensch TK. Common circuit defect of excitatory-inhibitory balance in mouse models of autism. Journal of Neurodevelopmental Disorders. 2009;1(2):172–81. 10.1007/s11689-009-9023-x
76. Mäki-Marttunen T, Krull F, Bettella F, Hagen E, Næss S, Ness TV, et al. Alterations in Schizophrenia-Associated Genes Can Lead to Increased Power in Delta Oscillations. Cerebral Cortex. 2019;29(2):875–91. 10.1093/cercor/bhy291
77. Mäki-Marttunen T, Kaufmann T, Elvsåshagen T, Devor A, Djurovic S, Westlye LT, et al. Biophysical Psychiatry—How Computational Neuroscience Can Help to Understand the Complex Mechanisms of Mental Disorders. Frontiers in Psychiatry. 2019;10(534). 10.3389/fpsyt.2019.00534
78. Neymotin SA, Daniels DS, Caldwell B, McDougal RA, Carnevale NT, Jas M, et al. Human Neocortical Neurosolver (HNN), a new software tool for interpreting the cellular and network origin of human MEG/EEG data. eLife. 2020;9.
79. Gao R, Peterson EJ, Voytek B. Inferring synaptic excitation/inhibition balance from field potentials. NeuroImage. 2017;158:70–8. 10.1016/j.neuroimage.2017.06.078
80. Potjans TC, Diesmann M. The Cell-Type Specific Cortical Microcircuit: Relating Structure and Activity in a Full-Scale Spiking Network Model. Cerebral Cortex. 2014;24(3):785–806. 10.1093/cercor/bhs358
81. Senk J, Hagen E, van Albada SJ, Diesmann M. Reconciliation of weak pairwise spike-train correlations and highly coherent local field potentials across space. arXiv preprint arXiv:1805.10235. 2018.
82. Rössert C, Pozzorini C, Chindemi G, Davison AP, Eroe C, King J, et al. Automated point-neuron simplification of data-driven microcircuit models. arXiv preprint arXiv:1604.00087. 2016.
83. Voges N, Schuz A, Aertsen A, Rotter S. A modeler’s view on the spatial structure of intrinsic horizontal connectivity in the neocortex. Prog Neurobiol. 2010;92(3):277–92. 10.1016/j.pneurobio.2010.05.001
84. Packer AM, Yuste R. Dense, Unspecific Connectivity of Neocortical Parvalbumin-Positive Interneurons: A Canonical Microcircuit for Inhibition? Journal of Neuroscience. 2011;31(37):13260–71. 10.1523/JNEUROSCI.3131-11.2011
85. Hellwig B. A quantitative analysis of the local connectivity between pyramidal neurons in layers 2/3 of the rat visual cortex. Biol Cybern. 2000;82(2):111–21. 10.1007/PL00007964
86. Perin R, Berger TK, Markram H. A synaptic organizing principle for cortical neuronal groups. Proc Natl Acad Sci U S A. 2011;108(13):5419–24. 10.1073/pnas.1016051108
87. Nordlie E, Gewaltig MO, Plesser HE. Towards reproducible descriptions of neuronal network models. PLoS Comput Biol. 2009;5(8):e1000456. 10.1371/journal.pcbi.1000456
88. Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467. 2016.
89. Linssen C, Deepu R, Mitchell J, Lepperød ME, Garrido J, Spreizer S, et al. NEST 2.16.0. Jülich Supercomputing Center; 2018.
90. Ness TV. GitHub source-code repository of the LFPykit module. 2021. Available from: https://github.com/LFPy/LFPykit.
91. Hines M. NEURON and Python. Frontiers in Neuroinformatics. 2009;3. 10.3389/neuro.11.003.2009
PLoS Comput Biol. doi: 10.1371/journal.pcbi.1008893.r001

Decision Letter 0

Daniele Marinazzo

4 Dec 2020

Dear Dr Panzeri,

Thank you very much for submitting your manuscript "Computation of the electroencephalogram (EEG) from network models of point neurons" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Daniele Marinazzo

Deputy Editor

PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: In this manuscript, Martínez-Cañada and colleagues employ the “hybrid” framework to simulate EEG more efficiently by decoupling the spiking dynamics generation (using point neuron Brunel E-I networks in different regimes) and the generation of electric fields using morphologically realistic, multi-compartmental neurons. The goal of the work is to find suitable proxies measurable in the spiking neural network (SNN), such as population firing rates or synaptic currents, that can be combined to approximate the “ground-truth” EEG signal generated from the biophysical model. The authors compare an array of such proxies and find that optimized lagged summation of the E and I currents performs the best (for linear proxies) across 1) all network regimes, 2) neuron morphologies and dendritic distribution of synapses, 3) EEG electrode positions, and 4) for non-stationary network states triggered by a Gaussian pulse thalamic input. They also test the performance of nonlinear proxies by training a shallow convolutional neural network on the SNN synaptic currents, and find small but consistent improvements over the linear proxies.

Overall, I find this paper and this line of work to be important, since the ability to make physiological inferences from non-invasive human brain signals, such as the EEG, would open up many avenues of research connecting cognitive neuroscience with computational and systems neuroscience. The simulations and analyses performed in this manuscript seem to be of high quality, very comprehensive, and well-documented, and they generate additional insights on the relationships between the different network measurements, such as the delay between firing rate vs. synaptic currents, as well as their relationship with the EEG. That being said, I’m not sure if this manuscript communicates significant advances in insights and tools over what’s already been presented in Mazzoni et al 2015 and Hagen et al 2016 (both of which I am a huge fan of) from the perspective of EEG modeling, due to concerns about the validity of the “ground truth” EEG model and a lack of emphasis (in the writing) on the insights that ARE generated from these computational experiments. The latter can be amended in a straightforward manner, for which I have some suggestions writing-wise. The former is discussed as a major concern below, for which I don’t really have recommendations as a reviewer (other than to conduct non-trivial additional experiments), but I leave it to the editor to evaluate the suitability of this manuscript for publication considering those gaps.

The key idea of this paper is that one may use spiking neural network outputs to directly simulate the EEG without computationally expensive biophysical simulations, and these experiments check if and when proxies are good substitutes for the full simulation. As such, my main concern is that the value of this work hinges on the validity of the “ground-truth” EEG model: the more accurate the ground truth EEG model is to real EEG signal generation mechanisms, the more valuable it would be to use an SNN proxy directly. On the other hand, if the ground truth EEG model does not accurately describe known biophysical mechanisms, then it doesn’t matter if one can find perfect proxies using SNN measures to approximate the “ground-truth” simulation, since it would not be useful in realistic situations, especially in use cases suggested in the discussion, i.e., inference. At the very least, this point should be clearly acknowledged in the discussion section as a limitation, beyond what it currently states in terms of extending to more complex head models. Given that, I feel that both components of the hybrid model suffer from weaknesses when compared to real EEG, and I’ve listed the major ones below:

- while the models presented in this work make some advances from Mazzoni et al 2015, such as including a broader set of Brunel network regimes, I find that the spiking dynamics—and as a result the simulated ground-truth EEG—are not representative of signatures often found in EEG. In the EEG, there are typically strong low-frequency oscillations, such as alpha (10Hz) or beta (20Hz). In addition, the power spectrum is almost always 1/f up to very low frequencies, and often Lorentzian-like (1/f with plateau at low frequencies). While the simulated EEG here can oscillate at 50Hz in the SR state, which can sometimes be observed in the EEG as gamma oscillations, both the time series and the power spectra do not in general look like realistic EEGs, especially in the SI state PSD (Fig. 1I). I believe this is partially due to the limitations of the Brunel network, as it lacks recurrent dynamics between the cortical and thalamic populations, though that is thought to be a crucial generator of prominent EEG features, such as the alpha rhythms (e.g., Halgren et al., PNAS 2020). Setting up a recurrent thalamocortical network may be outside the scope of the current study, but I suggest at least an experiment using oscillatory thalamic input at 10Hz, similar to what the authors have done with the Gaussian packet to simulate an ERP-like signal.

- the current study only considers AMPA and GABA currents, while slower currents, such as NMDA and calcium transients, most likely have strong contributions to the scalp EEG as well, especially since it is thought that low-frequency components of cortical activity sum constructively to make up the EEG. Again, this requires modification of the Brunel network in non-trivial ways, but there is no guarantee that the current results will hold, or simply extrapolate linearly, with more complex currents, and this should at least be mentioned in the discussion.

- neither the SNN nor the biophysical column have distance-dependent connections. I’m not sure how big of a deal this is, especially since the point neuron identities are randomly assigned to the biophysical neurons, but spatial correlation of activity could be another factor that would change how the sources are summed to get the EEG.

- the ground-truth EEG signal is assumed to come from just a single cortical column, and while this is closer to the truth for the extracellular LFP, it’s not accurate as a model of the EEG. It’s more likely that the signal an EEG sensor picks up is a complex summation of many such cortical columns across centimeters, oriented in different directions due to cortical folding, and even from heterogeneous cell-types, ratios of E/I cells, and neuronal morphologies (see, e.g., Wang Nat Rev Neurosci 2020). While a full-cortex spiking neural network model with heterogeneous populations is obviously out of the scope, it would be nice to see multiple interacting cortical populations spaced farther apart to mimic source mixing in EEG, especially considering the hybrid model studied in Hagen et al CerCor 2016. Or, at the very least, acknowledge the need to consider such factors in the discussion.

- similarly, the head model is a simple 4-layer spherical model with differential resistance, which is arguably the only addition to the model from Mazzoni et al PLOSCB 2015. The authors state in the discussion that more complex head models can be easily incorporated, since the simulated EEG sensors essentially pick up a scaled version of the dipole in the z-direction. However, as one of the main results is that the proxies perform differently when the sensor is at a different angle with respect to the population, and that cortical folding would imply the summation of sources from differentially oriented pyramidal populations, I’m not convinced that ERWS, for example, would apply straight out of the box in that situation. It would be more convincing to show this with several columns, or to discuss it as a limitation.

- relating to the previous point, I think it’s not clear until much later on (first time in L191) that the EEG model is meant to be a rodent model (with a downscaled and smooth cortex). I think it would orient the reader to have the correct expectation if this was mentioned as early as possible, potentially even in the abstract.

some stand-alone comments:

- I believe that the manuscript would be improved if specific findings are more explicitly highlighted, perhaps even in the abstract. For example, the fact that linear combination of AMPA/GABA currents is better than firing rate or Vm (e.g., L674) is very important for somebody simulating EEG signals from spiking networks, since firing rate is very commonly (and perhaps mistakenly) used as a surrogate. Also, it’s quite interesting that the optimized parameters for ERWS proxies are state-dependent, even though the physical shapes of the neurons themselves are unchanged, which suggests frequency-dependent filtering characteristics of the membrane or synapse.

- computing the R^2 on the time series emphasizes low-frequency fluctuations due to 1/f scaling of EEG signals, does this potentially obfuscate important differences in performance for the different metrics?

- R^2 = 0.9 is used as a benchmark throughout the study, but why, and what is considered “good”?

- is the PSD R^2 computed on log power or raw power? To complement the time series R^2 comparison, perhaps it would be good to compute R^2 on the logged power to de-emphasize the low frequencies.

- worth discussing why Fig1G (R^2 on time series) and J (R^2 on PSD) are so different

- L361: why are FR, sum(I) and Vm so bad? Apologies if I missed the extended discussion on this but I think this is an interesting point, further supporting the insight that the EEG is more about absolute current fluctuations, and less so about net input

- in my opinion, Fig 8 complicates the main message, which I interpret to be that linear proxies are sufficient for estimating the EEG, and thus can be moved to be supplemental as additional analyses.

To summarize, while the EEG ground-truth model realism is a matter of the authors’ decisions, and one can always include more details at the cost of computational complexity, there are several key factors that should be considered, or at least addressed and justified as to why they are omitted. On the other hand, the dynamics produced in the ground truth simulation do not resemble realistic EEG, and while I don’t think a computational study MUST include empirical data, I do think comparison to some real data, even just at a surface level, would ground this work, especially considering the implications suggested in the discussion section regarding inference. I absolutely agree that computational models serve as an important tool for understanding the EEG and allow for physiological inferences. However, while the discussion currently (L730-744) is a nice exposition of the potential uses in theory, since this paper does not actually do it with real data, it’s unclear from the paper itself how the current framework is supposed to be used by a naive reader (e.g., an EEG practitioner who wants to use the simulations for inference). So the claims read a little empty. Furthermore, some key references that work on those approaches, such as the Human Neocortical Neurosolver (Neymotin et al eLife 2020), are missing.

Richard Gao, PhD

Department of Cognitive Science, UCSD

Reviewer #2: Building upon previous work to introduce a proxy for the LFP generated by a network of point neurons, in this work the authors adapt this approach to develop a new proxy for the EEG generated by a network of point neurons. As in previous work, the authors feed spikes generated in the point neuron network to a multicompartment, morphological model in order to compare LFP generated by the point neuron network to a ground truth. They use this approach to develop a new proxy suitable for the EEG generated from a network of neurons. This new proxy surpasses previous work, in that this proxy remains valid across a wide range of network states. Finally, they extend their approach using a shallow CNN to understand how much more variance could be explained by more complicated nonlinear proxies.

This manuscript reports well-executed and important work. The utility of a well-motivated EEG proxy in the field of neuroscience is great. There are a few revisions and minor suggestions to address in this round of review.

MAJOR

- line 153: The parameters for the LIF model are well documented in the tables; however, it would be helpful to have a little information in the main text about the approximate scale (i.e. number of neurons and synapses) when introducing the LIF model. This also raises an important point: what network scale is sufficient to faithfully model the EEG? Would the network dynamics change significantly for more than 5,000 neurons, and would this have an impact on the simulated EEG? Lastly, in the case that a larger network would need to be considered, how could the EEG proxy integrate signal across points in space?

- line 448: It is useful to note that the parameter values for the proxies were fixed across different morphologies. This increases confidence that the EEG proxy will generalize well across different morphologies. One point, however, is that the same morphology is applied for the pyramidal cells and the interneurons in the full simulation. What effect could heterogeneity in neuronal morphologies have on the proxy comparison? Is there some way to show that this will be negligible?

- Fig 6: The results from this point are important but somewhat difficult to interpret. What do these distances mean in terms of typical EEG placements in a human experiment?

- Lastly, this proxy has clear potential to shed light on important signals recorded in EEG, for example sleep rhythms. Could this proxy already give some insight into oscillations recorded in the sleep EEG?

MINOR

- In general, the notation in the equations could use improvement. Several style guides caution against multiletter abbreviations for mathematical variables (cf. Physical Review Style and Notation Guide, Section C, Point 4), as it makes equations less clear.

- line 539: Are these "parameters" or "state variables"?

- line 562: What is the unit of the standard deviation?

- Fig 7D: What is the unit of "True values" and "Predictions"?

- line 606: This sentence is unclear. Is there a typo?

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: None

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Richard Gao

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc.. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, PLOS recommends that you deposit laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions, please see http://journals.plos.org/compbiol/s/submission-guidelines#loc-materials-and-methods

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1008893.r003

Decision Letter 1

Daniele Marinazzo

16 Mar 2021

Dear Dr Panzeri,

Thank you very much for submitting your manuscript "Computation of the electroencephalogram (EEG) from network models of point neurons" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic. Based on the reviews, we are likely to accept this manuscript for publication, providing that you modify the manuscript according to the review recommendations.

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Daniele Marinazzo

Deputy Editor

PLOS Computational Biology

***********************

A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately:

[LINK]

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: dear all,

many thanks to the authors for engaging so constructively with my comments. I read with great interest the additional experiments and insights they generated, as well as the extensive explanations the authors provided (e.g., on conductance vs. current based models and their implications near reversal potential, timing differences of currents vs. Vm, etc.), which I learned a lot from. As previously agreed upon, this set of experiments is satisfactory to me, and I believe it extends the study in many more directions than originally presented, and I hope the authors feel the same after the great efforts made following the first round of reviews. The additional text also provides better framing and more explicit acknowledgement of the limitations (and therefore future opportunities) of this work, and I agree that this is an excellent step towards capturing human EEG signals with biophysical modeling.

just one small note: the PSDs in figure 8 look a bit strange, in that none of them show clear oscillatory peaks except perhaps in the delta/theta range, while the aperiodic-adjusted power was apparently non-trivial (as the authors state, compared to results from human data in the literature). I'm wondering if there was potentially a mix-up with the PSDs plotted? If not, it is curious that the network then somehow filtered out the input frequencies (at least by visual inspection of the PSDs). This does not hinder acceptance in my view, but it could be good to discuss, especially as curious readers may wonder where those oscillatory inputs have gone.

again, excellent work and it was a pleasure to be a small part of this process

best,

richard gao

Reviewer #2: In their revised manuscript, the authors have fully addressed the concerns raised in the first round of review.

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: None

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Richard Gao

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc.. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, PLOS recommends that you deposit laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see http://journals.plos.org/ploscompbiol/s/submission-guidelines#loc-materials-and-methods

References:

Review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1008893.r005

Decision Letter 2

Daniele Marinazzo

18 Mar 2021

Dear Dr Panzeri,

We are pleased to inform you that your manuscript 'Computation of the electroencephalogram (EEG) from network models of point neurons' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Daniele Marinazzo

Deputy Editor

PLOS Computational Biology

***********************************************************

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1008893.r006

Acceptance letter

Daniele Marinazzo

29 Mar 2021

PCOMPBIOL-D-20-01969R2

Computation of the electroencephalogram (EEG) from network models of point neurons

Dear Dr Panzeri,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Katalin Szabo

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Associated Data


    Supplementary Materials

    S1 Fig. Performance of proxies for a heterogeneous population of pyramidal cells.

    In the same simulation, the “NMC L2/3 PY, clone 9” morphology was randomly assigned to half of the pyramidal-cell population and the “NMC L2/3 PY, clone 0” morphology to the other half. Colors used for proxies are the same as those used in Fig 4.

    (TIF)
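    For readers who want to reproduce this heterogeneous configuration, a minimal sketch of the half-and-half random morphology assignment is given below. It is not taken from the paper's code; the population size `n_pyramidal` and the random seed are hypothetical placeholders.

```python
# Minimal sketch (not the paper's code) of randomly assigning two pyramidal-cell
# morphologies to equal halves of the population, as described in S1 Fig.
import numpy as np

rng = np.random.default_rng(0)  # hypothetical seed
n_pyramidal = 4000              # hypothetical population size
morphologies = np.array(["NMC L2/3 PY, clone 9", "NMC L2/3 PY, clone 0"])

labels = np.repeat(morphologies, n_pyramidal // 2)  # half clone 9, half clone 0
rng.shuffle(labels)                                  # random assignment to cell indices
```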


    Attachment

    Submitted filename: response2reviewers_PlosComputBiol.docx

    Attachment

    Submitted filename: response2reviewers_PlosComputBiol.docx


