Abstract
The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. While models of asynchronous neurons can account for the observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically, we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects postspiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime yields realistic subthreshold variability (voltage variance ≃4–9 mV²) only when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations. We also show that, without synchrony, the neural variability averages out to zero for all scaling limits with vanishing synaptic weights, independent of any balanced state hypothesis. This result challenges the theoretical basis for mean-field theories of the asynchronous state.
Subject Areas: Biological Physics, Complex Systems, Interdisciplinary Physics
I. INTRODUCTION
A common and striking feature of cortical activity is the high degree of neuronal spiking variability [1]. This high variability is notably present in sensory cortex and motor cortex, as well as in regions with intermediate representations [2–5]. The prevalence of this variability has made it a major constraint for models of cortical networks. Cortical networks may operate in distinct regimes depending on species, cortical area, and brain states. In the asleep or anesthetized state, neurons tend to fire synchronously with strong correlations between the firing of distinct neurons [6–8]. In the awake state, although synchrony has been reported as well, stimulus drive, arousal, or attention tend to promote an irregular firing regime whereby neurons spike in a seemingly random manner, with decreased or little correlation [1,8,9]. This has led to the hypothesis that cortex primarily operates asynchronously [10–12]. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. That said, the asynchronous state hypothesis appears at odds with the high degree of observed spiking variability in cortex. Cortical neurons are thought to receive a large number of synaptic inputs (≃10⁴) [13]. Although the impact of these inputs may vary across synapses, the law of large numbers implies that variability should average out when integrated at the soma. In principle, this would lead to clock-like spiking responses, contrary to experimental observations [14].
A number of mechanisms have been proposed to explain how high spiking variability emerges in cortical networks [15]. The prevailing approach posits that excitatory and inhibitory inputs converge on cortical neurons in a balanced manner. In balanced models, the overall excitatory and inhibitory drives cancel each other so that transient imbalances in the drive can bring the neuron’s membrane voltage across the spike-initiation threshold. Such balanced models result in spiking statistics that match those found in the neocortex [16,17]. However, these statistics can emerge in distinct dynamical regimes depending on whether the balance between excitation and inhibition is tight or loose [18]. In tightly balanced networks, whereby the net neuronal drive is negligible compared to the antagonizing components, activity correlation is effectively zero, leading to a strictly asynchronous regime [19–21]. By contrast, in loosely balanced networks, the net neuronal drive remains of the same order as the antagonizing components, which allows for strong neuronal correlations during evoked activity, compatible with a synchronous regime [22–24].
While the high spiking variability is an important constraint for cortical network modeling, there are other biophysical signatures that may be employed. We now have access to the subthreshold membrane voltage fluctuations that underlie spikes in awake, behaving animals (see Fig. 1). Membrane voltage recordings reveal two main deviations from the asynchronous hypothesis: First, membrane voltage does not hover near the spiking threshold and is modulated by the synaptic drive; second, it exhibits state- or stimulus-dependent non-Gaussian fluctuation statistics with positive skewness [25–28]. In this work, we further argue that membrane voltage recordings reveal much larger voltage fluctuations than predicted by balanced cortical models [29,30].
FIG. 1.
Large trial-by-trial membrane voltage fluctuations. Membrane voltage responses are shown using whole-cell recordings in awake behaving primates for both fixation-alone trials (left) and visual stimulation trials (right). A drifting grating is presented for 1 s beginning at the arrow. Below the membrane voltage traces are records of horizontal and vertical eye movements, illustrating that the animal was fixating during the stimulus. Red and green traces indicate different trials under the same conditions. Adapted from Ref. [27].
How could such large subthreshold variations in membrane voltage emerge? One way that fluctuations could emerge, even for large numbers of input, is if there is synchrony in the driving inputs [31]. In practice, input synchrony is revealed by the presence of positive spiking correlations, which quantify the propensity of distinct synaptic inputs to coactivate. Measurements of spiking correlations between pairs of neurons vary across reports but have generally been shown to be weak [10–12]. That said, even weak correlations can have a large impact when the population of correlated inputs is large [32,33]. Furthermore, the existence of input synchrony, supported by weak but persistent spiking correlations, is consistent with at least two other experimental observations. First, intracellular recordings from pairs of neurons in both anesthetized and awake animals reveal a high degree of membrane voltage correlations [7,34,35]. Second, excitatory and inhibitory conductance inputs are highly correlated with each other within the same neuron [35,36]. These observations suggest that input synchrony could explain the observed level of subthreshold variability.
While our focus is on achieving realistic subthreshold variability, other challenges to asynchronous networks have been described. In particular, real neural networks exhibit distinct regimes of activity depending on the strength of their afferent drives. In that respect, Zerlaut et al. [37] showed that asynchronous networks can exhibit a spectrum of realistic regimes of activity if they have moderate recurrent connections and are driven by strong thalamic projections (see also Ref. [17]). Furthermore, it has been a challenge to identify the scaling rule that should apply to synaptic strengths for asynchrony to hold stably in idealized networks. Recently, Sanzeni, Histed, and Brunel [38] proposed that a realistic asynchronous regime is achieved for a particular large-coupling rule, whereby synaptic strengths scale in keeping with the logarithmic size of the network. Both studies consider balanced networks with conductance-based neuronal models, but neither focuses on the role of synchrony, consistent with the asynchronous state hypothesis. The asynchronous state hypothesis is theoretically attractive, because it represents a naturally stable regime of activity in infinite-size, balanced networks of current-based neuronal models [16,17,20,21]. Such neuronal models, however, neglect the voltage dependence of conductances, and it remains unclear whether the asynchronous regime is asymptotically stable for infinite-size, conductance-based network models.
Here, independent of the constraint of network stability, we ask whether biophysically relevant neuronal models can achieve the observed subthreshold variability under realistic levels of input synchrony. To answer this question, we derive exact analytical expressions for the stationary voltage variance of a single conductance-based neuron in response to synchronous shot-noise drives [39,40]. A benefit of shot-noise models compared to diffusion models is to allow for individual synaptic inputs to be temporally separated in distinct impulses, each corresponding to a transient positive conductance fluctuation [41–43]. We develop our shot-noise analysis for a variant of classically considered neuronal models. We call this variant the all-or-none-conductance-based model, for which synaptic activation occurs as an all-or-none process rather than as an exponentially relaxing process. To perform an exact treatment of these models, we develop original probabilistic techniques inspired by Marcus' work on shot-noise-driven dynamics [44,45]. To model shot-noise drives with synchrony, we develop a statistical framework based on the property of input exchangeability, which assumes that no synaptic inputs play a particular role. In this framework, we show that input drives with varying degrees of synchrony can be rigorously modeled via jump processes, while synchrony can be quantitatively related to measures of pairwise spiking correlations.
Our main results are biophysically interpretable formulas for the voltage mean and variance in the limit of instantaneous synapses. Crucially, these formulas explicitly depend on the input numbers, weights, and synchrony and hold without any form of diffusion approximation. This is in contrast with analytical treatments which elaborate on the diffusion and effective-time-constant approximations [37,38,46,47]. We leverage these exact, explicit formulas to determine under which synchrony conditions a neuron can achieve the experimentally observed subthreshold variability. For biophysically relevant synaptic numbers and weights, we find that achieving realistic variability is possible in response to a restricted number of large asynchronous connections, compatible with the dominance of thalamo-cortical projections in the input layers of the visual cortex. However, we find that achieving realistic variability in response to a large number of moderate cortical inputs, as in superficial cortical visual layers, necessitates nonzero input synchrony, in amounts that are consistent with the weak levels of spiking correlations measured in vivo.
In practice, persistent synchrony may spontaneously emerge in large but finite neural networks, as nonzero correlations are the hallmark of finite-dimensional interacting dynamics. The network structural features responsible for the magnitude of such correlations remain unclear, and we do not address this question here (see Refs. [48,49] for review). The persistence of synchrony is also problematic for theoretical approaches that consider networks in the infinite-size limit. Indeed, our analysis shows that, in the absence of synchrony and for all scalings of the synaptic weights, subthreshold variability must vanish in the limit of arbitrarily large numbers of synapses. This suggests that, independent of any balanced condition, the mean-field dynamics that emerge in infinite-size networks of conductance-based neurons will not exhibit Poisson-like spiking variability, at least in the absence of additional constraints on the network structure or on the biophysical properties of the neurons. In current-based neuronal models, however, variability is not dampened by a conductance-dependent effective time constant. These findings, therefore, challenge the theoretical basis for the asynchronous state in conductance-based neuronal networks.
Our exact analysis, as well as its biophysical interpretations, is possible only at the cost of several caveats: First, we neglect the impact of the spike-generating mechanism (and of the postspiking reset) in shaping the subthreshold variability. Second, we quantify synchrony under the assumption of input exchangeability, that is, for synapses having a typical strength as opposed to being heterogeneous. Third, we consider input drives that implement an instantaneous form of synchrony with temporally precise synaptic coactivations. Fourth, we do not consider slow temporal fluctuations in the mean synaptic drive. Fifth, and perhaps most concerning, we do not account for the stable emergence of a synchronous regime in network models. We argue in the discussion that all the above caveats but the last one can be addressed without impacting our findings. Addressing the last caveat remains an open problem.
For reference, we list in Table I the main notations used in this work. These notations utilize the subscripts $e$ and $i$ to refer to excitation and inhibition, respectively. The notation $e/i$ means that the subscript can be either $e$ or $i$. The notation $ei$ is used to emphasize that a quantity depends jointly on excitation and inhibition.
TABLE I.
Main notations.
$a_{e,1}$, $a_{i,1}$ | First-order synaptic efficacies
$a_{e,2}$, $a_{i,2}$ | Second-order synaptic efficacies
$\tilde{a}_{e,2}$, $\tilde{a}_{i,2}$ | Auxiliary second-order synaptic efficacies
$b$ | Rate of the driving Poisson process $N$
$b_e$, $b_i$ | Rate of the excitatory or inhibitory Poisson process $N_e$ or $N_i$
$C$ | Membrane capacitance
$a_{ei}$ | Cross-correlation synaptic efficacy
$\mathbb{C}[\cdot,\cdot]$ | Stationary covariance
$\mathbb{E}[\cdot]$ | Stationary expectation
$\mathbb{E}_{p_{ei}}[\cdot]$ | Expectation with respect to the joint distribution $p_{ei}$
$\mathbb{E}_{p_e}[\cdot]$, $\mathbb{E}_{p_i}[\cdot]$ | Expectation with respect to the marginal distribution $p_e$ or $p_i$
$\epsilon$ | Fast-conductance small parameter ($\epsilon = \tau_s/\tau$)
$G$ | Passive leak conductance
$g_e$, $g_i$ | Overall excitatory or inhibitory conductance
$h_e$, $h_i$ | Reduced excitatory or inhibitory conductance
$k_e$, $k_i$ | Number of coactivating excitatory or inhibitory synaptic inputs
$K_e$, $K_i$ | Total number of excitatory or inhibitory synaptic inputs
$N$ | Driving Poisson process with rate $b$
$N_e$, $N_i$ | Excitatory or inhibitory driving Poisson process with rate $b_e$ or $b_i$
$p_{ei}$ | Bivariate jump distribution of $(W_e, W_i)$
$p_e$, $p_i$ | Marginal jump distribution of $W_e$ or $W_i$
$p_{ei,k_ek_i}$ | Bivariate distribution for the numbers of coactivating synapses $(k_e, k_i)$
$p_{e,k}$, $p_{i,k}$ | Marginal synaptic count distribution
$r_e$, $r_i$ | Individual excitatory or inhibitory synaptic rate
$\rho_{ei}$ | Spiking correlation between excitatory and inhibitory inputs
$\rho_e$, $\rho_i$ | Spiking correlation within excitatory or inhibitory inputs
$\tau$ | Passive membrane time constant ($\tau = C/G$)
$\tau_s$ | Synaptic time constant
$\mathbb{V}[\cdot]$ | Stationary variance
$W_e$, $W_i$ | Excitatory or inhibitory random jumps
$V_e$, $V_i$ | Excitatory or inhibitory reversal potentials
$w_e$, $w_i$ | Typical value for excitatory or inhibitory synaptic weights
$x_{e,k}$ | Binary variable indicating the activation of excitatory synapse $k$
$x_{i,k}$ | Binary variable indicating the activation of inhibitory synapse $k$
$Z$ | Driving compound Poisson process with base rate $b$ and jump distribution $p_{ei}$
II. STOCHASTIC MODELING AND ANALYSIS
A. All-or-none-conductance-based neurons
We consider the subthreshold dynamics of an original neuronal model, which we call the all-or-none-conductance-based (AONCB) model. In this model, the membrane voltage $V$ obeys the first-order stochastic differential equation

$C\dot{V} = G(V_L - V) + g_e(t)(V_e - V) + g_i(t)(V_i - V) + I,$  (1)

where randomness arises from the stochastically activating excitatory and inhibitory conductances, respectively denoted by $g_e$ and $g_i$ [see Fig. 2(a)]. These conductances result from the action of $K_e$ excitatory and $K_i$ inhibitory synapses: $g_e(t) = \sum_{k=1}^{K_e} g_{e,k}(t)$ and $g_i(t) = \sum_{k=1}^{K_i} g_{i,k}(t)$. In the absence of synaptic inputs, i.e., when $g_e = g_i = 0$, and of external current $I$, the voltage exponentially relaxes toward its leak reversal potential $V_L$ with passive time constant $\tau = C/G$, where $C$ denotes the cell's membrane capacitance and $G$ denotes the cellular passive conductance [50]. In the presence of synaptic inputs, the transient synaptic currents $g_e(V_e - V)$ and $g_i(V_i - V)$ cause the membrane voltage to fluctuate. Conductance-based models account for the voltage dependence of synaptic currents via the driving forces $(V_e - V)$ and $(V_i - V)$, where $V_e$ and $V_i$ denote the excitatory and inhibitory reversal potentials, respectively. Without loss of generality, we assume in the following that $V_L = 0$ and that $V_i < V_L = 0 < V_e$.
FIG. 2.
All-or-none-conductance-based models. (a) Electrical diagram of a conductance-based model for which the neuronal voltage $V$ evolves in response to fluctuations of the excitatory and inhibitory conductances $g_e$ and $g_i$. (b) In all-or-none models, inputs delivered as Poisson processes transiently activate the excitatory and inhibitory conductances $g_e$ and $g_i$ during a finite, nonzero synaptic activation time $\tau_s > 0$.
We model the spiking activity of the upstream neurons as shot noise [39,40], which can be generically modeled as a $(K_e + K_i)$-dimensional stochastic point process [51,52]. Let us denote by $\{N_{e,k}(t)\}_{1\le k\le K_e}$ its excitatory component and by $\{N_{i,k}(t)\}_{1\le k\le K_i}$ its inhibitory component, where $t$ denotes time and $k$ is the neuron index. For each neuron $k$, the process $N_{e/i,k}(t)$ is specified as the counting process registering the spiking occurrences of neuron $k$ up to time $t$. In other words, $N_{e/i,k}(t) = \sum_n \mathbb{1}_{\{T_{e/i,k,n} \le t\}}$, where $\{T_{e/i,k,n}\}_n$ denotes the full sequence of spiking times of neuron $k$ and where $\mathbb{1}_A$ denotes the indicator function of the set $A$. Note that, by convention, we label spikes so that $T_{e/i,k,0} \le 0 < T_{e/i,k,1}$ for all neurons $k$. Given a point-process model for the upstream spiking activity, classical conductance-based models consider that a single input to a synapse causes an instantaneous increase of its conductance, followed by an exponential decay with typical timescale $\tau_s$. Here, we depart from this assumption and consider that the synaptic conductances operate all-or-none with a common activation time still referred to as $\tau_s$. Specifically, we assume that the dynamics of the conductance $g_{e/i,k}$ follows

$g_{e/i,k}(t) = \frac{C\,w_{e/i,k}}{\tau_s}\left(N_{e/i,k}(t) - N_{e/i,k}(t - \tau_s)\right),$  (2)

where $w_{e/i,k}$ is the dimensionless synaptic weight. The above equation prescribes that the $n$th spike delivery to synapse $k$ at time $T_{e/i,k,n}$ is followed by an instantaneous increase of that synapse's conductance by an amount $C w_{e/i,k}/\tau_s$ for a period $\tau_s$. Thus, the synaptic response prescribed by Eq. (2) is all-or-none as opposed to being graded as in classical conductance-based models. Moreover, just as in classical models, Eq. (2) allows synapses to multiactivate, thereby neglecting nonlinear synaptic saturation [see Fig. 2(b)].

To be complete, AONCB neurons must, in principle, include a spike-generating mechanism. A customary choice is the integrate-and-fire mechanism [53,54]: A neuron emits a spike whenever its voltage exceeds a threshold value $V_T$ and resets instantaneously to some value $V_R$ afterward. Such a mechanism impacts the neuronal subthreshold voltage dynamics via postspiking reset, which implements a nonlinear form of feedback. However, in this work, we focus on the variability that is generated by fluctuating, possibly synchronous, synaptic inputs. For this reason, we neglect the influence of the spiking reset in our analysis, and, actually, we ignore the spike-generating mechanism altogether. Finally, although our analysis of AONCB neurons applies to positive synaptic activation times $\tau_s > 0$, we discuss our results only in the limit of instantaneous synapses. This corresponds to taking $\tau_s \to 0$ while adopting the scaling $\propto 1/\tau_s$ of Eq. (2) in order to maintain the charge transfer induced by a synaptic event. We will see that this limiting process preserves the response variability of AONCB neurons.
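As an illustration of Eqs. (1) and (2), the following minimal Euler sketch simulates an AONCB neuron in Python; all parameter values are illustrative choices of ours, not those of the original simulations:

```python
import numpy as np

# Euler sketch of an AONCB neuron, Eqs. (1)-(2), with V_L = 0 and I = 0.
# Each synaptic activation contributes C*w/tau_s to the conductance for a
# duration tau_s, so that the transferred charge is independent of tau_s.
rng = np.random.default_rng(0)

tau, tau_s, dt = 15e-3, 2e-3, 1e-5        # time constants and Euler step (s)
Ve, Vi = 60.0, -10.0                      # reversal potentials, relative to rest (mV)
Ke, Ki, re, ri = 100, 25, 10.0, 10.0      # synapse numbers and spiking rates (Hz)
we, wi = 0.01, 0.04                       # dimensionless synaptic weights

n, win = 100_000, int(tau_s / dt)         # steps; activation window in steps
cnt_e = rng.poisson(Ke * re * dt, n)      # pooled presynaptic spikes per step
cnt_i = rng.poisson(Ki * ri * dt, n)
act_e = np.convolve(cnt_e, np.ones(win), mode="full")[:n]  # active in last tau_s
act_i = np.convolve(cnt_i, np.ones(win), mode="full")[:n]

V = np.zeros(n)
for t in range(n - 1):
    he = we * act_e[t] / tau_s            # g_e/C in units of 1/s, from Eq. (2)
    hi = wi * act_i[t] / tau_s
    V[t + 1] = V[t] + dt * (-V[t] / tau + he * (Ve - V[t]) + hi * (Vi - V[t]))

print(f"mean {V[n // 2:].mean():.2f} mV, variance {V[n // 2:].var():.2f} mV^2")
```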
B. Quantifying the synchrony of exchangeable synaptic inputs
Our goal here is to introduce a discrete model for synaptic inputs, whereby synchrony can be rigorously quantified. To this end, let us suppose that the neuron under consideration receives inputs from $K_e$ excitatory neurons and $K_i$ inhibitory neurons, chosen from arbitrarily large pools of excitatory and inhibitory neurons. Adopting a discrete-time representation with elementary bin size $\Delta t$, we denote by $x_{e,k,n}$ and $x_{i,k,n}$ in $\{0,1\}$ the input states within the $n$th bin. Our main simplifying assumption consists in modeling the excitatory inputs $\{x_{e,k,n}\}_{1\le k\le K_e}$ and the inhibitory inputs $\{x_{i,k,n}\}_{1\le k\le K_i}$ as separately exchangeable random variables that are distributed identically over $\{0,1\}^{K_e}$ and $\{0,1\}^{K_i}$, respectively, and independently across time. This warrants dropping the dependence on the time index $n$. By separately exchangeable, we mean that no subset of excitatory inputs or inhibitory inputs plays a distinct role so that, at all times, the respective distributions of $(x_{e,1},\ldots,x_{e,K_e})$ and $(x_{i,1},\ldots,x_{i,K_i})$ are independent of the input labeling. In other words, for all permutations $\sigma_e$ of $\{1,\ldots,K_e\}$ and $\sigma_i$ of $\{1,\ldots,K_i\}$, the joint distribution of $(x_{e,\sigma_e(1)},\ldots,x_{e,\sigma_e(K_e)})$ and $(x_{i,\sigma_i(1)},\ldots,x_{i,\sigma_i(K_i)})$ is identical to that of $(x_{e,1},\ldots,x_{e,K_e})$ and $(x_{i,1},\ldots,x_{i,K_i})$ [55,56]. By contrast with independent random spiking variables, exchangeable ones can exhibit a nonzero correlation structure. By symmetry, this structure is specified by three correlation coefficients:

$\rho_e = \frac{\mathbb{C}[x_{e,k}, x_{e,l}]}{\mathbb{V}[x_{e,k}]}, \quad \rho_i = \frac{\mathbb{C}[x_{i,k}, x_{i,l}]}{\mathbb{V}[x_{i,k}]}, \quad \rho_{ei} = \frac{\mathbb{C}[x_{e,k}, x_{i,l}]}{\sqrt{\mathbb{V}[x_{e,k}]\,\mathbb{V}[x_{i,l}]}}, \quad k \ne l,$

where $\mathbb{C}[\cdot,\cdot]$ and $\mathbb{V}[\cdot]$ denote the covariance and the variance of the binary variables $x_{e,k}$ and $x_{i,k}$, respectively.
Interestingly, a more explicit form for $\rho_e$, $\rho_i$, and $\rho_{ei}$ can be obtained in the limit of infinite-size pools. This follows from de Finetti's theorem [57], which states that the probability of observing a given input configuration for $K_e$ excitatory neurons and $K_i$ inhibitory neurons is given by

$P(x_{e,1},\ldots,x_{e,K_e}, x_{i,1},\ldots,x_{i,K_i}) = \int_0^1\!\!\int_0^1 \prod_{k=1}^{K_e} \theta_e^{x_{e,k}}(1-\theta_e)^{1-x_{e,k}} \prod_{l=1}^{K_i} \theta_i^{x_{i,l}}(1-\theta_i)^{1-x_{i,l}}\, \mathrm{d}F(\theta_e, \theta_i),$

where $F$ is the directing de Finetti measure, defined as a bivariate distribution over the unit square $[0,1] \times [0,1]$. In the equation above, the numbers $\theta_e$ and $\theta_i$ represent the (jointly fluctuating) probabilities that an excitatory neuron and an inhibitory neuron spike in a given time bin, respectively. The core message of de Finetti's theorem is that the spiking activity of neurons from infinite exchangeable pools is obtained as a mixture of conditionally independent binomial laws. This mixture is specified by the directing measure $F$, which fully parametrizes our synchronous input model. Independent spiking corresponds to choosing $F$ as a point-mass measure concentrated on some probabilities $(r_e\Delta t, r_i\Delta t)$, where $r_{e/i}$ denotes the individual spiking rate of a neuron: $F = \delta_{(r_e\Delta t,\, r_i\Delta t)}$ [see Fig. 3(a)]. By contrast, a dispersed directing measure corresponds to the existence of correlations among the inputs [see Fig. 3(b)]. Accordingly, we show in Appendix A that the spiking pairwise correlation takes the explicit form
$\rho_{e/i} = \frac{\mathbb{V}[\theta_{e/i}]}{\mathbb{E}[\theta_{e/i}]\left(1 - \mathbb{E}[\theta_{e/i}]\right)},$  (3)

whereas $\rho_{ei}$, the correlation between excitation and inhibition, is given by

$\rho_{ei} = \frac{\mathbb{C}[\theta_e, \theta_i]}{\sqrt{\mathbb{E}[\theta_e](1-\mathbb{E}[\theta_e])\,\mathbb{E}[\theta_i](1-\mathbb{E}[\theta_i])}}.$  (4)

In the above formulas, $\mathbb{E}[\cdot]$, $\mathbb{V}[\cdot]$, and $\mathbb{C}[\cdot,\cdot]$ denote expectation, variance, and covariance with respect to $F$, respectively. Note that these formulas show that nonzero correlations correspond to nonzero variances, as is always the case for dispersed distributions. Independence between excitation and inhibition, for which $\rho_{ei} = 0$, corresponds to directing measures with product form, i.e., $F = F_e \otimes F_i$, where $F_e$ and $F_i$ denote the marginal distributions of $\theta_e$ and $\theta_i$. Alternative forms of the directing measure generally lead to nonzero cross correlation $\rho_{ei}$, which necessarily satisfies $\rho_{ei} \le \sqrt{\rho_e \rho_i}$.
FIG. 3.
Parametrizing correlations via exchangeability. The activity of $K$ exchangeable synaptic inputs collected over consecutive time bins can be represented as a $\{0,1\}$-valued array $x_{k,n}$, where $x_{k,n} = 1$ if input $k$ activates in time bin $n$. Under assumptions of exchangeability, the input spiking correlation is entirely captured by the count statistics of how many inputs coactivate within a given time bin. In the limit $K \to \infty$, the distribution of the fraction of coactivating inputs coincides with the directing de Finetti measure, which we consider as a parametric choice in our approach. In the absence of correlation, synapses tend to activate in isolation, as in (a). In the presence of correlation, synapses tend to coactivate, yielding disproportionately large synaptic activation events, as in (b). Considering the associated cumulative counts specifies discrete-time jump processes that can be generalized to the continuous-time limit, i.e., for time bins of vanishing duration $\Delta t \to 0$.
In this exchangeable setting, a reasonable parametric choice for the marginals $F_e$ and $F_i$ is given by beta distributions $B(\alpha_e, \beta_e)$ and $B(\alpha_i, \beta_i)$, where $\alpha_{e/i}$ and $\beta_{e/i}$ denote shape parameters [58]. Practically, this choice is motivated by the ability of beta distributions to efficiently fit correlated spiking data generated by existing algorithms [59]. Formally, this choice is motivated by the fact that beta distributions are conjugate priors for the binomial likelihood functions, so that the resulting probabilistic models can be studied analytically [60–62]. For instance, for $\theta_e \sim B(\alpha_e, \beta_e)$, the probability that $k$ synapses among the $K_e$ inputs are jointly active within the same time bin follows the beta-binomial distribution

$P(k_e = k) = \binom{K_e}{k} \frac{B(\alpha_e + k,\, \beta_e + K_e - k)}{B(\alpha_e, \beta_e)},$  (5)

where $B(\cdot,\cdot)$ denotes the beta function. Accordingly, the mean number of active excitatory inputs is $\mathbb{E}[k_e] = K_e \alpha_e/(\alpha_e + \beta_e)$. Utilizing Eq. (3), we also find that $\rho_e = 1/(1 + \alpha_e + \beta_e)$. Note that the above results show that, by changing de Finetti's measure, one can modify not only the spiking correlation, but also the mean spiking rate.
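The de Finetti construction is straightforward to sample: draw $\theta$ from the directing measure, then draw inputs as conditionally independent Bernoulli variables. The sketch below (shape parameters are arbitrary choices of ours) checks the resulting pairwise correlation against $\rho = 1/(1+\alpha+\beta)$:

```python
import numpy as np

# Sample exchangeable binary inputs via de Finetti: theta ~ Beta(alpha, beta),
# then K conditionally independent Bernoulli(theta) draws per time bin.
rng = np.random.default_rng(1)
K, alpha, beta, bins = 200, 0.05, 5.0, 200_000

theta = rng.beta(alpha, beta, size=bins)         # directing-measure samples
x = rng.random((bins, K)) < theta[:, None]       # one row per time bin

p = x.mean()                                     # E[x] = alpha/(alpha + beta)
cov = (x[:, 0] & x[:, 1]).mean() - p * p         # covariance of two distinct inputs
print(f"empirical rho = {cov / (p * (1 - p)):.4f}, "
      f"theory = {1 / (1 + alpha + beta):.4f}")
```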
In the following, we exploit the above analytical results to illustrate that taking the continuous-time limit specifies synchronous input drives as compound Poisson processes [51,52]. To do so, we consider both excitation and inhibition, which in a discrete setting corresponds to considering bivariate probability distributions defined over $\{0,\ldots,K_e\}\times\{0,\ldots,K_i\}$. Ideally, these distributions should be such that the marginal distributions of the counts $k_e$ and $k_i$ are the beta-binomial distributions given by Eq. (5). Unfortunately, there does not seem to be a simple low-dimensional parametrization for such bivariate distributions, except in particular cases. To address this point, at least numerically, one can resort to a variety of methods including copulas [63,64]. For analytical calculations, we consider only two particular cases for which the marginals are given by beta-binomial distributions: (i) the case of zero correlation for which $\theta_e$ and $\theta_i$ are independent, i.e., $F = F_e \otimes F_i$, and (ii) the case of maximum positive correlation for which $\rho_{ei} = \sqrt{\rho_e \rho_i}$, i.e., $\theta_e = \theta_i$ with identical marginals $F_e = F_i$.
C. Synchronous synaptic drives as compound Poisson processes
Under the assumption of input exchangeability and given typical excitatory and inhibitory synaptic weights $w_e$ and $w_i$, the overall synaptic drive to a neuron is determined by $(k_e, k_i)$, the numbers of active excitatory and inhibitory inputs at each discrete time step. As AONCB dynamics unfolds in continuous time, we need to consider this discrete drive in the continuous-time limit as well, i.e., for vanishing time bins $\Delta t \to 0$. When $\Delta t \to 0$, we show in Appendix B that the overall synaptic drive specifies a compound Poisson process $Z$ with bivariate jumps $(W_e, W_i) = (w_e k_e, w_i k_i)$. Specifically, we have

$Z_t = \sum_{n=1}^{N_t} (W_{e,n}, W_{i,n}),$  (6)

where $(W_{e,n}, W_{i,n})$ are i.i.d. samples with bivariate distribution denoted by $p_{ei}$ and where the overall driving Poisson process $N_t$ registers the number of synaptic events without multiple counts (see Fig. 4). By synaptic events, we mean those times for which at least one excitatory synapse or one inhibitory synapse activates. We say that $N_t$ registers these events without multiple counts as it counts one event independent of the number of possibly coactivating synapses. Similarly, we denote by $N_{e,t}$ and $N_{i,t}$ the counting processes registering synaptic excitatory events and synaptic inhibitory events alone, respectively. These processes $N_{e,t}$ and $N_{i,t}$ are Poisson processes that are correlated in the presence of synchrony, as both $N_{e,t}$ and $N_{i,t}$ may register the same event. Note that this implies that $N_t \le N_{e,t} + N_{i,t}$. More generally, denoting by $b_e$ and $b_i$ the rates of $N_{e,t}$ and $N_{i,t}$, respectively, the presence of synchrony implies that $b_e \le K_e r_e$ and $b_i \le K_i r_i$, where $r_{e/i}$ is the typical activation rate of a single synapse.
FIG. 4.
Limit compound Poisson process with excitation and inhibition. (a) Under the assumption of partial exchangeability, synaptic inputs can be distinguished only by the fact that they are either excitatory or inhibitory, which is marked by being colored in red or blue, respectively, in the discrete representation of correlated synaptic inputs with bin size $\Delta t$. Accordingly, considering excitation and inhibition separately specifies two associated input-count processes and two cumulative counting processes. For nonzero spiking correlation, these processes are themselves correlated, as captured by the joint distribution of excitatory and inhibitory input counts $(k_e, k_i)$ (center) and by the joint distribution of excitatory and inhibitory jumps $(W_e, W_i)$ (right). (b) The input count distribution is a finite-size approximation of the bivariate directing de Finetti measure $F$, which we treat as a parameter, as usual. For a smaller bin size $\Delta t$, this distribution concentrates in (0,0), as an increasing proportion of time bins does not register any synaptic events, be they excitatory or inhibitory. In the presence of correlation, however, the jump distribution conditioned on registering an event remains correlated but also dispersed. (c) In the limit $\Delta t \to 0$, the input-count distribution is concentrated in (0,0), consistent with the fact that the average number of synaptic activations remains constant while the number of bins diverges. By contrast, the distribution of synaptic event sizes conditioned to be distinct from (0,0) converges toward a well-defined distribution: $p_{ei}$. This distribution characterizes the jumps of a bivariate compound Poisson process, obtained as the limit of the cumulative count process when considering $\Delta t \to 0$.
For simplicity, we explain how to obtain such limit compound Poisson processes by reasoning on the excitatory inputs alone. To this end, let us denote the marginal jump distribution of $W_e$ as $p_e$. Given a fixed typical synaptic weight $w_e$, the jumps are quantized as $W_e = w_e k_e$, with $k_e$ distributed on $\{1, \ldots, K_e\}$, as by convention jumps cannot have zero size. These jumps are naturally defined in the discrete setting, i.e., with $\Delta t > 0$, where their distribution is given via conditioning as $P(k_e = k \,|\, k_e > 0)$. For beta-distributed marginals, we show in Appendix B that considering $\Delta t \to 0$ yields the jump distribution

$p_{e,k} = \frac{1}{\psi(\beta_e + K_e) - \psi(\beta_e)} \binom{K_e}{k} B(k,\, \beta_e + K_e - k), \quad 0 < k \le K_e,$  (7)

where $\psi$ denotes the digamma function. In the following, we explicitly index discrete count distributions, e.g., $p_{e,k}$, to distinguish them from the corresponding jump distributions, i.e., $p_e$. Equation (7) follows from observing that the probability to find a spike within a bin is $\mathbb{E}[\theta_e] = \alpha_e/(\alpha_e + \beta_e)$, so that $\alpha_e \to 0$ for fixed excitatory spiking rate $r_e$ when $\Delta t \to 0$. As a result, the continuous-time spiking correlation is $\rho_e = 1/(1 + \beta_e)$, so that we can interpret $\beta_e$ as a parameter controlling correlations. More generally, we show in Appendix C that the limit correlation depends only on the count distribution via

$\rho_e = \frac{\mathbb{E}_{p_e}[k_e(k_e - 1)]}{(K_e - 1)\,\mathbb{E}_{p_e}[k_e]},$  (8)

where $\mathbb{E}_{p_e}[\cdot]$ denotes expectations with respect to $p_e$. This shows that zero spiking correlation corresponds to single synaptic activations, i.e., to an input drive modeled as a Poisson process, as opposed to a compound Poisson process. For Poisson-process models, the overall rate of synaptic events is necessarily equal to the sum of the individual spiking rates: $b_e = K_e r_e$. This is no longer the case in the presence of synchronous spiking, when nonzero input correlation arises from coincidental synaptic activations. Indeed, as the population spiking rate is conserved when $\Delta t \to 0$, the rate of excitatory synaptic events governing $N_{e,t}$ satisfies $b_e \mathbb{E}_{p_e}[k_e] = K_e r_e$, so that

$b_e = \frac{K_e r_e}{\mathbb{E}_{p_e}[k_e]} = r_e \beta_e \left[\psi(\beta_e + K_e) - \psi(\beta_e)\right].$  (9)

Let us reiterate for clarity that, if $k$ synapses activate synchronously, this counts as only one synaptic event, which can come in variable size $1 \le k \le K_e$. Consistently, we have, in general, $b_e \le K_e r_e$. When $\beta_e \to 0$, we have perfect synchrony with $\rho_e \to 1$ and $b_e \to r_e$, whereas the independent spiking regime with $\rho_e = 0$ is attained for $\beta_e \to \infty$, for which we have $b_e \to K_e r_e$.
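For reference, the following Python sketch tabulates the limit jump distribution (7) and evaluates Eqs. (8) and (9), checking the digamma normalization numerically (parameter values are arbitrary choices of ours):

```python
import numpy as np
from scipy.special import betaln, comb, digamma

# Tabulate the limit jump distribution of Eq. (7), p_k ∝ C(K,k) B(k, K-k+beta),
# normalized by psi(K+beta) - psi(beta), then evaluate Eqs. (8) and (9).
def jump_distribution(K, beta):
    k = np.arange(1, K + 1)
    logp = np.log(comb(K, k)) + betaln(k, K - k + beta)
    return k, np.exp(logp) / (digamma(K + beta) - digamma(beta))

K, beta, r_e = 100, 5.0, 10.0            # hypothetical input number, shape, rate (Hz)
k, p = jump_distribution(K, beta)
assert abs(p.sum() - 1.0) < 1e-8         # normalization via the digamma identity

mean_k = (k * p).sum()
rho_e = (k * (k - 1) * p).sum() / ((K - 1) * mean_k)   # Eq. (8); here ~1/(1 + beta)
b_e = K * r_e / mean_k                                  # Eq. (9)
print(f"E[k] = {mean_k:.2f}, rho_e = {rho_e:.4f}, "
      f"b_e = {b_e:.1f} Hz <= K*r_e = {K * r_e:.0f} Hz")
```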
It is possible to generalize the above construction to mixed excitation and inhibition, but a closed-form treatment is possible only for special cases. For the independent case (i), in the limit $\Delta t \to 0$, jumps are either excitatory alone or inhibitory alone; i.e., the jump distribution has support on $\{(w_e k_e, 0)\} \cup \{(0, w_i k_i)\}$. Accordingly, we show in Appendix D that

$p_{ei,k_e k_i} = \frac{b_e}{b_e + b_i}\, p_{e,k_e}\, \delta_{k_i,0} + \frac{b_i}{b_e + b_i}\, p_{i,k_i}\, \delta_{k_e,0},$  (10)

where $p_e$ and $p_i$ are specified in Eqs. (7) and (9) by the parameters $(\beta_e, K_e)$ and $(\beta_i, K_i)$, respectively. This shows that, as expected, in the absence of synchrony the driving compound Poisson process with bidimensional jumps is obtained as the direct sum of two independent compound Poisson processes. In particular, the driving processes are such that $N_t = N_{e,t} + N_{i,t}$, with rates satisfying $b = b_e + b_i$. By contrast, for the maximally correlated case (ii), we show in Appendix D that the jumps are given as $(W_e, W_i) = (w_e k_e, w_i k_i)$, with $(k_e, k_i)$ distributed on $\{0,\ldots,K_e\}\times\{0,\ldots,K_i\} \setminus \{(0,0)\}$ [see Figs. 4(b) and 4(c)] according to

$p_{ei,k_e k_i} = \frac{1}{\psi(\beta + K_e + K_i) - \psi(\beta)} \binom{K_e}{k_e} \binom{K_i}{k_i} B(k_e + k_i,\, \beta + K_e + K_i - k_e - k_i).$  (11)

Incidentally, the driving Poisson process $N_t$ has a rate determined by adapting Eq. (9):

$b = r \beta \left[\psi(\beta + K_e + K_i) - \psi(\beta)\right],$

where $r = r_e = r_i$ denotes the common individual spiking rate, for which one can check that $b \le b_e + b_i$.
All the closed-form results so far have been derived for synchrony parametrizations in terms of beta distributions. There are other possible parametrizations, and these would lead to different count distributions but without known closed forms. To address this limitation in the following, all our results hold for arbitrary distributions $p_{ei}$ of the jump sizes $(W_e, W_i)$ on the positive orthant $\mathbb{R}^+ \times \mathbb{R}^+$. In particular, our results are given in terms of expectations with respect to $p_{ei}$, still denoted by $\mathbb{E}_{p_{ei}}[\cdot]$. Nonzero correlation between excitation and inhibition corresponds to those choices of $p_{ei}$ for which $W_e W_i > 0$ with nonzero probability, which indicates the presence of synchronous excitatory and inhibitory inputs. Note that this modeling setting restricts nonzero correlations to be positive, which is an inherent limitation of our synchrony-based approach. When considering an arbitrary $p_{ei}$, the main caveat is understanding how such a distribution may correspond to given input numbers $K_e$ and $K_i$ and spiking correlations $\rho_e$, $\rho_i$, and $\rho_{ei}$. For this reason, we always consider that $k_e$ and $k_i$ follow beta-distribution-derived marginal count distributions when discussing the roles of $\rho_e$, $\rho_i$, and $\rho_{ei}$ in shaping the voltage response of a neuron. In that respect, we show in Appendix C that the coefficient $\rho_{ei}$ can always be deduced from the knowledge of a discrete count distribution $p_{ei,k_e k_i}$ on $\{0,\ldots,K_e\}\times\{0,\ldots,K_i\}$ via

$\rho_{ei} = \frac{\mathbb{E}_{p_{ei}}[k_e k_i]}{\sqrt{K_e K_i\, \mathbb{E}_{p_{ei}}[k_e]\, \mathbb{E}_{p_{ei}}[k_i]}},$

where the expectations are with respect to $p_{ei}$.
D. Instantaneous synapses and Marcus integrals
We are now in a position to formulate the mathematical problem at stake within the framework developed by Marcus to study shot-noise-driven systems [44,45]. Our goal is quantifying the subthreshold variability of an AONCB neuron subjected to synchronous inputs. Mathematically, this amounts to computing the first two moments of the stationary process $V$ solving the following stochastic dynamics:

$\tau \dot{V} = -V + h_e(t)(V_e - V) + h_i(t)(V_i - V),$  (12)

where the reversal potentials $V_i < 0 < V_e$ are constants and where the reduced conductances $h_e = g_e/G$ and $h_i = g_i/G$ follow stochastic processes defined in terms of a compound Poisson process with bivariate jumps. Formally, the compound Poisson process $Z$ is specified by $b$, the rate of its governing Poisson process $N$, and by the joint distribution $p_{ei}$ of its jumps $(W_e, W_i)$. Each point of the Poisson process represents a synaptic activation time $T_n$, where $n$ is in $\mathbb{Z}$ with the convention that $T_0 \le 0 < T_1$. At all these times, the synaptic input sizes are drawn as i.i.d. random variables $(W_{e,n}, W_{i,n})$ in $\mathbb{R}^+ \times \mathbb{R}^+$ with probability distribution $p_{ei}$.

At this point, it is important to observe that the driving process $Z$ is distinct from the conductance process $(h_e, h_i)$. The latter process is formally defined for AONCB neurons as

$h_{e/i}(t) = \sum_n \frac{W_{e/i,n}}{\epsilon}\, \mathbb{1}_{\{T_n \le t < T_n + \epsilon\tau\}},$

where the dimensionless parameter $\epsilon = \tau_s/\tau$ is the ratio of the duration of synaptic activation relative to the passive membrane time constant. Note that the amplitude of $h_{e/i}$ scales in inverse proportion to $\epsilon$ in order to maintain the overall charge transfer during synaptic events of varying durations. Such a scaling ensures that the voltage response of AONCB neurons has finite, nonzero variability for small or vanishing synaptic time constants, i.e., for $\epsilon \to 0$ (see Fig. 5). The simplifying limit of instantaneous synapses is obtained for $\epsilon \to 0$, which corresponds to infinitely fast synaptic activation. By virtue of its construction, the conductance process becomes a shot noise in the limit $\epsilon \to 0$, which can be formally identified with $\tau \dot{Z}_t$. This is consistent with the definition of shot-noise processes as temporal derivatives of compound Poisson processes, i.e., as collections of randomly weighted Dirac-delta masses.
FIG. 5.
Limit of instantaneous synapses. The voltage trace and the empirical voltage distribution are only marginally altered by taking the limit $\epsilon \to 0$ for short synaptic time constants, as shown by comparing a moderate value of $\epsilon$ in (a) with a smaller value in (b). In both (a) and (b), we consider the same compound Poisson-process drive, and the resulting fluctuating voltage $V$ is simulated via a standard Euler discretization scheme. The corresponding empirical conductance and voltage distributions are shown on the right. The latter voltage distribution asymptotically determines the stationary moments of $V$.
Because of their high degree of idealization, shot-noise models are often amenable to exact stochastic analysis, albeit with some caveats. For equations akin to Eq. (12) in the limit of instantaneous synapses, such a caveat follows from the multiplicative nature of the conductance shot noise. In principle, one might expect to solve Eq. (12) with shot-noise drive via stochastic calculus, as for diffusion-based drives. This would involve interpreting the stochastic integral representations of solutions in terms of Stratonovich representations [65]. However, Stratonovich calculus is not well defined for shot-noise drives [66]. To remedy this point, Marcus has proposed to study stochastic equations subjected to regularized versions of shot noises, whose regularity is controlled by a nonnegative parameter $\epsilon$ [44,45]. For $\epsilon > 0$, the dynamical equations admit classical solutions, whereas the shot-noise-driven regime is recovered in the limit $\epsilon \to 0$. The hope is to be able to characterize analytically the shot-noise-driven solution, or at least some of its moments, by considering regular solutions in the limit $\epsilon \to 0$. We choose to refer to the control parameter as $\epsilon$ by design in the above. This is because AONCB models represent Marcus-type regularizations that are amenable to analysis in the limit of instantaneous synapses, i.e., when $\epsilon = \tau_s/\tau \to 0$, for which the conductance process converges toward a form of shot noise.
The Marcus interpretation of stochastic integration has practical implications for numerical simulations with shot noise [41]. According to this interpretation, shot-noise-driven solutions are conceived as limits of regularized solutions for which standard numerical schemes apply. Correspondingly, shot-noise-driven solutions to Eq. (12) can be simulated via a limit numerical scheme. We derive such a limit scheme in Appendix E. Specifically, we show that the voltage of shot-noise-driven AONCB neurons exponentially relaxes toward the leak reversal potential $V_L = 0$, except when subjected to synaptic impulses at times $T_n$. At these times, the voltage updates discontinuously according to $V(T_n) \to V(T_n) + J_n$, where the jumps $J_n$ are given in Appendix E via the Marcus rule

$J_n = \left(\frac{W_{e,n} V_e + W_{i,n} V_i}{W_{e,n} + W_{i,n}} - V(T_n)\right)\left(1 - e^{-(W_{e,n} + W_{i,n})}\right).$  (13)

Observe that the above Marcus rule directly implies that no jump can cause the voltage to exit $(V_i, V_e)$, the allowed range of variation for $V$. Moreover, note that this rule specifies an exact event-driven simulation scheme given knowledge of the synaptic activation times $T_n$ and sizes $(W_{e,n}, W_{i,n})$ [67]. We adopt the above Marcus-type numerical scheme in all the simulations that involve instantaneous synapses.
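In Python, the event-driven scheme takes only a few lines; the drive below is a toy compound Poisson process of our own choosing, intended only to illustrate the update rule (13):

```python
import numpy as np

# Event-driven simulation with instantaneous synapses: exponential relaxation
# toward V_L = 0 between events and Marcus jumps, Eq. (13), at event times.
# The jump statistics below are a toy synchronous drive of our own choosing.
rng = np.random.default_rng(2)

tau, Ve, Vi = 15e-3, 60.0, -10.0         # membrane constant (s), reversals (mV)
b, T = 500.0, 200.0                      # synaptic event rate (Hz), horizon (s)
waits = rng.exponential(1.0 / b, rng.poisson(b * T))    # interevent intervals

V, samples = 0.0, []
for dt in waits:
    V *= np.exp(-dt / tau)                              # exact leak relaxation
    We, Wi = 0.001 * rng.poisson(5), 0.004 * rng.poisson(2)  # aggregate jumps
    if We + Wi > 0:
        target = (We * Ve + Wi * Vi) / (We + Wi)        # weighted reversal target
        V += (target - V) * (1.0 - np.exp(-(We + Wi)))  # Marcus rule, Eq. (13)
    samples.append(V)

v = np.asarray(samples[len(samples) // 10:])            # discard transient
print(f"stationary mean {v.mean():.2f} mV, variance {v.var():.2f} mV^2")
```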
E. Moment calculations
When driven by stationary compound Poisson processes, AONCB neurons exhibit ergodic voltage dynamics. As a result, the typical voltage state, obtained by sampling the voltage at a random time, is captured by a unique stationary distribution. Our main analytical results, which we give here, consist in exact formulas for the first two voltage moments with respect to that stationary distribution. Specifically, we derive the stationary mean voltage Eq. (14) in Appendix F and the stationary voltage variance Eq. (16) in Appendix G. These results are obtained by a probabilistic treatment exploiting the properties of compound Poisson processes within Marcus' framework. This treatment yields compact, interpretable formulas in the limit of instantaneous synapses $\epsilon \to 0$. Readers who are interested in the method of derivation for these results are encouraged to go over the calculations presented in Appendixes F–L.
In the limit of instantaneous synapses, $\epsilon \to 0$, we find that the stationary voltage mean is

$\mathbb{E}[V] = \frac{a_{e,1} V_e + a_{i,1} V_i}{1 + a_{e,1} + a_{i,1}},$  (14)

where we define the first-order synaptic efficacies as

$a_{e/i,1} = b\tau\, \mathbb{E}_{p_{ei}}\!\left[\frac{W_{e/i}}{W_e + W_i}\left(1 - e^{-(W_e + W_i)}\right)\right].$  (15)

Note that $\mathbb{E}_{p_{ei}}[\cdot]$ refers to the expectation with respect to the jump distribution $p_{ei}$ in Eq. (15), whereas $\mathbb{E}[\cdot]$ refers to the stationary expectation in Eq. (14). Equation (14) has the same form as for deterministic dynamics with constant conductances, in the sense that the mean voltage is a weighted sum of the reversal potentials $V_L = 0$, $V_e$, and $V_i$. One can check that, for such deterministic dynamics with constant reduced conductances, the synaptic efficacies involved in the stationary mean simply read $a_{e/i,1} = h_{e/i}$. Thus, the impact of synaptic variability, and, in particular, of synchrony, entirely lies in the definition of the efficacies in Eq. (15). In the absence of synchrony, one can check that accounting for the shot-noise nature of the synaptic conductances leads to synaptic efficacies under exponential form: $a_{e/i,1} = K_{e/i} r_{e/i} \tau (1 - e^{-w_{e/i}})$. In turn, accounting for input synchrony leads to synaptic efficacies expressed as expectations of these exponential forms in Eq. (15), consistent with the stochastic nature of the conductance jumps $(W_e, W_i)$. Our other main result, the formula for the stationary voltage variance, involves synaptic efficacies of similar form. Specifically, we find that
$\mathbb{V}[V] = \frac{a_{e,2}(V_e - \mathbb{E}[V])^2 + 2 a_{ei}(V_e - \mathbb{E}[V])(V_i - \mathbb{E}[V]) + a_{i,2}(V_i - \mathbb{E}[V])^2}{1 + \tilde{a}_{e,2} + \tilde{a}_{i,2}},$  (16)

where we define the second-order synaptic efficacies as

$a_{e/i,2} = \frac{b\tau}{2}\, \mathbb{E}_{p_{ei}}\!\left[\left(\frac{W_{e/i}}{W_e + W_i}\right)^2 \left(1 - e^{-(W_e + W_i)}\right)^2\right].$  (17)

Equation (16) also prominently features the auxiliary second-order efficacies defined by $\tilde{a}_{e/i,2} = a_{e/i,1} - a_{e/i,2} - a_{ei}$. Owing to their prominent role, we also mention their explicit form:

$\tilde{a}_{e/i,2} = \frac{b\tau}{2}\, \mathbb{E}_{p_{ei}}\!\left[\frac{W_{e/i}}{W_e + W_i}\left(1 - e^{-2(W_e + W_i)}\right)\right].$  (18)

The other quantity of interest featuring in Eq. (16) is the cross-correlation coefficient

$a_{ei} = \frac{b\tau}{2}\, \mathbb{E}_{p_{ei}}\!\left[\frac{W_e W_i}{(W_e + W_i)^2}\left(1 - e^{-(W_e + W_i)}\right)^2\right],$  (19)

which entirely captures the (non-negative) correlation between excitatory and inhibitory inputs and shall be seen as an efficacy as well.
In conclusion, let us stress that, for AONCB models, establishing the above exact expressions does not require any approximation other than taking the limit of instantaneous synapses. In particular, we neither resort to any diffusion approximation [37,38] nor invoke the effective-time-constant approximation [41–43]. We give in Appendix L an alternative factorized form for Eq. (16) to justify the non-negativity of expression (16). In Fig. 6, we illustrate the excellent agreement of the analytically derived expressions (14) and (16) with numerical estimates obtained via Monte Carlo simulations of the AONCB dynamics for various input synchrony conditions. Discussing and interpreting quantitatively Eqs. (14) and (16) within a biophysically relevant context is the main focus of the remainder of this work.
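The moment formulas are straightforward to evaluate numerically for any jump distribution. The sketch below estimates the efficacies (15) and (17)–(19) by Monte Carlo averaging over jump samples and returns the moments (14) and (16); the jump statistics used here are a toy choice of ours:

```python
import numpy as np

# Monte Carlo evaluation of the stationary moments: estimate the efficacies of
# Eqs. (15) and (17)-(19) over i.i.d. jump samples (W_e, W_i) ~ p_ei, then
# apply Eqs. (14) and (16).
def stationary_moments(We, Wi, b, tau, Ve, Vi):
    S = We + Wi
    J = 1.0 - np.exp(-S)
    fe, fi = We / S, Wi / S                        # fractional jump weights
    ae1, ai1 = b * tau * np.mean(fe * J), b * tau * np.mean(fi * J)
    ae2 = 0.5 * b * tau * np.mean(fe**2 * J**2)    # second-order efficacies
    ai2 = 0.5 * b * tau * np.mean(fi**2 * J**2)
    aei = 0.5 * b * tau * np.mean(fe * fi * J**2)  # cross-correlation efficacy
    m = (ae1 * Ve + ai1 * Vi) / (1 + ae1 + ai1)    # Eq. (14)
    den = 1 + (ae1 - ae2 - aei) + (ai1 - ai2 - aei)  # 1 + auxiliary efficacies
    var = (ae2 * (Ve - m)**2 + 2 * aei * (Ve - m) * (Vi - m)
           + ai2 * (Vi - m)**2) / den              # Eq. (16)
    return m, var

rng = np.random.default_rng(3)
We = 0.001 * (1 + rng.poisson(4, 1_000_000))       # toy synchronous jump sample
Wi = 0.004 * (1 + rng.poisson(1, 1_000_000))
m, var = stationary_moments(We, Wi, b=500.0, tau=15e-3, Ve=60.0, Vi=-10.0)
print(f"mean {m:.2f} mV, variance {var:.2f} mV^2")
```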
FIG. 6.
Comparison of simulation and theory. (a) Examples of voltage traces obtained via Monte Carlo simulations of an AONCB neuron for various types of synchrony-based input correlations: uncorrelated (uncorr, yellow), within correlation only, $\rho_e, \rho_i > 0$ with $\rho_{ei} = 0$ (within corr, cyan), and within and across correlation, $\rho_{ei} > 0$ (across corr, magenta). (b) Comparison of the analytically derived expressions (14) and (16) with numerical estimates obtained via Monte Carlo simulations for the synchrony conditions considered in (a).
III. COMPARISON WITH EXPERIMENTAL DATA
A. Experimental measurements and parameter estimations
Cortical activity typically exhibits a high degree of variability in response to identical stimuli [68,69], with individual neuronal spiking exhibiting Poissonian characteristics [3,70]. Such variability is striking, because neurons are thought to typically receive a large number (≃10⁴) of synaptic contacts [13]. As a result, in the absence of correlations, neuronal variability should average out, leading to quasideterministic neuronal voltage dynamics [71]. To explain how variability seemingly defeats averaging in large neural networks, it has been proposed that neurons operate in a special regime, whereby inhibitory and excitatory drive nearly cancel one another [16,17,19–21]. In such balanced networks, the voltage fluctuations become the main determinant of the dynamics, yielding a Poisson-like spiking activity [16,17,19–21]. However, depending upon the tightness of this balance, networks can exhibit distinct dynamical regimes with varying degrees of synchrony [18].
In the following, we exploit the analytical framework of AONCB neurons to argue that the asynchronous picture predicts voltage fluctuations an order of magnitude smaller than experimental observations [1,26–28]. Such observations indicate that the neuronal membrane voltage exhibits typical variance values of ≃4–9 mV². Then, we claim that achieving such variability requires input synchrony within the setting of AONCB neurons. Experimental estimates of the spiking correlations are typically thought of as weak, with coefficients ranging from 0.01 to 0.04 [10–12]. Such weak values do not warrant the neglect of correlations owing to the typically high number of synaptic connections. Actually, if $K$ denotes the number of inputs, all assumed to play exchangeable roles, an empirical criterion to decide whether a correlation coefficient $\rho$ is weak is that $\rho K \ll 1$ [32,33]. Assuming the lower estimate of $\rho \simeq 0.01$, this criterion requires $K \le 100$ inputs, which is well below the typical number of excitatory synapses for cortical neurons. In the following, we consider only the response of AONCB neurons to synchronous drive with biophysically realistic spiking correlations ($0.01 \le \rho \le 0.04$).
Two key parameters for our argument are the excitatory and inhibitory synaptic weights, denoted by $w_e$ and $w_i$, respectively. Typical values for these weights can be estimated via biophysical considerations within the framework of AONCB neurons. In order to develop these considerations, we assume $V_e = 60$ mV and $V_i = -10$ mV for the reversal potentials, measured relative to a zero resting potential, and $\tau = 15$ ms for the passive membrane time constant. Given these assumptions, we set the upper range of excitatory synaptic weights so that, when delivered to a neuron close to its resting state, unitary excitatory inputs cause peak membrane fluctuations of ≃0.5 mV at the soma, attained after a peak time of ≃5 ms. Such fluctuations correspond to typically large in vivo synaptic activations of thalamo-cortical projections in rats [72]. Although activations of similar amplitude have been reported for cortico-cortical connections [73,74], recent large-scale in vivo studies have revealed that cortico-cortical excitatory connections are typically much weaker [75,76]. At the same time, these studies have shown that inhibitory synaptic conductances are about fourfold larger than excitatory ones, but with similar timescales. Fitting these values within the framework of AONCB neurons reveals that the largest possible synaptic inputs correspond to dimensionless weights $w_e \simeq 0.01$ and $w_i \simeq 0.04$. Following Refs. [75,76], we consider that the comparatively moderate cortico-cortical recurrent connections are an order of magnitude weaker than typical thalamo-cortical projections, i.e., $w_e \simeq 0.001$ and $w_i \simeq 0.004$. Such a range is in keeping with estimates used in Ref. [38].
B. The effective-time-constant approximation holds in the asynchronous regime
Let us consider that neuronal inputs have zero (or negligible) correlation structure, which corresponds to assuming that all synapses are driven by independent Poisson processes. Incidentally, excitation and inhibition act independently. Within the framework of AONCB neurons, this latter assumption corresponds to choosing a joint jump distribution of the form

$p_{ei}(W_e, W_i) = \frac{b_e}{b_e + b_i}\, p_e(W_e)\, \delta(W_i) + \frac{b_i}{b_e + b_i}\, \delta(W_e)\, p_i(W_i),$

where $\delta$ denotes the Dirac delta function, so that $W_e W_i = 0$ with probability one. Moreover, $b_e$ and $b_i$ are independently specified via Eq. (9), and the overall rate of synaptic events is purely additive: $b = b_e + b_i$. Consequently, the cross-correlation efficacy $a_{ei}$ in Eq. (16) vanishes, and the dimensionless efficacies simplify to

$a_{e/i,1} = b_{e/i}\tau\, \mathbb{E}\!\left[1 - e^{-W_{e/i}}\right], \quad a_{e/i,2} = \frac{b_{e/i}\tau}{2}\, \mathbb{E}\!\left[\left(1 - e^{-W_{e/i}}\right)^2\right].$

Further assuming that individual excitatory and inhibitory synapses act independently leads to considering that $W_e = w_e$ and $W_i = w_i$ depict the sizes of individual synaptic inputs, as opposed to aggregate events. This corresponds to taking $\beta_e, \beta_i \to \infty$ in our parametric model based on beta distributions. Then, as intuition suggests, the overall rates of excitation and inhibition activation are recovered as $b_e = K_e r_e$ and $b_i = K_i r_i$, where $r_e$ and $r_i$ are the individual spiking rates.

Individual synaptic weights are small in the sense that $w_e, w_i \ll 1$, which warrants neglecting exponential corrections for the evaluation of the synaptic efficacies, at least in the absence of synchrony-based correlations. Accordingly, we have

$a_{e,1} \simeq K_e r_e \tau w_e, \quad a_{e,2} \simeq \frac{K_e r_e \tau w_e^2}{2},$

as well as symmetric expressions for the inhibitory efficacies. Plugging these values into Eq. (16) yields the classical mean-field estimate for the stationary variance:

$\mathbb{V}[V] \simeq \frac{K_e r_e \tau w_e^2 (V_e - \mathbb{E}[V])^2 + K_i r_i \tau w_i^2 (V_i - \mathbb{E}[V])^2}{2\left(1 + K_e r_e \tau w_e + K_i r_i \tau w_i\right)},$
which is exactly the same expression as that derived via the diffusion and effective-time-constant approximations in Refs. [46,47]. However, observe that the only approximation we made in obtaining the above expression is to neglect exponential corrections due to the relative weakness of biophysically relevant synaptic weights, which we hereafter refer to as the small-weight approximation.
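To give a sense of the exponential corrections neglected in this small-weight approximation, one can compare the exact factor $1 - e^{-w}$ with its linearization $w$ over the weight ranges discussed below (a two-line check of our own):

```python
import numpy as np

# Size of the exponential corrections dropped by the small-weight approximation:
# compare the exact factor 1 - exp(-w) with its linearization w.
for w in (0.001, 0.004, 0.01, 0.04):
    exact = 1 - np.exp(-w)
    print(f"w = {w}: relative error of linearization {(w - exact) / exact:.2%}")
```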
C. Asynchronous inputs yield exceedingly small neural variability
In Fig. 7, we represent the stationary mean and variance as functions of the excitatory and inhibitory input spiking rates $r_e$ and $r_i$, but for distinct values of the synaptic weights $w_e$ and $w_i$. In Fig. 7(a), we consider synaptic weights as large as biophysically admissible based on recent in vivo studies [75,76], i.e., $w_e = 0.01$ and $w_i = 0.04$. By contrast, in Fig. 7(b), we consider moderate synaptic weights $w_e = 0.001$ and $w_i = 0.004$, which yield somatic postsynaptic deflections of typical amplitudes. In both cases, we consider input numbers $K_e$ and $K_i$ such that the mean voltage covers the same biophysical range of values as $r_e$ and $r_i$ vary between 0 and 50 Hz. Given a zero resting potential, we set this biophysical range to be bounded as typically observed experimentally in electrophysiological recordings. These conditions correspond to holding the aggregate weights $K_e w_e$ and $K_i w_i$ constant across both conditions, so that the mean responses coincide.

This implies that the AONCB neurons under consideration do not reach the high-conductance regime, for which the passive conductance can be neglected, i.e., $a_{e,1} + a_{i,1} \gg 1$ [77]. Away from the high-conductance regime, the variance magnitude is controlled by the denominator of Eq. (16). Accordingly, the variance in both cases is primarily dependent on the excitatory rate $r_e$, since the effective excitatory driving force $(V_e - \mathbb{E}[V])$ dominates the effective inhibitory driving force $(\mathbb{E}[V] - V_i)$. This is because the neuronal voltage typically sits close to the inhibitory reversal potential $V_i$ but far from the excitatory reversal potential $V_e$. For instance, when close to rest, the effective driving force is severalfold larger for excitation than for inhibition. Importantly, the magnitude of the variance is distinct for moderate synapses and for large synapses. This is because, for constant aggregate weights, the variance scales in proportion to the individual weights, as $K_{e/i} r_{e/i} w_{e/i}^2 = (K_{e/i} w_{e/i})\, r_{e/i} w_{e/i}$. Thus, the ratio of variances for large and moderate synapses scales in keeping with the ratio of the weights away from the high-conductance regime, and the variance decreases by one order of magnitude from large weights in Fig. 7(a) to moderate weights in Fig. 7(b).
FIG. 7.
Voltage mean and variance in the absence of input correlations. Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by $K_e$ and $K_i$ synapses with large weights $w_e = 0.01$ and $w_i = 0.04$. Column (b) depicts the stationary subthreshold response of an AONCB neuron driven by $K_e$ and $K_i$ synapses with moderate weights $w_e = 0.001$ and $w_i = 0.004$. For matched aggregate weights $K_e w_e$ and $K_i w_i$, the mean response is identical in (a) and (b). By contrast, the variance is at least an order of magnitude smaller than that experimentally observed (4–9 mV²) for moderate weights, as shown in (b). Reaching the lower range of realistic neural variability requires driving the cell via large weights, as shown in (a).
The above numerical analysis reveals that achieving realistic levels of subthreshold variability over a biophysical mean range of variation requires AONCB neurons to be exclusively driven by large synaptic weights. This is confirmed by considering the voltage mean and variance in Fig. 8 as functions of the number of excitatory inputs $K_e$ and of the synaptic weight $w_e$ for a given level of inhibition. We choose this level of inhibition to be set by moderate synapses with $w_i = 0.004$ in Fig. 8(a) and by large synapses with $w_i = 0.04$ in Fig. 8(b). As expected, in the absence of input correlations, the voltage mean depends only on the product $K_e w_e$, which yields a similar mean range of variations for $K_e$ varying up to 2000 in Fig. 8(a) and up to 200 in Fig. 8(b). Thus, it is possible to achieve the same range of mean variations as with moderate synaptic weights by using a smaller number of synapses with larger weights. By contrast, the voltage variance achieves realistic levels only for large synaptic weights in both conditions, with $w_e \simeq 0.01$ for the moderate inhibitory background synapses in Fig. 8(a) as well as for the large inhibitory background synapses in Fig. 8(b).
FIG. 8.
Dependence on the number of inputs and the synaptic weights in the absence of correlations. Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by a varying number $K_e$ of excitatory synapses with varying weight $w_e$ at fixed rate, with a background inhibitory drive mediated by moderate weights $w_i = 0.004$. Column (b) depicts the same as column (a) but for a background inhibitory drive mediated by large weights $w_i = 0.04$. For both conditions, achieving a realistic level of variance, i.e., $\mathbb{V}[V] \simeq 4$–9 mV², while ensuring a biophysically relevant mean range of variation is possible only for large weights: $w_e \simeq 0.01$ for moderate inhibitory weights in (a) as well as for large inhibitory weights in (b).
D. Including input correlations yields realistic subthreshold variability
Without synchrony, achieving the experimentally observed variability necessitates an excitatory drive mediated via synaptic weights $w_e \simeq 0.01$, which corresponds to the upper bound of the biophysically admissible range and is in agreement with numerical results presented in Ref. [38]. Albeit possible, this is unrealistic given the wide distribution of amplitudes observed experimentally, whereby the vast majority of synaptic events are small to moderate, at least for cortico-cortical connections [75,76]. In principle, one can remedy this issue by allowing for the synchronous activation of, say, $k$ synapses with moderate weight $w_e$, as it amounts to the activation of a single synapse with large weight $k w_e$. A weaker assumption that yields a similar increase in neural variability is to ask for synapses to only tend to synchronize probabilistically, which amounts to requiring the number of coactivating synapses $k_e$ to be a random variable with some distribution mass on $k_e > 1$. This exactly amounts to modeling the input drive via a jump process, as presented in Sec. II, with a jump distribution that probabilistically captures this degree of input synchrony. In turn, this distribution corresponds to a precise input correlation via Eq. (8).
We quantify the impact of nonzero correlations in Fig. 9, where we consider the cases of moderate weights, $w_e = 0.001$ and $w_i = 0.004$, and large weights, $w_e = 0.01$ and $w_i = 0.04$, as in Fig. 7, but now for positive spiking correlations $\rho_e, \rho_i > 0$. Specifically, we consider an AONCB neuron subjected to two independent beta-binomial-derived compound Poisson-process drives with rates $b_e$ and $b_i$, respectively. These rates $b_e$ and $b_i$ are obtained via Eq. (9) by setting $\beta_e$ and $\beta_i$ to match the desired correlations for given input numbers $K_e$ and $K_i$ and spiking rates $r_e$ and $r_i$. This ensures that the mean numbers of synaptic activations $b_e \mathbb{E}[k_e] = K_e r_e$ and $b_i \mathbb{E}[k_i] = K_i r_i$ remain constant when compared with Fig. 7. As a result, the mean response of the AONCB neuron is essentially left unchanged by the presence of correlations, with a virtually identical biophysical range of variations. This is because, for weak correlations, the aggregate weights still satisfy $W_e + W_i \ll 1$ with probability close to one. Then, in the absence of cross-correlation, i.e., $\rho_{ei} = 0$, we still have

$a_{e,1} \simeq b_e \tau\, \mathbb{E}_{p_e}[W_e] = K_e r_e \tau w_e,$

as well as $a_{i,1} \simeq K_i r_i \tau w_i$ by symmetry. However, for both moderate and large synaptic weights, the voltage variance now exhibits slightly larger magnitudes than observed experimentally. This is because we show in Appendix M that, in the small-weight approximation,

$a_{e,2} \simeq \left(1 + \rho_e(K_e - 1)\right) \frac{K_e r_e \tau w_e^2}{2},$

where we recognize $K_e r_e \tau w_e^2/2$ as the second-order efficacy in the absence of correlations from Fig. 7. A similar statement holds for $a_{i,2}$. This shows that correlations increase neural variability whenever $\rho_e(K_e - 1) \gtrsim 1$ or $\rho_i(K_i - 1) \gtrsim 1$, which coincides with our previously given criterion to assess the relative weakness of correlations. Accordingly, when excitation and inhibition act independently, i.e., $\rho_{ei} = 0$, we find that the increase in variability due to input synchrony satisfies

$\Delta_{\rho}\mathbb{V}[V] \simeq \frac{\rho_e(K_e - 1)\, K_e r_e \tau w_e^2 (V_e - \mathbb{E}[V])^2 + \rho_i(K_i - 1)\, K_i r_i \tau w_i^2 (V_i - \mathbb{E}[V])^2}{2\left(1 + a_{e,1} + a_{i,1}\right)}.$  (20)

The above relation follows from the fact that the small-weight approximation for $\mathbb{E}[V]$ is independent of correlations and from neglecting the exponential corrections due to the nonzero size of the synaptic weights. The above formula remains valid as long as the correlations $\rho_e$ and $\rho_i$ are weak enough so that the aggregate weights satisfy $W_e + W_i \ll 1$ with probability close to one. To inspect the relevance of exponential corrections, we estimate in Appendix N the error incurred by neglecting them. Focusing on the case of excitatory inputs, we find that, for weak correlation coefficients, neglecting exponential corrections incurs less than a 3% error as long as the number of inputs $K_e$ does not become too large, with an admissible number that is larger for the moderate synaptic weight $w_e = 0.001$ than for the large synaptic weight $w_e = 0.01$.
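The criterion $\rho K \gtrsim 1$ is easy to appreciate numerically; the following snippet (values are ours) tabulates the variance amplification factor $1 + \rho(K - 1)$ appearing in the small-weight expression for the second-order efficacy:

```python
# Variance amplification factor 1 + rho*(K - 1) for the second-order efficacy
# in the small-weight approximation (assumed values for K and rho).
for K in (100, 1_000, 10_000):
    for rho in (0.01, 0.03):
        print(f"K = {K:>6}, rho = {rho:.2f}: amplification {1 + rho * (K - 1):7.1f}x")
```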
FIG. 9.
Voltage mean and variance in the presence of excitatory and inhibitory input correlations but without correlation across excitation and inhibition, i.e., $\rho_e, \rho_i > 0$ and $\rho_{ei} = 0$. Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by $K_e$ and $K_i$ synapses with large weights $w_e = 0.01$ and $w_i = 0.04$. Column (b) depicts the stationary subthreshold response of an AONCB neuron driven by $K_e$ and $K_i$ synapses with moderate dimensionless weights $w_e = 0.001$ and $w_i = 0.004$. For matched aggregate weights, the mean response is identical in (a) and (b). By contrast with the case of no correlation in Fig. 7, the variance achieves levels similar to those experimentally observed (4–9 mV²) for moderate weights, as shown in (b), but slightly larger levels for large weights, as shown in (a).
E. Including correlations between excitation and inhibition reduces subthreshold variability
The voltage variance estimated for realistic excitatory and inhibitory correlations exceeds the typical levels measured in vivo, i.e., 4–9 mV², for large synaptic weights. The inclusion of correlations between excitation and inhibition, i.e., ρ_ei > 0, can reduce the voltage variance to more realistic levels. We confirm this point in Fig. 10, where we consider the same cases of moderate and large synaptic weights as in Fig. 9 but now with positive cross-correlation. Positive cross-correlation between excitation and inhibition only marginally impacts the mean voltage response. This is due to the fact that exponential corrections become slightly more relevant, as the presence of cross-correlation leads to larger aggregate weights, with excitatory and inhibitory jumps possibly being jointly positive. By contrast with this marginal impact on the mean response, the voltage variance is significantly reduced when excitation and inhibition are correlated. This is in keeping with the intuition that the net effect of such cross-correlation is to cancel excitatory and inhibitory synaptic inputs with one another before they can cause voltage fluctuations. The amount by which the voltage variance is reduced can be quantified in the small-weight approximation. In this approximation, we show in Appendix M that the efficacy capturing the impact of cross-correlations simplifies to
Using the above simplified expression and invoking the fact that the small-weight approximation for the mean voltage is independent of correlations, we show that the variance decreases by the amount
(21)
Despite the above reduction in variance, we also show in Appendix M that positive input correlations always cause an overall increase of neural variability:
Note that the reduction of variability due to cross-correlation crucially depends on the instantaneous nature of correlations between excitation and inhibition. To see this, observe that the Marcus rule Eq. (13) specifies instantaneous jumps via a weighted average of the excitatory and inhibitory reversal potentials, which represent extreme values for voltage updates. Thus, perfectly synchronous excitation and inhibition update the voltage toward an intermediate value rather than extreme ones, leading to smaller jumps on average. Such an effect can vanish or even reverse when synchrony breaks down, e.g., when inhibition substantially lags behind excitation.
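For concreteness, the sketch below implements an intermediate-value jump update of this type, obtained by integrating a joint all-or-none conductance impulse in the instantaneous-synapse limit (as in Appendix E). The functional form shown, with its exponential saturation and conductance-weighted average of the reversal potentials, follows from that integration, but the variable names and parameter values are our own assumptions.

```python
import numpy as np

def marcus_like_jump(v, w_e, w_i, E_e=60.0, E_i=-10.0):
    """Voltage jump for simultaneous excitatory/inhibitory activation.

    Voltages are measured relative to the leak reversal potential, and
    w_e, w_i are dimensionless aggregate synaptic weights. The update
    relaxes v toward the weight-averaged reversal potential by a
    saturating factor 1 - exp(-(w_e + w_i)).
    """
    w = w_e + w_i
    if w == 0.0:
        return v
    v_target = (w_e * E_e + w_i * E_i) / w   # intermediate value, not an extreme
    return v + (1.0 - np.exp(-w)) * (v_target - v)

# Joint activation jumps toward an intermediate value, so the deflection
# is smaller than for the corresponding excitation acting alone:
print(marcus_like_jump(0.0, 0.01, 0.0))    # excitation only
print(marcus_like_jump(0.0, 0.01, 0.004))  # synchronous excitation/inhibition
```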
FIG. 10.
Voltage mean and variance in the presence of excitatory and inhibitory input correlations and with correlation across excitation and inhibition: ρ_ei > 0. Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by excitatory and inhibitory synapses with large weights. Column (b) depicts the stationary subthreshold response of an AONCB neuron driven by excitatory and inhibitory synapses with moderate dimensionless weights. For small synaptic weights, the mean responses in (a) and (b) are identical. Compared with the case of no cross-correlation in Fig. 9, the variance is reduced to a biophysical range similar to that experimentally observed (4–9 mV²) for moderate weights, as shown in (b), as well as for large weights, as shown in (a).
F. Asynchronous scaling limits require fixed-size synaptic weights
Our analysis reveals that correlations must significantly impact the voltage variability whenever the number of inputs satisfies ρ_e K_e ≥ 1 or ρ_i K_i ≥ 1. Spiking correlations are typically measured in vivo to be larger than 0.01. Therefore, synchrony must shape the response of neurons that are driven by more than 100 active inputs, which is presumably allowed by the typically high number of synaptic contacts (≃10⁴) in cortex [13]. In practice, we find that synchrony can explain the relatively high level of neural variability observed in subthreshold neuronal responses. Beyond these practical findings, we predict that input synchrony also has significant theoretical implications with respect to modeling spiking networks. Analytically tractable models for cortical activity are generally obtained by considering spiking networks in the infinite-size limit. Such infinite-size networks are tractable because the neurons they comprise interact only via population averages, erasing any role for nonzero correlation structure. Distinct mean-field models assume that synaptic weights vanish according to distinct scalings with respect to the number of synapses K, i.e., w → 0 as K → ∞. In particular, classical mean-field limits consider the scaling w ∝ 1/K, balanced mean-field limits consider the scaling w ∝ 1/√K, and strong-coupling limits consider the scaling w ∝ 1/ln K.
Our analysis of AONCB neurons shows that neglecting synchrony-based correlations is incompatible with the maintenance of neural variability in the infinite-size limit. Indeed, Eq. (20) shows that, for any scaling with w → 0 as K → ∞ and zero correlations, as for all the mean-field limits mentioned above, we have
Thus, in the absence of correlations and independent of the synaptic weight scaling, the subthreshold voltage variance of AONCB neurons must vanish in the limit of arbitrarily large numbers of synapses. We expect such decay of the voltage variability to be characteristic of conductance-based models in the absence of input correlations. Indeed, dimensional analysis suggests that voltage variances for both current-based and conductance-based models are generically obtained via normalization by the reciprocal of the membrane time constant. However, by contrast with current-based models, the reciprocal of the membrane time constant for conductance-based models involves contributions from synaptic conductances. Thus, to ensure nonzero asymptotic variability, this denominator scaling must be balanced by the natural scaling of the Poissonian input drives. In the absence of input correlations, this is possible only for fixed-size weights, which is incompatible with any vanishing-weight scaling assumption.
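This dimensional argument can be made concrete with a caricature of the variance formula that keeps only the Poissonian numerator and a conductance-loaded effective rate in the denominator; the functional form below and all constants are schematic placeholders, not the exact efficacies of Eq. (16). Every vanishing-weight scaling then drives the variance to zero, while fixed weights do not:

```python
import numpy as np

tau, r = 15e-3, 10.0   # membrane time constant (s), per-input rate (Hz); placeholders

def caricature_variance(K, w):
    # Poissonian numerator K * w**2 * r, normalized by an effective inverse
    # time constant 1/tau + K * w * r that includes synaptic conductances.
    return K * w**2 * r / (1.0 / tau + K * w * r)

for K in (1e2, 1e4, 1e6):
    for name, w in (("classical 1/K", 1.0 / K),
                    ("balanced 1/sqrt(K)", K ** -0.5),
                    ("strong coupling 1/ln(K)", 1.0 / np.log(K)),
                    ("fixed w", 0.01)):
        print(f"K={K:.0e}  {name:23s}  var ~ {caricature_variance(K, w):.2e}")
# Only the fixed-weight column saturates to a nonzero value; the three
# vanishing-weight scalings all decay, the logarithmic one most slowly.
```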
G. Synchrony allows for variability-preserving scaling limits with vanishing weights
Infinite-size networks with fixed-size synaptic weights are problematic because they restrict modeled neurons to operating in the high-conductance regime, whereby the intrinsic conductance properties of the cell play no role. Such a regime is biophysically unrealistic, as it implies that the cell would respond to perturbations infinitely fast. We propose to address this issue by considering a new type of variability-preserving limit model, obtained under the classical scaling w ∝ 1/K but in the presence of synchrony-based correlations. For simplicity, let us consider our correlated input model with excitation alone in the limit of an arbitrarily large number of inputs. In this limit, the small-weight approximation Eq. (20) suggests that the classical scaling of the aggregate synaptic weight yields a nonzero variance contribution whenever correlations are nonzero. It turns out that this choice can be shown to be valid without resorting to any approximation. Indeed, under the classical scaling assumption, we show in Appendix O that the discrete jump distribution weakly converges to a continuous density in the sense that
(22)
The above density has infinite mass owing to its diverging behavior at zero and is referred to as a degenerate beta distribution. In spite of its degenerate nature, it is known that densities of the above form define well-posed processes, the so-called beta processes, which have been studied extensively in the field of nonparametric Bayesian inference [61,62]. These beta processes generalize our compound Poisson process drives insofar as they allow for a countable infinity of jumps to occur within a finite time window. This is a natural requirement to impose when considering an infinite pool of synchronous synaptic inputs, the overwhelming majority of which have nearly zero amplitude.
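The degenerate character of this limit density is easy to probe numerically. Assuming, for illustration, a density of the degenerate beta form ν(w) ∝ w⁻¹(1 − w)^(β−1) on (0, 1), with a placeholder shape parameter β, the total mass diverges as the lower cutoff shrinks, while the moments that enter the synaptic efficacies remain finite:

```python
import numpy as np

beta_shape = 30.0                      # placeholder shape parameter
w = np.logspace(-12, 0, 400_000)       # log-spaced grid on (1e-12, 1]

def trapezoid(y, x):
    # Simple trapezoidal quadrature on a nonuniform grid.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

nu = w ** -1.0 * (1.0 - w) ** (beta_shape - 1.0)   # degenerate beta density
for k in (0, 1, 2):
    print(f"integral of w^{k} * nu(w) over (1e-12, 1): "
          f"{trapezoid(w**k * nu, w):.3e}")
# The k = 0 mass grows without bound as the lower cutoff shrinks (infinite
# total mass), whereas the k >= 1 moments entering the efficacies are finite.
```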
The above arguments show that one can define a generalized class of synchronous input models that can serve as the drive of AONCB neurons as well. Such generalizations are obtained as limits of compound Poisson processes and are specified via their Lévy-Khintchine measures, which formalize the role of the jump distribution [78,79]. Our results naturally extend to this generalized class. Concretely, for excitation alone, our results extend by replacing all expectations with respect to the jump distribution by integrals with respect to the corresponding Lévy-Khintchine measure. One can easily check that these expectations, which feature prominently in the definition of the various synaptic efficacies, all remain finite for Lévy-Khintchine measures. In particular, the voltage mean and variance of AONCB neurons remain finite with
Thus, considering the classical scaling limit preserves nonzero subthreshold variability in the infinite-size limit as long as the limit measure puts mass away from zero, i.e., for nonzero spiking correlations. Furthermore, we show in Appendix O that voltage variability consistently vanishes in the absence of spiking correlations, for which the limit measure concentrates at zero.
IV. DISCUSSION
A. Synchrony modeling
We have presented a parametric representation of the neuronal drives resulting from a finite number of asynchronous or (weakly) synchronous synaptic inputs. Several parametric statistical models have been proposed for generating correlated spiking activities in a discrete setting [59,80–82]. Such models have been used to analyze the activity of neural populations via Bayesian inference methods [83–85], as well as maximum entropy methods [86,87]. Our approach is not to simulate or analyze complex neural dependencies but rather to derive from first principles the synchronous input models that could drive conductance-based neuronal models. This approach primarily relies on extending discrete-time correlated spiking models akin to that of Ref. [59] to the continuous-time setting. To do so, the main tenet of our approach is to realize that input synchrony and spiking correlation represent equivalent measures under the assumption of input exchangeability.
Input exchangeability posits that the driving inputs form a subset of an arbitrarily large pool of exchangeable random variables [55,56]. In particular, this implies that the main determinant of the neuronal drive is the number of active inputs, as opposed to the magnitude of these synaptic inputs. Then, the de Finetti theorem [57] states that the probability of observing a given input configuration can be represented in the discrete setting under an integral form [see Eq. (3)] involving a directing probability measure. Intuitively, this directing measure represents the probability distribution of the fraction of coactivating inputs at any discrete time. Our approach identifies the directing measure as a free parameter that captures input synchrony. The more dispersed the directing measure, the more synchronous the inputs, as previously noted in Refs. [88,89]. Our work elaborates on this observation to develop computationally tractable statistical models for synchronous spiking in the continuous-time limit, i.e., for vanishing discrete time steps.
We derive our results using a discrete-time directing measure chosen as a beta distribution, whose two parameters can be related to the individual spiking rate and the spiking correlation. For this specific choice of distribution, we are able to construct statistical models of the correlated spiking activity as generalized beta-binomial processes [60], which play an important role in statistical Bayesian inference [61,62]. This construction allows us to fully parametrize the synchronous activity of a finite number of inputs via the jump distribution of a compound Poisson process, which depends explicitly on the spiking correlation. Being continuously indexed in time, stationary compound Poisson processes can naturally serve as the drive to biophysically relevant neuronal models. The idea of utilizing compound Poisson processes to model input synchrony was originally proposed in Refs. [90–92], but without constructing these processes as limits of discrete spiking models and without providing explicit functional forms for their jump distributions. More generally, our synchrony modeling can be interpreted as a limit case of the formalism proposed in Refs. [93,94] to model correlated spiking activity via multidimensional Poisson processes.
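The correspondence between the beta parameters and the spiking statistics can be checked by direct simulation. In the sketch below (parameter values are illustrative), drawing a per-bin activation probability from a beta distribution and then conditionally independent spikes reproduces the pairwise correlation 1/(1 + α + β), a standard property of beta-binomial models:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, K, n_bins = 0.02, 1.98, 50, 200_000   # illustrative values

theta = rng.beta(alpha, beta, size=n_bins)          # directing variable per bin
spikes = rng.random((n_bins, K)) < theta[:, None]   # exchangeable 0/1 inputs

# Empirical pairwise spike correlation, averaged over distinct input pairs.
c = np.corrcoef(spikes.T.astype(float))
empirical = c[np.triu_indices(K, k=1)].mean()
print(empirical, 1.0 / (1.0 + alpha + beta))        # both close to 1/3
```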
B. Moment analysis
We analytically characterize the subthreshold variability of a tractable conductance-based neuronal model, the AONCB neurons, when driven by synchronous synaptic inputs. The analytical characterization of a neuron’s voltage fluctuations has been the focus of intense research [46,47,95–97]. These attempts have considered neuronal models that already incorporate some diffusion scaling hypotheses [98,99], formally obtained by assuming an infinite number of synaptic inputs. The primary benefit of these diffusion approximations is that one can treat the corresponding Fokker-Planck equations to quantify neuronal variability in conductance-based integrate-and-fire models while also including the effect of postspiking reset [37,38]. In practice, subthreshold variability is often estimated in the effective-time-constant approximation, while neglecting the multiplicative noise contributions due to voltage-dependent membrane fluctuations [46,95,96], although an exact treatment is also possible without this simplifying assumption [38]. By contrast, the analysis of conductance-based models has resisted exact treatments when driven by shot noise, as for compound Poisson input processes, rather than by Gaussian white noise, as in the diffusion approximation [41–43].
The exact treatment of shot-noise-driven neuronal dynamics is primarily hindered by the limitations of Itô-Stratonovich integrals [65,100] in capturing the effects of point-process-based noise sources, even without including a reset mechanism. These limitations were originally identified by Marcus, who proposed to approach the problem via a new type of stochastic equation [44,45]. The key to the Marcus equation is to define shot-noise-driven dynamics as limits of regularized, well-behaved approximations of that shot noise, for which classical calculus applies [66]. In practice, these approximations are canonically obtained as the solutions of shot-noise-driven Langevin equations with a finite relaxation timescale, and shot noise is formally recovered in the limit of vanishing relaxation timescale. Our assertion here is that all-or-none conductances implement such a form of shot-noise regularization, for which a natural limiting process can be defined when synapses operate instantaneously. The main difference with the canonical Marcus approach is that our regularization is all-or-none, substituting each Dirac delta impulse with a finite steplike impulse of fixed duration and matching magnitude, thereby introducing a synaptic timescale but without any relaxation mechanism.
The above assertion is the basis for introducing AONCB neurons and is supported by our ability to obtain exact formulas for the first two moments of their stationary voltage dynamics [see Eqs. (14) and (16)]. For positive synaptic activation times, these moments can be expressed in terms of synaptic efficacies that take exact but rather intricate integral forms. Fortunately, these efficacies drastically simplify in the instantaneous synapse limit, for which the canonical shot-noise drive is recovered. The resulting formulas mirror those obtained in the diffusion and effective-time-constant approximations [46,47], except that they involve synaptic efficacies whose expressions are original in three ways [see Eqs. (15), (G4), (G7), and (G8)]: First, independent of input synchrony, these efficacies all have exponential forms and saturate in the limit of large synaptic weights. Such saturation is a general characteristic of shot-noise-driven, continuously relaxing systems [101–103]. Second, these efficacies are defined as expectations with respect to the jump distribution of the driving compound Poisson process [see Eq. (11) and Appendix B]. A nonzero dispersion of this jump distribution, indicating that synaptic activation is truly modeled via random aggregate weights, is the hallmark of input synchrony [91,92]. Third, these efficacies involve the overall rate of synaptic events [see Eq. (12)], which also depends on input synchrony. Such dependence can be naturally understood within the framework of Palm calculus [104], a form of calculus specially developed for stationary point processes.
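The saturation of these exponential-form efficacies can be visualized with a generic Monte Carlo stand-in. The sketch below estimates E[1 − e^(−W)] for a random aggregate weight W, a simplified surrogate for the efficacies of Eqs. (15) and (G4)–(G8), whose exact forms are more elaborate; the gamma weight distribution and all values are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def stand_in_efficacy(mean_w, dispersion=1.0, n=100_000):
    # Gamma-distributed aggregate weights with prescribed mean; larger
    # dispersion mimics more synchronous (heavier-tailed) jumps.
    shape = 1.0 / dispersion
    W = rng.gamma(shape, mean_w / shape, size=n)
    return np.mean(1.0 - np.exp(-W))

for mean_w in (0.01, 0.1, 1.0, 10.0):
    print(f"mean weight {mean_w:5.2f} -> efficacy {stand_in_efficacy(mean_w):.3f}")
# The estimate grows linearly for small weights but saturates toward 1
# for large weights, instead of growing without bound.
```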
C. Biophysical relevance
Our analysis allows us to investigate quantitatively how subthreshold variability depends on the numbers and strengths of the synaptic contacts. This approach requires that we infer synaptic weights from the typical peak time and peak amplitude of the somatic membrane fluctuations caused by postsynaptic potentials [72,75,76]. Within our modeling framework, these weights are dimensionless quantities that we estimate by fitting the AONCB neuronal response to a single all-or-none synaptic activation at rest. For biophysically relevant parameters, this yields typically small synaptic weights, much smaller than one. These small values warrant adopting the small-weight approximation, for which expressions (14) and (16) simplify.
In the small-weight approximation, the mean voltage becomes independent of input synchrony, whereas the simplified voltage variance Eq. (20) depends on input synchrony only via the spiking correlation coefficients ρ_e, ρ_i, and ρ_ei, as opposed to depending on a full jump distribution. Spike-count correlations have been experimentally shown to be weak in cortical circuits [10–12], and, for this reason, most theoretical approaches have argued for asynchronous activity [17,105–109]. A putative role for synchrony in neural computations remains a matter of debate [110–112]. In modeled networks, although the tight balance regime implies asynchronous activity [19–21], the loosely balanced regime is compatible with the establishment of strong neuronal correlations [22–24]. When distributed over large networks, weak correlations can still give rise to precise synchrony, once information is pooled from a large enough number of synaptic inputs [32,33]. In this view, and assuming that distinct inputs play comparable roles, correlations measure the propensity of distinct synaptic inputs impinging on a neuron to coactivate, which represents a clear form of synchrony. Our analysis shows that considering synchrony in amounts consistent with the levels of observed spiking correlations is enough to account for the surprisingly large magnitude of subthreshold neuronal variability [1,26–28]. In contrast, the asynchronous regime yields unrealistically low variability, an observation that challenges the basis for the asynchronous state hypothesis.
Recent theoretical works [37,38] have also noted that the asynchronous state hypothesis seems at odds with certain features of cortical activity, such as the emergence of spontaneous activity or the maintenance of significant average polarization during evoked activity. Zerlaut et al. have analyzed under which conditions conductance-based networks can achieve a spectrum of asynchronous states with realistic neural features [37]. In their work, a key variable for achieving this spectrum is a strong afferent drive that modulates a balanced network with moderate recurrent connections. Moderate recurrent conductances are inferred from allowing for up to 2 mV somatic deflections at rest, whereas the afferent drive is provided via even stronger synaptic conductances that can activate synchronously. These inferred conductances appear large in light of recent in vivo measurements [72,75,76], and the corresponding synaptic weights are all large within our framework. Correspondingly, the typical connectivity numbers considered are small, both for recurrent connections and for the coactivating afferent projections. Thus, results from Ref. [37] appear consistent with our observation that realistic subthreshold variability can be achieved asynchronously only for a restricted number of large synaptic weights. Our findings, however, predict that these results follow from connectivity sparseness and will not hold in denser networks, for which the pairwise spiking correlations will exceed the empirical criteria for asynchrony adopted in Ref. [37]. Sanzeni et al. have pointed out that implementing the effective-time-constant approximation in conductance-based models suppresses subthreshold variability, especially in the high-conductance state [77]. As mentioned here, this suppression causes the voltage variability to decay in any scaling limit with vanishing synaptic weights. Sanzeni et al. observe that such decay is too fast to yield realistic variability for the balanced scaling w ∝ 1/√K. To remedy this point, these authors propose to adopt a slower, logarithmic scaling of the weights, which can be derived from the principle of rate conservation in neural networks. Such a scaling is sufficiently slow for variability to persist in networks with large connectivity numbers (≃10⁵). However, as for any scaling with vanishing weights, our exact analysis shows that such scaling must eventually lead to decaying variability, thereby challenging the basis for the asynchronous state hypothesis.
Both of these studies focus on the network dynamics of conductance-based models under diffusion approximations. Diffusive behaviors rigorously emerge only under some scaling limit with vanishing weights [98,99]. By focusing on the single-cell level rather than the network level, we are able to demonstrate that the effective-time-constant approximation holds exactly for shot-noise-driven, conductance-based neurons, without any diffusive approximation. Consequently, suppression of variability must occur independent of any scaling choice, except in the presence of input synchrony. Although this observation poses a serious theoretical challenge to the asynchronous state hypothesis, observe that it does not invalidate the practical usefulness of the diffusion approximation. For instance, we show in Fig. 11 that the mean spiking response of a shot-noise-driven AONCB neuron with an integrate-and-fire mechanism can be satisfactorily captured via the diffusion approximation. In addition, our analysis allows one to extend the diffusion approximation to include input synchrony.
FIG. 11.
Diffusion approximations in the presence of synchrony. (a) Comparison of an asynchronously driven integrate-and-fire AONCB neuron (blue trace) with its diffusion approximation obtained via the effective-time-constant approximation (red trace). (b) Comparison of a synchronously driven integrate-and-fire AONCB neuron (blue trace) with its diffusion approximation obtained via our exact analysis (red trace).
D. Limitations of the approach
A first limitation of our analysis is that we neglect the spike-generating mechanism as a source of neural variability. Most diffusion-based approaches model spike generation via the integrate-and-fire mechanism, whereby the membrane voltage resets to a fixed value upon reaching a spike-initiation threshold [37,38,46,47,95–97]. Accounting for such a mechanism can impact our findings in two ways: (i) By confining the voltage below the spiking threshold, the spiking mechanism may suppress the mean response enough for the neuron to operate well in the high-conductance regime for large input drives. Such a scenario would still produce exceedingly low variability due to variability quenching in the high-conductance regime, consistent with Ref. [1]. (ii) The additional variability due to postspiking resets may dominate the synaptic variability, so that a large overall subthreshold variability can be achieved in spite of low synaptic variability. This possibility also seems unlikely, as dominant yet stereotypical resets would imply a quasideterministic neural response [71]. Addressing the above limitations quantitatively requires extending our exact analysis to include the integrate-and-fire mechanism using techniques from queueing theory [104], which is beyond the scope of this work. We note, however, that implementing a postspiking reset to a fixed voltage level yields simulated trajectories that markedly differ from physiological ones (see Fig. 1), for which the postspiking voltage varies across conditions [26–28].
A second limitation of our analysis is our assumption of exchangeability, which is the lens through which we link spiking correlations and input drives. Taken literally, the exchangeability assumption states that synapses all have a typical strength and that conductance variability primarily stems from the variable number of coactivating synapses. This is certainly an oversimplification, as synapses exhibit heterogeneity [113], which likely plays a role in shaping neural variability [114]. Distinguishing between heterogeneity and correlation contributions, however, is a fundamentally ambiguous task [115]. For instance, considering synchronous inputs with a fixed weight and jump probabilities as in Eqs. (5) and (9) is indistinguishable from considering independent inputs with appropriately chosen heterogeneous weights and rates. Within our modeling approach, accounting for synaptic heterogeneity, with a dispersed distribution of synaptic weights, can be done by taking the jump distribution as
where the k-fold convolutions of the synaptic weight distribution appear. This leads to an overdispersion of the jump distribution and, thus, to increased subthreshold neural variability. Therefore, while we have assumed exchangeability, our approach can accommodate weight heterogeneity. The interpretation of our results in terms of synchrony rather than heterogeneity is supported by recent experimental evidence that cortical response selectivity derives from strength in numbers of synapses rather than differences in synaptic weights [116].
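A discrete sketch of this construction follows. Assuming an illustrative weight distribution p_w and a distribution p_k over the number k of coactivating synapses (both placeholders), the jump distribution is the p_k-mixture of the k-fold convolutions of p_w:

```python
import numpy as np

# Discretized synaptic-weight distribution on a regular grid (illustrative).
grid = np.arange(64)
p_w = np.exp(-grid / 5.0)
p_w[0] = 0.0
p_w /= p_w.sum()

# Distribution of the number k of coactivating synapses (illustrative).
p_k = np.array([0.0, 0.7, 0.2, 0.1])          # P(k = 0), ..., P(k = 3)

# Jump distribution as the p_k-mixture of k-fold convolutions of p_w.
jump = np.zeros((len(p_k) - 1) * (len(grid) - 1) + 1)
conv = np.array([1.0])                         # 0-fold convolution: delta at 0
for k, pk in enumerate(p_k):
    if k == 0:
        continue
    conv = np.convolve(conv, p_w)              # k-fold convolution of p_w
    jump[: len(conv)] += pk * conv

print(jump.sum())                              # normalized mixture, ~1.0
# Mixing in k > 1 terms overdisperses the jump distribution relative to
# p_w alone, hence the increased subthreshold variability.
```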
A third limitation of our analysis is to consider a perfect form of synchrony, with exactly simultaneous synaptic activations. Although seemingly unrealistic, we argue that perfect input synchrony can still yield biologically relevant estimates of the voltage variability. For instantaneous synchrony, the empirical spiking correlation is independent of the timescale over which spikes are counted, as shown in Fig. 12(a) (blue line). This is a potential problem, because spiking correlations have been measured to vanish on small timescales in experimental recordings [117,118]. More realistic input models can be obtained by jittering instantaneously synchronous spikes. Such a procedure leads to a general decrease in the empirical spiking correlations over all counting timescales, including the reference timescale [vertical dashed line in Fig. 12(a)], with correlations vanishing in the limit of small timescales [red, yellow, and purple lines in Fig. 12(a)]. Analysis of the temporal structure of spiking correlations in Refs. [117,118] suggests that correlations lie within the range 0.01–0.04 at this reference timescale, which we focus on because it is just larger than the membrane time constant of the neuron. Then, to achieve realistic correlations at the reference timescale, the instantaneous spiking correlation of the unjittered synchronous input model may be increased. For a fixed jittering timescale, Fig. 12(b) shows that realistic jittered correlations are obtained for instantaneous spiking correlations within the range 0.2–0.3. Note that this also implies that the empirical spiking correlation saturates for very long counting timescales, as reported in Refs. [117,118]. To validate that our instantaneous model makes realistic predictions about the subthreshold variability, we simulate AONCB neurons in response to these jittered synchronous inputs. Figure 12(c) shows that the resulting stationary voltage distribution (red histogram) closely follows the distribution obtained by assuming instantaneous synchrony with matched spike-count correlation at the reference timescale (blue trace and histogram). Furthermore, we can justify the choice of this timescale a posteriori. Specifically, in Fig. 12(d), we consider temporally structured inputs obtained from the same instantaneous synchrony but for various jittering timescales. Jittering at larger timescales reduces synchrony and voltage variance (vertical dashed lines). We then compare the resulting voltage variance with perfectly synchronous approximations obtained by matching spike-count correlations at various timescales (our choice is to match at 25 ms). Figure 12(d) shows that matching at increasing timescales yields higher variance, but matching at 25 ms offers good approximations (gray square, where variances are about the same). Extending our analytic results to include jittering would require modeling spiking correlations via multidimensional Poisson processes rather than via compound Poisson processes [93,94], which is beyond the scope of this work. A remaining limitation of our synchrony modeling is that our analysis can account only for non-negative, instantaneous correlations between excitation and inhibition, while in reality such correlations may be negative and are expected to peak at a nonzero time lag.
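The jittering procedure of Fig. 12(a) can be reproduced qualitatively in a few lines. The sketch below generates two spike trains sharing all their spikes, jitters each spike independently with centered Gaussian shifts, and measures the spike-count correlation as a function of bin size; the rate, jitter, and duration are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
T, rate, sigma = 2000.0, 5.0, 0.005   # duration (s), rate (Hz), jitter SD (s)

shared = rng.uniform(0.0, T, rng.poisson(rate * T))          # synchronous events
train_a = shared + sigma * rng.standard_normal(len(shared))  # independent jitters
train_b = shared + sigma * rng.standard_normal(len(shared))

for bin_size in (0.001, 0.005, 0.025, 0.1):
    edges = np.arange(0.0, T + bin_size, bin_size)
    c_a, _ = np.histogram(train_a, edges)
    c_b, _ = np.histogram(train_b, edges)
    rho_T = np.corrcoef(c_a, c_b)[0, 1]
    print(f"bin {bin_size * 1e3:6.1f} ms  spike-count correlation {rho_T:.3f}")
# Correlations vanish for bins much smaller than the jitter and recover
# toward their unjittered value for bins much larger than the jitter.
```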
FIG. 12.
Impact of jittering synchronous inputs. (a) Effect of jittering synchronous spike times via independent centered Gaussian time shifts with varied standard deviation: Without jitter, the spiking correlation is independent of the size of the time bins used to count spikes (blue trace). Jittering with larger standard deviations decreases the spiking correlation for all bin sizes, with correlations vanishing in the limit of small bin sizes. (b) For a given jitter standard deviation, the target spike-count correlation at the reference bin size is obtained by jittering a synchronous input with a larger instantaneous correlation. (c) Comparison of voltage traces obtained with an instantaneous synchronous input (blue) and with jittered correlated inputs (red). Both types of input are chosen so that they yield the same spiking correlation at the reference bin size. The stationary distributions are close to identical, leading to less than 1% error in the variance estimates. (d) Comparison between the voltage variances of an AONCB neuron driven by realistic synchronous inputs with various jitters (dashed line) and the voltage variances of the same AONCB neuron driven by instantaneously synchronous approximations (solid line). For each jitter, different instantaneous approximations are obtained by matching spike-count correlations at various bin sizes. Good approximations are consistently obtained when matching at 25 ms (gray column).
A fourth limitation of our analysis is that it is restricted to a form of synchrony that ignores temporal heterogeneity. This is a limitation because a leading hypothesis for the emergence of variability is that neurons generate spikes as if through a doubly stochastic process, i.e., as a Poisson process with a temporally fluctuating rate [119]. To better understand this limitation, let us interpret our exchangeability-based modeling approach within the framework of doubly stochastic processes [51,52]. This can be done most conveniently by reasoning on the discrete correlated spiking model specified by Eq. (3). Specifically, for a fixed bin size, one can interpret the collection of i.i.d. directing variables as an instantaneously fluctuating rate. In this interpretation, nonzero correlations can be seen as emerging from a doubly stochastic process for which the rate fluctuates as uncorrelated noise, i.e., with zero correlation time. This zero correlation time is potentially a serious limitation, as it has been argued that shared variability is best modeled by a low-dimensional latent process evolving with slow, or even smooth, dynamics [82]. Addressing this limitation will require developing limit spiking models with nonzero correlation times, using probabilistic techniques that are beyond the scope of this work [56].
A final limitation of our analysis is that it does not explain the consistent emergence of synchrony in network dynamics. It remains conceptually unclear how synchrony can emerge and persist in neural networks that are fundamentally plagued by noise and exhibit large degrees of temporal and cellular heterogeneity. It may well be that carefully taking into account the finite size of networks will be enough to produce the desired level of synchrony-based correlations, which are rather weak after all. Still, one would have to check whether achieving a given degree of synchrony requires the tuning of certain network features, such as the degree of shared inputs, the propensity of certain recurrent motifs [120], or the relative width of recurrent connections with respect to feedforward projections [121]. From a theoretical standpoint, the asynchronous state hypothesis answers the consistency problem by assuming no spiking correlations and, thus, no synchrony. One can justify this assumption in idealized mathematical models by demonstrating the so-called “propagation-of-chaos” property [122], which rigorously holds for certain scaling limits with vanishing weights and under the assumption of exchangeability [107–109]. In this light, the main theoretical challenge posed by our analysis is extending the latter exchangeability-based property to include nonzero correlations [123] and, hopefully, to characterize irregular synchronous states in some scaling limits.
ACKNOWLEDGMENTS
L. A. B., B. L., N. J. P., E. S., and T. T. were supported by the Vision Research program of the National Institutes of Health under Award No. R01EY024071. L. A. B. and T. T. were also supported by the CRCNS program of the National Science Foundation under Award No. DMS-2113213. We thank François Baccelli, David Hansel, and Nicolas Brunel for insightful discussions.
APPENDIX A: DISCRETE-TIME SPIKING CORRELATION
In this appendix, we first consider the discrete-time version of our model for possibly correlated excitatory synaptic inputs. In this model, observing the synaptic inputs over consecutive time steps specifies a {0, 1}-valued matrix, where 1 indicates that an input is received and 0 indicates an absence of input. For simplicity, we further assume that the inputs are independent across time:
so that we can drop the time index and consider a single population vector of inputs. Consequently, the individual spiking rate sets the probability that a given input is active within a time step of fixed duration. Under the assumption that the inputs belong to an infinitely exchangeable set of random variables, the de Finetti theorem states that there exists a directing probability measure on [0, 1] such that
Assuming the directing measure is known, we can compute the spiking correlation attached to our model. To see this, first observe that, under the above probabilistic model, we have
Then, using the law of total covariance within the above probabilistic model, we have
This directly yields that the spiking correlation reads
ρ = Var(θ) / ( E[θ] (1 − E[θ]) ),  (A1)
where θ denotes the [0, 1]-valued directing random variable.
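This expression is straightforward to verify by Monte Carlo: the pairwise covariance of the exchangeable Bernoulli inputs equals the variance of the directing variable, while each input has Bernoulli variance E[θ](1 − E[θ]). A short check with an illustrative beta directing measure:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, beta, n = 0.5, 4.5, 500_000        # illustrative beta parameters

theta = rng.beta(alpha, beta, size=n)     # directing variable per time step
x = (rng.random((n, 2)) < theta[:, None]).astype(float)  # two exchangeable inputs

empirical = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
predicted = theta.var() / (theta.mean() * (1.0 - theta.mean()))
print(empirical, predicted)   # both approach 1 / (1 + alpha + beta) = 1/6
```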
The exact same calculations can be performed in the partially exchangeable case of mixed excitation and inhibition. The assumption of partial exchangeability requires that, when considered separately, the {0, 1}-valued excitatory and inhibitory input vectors each belong to an infinitely exchangeable sequence of random variables. Then, de Finetti’s theorem states that the probability of finding the full vector of inputs in any particular configuration is given by
(A2)
where the bivariate directing measure fully parametrizes our probabilistic model. Performing calculations similar to those for the case of excitation alone within this partially exchangeable setting yields
(A3)
APPENDIX B: COMPOUND POISSON PROCESSES AS CONTINUOUS-TIME LIMITS
Let us consider the discrete-time model specified by Eq. (A2), which is obtained under the assumption of partial infinite exchangeability. Under this assumption, the probability laws of the inputs are entirely determined by the joint distribution of the numbers of active excitatory and inhibitory inputs. This distribution can be computed as
It is convenient to choose the directing measures as beta distributions, since these are conjugate to the binomial distributions. Such a choice yields a class of probabilistic models referred to as beta-binomial models, which have been studied extensively [61,62]. In this appendix, we always assume that the excitatory and inhibitory marginals of the directing measure are beta distributions. Then, direct integration shows that the marginal distributions for the numbers of excitatory and inhibitory inputs are
Moreover, given the individual excitatory and inhibitory spiking rates within a time step, we have
The continuous-time limit is obtained by taking vanishing time steps, which implies that the beta parameters jointly vanish. In this limit, the beta distributions become deficient, so that time bins almost surely contain no active inputs. Actually, one can show that
where the digamma function appears. This indicates that, in the continuous-time limit, the times at which some excitatory inputs or some inhibitory inputs are active define a point process. Moreover, owing to the assumption of independence across time, this point process is actually a Poisson point process. Specifically, consider a fixed time horizon divided into a large integer number of time steps, and define the sequence of times
Considered separately, the sequences of excitatory and inhibitory activation times constitute binomial approximations of Poisson processes. It is a classical result that these limit Poisson processes are recovered exactly in the continuous-time limit and that their rates are, respectively, given by
For all integer input numbers, the rate defined above is an increasing analytic function of the spiking correlation. Thus, the excitatory and inhibitory rates are always bounded between two extreme cases, achieved for perfect or zero correlations. Perfect correlations yield rates equal to the individual spiking rates, consistent with all synapses activating simultaneously. Zero correlations yield rates scaling with the number of inputs, consistent with all synapses activating asynchronously, so that no inputs simultaneously activate. Observe that, in all generality, the rates are such that the mean number of spikes over a given duration is conserved in the continuous-time limit. For instance, one can check that
When excitation and inhibition are considered separately, the limit process specifies two compound Poisson processes:
where the excitatory and inhibitory event times define Poisson processes with the above rates and where the jumps are i.i.d. according to the corresponding jump distributions. Nonzero correlations between excitation and inhibition emerge when these two Poisson processes are not independent. This corresponds to the processes sharing event times, so that excitation and inhibition occur simultaneously at these times. To understand this point intuitively, let us consider the limit Poisson process obtained by registering synaptic events without distinguishing excitation from inhibition. For perfect correlation, all synapses activate synchronously: All times are shared. By contrast, for zero correlation, no synapses activate simultaneously: No times are shared. For the intermediary regime of correlations, a nonzero fraction of times are shared, resulting in a driving Poisson process whose overall rate is smaller than the sum of the excitatory and inhibitory rates. We investigate the above intuitive statements quantitatively in Appendix D by inspecting two key examples.
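The shared-times picture can be checked by simulating a single overall Poisson process and tagging each event as excitatory only, inhibitory only, or joint; the tagging probabilities below are illustrative. The excitatory and inhibitory event rates then bracket the overall rate, with the overall rate smaller than their sum:

```python
import numpy as np

rng = np.random.default_rng(5)
b, T = 100.0, 1000.0            # overall event rate (Hz) and duration (s)
p_joint, p_exc = 0.2, 0.5       # P(joint), P(excitation only); remainder inhibition

n = rng.poisson(b * T)          # events of the overall process
u = rng.random(n)
joint = u < p_joint
exc_only = (u >= p_joint) & (u < p_joint + p_exc)
inh_only = u >= p_joint + p_exc

b_e = (joint.sum() + exc_only.sum()) / T   # rate of times with some excitation
b_i = (joint.sum() + inh_only.sum()) / T   # rate of times with some inhibition
print(b_e, b_i, b_e + b_i)      # b = 100 satisfies max(b_e, b_i) <= b <= b_e + b_i
```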
Let us conclude this appendix by recapitulating the general form of the limit compound process obtained in the continuous-time limit when jointly considering excitation and inhibition. This compound Poisson process can be represented as
where the overall Poisson process registers all synaptic events without distinguishing excitation from inhibition and where the pairs of excitatory and inhibitory jumps are i.i.d. random variables. Formally, such a process is specified by the rate of the overall Poisson process and the bivariate distribution of the jumps. These are defined as
(B1)
where we denote the probability of registering no synaptic activation during a time step. According to these definitions, the overall rate is the infinitesimal likelihood that some input is active within a time bin, whereas the jump distribution gives the probability that given numbers of excitatory and inhibitory inputs are active, conditioned on at least one input being active. One can similarly define the excitatory and inhibitory rates of events, as well as the excitatory and inhibitory jump distributions. Specifically, we have
(B2)
Observe that, thus defined, the excitatory and inhibitory jump distributions are specified as conditional marginal distributions of the joint jump distribution, conditioned on the events that the excitatory or inhibitory jump is positive, respectively. To see why, observe, for instance, that
(B3)
where we use the definitions of the rates given in Eqs. (B1) and (B2) to establish that
APPENDIX C: CONTINUOUS-TIME SPIKING CORRELATION
Equations (A1) and (A3) carry over to the continuous-time limit by observing that, for limit compound Poisson processes to emerge, the single-bin activation probabilities must vanish with the time step. This directly implies that, in this limit, we have
(C1)
All the stationary expectations appearing above can be computed via the jump distribution of the limit point process [104]. Because this limit process is a compound Poisson process with discrete bivariate jumps, the resulting jump distribution is specified over pairs of non-negative integers. Then, by partial exchangeability of the {0, 1}-valued excitatory and inhibitory population vectors, we have
(C2)
where the bivariate jump follows the limit jump distribution.
To further proceed, it is important to note the relation between expectations tied to the overall input process and expectations tied to the excitatory input process alone. This relation is best captured by remarking that the excitatory jump distribution is not defined as a marginal of the joint jump distribution but only as a conditional marginal on the event that the excitatory jump is positive. This implies that
(C3)
(C4)
with similar expressions for the inhibition-related quantities. Injecting Eqs. (C2)–(C4) in Eq. (C1) yields
APPENDIX D: TWO EXAMPLES OF LIMIT COMPOUND POISSON PROCESSES
The probability of registering no synaptic activation, which plays a central role in Appendix B, can easily be computed for zero correlation by considering a directing measure in product form. Then, integration with respect to the separable excitatory and inhibitory variables yields
In turn, the limit compound Poisson process can be obtained in the continuous-time limit by observing that
which implies that the overall rate is the sum of the individual input rates, as expected. To characterize the limit compound Poisson process, it remains to exhibit the distribution of the excitatory and inhibitory jumps. Considering first jointly positive jump sizes, we have
Then, one can use the limit behaviors
so that, in the continuous-time limit, we have
A similar calculation holds by symmetry for the inhibitory case. Thus, the limit jump distribution puts no mass on jointly positive excitatory and inhibitory jumps. This is consistent with the intuition that excitation and inhibition happen at distinct times in the absence of correlations.
Let us now consider the case of maximum correlation between excitation and inhibition, for a beta directing measure. Moreover, let us assume a deterministic coupling between the excitatory and inhibitory directing variables. Then, the joint distribution of the jumps can be evaluated via direct integration as
As excitation and inhibition are captured separately by the same marginal distributions, they necessarily share a common spiking rate. Then, the overall rate of synaptic events is obtained as
(D1)
and one can check that this overall rate differs from the excitatory- and inhibitory-specific rates, which satisfy
(D2)
To characterize the limit compound Poisson process, it remains to exhibit the joint distribution of the jumps. A calculation similar to that for the case of excitation alone yields
Remember that, within our model, spiking correlations do not depend on the number of neurons and that the cross-correlation is bounded by construction. Thus, for the symmetric case under consideration, perfect correlation between excitation and inhibition can be attained only for matched excitatory and inhibitory correlations. For partial correlations, the excitatory and inhibitory Poisson processes share only a fraction of their event times, yielding an aggregate Poisson process whose rate is smaller than the sum of the individual rates. The relations between the overall, excitatory, and inhibitory rates can be directly recovered from the knowledge of the jump distribution by observing that
This implies that the fraction of times with nonzero excitation is given by
so that we consistently recover the excitatory rate already obtained in Eqs. (9) and (D2) via
APPENDIX E: MARCUS JUMP RULE
The goal of this appendix is to justify the Marcus-type update rule given in Eq. (13). To do so, let us first remark that, given a finite time interval, the number of synaptic activation times falling in this interval is almost surely finite. Consequently, taking a small enough synaptic activation time ensures that synaptic activation events do not overlap in time, so that it is enough to consider a single synaptic activation, triggered with no loss of generality at time zero. Let us denote the voltage just before the impulse onset as the initial condition for the ensuing voltage dynamics. As the dimensionless conductances remain constant for the duration of the activation, the voltage satisfies
where we neglect the external current for simplicity. The unique solution satisfying the initial condition is
The Marcus-type rule follows from evaluating the jump update as the limit
which has the same form as the rule announced in Eq. (13). Otherwise, at a fixed synaptic activation time, the fraction of time during which the voltage exponentially relaxes toward the leak reversal potential is bounded below by a quantity that approaches one in the limit of instantaneous synapses, as the almost surely finite number of synaptic activations does not depend on the activation time. Thus, in this limit, the voltage exponentially relaxes toward the leak reversal potential, except at the jump discontinuities marking synaptic events.
APPENDIX F: STATIONARY VOLTAGE MEAN
For a positive synaptic activation time, the classical method of variation of constants applies to solve Eq. (1). This yields an expression for the voltage in terms of regular Riemann-Stieltjes integrals, where the conductance traces are treated as a form of deterministic quenched disorder. Specifically, given an initial condition, we have
where the solution depends on the all-or-none-conductance processes. As usual, the stationary dynamics of the voltage is recovered by considering the limit of arbitrarily large times, for which one can neglect the influence of the initial condition. Introducing the cumulative input processes defined by
we have
(F1)
In turn, expanding the integrand above yields the following expression for the stationary expectation of the voltage:
(F2)
Our primary task is to evaluate the various stationary expectations appearing in the above formula. Such a goal can be achieved analytically for AONCB models. As the involved calculations tend to be cumbersome, we give a detailed account only in Appendices H and I. Here, we present the key steps of the calculation, which ultimately produces an interpretable compact formula for the stationary mean voltage in the limit of instantaneous synapses.
In order to establish this compact formula, it is worth introducing the stationary bivariate function
(F3)
which naturally depends on the synaptic activation time via the cumulative input processes. This bivariate function is of great interest, because all the stationary expectations at stake in Eq. (F2) can be derived from it. Before justifying this point, an important observation is that the defining expectation bears only on the cumulative input processes, which specify bounded, piecewise continuous functions with probability one, independent of the synaptic activation time. As a result of this regular behavior, the expectation commutes with the limit of instantaneous synapses, allowing one to write
where we exploit the fact that the cumulative input processes converge toward coupled compound Poisson processes in the limit of instantaneous synapses:
(F4)
The above remark allows one to compute the term due to current injection in Eq. (F2), where the relevant expectation can be identified with the bivariate function just introduced. Indeed, utilizing the standard form of the moment-generating function of compound Poisson processes [51], we find that
where we introduce the first-order aggregate efficacy
Remember that, in the above definition, the expectation is taken with respect to the joint probability distribution of the conductance jumps.
It remains to evaluate the expectations associated with the excitatory and inhibitory reversal potentials in Eq. (F2). These terms differ from the current-associated term in that they involve expectations of stochastic integrals with respect to the cumulative input processes, by contrast with Eq. (F3), which involves only expectations of functions of these processes. In principle, one could still hope to adopt a similar route as for the current-associated term, exploiting the compound Poisson process obtained in the limit of instantaneous synapses. However, such an approach would require that the operations of taking the limit of instantaneous synapses and evaluating the stationary expectation still commute. This is a major caveat, as such a commuting relation generally fails for point-process-based stochastic integrals. Therefore, one has to analytically evaluate the expectations at stake for positive synaptic activation times, without resorting to the simplifying limit of instantaneous synapses. This analytical requirement is the primary motivation for considering AONCB models.
The first step in the calculation is to realize that, for positive synaptic activation times, the conductance traces are bounded, piecewise continuous functions with probability one. Under these conditions, it then holds that
so that the sought-after expectations can be deduced from the closed-form knowledge of the bivariate function for positive synaptic activation times. The corresponding analytical expression can be obtained via careful manipulation of the processes featured in the exponent in Eq. (F3) (see Appendix H). In a nutshell, these manipulations hinge on splitting the integrals defining the cumulative input processes into independent contributions arising from spiking events occurring in five nonoverlapping, contiguous intervals. There is no loss of generality in assuming the corresponding time ordering, and, from the resulting analytical expression, we can compute
where the effective first-order synaptic efficacies are defined via Eq. (15) as
Observe that, by definition, these efficacies saturate in the limit of large synaptic weights.
Altogether, upon evaluation of the integrals featured in Eq. (F2), these results allow one to produce the compact expression Eq. (14) for the stationary voltage mean in the limit of instantaneous synapses:
APPENDIX G: STATIONARY VOLTAGE VARIANCE
The calculation of the stationary voltage variance is more challenging than that of the stationary voltage mean. However, in the limit of instantaneous synapses, this calculation also produces a compact, interpretable formula. Adopting a similar approach as for the stationary mean calculation, we start by expressing the second moment in the stationary limit in terms of stochastic integrals involving the cumulative input processes. Specifically, using Eq. (F1), we have
(G1)
Our main goal is to compute the stationary expectation of the above quantity. As for the stationary voltage mean, our strategy is (i) to derive the exact stationary expectation of the integrands for finite synaptic activation times, (ii) to evaluate these integrands in the simplifying limit of instantaneous synapses, and (iii) to rearrange the terms obtained after integration into an interpretable final form. Enacting the above strategy is a rather tedious task, and, as for the calculation of the mean voltage, we present only the key steps of the calculation in the following.
The integrand terms at stake are obtained by expanding Eq. (G1), which yields the following quadratic expression for the stationary second moment of the voltage:
whose various coefficients need to be evaluated. These coefficients are conveniently specified in terms of the following symmetric random function:
which features prominently in Eq. (G1). Moreover, drawing on the calculation of the stationary mean voltage, we anticipate that the quadrivariate version of this function will play a central role in the calculation via its stationary expectation. Owing to this central role, we denote this expectation explicitly as
where we make the dependence on the time arguments explicit. As a mere expectation with respect to the cumulative input processes, this quantity can be evaluated in closed form for AONCB models. This again requires careful manipulation of the cumulative input processes, which need to be split into independent contributions arising from spiking events occurring in nonoverlapping intervals. By contrast with the bivariate case, the quadrivariate case requires one to consider nine contiguous intervals. There is no loss of generality in considering these interval bounds to be determined by the two following time orderings:
where the first ordering stands for the off-diagonal case and the second for the diagonal case.
The reason to consider only these orderings is that all the relevant calculations are made in the stationary limit. By symmetry of the quadrivariate function, it is then enough to restrict our consideration to this limit, which leaves only the relative position of the last two time arguments to be determined, i.e., to decide whether they belong to the diagonal region or to the off-diagonal region. For the sake of completeness, we give the two expressions of the quadrivariate expectation on these two regions in Appendix I. Owing to their tediousness, we do not give the detailed calculations leading to these expressions, which are lengthy but straightforward elaborations on those used in Appendix H. Here, we stress that these expressions reveal that the stationary expectation is a twice-differentiable quadrivariate function.
With these remarks in mind, the coefficients featured in Eq. (G2) can be categorized into three classes.
-
There is a single current-dependent inhomogeneous coefficient, which is merely a stationary expectation with respect to the cumulative input processes and can thus be directly evaluated in the limit of instantaneous synapses. In other words, step (ii) can be performed before step (i), similarly as for the stationary voltage mean calculation. Moreover, having a general analytical expression at our disposal (see Appendix I), we can directly evaluate for all times that
(G2)
where we define the second-order aggregate efficacy. It is clear that the resulting continuous function is smooth everywhere except on the diagonal, where it admits a slope discontinuity. As we shall see, this slope discontinuity is the reason why one needs to consider the diagonal region carefully, even when concerned only with the stationary limit. That being said, the diagonal behavior plays no role here, and straightforward integration over the negative orthant gives
-
There are two current-dependent linear coefficients, where the factor 2 comes from the fact that these coefficients result from the contributions of two symmetric terms in the expansion of Eq. (G1). Both coefficients involve expectations of stochastic integrals akin to those evaluated for the stationary mean calculation. Therefore, these terms can be treated similarly by implementing steps (i) and (ii) sequentially. The trick is to realize that, for positive time arguments, the sought-after expectations follow from differentiating the quadrivariate expectation. Thus, for any point in the off-diagonal region, the analytical knowledge gained in Appendix I allows one to evaluate
(G3)
where the second-order synaptic efficacies are defined as
(G4)
Observe that these efficacies satisfy a relation analogous to that of the first-order efficacies. Taking the limits of Eq. (G3) toward stationarity specifies two bivariate functions that are continuous everywhere, except on the diagonal, where these functions present a jump discontinuity. This behavior is still regular enough to discard any potential contributions from diagonal terms, so that we can restrict ourselves to the off-diagonal region. Then, taking the stationary limit after integration over this region, we find that
-
There are four quadratic coefficients associated with the excitatory and inhibitory reversal potentials, including two diagonal terms and two symmetric cross terms. Notice that it is enough to compute only one diagonal term, as the other term can be deduced by symmetry. Following the same method as for the linear terms, we start by remarking that, for all points in the off-diagonal region, the relevant expectations follow from second-order differentiation of the quadrivariate expectation. As before, the analytical knowledge gained in Appendix I allows one to evaluate these derivatives in closed form. The resulting closed-form expressions allow one to compute the parts of the quadratic coefficients resulting from integration over the off-diagonal region, which admit well-defined stationary limit values. However, for quadratic terms, one also needs to include the contributions arising from the diagonal region, as suggested by the first-order jump discontinuity on the diagonal. To confirm this point, one can show from the analytical expression given in Appendix I that all relevant second-order derivative terms exhibit a singular scaling over the diagonal region. This scaling leads to nonzero contributions resulting from the integration of these second-order derivative terms over the diagonal region, even in the stationary limit. Actually, we find that these contributions also admit well-defined limit values with (see Appendix J)
(G5)
(G6)
Remembering that the expression of the second diagonal term can be deduced by symmetry, Eq. (G5) defines the diagonal coefficients in terms of useful auxiliary second-order efficacies. These efficacies feature prominently in the final variance expression, and it is worth mentioning their explicit definitions as
(G7)
The other quantity of interest is the coefficient appearing in both Eqs. (G5) and (G6). This non-negative coefficient, defined as
(G8)
entirely captures the (non-negative) correlation between excitatory and inhibitory inputs and shall be seen as an efficacy as well. Keeping these definitions in mind, the full quadratic coefficients are finally obtained by summing the off-diagonal and diagonal contributions.
From there, injecting the analytical expressions of the various coefficients into the quadratic form Eq. (G2) leads to an explicit formula for the stationary voltage variance in the limit of instantaneous synapses. One is then left with only step (iii), which aims to exhibit a compact, interpretable form for this formula. We show in Appendix K that lengthy but straightforward algebraic manipulations lead to the simplified form given in Eq. (16):
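Although the formulas above are specific to the conductance-based dynamics, the overall logic of validating stationary moments is easy to illustrate on the simplest linear shot-noise relaxation, for which Campbell's theorem gives the stationary mean λwτ and variance λw²τ/2. The following minimal simulation is only a sketch with placeholder parameters; the linear model is a stand-in for, not equivalent to, the conductance-based model analyzed here.

```python
import numpy as np

rng = np.random.default_rng(3)
tau, lam, w = 0.02, 500.0, 0.5      # placeholder time constant (s), input rate (Hz), jump size
dt, steps, burn = 1e-4, 400_000, 40_000

decay = np.exp(-dt / tau)           # exact leaky relaxation between increments
jumps = w * rng.poisson(lam * dt, size=steps)

v, trace = 0.0, np.empty(steps)
for i in range(steps):
    v = decay * v + jumps[i]        # leaky integration driven by Poisson jumps
    trace[i] = v

# Campbell's theorem for linear shot noise: mean = lam*w*tau, var = lam*w^2*tau/2.
print(f"mean: {trace[burn:].mean():.3f}  vs  {lam * w * tau:.3f}")
print(f"var:  {trace[burn:].var():.3f}   vs  {lam * w**2 * tau / 2:.3f}")
```

With these placeholder values, the empirical moments match the Campbell predictions (mean 5, variance 1.25) to within a few percent of sampling error.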
APPENDIX H: EVALUATION OF
The goal here is to justify the closed-form expression of via standard manipulations of exponential functionals of Poisson processes. By definition, assuming, with no loss of generality, the order , we have
(H1)
We evaluate as a product of independent integral contributions.
Isolating these independent contributions from Eq. (H1) requires one to establish two preliminary results about the quantity
(H2)
where denotes a Poisson process, denotes i.i.d. non-negative random variables, and is a positive activation time. Assume ; then, given some real , we have
(H3)
One can check that the three terms in Eq. (H3) above are independent, as they involve independent numbers of i.i.d. draws over the intervals , and , respectively. Similar manipulations for the order for yield
(H4)
where the three independent contributions correspond to independent numbers of i.i.d. draws over the intervals , and , respectively.
As evaluating involves only taking the limit at fixed , it is enough to consider the order . With that in mind, we can apply Eqs. (H3) and (H4) with and or , to decompose the two terms of Eq. (H1) into six contributions:
It turns out that the contribution of the third term overlaps with that of the fourth and fifth terms. Further splitting of that third term produces the following expression:
where all five terms correspond to independent numbers of i.i.d. draws over the intervals , and . Then, we have
where all expectation terms can be computed via standard manipulation of the moment-generating function of Poisson processes [51]. The trick is to remember that, for all , given that a Poisson process admits points in , all these points are i.i.d. uniform over . This trick allows one to represent all integral terms in terms of uniform random variables, whose expectations are easily computable. To see this, let us consider , for instance. We have
where are uniformly i.i.d. on . From the knowledge of the moment-generating function of Poisson random variables [51], one can evaluate
where denotes exemplary conductance jumps and denotes an independent uniform random variable. Furthermore, we have
so that we finally obtain
Similar calculations show that we have
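Stripped of model specifics, the computational device used throughout this appendix is generic: conditional on the number of points that a Poisson process places in an interval, those points are i.i.d. uniform, so exponential functionals reduce to elementary expectations of uniform variables. Below is a minimal numerical sanity check of this reduction; the rate, horizon, and test function are placeholders, not model quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t, trials = 2.0, 1.0, 200_000  # placeholder rate, horizon, sample size

def f(u):
    # arbitrary smooth, positive test function on [0, t]
    return np.exp(-u)

# Sample Poisson counts; conditional on a count n, points are i.i.d. uniform on [0, t].
counts = rng.poisson(lam * t, size=trials)
trial_of_point = np.repeat(np.arange(trials), counts)
points = rng.uniform(0.0, t, size=counts.sum())
sums = np.bincount(trial_of_point, weights=f(points), minlength=trials)
mc = np.exp(-sums).mean()

# Closed form for exponential functionals of a Poisson process (Campbell's formula):
# E[exp(-sum_i f(T_i))] = exp(lam * int_0^t (exp(-f(u)) - 1) du), via the midpoint rule.
du = t / 10_000
grid = (np.arange(10_000) + 0.5) * du
exact = np.exp(lam * np.sum(np.exp(-f(grid)) - 1.0) * du)

print(f"Monte Carlo: {mc:.4f}  closed form: {exact:.4f}")  # agree to sampling error
```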
APPENDIX I: EXPRESSION OF
Using calculations similar to those in Appendix H, we can evaluate the quadrivariate expectation on the region , for which the order holds: . This requires one to isolate and consider nine independent contributions, corresponding to the nine contiguous intervals specified by the order. We find
where the non-negative terms making up the above sum are defined as
One can check that and and that , and are all uniformly on the region . This implies that, for all in , we have
Using calculations similar to those in Appendix H, we can evaluate the quadrivariate expectation on the region , for which the order holds: . This requires one to isolate and consider nine independent contributions, corresponding to the nine contiguous intervals specified by the order. We find
(I1)
where the non-negative terms making up the above sum are defined as
Observe that and and that and . Moreover, one can see that is continuous over the whole negative orthant by checking that
Actually, by computing the appropriate limit values of the relevant first- and second-order derivatives of , one can check that, for , all the integrands involved in specifying the coefficients of the quadratic form Eq. (G2) define continuous functions.
APPENDIX J: INTEGRALS OF THE QUADRATIC TERMS ON
Here, we treat only the quadratic term , as the other quadratic terms and can be treated similarly. The goal is to compute , which is defined as the contribution to resulting from integrating over the diagonal region in the limit . To this end, we first remark that
Injecting the analytical expression Eq. (I1) into the above relation and evaluating reveals that scales as , so that one expects that
To compute the exact value of , we perform the change of variable to write
where the function remains of order one on in the limit of instantaneous synapses. Actually, one can compute that
Then, as we are dealing with positive, continuous, uniformly bounded functions, one can safely exchange the integral and limit operations to get
A similar calculation for the quadratic cross term yields
In order to express in terms of , we need to introduce the quantity which satisfies
With the above observation, we remark that
so that we have the following compact expression for the quadratic diagonal term:
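The mechanism behind this nonvanishing diagonal contribution can be isolated from the model specifics: an integrand scaling as 1/ε on a diagonal strip of width O(ε) contributes an O(1) amount in the limit ε → 0. Below is a toy numerical check with a generic strip kernel, not the model's actual integrand.

```python
import numpy as np

def diagonal_strip_integral(eps, T=1.0, n=2000):
    # Integrate h_eps(u, v) = (1/eps) * 1{|u - v| <= eps/2} over [-T, 0]^2
    # with a simple rectangle rule on an n-by-n grid.
    u = np.linspace(-T, 0.0, n)
    uu, vv = np.meshgrid(u, u)
    h = (np.abs(uu - vv) <= eps / 2) / eps
    return h.sum() * (T / (n - 1)) ** 2

for eps in (0.2, 0.1, 0.05):
    # Exact value is T - eps/4, i.e., it tends to T = 1 as eps -> 0
    # despite the integrand blowing up as 1/eps on the diagonal strip.
    print(eps, diagonal_strip_integral(eps))
```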
APPENDIX K: COMPACT VARIANCE EXPRESSION
Our goal is to find a compact, interpretable formula for the stationary variance from the knowledge of the quadratic form
Let us first assume no current injection, , so that one has to keep track of only the quadratic terms. Specifying the quadratic coefficients , and in Eq. (K1), we get
where we collect separately all the terms containing the coefficient and where we use the facts that, by definition, , and . Expanding and simplifying the coefficients of and above yields
Then, we can utilize the expression above for together with the stationary mean formula
(K1)
to write the variance as
To factorize the above expression, let us reintroduce and and collect the terms where these two coefficients occur. This yields
Finally, injecting the expression of stationary mean Eq. (K1) in both parentheses above produces the compact formula
(K2)
which is the same as the one given in Eq. (16).
APPENDIX L: FACTORIZED VARIANCE EXPRESSION
In this appendix, we recast the variance expression given in Eq. (K2) into a form that is manifestly non-negative. To this end, let us first remark that the calculation in Appendix J shows that
Then, setting in Eq. (K2), we obtain
(L1)
Note that the above quantity is clearly non-negative, as any variance must be. From there, one can include the impact of the injected current by further considering all the terms in Eq. (K1), including the linear and inhomogeneous current-dependent terms. Similar algebraic manipulations confirm that Eq. (L1) remains valid, the only impact of being to alter the expression , so that we ultimately obtain the following explicit compact form:
The above expression shows that as expected and that the variability vanishes if and only if with probability one. In turn, plugging this relation into the mean voltage expression and solving for reveals that we necessarily have . This is consistent with the intuition that variability can vanish only if excitation and inhibition perfectly cancel one another.
APPENDIX M: VARIANCE IN THE SMALL-WEIGHT APPROXIMATION
In this appendix, we compute the simplified expression for the variance obtained via the small-weight approximation. First, let us compute the small-weight approximation of the second-order efficacy
which amounts to computing the expectation of the cross product of the jumps and . To estimate the above approximation, it is important to first remember that and are not defined as the marginals of but as conditional marginals, for which we have and . Then, by the definition of the correlation coefficient in Eq. (4), we have
as the rates and are such that and . As a result, we obtain a simplified expression for the cross-correlation coefficient:
Observe that, as expected, vanishes when . Second, let us compute the small-weight approximation of the second-order efficacy
To estimate the above approximation, we use the definition of the correlation coefficient in Eq. (8):
as the rate is such that . This directly implies that
so that we evaluate
which simplifies to when excitation and inhibition act independently. A symmetric expression holds for the inhibitory efficacy . Plugging the above expressions for synaptic efficacies into the variance expression Eq. (16) yields the small-weight approximation
Let us note that the first term on the right-hand side above represents the small-weight approximation of the voltage variance in the absence of correlation between excitation and inhibition, i.e., for . Denoting the latter approximation by and using the fact that the small-weight expression for the mean voltage
is independent of correlations, we observe that, as intuition suggests, synchrony-based correlation between excitation and inhibition results in a decrease of the neural variability:
However, the overall contribution of correlation is to increase variability in the small-weight approximation. This can be shown under the assumptions that and , by observing that
where both terms are positive since we always have .
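For context, the spiking correlations invoked via Eqs. (4) and (8) arise from a beta-based exchangeable model of synchrony. The sketch below uses the textbook de Finetti construction, with conditionally i.i.d. Bernoulli inputs given a beta-distributed spiking probability, for which the pairwise correlation is the standard beta-binomial value 1/(α + β + 1); whether this normalization coincides exactly with the conventions elided above cannot be checked here, so the closed form is quoted as the generic result, with placeholder parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta_, K, trials = 0.5, 50.0, 100, 200_000  # placeholder beta parameters, input number

# De Finetti construction: draw a common spiking probability, then K conditionally
# independent Bernoulli inputs; exchangeability induces a pairwise correlation.
theta = rng.beta(alpha, beta_, size=trials)
spikes = rng.random((trials, K)) < theta[:, None]

empirical = np.corrcoef(spikes[:, 0], spikes[:, 1])[0, 1]
predicted = 1.0 / (alpha + beta_ + 1.0)  # standard beta-Bernoulli pairwise correlation
print(f"empirical: {empirical:.4f}  predicted: {predicted:.4f}")
```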
APPENDIX N: VALIDITY OF THE SMALL-WEIGHT APPROXIMATION
Biophysical estimates of the synaptic weights and and of the synaptic input numbers and suggest that neurons operate in the small-weight regime. In this regime, we claim that exponential corrections due to finite-size effects can be neglected in the evaluation of synaptic efficacies, as long as the spiking correlations remain weak. Here, we make this latter statement quantitative by focusing on the first-order efficacies in the case of excitation alone. The relative error due to neglecting exponential corrections can be quantified as
Let us evaluate this relative error, assumed to be small, when correlations are parametrized via beta distributions with parameter . Assuming correlations to be weak, , amounts to assuming large, . Under the assumption of small error, we can compute
By the calculations carried out in Appendix M, we have
Remembering that , this implies that we have
For a correlation coefficient , this means that neglecting exponential corrections incurs less than error if the number of inputs is smaller than for moderate synaptic weight or than for large synaptic weight .
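To give this error estimate a concrete flavor, one can compare a generic exponential efficacy of the form E[1 − e^{−W}] with its small-weight linearization E[W] for beta-distributed jumps. This is only a sketch with placeholder parameters, not the exact efficacy definitions elided above.

```python
import numpy as np

rng = np.random.default_rng(2)
w, beta_, samples = 0.1, 50.0, 1_000_000  # placeholder weight scale and beta parameter

# Generic efficacy of the form E[1 - exp(-W)] with W = w * B, B ~ Beta(1, beta_):
# weak correlation corresponds to large beta_, hence to typically small jumps W.
W = w * rng.beta(1.0, beta_, size=samples)
exact = np.mean(1.0 - np.exp(-W))
linear = np.mean(W)  # the small-weight approximation drops exponential corrections

rel_err = (linear - exact) / exact
print(f"relative error of the linearization: {rel_err:.2%}")  # small when w/beta_ is small
```

With these placeholder values the relative error is of order w/β ≈ 0.2%, consistent with the claim that exponential corrections are negligible for weak correlations and moderate weights.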
APPENDIX O: INFINITE-SIZE LIMIT WITH SPIKING CORRELATIONS
The computation of the first two moments and requires one to evaluate various efficacies as expectations. Upon inspection, these expectations are all of the form , where is a smooth positive function that is bounded on with . Just as for the Lévy-Khintchine decomposition of stable jump processes [78,79], this observation allows one to generalize our results to processes that exhibit a countable infinity of jumps over finite, nonzero time intervals. For our parametric forms based on beta distributions, such processes emerge in the limit of an arbitrarily large number of inputs, i.e., for . Let us consider the case of excitation alone for simplicity. Then, we need to make sure that all expectations of the form remain well posed in the limit for smooth, bounded test functions with . To check this, observe that, for all , we have by Eqs. (7) and (9) that
where we have introduced the Gamma function . Rearranging terms and using the fact that for all , we obtain
where the last equality is uniform in and follows from the fact that, for all , we have
From there, given a test function , let us consider
The zeroth-order term above can be interpreted as a Riemann sum, so that one has
Thus, the jump density is specified via the Lévy-Khintchine measure
which is a deficient measure, as it admits a pole at zero. This singular behavior indicates that the limit jump process obtained when has a countable infinity of jumps within any finite, nonempty time interval. Generic stationary jump processes with independent increments, as is the case here, are entirely specified by their Lévy-Khintchine measure [78,79]. Moreover, one can check that, given knowledge of , one can consistently estimate the corresponding pairwise spiking correlation as
Performing the integration with respect to the Lévy-Khintchine measure instead of evaluating the expectation in Eqs. (14) and (16) yields
Observe that, as for all , the definitions of the spiking correlation and voltage variance imply that we have , so that neural variability consistently vanishes in the absence of correlations.
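To illustrate why a pole at zero is compatible with well-posed efficacies, consider a hypothetical beta-like Lévy density ν(w) = w^{−1}(1 − w)^{β−1} on (0, 1], a stand-in for the elided expression: its total mass diverges logarithmically (countably many jumps), while integrals of test functions vanishing at zero, such as g(w) = 1 − e^{−w}, remain finite. A numerical check:

```python
import numpy as np

beta_ = 5.0                        # placeholder shape parameter of the beta-like density

def trap(y, x):
    # simple trapezoidal quadrature
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def nu(w):
    # hypothetical beta-like Levy density with a pole at zero
    return (1.0 - w) ** (beta_ - 1.0) / w

for eps in (1e-2, 1e-4, 1e-6):
    w = np.geomspace(eps, 1.0, 200_001)
    mass = trap(nu(w), w)                          # grows like log(1/eps): infinitely many jumps
    eff = trap((1.0 - np.exp(-w)) * nu(w), w)      # stays finite: efficacies remain well posed
    print(f"eps={eps:.0e}  mass={mass:7.2f}  efficacy={eff:.4f}")
```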
References
- [1] Churchland MM et al., Stimulus onset quenches neural variability: A widespread cortical phenomenon, Nat. Neurosci. 13, 369 (2010).
- [2] Tolhurst D, Movshon JA, and Thompson I, The dependence of response amplitude and variance of cat visual cortical neurones on stimulus contrast, Exp. Brain Res. 41, 414 (1981).
- [3] Tolhurst DJ, Movshon JA, and Dean AF, The statistical reliability of signals in single neurons in cat and monkey visual cortex, Vision Res. 23, 775 (1983).
- [4] Churchland MM, Byron MY, Ryu SI, Santhanam G, and Shenoy KV, Neural variability in premotor cortex provides a signature of motor preparation, J. Neurosci. 26, 3697 (2006).
- [5] Rickert J, Riehle A, Aertsen A, Rotter S, and Nawrot MP, Dynamic encoding of movement direction in motor cortical neurons, J. Neurosci. 29, 13870 (2009).
- [6] Stevens CF and Zador AM, Input synchrony and the irregular firing of cortical neurons, Nat. Neurosci. 1, 210 (1998).
- [7] Lampl I, Reichova I, and Ferster D, Synchronous membrane potential fluctuations in neurons of the cat visual cortex, Neuron 22, 361 (1999).
- [8] Ecker AS, Berens P, Cotton RJ, Subramaniyan M, Denfield GH, Cadwell CR, Smirnakis SM, Bethge M, and Tolias AS, State dependence of noise correlations in macaque primary visual cortex, Neuron 82, 235 (2014).
- [9] Poulet JF and Petersen CC, Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice, Nature (London) 454, 881 (2008).
- [10] Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, and Harris KD, The asynchronous state in cortical circuits, Science 327, 587 (2010).
- [11] Ecker AS, Berens P, Keliris GA, Bethge M, Logothetis NK, and Tolias AS, Decorrelated neuronal firing in cortical microcircuits, Science 327, 584 (2010).
- [12] Cohen MR and Kohn A, Measuring and interpreting neuronal correlations, Nat. Neurosci. 14, 811 (2011).
- [13] Braitenberg V and Schüz A, Cortex: Statistics and Geometry of Neuronal Connectivity (Springer Science & Business Media, New York, 2013).
- [14] Softky WR and Koch C, Cortical cells should fire regularly, but do not, Neural Comput. 4, 643 (1992).
- [15] Bell A, Mainen ZF, Tsodyks M, and Sejnowski TJ, "Balancing" of conductances may explain irregular cortical spiking, Institute for Neural Computation, La Jolla, CA, Technical Report No. INC-9502, 1995.
- [16] Amit DJ and Brunel N, Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex, Cereb. Cortex 7, 237 (1997).
- [17] Brunel N, Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons, J. Comput. Neurosci. 8, 183 (2000).
- [18] Ahmadian Y and Miller KD, What is the dynamical regime of cerebral cortex?, Neuron 109, 3373 (2021).
- [19] Sompolinsky H, Crisanti A, and Sommers HJ, Chaos in random neural networks, Phys. Rev. Lett. 61, 259 (1988).
- [20] van Vreeswijk C and Sompolinsky H, Chaos in neuronal networks with balanced excitatory and inhibitory activity, Science 274, 1724 (1996).
- [21] van Vreeswijk C and Sompolinsky H, Chaotic balanced state in a model of cortical circuits, Neural Comput. 10, 1321 (1998).
- [22] Ahmadian Y, Rubin DB, and Miller KD, Analysis of the stabilized supralinear network, Neural Comput. 25, 1994 (2013).
- [23] Rubin DB, Van Hooser SD, and Miller KD, The stabilized supralinear network: A unifying circuit motif underlying multi-input integration in sensory cortex, Neuron 85, 402 (2015).
- [24] Hennequin G, Ahmadian Y, Rubin DB, Lengyel M, and Miller KD, The dynamical regime of sensory cortex: Stable dynamics around a single stimulus-tuned attractor account for patterns of noise variability, Neuron 98, 846 (2018).
- [25] Haider B, Häusser M, and Carandini M, Inhibition dominates sensory responses in the awake cortex, Nature (London) 493, 97 (2013).
- [26] Tan AYY, Andoni S, and Priebe NJ, A spontaneous state of weakly correlated synaptic excitation and inhibition in visual cortex, Neuroscience 247, 364 (2013).
- [27] Tan AYY, Chen Y, Scholl B, Seidemann E, and Priebe NJ, Sensory stimulation shifts visual cortex from synchronous to asynchronous states, Nature (London) 509, 226 (2014).
- [28] Okun M, Steinmetz NA, Cossell L, Iacaruso MF, Ko H, Barthó P, Moore T, Hofer SB, Mrsic-Flogel TD, Carandini M et al., Diverse coupling of neurons to populations in sensory cortex, Nature (London) 521, 511 (2015).
- [29] Hansel D and van Vreeswijk C, The mechanism of orientation selectivity in primary visual cortex without a functional map, J. Neurosci. 32, 4049 (2012).
- [30] Pattadkal JJ, Mato G, van Vreeswijk C, Priebe NJ, and Hansel D, Emergent orientation selectivity from random networks in mouse visual cortex, Cell Rep. 24, 2042 (2018).
- [31] Shadlen MN and Newsome WT, The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding, J. Neurosci. 18, 3870 (1998).
- [32] Chen Y, Geisler WS, and Seidemann E, Optimal decoding of correlated neural population responses in the primate visual cortex, Nat. Neurosci. 9, 1412 (2006).
- [33] Polk A, Litwin-Kumar A, and Doiron B, Correlated neural variability in persistent state networks, Proc. Natl. Acad. Sci. U.S.A. 109, 6295 (2012).
- [34] Yu J and Ferster D, Membrane potential synchrony in primary visual cortex during sensory stimulation, Neuron 68, 1187 (2010).
- [35] Arroyo S, Bennett C, and Hestrin S, Correlation of synaptic inputs in the visual cortex of awake, behaving mice, Neuron 99, 1289 (2018).
- [36] Okun M and Lampl I, Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities, Nat. Neurosci. 11, 535 (2008).
- [37] Zerlaut Y, Zucca S, Panzeri S, and Fellin T, The spectrum of asynchronous dynamics in spiking networks as a model for the diversity of non-rhythmic waking states in the neocortex, Cell Rep. 27, 1119 (2019).
- [38] Sanzeni A, Histed MH, and Brunel N, Emergence of irregular activity in networks of strongly coupled conductance-based neurons, Phys. Rev. X 12, 011044 (2022).
- [39] Stein RB, A theoretical analysis of neuronal variability, Biophys. J. 5, 173 (1965).
- [40] Tuckwell HC, Introduction to Theoretical Neurobiology: Linear Cable Theory and Dendritic Structure (Cambridge University Press, Cambridge, England, 1988), Vol. 1.
- [41] Richardson MJE, Effects of synaptic conductance on the voltage distribution and firing rate of spiking neurons, Phys. Rev. E 69, 051918 (2004).
- [42] Richardson MJ and Gerstner W, Synaptic shot noise and conductance fluctuations affect the membrane voltage with equal significance, Neural Comput. 17, 923 (2005).
- [43] Richardson MJ and Gerstner W, Statistics of subthreshold neuronal voltage fluctuations due to conductance-based synaptic shot noise, Chaos 16, 026106 (2006).
- [44] Marcus S, Modeling and analysis of stochastic differential equations driven by point processes, IEEE Trans. Inf. Theory 24, 164 (1978).
- [45] Marcus SI, Modeling and approximation of stochastic differential equations driven by semimartingales, Stochastics 4, 223 (1981).
- [46] Destexhe A, Rudolph M, Fellous J-M, and Sejnowski T, Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons, Neuroscience 107, 13 (2001).
- [47] Meffin H, Burkitt AN, and Grayden DB, An analytical model for the 'large, fluctuating synaptic conductance state' typical of neocortical neurons in vivo, J. Comput. Neurosci. 16, 159 (2004).
- [48] Doiron B, Litwin-Kumar A, Rosenbaum R, Ocker GK, and Josić K, The mechanics of state-dependent neural correlations, Nat. Neurosci. 19, 383 (2016).
- [49] Ocker GK, Hu Y, Buice MA, Doiron B, Josić K, Rosenbaum R, and Shea-Brown E, From the statistics of connectivity to the statistics of spike times in neuronal networks, Curr. Opin. Neurobiol. 46, 109 (2017).
- [50] Rall W, Time constants and electrotonic length of membrane cylinders and neurons, Biophys. J. 9, 1483 (1969).
- [51] Daley DJ and Vere-Jones D, An Introduction to the Theory of Point Processes. Vol. I: Probability and Its Applications (Springer-Verlag, New York, 2003).
- [52] Daley DJ and Vere-Jones D, An Introduction to the Theory of Point Processes. Vol. II: General Theory and Structure (Springer Science & Business Media, New York, 2007).
- [53] Knight BW, The relationship between the firing rate of a single neuron and the level of activity in a population of neurons: Experimental evidence for resonant enhancement in the population response, J. Gen. Physiol. 59, 767 (1972).
- [54] Knight B, Dynamics of encoding in a population of neurons, J. Gen. Physiol. 59, 734 (1972).
- [55] Kingman JF, Uses of exchangeability, Ann. Probab. 6, 183 (1978).
- [56] Aldous DJ, Exchangeability and related topics, in Proceedings of the École d'Été de Probabilités de Saint-Flour XIII—1983 (Springer, New York, 1985), pp. 1–198.
- [57] De Finetti B, Funzione caratteristica di un fenomeno aleatorio, in Atti del Congresso Internazionale dei Matematici: Bologna del 3 al 10 de settembre di 1928 (1929), pp. 179–190, arXiv:1512.01229.
- [58] Gupta AK and Nadarajah S, Handbook of Beta Distribution and Its Applications (CRC Press, Boca Raton, 2004).
- [59] Macke JH, Berens P, Ecker AS, Tolias AS, and Bethge M, Generating spike trains with specified correlation coefficients, Neural Comput. 21, 397 (2009).
- [60] Hjort NL, Nonparametric Bayes estimators based on beta processes in models for life history data, Ann. Stat. 18, 1259 (1990).
- [61] Thibaux R and Jordan MI, Hierarchical beta processes and the Indian buffet process, in Artificial Intelligence and Statistics (Proceedings of Machine Learning Research, Cambridge, MA, 2007), pp. 564–571.
- [62] Broderick T, Jordan MI, and Pitman J, Beta processes, stick-breaking and power laws, Bayesian Anal. 7, 439 (2012).
- [63] Berkes P, Wood F, and Pillow J, Characterizing neural dependencies with copula models, Adv. Neural Inf. Process. Syst. 21, 129 (2008).
- [64] Balakrishnan N and Lai CD, Continuous Bivariate Distributions (Springer Science & Business Media, New York, 2009).
- [65] Stratonovich R, A new representation for stochastic integrals and equations, SIAM J. Control 4, 362 (1966).
- [66] Chechkin A and Pavlyukevich I, Marcus versus Stratonovich for systems with jump noise, J. Phys. A 47, 342001 (2014).
- [67] Matthes K, Zur Theorie der Bedienungsprozesse, in Transactions of the Third Prague Conference on Information Theory, Statistical Decision Functions, Random Processes (Liblice, 1962) (Czech Academy of Science, Prague, 1964), pp. 513–528.
- [68] Arieli A, Sterkin A, Grinvald A, and Aertsen A, Dynamics of ongoing activity: Explanation of the large variability in evoked cortical responses, Science 273, 1868 (1996).
- [69] Mainen Z and Sejnowski T, Reliability of spike timing in neocortical neurons, Science 268, 1503 (1995).
- [70] Rieke F, Warland D, de Ruyter van Steveninck R, and Bialek W, Spikes: Exploring the Neural Code, A Bradford Book (MIT Press, Cambridge, MA, 1999).
- [71] Softky WR and Koch C, The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs, J. Neurosci. 13, 334 (1993).
- [72] Bruno RM and Sakmann B, Cortex is driven by weak but synchronously active thalamocortical synapses, Science 312, 1622 (2006).
- [73] Jouhanneau J-S, Kremkow J, Dorrn AL, and Poulet JF, In vivo monosynaptic excitatory transmission between layer 2 cortical pyramidal neurons, Cell Rep. 13, 2098 (2015).
- [74] Pala A and Petersen CC, In vivo measurement of cell-type-specific synaptic connectivity and synaptic transmission in layer 2/3 mouse barrel cortex, Neuron 85, 68 (2015).
- [75] Seeman SC, Campagnola L, Davoudian PA, Hoggarth A, Hage TA, Bosma-Moody A, Baker CA, Lee JH, Mihalas S, Teeter C et al., Sparse recurrent excitatory connectivity in the microcircuit of the adult mouse and human cortex, eLife 7, e37349 (2018).
- [76] Campagnola L et al., Local connectivity and synaptic dynamics in mouse and human neocortex, Science 375, eabj5861 (2022).
- [77] Destexhe A, Rudolph M, and Paré D, The high-conductance state of neocortical neurons in vivo, Nat. Rev. Neurosci. 4, 739 (2003).
- [78] Khintchine A, Korrelationstheorie der stationären stochastischen Prozesse, Math. Ann. 109, 604 (1934).
- [79] Lévy P, Théorie de l'Addition des Variables Aléatoires (Gauthier-Villars, Paris, 1954).
- [80] Qaqish BF, A family of multivariate binary distributions for simulating correlated binary variables with specified marginal means and correlations, Biometrika 90, 455 (2003).
- [81] Niebur E, Generation of synthetic spike trains with defined pairwise correlations, Neural Comput. 19, 1720 (2007).
- [82] Macke JH, Buesing L, Cunningham JP, Yu BM, Shenoy KV, and Sahani M, Empirical models of spiking in neural populations, Adv. Neural Inf. Process. Syst. 24, 1350 (2011).
- [83] Pillow JW, Shlens J, Paninski L, Sher A, Litke AM, Chichilnisky E, and Simoncelli EP, Spatio-temporal correlations and visual signalling in a complete neuronal population, Nature (London) 454, 995 (2008).
- [84] Park IM, Archer EW, Latimer K, and Pillow JW, Universal models for binary spike patterns using centered Dirichlet processes, Adv. Neural Inf. Process. Syst. 26, 2463 (2013).
- [85] Theis L, Chagas AM, Arnstein D, Schwarz C, and Bethge M, Beyond GLMs: A generative mixture modeling approach to neural system identification, PLoS Comput. Biol. 9, e1003356 (2013).
- [86] Schneidman E, Berry MJ, Segev R, and Bialek W, Weak pairwise correlations imply strongly correlated network states in a neural population, Nature (London) 440, 1007 (2006).
- [87] Granot-Atedgi E, Tkačik G, Segev R, and Schneidman E, Stimulus-dependent maximum entropy models of neural population codes, PLoS Comput. Biol. 9, e1002922 (2013).
- [88] Bohté SM, Spekreijse H, and Roelfsema PR, The effects of pair-wise and higher-order correlations on the firing rate of a postsynaptic neuron, Neural Comput. 12, 153 (2000).
- [89] Amari S-i, Nakahara H, Wu S, and Sakai Y, Synchronous firing and higher-order interactions in neuron pool, Neural Comput. 15, 127 (2003).
- [90] Kuhn A, Aertsen A, and Rotter S, Higher-order statistics of input ensembles and the response of simple model neurons, Neural Comput. 15, 67 (2003).
- [91] Staude B, Rotter S, and Grün S, CuBIC: Cumulant based inference of higher-order correlations in massively parallel spike trains, J. Comput. Neurosci. 29, 327 (2010).
- [92] Staude B, Rotter S et al., Higher-order correlations in non-stationary parallel spike trains: Statistical modeling and inference, Front. Comput. Neurosci. 4, 1228 (2010).
- [93] Bäuerle N and Grübel R, Multivariate counting processes: Copulas and beyond, ASTIN Bull.: J. IAA 35, 379 (2005).
- [94] Trousdale J, Hu Y, Shea-Brown E, and Josić K, A generative spike train model with time-structured higher order correlations, Front. Comput. Neurosci. 7, 84 (2013).
- [95] Shelley M, McLaughlin D, Shapley R, and Wielaard J, States of high conductance in a large-scale model of the visual cortex, J. Comput. Neurosci. 13, 93 (2002).
- [96] Rudolph M and Destexhe A, Characterization of subthreshold voltage fluctuations in neuronal membranes, Neural Comput. 15, 2577 (2003).
- [97] Kumar A, Schrader S, Aertsen A, and Rotter S, The high-conductance state of cortical networks, Neural Comput. 20, 1 (2008).
- [98] Van Kampen NG, Stochastic Processes in Physics and Chemistry (Elsevier, New York, 1992), Vol. 1.
- [99] Risken H, Fokker-Planck equation, in The Fokker-Planck Equation (Springer, New York, 1996), pp. 63–95.
- [100] Itô K, Stochastic integral, Proc. Imp. Acad. (Tokyo) 20, 519 (1944).
- [101] Baccelli F and Taillefumier T, Replica-mean-field limits for intensity-based neural networks, SIAM J. Appl. Dyn. Syst. 18, 1756 (2019).
- [102] Baccelli F and Taillefumier T, The pair-replica-mean-field limit for intensity-based neural networks, SIAM J. Appl. Dyn. Syst. 20, 165 (2021).
- [103] Yu L and Taillefumier TO, Metastable spiking networks in the replica-mean-field limit, PLoS Comput. Biol. 18, e1010215 (2022).
- [104] Baccelli F and Brémaud P, The Palm calculus of point processes, in Elements of Queueing Theory (Springer, New York, 2003), pp. 1–74.
- [105] London M, Roth A, Beeren L, Häusser M, and Latham PE, Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex, Nature (London) 466, 123 (2010).
- [106] Abbott LF and van Vreeswijk C, Asynchronous states in networks of pulse-coupled oscillators, Phys. Rev. E 48, 1483 (1993).
- [107] Baladron J, Fasoli D, Faugeras O, and Touboul J, Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons, J. Math. Neurosci. 2, 10 (2012).
- [108] Touboul J, Hermann G, and Faugeras O, Noise-induced behaviors in neural mean field dynamics, SIAM J. Appl. Dyn. Syst. 11, 49 (2012).
- [109] Robert P and Touboul J, On the dynamics of random neuronal networks, J. Stat. Phys. 165, 545 (2016).
- [110] Averbeck BB, Latham PE, and Pouget A, Neural correlations, population coding and computation, Nat. Rev. Neurosci. 7, 358 (2006).
- [111] Ecker AS, Berens P, Tolias AS, and Bethge M, The effect of noise correlations in populations of diversely tuned neurons, J. Neurosci. 31, 14272 (2011).
- [112] Hu Y, Zylberberg J, and Shea-Brown E, The sign rule and beyond: Boundary effects, flexibility, and noise correlations in neural population codes, PLoS Comput. Biol. 10, e1003469 (2014).
- [113] Buzsáki G and Mizuseki K, The log-dynamic brain: How skewed distributions affect network operations, Nat. Rev. Neurosci. 15, 264 (2014).
- [114] Iyer R, Menon V, Buice M, Koch C, and Mihalas S, The influence of synaptic weight distribution on neuronal population dynamics, PLoS Comput. Biol. 9, e1003248 (2013).
- [115] Amarasingham A, Geman S, and Harrison MT, Ambiguity and nonidentifiability in the statistical analysis of neural codes, Proc. Natl. Acad. Sci. U.S.A. 112, 6455 (2015).
- [116] Scholl B, Thomas CI, Ryan MA, Kamasawa N, and Fitzpatrick D, Cortical response selectivity derives from strength in numbers of synapses, Nature (London) 590, 111 (2021).
- [117] Smith MA and Kohn A, Spatial and temporal scales of neuronal correlation in primary visual cortex, J. Neurosci. 28, 12591 (2008).
- [118] Smith MA and Sommer MA, Spatial and temporal scales of neuronal correlation in visual area V4, J. Neurosci. 33, 5422 (2013).
- [119] Litwin-Kumar A and Doiron B, Slow dynamics and high variability in balanced cortical networks with clustered connections, Nat. Neurosci. 15, 1498 (2012).
- [120] de la Rocha J, Doiron B, Shea-Brown E, Josić K, and Reyes A, Correlation between neural spike trains increases with firing rate, Nature (London) 448, 802 (2007).
- [121] Rosenbaum R, Smith MA, Kohn A, Rubin JE, and Doiron B, The spatial structure of correlated neuronal variability, Nat. Neurosci. 20, 107 (2017).
- [122] Sznitman A-S, Topics in propagation of chaos, in Proceedings of the École d'Été de Probabilités de Saint-Flour XIX—1989, Lecture Notes Math. Vol. 1464 (Springer, Berlin, 1991), pp. 165–251.
- [123] Erny X, Löcherbach E, and Loukianova D, Conditional propagation of chaos for mean field systems of interacting neurons, Electron. J. Probab. 26, 1 (2021).