Published in final edited form as: Phys Rev X. 2024 Feb 16;14(1):011021. doi: 10.1103/PhysRevX.14.011021

Exact Analysis of the Subthreshold Variability for Conductance-Based Neuronal Models with Synchronous Synaptic Inputs

Logan A Becker 1,2, Baowang Li 1,2,3,4,5, Nicholas J Priebe 1,2,4, Eyal Seidemann 1,2,3,5, Thibaud Taillefumier 1,2,6,*

Abstract

The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. While models of asynchronous neurons can account for the observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically, we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects the postspiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime yields realistic subthreshold variability (voltage variance ≃4–9 mV²) only when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations. We also show that, without synchrony, the neural variability averages out to zero for all scaling limits with vanishing synaptic weights, independent of any balanced state hypothesis. This result challenges the theoretical basis for mean-field theories of the asynchronous state.

Subject Areas: Biological Physics, Complex Systems, Interdisciplinary Physics

I. INTRODUCTION

A common and striking feature of cortical activity is the high degree of neuronal spiking variability [1]. This high variability is notably present in sensory cortex and motor cortex, as well as in regions with intermediate representations [2–5]. The prevalence of this variability has made it a major constraint for modeling cortical networks. Cortical networks may operate in distinct regimes depending on species, cortical area, and brain states. In the asleep or anesthetized state, neurons tend to fire synchronously, with strong correlations between the firing of distinct neurons [6–8]. In the awake state, although synchrony has been reported as well, stimulus drive, arousal, or attention tend to promote an irregular firing regime whereby neurons spike in a seemingly random manner, with decreased or little correlation [1,8,9]. This has led to the hypothesis that cortex primarily operates asynchronously [10–12]. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. That said, the asynchronous state hypothesis appears at odds with the high degree of spiking variability observed in cortex. Cortical neurons are thought to receive a large number of synaptic inputs (≃10⁴) [13]. Although the impact of these inputs may vary across synapses, the law of large numbers implies that variability should average out when integrated at the soma. In principle, this would lead to clock-like spiking responses, contrary to experimental observations [14].

A number of mechanisms have been proposed to explain how high spiking variability emerges in cortical networks [15]. The prevailing approach posits that excitatory and inhibitory inputs converge on cortical neurons in a balanced manner. In balanced models, the overall excitatory and inhibitory drives cancel each other, so that transient imbalances in the drive can bring the neuron's membrane voltage across the spike-initiation threshold. Such balanced models result in spiking statistics that match those found in the neocortex [16,17]. However, these statistics can emerge in distinct dynamical regimes depending on whether the balance between excitation and inhibition is tight or loose [18]. In tightly balanced networks, whereby the net neuronal drive is negligible compared to the antagonizing components, activity correlation is effectively zero, leading to a strictly asynchronous regime [19–21]. By contrast, in loosely balanced networks, the net neuronal drive remains of the same order as the antagonizing components, which allows for strong neuronal correlations during evoked activity, compatible with a synchronous regime [22–24].

While the high spiking variability is an important constraint for cortical network modeling, there are other biophysical signatures that may be employed. We now have access to the subthreshold membrane voltage fluctuations that underlie spikes in awake, behaving animals (see Fig. 1). Membrane voltage recordings reveal two main deviations from the asynchronous hypothesis: First, the membrane voltage does not hover near the spiking threshold and is modulated by the synaptic drive; second, it exhibits state- or stimulus-dependent non-Gaussian fluctuation statistics with positive skewness [25–28]. In this work, we further argue that membrane voltage recordings reveal much larger voltage fluctuations than predicted by balanced cortical models [29,30].

FIG. 1.

Large trial-by-trial membrane voltage fluctuations. Membrane voltage responses are shown using whole-cell recordings in awake behaving primates for both fixation-alone trials (left) and visual stimulation trials (right). A drifting grating is presented for 1 s beginning at the arrow. Below the membrane voltage traces are records of horizontal and vertical eye movements, illustrating that the animal was fixating during the stimulus. Red and green traces indicate different trials under the same conditions. Adapted from Ref. [27].

How could such large subthreshold variations in membrane voltage emerge? One way that fluctuations could emerge, even for large numbers of inputs, is if there is synchrony in the driving inputs [31]. In practice, input synchrony is revealed by the presence of positive spiking correlations, which quantify the propensity of distinct synaptic inputs to coactivate. Measurements of spiking correlations between pairs of neurons vary across reports but have generally been shown to be weak [10–12]. That said, even weak correlations can have a large impact when the population of correlated inputs is large [32,33]. Furthermore, the existence of input synchrony, supported by weak but persistent spiking correlations, is consistent with at least two other experimental observations. First, intracellular recordings from pairs of neurons in both anesthetized and awake animals reveal a high degree of membrane voltage correlations [7,34,35]. Second, excitatory and inhibitory conductance inputs are highly correlated with each other within the same neuron [35,36]. These observations suggest that input synchrony could explain the observed level of subthreshold variability.

While our focus is on achieving realistic subthreshold variability, other challenges to asynchronous networks have been described. In particular, real neural networks exhibit distinct regimes of activity depending on the strength of their afferent drives. In that respect, Zerlaut et al. [37] showed that asynchronous networks can exhibit a spectrum of realistic regimes of activity if they have moderate recurrent connections and are driven by strong thalamic projections (see also Ref. [17]). Furthermore, it has been a challenge to identify the scaling rule that should apply to synaptic strengths for asynchrony to hold stably in idealized networks. Recently, Sanzeni, Histed, and Brunel [38] proposed that a realistic asynchronous regime is achieved for a particular large-coupling rule, whereby synaptic strengths scale with the logarithm of the network size. Both studies consider balanced networks with conductance-based neuronal models, but neither focuses on the role of synchrony, consistent with the asynchronous state hypothesis. The asynchronous state hypothesis is theoretically attractive, because it represents a naturally stable regime of activity in infinite-size, balanced networks of current-based neuronal models [16,17,20,21]. Such neuronal models, however, neglect the voltage dependence of conductances, and it remains unclear whether the asynchronous regime is asymptotically stable for infinite-size, conductance-based network models.

Here, independent of the constraint of network stability, we ask whether biophysically relevant neuronal models can achieve the observed subthreshold variability under realistic levels of input synchrony. To answer this question, we derive exact analytical expressions for the stationary voltage variance of a single conductance-based neuron in response to synchronous shot-noise drives [39,40]. A benefit of shot-noise models compared to diffusion models is to allow individual synaptic inputs to be temporally separated into distinct impulses, each corresponding to a transient positive conductance fluctuation [41–43]. We develop our shot-noise analysis for a variant of classically considered neuronal models. We call this variant the all-or-none-conductance-based model, for which synaptic activation occurs as an all-or-none process rather than as an exponentially relaxing process. To perform an exact treatment of these models, we develop original probabilistic techniques inspired by Marcus's work on shot-noise-driven dynamics [44,45]. To model shot-noise drives with synchrony, we develop a statistical framework based on the property of input exchangeability, which assumes that no synaptic inputs play a particular role. In this framework, we show that input drives with varying degrees of synchrony can be rigorously modeled via jump processes, while synchrony can be quantitatively related to measures of pairwise spiking correlations.

Our main results are biophysically interpretable formulas for the voltage mean and variance in the limit of instantaneous synapses. Crucially, these formulas explicitly depend on the input numbers, weights, and synchrony, and they hold without any form of diffusion approximation. This is in contrast with analytical treatments that elaborate on the diffusion and effective-time-constant approximations [37,38,46,47]. We leverage these exact, explicit formulas to determine under which synchrony conditions a neuron can achieve the experimentally observed subthreshold variability. For biophysically relevant synaptic numbers and weights, we find that achieving realistic variability is possible in response to a restricted number of large asynchronous connections, compatible with the dominance of thalamo-cortical projections in the input layers of the visual cortex. However, we find that achieving realistic variability in response to a large number of moderate cortical inputs, as in superficial cortical visual layers, necessitates nonzero input synchrony, in amounts that are consistent with the weak spiking correlations measured in vivo.

In practice, persistent synchrony may spontaneously emerge in large but finite neural networks, as nonzero correlations are the hallmark of finite-dimensional interacting dynamics. The network structural features responsible for the magnitude of such correlations remain unclear, and we do not address this question here (see Refs. [48,49] for review). The persistence of synchrony is also problematic for theoretical approaches that consider networks in the infinite-size limit. Indeed, our analysis shows that, in the absence of synchrony and for all scalings of the synaptic weights, subthreshold variability must vanish in the limit of arbitrarily large numbers of synapses. This suggests that, independent of any balanced condition, the mean-field dynamics that emerge in infinite-size networks of conductance-based neurons will not exhibit Poisson-like spiking variability, at least in the absence of additional constraints on the network structure or on the biophysical properties of the neurons. In current-based neuronal models, by contrast, variability is not dampened by a conductance-dependent effective time constant. These findings, therefore, challenge the theoretical basis for the asynchronous state in conductance-based neuronal networks.

Our exact analysis, as well as its biophysical interpretations, is possible only at the cost of several caveats: First, we neglect the impact of the spike-generating mechanism (and of the postspiking reset) in shaping the subthreshold variability. Second, we quantify synchrony under the assumption of input exchangeability, that is, for synapses having a typical strength as opposed to being heterogeneous. Third, we consider input drives that implement an instantaneous form of synchrony with temporally precise synaptic coactivations. Fourth, we do not consider slow temporal fluctuations in the mean synaptic drive. Fifth, and perhaps most concerning, we do not account for the stable emergence of a synchronous regime in network models. We argue in the discussion that all the above caveats but the last one can be addressed without impacting our findings. Addressing the last caveat remains an open problem.

For reference, we list in Table I the main notations used in this work. These notations use the subscripts $\cdot_e$ and $\cdot_i$ to refer to excitation and inhibition, respectively. The notation $\cdot_{e/i}$ means that the subscript can be either $\cdot_e$ or $\cdot_i$. The notation $\cdot_{ei}$ is used to emphasize that a quantity depends jointly on excitation and inhibition.

TABLE I.

Main notations.

$a_{e/i,1}$  First-order synaptic efficacies
$a_{e/i,2}$  Second-order synaptic efficacies
$a_{e/i,12}$  Auxiliary second-order synaptic efficacies
$b$  Rate of the driving Poisson process $N$
$b_{e/i}$  Rate of the excitatory or inhibitory Poisson process $N_{e/i}$
$C$  Membrane capacitance
$c_{ei}$  Cross-correlation synaptic efficacy
$\mathbb{C}[\cdot,\cdot]$  Stationary covariance
$\mathbb{E}[\cdot]$  Stationary expectation
$\mathbb{E}_{ei}[\cdot]$  Expectation with respect to the joint distribution $p_{ei}$ or $p_{ei,kl}$
$\mathbb{E}_{e/i}[\cdot]$  Expectation with respect to the marginal distribution $p_{e/i}$ or $p_{e/i,k}$
$\epsilon=\tau_s/\tau$  Fast-conductance small parameter
$G$  Passive leak conductance
$g_{e/i}$  Overall excitatory or inhibitory conductance
$h_{e/i}=g_{e/i}/C$  Reduced excitatory or inhibitory conductance
$k_{e/i}$  Number of coactivating excitatory or inhibitory synaptic inputs
$K_{e/i}$  Total number of excitatory or inhibitory synaptic inputs
$N$  Driving Poisson process with rate $b$
$N_{e/i}$  Excitatory or inhibitory driving Poisson process with rate $b_{e/i}$
$p_{ei}$  Bivariate jump distribution of $(W_e,W_i)$
$p_{e/i}$  Marginal jump distribution of $W_{e/i}$
$p_{ei,kl}$  Bivariate distribution for the numbers of coactivating synapses $(k_e,k_i)$
$p_{e/i,k}$  Marginal distribution of the synaptic count $k_{e/i}$
$r_{e/i}$  Individual excitatory or inhibitory synaptic rate
$\rho_{ei}$  Spiking correlation between excitatory and inhibitory inputs
$\rho_{e/i}$  Spiking correlation within excitatory or inhibitory inputs
$\tau$  Passive membrane time constant
$\tau_s$  Synaptic time constant
$\mathbb{V}[\cdot]$  Stationary variance
$W_{e/i}$  Excitatory or inhibitory random jumps
$V_{e/i}$  Excitatory or inhibitory reversal potentials
$w_{e/i}$  Typical value for excitatory or inhibitory synaptic weights
$X_k$  Binary variable indicating the activation of excitatory synapse $k$
$Y_l$  Binary variable indicating the activation of inhibitory synapse $l$
$Z$  Driving compound Poisson process with base rate $b$ and jump distribution $p_{ei}$

II. STOCHASTIC MODELING AND ANALYSIS

A. All-or-none-conductance-based neurons

We consider the subthreshold dynamics of an original neuronal model, which we call the all-or-none-conductance-based (AONCB) model. In this model, the membrane voltage $V$ obeys the first-order stochastic differential equation

$$C\dot{V} = G(V_L - V) + g_e(V_e - V) + g_i(V_i - V) + I, \qquad (1)$$

where randomness arises from the stochastically activating excitatory and inhibitory conductances, respectively denoted by $g_e$ and $g_i$ [see Fig. 2(a)]. These conductances result from the action of $K_e$ excitatory and $K_i$ inhibitory synapses: $g_e(t)=\sum_{k=1}^{K_e}g_{e,k}(t)$ and $g_i(t)=\sum_{k=1}^{K_i}g_{i,k}(t)$. In the absence of synaptic inputs, i.e., when $g_e=g_i=0$, and of external current $I$, the voltage exponentially relaxes toward its leak reversal potential $V_L$ with passive time constant $\tau=C/G$, where $C$ denotes the cell's membrane capacitance and $G$ denotes the cellular passive conductance [50]. In the presence of synaptic inputs, the transient synaptic currents $I_e=g_e(V_e-V)$ and $I_i=g_i(V_i-V)$ cause the membrane voltage to fluctuate. Conductance-based models account for the voltage dependence of synaptic currents via the driving forces $V_e-V$ and $V_i-V$, where $V_e$ and $V_i$ denote the excitatory and inhibitory reversal potentials, respectively. Without loss of generality, we assume in the following that $V_L=0$ and that $V_i<V_L=0<V_e$.

FIG. 2.

All-or-none-conductance-based models. (a) Electrical diagram of the conductance-based model, for which the neuronal voltage $V$ evolves in response to fluctuations of the excitatory and inhibitory conductances $g_e$ and $g_i$. (b) In all-or-none models, inputs delivered as Poisson processes transiently activate the excitatory and inhibitory conductances $g_e$ and $g_i$ during a finite, nonzero synaptic activation time $\tau_s>0$. Simulation parameters: $K_e=K_i=50$, $r_e=r_i=10\,\mathrm{Hz}$, $\tau=15\,\mathrm{ms}$, and $\tau_s=2\,\mathrm{ms}$.

We model the spiking activity of the $K_e+K_i$ upstream neurons as shot noise [39,40], which can be generically modeled as a $(K_e+K_i)$-dimensional stochastic point process [51,52]. Let us denote by $\{N_{e,k}(t)\}_{1\le k\le K_e}$ its excitatory component and by $\{N_{i,k}(t)\}_{1\le k\le K_i}$ its inhibitory component, where $t$ denotes time and $k$ is the neuron index. For each neuron $k$, the process $N_{e/i,k}(t)$ is specified as the counting process registering the spiking occurrences of neuron $k$ up to time $t$. In other words, $N_{e/i,k}(t)=\sum_n 1_{\{T_{e/i,k,n}\le t\}}$, where $\{T_{e/i,k,n}\}_{n\in\mathbb{Z}}$ denotes the full sequence of spiking times of neuron $k$ and where $1_A$ denotes the indicator function of set $A$. Note that, by convention, we label spikes so that $T_{e/i,k,0}\le 0<T_{e/i,k,1}$ for all neurons $k$. Given a point-process model for the upstream spiking activity, classical conductance-based models consider that a single input to a synapse causes an instantaneous increase of its conductance, followed by an exponential decay with typical timescale $\tau_s>0$. Here, we depart from this assumption and consider that the synaptic conductances $g_{e/i,k}$ operate in an all-or-none fashion with a common activation time still referred to as $\tau_s$. Specifically, we assume that the dynamics of the conductance $g_{e/i,k}$ follows

$$\tau_s\,\dot{g}_{e/i,k}(t) = C w_{e/i,k}\sum_n \left[\delta\!\left(t-T_{e/i,k,n}\right)-\delta\!\left(t-T_{e/i,k,n}-\tau_s\right)\right], \qquad (2)$$

where $w_{e/i,k}\ge 0$ is the dimensionless synaptic weight. The above equation prescribes that the $n$th spike delivery to synapse $k$ at time $T_{e/i,k,n}$ is followed by an instantaneous increase of that synapse's conductance by an amount $Cw_{e/i,k}/\tau_s$ for a period $\tau_s$. Thus, the synaptic response prescribed by Eq. (2) is all-or-none, as opposed to being graded as in classical conductance-based models. Moreover, just as in classical models, Eq. (2) allows synapses to multiactivate, thereby neglecting nonlinear synaptic saturation [see Fig. 2(b)].
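As a concrete illustration, the boxcar kinetics of Eq. (2) can be sketched in a few lines of Python (our sketch, not the authors' code; the spike times and parameter values are arbitrary):

```python
import numpy as np

# Sketch of the all-or-none kinetics of Eq. (2): integrating the equation, each
# spike at time T raises the conductance by C*w/tau_s over [T, T + tau_s), so
# the charge transferred per event is independent of tau_s. Overlapping pulses
# simply add, reflecting that synapses may multiactivate without saturation.
def aoncb_conductance(t, spike_times, w, C=1.0, tau_s=2e-3):
    g = np.zeros_like(t)
    for T in spike_times:
        g += np.where((t >= T) & (t < T + tau_s), C * w / tau_s, 0.0)
    return g

t = np.linspace(0.0, 0.1, 10_001)                  # 100 ms of time, 10 us steps
g = aoncb_conductance(t, spike_times=[0.020, 0.021, 0.060], w=0.01)
print(np.trapz(g, t))                              # ~ C * w * (number of spikes)
```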

To be complete, AONCB neurons must, in principle, include a spike-generating mechanism. A customary choice is the integrate-and-fire mechanism [53,54]: A neuron emits a spike whenever its voltage $V$ exceeds a threshold value $V_T$ and resets instantaneously to some value $V_R$ afterward. Such a mechanism impacts the neuronal subthreshold voltage dynamics via the postspiking reset, which implements a nonlinear form of feedback. However, in this work, we focus on the variability that is generated by fluctuating, possibly synchronous, synaptic inputs. For this reason, we neglect the influence of the spiking reset in our analysis, and, actually, we ignore the spike-generating mechanism altogether. Finally, although our analysis of AONCB neurons applies to positive synaptic activation times $\tau_s>0$, we discuss our results only in the limit of instantaneous synapses. This corresponds to taking $\tau_s\to 0^+$ while adopting the scaling $g_{e/i}\propto 1/\tau_s$ in order to maintain the charge transfer induced by a synaptic event. We will see that this limiting process preserves the response variability of AONCB neurons.

B. Quantifying the synchrony of exchangeable synaptic inputs

Our goal here is to introduce a discrete model for synaptic inputs, whereby synchrony can be rigorously quantified. To this end, let us suppose that the neuron under consideration receives inputs from $K_e$ excitatory neurons and $K_i$ inhibitory neurons, chosen from arbitrarily large pools of $N_e\ge K_e$ excitatory neurons and $N_i\ge K_i$ inhibitory neurons. Adopting a discrete-time representation with elementary bin size $\Delta t$, we denote by $(x_{1,n},\ldots,x_{K_e,n},y_{1,n},\ldots,y_{K_i,n})$ in $\{0,1\}^{K_e}\times\{0,1\}^{K_i}$ the input state within the $n$th bin. Our main simplifying assumption consists in modeling the $N_e$ excitatory inputs and the $N_i$ inhibitory inputs as separately exchangeable random variables $(X_{1,n},\ldots,X_{N_e,n})$ and $(Y_{1,n},\ldots,Y_{N_i,n})$ that are distributed identically over $\{0,1\}^{N_e}$ and $\{0,1\}^{N_i}$, respectively, and independently across time. This warrants dropping the dependence on the time index $n$. By separately exchangeable, we mean that no subset of excitatory inputs or inhibitory inputs plays a distinct role so that, at all times, the respective distributions of $(X_{1,n},\ldots,X_{N_e,n})$ and $(Y_{1,n},\ldots,Y_{N_i,n})$ are independent of the input labeling. In other words, for all permutations $\sigma_e$ of $\{1,\ldots,N_e\}$ and $\sigma_i$ of $\{1,\ldots,N_i\}$, the joint distribution of $(X_{\sigma_e(1)},\ldots,X_{\sigma_e(N_e)})$ and $(Y_{\sigma_i(1)},\ldots,Y_{\sigma_i(N_i)})$ is identical to that of $(X_1,\ldots,X_{N_e})$ and $(Y_1,\ldots,Y_{N_i})$ [55,56]. By contrast with independent random spiking variables, exchangeable ones can exhibit a nonzero correlation structure. By symmetry, this structure is specified by three correlation coefficients:

$$\rho_e = \frac{\mathbb{C}[X_k,X_l]}{\mathbb{V}[X_k]},\qquad \rho_i = \frac{\mathbb{C}[Y_k,Y_l]}{\mathbb{V}[Y_k]},\qquad \rho_{ei} = \frac{\mathbb{C}[X_k,Y_l]}{\sqrt{\mathbb{V}[X_k]\,\mathbb{V}[Y_l]}},$$

where $\mathbb{C}[X,Y]$ and $\mathbb{V}[X]$ denote the covariance and the variance of the binary variables $X$ and $Y$, respectively.

Interestingly, a more explicit form for $\rho_e$, $\rho_i$, and $\rho_{ei}$ can be obtained in the limit of infinite-size pools, $N_e,N_i\to\infty$. This follows from de Finetti's theorem [57], which states that the probability of observing a given input configuration for $K_e$ excitatory neurons and $K_i$ inhibitory neurons is given by

$$P\!\left(X_1,\ldots,X_{K_e},Y_1,\ldots,Y_{K_i}\right) = \int \prod_{k=1}^{K_e}\theta_e^{X_k}(1-\theta_e)^{1-X_k}\prod_{l=1}^{K_i}\theta_i^{Y_l}(1-\theta_i)^{1-Y_l}\,\mathrm{d}F_{ei}(\theta_e,\theta_i),$$

where $F_{ei}$ is the directing de Finetti measure, defined as a bivariate distribution over the unit square $[0,1]\times[0,1]$. In the equation above, the numbers $\theta_e$ and $\theta_i$ represent the (jointly fluctuating) probabilities that an excitatory neuron and an inhibitory neuron spike in a given time bin, respectively. The core message of de Finetti's theorem is that the spiking activity of neurons from infinite exchangeable pools is obtained as a mixture of conditionally independent binomial laws. This mixture is specified by the directing measure $F_{ei}$, which fully parametrizes our synchronous input model. Independent spiking corresponds to choosing $F_{ei}$ as a point-mass measure concentrated on some probabilities $\pi_{e/i}=r_{e/i}\Delta t$, where $r_{e/i}$ denotes the individual spiking rate of a neuron: $\mathrm{d}F_{ei}(\theta_e,\theta_i)=\delta(\theta_e-\pi_e)\,\delta(\theta_i-\pi_i)\,\mathrm{d}\theta_e\mathrm{d}\theta_i$ [see Fig. 3(a)]. By contrast, a dispersed directing measure $F_{ei}$ corresponds to the existence of correlations among the inputs [see Fig. 3(b)]. Accordingly, we show in Appendix A that the pairwise spiking correlation $\rho_{e/i}$ takes the explicit form

$$\rho_{e/i} = \frac{\mathbb{V}[\theta_{e/i}]}{\mathbb{E}[\theta_{e/i}]\left(1-\mathbb{E}[\theta_{e/i}]\right)}, \qquad (3)$$

whereas ρei, the correlation between excitation and inhibition, is given by

$$\rho_{ei} = \frac{\mathbb{C}[\theta_e,\theta_i]}{\sqrt{\mathbb{E}[\theta_e]\,\mathbb{E}[\theta_i]\left(1-\mathbb{E}[\theta_e]\right)\left(1-\mathbb{E}[\theta_i]\right)}}. \qquad (4)$$

In the above formulas, $\mathbb{E}[\theta_{e/i}]$, $\mathbb{V}[\theta_{e/i}]$, and $\mathbb{C}[\theta_e,\theta_i]$ denote the expectation, variance, and covariance of $(\theta_e,\theta_i)\sim F_{ei}$, respectively. Note that these formulas show that nonzero correlations $\rho_{e/i}$ correspond to nonzero variance, as is always the case for dispersed distributions. Independence between excitation and inhibition, for which $\rho_{ei}=0$, corresponds to a directing measure $F_{ei}$ with product form, i.e., $F_{ei}(\theta_e,\theta_i)=F_e(\theta_e)F_i(\theta_i)$, where $F_e$ and $F_i$ denote the marginal distributions. Alternative forms of the directing measure $F_{ei}$ generally lead to nonzero cross-correlations $\rho_{ei}$, which necessarily satisfy $0<\rho_{ei}\le\sqrt{\rho_e\rho_i}$.
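The de Finetti construction also gives a direct recipe for sampling synchronous inputs, which we sketch below (our illustration; the beta parameters are arbitrary): draw a bin-wise spiking probability from the directing measure, then draw conditionally independent Bernoulli variables, and compare the empirical pairwise correlation with Eq. (3).

```python
import numpy as np

# Sampling exchangeable inputs via de Finetti's mixture: theta_e fluctuates
# across bins according to the directing measure (here a beta distribution),
# and inputs are conditionally i.i.d. Bernoulli(theta_e) within each bin.
rng = np.random.default_rng(0)
alpha_e, beta_e, K_e, n_bins = 0.05, 4.0, 100, 200_000

theta = rng.beta(alpha_e, beta_e, size=n_bins)     # fluctuating spike probability
X = rng.random((n_bins, K_e)) < theta[:, None]     # exchangeable {0,1} inputs

# Eq. (3) evaluated from the moments of theta_e, its closed form for beta
# marginals, and the empirical correlation of an (arbitrary) input pair.
rho_eq3 = theta.var() / (theta.mean() * (1.0 - theta.mean()))
print(rho_eq3, 1.0 / (1.0 + alpha_e + beta_e), np.corrcoef(X[:, 0], X[:, 1])[0, 1])
```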

FIG. 3.

Parametrizing correlations via exchangeability. The activity of $K_e=100$ exchangeable synaptic inputs collected over $N$ consecutive time bins can be represented as a $\{0,1\}$-valued array $\{X_{k,i}\}_{1\le k\le K_e,\,1\le i\le N}$, where $X_{k,i}=1$ if input $k$ activates in time bin $i$. Under assumptions of exchangeability, the input spiking correlation is entirely captured by the count statistics of how many inputs coactivate within a given time bin. In the limit $K_e\to\infty$, the distribution of the fraction of coactivating inputs coincides with the directing de Finetti measure, which we consider as a parametric choice in our approach. In the absence of correlation, synapses tend to activate in isolation: $\rho_e=0$ in (a). In the presence of correlation, synapses tend to coactivate, yielding disproportionately large synaptic activation events: $\rho_e=0.1$ in (b). Considering the associated cumulative counts specifies discrete-time jump processes that can be generalized to the continuous-time limit, i.e., for time bins of vanishing duration $\Delta t\to 0^+$.

In this exchangeable setting, a reasonable parametric choice for the marginals $F_e$ and $F_i$ is given by beta distributions $\mathrm{Beta}(\alpha,\beta)$, where $\alpha$ and $\beta$ denote shape parameters [58]. Practically, this choice is motivated by the ability of beta distributions to efficiently fit correlated spiking data generated by existing algorithms [59]. Formally, this choice is motivated by the fact that beta distributions are conjugate priors for binomial likelihood functions, so that the resulting probabilistic models can be studied analytically [60–62]. For instance, for $F_e\sim\mathrm{Beta}(\alpha_e,\beta_e)$, the probability that $k$ synapses among the $K_e$ inputs are jointly active within the same time bin follows the beta-binomial distribution

$$P_{e,k} = \binom{K_e}{k}\frac{B(\alpha_e+k,\beta_e+K_e-k)}{B(\alpha_e,\beta_e)}. \qquad (5)$$

Accordingly, the mean number of active excitatory inputs is $\mathbb{E}[k_e]=K_e\alpha_e/(\alpha_e+\beta_e)=K_er_e\Delta t$. Utilizing Eq. (3), we also find that $\rho_e=1/(1+\alpha_e+\beta_e)$. Note that the above results show that, by changing de Finetti's measure, one can modify not only the spiking correlation, but also the mean spiking rate.
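A minimal numerical check of Eq. (5) and of these moment formulas reads as follows (our sketch; the shape parameters are arbitrary):

```python
import numpy as np
from scipy.special import betaln, comb

# Beta-binomial count law of Eq. (5), checked against the moments in the text.
def beta_binomial_pmf(k, K, a, b):
    return comb(K, k) * np.exp(betaln(a + k, b + K - k) - betaln(a, b))

K_e, alpha_e, beta_e = 100, 0.05, 4.0
k = np.arange(K_e + 1)
P = beta_binomial_pmf(k, K_e, alpha_e, beta_e)

print(P.sum())                                            # ~1: proper distribution
print((k * P).sum(), K_e * alpha_e / (alpha_e + beta_e))  # E[k_e], two forms
print(1.0 / (1.0 + alpha_e + beta_e))                     # rho_e for beta marginals
```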

In the following, we exploit the above analytical results to illustrate that taking the continuous-time limit $\Delta t\to 0^+$ specifies synchronous input drives as compound Poisson processes [51,52]. To do so, we consider both excitation and inhibition, which in a discrete setting corresponds to considering bivariate probability distributions $P_{ei,kl}$ defined over $\{0,\ldots,K_e\}\times\{0,\ldots,K_i\}$. Ideally, these distributions $P_{ei,kl}$ should be such that their marginals $P_{e,k}$ and $P_{i,l}$ are of the form given by Eq. (5). Unfortunately, there does not seem to be a simple low-dimensional parametrization for such distributions $P_{ei,kl}$, except in particular cases. To address this point, at least numerically, one can resort to a variety of methods, including copulas [63,64]. For analytical calculations, we consider only two particular cases for which the marginals of $F_{ei}$ are given by beta distributions: (i) the case of maximum positive correlation, for which $\theta_e=\theta_i$, i.e., $\mathrm{d}F_{ei}(\theta_e,\theta_i)=\delta(\theta_e-\theta_i)F(\theta_e)\,\mathrm{d}\theta_e\mathrm{d}\theta_i$ with $F_e=F_i=F$, and (ii) the case of zero correlation, for which $\theta_e$ and $\theta_i$ are independent, i.e., $F_{ei}(\theta_e,\theta_i)=F_e(\theta_e)F_i(\theta_i)$.

C. Synchronous synaptic drives as compound Poisson processes

Under the assumption of input exchangeability and given typical excitatory and inhibitory synaptic weights $w_{e/i}$, the overall synaptic drive to a neuron is determined by $(k_e,k_i)$, the numbers of active excitatory and inhibitory inputs at each discrete time step. As AONCB dynamics unfolds in continuous time, we need to consider this discrete drive in the continuous-time limit as well, i.e., for vanishing time bins $\Delta t\to 0^+$. When $\Delta t\to 0^+$, we show in Appendix B that the overall synaptic drive specifies a compound Poisson process $Z$ with bivariate jumps $(W_e,W_i)$. Specifically, we have

$$Z_t = \left(\sum_{n=1}^{N_t} W_{e,n},\; \sum_{n=1}^{N_t} W_{i,n}\right), \qquad (6)$$

where $(W_{e,n},W_{i,n})$ are i.i.d. samples with bivariate distribution denoted by $p_{ei}$ and where the overall driving Poisson process $N$ registers the number of synaptic events without multiple counts (see Fig. 4). By synaptic events, we mean those times for which at least one excitatory synapse or one inhibitory synapse activates. We say that $N$ registers these events without multiple counts, as it counts one event independent of the number of possibly coactivating synapses. Similarly, we denote by $N_e$ and $N_i$ the counting processes registering synaptic excitatory events and synaptic inhibitory events alone, respectively. These processes $N_e$ and $N_i$ are Poisson processes that are correlated in the presence of synchrony, as both $N_e$ and $N_i$ may register the same event. Note that this implies that $\max(N_e(t),N_i(t))\le N(t)\le N_e(t)+N_i(t)$. More generally, denoting by $b$ and $b_{e/i}$ the rates of $N$ and $N_{e/i}$, respectively, the presence of synchrony implies that $\max(b_e,b_i)\le b\le b_e+b_i$ and $r_{e/i}\le b_{e/i}\le K_{e/i}r_{e/i}$, where $r_{e/i}$ is the typical activation rate of a single synapse.
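To make this limiting construction concrete, the sketch below (ours) generates the excitatory component of Eq. (6) directly from the discrete exchangeable model; tying $\alpha_e$ to the bin size as done here is our way of keeping the spiking rate $r_e$ fixed as $\Delta t$ shrinks.

```python
import numpy as np

# Discrete-to-continuous construction of the excitatory drive: bins with at
# least one activation become synaptic events (T_n, W_{e,n}) of Eq. (6).
rng = np.random.default_rng(1)
dt, T, K_e, w_e = 1e-4, 200.0, 100, 0.001
r_e, rho_e = 10.0, 0.03
beta_e = 1.0 / rho_e - 1.0                        # since rho_e = 1/(1 + beta_e)
alpha_e = r_e * dt * beta_e / (1.0 - r_e * dt)    # enforces E[theta_e] = r_e*dt

n_bins = int(T / dt)
theta = rng.beta(alpha_e, beta_e, n_bins)
counts = rng.binomial(K_e, theta)                 # coactivation counts per bin
events = np.flatnonzero(counts)                   # bins registering an event
times, jumps = events * dt, counts[events] * w_e  # event times and jump sizes

print(len(times) / T)        # empirical event rate, to be compared with b_e
print(jumps.mean() / w_e)    # mean event size, i.e., E_e[k] = K_e r_e / b_e
```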

FIG. 4.

Limit compound Poisson process with excitation and inhibition. (a) Under the assumption of partial exchangeability, synaptic inputs can be distinguished only by the fact that they are either excitatory or inhibitory, which is marked by being colored in red or blue, respectively, in the discrete representation of correlated synaptic inputs with bin size $\Delta t=1\,\mathrm{ms}$. Accordingly, considering excitation and inhibition separately specifies two associated input-count processes and two cumulative counting processes. For nonzero spiking correlation $\rho=0.03$, these processes are themselves correlated, as captured by the joint distribution of excitatory and inhibitory input counts $P_{ei,kl}$ (center) and by the joint distribution of excitatory and inhibitory jumps $P_{ei,kl}/(1-P_{00})$ (right). (b) The input-count distribution $P_{ei,kl}$ is a finite-size approximation of the bivariate directing de Finetti measure $F_{ei}$, which we consider as a parameter of our approach. For a smaller bin size $\Delta t=0.1\,\mathrm{ms}$, this distribution concentrates at $(0,0)$, as an increasing proportion of time bins does not register any synaptic events, be they excitatory or inhibitory. In the presence of correlation, however, the conditioned jump distribution remains correlated but also dispersed. (c) In the limit $\Delta t\to 0^+$, the input-count distribution is concentrated at $(0,0)$, consistent with the fact that the average number of synaptic activations remains constant while the number of bins diverges. By contrast, the distribution of synaptic event sizes conditioned to be distinct from $(0,0)$ converges toward a well-defined distribution: $p_{ei,kl}=\lim_{\Delta t\to 0^+}P_{ei,kl}/(1-P_{ei,00})$. This distribution characterizes the jumps of a bivariate compound Poisson process, obtained as the limit of the cumulative count process when considering $\Delta t\to 0^+$.

For simplicity, we explain how to obtain such limit compound Poisson processes by reasoning on the excitatory inputs alone. To this end, let us denote the marginal jump distribution of $W_e$ by $p_e$. Given a fixed typical synaptic weight $w_e$, the jumps are quantized as $W_e=kw_e$, with $k$ distributed on $\{1,\ldots,K_e\}$, as by convention jumps cannot have zero size. These jumps are naturally defined in the discrete setting, i.e., with $\Delta t>0$, and their discrete distribution is given via conditioning as $P_{e,k}/(1-P_{e,0})$. For beta-distributed marginals $F_e$, we show in Appendix B that considering $\Delta t\to 0^+$ yields the jump distribution

$$p_{e,k} = \lim_{\Delta t\to 0^+}\frac{P_{e,k}}{1-P_{e,0}} = \binom{K_e}{k}\frac{B(k,\beta_e+K_e-k)}{\psi(\beta_e+K_e)-\psi(\beta_e)}, \qquad (7)$$

where $\psi$ denotes the digamma function. In the following, we explicitly index discrete count distributions, e.g., $p_{e,k}$, to distinguish them from the corresponding jump distributions, e.g., $p_e$. Equation (7) follows from observing that the probability to find a spike within a bin is $\mathbb{E}[X_i]=\alpha_e/(\alpha_e+\beta_e)=r_e\Delta t$, so that, for fixed excitatory spiking rate $r_e$, $\alpha_e\to 0^+$ when $\Delta t\to 0^+$. As a result, the continuous-time spiking correlation is $\rho_e=1/(1+\beta_e)$, so that we can interpret $\beta_e$ as a parameter controlling correlations. More generally, we show in Appendix C that the limit correlation $\rho_e$ depends only on the count distribution $p_{e,k}$ via

$$\rho_e = \frac{\mathbb{E}_e[k(k-1)]}{\mathbb{E}_e[k]\left(K_e-1\right)}, \qquad (8)$$

where $\mathbb{E}_e[\cdot]$ denotes expectations with respect to $p_{e,k}$. This shows that zero spiking correlation corresponds to single synaptic activations, i.e., to an input drive modeled as a Poisson process, as opposed to a compound Poisson process. For Poisson-process models, the overall rate of synaptic events is necessarily equal to the sum of the individual spiking rates: $b_e=K_er_e$. This is no longer the case in the presence of synchronous spiking, when nonzero input correlation $\rho_e>0$ arises from coincidental synaptic activations. Indeed, as the population spiking rate is conserved when $\Delta t\to 0^+$, the rate of excitatory synaptic events $b_e$ governing $N_e$ satisfies $K_er_e=b_e\mathbb{E}_e[k]$ so that

$$b_e = \frac{K_er_e}{\mathbb{E}_e[k]} = r_e\beta_e\left[\psi(\beta_e+K_e)-\psi(\beta_e)\right]. \qquad (9)$$

Let us reiterate for clarity that, if $k_e$ synapses activate synchronously, this counts as only one synaptic event, which can come in variable size $k_e$. Consistently, we have, in general, $r_e\le b_e\le K_er_e$. When $\beta_e\to 0^+$, we have perfect synchrony with $\rho_e=1$ and $b_e\to r_e$, whereas the independent spiking regime with $\rho_e=0$ is attained for $\beta_e\to\infty$, for which we have $b_e\to K_er_e$.
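The identities of Eqs. (7)–(9) are easy to verify numerically, as in the sketch below (ours; the parameter values are illustrative):

```python
import numpy as np
from scipy.special import betaln, comb, digamma

# Limit jump law of Eq. (7) for beta-parametrized synchrony, with the
# correlation of Eq. (8) and the event rate of Eq. (9) as consistency checks.
K_e, r_e, rho_e = 100, 10.0, 0.03
beta_e = 1.0 / rho_e - 1.0                        # since rho_e = 1/(1 + beta_e)

k = np.arange(1, K_e + 1)
norm = digamma(beta_e + K_e) - digamma(beta_e)
p = comb(K_e, k) * np.exp(betaln(k, beta_e + K_e - k)) / norm    # Eq. (7)

Ek = (k * p).sum()
print(p.sum())                                            # ~1: proper jump law
print((k * (k - 1) * p).sum() / (Ek * (K_e - 1)), rho_e)  # Eq. (8) recovers rho_e
print(K_e * r_e / Ek, r_e * beta_e * norm)                # Eq. (9), two forms
```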

It is possible to generalize the above construction to mixed excitation and inhibition, but a closed-form treatment is possible only for special cases. For the independent case (ii), in the limit $\Delta t\to 0^+$, jumps are either excitatory alone or inhibitory alone; i.e., the jump distribution $p_{ei}$ has support on $\{1,\ldots,K_e\}\times\{0\}\cup\{0\}\times\{1,\ldots,K_i\}$. Accordingly, we show in Appendix D that

$$p_{ei,kl} = \lim_{\Delta t\to 0^+}\frac{P_{e,k}P_{i,l}}{1-P_{e,0}P_{i,0}} = \frac{b_e}{b_e+b_i}\,p_{e,k}\,1_{\{l=0\}} + \frac{b_i}{b_e+b_i}\,p_{i,l}\,1_{\{k=0\}}, \qquad (10)$$

where $p_{e/i,k}$ and $b_{e/i}$ are specified in Eqs. (7) and (9) by the parameters $\beta_{e/i}$ and $K_{e/i}$, respectively. This shows that, as expected, in the absence of synchrony the driving compound Poisson process $Z$ with bidimensional jumps is obtained as the direct sum of two independent compound Poisson processes. In particular, the driving processes are such that $N=N_e+N_i$, with rates satisfying $b=b_e+b_i$. By contrast, for the maximally correlated case (i) with $r_e=r_i=r$, we show in Appendix D that the jumps are given as $(W_e,W_i)=(kw_e,lw_i)$, with $(k,l)$ distributed on $\{0,\ldots,K_e\}\times\{0,\ldots,K_i\}\setminus\{(0,0)\}$ [see Figs. 4(b) and 4(c)] according to

$$p_{ei,kl} = \lim_{\Delta t\to 0^+}\frac{P_{ei,kl}}{1-P_{ei,00}} = \binom{K_e}{k}\binom{K_i}{l}\frac{B(k+l,\beta+K_e+K_i-k-l)}{\psi(\beta+K_e+K_i)-\psi(\beta)}. \qquad (11)$$

Incidentally, the driving Poisson process N has a rate b determined by adapting Eq. (9):

$$b = r\beta\left[\psi(\beta+K_e+K_i)-\psi(\beta)\right],$$

for which one can check that $r\le b\le (K_e+K_i)r$.
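For completeness, the bivariate law of Eq. (11) can be tabulated numerically as follows (our sketch; $K_{e/i}$, $r$, and $\beta$ are illustrative values):

```python
import numpy as np
from scipy.special import betaln, comb, digamma

# Maximally correlated bivariate jump law of Eq. (11) and the event rate b.
K_e, K_i, r, beta = 100, 25, 10.0, 32.0
k = np.arange(K_e + 1)[:, None]
l = np.arange(K_i + 1)[None, :]
kl = k + l
norm = digamma(beta + K_e + K_i) - digamma(beta)
p = (comb(K_e, k) * comb(K_i, l)
     * np.exp(betaln(np.maximum(kl, 1), beta + K_e + K_i - kl)) / norm)
p[0, 0] = 0.0                           # jumps exclude (0, 0) by convention

b = r * beta * norm                     # rate of the driving process N
print(p.sum())                          # ~1: proper jump distribution
print(r <= b <= (K_e + K_i) * r)        # sanity check on the event rate
```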

All the closed-form results so far have been derived for synchrony parametrized in terms of beta distributions. There are other possible parametrizations, and these would lead to different count distributions $p_{ei,kl}$ but without known closed forms. To address this limitation, in the following, all our results hold for arbitrary distributions $p_{ei}$ of the jump sizes $(W_e,W_i)$ on the positive orthant $[0,\infty)\times[0,\infty)$. In particular, our results are given in terms of expectations with respect to $p_{ei}$, still denoted by $\mathbb{E}_{ei}[\cdot]$. Nonzero correlation between excitation and inhibition corresponds to those choices of $p_{ei}$ for which $W_eW_i>0$ with nonzero probability, which indicates the presence of synchronous excitatory and inhibitory inputs. Note that this modeling setting restricts nonzero correlations to be positive, which is an inherent limitation of our synchrony-based approach. When considering an arbitrary $p_{ei}$, the main caveat is understanding how such a distribution may correspond to given input numbers $K_{e/i}$ and spiking correlations $\rho_{e/i}$ and $\rho_{ei}$. For this reason, we always consider that $k_e=W_e/w_e$ and $k_i=W_i/w_i$ follow beta-distributed marginals when discussing the roles of $w_{e/i}$, $K_{e/i}$, $\rho_{e/i}$, and $\rho_{ei}$ in shaping the voltage response of a neuron. In that respect, we show in Appendix C that the coefficient $\rho_{ei}$ can always be deduced from the knowledge of a discrete count distribution $p_{ei,kl}$ on $\{0,\ldots,K_e\}\times\{0,\ldots,K_i\}\setminus\{(0,0)\}$ via

$$\rho_{ei} = \frac{\mathbb{E}_{ei}[k_ek_i]}{\sqrt{K_e\mathbb{E}_{ei}[k_e]}\sqrt{K_i\mathbb{E}_{ei}[k_i]}} \ge 0,$$

where the expectations are with respect to $p_{ei,kl}$.

D. Instantaneous synapses and Marcus integrals

We are now in a position to formulate the mathematical problem at stake within the framework developed by Marcus to study shot-noise-driven systems [44,45]. Our goal is to quantify the subthreshold variability of an AONCB neuron subjected to synchronous inputs. Mathematically, this amounts to computing the first two moments of the stationary process solving the following stochastic dynamics:

$$\dot{V} = -V/\tau + h_e(V_e-V) + h_i(V_i-V) + I/C, \qquad (12)$$

where $V_i<0<V_e$ are constants and where the reduced conductances $h_e=g_e/C$ and $h_i=g_i/C$ follow stochastic processes defined in terms of a compound Poisson process $Z$ with bivariate jumps. Formally, the compound Poisson process $Z$ is specified by $b$, the rate of its governing Poisson process $N$, and by the joint distribution of its jumps, $p_{ei}$. Each point of the Poisson process $N$ represents a synaptic activation time $T_n$, where $n$ is in $\mathbb{Z}$ with the convention that $T_0\le 0<T_1$. At all these times, the synaptic input sizes are drawn as i.i.d. random variables $(W_{e,n},W_{i,n})$ in $\mathbb{R}^+\times\mathbb{R}^+$ with probability distribution $p_{ei}$.

At this point, it is important to observe that the driving process $Z$ is distinct from the conductance process $h=(h_e,h_i)$. The latter process is formally defined for AONCB neurons as

$$h_t = \frac{Z_t - Z_{t-\epsilon\tau}}{\epsilon\tau} = \frac{1}{\epsilon\tau}\left(\sum_{n=N_{t-\epsilon\tau}+1}^{N_t} W_{e,n},\; \sum_{n=N_{t-\epsilon\tau}+1}^{N_t} W_{i,n}\right),$$

where the dimensionless parameter $\epsilon=\tau_s/\tau>0$ is the ratio of the duration of synaptic activation relative to the passive membrane time constant. Note that the amplitude of $h$ scales in inverse proportion to $\epsilon$ in order to maintain the overall charge transfer during synaptic events of varying durations. Such a scaling ensures that the voltage response of AONCB neurons has finite, nonzero variability for small or vanishing synaptic time constants, i.e., for $\epsilon\ll 1$ (see Fig. 5). The simplifying limit of instantaneous synapses is obtained for $\epsilon=\tau_s/\tau\to 0^+$, which corresponds to infinitely fast synaptic activation. By virtue of its construction, the conductance process $h$ becomes a shot noise in the limit $\epsilon\to 0^+$, which can be formally identified with $\mathrm{d}Z/\mathrm{d}t$. This is consistent with the definition of shot-noise processes as temporal derivatives of compound Poisson processes, i.e., as collections of randomly weighted Dirac-delta masses.

FIG. 5.

Limit of instantaneous synapses. The voltage trace and the empirical voltage distribution are only marginally altered by taking the limit $\epsilon\to 0^+$ for short synaptic time constants: $\tau_s=2\,\mathrm{ms}$ in (a) and $\tau_s=0.02\,\mathrm{ms}$ in (b). In both (a) and (b), we consider the same compound Poisson-process drive with $\rho_e=0.03$, $\rho_i=0.06$, and $\rho_{ei}=0$, and the resulting fluctuating voltage $V$ is simulated via a standard Euler discretization scheme. The corresponding empirical conductance and voltage distributions are shown on the right. The latter voltage distribution asymptotically determines the stationary moments of $V$.

Because of their high degree of idealization, shot-noise models are often amenable to exact stochastic analysis, albeit with some caveats. For equations akin to Eq. (12) in the limit of instantaneous synapses, such a caveat follows from the multiplicative nature of the conductance shot noise $h$. In principle, one might expect to solve Eq. (12) with shot-noise drive via stochastic calculus, as for diffusion-based drives. This would involve interpreting the stochastic integral representations of solutions in terms of Stratonovich representations [65]. However, Stratonovich calculus is not well defined for shot-noise drives [66]. To remedy this point, Marcus proposed to study stochastic equations subjected to regularized versions of shot noises, whose regularity is controlled by a non-negative parameter $\epsilon$ [44,45]. For $\epsilon>0$, the dynamical equations admit classical solutions, whereas the shot-noise-driven regime is recovered in the limit $\epsilon\to 0^+$. The hope is to be able to characterize analytically the shot-noise-driven solution, or at least some of its moments, by considering regular solutions in the limit $\epsilon\to 0^+$. We denote the control parameter by $\epsilon$ deliberately: AONCB models represent Marcus-type regularizations that are amenable to analysis in the limit of instantaneous synapses, i.e., when $\epsilon=\tau_s/\tau\to 0^+$, for which the conductance process $h$ converges toward a form of shot noise.

The Marcus interpretation of stochastic integration has practical implications for numerical simulations with shot noise [41]. According to this interpretation, shot-noise-driven solutions are conceived as limits of regularized solutions for which standard numerical schemes apply. Correspondingly, shot-noise-driven solutions to Eq. (12) can be simulated via a limit numerical scheme. We derive such a limit scheme in Appendix E. Specifically, we show that the voltage of shot-noise-driven AONCB neurons exponentially relaxes toward the leak reversal potential $V_L=0$, except when subjected to synaptic impulses at times $\{T_n\}_{n\in\mathbb{Z}}$. At these times, the voltage $V$ updates discontinuously according to $V(T_n)=V(T_n^-)+J_n$, where the jumps are given in Appendix E via the Marcus rule

$$J_n = \left(\frac{W_{e,n}V_e + W_{i,n}V_i}{W_{e,n}+W_{i,n}} - V(T_n^-)\right)\left(1 - e^{-(W_{e,n}+W_{i,n})}\right). \qquad (13)$$

Observe that the above Marcus rule directly implies that no jump can cause the voltage to exit $(V_i,V_e)$, the allowed range of variation for $V$. Moreover, note that this rule specifies an exact event-driven simulation scheme given knowledge of the synaptic activation times and sizes $\{(T_n,W_{e,n},W_{i,n})\}_{n\in\mathbb{Z}}$ [67]. We adopt the above Marcus-type numerical scheme in all the simulations that involve instantaneous synapses.
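A minimal event-driven implementation of this scheme reads as follows (our sketch, not the authors' code); the drive used in the example, excitatory-only or inhibitory-only events with fixed jump sizes, is an arbitrary stand-in for a jump distribution $p_{ei}$:

```python
import numpy as np

# Exact event-driven scheme: exponential leak toward V_L = 0 between events,
# Marcus jump of Eq. (13) at each event. Pre-jump samples V(T_n-) follow the
# stationary law because Poisson arrivals see time averages.
def simulate_aoncb(event_times, W_e, W_i, tau=15e-3, V_e=60.0, V_i=-10.0):
    V, t_prev, trace = 0.0, 0.0, np.empty(len(event_times))
    for n, (t, we, wi) in enumerate(zip(event_times, W_e, W_i)):
        V *= np.exp(-(t - t_prev) / tau)          # passive relaxation
        trace[n] = V                              # record pre-jump voltage
        V += ((we * V_e + wi * V_i) / (we + wi) - V) * (1.0 - np.exp(-(we + wi)))
        t_prev = t
    return trace

rng = np.random.default_rng(2)
n = 100_000
times = np.cumsum(rng.exponential(1.0 / 1250.0, n))   # 1.25 kHz of events
exc = rng.random(n) < 0.8                             # 80% excitatory events
V = simulate_aoncb(times, np.where(exc, 0.01, 0.0), np.where(exc, 0.0, 0.04))
print(V.mean(), V.var())                 # estimates of E[V] and V[V]
```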

E. Moment calculations

When driven by stationary compound Poisson processes, AONCB neurons exhibit ergodic voltage dynamics. As a result, the typical voltage state, obtained by sampling the voltage at a random time, is captured by a unique stationary distribution. Our main analytical results, which we give here, consist in exact formulas for the first two voltage moments with respect to that stationary distribution. Specifically, we derive the stationary mean voltage Eq. (14) in Appendix F and the stationary voltage variance Eq. (16) in Appendix G. These results are obtained by a probabilistic treatment exploiting the properties of compound Poisson processes within Marcus's framework. This treatment yields compact, interpretable formulas in the limit of instantaneous synapses $\epsilon=\tau_s/\tau\to 0^+$. Readers who are interested in the method of derivation for these results are encouraged to go over the calculations presented in Appendixes F–L.

In the limit of instantaneous synapses, $\epsilon\to 0^+$, we find that the stationary voltage mean is

$$\mathbb{E}[V] = \lim_{\epsilon\to 0^+}\mathbb{E}[V_\epsilon] = \frac{a_{e,1}V_e + a_{i,1}V_i + I/G}{1 + a_{e,1} + a_{i,1}}, \qquad (14)$$

where we define the first-order synaptic efficacies as

$$a_{e,1} = b\tau\,\mathbb{E}_{ei}\!\left[\frac{W_e}{W_e+W_i}\left(1-e^{-(W_e+W_i)}\right)\right],\qquad a_{i,1} = b\tau\,\mathbb{E}_{ei}\!\left[\frac{W_i}{W_e+W_i}\left(1-e^{-(W_e+W_i)}\right)\right]. \qquad (15)$$

Note that $\mathbb{E}_{ei}[\cdot]$ refers to the expectation with respect to the jump distribution $p_{ei}$ in Eq. (15), whereas $\mathbb{E}[\cdot]$ refers to the stationary expectation in Eq. (14). Equation (14) has the same form as for deterministic dynamics with constant conductances, in the sense that the mean voltage is a weighted sum of the reversal potentials $V_e$, $V_i$, and $V_L=0$. One can check that, for such deterministic dynamics, the synaptic efficacies involved in the stationary mean simply read $a_{e/i,1}=K_{e/i}r_{e/i}\tau w_{e/i}$. Thus, the impact of synaptic variability, and, in particular, of synchrony, entirely lies in the definition of the efficacies in Eq. (15). In the absence of synchrony, one can check that accounting for the shot-noise nature of the synaptic conductances leads to synaptic efficacies of exponential form: $a_{e/i,1}=K_{e/i}r_{e/i}\tau(1-e^{-w_{e/i}})$. In turn, accounting for input synchrony leads to synaptic efficacies expressed as expectations of these exponential forms in Eq. (15), consistent with the stochastic nature of the conductance jumps $(W_e,W_i)$. Our other main result, the formula for the stationary voltage variance, involves synaptic efficacies of similar form. Specifically, we find that

$$\mathbb{V}[V] = \frac{a_{e,12}\left(V_e-\mathbb{E}[V]\right)^2 + a_{i,12}\left(V_i-\mathbb{E}[V]\right)^2 - c_{ei}\left(V_e-V_i\right)^2}{1 + a_{e,2} + a_{i,2}}, \qquad (16)$$

where we define the second-order synaptic efficacies as

$$a_{e,2} = \frac{b\tau}{2}\,\mathbb{E}_{ei}\!\left[\frac{W_e}{W_e+W_i}\left(1-e^{-2(W_e+W_i)}\right)\right],\qquad a_{i,2} = \frac{b\tau}{2}\,\mathbb{E}_{ei}\!\left[\frac{W_i}{W_e+W_i}\left(1-e^{-2(W_e+W_i)}\right)\right]. \qquad (17)$$

Equation (16) also prominently features auxiliary second-order efficacies defined by $a_{e/i,12}=a_{e/i,1}-a_{e/i,2}$. Owing to their prominent role, we also mention their explicit form:

$$a_{e,12} = \frac{b\tau}{2}\,\mathbb{E}_{ei}\!\left[\frac{W_e}{W_e+W_i}\left(1-e^{-(W_e+W_i)}\right)^2\right],\qquad a_{i,12} = \frac{b\tau}{2}\,\mathbb{E}_{ei}\!\left[\frac{W_i}{W_e+W_i}\left(1-e^{-(W_e+W_i)}\right)^2\right]. \qquad (18)$$

The other quantity of interest featuring in Eq. (16) is the cross-correlation coefficient

$$c_{ei} = \frac{b\tau}{2}\,\mathbb{E}_{ei}\!\left[\frac{W_eW_i}{(W_e+W_i)^2}\left(1-e^{-(W_e+W_i)}\right)^2\right], \qquad (19)$$

which entirely captures the (non-negative) correlation between excitatory and inhibitory inputs and can be seen as an efficacy as well.

In conclusion, let us stress that, for AONCB models, establishing the above exact expressions does not require any approximation other than taking the limit of instantaneous synapses. In particular, we neither resort to any diffusion approximation [37,38] nor invoke the effective-time-constant approximation [41–43]. We give in Appendix L an alternative factorized form for $\mathbb{V}[V]$ to justify the non-negativity of expression (16). In Fig. 6, we illustrate the excellent agreement of the analytically derived expressions (14) and (16) with numerical estimates obtained via Monte Carlo simulations of the AONCB dynamics for various input synchrony conditions. Discussing and interpreting quantitatively Eqs. (14) and (16) within a biophysically relevant context is the main focus of the remainder of this work.
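In practice, these formulas are straightforward to evaluate by Monte Carlo over the jump distribution, as in the sketch below (ours); any sampler of $(W_e,W_i)$ with $W_e+W_i>0$ can be plugged in, and the stand-in drive shown matches the event-driven sketch of Sec. II D:

```python
import numpy as np

# Monte Carlo evaluation of the efficacies and stationary moments of
# Eqs. (14)-(19), given i.i.d. samples (W_e, W_i) from a jump law p_ei.
def stationary_moments(We, Wi, b, tau, V_e=60.0, V_i=-10.0, I_over_G=0.0):
    u, E = We + Wi, np.mean                               # E_ei[.] over samples
    ae1 = b * tau * E(We / u * (1 - np.exp(-u)))          # Eq. (15)
    ai1 = b * tau * E(Wi / u * (1 - np.exp(-u)))
    ae2 = b * tau / 2 * E(We / u * (1 - np.exp(-2 * u)))  # Eq. (17)
    ai2 = b * tau / 2 * E(Wi / u * (1 - np.exp(-2 * u)))
    cei = b * tau / 2 * E(We * Wi / u**2 * (1 - np.exp(-u))**2)  # Eq. (19)
    EV = (ae1 * V_e + ai1 * V_i + I_over_G) / (1 + ae1 + ai1)    # Eq. (14)
    ae12, ai12 = ae1 - ae2, ai1 - ai2                     # Eq. (18)
    VV = (ae12 * (V_e - EV)**2 + ai12 * (V_i - EV)**2
          - cei * (V_e - V_i)**2) / (1 + ae2 + ai2)       # Eq. (16)
    return EV, VV

rng = np.random.default_rng(3)
exc = rng.random(100_000) < 0.8                 # same stand-in drive as above
We, Wi = np.where(exc, 0.01, 0.0), np.where(exc, 0.0, 0.04)
print(stationary_moments(We, Wi, b=1250.0, tau=15e-3))
```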

FIG. 6.

Comparison of simulation and theory. (a) Examples of voltage traces obtained via Monte Carlo simulations of an AONCB neuron for various types of synchrony-based input correlations: uncorrelated, $\rho_e=\rho_i=\rho_{ei}=0$ (uncorr, yellow); within correlation, $\rho_e,\rho_i>0$ and $\rho_{ei}=0$ (within corr, cyan); and within and across correlation, $\rho_e,\rho_i,\rho_{ei}>0$ (across corr, magenta). (b) Comparison of the analytically derived expressions (14) and (16) with numerical estimates obtained via Monte Carlo simulations for the synchrony conditions considered in (a).

III. COMPARISON WITH EXPERIMENTAL DATA

A. Experimental measurements and parameter estimations

Cortical activity typically exhibits a high degree of variability in response to identical stimuli [68,69], with individual neuronal spiking exhibiting Poissonian characteristics [3,70]. Such variability is striking, because neurons are thought to typically receive a large number (≃10⁴) of synaptic contacts [13]. As a result, in the absence of correlations, neuronal variability should average out, leading to quasideterministic neuronal voltage dynamics [71]. To explain how variability seemingly defeats averaging in large neural networks, it has been proposed that neurons operate in a special regime, whereby inhibitory and excitatory drives nearly cancel one another [16,17,19–21]. In such balanced networks, the voltage fluctuations become the main determinant of the dynamics, yielding Poisson-like spiking activity [16,17,19–21]. However, depending upon the tightness of this balance, networks can exhibit distinct dynamical regimes with varying degrees of synchrony [18].

In the following, we exploit the analytical framework of AONCB neurons to argue that the asynchronous picture predicts voltage fluctuations an order of magnitude smaller than experimentally observed [1,26–28]. Such observations indicate that the neuronal membrane voltage exhibits typical variance values of ≃4–9 mV². We then claim that achieving such variability requires input synchrony within the setting of AONCB neurons. Experimental estimates of spiking correlations are typically thought to be weak, with coefficients ranging from 0.01 to 0.04 [10–12]. Such weak values do not warrant neglecting correlations, owing to the typically high number of synaptic connections. Actually, if $K$ denotes the number of inputs, all assumed to play exchangeable roles, an empirical criterion for deciding whether a correlation coefficient $\rho$ is weak is that $\rho<1/K$ [32,33]. Assuming the lower estimate of $\rho\simeq 0.01$, this criterion is met only for $K\le 100$ inputs, which is well below the typical number of excitatory synapses for cortical neurons. In the following, we consider only the response of AONCB neurons to synchronous drives with biophysically realistic spiking correlations ($0\le\rho\le 0.03$).

Two key parameters for our argument are the excitatory and inhibitory synaptic weights, denoted by $w_e$ and $w_i$, respectively. Typical values for these weights can be estimated via biophysical considerations within the framework of AONCB neurons. In order to develop these considerations, we assume the values $V_i=-10\,\mathrm{mV}<V_L=0<V_e=60\,\mathrm{mV}$ for the reversal potentials and $\tau=15\,\mathrm{ms}$ for the passive membrane time constant. Given these assumptions, we set the upper range of excitatory synaptic weights so that, when delivered to a neuron close to its resting state, unitary excitatory inputs cause peak membrane fluctuations of ≃0.5 mV at the soma, attained after a peak time of ≃5 ms. Such fluctuations correspond to typically large in vivo synaptic activations of thalamo-cortical projections in rats [72]. Although activations of similar amplitude have been reported for cortico-cortical connections [73,74], recent large-scale in vivo studies have revealed that cortico-cortical excitatory connections are typically much weaker [75,76]. At the same time, these studies have shown that inhibitory synaptic conductances are about fourfold larger than excitatory ones but with similar timescales. Fitting these values within the framework of AONCB neurons for $\epsilon=\tau_s/\tau\simeq 1/4$ reveals that the largest possible synaptic inputs correspond to dimensionless weights $w_e\simeq 0.01$ and $w_i\simeq 0.04$. Following Refs. [75,76], we consider that the comparatively moderate cortico-cortical recurrent connections are an order of magnitude weaker than typical thalamo-cortical projections, i.e., $w_e\simeq 0.001$ and $w_i\simeq 0.004$. Such a range is in keeping with estimates used in Ref. [38].
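The weight calibration described above can be reproduced with a short computation (our sketch, assuming the neuron starts from rest and using the parameter values just quoted): during the boxcar pulse of duration $\tau_s=\epsilon\tau$, the voltage relaxes from rest toward $wV_e/(\epsilon+w)$ with rate $1/\tau+w/\tau_s$, and the deflection peaks at pulse offset.

```python
import numpy as np

# Peak somatic deflection of a unitary AONCB excitatory input, starting at rest.
def unitary_peak(w, V_rev=60.0, tau=15e-3, eps=0.25):
    tau_s = eps * tau                           # synaptic activation time
    tau_eff = 1.0 / (1.0 / tau + w / tau_s)     # effective time constant
    V_inf = w * V_rev / (eps + w)               # asymptote during the pulse
    return V_inf * (1.0 - np.exp(-tau_s / tau_eff))

print(unitary_peak(0.01))    # ~0.5 mV: largest admissible excitatory weight
print(unitary_peak(0.001))   # ~0.05 mV: typical moderate cortical weight
```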

B. The effective-time-constant approximation holds in the asynchronous regime

Let us consider that neuronal inputs have zero (or negligible) correlation structure, which corresponds to assuming that all synapses are driven by independent Poisson processes. Incidentally, excitation and inhibition act independently. Within the framework of AONCB neurons, this latter assumption corresponds to choosing a joint jump distribution of the form

$$p_{ei}(W_e,W_i) = \frac{b_e}{b}\,p_e(W_e)\,\delta(W_i) + \frac{b_i}{b}\,p_i(W_i)\,\delta(W_e),$$

where $\delta(\cdot)$ denotes the Dirac delta function, so that $W_eW_i=0$ with probability one. Moreover, $b_e$ and $b_i$ are independently specified via Eq. (9), and the overall rate of synaptic events is purely additive: $b=b_e+b_i$. Consequently, the cross-correlation efficacy $c_{ei}$ in Eq. (16) vanishes, and the dimensionless efficacies simplify to

$$a_{e,1} = b_e\tau\,\mathbb{E}_e\!\left[1-e^{-W_e}\right]\qquad\text{and}\qquad a_{i,1} = b_i\tau\,\mathbb{E}_i\!\left[1-e^{-W_i}\right].$$

Further assuming that individual excitatory and inhibitory synapses act independently leads to considering that $p_e$ and $p_i$ depict the size of individual synaptic inputs, as opposed to aggregate events. This corresponds to taking $\beta_e\to\infty$ and $\beta_i\to\infty$ in our parametric model based on beta distributions. Then, as intuition suggests, the overall rates of excitatory and inhibitory activation are recovered as $b_e=K_er_e$ and $b_i=K_ir_i$, where $r_e$ and $r_i$ are the individual spiking rates.

Individual synaptic weights are small in the sense that $w_e,w_i\ll 1$, which warrants neglecting exponential corrections in the evaluation of the synaptic efficacies, at least in the absence of synchrony-based correlations. Accordingly, we have

$$a_{e,1}\simeq K_er_e\tau w_e\qquad\text{and}\qquad a_{e,12}\simeq K_er_e\tau w_e^2/2,$$

as well as symmetric expressions for inhibitory efficacies. Plugging these values into Eq. (16) yields the classical mean-field estimate for the stationary variance:

$$\mathbb{V}[V] \simeq \frac{K_er_ew_e^2\left(V_e-\mathbb{E}[V]\right)^2 + K_ir_iw_i^2\left(V_i-\mathbb{E}[V]\right)^2}{2\left(1/\tau + K_er_ew_e + K_ir_iw_i\right)}, \qquad (20)$$

which is exactly the same expression as that derived via the diffusion and effective-time-constant approximations in Refs. [46,47]. However, observe that the only approximation we made in obtaining the above expression is to neglect exponential corrections due to the relative weakness of biophysically relevant synaptic weights, which we hereafter refer to as the small-weight approximation.
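As a back-of-the-envelope check of Eq. (20) (our sketch; the mean voltage value is an illustrative assumption), fixing the aggregate weights $K_ew_e=K_iw_i=1$ and comparing large against moderate weights directly exhibits the tenfold drop in variance discussed next:

```python
# Small-weight estimate of Eq. (20) for asynchronous drive.
def variance_small_weight(Ke, re, we, Ki, ri, wi, EV,
                          tau=15e-3, Ve=60.0, Vi=-10.0):
    num = Ke * re * we**2 * (Ve - EV)**2 + Ki * ri * wi**2 * (Vi - EV)**2
    return num / (2 * (1 / tau + Ke * re * we + Ki * ri * wi))

# Large weights (K_e w_e = K_i w_i = 1) versus tenfold smaller, moderate ones;
# E[V] = 10 mV is taken as an illustrative working point.
print(variance_small_weight(100, 20.0, 0.01, 25, 20.0, 0.04, EV=10.0))
print(variance_small_weight(1000, 20.0, 0.001, 250, 20.0, 0.004, EV=10.0))
```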

C. Asynchronous inputs yield exceedingly small neural variability

In Fig. 7, we represent the stationary mean $\mathbb{E}[V]$ and variance $\mathbb{V}[V]$ as functions of the input spiking rates $r_e$ and $r_i$, for distinct values of the synaptic weights $w_e$ and $w_i$. In Fig. 7(a), we consider synaptic weights as large as biophysically admissible based on recent in vivo studies [75,76], i.e., $w_e=0.01$ and $w_i=0.04$. By contrast, in Fig. 7(b), we consider moderate synaptic weights $w_e=0.001$ and $w_i=0.004$, which yield somatic postsynaptic deflections of typical amplitudes. In both cases, we consider input numbers $K_e$ and $K_i$ such that the mean voltage $\mathbb{E}[V]$ covers the same biophysical range of values as $r_e$ and $r_i$ vary between 0 and 50 Hz. Given a zero resting potential, we set this biophysical range to be bounded by $\Delta\mathbb{E}[V]\simeq 20\,\mathrm{mV}$, as typically observed experimentally in electrophysiological recordings. These conditions correspond to constant aggregate weights set to $K_ew_e=K_iw_i=1$ so that

$$K_er_ew_e = K_ir_iw_i \le 50\,\mathrm{Hz} \le 1/\tau.$$

This implies that the AONCB neurons under consideration do not reach the high-conductance regime, for which the passive conductance can be neglected, i.e., $K_er_ew_e+K_ir_iw_i\gg 1/\tau$ [77]. Away from the high-conductance regime, the variance magnitude is controlled by the denominator in Eq. (20). Accordingly, the variance in both cases is primarily dependent on the excitatory rate $r_e$, since, for $K_ew_e=K_iw_i=1$, the effective excitatory driving force $F_e=K_ew_e^2(V_e-\mathbb{E}[V])^2$ dominates the effective inhibitory driving force $F_i=K_iw_i^2(V_i-\mathbb{E}[V])^2$. This is because the neuronal voltage typically sits close to the inhibitory reversal potential but far from the excitatory reversal potential: $V_e-\mathbb{E}[V]>\mathbb{E}[V]-V_i$. For instance, when close to rest, $\mathbb{E}[V]\simeq 0$, the ratio of the effective driving forces is $K_ew_e^2V_e^2/(K_iw_i^2V_i^2)\simeq 9$ in favor of excitation, since for $K_ew_e=K_iw_i=1$ this ratio reduces to $(w_e/w_i)(V_e/V_i)^2=(1/4)\times 36$. Importantly, the magnitude of the variance is distinct for moderate synapses and for large synapses. This is because, for constant aggregate weights $K_ew_e=K_iw_i=1$, the ratio of effective driving forces for large and moderate synapses scales in keeping with the ratio of the weights, and so does the ratio of variances away from the high-conductance regime. Thus, we have

$$F_e\big|_{w_e=10^{-2}}\big/F_e\big|_{w_e=10^{-3}} = F_i\big|_{w_i=10^{-2}}\big/F_i\big|_{w_i=10^{-3}} = 10,$$

and the variance decreases by one order of magnitude from large weights in Fig. 7(a) to moderate weights in Fig. 7(b).

FIG. 7.

Voltage mean and variance in the absence of input correlations. Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by $K_e=100$ and $K_i=25$ synapses with large weights $w_e=0.01$ and $w_i=0.04$. Column (b) depicts the stationary subthreshold response of an AONCB neuron driven by $K_e=10^3$ and $K_i=250$ synapses with moderate weights $w_e=0.001$ and $w_i=0.004$. For synaptic weights $w_e,w_i\ll 1$, the mean response is identical, as $K_ew_e=K_iw_i=1$ for both (a) and (b). By contrast, for $\rho_e=\rho_i=\rho_{ei}=0$, the variance is at least an order of magnitude smaller than that experimentally observed (4–9 mV²) for moderate weights, as shown in (b). Reaching the lower range of realistic neural variability requires driving the cell via large weights, as shown in (a).

The above numerical analysis reveals that achieving realistic levels of subthreshold variability over a biophysical mean range of variation requires AONCB neurons to be exclusively driven by large synaptic weights. This is confirmed by considering the voltage mean $\mathbb{E}[V]$ and variance $\mathbb{V}[V]$ in Fig. 8 as functions of the number of inputs $K_e$ and of the synaptic weight $w_e$ for a given level of inhibition. We choose this level of inhibition to be set by $K_i=250$ moderate synapses ($w_i=0.004$) with $r_i=20\,\mathrm{Hz}$ in Fig. 8(a) and by $K_i=25$ large synapses ($w_i=0.04$) with $r_i=20\,\mathrm{Hz}$ in Fig. 8(b). As expected, assuming that $r_e=20\,\mathrm{Hz}$ in the absence of input correlations, the voltage mean $\mathbb{E}[V]$ depends only on the product $K_ew_e$, which yields a similar mean range of variations for $K_e$ varying up to 2000 in Fig. 8(a) and up to 200 in Fig. 8(b). Thus, it is possible to achieve the same range of variations as with moderate synaptic weights using a smaller number of large synapses. By contrast, the voltage variance $\mathbb{V}[V]$ achieves realistic levels only for large synaptic weights in both conditions, with $w_e\gtrsim 0.015$ for moderate inhibitory background synapses in Fig. 8(a) and $w_e\gtrsim 0.01$ for large inhibitory background synapses in Fig. 8(b).

FIG. 8.

Dependence on the number of inputs and the synaptic weights in the absence of correlations. Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by a varying number of excitatory synapses Ke with varying weight we at rate re=20 Hz, with background inhibitory drive given by Ki=250 with moderate weights wi=0.004 and ri=20 Hz. Column (b) depicts the same as in column (a) but for a background inhibitory drive given by Ki=25 with large weights wi=0.04 and ri=20 Hz. For both conditions, achieving realistic levels of variance, i.e., V[V] ≃ 4–9 mV², while ensuring a biophysically relevant mean range of variation, i.e., ΔE[V] ≃ 10–20 mV, is possible only for large weights: we ≳ 0.015 for moderate inhibitory weights in (a) and we ≳ 0.01 for large inhibitory weights in (b).

D. Including input correlations yields realistic subthreshold variability

Without synchrony, achieving the experimentally observed variability necessitates an excitatory drive mediated via synaptic weights we ≃ 0.01, which corresponds to the upper bound of the biophysically admissible range and is in agreement with numerical results presented in Ref. [38]. Albeit possible, this is unrealistic given the wide distribution of amplitudes observed experimentally, whereby the vast majority of synaptic events are small to moderate, at least for cortico-cortical connections [75,76]. In principle, one can remedy this issue by allowing for the synchronous activation of, say, ke=10 synapses with moderate weight we=0.001, as it amounts to the activation of a single synapse with large weight kewe=0.01. A weaker assumption that yields a similar increase in neural variability is to ask for synapses to only tend to synchronize probabilistically, which amounts to requiring ke to be a random variable with some distribution mass on ke>1. This exactly amounts to modeling the input drive via a jump process, as presented in Sec. II, with a jump distribution pe that probabilistically captures this degree of input synchrony. In turn, this distribution pe corresponds to a precise input correlation ρe via Eq. (8).
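To make this parametrization concrete, here is a minimal Python sketch (not the authors' code) that builds the jump distribution pe,k and event rate be from (Ke, re, ρe), using the beta-binomial limit expressions derived in Appendixes B–D, and then checks the rate conservation beEe[ke]=Kere and the correlation identity of Appendix C; all numerical values are illustrative.

```python
import numpy as np
from scipy.special import betaln, digamma

# Sketch: beta-binomial jump distribution p_{e,k} and event rate b_e in
# the continuous-time limit (alpha_e -> 0 with beta_e = 1/rho_e - 1).
def synchronous_input_model(Ke, re, rho_e):
    beta_e = 1.0 / rho_e - 1.0
    norm = digamma(Ke + beta_e) - digamma(beta_e)
    be = norm * beta_e * re                        # rate of synaptic events
    k = np.arange(1, Ke + 1)
    # log[ binom(Ke, k) * B(k, beta_e + Ke - k) ]
    log_pk = (betaln(k, beta_e + Ke - k)
              - betaln(k + 1.0, Ke - k + 1.0) - np.log(Ke + 1.0))
    return be, k, np.exp(log_pk) / norm            # p_{e,k} for k = 1..Ke

be, k, pk = synchronous_input_model(Ke=1000, re=10.0, rho_e=0.03)
Ek, Ekk = np.sum(k * pk), np.sum(k * (k - 1) * pk)
print("mass:", pk.sum())                           # ~1
print("b_e E_e[k_e]:", be * Ek, "vs Ke*re:", 1000 * 10.0)
print("rho_e:", Ekk / (Ek * (1000 - 1)))           # ~0.03 (Appendix C)
```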

We quantify the impact of nonzero correlations in Fig. 9, where we consider the cases of moderate weights (we=0.001 and wi=0.004) and large weights (we=0.01 and wi=0.04) as in Fig. 7, but for ρe=ρi=0.03. Specifically, we consider an AONCB neuron subjected to two independent beta-binomial-derived compound Poisson process drives with rates be and bi, respectively. These rates be and bi are obtained via Eq. (9) by setting βe=βi=1/ρe−1=1/ρi−1 and for given input numbers Ke and Ki and spiking rates re and ri. This ensures that the mean numbers of synaptic activations, beEe[ke]=Kere and biEi[ki]=Kiri, remain constant when compared with Fig. 7. As a result, the mean response of the AONCB neuron is essentially left unchanged by the presence of correlations, with a virtually identical biophysical range of variations ΔE[V] ≃ 10–20 mV. This is because, for correlations ρe=ρi ≃ 0.03, the aggregate weights still satisfy kewe, kiwi < 1 with probability close to one, given that Kewe=Kiwi=1. Then, in the absence of cross-correlation, i.e., ρei=0, we still have

a_{e,1} = b_e \tau \, E_e\!\left[1 - e^{-k_e w_e}\right] \approx b_e \tau w_e \, E_e[k_e] = K_e r_e \tau w_e ,

as well as ai,1Kiriτwi by symmetry. However, for both moderate and large synaptic weights, the voltage variance V[V] now exhibits slightly larger magnitudes than observed experimentally. This is because we show in Appendix M that in the small-weight approximation

a_{e,12} = \frac{b_e \tau}{2} E_e\!\left[\left(1 - e^{-k_e w_e}\right)^2\right] \approx \left(1 + \rho_e (K_e - 1)\right) \frac{K_e r_e \tau w_e^2}{2} ,

where we recognize Kereτwe²/2 = ae,12|ρe=0 as the second-order efficacy in the absence of correlations from Fig. 7. A similar statement holds for ai,12. This shows that correlations increase neural variability whenever ρe > 1/Ke or ρi > 1/Ki, which coincides with our previously given criterion for assessing the relative weakness of correlations. Accordingly, when excitation and inhibition act independently, i.e., ρei=0, we find that the increase in variability due to input synchrony, Δρe/i = V[V]|ρei=0 − V[V]|ρe/i=ρei=0, satisfies

\Delta_{\rho_{e/i}} \approx \rho_e (K_e - 1) \frac{K_e r_e w_e^2 \left(V_e - E[V]\right)^2 / 2}{1/\tau + K_e r_e w_e + K_i r_i w_i} + \rho_i (K_i - 1) \frac{K_i r_i w_i^2 \left(V_i - E[V]\right)^2 / 2}{1/\tau + K_e r_e w_e + K_i r_i w_i} . \quad (20)

The above relation follows from the fact that the small-weight approximation for E[V] is independent of correlations and from neglecting the exponential corrections due to the nonzero size of the synaptic weights. The above formula remains valid as long as the correlations ρe and ρi are weak enough that the aggregate weights satisfy kewe, kiwi < 1 with probability close to one. To inspect the relevance of exponential corrections, we estimate in Appendix N the error incurred by neglecting them. Focusing on the case of excitatory inputs, we find that, for correlation coefficients ρe ≤ 0.05, neglecting exponential corrections incurs less than a 3% error if the number of inputs is smaller than Ke ≃ 1000 for moderate synaptic weights (we=0.001) or Ke ≃ 100 for large synaptic weights (we=0.01).
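As a numerical illustration, the following sketch evaluates Eq. (20) for the moderate-weight case of Fig. 9; the reversal potentials and the small-weight mean E[V] ≈ 15 mV are illustrative assumptions carried over from the earlier sketch, not values quoted in the paper.

```python
# Sketch: synchrony-induced variance increase, Eq. (20).
def delta_var(Ke, we, Ki, wi, re, ri, rho_e, rho_i,
              tau=15e-3, Ve=60.0, Vi=-10.0, EV=15.0):
    denom = 1.0 / tau + Ke * re * we + Ki * ri * wi
    exc = rho_e * (Ke - 1) * Ke * re * we**2 * (Ve - EV)**2 / 2.0
    inh = rho_i * (Ki - 1) * Ki * ri * wi**2 * (Vi - EV)**2 / 2.0
    return (exc + inh) / denom

# With Ke = 1000 and rho_e = 0.03 >> 1/Ke, synchrony dominates the variance.
print(delta_var(1000, 0.001, 250, 0.004, 50.0, 50.0, 0.03, 0.03))
```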

FIG. 9.

Voltage mean and variance in the presence of excitatory and inhibitory input correlations but without correlation across excitation and inhibition: ρe=ρi>ρei=0. Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by Ke=100 and Ki=25 synapses with large weights we=0.01 and wi=0.04. Column (b) depicts the stationary subthreshold response of an AONCB neuron driven by Ke=10³ and Ki=250 synapses with moderate dimensionless weights we=0.001 and wi=0.004. For synaptic weights we, wi ≪ 1, the mean response is identical, as Kewe=Kiwi=1 for (a) and (b). By contrast with the case of no correlation in Fig. 7, for ρe=ρi=0.03 and ρei=0, the variance achieves levels similar to those experimentally observed (4–9 mV²) for moderate weights, as shown in (b), but slightly larger levels for large weights, as shown in (a).

E. Including correlations between excitation and inhibition reduces subthreshold variability

The voltage variance estimated for realistic excitatory and inhibitory correlations, e.g., ρe=ρi=0.03 and ρei=0, exceeds the typical levels measured in vivo, i.e., 4–9 mV², for large synaptic weights. The inclusion of correlations between excitation and inhibition, i.e., ρei>0, can reduce the voltage variance to more realistic levels. We confirm this point in Fig. 10, where we consider the cases of moderate weights (we=0.001 and wi=0.004) and large weights (we=0.01 and wi=0.04) as in Fig. 9, but for ρe=ρi=ρei=0.03. Positive cross-correlation between excitation and inhibition only marginally impacts the mean voltage response. This is due to the fact that exponential corrections become slightly more relevant, as the presence of cross-correlation leads to larger aggregate weights We+Wi, with We and Wi possibly being jointly positive. By contrast with this marginal impact on the mean response, the voltage variance is significantly reduced when excitation and inhibition are correlated. This is in keeping with the intuition that the net effect of such cross-correlation is to cancel excitatory and inhibitory synaptic inputs with one another, before they can cause voltage fluctuations. The amount by which the voltage variance is reduced can be quantified in the small-weight approximation. In this approximation, we show in Appendix M that the efficacy cei capturing the impact of cross-correlations simplifies to

c_{ei} \approx \frac{b \tau}{2} E_{ei}\!\left[W_e W_i\right] = \rho_{ei} \sqrt{r_e r_i} \, \frac{\tau}{2} \, K_e w_e K_i w_i .

Using the above simplified expression and invoking the fact that the small-weight approximation for E[V] is independent of correlations, we show that the variance decreases by the amount Δρei = V[V] − V[V]|ρei=0, with

\Delta_{\rho_{ei}} \approx -\rho_{ei} \sqrt{r_e r_i} \, K_e w_e K_i w_i \, \frac{\left(V_e - E[V]\right)\left(E[V] - V_i\right)}{1/\tau + K_e r_e w_e + K_i r_i w_i} \leq 0 . \quad (21)

Despite the above reduction in variance, we also show in Appendix M that positive input correlations always cause an overall increase of neural variability:

0 \leq V[V]\big|_{\rho_{e/i} = \rho_{ei} = 0} \leq V[V] \leq V[V]\big|_{\rho_{ei} = 0} .

Note that the reduction of variability due to ρei>0 crucially depends on the instantaneous nature of correlations between excitation and inhibition. To see this, observe that the Marcus rule of Eq. (13) specifies instantaneous jumps via a weighted average of the reversal potentials Ve and Vi, which represent extreme values for voltage updates. Thus, perfectly synchronous excitation and inhibition update the voltage toward an intermediary value rather than an extreme one, leading to smaller jumps on average. Such an effect can vanish or even reverse when synchrony breaks down, e.g., when inhibition substantially lags behind excitation.
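The following minimal sketch implements this jump rule [the Marcus-type update derived in Appendix E] and illustrates the averaging effect; the reversal potentials are illustrative assumptions.

```python
import numpy as np

# Sketch of the Marcus-type jump update: a synchronous activation with
# aggregate weights (We, Wi) moves the voltage toward a weighted average
# of the reversal potentials, with saturation in the total weight.
def marcus_jump(V0, We, Wi, Ve=60.0, Vi=-10.0):
    W = We + Wi
    target = (We * Ve + Wi * Vi) / W            # intermediary target value
    return (target - V0) * (1.0 - np.exp(-W))

V0 = 10.0
print(marcus_jump(V0, We=0.1, Wi=0.0))   # excitation alone: large upward jump
print(marcus_jump(V0, We=0.0, Wi=0.4))   # inhibition alone: large downward jump
print(marcus_jump(V0, We=0.1, Wi=0.4))   # synchronous E/I: smaller net jump
```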

FIG. 10.

Voltage mean and variance in the presence of excitatory and inhibitory input correlations and with correlation across excitation and inhibition: ρe=ρi=ρei>0. Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by Ke=100 and Ki=25 synapses with large weights we=0.01 and wi=0.04. Column (b) depicts the stationary subthreshold response of an AONCB neuron driven by Ke=10³ and Ki=250 synapses with moderate dimensionless weights we=0.001 and wi=0.004. For synaptic weights we, wi ≪ 1, the mean response is identical, as Kewe=Kiwi=1 for (a) and (b). Compared with the case of no cross-correlation in Fig. 9, for ρe=ρi=ρei=0.03, the variance is reduced to a biophysical range similar to that experimentally observed (4–9 mV²) for moderate weights, as shown in (b), as well as for large weights, as shown in (a).

F. Asynchronous scaling limits require fixed-size synaptic weights

Our analysis reveals that correlations must significantly impact the voltage variability whenever the number of inputs is such that Ke > 1/ρe or Ki > 1/ρi. Spiking correlations are typically measured in vivo to be larger than 0.01. Therefore, synchrony must shape the response of neurons that are driven by more than 100 active inputs, which is presumably allowed by the typically high number of synaptic contacts (≃10⁴) in the cortex [13]. In practice, we find that synchrony can explain the relatively high level of neural variability observed in subthreshold neuronal responses. Beyond these practical findings, we predict that input synchrony also has significant theoretical implications with respect to modeling spiking networks. Analytically tractable models for cortical activity are generally obtained by considering spiking networks in the infinite-size limit. Such infinite-size networks are tractable because the neurons they comprise interact only via population averages, erasing any role for nonzero correlation structure. Distinct mean-field models assume that synaptic weights vanish according to distinct scalings with respect to the number of synapses, i.e., we/i → 0 as Ke/i → ∞. In particular, classical mean-field limits consider the scaling we/i ∼ 1/Ke/i, balanced mean-field limits consider the scaling we/i ∼ 1/√Ke/i, with Kewe − Kiwi = O(1), and strong-coupling limits consider the scaling we/i ∼ 1/ln Ke/i, with Kewe − Kiwi = O(1) as well.

Our analysis of AONCB neurons shows that neglecting synchrony-based correlations is incompatible with the maintenance of neural variability in the infinite-size limit. Indeed, Eq. (20) shows that, for any scaling with 1/we = o(Ke) and 1/wi = o(Ki), as for all the mean-field limits mentioned above, we have

V[V] = O(w_e) + O(w_i) \xrightarrow[K_e, K_i \to \infty]{} 0 .

Thus, in the absence of correlations and independent of the synaptic weight scaling, the subthreshold voltage variance of AONCB neurons must vanish in the limit of arbitrarily large numbers of synapses. We expect such decay of the voltage variability to be characteristic of conductance-based models in the absence of input correlations. Indeed, dimensional analysis suggests that voltage variances for both current-based and conductance-based models are generically obtained via normalization by the reciprocal of the membrane time constant. However, by contrast with current-based models, the reciprocal of the membrane time constant for conductance-based models, i.e., 1/τ + Kewere + Kiwiri, involves contributions from synaptic conductances. Thus, to ensure nonzero asymptotic variability, the denominator scaling O(Kewe) + O(Kiwi) must be balanced by the natural scaling of the Poissonian input drives, i.e., O(Kewe²) + O(Kiwi²). In the absence of input correlations, this is possible only for fixed-size weights, which is incompatible with any vanishing-weight scaling assumption.
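The following sketch illustrates this decay for excitation alone under the classical scaling with fixed aggregate weight Kewe = 1; all numerical values are illustrative, as in the earlier sketches.

```python
# Sketch: decay of the asynchronous variance under a vanishing-weight
# scaling we ~ 1/Ke (excitation alone, aggregate weight Ke*we = 1 fixed).
tau, Ve, EV, re = 15e-3, 60.0, 15.0, 50.0
for Ke in [100, 1000, 10_000, 100_000]:
    we = 1.0 / Ke
    var = (Ke * re * we**2 * (Ve - EV)**2 / 2.0) / (1.0 / tau + Ke * re * we)
    print(f"Ke={Ke:6d}  we={we:.0e}  V[V]={var:.4f} mV^2")  # V[V] = O(we)
```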

G. Synchrony allows for variability-preserving scaling limits with vanishing weights

Infinite-size networks with fixed-size synaptic weights are problematic for restricting modeled neurons to operate in the high-conductance regime, whereby the intrinsic conductance properties of the cell play no role. Such a regime is biophysically unrealistic, as it implies that the cell would respond to perturbations infinitely fast. We propose to address this issue by considering a new type of variability-preserving limit model, obtained for the classical scaling but in the presence of synchrony-based correlations. For simplicity, let us consider our correlated input model with excitation alone in the limit of an arbitrarily large number of inputs Ke. When ρe > 0, the small-weight approximation Eq. (20) suggests that adopting the scaling we ∼ Ωe/Ke, where Ωe denotes the aggregate synaptic weight, yields a nonzero contribution when Ke → ∞, as the numerator scales as O(Ke²we²). It turns out that this choice can be shown to be valid without resorting to any approximations. Indeed, under the classical scaling assumption, we show in Appendix O that the discrete jump distribution pe,k weakly converges to the continuous density dνe/dw in the sense that

b_e \sum_{k=1}^{K_e} p_{e,k} \, \delta\!\left(w - \frac{k \Omega_e}{K_e}\right) dw \xrightarrow[K_e \to \infty]{} \nu_e(dw) = r_e \beta_e \, w^{-1} \left(1 - \frac{w}{\Omega_e}\right)^{\beta_e - 1} dw . \quad (22)

The above density has infinite mass over (0, Ωe] owing to its diverging behavior at zero and is referred to as a degenerate beta distribution. In spite of its degenerate nature, it is known that densities of the above form define well-posed processes, the so-called beta processes, which have been studied extensively in the field of nonparametric Bayesian inference [61,62]. These beta processes represent generalizations of our compound Poisson process drives insofar as they allow for a countable infinity of jumps to occur within a finite time window. This is a natural requirement to impose when considering an infinite pool of synchronous synaptic inputs, the overwhelming majority of which have nearly zero amplitude.

The above arguments show that one can define a generalized class of synchronous input models that can serve as the drive of AONCB neurons as well. Such generalizations are obtained as limits of compound Poisson processes and are specified via their Lévy-Khintchine measures, which formalize the role of νe [78,79]. Our results naturally extend to this generalized class. Concretely, for excitation alone, our results extend by replacing all expectations of the form beEe[·] by integrals with respect to the measure νe. One can easily check that these expectations, which feature prominently in the definitions of the various synaptic efficacies, all remain finite for Lévy-Khintchine measures. In particular, the voltage mean and variance of AONCB neurons remain finite, with

E[V] = \frac{V_e \int_0^{\Omega_e} \left(1 - e^{-w}\right) \nu_e(dw)}{1/\tau + \int_0^{\Omega_e} \left(1 - e^{-w}\right) \nu_e(dw)} , \qquad V[V] = \frac{\left(V_e - E[V]\right)^2 \int_0^{\Omega_e} \left(1 - e^{-w}\right)^2 \nu_e(dw)}{2/\tau + \int_0^{\Omega_e} \left(1 - e^{-2w}\right) \nu_e(dw)} .

Thus, considering the classical scaling limit we ∼ 1/Ke preserves nonzero subthreshold variability in the infinite-size limit Ke → ∞ as long as νe puts mass away from zero, i.e., for βe < ∞ ⇔ ρe > 0. Furthermore, we show in Appendix O that V[V] = O(ρe), so that the voltage variability consistently vanishes in the absence of spiking correlations, for which νe concentrates at zero, i.e., when βe → ∞ ⇔ ρe = 0.
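As an illustration, the following sketch evaluates these integrals numerically against the degenerate beta density of Eq. (22); the parameter set (re, ρe, Ωe, τ, Ve) is an illustrative assumption.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: stationary moments under the beta-process drive (excitation
# alone), evaluating the integrals against nu_e(dw) numerically.
re, rho_e, Omega, tau, Ve = 10.0, 0.03, 1.0, 15e-3, 60.0
beta_e = 1.0 / rho_e - 1.0

def nu(w):  # Levy density of the degenerate beta process, Eq. (22)
    return re * beta_e * (1.0 - w / Omega) ** (beta_e - 1.0) / w

I1 = quad(lambda w: (1 - np.exp(-w)) * nu(w), 0.0, Omega)[0]
I2 = quad(lambda w: (1 - np.exp(-w)) ** 2 * nu(w), 0.0, Omega)[0]
I3 = quad(lambda w: (1 - np.exp(-2 * w)) * nu(w), 0.0, Omega)[0]

EV = Ve * I1 / (1.0 / tau + I1)
VV = (Ve - EV) ** 2 * I2 / (2.0 / tau + I3)
print(f"E[V] = {EV:.2f} mV,  V[V] = {VV:.2f} mV^2")
```

Note that the integrands remain bounded near w = 0 even though νe itself has infinite mass there, since 1 − e⁻ʷ ∼ w cancels the w⁻¹ divergence.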

IV. DISCUSSION

A. Synchrony modeling

We have presented a parametric representation of the neuronal drives resulting from a finite number of asynchronous or (weakly) synchronous synaptic inputs. Several parametric statistical models have been proposed for generating correlated spiking activity in a discrete setting [59,80–82]. Such models have been used to analyze the activity of neural populations via Bayesian inference methods [83–85], as well as maximum entropy methods [86,87]. Our approach is not to simulate or analyze complex neural dependencies but rather to derive from first principles the synchronous input models that could drive conductance-based neuronal models. This approach primarily relies on extending the definition of discrete-time correlated spiking models akin to Ref. [59] to the continuous-time setting. To do so, the main tenet of our approach is to realize that input synchrony and spiking correlation represent equivalent measures under the assumption of input exchangeability.

Input exchangeability posits that the driving inputs form a subset of an arbitrarily large pool of exchangeable random variables [55,56]. In particular, this implies that the main determinant of the neuronal drive is the number of active inputs, as opposed to the magnitude of these synaptic inputs. Then, the de Finetti theorem [57] states that the probability of observing a given input configuration can be represented in the discrete setting under an integral form [see Eq. (3)] involving a directing probability measure F. Intuitively, F represents the probability distribution of the fraction of coactivating inputs at any discrete time. Our approach identifies the directing measure F as a free parameter that captures input synchrony. The more dispersed the distribution F, the more synchronous the inputs, as previously noted in Refs. [88,89]. Our work elaborates on this observation to develop computationally tractable statistical models for synchronous spiking in the continuous-time limit, i.e., for vanishing discrete time step Δt → 0⁺.

We derive our results using a discrete-time directing measure chosen as a beta distribution, F ∼ B(α, β), where the parameters α and β can be related to the individual spiking rate r and the spiking correlation ρ via rΔt = α/(α+β) and ρ = 1/(1+α+β). For this specific choice of distribution, we are able to construct statistical models of the correlated spiking activity as generalized beta-binomial processes [60], which play an important role in statistical Bayesian inference [61,62]. This construction allows us to fully parametrize the synchronous activity of a finite number of inputs via the jump distribution of a compound Poisson process, which depends explicitly on the spiking correlation. Being continuously indexed in time, stationary compound Poisson processes can naturally serve as drives to biophysically relevant neuronal models. The idea of utilizing compound Poisson processes to model input synchrony was originally proposed in Refs. [90–92], but without constructing these processes as limits of discrete spiking models and without providing explicit functional forms for their jump distributions. More generally, our synchrony modeling can be interpreted as a limit case of the formalism proposed in Refs. [93,94] to model correlated spiking activity via multidimensional Poisson processes.
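In this parametrization, the map between (r, ρ) and (α, β) is elementary; the following sketch (with illustrative argument values) inverts the relations rΔt = α/(α+β) and ρ = 1/(1+α+β):

```python
# Sketch: recover the beta parameters (alpha, beta) of the directing
# measure from the interpretable pair (r, rho) at time step dt.
def beta_params(r, rho, dt):
    s = 1.0 / rho - 1.0            # s = alpha + beta
    alpha = r * dt * s
    return alpha, s - alpha

print(beta_params(r=10.0, rho=0.03, dt=1e-3))
# alpha vanishes as dt -> 0+, while beta -> 1/rho - 1.
```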

B. Moment analysis

We analytically characterize the subthreshold variability of a tractable conductance-based neuronal model, the AONCB neuron, when driven by synchronous synaptic inputs. The analytical characterization of a neuron's voltage fluctuations has been the focus of intense research [46,47,95–97]. These attempts have considered neuronal models that already incorporate some diffusion scaling hypotheses [98,99], formally obtained by assuming an infinite number of synaptic inputs. The primary benefit of these diffusion approximations is that one can treat the corresponding Fokker-Planck equations to quantify neuronal variability in conductance-based integrate-and-fire models while also including the effect of postspiking reset [37,38]. In practice, subthreshold variability is often estimated in the effective-time-constant approximation, while neglecting the multiplicative noise contributions due to voltage-dependent membrane fluctuations [46,95,96], although an exact treatment is also possible without this simplifying assumption [38]. By contrast, the analysis of conductance-based models has resisted exact treatment when driven by shot noise, as for compound Poisson input processes, rather than by Gaussian white noise, as in the diffusion approximation [41–43].

The exact treatment of shot-noise-driven neuronal dynamics is primarily hindered by the limitations of the Itô-Stratonovich integrals [65,100] in capturing the effects of point-process-based noise sources, even without including a reset mechanism. These limitations were originally identified by Marcus, who proposed to approach the problem via a new type of stochastic equation [44,45]. The key to the Marcus equation is to define shot noise as the limit of regularized, well-behaved approximations of that shot noise, for which classical calculus applies [66]. In practice, these approximations are canonically obtained as the solutions of shot-noise-driven Langevin equations with relaxation timescale τs, and shot noise is formally recovered in the limit τs → 0⁺. Our assertion here is that all-or-none conductances implement such a form of shot-noise regularization, for which a natural limiting process can be defined when synapses operate instantaneously, i.e., τs → 0⁺. The main difference with the canonical Marcus approach is that our regularization is all-or-none, substituting each Dirac delta impulse with a finite steplike impulse of duration τs and magnitude 1/τs, thereby introducing a synaptic timescale but without any relaxation mechanism.

The above assertion is the basis for introducing AONCB neurons and is supported by our ability to obtain exact formulas for the first two moments of their stationary voltage dynamics [see Eqs. (14) and (16)]. For τs > 0, these moments can be expressed in terms of synaptic efficacies that take exact but rather intricate integral forms. Fortunately, these efficacies drastically simplify in the instantaneous synapse limit τs → 0⁺, for which the canonical shot-noise drive is recovered. The resulting formulas mirror those obtained in the diffusion and effective-time-constant approximations [46,47], except that they involve synaptic efficacies whose expressions are original in three ways [see Eqs. (15), (G4), (G7), and (G8)]: First, independent of input synchrony, these efficacies all have exponential forms and saturate in the limit of large synaptic weights. Such saturation is a general characteristic of shot-noise-driven, continuously relaxing systems [101–103]. Second, these efficacies are defined as expectations with respect to the jump distribution pei of the driving compound Poisson process [see Eq. (11) and Appendix B]. A nonzero dispersion of pei, indicating that synaptic activation is truly modeled via random variables We and Wi, is the hallmark of input synchrony [91,92]. Third, these efficacies involve the overall rate of synaptic events b [see Eq. (12)], which also depends on input synchrony. Such dependence can be naturally understood within the framework of Palm calculus [104], a form of calculus specially developed for stationary point processes.

C. Biophysical relevance

Our analysis allows us to investigate quantitatively how subthreshold variability depends on the numbers and strength of the synaptic contacts. This approach requires that we infer synaptic weights from the typical peak time and peak amplitude of the somatic membrane fluctuations caused by postsynaptic potentials [72,75,76]. Within our modeling framework, these weights are dimensionless quantities that we estimate by fitting the AONCB neuronal response to a single all-or-none synaptic activation at rest. For biophysically relevant parameters, this yields typically small synaptic weights in the sense that we, wi ≪ 1. These small values warrant adopting the small-weight approximation, for which expressions (14) and (16) simplify.

In the small-weight approximation, the mean voltage becomes independent of input synchrony, whereas the simplified voltage variance Eq. (20) depends on input synchrony only via the spiking correlation coefficients ρe, ρi, and ρei, as opposed to depending on a full jump distribution. Spike-count correlations have been experimentally shown to be weak in cortical circuits [10–12], and, for this reason, most theoretical approaches argued for asynchronous activity [17,105–109]. A putative role for synchrony in neural computations remains a matter of debate [110–112]. In modeled networks, although the tight balance regime implies asynchronous activity [19–21], the loosely balanced regime is compatible with the establishment of strong neuronal correlations [22–24]. When distributed over large networks, weak correlations can still give rise to precise synchrony, once information is pooled from a large enough number of synaptic inputs [32,33]. In this view, and assuming that distinct inputs play comparable roles, correlations measure the propensity of distinct synaptic inputs impinging on a neuron to coactivate, which represents a clear form of synchrony. Our analysis shows that considering synchrony in amounts consistent with the levels of observed spiking correlation is enough to account for the surprisingly large magnitude of subthreshold neuronal variability [1,26–28]. In contrast, the asynchronous regime yields unrealistically low variability, an observation that challenges the basis for the asynchronous state hypothesis.

Recent theoretical works [37,38] have also noted that the asynchronous state hypothesis seems at odds with certain features of cortical activity, such as the emergence of spontaneous activity or the maintenance of significant average polarization during evoked activity. Zerlaut et al. have analyzed under which conditions conductance-based networks can achieve a spectrum of asynchronous states with realistic neural features. In their work, a key variable to achieve this spectrum is a strong afferent drive that modulates a balanced network with moderate recurrent connections. Moderate recurrent conductances are inferred from allowing for up to 2 mV somatic deflections at rest, whereas the afferent drive is provided via even stronger synaptic conductances that can activate synchronously. These inferred conductances appear large in light of recent in vivo measurements [72,75,76], and the corresponding synaptic weights all satisfy we, wi ≥ 0.01 within our framework. Correspondingly, the typical connectivity numbers considered are small, with Ke=200, Ki=50 for recurrent connections and Ke=10 for the coactivating afferent projections. Thus, results from Ref. [37] appear consistent with our observation that realistic subthreshold variability can be achieved asynchronously only for a restricted number of large synaptic weights. Our findings, however, predict that these results follow from connectivity sparseness and will not hold in denser networks, for which the pairwise spiking correlation will exceed the empirical criterion for asynchrony, e.g., ρe > 1/Ke (ρe < 0.005 ≃ 1/Ke in Ref. [37]). Sanzeni et al. have pointed out that implementing the effective-time-constant approximation in conductance-based models suppresses subthreshold variability, especially in the high-conductance state [77]. As mentioned here, this suppression causes the voltage variability to decay as O(we) + O(wi) in any scaling limit with vanishing synaptic weights. Sanzeni et al. observe that such decay is too fast to yield realistic variability for the balanced scaling, which assumes we ∼ 1/√Ke and wi ∼ 1/√Ki. To remedy this point, these authors propose to adopt a slower scaling of the weights, i.e., we ∼ 1/ln Ke and wi ∼ 1/ln Ki, which can be derived from the principle of rate conservation in neural networks. Such a scaling is sufficiently slow for variability to persist in networks with large connectivity numbers (≃10⁵). However, as with any scaling with vanishing weights, our exact analysis shows that such scaling must eventually lead to decaying variability, thereby challenging the basis for the asynchronous state hypothesis.

Both of these studies focus on the network dynamics of conductance-based networks under the diffusion approximation. Diffusive behaviors rigorously emerge only under some scaling limit with vanishing weights [98,99]. By focusing on the single-cell level rather than the network level, we are able to demonstrate that the effective-time-constant approximation holds exactly for shot-noise-driven, conductance-based neurons, without any diffusive approximation. Consequently, suppression of variability must occur independent of any scaling choice, except in the presence of input synchrony. Although this observation poses a serious theoretical challenge to the asynchronous state hypothesis, observe that it does not invalidate the practical usefulness of the diffusion approximation. For instance, we show in Fig. 11 that the mean spiking response of a shot-noise-driven AONCB neuron with an integrate-and-fire mechanism can be satisfactorily captured via the diffusion approximation. In addition, our analysis allows one to extend the diffusion approximation to include input synchrony.

FIG. 11.

Diffusion approximations in the presence of synchrony. (a) Comparison of an asynchronously driven integrate-and-fire AONCB neuron (blue trace) with its diffusion approximation obtained via the effective-time-constant approximation (red trace). (b) Comparison of a synchronously driven integrate-and-fire AONCB neuron (blue trace) with its diffusion approximation obtained by our exact analysis (red trace). Parameters: Ke=1000, Ki=350, τ=15 ms, we=0.001, wi=0.004, re=ri=25 Hz, ρe=ρi=0.03, ρei=0, VT=15 mV, and VR=12 mV.

D. Limitations of the approach

A first limitation of our analysis is that we neglect the spike-generating mechanism as a source of neural variability. Most diffusion-based approaches model spike generation via the integrate-and-fire mechanism, whereby the membrane voltage resets to a fixed value upon reaching a spike-initiation threshold [37,38,46,47,95–97]. Accounting for such a mechanism can impact our findings in two ways: (i) By confining the voltage below the spiking threshold, the spiking mechanism may suppress the mean response enough for the neuron to operate well within the high-conductance regime for large input drives. Such a scenario would still produce exceedingly low variability due to variability quenching in the high-conductance regime, consistent with Ref. [1]. (ii) The additional variability due to postspiking resets may dominate the synaptic variability, so that a large overall subthreshold variability can be achieved in spite of low synaptic variability. This possibility also seems unlikely, as dominant yet stereotypical resets would imply a quasideterministic neural response [71]. Addressing the above limitations quantitatively requires extending our exact analysis to include the integrate-and-fire mechanism using techniques from queueing theory [104]. This is beyond the scope of this work. We note, however, that implementing a postspiking reset to a fixed voltage level yields simulated trajectories that markedly differ from physiological ones (see Fig. 1), for which the postspiking voltage varies across conditions [26–28].

A second limitation of our analysis is our assumption of exchangeability, which is the lens through which we operate a link between spiking correlations and input drives. Taken literally, the exchangeability assumption states that synapses all have a typical strength and that conductance variability primarily stems from the variable numbers of coactivating synapses. This is certainly an oversimplification, as synapses exhibit heterogeneity [113], which likely plays a role in shaping neural variability [114]. Distinguishing between heterogeneity and correlation contributions, however, is a fundamentally ambiguous task [115]. For instance, considering Ke synchronous inputs with weight we at rate be and with jump probability pe [see Eqs. (5) and (9)] is indistinguishable from considering Ke independent inputs with heterogeneous weights we, 2we, …, Kewe and rates Kerepe,k. Within our modeling approach, accounting for synaptic heterogeneity, with a dispersed distribution qe(w) for the synaptic weights, can be done by taking the jump distribution pe as

p_e(w) = \sum_{k=1}^{K_e} q_e^{(k)}(w) \, p_{e,k} ,

where qe^(k) refers to the k-fold convolution of qe(w). This leads to an overdispersion of the jump distribution pe and, thus, increased subthreshold neural variability. Therefore, while we have assumed exchangeability, our approach can accommodate weight heterogeneity. The interpretation of our results in terms of synchrony rather than heterogeneity is supported by recent experimental evidence that cortical response selectivity derives from strength in numbers of synapses rather than differences in synaptic weights [116].
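A minimal sketch of this convolution-mixture construction is given below; the weight density qe and the synchrony distribution pe,k are toy placeholders discretized on a regular weight grid, chosen purely for illustration.

```python
import numpy as np

# Sketch: fold synaptic-weight heterogeneity q_e(w) into the jump
# distribution via k-fold convolutions, as in the displayed mixture.
dw = 1e-3
w = np.arange(1, 201) * dw                       # weight grid
qe = np.exp(-w / 0.01); qe /= qe.sum()           # heterogeneous weights (toy)
pk = np.array([0.7, 0.2, 0.1])                   # p_{e,k} for k = 1, 2, 3 (toy)

pe, conv = np.zeros(3 * len(qe)), None
for k, p in enumerate(pk, start=1):
    conv = qe if conv is None else np.convolve(conv, qe)  # k-fold convolution
    pe[:len(conv)] += p * conv                   # mixture over jump counts k
print("mass:", pe.sum())                         # ~1: pe is a distribution
```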

A third limitation of our analysis is that it considers a perfect form of synchrony, with exactly simultaneous synaptic activations. Although seemingly unrealistic, we argue that perfect input synchrony can still yield biologically relevant estimates of the voltage variability. For instantaneous synchrony, the empirical spiking correlation is independent of the timescale over which spikes are counted, i.e., ρemp = ρe, as shown in Fig. 12(a) (blue line). This is a potential problem, because spiking correlations have been measured to vanish on small timescales in experimental recordings [117,118]. More realistic input models can be obtained by jittering instantaneously synchronous spikes. Such a procedure leads to a general decrease in the empirical spiking correlations ρemp(Δt) over all timescales Δt, including for Δt=25 ms [vertical dashed line in Fig. 12(a)], with correlations vanishing in the limit of small timescales Δt → 0 [red, yellow, and purple lines in Fig. 12(a)]. Analysis of the temporal structure of spiking correlations in Refs. [117,118] suggests that correlations ρemp(Δt) lie within the range 0.01–0.04 for Δt ≃ 25 ms. We focus on this timescale because it is just larger than the membrane time constant of the neuron. Then, to achieve realistic correlations at Δt ≃ 25 ms, the instantaneous spiking correlation of the unjittered synchronous input model, denoted by ρ, may be increased. Adopting a jittering timescale of σJ=50 ms, Fig. 12(b) shows that ρemp(Δt) ≃ 0.03 with Δt=25 ms for an instantaneous spiking correlation ρ within the range 0.2–0.3. Note that, for very long timescales Δt, this also implies that the empirical spiking correlation saturates at ρ ≃ 0.2–0.3, as reported in Refs. [117,118]. To validate that our instantaneous model makes realistic predictions about the subthreshold variability, we simulate AONCB neurons in response to these jittered synchronous inputs. Figure 12(c) shows that the resulting stationary voltage distribution (red histogram) closely follows the distribution obtained by assuming instantaneous synchrony with ρe chosen such that ρe = ρemp(Δt=25 ms) (blue trace and histogram). Furthermore, we can justify the choice of the timescale Δt=25 ms a posteriori. Specifically, in Fig. 12(d), we consider temporally structured inputs obtained from the same instantaneous synchrony ρ but for various jittering timescales σJ. Jittering at larger timescales σJ reduces synchrony and voltage variance (vertical dashed lines). We then compare the resulting voltage variance with perfectly synchronous approximations obtained by matching the spike-count correlation at various timescales (our choice is to match at 25 ms). Figure 12(d) shows that matching at increasing timescales yields higher variance, but matching at Δt ≃ 25 ms offers good approximations (gray square where the variances are about the same). Extending our analytic results to include jittering would require modeling spiking correlations via multidimensional Poisson processes rather than via compound Poisson processes [93,94]. However, this is beyond the scope of this work. A remaining limitation of our synchrony modeling is that our analysis can account only for non-negative, instantaneous correlations between excitation and inhibition, while in reality such correlations may be negative and are expected to peak at a nonzero time lag.

FIG. 12.

Impact of jittering synchronous inputs. (a) Effect of jittering synchronous spike times via independent centered Gaussian time shifts with varied standard deviation σJ: without jitter, the spiking correlation is independent of the size of the time bins used to count spikes (blue trace). Jittering with larger σJ decreases the spiking correlation for all bin sizes, with the spiking correlation vanishing in the limit of small bin sizes. (b) Given a jitter standard deviation of σJ=50 ms, one obtains a spike-count correlation of ρ(Δt)=0.03 in Δt=25 ms bins by jittering a synchronous input with instantaneous correlation ρe=0.2–0.3. (c) Comparison of voltage traces obtained with instantaneously synchronous inputs (blue) and jittered correlated inputs (red) for σJ=50 ms. Both types of input are chosen so that they yield the same spiking correlation ρe=ρ(Δt)=0.03 with a bin size of Δt=25 ms. The stationary distributions are close to identical, leading to less than 1% error in the variance estimates. (d) Comparison between the voltage variances of an AONCB neuron driven by realistic synchronous inputs with various jitters (dashed line) and the voltage variances of the same AONCB neuron driven by instantaneously synchronous approximations (solid line). For each σJ, different instantaneous approximations are obtained by setting ρe=ρ(Δt) for various bin sizes Δt. Good approximations are consistently obtained for Δt ≃ 25 ms (gray column). Other parameters: re=10 Hz, Ke=1000, and we=10⁻³.

A fourth limitation of our analysis is that it is restricted to a form of synchrony that ignores temporal heterogeneity. This is a limitation because a leading hypothesis for the emergence of variability is that neurons generate spikes as if through a doubly stochastic process, i.e., as a Poisson process with temporally fluctuating rate [119]. To better understand this limitation, let us interpret our exchangeability-based modeling approach within the framework of doubly stochastic processes [51,52]. This can be done most conveniently by reasoning on the discrete correlated spiking model specified by Eq. (3). Specifically, given a fixed bin size Δt > 0, one can interpret the collection of i.i.d. variables θ ∼ F as an instantaneously fluctuating rate. In this interpretation, nonzero correlations can be seen as emerging from a doubly stochastic process for which the rate fluctuates as uncorrelated noise, i.e., with zero correlation time. This zero correlation time is potentially a serious limitation, as it has been argued that shared variability is best modeled by a low-dimensional latent process evolving with slow, or even smooth, dynamics [82]. Addressing this limitation will require developing limit spiking models with nonzero correlation times using probabilistic techniques that are beyond the scope of this work [56].

A final limitation of our analysis is that it does not explain the consistent emergence of synchrony in network dynamics. It remains conceptually unclear how synchrony can emerge and persist in neural networks that are fundamentally plagued by noise and exhibit large degrees of temporal and cellular heterogeneity. It may well be that carefully taking into account the finite size of networks will be enough to produce the desired level of synchrony-based correlation, which is rather weak after all. Still, one would have to check whether achieving a given degree of synchrony requires the tuning of certain network features, such as the degree of shared input, the propensity of certain recurrent motifs [120], or the relative width of recurrent connections with respect to feedforward projections [121]. From a theoretical standpoint, the asynchronous state hypothesis answers the consistency problem by assuming no spiking correlations and, thus, no synchrony. One can justify this assumption in idealized mathematical models by demonstrating the so-called "propagation-of-chaos" property [122], which rigorously holds for certain scaling limits with vanishing weights and under the assumption of exchangeability [107–109]. In this light, the main theoretical challenge posed by our analysis is to extend the latter exchangeability-based property to include nonzero correlations [123] and, hopefully, to characterize irregular synchronous states in some scaling limits.

ACKNOWLEDGMENTS

L. A. B., B. L., N. J. P., E. S., and T. T. were supported by the Vision Research program of the National Institutes of Health under Award No. R01EY024071. L. A. B. and T. T. were also supported by the CRCNS program of the National Science Foundation under Award No. DMS-2113213. We thank François Baccelli, David Hansel, and Nicolas Brunel for insightful discussions.

APPENDIX A: DISCRETE-TIME SPIKING CORRELATION

In this appendix, we consider first the discrete-time version of our model for possibly correlated excitatory synaptic inputs. In this model, we consider that observing Ke synaptic inputs during N time steps specifies a {0, 1}-valued matrix (X_{k,i})_{1≤k≤Ke, 1≤i≤N}, where 1 indicates that an input is received and 0 indicates an absence of inputs. For simplicity, we further assume that the inputs are independent across time:

P\!\left[\left(X_{k,i}\right)_{1 \leq k \leq K_e, \, 1 \leq i \leq N}\right] = \prod_{i=1}^{N} P\!\left[\left(X_{k,i}\right)_{1 \leq k \leq K_e}\right] ,

so that we can drop the time index and consider the population vector (X_k)_{1≤k≤Ke}. Consequently, given the individual spiking rate re, we have E[Xk] = P[Xk = 1] = reΔt, where Δt is the duration of the time step within which a spike may or may not occur. Under the assumption that (X_k)_{1≤k≤Ke} belongs to an infinitely exchangeable set of random variables, the de Finetti theorem states that there exists a probability measure Fe on [0, 1] such that

P\!\left[\left(X_k\right)_{1 \leq k \leq K_e}\right] = \int \prod_{k=1}^{K_e} \theta_e^{X_k} \left(1 - \theta_e\right)^{1 - X_k} dF_e(\theta_e) .

Assuming the directing measure Fe known, we can compute the spiking correlation attached to our model. To see this, first observe that, specifying the above probabilistic model for Ke=1, we have

E[X_k] = E\!\left[E[X_k \mid \theta_e]\right] = E[\theta_e] = \int \theta_e \, dF_e(\theta_e) .

Then, using the law of total covariance and specializing the above probabilistic model to Ke=2, we have

C[X_k, X_l] = E\!\left[C[X_k, X_l \mid \theta_e]\right] + C\!\left[E[X_k \mid \theta_e], E[X_l \mid \theta_e]\right] = 1_{\{k=l\}} E\!\left[V[X_k \mid \theta_e]\right] + C[\theta_e, \theta_e] = 1_{\{k=l\}} E\!\left[\theta_e (1 - \theta_e)\right] + V[\theta_e] = 1_{\{k=l\}} E[\theta_e]\left(1 - E[\theta_e]\right) + 1_{\{k \neq l\}} V[\theta_e] .

This directly yields that the spiking correlation reads

\rho_e = \frac{C[X_k, X_l]}{V[X_k]} = \frac{V[\theta_e]}{E[\theta_e]\left(1 - E[\theta_e]\right)} . \quad (A1)
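This identity is easy to check by Monte Carlo simulation; the sketch below uses a beta directing measure, for which Eq. (A1) reduces to ρe = 1/(1 + α + β) (parameter values are illustrative).

```python
import numpy as np

# Sketch: Monte Carlo check of Eq. (A1) for Fe = Beta(alpha, beta).
rng = np.random.default_rng(0)
alpha, beta, n = 0.5, 4.5, 200_000
theta = rng.beta(alpha, beta, size=n)            # directing variable
X = rng.random((n, 2)) < theta[:, None]          # two exchangeable 0/1 inputs
print("empirical rho_e:", np.corrcoef(X[:, 0], X[:, 1])[0, 1])
print("theory:", 1.0 / (1.0 + alpha + beta))     # = V[th]/(E[th](1-E[th]))
```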

The exact same calculations can be performed for the partially exchangeable case of mixed excitation and inhibition. The assumption of partial exchangeability requires that, when considered separately, the {0, 1}-valued vectors (X_1, …, X_{Ke}) and (Y_1, …, Y_{Ki}) each belong to an infinitely exchangeable sequence of random variables. Then, de Finetti's theorem states that the probability to find the full vector of inputs (X_1, …, X_{Ke}, Y_1, …, Y_{Ki}) in any particular configuration is given by

P\!\left[X_1, \ldots, X_{K_e}, Y_1, \ldots, Y_{K_i}\right] = \int \prod_{k=1}^{K_e} \theta_e^{X_k} \left(1 - \theta_e\right)^{1 - X_k} \prod_{l=1}^{K_i} \theta_i^{Y_l} \left(1 - \theta_i\right)^{1 - Y_l} dF_{ei}(\theta_e, \theta_i) , \quad (A2)

where the directing measure Fei fully parametrizes our probabilistic model. Performing similar calculations as for the case of excitation alone within this partially exchangeable setting yields

\rho_{ei} = \frac{C[X_k, Y_l]}{\sqrt{V[X_k] \, V[Y_l]}} = \frac{C[\theta_e, \theta_i]}{\sqrt{E[\theta_e]\left(1 - E[\theta_e]\right) E[\theta_i]\left(1 - E[\theta_i]\right)}} . \quad (A3)

APPENDIX B: COMPOUND POISSON PROCESSES AS CONTINUOUS-TIME LIMITS

Let us consider the discrete-time model specified by Eq. (A2), which is obtained under the assumption of partial infinite exchangeability. Under this assumption, the probability laws of the inputs are entirely determined by the distribution of (ke, ki), where ke denotes the number of active excitatory inputs and ki the number of active inhibitory inputs. This distribution can be computed as

P_{ei,kl} = P[k_e = k, k_i = l] = \binom{K_e}{k} \binom{K_i}{l} \int \theta_e^k \left(1 - \theta_e\right)^{K_e - k} \theta_i^l \left(1 - \theta_i\right)^{K_i - l} dF_{ei}(\theta_e, \theta_i) .

It is convenient to choose the directing measures as beta distributions, since these are conjugate to the binomial distributions. Such a choice yields a class of probabilistic models referred to as beta-binomial models, which have been studied extensively [61,62]. In this appendix, we always assume that the marginals Fe and Fi have the form Fe ∼ Beta(αe, βe) and Fi ∼ Beta(αi, βi). Then, direct integration shows that the marginal distributions for the number of excitatory inputs and inhibitory inputs are

P_{e,k} = \sum_{l=0}^{K_i} P_{ei,kl} = \binom{K_e}{k} \frac{B(\alpha_e + k, \beta_e + K_e - k)}{B(\alpha_e, \beta_e)} \quad \text{and} \quad P_{i,l} = \sum_{k=0}^{K_e} P_{ei,kl} = \binom{K_i}{l} \frac{B(\alpha_i + l, \beta_i + K_i - l)}{B(\alpha_i, \beta_i)} .

Moreover, given individual spiking rates re and ri within a time step Δt, we have

r_e \Delta t = E[X_k] = P[X_k = 1] = E[\theta_e] = \frac{\alpha_e}{\alpha_e + \beta_e} \quad \text{and} \quad r_i \Delta t = E[Y_l] = P[Y_l = 1] = E[\theta_i] = \frac{\alpha_i}{\alpha_i + \beta_i} .

The continuous-time limit is obtained by taking Δt → 0⁺, which implies that the parameters αe and αi jointly vanish. When αe, αi → 0⁺, the beta distributions Fe and Fi become deficient, and we have Pe,0, Pi,0 → 1. In other words, time bins of size Δt almost surely have no active inputs in the limit Δt → 0⁺. Actually, one can show that

1 - P_{e,0} \sim \left(\psi(K_e + \beta_e) - \psi(\beta_e)\right) \alpha_e \quad \text{and} \quad 1 - P_{i,0} \sim \left(\psi(K_i + \beta_i) - \psi(\beta_i)\right) \alpha_i ,

where ψ denotes the digamma function. This indicates that, in the limit Δt → 0⁺, the times at which some excitatory inputs or some inhibitory inputs are active define a point process. Moreover, owing to the assumption of independence across time, this point process is actually a Poisson point process. Specifically, consider a time T > 0 and set Δt = T/N for some large integer N. Define the sequences of times

T_{e,n} = \frac{T}{N} \inf\left\{ i > N T_{e,n-1}/T \;\middle|\; k_{e,i} \geq 1 \right\} \quad \text{with} \quad T_{e,1} = \frac{T}{N} \inf\left\{ i \geq 0 \;\middle|\; k_{e,i} \geq 1 \right\} ,
T_{i,n} = \frac{T}{N} \inf\left\{ i > N T_{i,n-1}/T \;\middle|\; k_{i,i} \geq 1 \right\} \quad \text{with} \quad T_{i,1} = \frac{T}{N} \inf\left\{ i \geq 0 \;\middle|\; k_{i,i} \geq 1 \right\} .

Considered separately, the sequences of times (T_{e,n})_{n≥1} and (T_{i,n})_{n≥1} constitute binomial approximations of Poisson processes, which we denote by Ne and Ni, respectively. It is a classical result that these limit Poisson processes are recovered exactly when N → ∞ and that their rates are, respectively, given by

b_e = \lim_{\Delta t \to 0^+} \frac{1 - P_{e,0}}{\Delta t} = \left(\psi(K_e + \beta_e) - \psi(\beta_e)\right) \lim_{\Delta t \to 0^+} \frac{\alpha_e}{\Delta t} = \left(\psi(K_e + \beta_e) - \psi(\beta_e)\right) \beta_e r_e ,
b_i = \lim_{\Delta t \to 0^+} \frac{1 - P_{i,0}}{\Delta t} = \left(\psi(K_i + \beta_i) - \psi(\beta_i)\right) \lim_{\Delta t \to 0^+} \frac{\alpha_i}{\Delta t} = \left(\psi(K_i + \beta_i) - \psi(\beta_i)\right) \beta_i r_i .

For all integers K > 1, the function β ↦ β(ψ(K+β) − ψ(β)) is an increasing analytic function on the domain ℝ₊ with range [1, K]. Thus, we always have re ≤ be ≤ Kere and ri ≤ bi ≤ Kiri, and the extreme cases are achieved for perfect or zero correlations. Perfect correlations are achieved when ρe=1 or ρi=1, which corresponds to βe → 0 or βi → 0. This implies that be=re and bi=ri, consistent with all synapses activating simultaneously. Zero correlations are achieved when ρe=0 or ρi=0, which corresponds to βe → ∞ or βi → ∞. This implies that be=Kere and bi=Kiri, consistent with all synapses activating asynchronously, so that no inputs simultaneously activate. Observe that, in all generality, the rates be and bi are such that the mean number of spikes over the duration T is conserved in the limit Δt → 0⁺. For instance, one can check that

K_e r_e T = E\!\left[\sum_{T_{e,n} \leq T} k_{e, N T_{e,n}/T}\right] = E\!\left[\sum_{n=1}^{N_e(T)} k_{e,n}\right] = E[N_e(T)] \, E_e[k_e] = b_e T \, E_e[k_e] .
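These digamma expressions are straightforward to evaluate numerically; the sketch below checks that be interpolates between the synchronous bound re (βe → 0) and the asynchronous bound Kere (βe → ∞), with illustrative parameters.

```python
from scipy.special import digamma

# Sketch: b_e = (psi(Ke + beta_e) - psi(beta_e)) * beta_e * re spans
# the interval [re, Ke*re] as beta_e spans (0, infinity).
Ke, re = 100, 10.0
for beta_e in [1e-6, 1.0, 10.0, 1e6]:
    be = (digamma(Ke + beta_e) - digamma(beta_e)) * beta_e * re
    print(f"beta_e = {beta_e:g}: b_e = {be:.2f} Hz (bounds {re}..{Ke * re})")
```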

When excitation and inhibition are considered separately, the limit Δt → 0⁺ specifies two compound Poisson processes:

t \mapsto \sum_{n=1}^{N_e(t)} k_{e,n} \quad \text{and} \quad t \mapsto \sum_{n=1}^{N_i(t)} k_{i,n} ,

where Ne and Ni are Poisson processes with rates be and bi and where (k_{e,n})_{n≥1} are i.i.d. according to pe and (k_{i,n})_{n≥1} are i.i.d. according to pi. Nonzero correlations between excitation and inhibition emerge when the Poisson processes Ne and Ni are not independent. This corresponds to the processes Ne and Ni sharing times, so that excitation and inhibition occur simultaneously at these times. To understand this point intuitively, let us consider the limit Poisson process N obtained by considering synaptic events without distinguishing excitation and inhibition. For perfect correlation, i.e., ρei=1, all synapses activate synchronously and we have N=Ne=Ni: All times are shared. By contrast, for zero correlation, i.e., ρei=0, no synapses activate simultaneously and we have N=Ne+Ni: No times are shared. In the intermediary regime of correlations, a nonzero fraction of times are shared, resulting in a driving Poisson process N with overall rate b satisfying min(be, bi) ≤ b < be + bi. We investigate the above intuitive statements quantitatively in Appendix D by inspecting two key examples.

Let us conclude this appendix by recapitulating the general form of the limit compound process Z obtained in the continuous-time limit Δt → 0⁺ when jointly considering excitation and inhibition. This compound Poisson process can be represented as

t \mapsto Z_t = \left( \sum_{n=1}^{N(t)} W_{e,n} , \; \sum_{n=1}^{N(t)} W_{i,n} \right) ,

where N is the Poisson process registering all synaptic events without distinguishing excitation and inhibition and where the pairs (W_{e,n}, W_{i,n}) are i.i.d. random jumps in ℝ × ℝ ∖ {(0, 0)}. Formally, such a process is specified by the rate of N, denoted by b, and the bivariate distribution of the jumps (W_{e,n}, W_{i,n}), denoted by pei. These are defined as

b = \lim_{\Delta t \to 0^+} \frac{1 - P_{ei,00}}{\Delta t} \quad \text{and} \quad p_{ei,kl} = \lim_{\Delta t \to 0^+} \frac{P_{ei,kl}}{1 - P_{ei,00}} \quad \text{for } (k, l) \neq (0, 0) , \quad (B1)

where Pei,00 is the probability to register no synaptic activation during a time step Δt. According to these definitions, b is the infinitesimal likelihood that an input is active within a time bin, whereas pei,kl is the probability that k excitatory inputs and l inhibitory inputs are active given that at least one input is active. One can similarly define the excitatory and inhibitory rates of events, be and bi, as well as the excitatory jump distribution pe,k and the inhibitory jump distribution pi,l. Specifically, we have

b_e = \lim_{\Delta t \to 0^+} \frac{1 - P_{e,0}}{\Delta t} \quad \text{and} \quad p_{e,k} = \lim_{\Delta t \to 0^+} \frac{P_{e,k}}{1 - P_{e,0}} \quad \text{for } k \neq 0 , \qquad b_i = \lim_{\Delta t \to 0^+} \frac{1 - P_{i,0}}{\Delta t} \quad \text{and} \quad p_{i,l} = \lim_{\Delta t \to 0^+} \frac{P_{i,l}}{1 - P_{i,0}} \quad \text{for } l \neq 0 , \quad (B2)

with P_{e,k} = Σ_{l=0}^{Ki} P_{ei,kl} and P_{i,l} = Σ_{k=0}^{Ke} P_{ei,kl}. Observe that, thus defined, the jump distributions pe and pi are specified as conditional marginal distributions of the joint jump distribution pei on the events ke>0 and ki>0, respectively. These are such that p_{e,k} = (b/b_e) Σ_{l=0}^{Ki} p_{ei,kl} and p_{i,l} = (b/b_i) Σ_{k=0}^{Ke} p_{ei,kl}. To see why, observe, for instance, that

p_{e,k} = \lim_{\Delta t \to 0^+} \frac{P_{e,k}}{1 - P_{e,0}} = \lim_{\Delta t \to 0^+} \sum_{l=0}^{K_i} \frac{P_{ei,kl}}{1 - P_{ei,00}} \, \frac{1 - P_{ei,00}}{1 - P_{e,0}} = \sum_{l=0}^{K_i} p_{ei,kl} \lim_{\Delta t \to 0^+} \frac{1 - P_{ei,00}}{1 - P_{e,0}} = \frac{b}{b_e} \sum_{l=0}^{K_i} p_{ei,kl} , \quad (B3)

where we use the definitions of the rates b and be given in Eqs. (B1) and (B2) to establish that

\lim_{\Delta t \to 0^+} \frac{1 - P_{ei,00}}{1 - P_{e,0}} = \frac{\lim_{\Delta t \to 0^+} \left(1 - P_{ei,00}\right)/\Delta t}{\lim_{\Delta t \to 0^+} \left(1 - P_{e,0}\right)/\Delta t} = \frac{b}{b_e} .

APPENDIX C: CONTINUOUS-TIME SPIKING CORRELATION

Equations (A1) and (A3) carry over to the continuous-time limit Δt → 0⁺ by observing that, for limit compound Poisson processes to emerge, one must have E[Xk] = E[θe] = O(Δt) and E[Yl] = E[θi] = O(Δt). This directly implies that, when Δt → 0⁺, we have

\rho_e = \frac{C[X_k, X_l]}{V[X_k]} \sim \frac{E[X_k X_l]}{E[X_k]} \quad \text{and} \quad \rho_{ei} = \frac{C[X_k, Y_l]}{\sqrt{V[X_k] V[Y_l]}} \sim \frac{E[X_k Y_l]}{\sqrt{E[X_k] E[Y_l]}} . \quad (C1)

All the stationary expectations appearing above can be computed via the jump distribution of the limit point process emerging in the limit Δt → 0⁺ [104]. Because this limit process is a compound Poisson process with discrete bivariate jumps, the resulting jump distribution pei is specified over {0, …, Ke} × {0, …, Ki} ∖ {(0, 0)}. Denoting by b the overall rate of synaptic events, one has lim_{Δt→0⁺} E[XkYl]/Δt = b Eei[XkYl]. Then, by partial exchangeability of the {0, 1}-valued population vectors (X_k)_{1≤k≤Ke} and (Y_l)_{1≤l≤Ki}, we have

E_{ei}[X_k Y_l] = E_{ei}\!\left[E[X_k Y_l \mid k_e, k_i]\right] = E_{ei}\!\left[\frac{k_e}{K_e} \frac{k_i}{K_i}\right] = \sum_{k=0}^{K_e} \sum_{l=0}^{K_i} \frac{k}{K_e} \frac{l}{K_i} \, p_{ei,kl} = \frac{E_{ei}[k_e k_i]}{K_e K_i} , \quad (C2)

where the bivariate jump (ke, ki) is distributed as pei.

To proceed further, it is important to note the relation between the expectation Eei[·], which is tied to the overall input process with rate b, and the expectation Ee[·], which is tied to the excitatory input process with rate be. This relation is best captured by remarking that pe is not defined as the marginal of pei but only as its conditional marginal on ke > 0. In other words, we have p_{e,k} = (b/b_e) Σ_{l=0}^{Ki} p_{ei,kl}, which implies that b Eei[XkXl] = be Ee[XkXl] and b Eei[Xk] = be Ee[Xk], with

E_e[X_k X_l] = E_e\!\left[E[X_k X_l \mid k_e]\right] = E_e\!\left[\frac{k_e (k_e - 1)}{K_e (K_e - 1)}\right] = \sum_{k=0}^{K_e} \frac{k (k - 1)}{K_e (K_e - 1)} \, p_{e,k} = \frac{E_e[k_e (k_e - 1)]}{K_e (K_e - 1)} , \quad (C3)
E_e[X_k] = E_e\!\left[E[X_k \mid k_e]\right] = E_e\!\left[\frac{k_e}{K_e}\right] = \sum_{k=0}^{K_e} \frac{k}{K_e} \, p_{e,k} = \frac{E_e[k_e]}{K_e} , \quad (C4)

with similar expressions for the inhibition-related quantities. Injecting Eqs. (C2)–(C4) in Eq. (C1) yields

\rho_e = \frac{E_e[k_e (k_e - 1)]}{E_e[k_e] (K_e - 1)} \quad \text{and} \quad \rho_{ei} = \frac{b \, E_{ei}[k_e k_i]}{\sqrt{K_e b_e E_e[k_e] \, K_i b_i E_i[k_i]}} = \frac{E_{ei}[k_e k_i]}{\sqrt{K_e E_{ei}[k_e] \, K_i E_{ei}[k_i]}} .

APPENDIX D: TWO EXAMPLES OF LIMIT COMPOUND POISSON PROCESSES

The probability Pei,00 that plays a central role in Appendix B can easily be computed for zero correlation, i.e., ρei=0, by considering a directing measure in product form: Fei(θe, θi) = Fe(θe)Fi(θi). Then, integration with respect to the separable variables θe and θi yields

P_{ei,kl} = P_{e,k} P_{i,l} = \binom{K_e}{k} \frac{B(\alpha_e + k, \beta_e + K_e - k)}{B(\alpha_e, \beta_e)} \times \binom{K_i}{l} \frac{B(\alpha_i + l, \beta_i + K_i - l)}{B(\alpha_i, \beta_i)} .

In turn, the limit compound Poisson process can be obtained in the limit Δt → 0⁺ by observing that

1 - P_{e,0} = b_e \Delta t + o(\Delta t) , \quad 1 - P_{i,0} = b_i \Delta t + o(\Delta t) , \quad \text{and} \quad 1 - P_{e,0} P_{i,0} = (b_e + b_i) \Delta t + o(\Delta t) ,

which implies that the overall rate is determined as b = lim_{Δt→0⁺}(1 − Pe,0Pi,0)/Δt = be + bi, as expected. To characterize the limit compound Poisson process, it remains to exhibit pei,kl, the distribution of the jumps ke and ki. Suppose that k ≥ 1; then, we have

p_{ei,kl} = \lim_{\Delta t \to 0^+} \frac{P_{e,k} P_{i,l}}{1 - P_{e,0} P_{i,0}} = \lim_{\Delta t \to 0^+} \frac{1 - P_{e,0}}{1 - P_{e,0} P_{i,0}} \, P_{i,l} \, \frac{P_{e,k}}{1 - P_{e,0}} = \lim_{\Delta t \to 0^+} \frac{1 - P_{e,0}}{1 - P_{e,0} P_{i,0}} \, \lim_{\Delta t \to 0^+} P_{i,l} \, \lim_{\Delta t \to 0^+} \frac{P_{e,k}}{1 - P_{e,0}} .

Then, one can use the limit behaviors

\lim_{\Delta t \to 0^+} \frac{1 - P_{e,0}}{1 - P_{e,0} P_{i,0}} = \frac{b_e}{b_e + b_i} \quad \text{and} \quad \lim_{\Delta t \to 0^+} P_{i,l} = 1_{\{l=0\}} ,

so that, for k ≥ 1, we have

p_{ei,kl} = \frac{b_e}{b_e + b_i} \, 1_{\{l=0\}} \, p_{e,k} \quad \text{with} \quad p_{e,k} = \lim_{\Delta t \to 0^+} \frac{P_{e,k}}{1 - P_{e,0}} = \binom{K_e}{k} \frac{B(k, \beta_e + K_e - k)}{\psi(K_e + \beta_e) - \psi(\beta_e)} .

A similar calculation shows that, for all l ≥ 1, we have p_{ei,kl} = (b_i/(b_e + b_i)) 1_{\{k=0\}} p_{i,l}. Thus, p_{ei,kl} = 0 whenever k, l ≥ 1, so that the support of p_{ei,kl} is {1, …, Ke} × {0} ∪ {0} × {1, …, Ki}. This is consistent with the intuition that excitation and inhibition happen at distinct times in the absence of correlations.

Let us now consider the case of maximum correlation for Fe = Fi = F, where F is a beta distribution with parameters α and β. Moreover, let us assume the deterministic coupling θe = θi, such that Fei(θe, θi) = F(θe)δ(θi − θe). Then, the joint distribution of the jumps (ke, ki) can be evaluated via direct integration as

P_{ei,kl} = \binom{K_e}{k} \binom{K_i}{l} \int \theta_e^k \left(1 - \theta_e\right)^{K_e - k} \theta_i^l \left(1 - \theta_i\right)^{K_i - l} dF(\theta_e) \, \delta(\theta_i - \theta_e) = \binom{K_e}{k} \binom{K_i}{l} \int \theta^{k+l} (1 - \theta)^{K_e + K_i - k - l} \, dF(\theta) = \binom{K_e}{k} \binom{K_i}{l} \frac{B(\alpha + k + l, \beta + K_e + K_i - k - l)}{B(\alpha, \beta)} .

As excitation and inhibition are captured separately by the same marginal distributions Fe = Fi = F, we necessarily have α/(α+β) = E[Xk] = E[Yl] = reΔt = riΔt, and we refer to the common spiking rate as r. Then, the overall rate of synaptic events is obtained as

b = \lim_{\Delta t \to 0^+} \frac{1 - P_{ei,00}}{\Delta t} = \lim_{\alpha \to 0^+} \frac{1 - P_{ei,00}}{\alpha} \, \lim_{\Delta t \to 0^+} \frac{\alpha}{\Delta t} = \left(\psi(K_e + K_i + \beta) - \psi(\beta)\right) \beta r , \quad (D1)

and one can check that b differs from the excitation- and inhibition-specific rates be and bi, which satisfy

b_e = \lim_{\Delta t \to 0^+} \frac{1 - P_{e,0}}{\Delta t} = \left(\psi(K_e + \beta) - \psi(\beta)\right) \beta r \quad \text{and} \quad b_i = \lim_{\Delta t \to 0^+} \frac{1 - P_{i,0}}{\Delta t} = \left(\psi(K_i + \beta) - \psi(\beta)\right) \beta r . \quad (D2)

To characterize the limit compound Poisson process, it remains to exhibit pei,kl, the joint distribution of the jumps (ke, ki). A calculation similar to that for the case of excitation alone yields

p_{ei,kl} = \lim_{\Delta t \to 0^+} \frac{P_{ei,kl}}{1 - P_{ei,00}} = \binom{K_e}{k} \binom{K_i}{l} \frac{B(k + l, \beta + K_e + K_i - k - l)}{\psi(K_e + K_i + \beta) - \psi(\beta)} .

Remember that, within our model, spiking correlations do not depend on the number of neurons and that, by construction, we have ρei ≤ √(ρeρi). Thus, for the symmetric case under consideration, maximum correlation corresponds to ρei = ρe = ρi = 1/(1+β). In particular, perfect correlation between excitation and inhibition can be attained only for β → 0. When β > 0, i.e., for partial correlations, the Poisson processes Ne and Ni share only a fraction of their times, yielding an aggregate Poisson process N such that min(be, bi) < b < be + bi. The relations between b, be, and bi can be directly recovered from the knowledge of pei by observing that

P[k_e = 0, k_i > 0] = \sum_{l=1}^{K_i} p_{ei,0l} = \frac{\psi(K_e + K_i + \beta) - \psi(K_e + \beta)}{\psi(K_e + K_i + \beta) - \psi(\beta)} , \quad P[k_i = 0, k_e > 0] = \sum_{k=1}^{K_e} p_{ei,k0} = \frac{\psi(K_e + K_i + \beta) - \psi(K_i + \beta)}{\psi(K_e + K_i + \beta) - \psi(\beta)} , \quad P[k_i > 0, k_e > 0] = \sum_{k=1}^{K_e} \sum_{l=1}^{K_i} p_{ei,kl} = 1 - \frac{2\psi(K_e + K_i + \beta) - \psi(K_e + \beta) - \psi(K_i + \beta)}{\psi(K_e + K_i + \beta) - \psi(\beta)} .

This implies that the fraction of times with nonzero excitation is given by

P[k_e > 0] = P[k_e > 0, k_i = 0] + P[k_e > 0, k_i > 0] = \frac{\psi(K_e + \beta) - \psi(\beta)}{\psi(K_e + K_i + \beta) - \psi(\beta)} ,

so that we consistently recover the value of be already obtained in Eqs. (9) and (D2) via

b_e T = E[N_e(T)] = E\!\left[\sum_{n=1}^{N(T)} 1_{\{k_{e,n} > 0\}}\right] = b T \, E_{ei}\!\left[1_{\{k_e > 0\}}\right] = b T \, P[k_e > 0] .
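The sketch below checks the first of these digamma identities by summing the jump distribution pei,kl of the maximally correlated example directly; the parameter values are illustrative.

```python
import numpy as np
from scipy.special import betaln, digamma, gammaln

# Sketch: direct summation of p_{ei,kl} (maximal-correlation example).
Ke, Ki, beta = 20, 10, 5.0
norm = digamma(Ke + Ki + beta) - digamma(beta)
lbinom = lambda n, k: gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

p = np.zeros((Ke + 1, Ki + 1))
for k in range(Ke + 1):
    for l in range(Ki + 1):
        if k + l > 0:
            p[k, l] = np.exp(lbinom(Ke, k) + lbinom(Ki, l)
                             + betaln(k + l, beta + Ke + Ki - k - l)) / norm

print("total mass:", p.sum())                          # ~1
print("P[k_e = 0, k_i > 0]:", p[0, 1:].sum())
print("digamma formula:",
      (digamma(Ke + Ki + beta) - digamma(Ke + beta)) / norm)
```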

APPENDIX E: MARCUS JUMP RULE

The goal of this appendix is to justify the Marcus-type update rule given in Eq. (13). To do so, let us first remark that, given a finite time interval [0, T], the number of synaptic activation times (T_n)_{n∈ℤ} falling in this interval is almost surely finite. In particular, we have Δ = inf{|Tn − Tm| : 0 ≤ Tn ≠ Tm ≤ T} > 0 almost surely. Consequently, taking ϵ < Δ/τ ensures that synaptic activation events do not overlap in time, so that it is enough to consider a single synaptic activation, triggered without loss of generality at T0 = 0. Let us denote the voltage just before the impulse onset as V(T0⁻) = V0, which serves as the initial condition for the ensuing voltage dynamics. As the dimensionless conductances remain equal to We/ϵ and Wi/ϵ for a duration [0, ϵτ], the voltage Vϵ satisfies

\tau \dot{V}_\epsilon = -V_\epsilon + (W_e/\epsilon)\left(V_e - V_\epsilon\right) + (W_i/\epsilon)\left(V_i - V_\epsilon\right) , \quad 0 \leq t \leq \epsilon \tau ,

where we assume I = 0 for simplicity. The unique solution satisfying Vϵ(0⁻) = V0 is

V_\epsilon(t) = V_0 \, e^{-(t/\tau)\left(1 + W_e/\epsilon + W_i/\epsilon\right)} + \frac{W_e V_e + W_i V_i}{\epsilon + W_e + W_i} \left( 1 - e^{-(t/\tau)\left(1 + W_e/\epsilon + W_i/\epsilon\right)} \right) , \quad 0 \leq t \leq \epsilon \tau .

The Marcus-type rule follows from evaluating the jump update as the limit

\lim_{\epsilon \to 0^+} V_\epsilon(\epsilon \tau) - V_0 = \lim_{\epsilon \to 0^+} V_0 \left( e^{-(\epsilon + W_e + W_i)} - 1 \right) + \frac{W_e V_e + W_i V_i}{\epsilon + W_e + W_i} \left( 1 - e^{-(\epsilon + W_e + W_i)} \right) = \left( \frac{W_e V_e + W_i V_i}{W_e + W_i} - V_0 \right) \left( 1 - e^{-(W_e + W_i)} \right) ,

which has the same form as the rule announced in Eq. (13). Otherwise, at fixed ϵ > 0, the fraction of time during which the voltage Vϵ is exponentially relaxing toward the leak reversal potential VL=0 is larger than 1 − Nϵτ/T, where N denotes the almost surely finite number of synaptic activations, which does not depend on ϵ. Thus, the voltage V = lim_{ϵ→0⁺} Vϵ exponentially relaxes toward VL=0, except when it has jump discontinuities at the times (T_n)_{n∈ℤ}.
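The convergence of the regularized dynamics to this jump update is easy to verify numerically; the sketch below integrates the pulse ODE with an Euler scheme over [0, ϵτ] for shrinking ϵ, using illustrative parameter values.

```python
import numpy as np

# Sketch: numerical check of the Marcus limit for the all-or-none pulse.
V0, We, Wi, Ve, Vi, tau = 10.0, 0.1, 0.4, 60.0, -10.0, 15e-3
exact = ((We * Ve + Wi * Vi) / (We + Wi) - V0) * (1 - np.exp(-(We + Wi)))

for eps in [1e-1, 1e-2, 1e-3]:
    n = 10_000
    dt = eps * tau / n                      # Euler steps over [0, eps*tau]
    V = V0
    for _ in range(n):
        V += dt / tau * (-V + (We / eps) * (Ve - V) + (Wi / eps) * (Vi - V))
    print(f"eps = {eps:g}: jump = {V - V0:.4f} (Marcus limit {exact:.4f})")
```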

APPENDIX F: STATIONARY VOLTAGE MEAN

For a positive synaptic activation time ϵ > 0, the classical method of variation of constants applies to solve Eq. (1). This yields an expression for Vϵ(t) in terms of regular Riemann-Stieltjes integrals, where the conductance traces he(t) and hi(t) are treated as a form of deterministic quenched disorder. Specifically, given an initial condition Vϵ(0), we have

V_\epsilon(t) = V_\epsilon(0) \, e^{-\int_0^t \left(1 + h_e(u) + h_i(u)\right) du/\tau} + \int_0^t \left( \frac{V_e h_e(u) + V_i h_i(u)}{\tau} + \frac{I}{C} \right) e^{-\int_u^t \left(1 + h_e(v) + h_i(v)\right) dv/\tau} \, du ,

where Vϵ(t) depends on ϵ via the all-or-none-conductance processes he and hi. As usual, the stationary dynamics of the voltage Vϵ is recovered by considering the limit of arbitrary large times t, for which one can neglect the influence of the initial condition Vϵ(0). Introducing the cumulative input processes H=He,Hi defined by

He(t),Hi(t)=0the(u)du,0thi(u)du

and satisfying τdHe(t)=he(t)dt and τdHi(t)=hi(t)dt, we have

Vϵ=-0e(t/τ)+He(t)+Hi(t)dVeHe(t)+ViHi(t)+IGdtτ. (F1)

In turn, expanding the integrand above yields the following expression for the stationary expectation of the voltage:

EVϵ=Ve0et/τEeHet+HitdHet+Vi0et/τEeHet+HitdHit+IG0et/τEeHet+Hitdtτ. (F2)

Our primary task is to evaluate the various stationary expectations appearing in the above formula. Such a goal can be achieved analytically for AONCB models. As the involved calculations tend to be cumbersome, we give only a detailed account in Appendices H and I. Here, we account for the key steps of the calculation, which ultimately produces an interpretable compact formula for EVϵ in the limit of instantaneous synapses, i.e., when ϵ0.

In order to establish this compact formula, it is worth introducing the stationary bivariate function

Qϵt,s=EeHet+His, (F3)

which naturally depends on ϵ via He(t) and Hi(s). The function Qϵ is of great interest, because all the stationary expectations at stake in Eq. (F2) can be derived from it. Before justifying this point, an important observation is that the expectation defining Qϵ(t,s) bears on only the cumulative input processes He and Hi, which specify bounded, piecewise continuous functions with probability one, independent of ϵ. As a result of this regular behavior, the expectation commute with the limit of instantaneous synapses, allowing one to write

Qt,s=limϵ0+Qϵt,s=Eelimϵ0Het+His=EeZetZis,

where we exploit the fact that the cumulative input processes He and Hi converge toward the coupled compound Poisson processes Ze and Zi when ϵ0+:

Zet=nNetWe,nandZit=nNitWi,n. (F4)

The above remark allows one to compute the term due to current injection I in Eq. (F2), where the expectation can be identified to Qϵ(t,t). Indeed, utilizing the standard form for the moment-generating function for compound Poisson processes [51], we find that

Q(t,t)=eaei,1t/τ,

where we introduce the first-order aggregate efficacy

aei,1=bτ1-Eeie-We+Wi.

Remember that, in the above definition, Eei[] denotes the expectation with respect to the joint probability of the conductance jumps, i.e., pei.

It remains to evaluate the expectations associated to excitation and inhibition reversal potentials in Eq. (F2). These terms differ from the current-associated term in that they involve expectations of stochastic integrals with respect to the cumulative input processes He/i. This is by contrast with evaluating Eq. (F3), which involves only expectations of functions that depend on He/i. In principle, one could still hope to adopt a similar route as for the current-associated term, exploiting the compound Poisson process Z obtained in the limit of instantaneous synapses. However, such an approach would require that the operations of taking the limit of instantaneous synapses and evaluating the stationary expectation still commute. This is a major caveat, as such a commuting relation generally fails for point-process-based stochastic integrals. Therefore, one has to analytically evaluate the expectations at stake for positive synaptic activation time ϵ>0, without resorting to the simplifying limit of instantaneous synapses. This analytical requirement is the primary motivation to consider AONCB models.

The first step in the calculation is to realize that, for ϵ>0, the conductance traces he(t)=τdHe(t)/dt and hi(t)=τdHi(t)/dt are bounded, piecewise continuous functions with probability one. Under these conditions, it then holds that

limsttQϵ(t,s)=EdHe(t)dteHe(t)+Hi(t)andlimstsQϵt,s=EdHitdteHet+Hit,

so that the sought-after expectations can be deduced from the closed-form knowledge of Qϵ(t,s) for positive ϵ>0. The analytical expression of Qϵ(t,s) can be obtained via careful manipulation of the processes He and Hi featured in the exponent in Eq. (F3) (see Appendix H). In a nutshell, these manipulations hinge on splitting the integrals defining He(t) and Hi(s) into independent contributions arising from spiking events occurring in the five nonoverlapping, contiguous intervals bounded by the times 0-ϵτtst-ϵτs-ϵτ. There is no loss of generality in assuming the latter ordering, and, from the corresponding analytical expression, we can compute

limϵ0+limsttQϵ(t,s)=bae,1eaei,1t/τandlimϵ0+limstsQϵ(t,s)=bai,1eaei,1t/τ,

where the effective first-order synaptic efficacies via Eq. (15) as

ae,1=bτEeiWeWe+Wi1-e-We+Wiandai,1=bτEeiWiWe+Wi1-e-We+Wi.

Observe that, by definition, ae,1 and ai,1 satisfy ae,1+ai,1=aei,1.

Altogether, upon evaluation of the integrals featured in Eq. (F2), these results allow one to produce the compact expression Eq. (14) for the stationary voltage mean in the limit of instantaneous synapses:

E[V]=limϵ0+EVϵ=ae,1Ve+ai,1Vi+I/G1+ae,1+ai,1.

APPENDIX G: STATIONARY VOLTAGE VARIANCE

The calculation of the stationary voltage variance is more challenging than that of the stationary voltage mean. However, in the limit of instantaneous synapses, this calculation produces a compact, interpretable formula as well. Adopting a similar approach as for the stationary mean calculation, we start by expressing Vϵ2 in the stationary limit in terms of a stochastic integrals involving the cumulative input processes He and Hi. Specifically, using Eq. (F1), we have

Vϵ2=-0e(t/τ)+He(t)+Hi(t)dVeHe(t)+ViHi(t)+IGdtτ2=R-2e[(t+s)/τ]+He(t)+Hi(t)+He(s)+Hi(s)dVeHe(t)+ViHi(t)+IGdtτdVeHe(s)+ViHi(s)+IGdsτ. (G1)

Our main goal is to compute the stationary expectation of the above quantity. As for the stationary voltage mean, our strategy is (i) to derive the exact stationary expectation of the integrands for finite synaptic activation time, (ii) to evaluate these integrands in the simplifying limit of instantaneous synapses, and (iii) to rearrange the terms obtained after integration into an interpretable final form. Enacting the above strategy is a rather tedious task, and, as for the calculation of the mean voltage, we present only the key steps of the calculation in the following.

The integrand terms at stake are obtained by expanding Eq. (G1), which yields the following quadratic expression for the stationary second moment of the voltage:

EVϵ2=Ae,ϵVe2+Bei,ϵVeVi+Ai,ϵVi2+VeBeI,e+ViBiI,ϵ(I/G)+AI,ϵ(I/G)2,

whose various coefficients need to be evaluated. These coefficients are conveniently specified in terms of the following symmetric random function:

eit,s=eHet+Hit+Hes+His,

which features prominently in Eq. (G1). Moreover, drawing on the calculation of the stationary mean voltage, we anticipate that the quadrivariate version of ei(t,s) will play a central role in the calculation via its stationary expectation. Owing to this central role, we denote this expectation as

Rϵt,u,s,v=EeHet+Hiu+Hes+Hiv,

where we make the ϵ dependence explicit. As a mere expectation with respect to the cumulative input processes He,Hi, the expectation can be evaluated in closed form for AONCB models. This again requires careful manipulations of the processes He and Hi, which need to split into independent contributions arising from spiking events occurring in nonoverlapping intervals. By contrast with the bivariate case, the quadrivariate case requires to consider nine contiguous intervals. There is no loss of generality to consider these interval bounds to be determined by the two following time orderings:

Oorder.0-ϵτtut-ϵτu-ϵτsvs-ϵτv-ϵτ,Dorder.-ϵτtusvt-ϵτu-ϵτs-ϵτv-ϵτ,

where O stands for off-diagonal ordering and D for diagonal ordering.

The reason to consider only the O/D orders is that all the relevant calculations are made in the limit (u,v)(t,s). By symmetry of Rϵ(t,u,s,v), it is then enough to restrict our consideration to the limit (u,v)t-,s-, which leaves the choice of t,s0 to be determined. By symmetry, one can always choose t>s, so that the only remaining alternative is to decide wether (t,s) belong to the diagonal region 𝒟ϵ={t,s0|ϵτ|t-s} or the off-diagonal region 𝒪ϵ={t,s0|ϵτ<|t-s}. For the sake of completeness, we give the two expressions of Rϵ(t,u,s,v) on the regions 𝒪ϵ and 𝒟ϵ in Appendix I. Owing to their tediousness, we do not give the detailed calculations leading to these expressions, which are lengthy but straightforward elaborations on those used in Appendix H. Here, we stress that, for ϵ>0, these expressions reveal that Rϵ(t,u,s,v) is defined as a twice-differentiable quadrivariate function.

With these remarks in mind, the coefficients featured in Eq. (G2) can be categorized into three classes.

  1. There is a single current-dependent inhomogeneous coefficient
    AI,ϵ=R-2e(t+s)/τEei(t,s)dtdsτ2,
    where we recognize that Eei(t,s)=Rϵ(t,t,s,s)=defRϵ(t,s). As Rϵ(t,s) is merely a stationary expectation with respect to the cumulative input processes He,Hi, it can be directly evaluated in the limit of instantaneous synapses. In other words, step (ii) can be performed before step (i), similarly as for the stationary voltage mean calculation. However, having a general analytical expression for Rϵ(t,u,s,v) on 𝒪ϵ (see Appendix I), we can directly evaluate for all ts that
    R(t,s)=limϵ0+Rϵ(t,s)=e2aei,2max(t,s)-aei,1|t-s|/τ, (G2)
    where we define the second-order aggregate efficacy
    aei,2=bτ21-Eeie-2We+Wi.
    It is clear that the continuous function R(t,s) is smooth everywhere except on the diagonal, where it admits a slope discontinuity. As we shall see, this slope discontinuity is the reason why one needs to consider the 𝒟ϵ region carefully, even when concerned only with the limit ϵ0+. That being said, the diagonal behavior plays no role here, and straightforward integration of R(t,s) on the negative orthant gives
    AI=limϵ0+AI,ϵ=11+aei,11+aei,2.
  2. There are two current-dependent linear coefficients
    BeI,ϵ=2R-2e(t+s)/τEei(t,s)dHe(t)dsτandBiI,ϵ=2R-2e(t+s)/τEei(t,s)dHi(t)dsτ,
    where the coefficient 2 above comes from the fact that BeI,ϵ and BiI,ϵ are actually resulting from the contributions of two symmetric terms in the expansion of Eq. (G1). Both BeI,ϵ and BiI,ϵ involve expectations of stochastic integrals akin to those evaluated for the stationary mean calculation. Therefore, these terms can be treated similarly by implementing steps (i) and (ii) sequentially. The trick is to realize that, for positive ϵ and ts0, it holds that
    Eei(t,s)dHe(t)dt=limuttRϵ(t,u,s,s)andEei(t,s)dHi(t)dt=limvssRϵt,t,s,v.
    Thus, for any (t,s) in the off-diagonal region Oϵ, the analytical knowledge of Rϵ(t,u,s,v) (see Appendix I) allows one to evaluate
    limut-τtRϵ(t,u,s,s)Rϵ(t,s)=ae,1ift>s,ae,2-ae,1ift<s,andlimvs-τsRϵ(t,u,s,s)Rϵ(t,s)=ai,1ift>s,ai,2-ai,1ift<s, (G3)
    where the second-order synaptic efficacies are defined as
    ae,2=bτ2EeiWeWe+Wi1-e-2We+Wiandai,2=bτ2EeiWiWe+Wi1-e-2We+Wi. (G4)
    Observe that these efficacies satisfy the familiar relation ae,2+ai,2=aei,2. Taking the limits of Eq. (G3) when ϵ0+ specifies two bivariate functions that are continuous everywhere, except on the diagonal t=s, where these functions present a jump discontinuity. This behavior is still regular enough to discard any potential contributions from diagonal terms, so that we can restrict ourselves to the region Oϵ. Then, taking the limit ϵ0+ after integration of over Oϵ, we find that
    BeI=limϵ0+BeI,ϵ=ae,21+aei,11+aei,2andBiI=limϵ0+BiI,ϵ=bτai,21+aei,11+aei,2.
  3. There are four quadratic coefficients associated to the reversal potential Ve and Vi, including two diagonal terms
    Ae,ϵ=R-2e(t+s)/τEei(t,s)dHe(t)dHe(s)andAi,ϵ=R-2e(t+s)/τEei(t,s)dHi(t)dHi(s)
    and two symmetric cross terms contributing
    Bei,ϵ=2R-2e(t+s)/τEei(t,s)dHe(t)dHi(s).
    Notice that it is enough to compute only one diagonal term, as the other term can be deduced by symmetry. Following the same method as for the linear terms, we start by remarking that, for all (t,s) in the off-diagonal region 𝒪ϵ, it holds that
    Eei(t,s)dHe(t)dtdHe(s)ds=lim(u,v)(t,s)tsRϵt,u,s,v,Eei(t,s)dHe(t)dtdHi(s)ds=lim(u,v)(t,s)tvRϵt,u,s,v.
    As before, the analytical knowledge of Rϵ(t,u,s,v) on the Oϵ region (see Appendix I) allows one to evaluate
    lim(u,v)(t,s)-τ2tuRϵ(t,u,s,s)Rϵ(t,s)=ae,12ae,2-ae,1,lim(u,v)(t,s)-τ2tsRϵ(t,u,s,v)Rϵ(t,s)=12ae,12ai,2-ai,1+ai,12ae,2-ae,1.
    The above closed-form expressions allow one to compute Ae,ϵ and Bei,e, the part of the coefficients Ae,ϵ and Bei,ϵ resulting from integration over the off-diagonal region Oϵ, which admit well-defined limit values Ae=limϵ0+Ae,ε and Bei=limϵ0+Bei,e with
    Ae=limϵ0+𝒪ϵe(t+s)/τEei(t,s)dHe(t)dHe(s)=ae,12ae,2-ae,11+aei,11+bτaei,2,Bei=2limϵ0+𝒪ϵe(t+s)/τEei(t,s)dHe(t)dHi(s)=ae,12ai,2-ai,1+ai,12ae,2-ae,11+aei,11+aei,2.
    However, for quadratic terms, one also needs to include the contributions arising from the diagonal region 𝒟ϵ, as suggested by the first-order jump discontinuity of R(t,s)=limϵ0+Rϵ(t,s) on the diagonal t=s. To confirm this point, one can show from the analytical expression of Rϵ(t,u,s,v) on 𝒟ϵ (see Appendix I) that all relevant second-order derivative terms scale as 1/ϵ over 𝒟ϵ. This scaling leads to the nonzero contributions Ae,ϵ and Bei,ϵ resulting from the integration of these second-order derivative terms over the diagonal region 𝒟ϵ, even in the limit ϵ0+. Actually, we find that these contributions also admit well-defined limit values Ae=limϵ0+Ae,ϵ and Bei=limϵ0+Bei,ϵ with (see Appendix J)
    Ae=limϵ0+𝒟ϵe(t+s)/τEei(t,s)dHe(t)dHe(s)=ae,12-cei1+aei,2, (G5)
    Bei=2limϵ0+𝒟ϵe(t+s)/τEei(t,s)dHe(t)dHi(s)=2cei1+aei,2. (G6)
    Remembering that the expression of Ai can be deduced from that of Ae by symmetry, Eq. (G5) defines Ae, and, thus, Ai, in terms of the useful auxiliary second-order efficacies ae,12=ae,1-ae,2 and ai,12=ai,1-ai,2. These efficacies feature prominently in the final variance expression, and it is worth mentioning their explicit definitions as
    ae,12=bτ2EeiWeWe+Wi1-e-We+Wi2andai,12=bτ2EeiWiWe+Wi1-e-We+Wi2. (G7)
    The other quantity of interest is the coefficient cei, which appears in both Eqs. (G5) and (G6). This non-negative coefficient, defined as
    cei=bτ2EeiWeWiWe+Wi21-e-We+Wi2, (G8)

    entirely captures the (non-negative) correlation between excitatory and inhibitory inputs and shall be seen as an efficacy as well. Keeping these definitions in mind, the full quadratic coefficients are finally obtained as Ae=Ae+Ae,Ai=Ai+Ai, and Bei=Bei+Bei.

From there, injecting the analytical expressions of the various coefficients in the quadratic form Eq. (G2) leads to an explicit formula for the stationary voltage variance in the limit of instantaneous synapses. Then, one is left with only step (iii), which aims at exhibiting a compact, interpretable form for this formula. We show in Appendix K that lengthy but straightforward algebraic manipulations lead to the simplified form given in Eq. (16):

VV=limϵ0+VVϵ=11+aei,2ae,12Ve-EV2+ai,12Vi-EV2-ceiVe-Vi2.

APPENDIX H: EVALUATION OF Qϵ(t,s)FORϵ>0

The goal here is to justify the closed-form expression of Qϵ(t,s)=EeHe(t)+Hi(s) via standard manipulation of exponential functionals of Poisson processes. By definition, assuming with no loss of generality the order 0ts, we have

Het+His=1τt0heudu+s0hiudu=1ϵτt0duNuϵτ+1NuWe,k+s0duNuϵτ+1NuWi,k=1ϵτt0duNuϵτ+1NuWe,k+Wi,k+stduNuϵτ+1NuWi,k. (H1)

We evaluate Qϵ(t,s)=EeHe(t)+Hi(s) as a product of independent integral contributions.

Isolating these independent contributions from Eq. (H1) requires one to establish two preliminary results about the quantity

It,s=stk=Nu-Δ+1NuXkdu, (H2)

where N denotes a Poisson process, Xk denotes i.i.d. non-negative random variables, and Δ is positive activation time. Assume t-sΔ; then, given some real w<u-Δ, we have

It,s=stduk=Nv+1NuXkstduk=Nv+1NuΔXk=stduk=Nv+1NuXksΔtΔduk=Nv+1NuXk=tΔtduk=Nv+1NuXksΔsduk=Nv+1NuXk=tΔtduk=Nv+1NtΔXk+tΔtduk=NtΔ+1NuXksΔsduk=Nv+1NsXksΔsduk=Nu+1NsXk=tΔtduk=NtΔ+1NuXk+Δk=Ns+1NtΔXk+sΔsduk=Nu+1NsXk. (H3)

One can check that the three terms in Eq. (H3) above are independent for involving independent numbers of i.i.d. draws over the intervals (t-Δ,t],(s,t-Δ], and (s-Δ,s], respectively. Similar manipulations for the order for t-sΔ yield

It,s=stduk=Ns+1NuXk+t-sk=Nt-Δ+1NsXk+s-Δt-Δduk=Nu+1Nt-ΔXk, (H4)

where the three independent contributions correspond to independent numbers of i.i.d. draws over the intervals (s,t],(t-Δ,s], and (s-Δ,t-Δ], respectively.

As evaluating Qϵ involves only taking the limit st- at fixed ϵ>0, it is enough to consider the order 0-ϵτtst-ϵτ. With that in mind, we can apply Eqs. (H3) and (H4) with Δ=ϵτ and Xk=We,k+Wi,k or Xk=Wi,k, to decompose the two terms of Eq. (H1) in six contributions:

I(t,s)=-ϵτ0duk=N(t-ϵτ)+1N(u)We,k+Wi,k+ϵτk=N(t)+1N(-ϵτ)We,k+Wi,k+t-ϵτtduk=N(u)+1N(t)We,k+Wi,k+stduk=N(s)+1N(u)Wi,k+(t-s)k=N(t-ϵτ)+1N(s)Wi,k+s-ϵτt-ϵτduk=N(u)+1N(t-ϵτ)Wi,k.

It turns out that the contribution of the third term overlaps with that of the fourth and fifth terms. Further splitting of that third term produces the following expression:

I(t,s)=-ϵτ0duk=N(t-ϵτ)+1N(u)We,k+Wi,kI1+ϵτk=N(t)+1N(-ϵτ)We,k+Wi,kI2(t)+stduk=N(u)+1N(t)We,k+Wi,k+k=N(s)+1N(u)Wi,k+(s-t+ϵτ)k=N(s)+1N(t)We,k+Wi,kI3(t,s)+t-ϵτsduk=N(u)+1N(s)We,k+Wi,k+(t-s)k=N(t-ϵτ)+1N(s)Wi,kI4(s,t)+s-ϵτt-ϵτduk=N(u)+1N(t-ϵτ)Wi,kI5(t,s),

where all five terms correspond to independent numbers of i.i.d. draws over the intervals (-ϵτ,0],(t,-ϵτ],(s,t],(t-ϵτ,s], and (s-ϵτ,t-ϵτ]. Then, we have

Qϵ(t,s)=EeHe(t)+Hi(s)=Ee-I1/(ϵτ)Ee-I2(t)/(ϵτ)Ee-I3(t,s)/(ϵτ)Ee-I4(s,t)/(ϵτ)Ee-I5(t,s)/(ϵτ),

where all expectation terms can be computed via standard manipulation of the moment-generating function of Poisson processes [51]. The trick is to remember that, for all ts, given that a Poisson process admits K=N(t)-N(s) points in (s,t], all these K points are uniformly i.i.d. over (s,t]. This trick allows one to simply represent all integral terms in terms of uniform random variables, whose expectations are easily computable. To see this, let us consider I3(t,s), for instance. We have

I3t,s=tsk=Ns+1Nt1UkWe,k+Wi,k+UkWi,k+st+ϵτk=Ns+1NtWe,k+Wi,k=tsk=Ns+1NtUkWe,k+ϵτk=Ns+1NtWe,k+Wi,k,

where UkN(s)+1kN(t) are uniformly i.i.d. on [0,1]. From the knowledge of the moment-generating function of Poisson random variables [51], one can evaluate

EeI3t,s/ϵτ=Eets/ϵτk=Ns+1NtUkWe,kk=Ns+1NTWe,k+Wi,k=EEe[(ts)/ϵτ]UWeWe+WiN(t)N(s)N(t)N(s)]=expb(ts)Ee[(ts)/ϵτ]UWeWe+Wi1)),

where We,Wi denotes exemplary conductance jumps and U denotes an independent uniform random variable. Furthermore, we have

Ee-[(t-s)/ϵτ]UWe-We+Wi=EEe-[(t-s)/ϵτ]UWe-We+WiWe,Wi=Eeie-We+WiEe-[(t-s)/ϵτ]UWe=Eeie-We+Wi1-e-[(t-s)/ϵτ]Wet-sϵτWe,

so that we finally obtain

lnEe-I3(t,s)/(ϵτ)=ϵbτEeie-We+Wi1-e-[(t-s)/ϵτ]WeWe-t-sϵτ.

Similar calculations show that we have

lnEe-I1/(ϵτ)=ϵbτEei1-e-We+WiWe+Wi-1,lnEe-I2(t)/(ϵτ)=b(ϵτ+t)1-Eeie-We+Wi,lnEe-I4(s,t)/(ϵτ)=ϵbτEeie-(t-s)/ϵτWi1-e-(1+[(s-t)/ϵτ])We+WiWe+Wi-1+s-tϵτ,lnEe-I5(t,s)/(ϵτ)=ϵbτEei1-e-[(t-s)/ϵτ]WiWi-t-sϵτ.

APPENDIX I: EXPRESSION OF Rϵ(t,u,s,v)ON𝒪ϵand𝒟ϵ

Using similar calculations as in Appendix H, we can evaluate the quadrivariate expectation Rϵ(t,u,s,v) on the region 𝒪ϵ, for which the O order holds: 0-ϵτtut-ϵτu-ϵτsvs-ϵτv-ϵτ. This requires one to isolate and consider nine independent contributions, corresponding to the nine contiguous intervals specified by the O order. We find

lnRϵt,u,s,v=A1+A2t+A3t,u+A4u,t+A5t,u+A6u,s+A7s,v+A8v,s+A9s,v,

where the non-negative terms making up the above sum are defined as

A1=ϵbτEei1-e-2We+Wi2We+Wi-1,A2(t)=b(ϵτ+t)1-Eeie-2We+Wi,A3(t,u)=ϵbτEeie-2We+Wi1-e-[(t-u)/ϵτ]]WeWe-t-uϵτ,A4(u,t)=ϵbτEeie-We-(1+[(t-u)/ϵτ])Wi1-e-(1+[(u-t)/ϵτ])We+WiWe+Wi-1+u-tϵτ,A5(t,u)=ϵbτEeie-We+Wi1-e-(t-u)/ϵτWiWi-t-uϵτ,A6(u,s)=b(s+ϵτ-u)1-Eeie-We+Wi,A7(s,v)=ϵbτEeie-We+Wi1-e-[(s-v)/ϵτ]WeWe-s-vϵτ,A8(v,s)=ϵbτEeie-[(s-v)/ϵτ]Wi1-e-(1-[(s-v)/ϵτ])We+WiWe+Wi-1-s-vϵτ,A9(s,v)=ϵbτEei1-e-[(s-v)/ϵτ]Wi-s-vϵτ.Wi

One can check that A3(t,t)=A5(t,t)=0 and A7(s,s)=A9(s,s)=0 and that A1,A4(u,t), and A8(v,s) are all uniformly O(ϵ) on the region 𝒪ϵ. This implies that, for all (t,s) in 𝒪ϵ, we have

Rt,s=limϵ0+Rϵt,t,s,s=limϵ0+eA2t+A6t,s=e2btaei,2-bt-saei,1.

Using similar calculations as in Appendix H, we can evaluate the quadrivariate expectation Rϵ(t,u,s,v) on the region 𝒟ϵ, for which the D order holds: 0-ϵτtusvt-ϵτu-ϵτs-ϵτv-ϵτ. This requires one to isolate and consider nine independent contributions, corresponding to the nine contiguous intervals specified by the O order. We find

lnRϵ(t,u,s,v)=B1+B2(t)+B3(t,u)+B4(t,u,s)+B5(t,u,s,v)+B6(t,u,s,v)+B7(t,u,s,v)+B8(u,s,v)+B9(s,v) (I1)

where the non-negative terms making up the above sum are defined as

B1=ϵbτEei1-e-2We+Wi2We+Wi-1,B2(t)=b(ϵτ+t)1-Eeie-2We+Wi,B3(t,u)=ϵbτEeie-2We+Wi1-e-[(t-u)/ϵτ]WeWe-t-uϵτ,B4(t,u,s)=ϵbτEeie-(2-[(t-s)/ϵτ])We-(2-[(u-s)/ϵτ])Wi1-e-[(u-s)/ϵτ]We+WiWe+Wi-u-sϵτ,B5(t,u,s,v)=ϵbτEeie-(2-[(t-v)/ϵτ])We-(2-[(u-v)/ϵτ])Wi1-e-[(s-v)/ϵτ]2We+Wi2We+Wi-s-vϵτ,B6(t,u,s,v)=ϵbτEeie-((t-s)/ϵτ)We-([2t-(u+v)]/ϵτ)Wi1-e-(1-[(t-v)/ϵτ])2We+Wi2We+Wi-1-t-vϵτ,B7(t,u,s,v)=ϵbτEeie-((u-s)/ϵτ)We-((u-v)/ϵτ)Wi1-e-[(t-u)/ϵτ]We+2WiWe+2Wi-t-uϵτ,B8(u,s,v)=ϵbτEeie-((s-v)/ϵτ)Wi1-e-[(u-s)/ϵτ]We+WiWe+Wi-u-sϵτ,B9(s,v)=bϵτEeie-((s-v)/ϵτ)Wi1-e-[(s-v)/ϵτ]WiWi-s-vϵτ.

Observe that B1=A1 and B2(t)=A2(t) and that B3(t,t)=B7(t,t,s,v)=0 and B5(t,u,s,s)=B9(s,s)=0. Moreover, one can see that R(t,s) is continuous over the whole negative orthant by checking that

lims(t-ϵτ)-B4(t,t,s)=lims(t-ϵτ)+A4(t,s),lims(t-ϵτ)-B6(t,t,s,s)=lims(t-ϵτ)+A6(t,s),lims(t-ϵτ)-B8(t,s,s)=lims(t-ϵτ)+A8(t,s).

Actually, by computing the appropriate limit values of the relevant first- and second-order derivatives of Rϵ(t,u,s,v), one can check that, for ϵ>0, all the integrands involved in specifying the coefficients of the quadratic form Eq. (G2) define continuous functions.

APPENDIX J: INTEGRALS OF THE QUADRATIC TERMS ON 𝒟ϵ

Here, we treat only the quadratic term Ae, as the other quadratic terms Ai and Bei involve a similar treatment. The goal is to compute Ae, which is defined as the contribution to Ae resulting from integrating lim(u,v)(t,s)-tsRϵ(t,u,s,v) over the diagonal region 𝒟ϵ={t,s0|τϵ|t-s}, in the limit ϵ0+. To this end, we first remark that

tsRϵ(t,u,s,v)Rϵ(t,u,s,v)=tslnRϵ(t,u,s,v)+tlnRϵ(t,u,s,v)slnRϵ(t,u,s,v).

Injecting the analytical expression Eq. (I1) into the above relation and evaluating Iϵ(t,s)=limu,v(t,s)-tsRϵ(t,u,s,v) reveals that Iϵ(t,s) scales as 1/ϵ, so that one expects that

Ae=limϵ0+𝒟ϵe(t+s)/τIϵ(t,s)dtds>0.

To compute the exact value of Ae, we perform the change of variable x=(t-s)/(ϵτ)s=t-ϵτx to write

𝒟ϵe(t+s)/τIϵ(t,s)dtds=2-001ϵτe-ϵxIϵ(t,t+ϵτx)dxe2t/τdt,

where the function ϵe-ϵx/τIϵ(t,t+ϵx) remains of the order of one on 𝒟ϵ in the limit of instantaneous synapses. Actually, one can compute that

limϵ0+ϵe-ϵxIϵt,t+ϵτx=b2τEeiWe2We+Wie-xWe+Wi1-e-21-xWe+Wie2btaei,2.

Then, for dealing with positive, continuous, uniformly bounded functions, one can safely exchange the integral and limit operations to get

Ae=2001limϵ0+ϵτeϵxIϵt,t+ϵτxdxe2t/τdt=0e2t/τ1+aei,2dt01bEeiWe2We+Wiex/τWe+Wi1e21x/τWe+Widx=bτ21+aei,2EeiWe2We+Wi21eWe+Wi2.

A similar calculation for the quadratic cross term Bei yields

Bei=2cei1+aei,2withcei=bτ2EeiWeWiWe+Wi21-e-We+Wi2.

In order to express Ae in terms of cei, we need to introduce the quantity ae,12=ae,1-ae,2 which satisfies

ae,12=bτEeiWeWe+Wi1eWe+Wi12EeiWeWe+Wi1eWe+Wi2=bτEeiWeWe+Wi1eWe+Wi1121eWe+Wi=bτEeiWeWe+Wi1eWe+Wi1+eWe+Wi2=bτ2EeiWeWe+Wi1+eWe+Wi2.

With the above observation, we remark that

1+aei,2Aeae,12=bτ2EeiWe2We+Wi21eWe+Wi2EeiWeWe+Wi1eWe+Wi2=bτ2EeiWe2WeWe+WiWe+Wi21eWe+Wi2=bτ2EeiWeWiWe+Wi21eWe+Wi2=cei

so that we have the following compact expression for the quadratic diagonal term:

Ae=ae,12-cei1+aei,2.

APPENDIX K: COMPACT VARIANCE EXPRESSION

Our goal is to find a compact, interpretable formula for the stationary variance V[V] from the knowledge of the quadratic form

EV2=AeVe2+BeiVeVi+AiVi2+VeBeI+ViBiI(I/G)+AI(I/G)2.

Let us first assume no current injection, I=0, so that one has to keep track of only the quadratic terms. Specifying the quadratic coefficient Ae=Ae+Ae,Ai=Ai+Ai, and Bei=Bei+Bei in Eq. (K1), we get

EV2=ae,12ae,2ae,11+aei,11+aei,2+ae,12cei1+aei,2Ve2+ae,12ai,2ai,1+ai,12ae,2ae,11+aei,11+aei,2+2cei1+aei,2VeVi+ai,12ai,2ai,11+aei,11+aei,2+ai,12cei1+aei,2Vi2=ae,12ae,2ae,1+1+ae,1+ai,1ae,1ae21+aei,11+aei,2Ve2+ae,12ai,2ai,1+ai,12ae,2ae,11+aei,11+aei,2VeVi+ai,12ai,2ai,1+1+ae,1+ai,1ai,1ai21+aei,11+aei,2Vi2cei1+aei,2VeVi2,

where we collect separately all the terms containing the coefficient cei and where we use the facts that by definition ae,12=ae,1-ae,2,ai,12=ai,1-ai,2, and aei,1=ae,1+ai,1. Expanding and simplifying the coefficients of Ve2 and Vi2 above yield

EV2=ae,1ae,2+1+ai,1ae,1ae21+aei,11+aei,2Ve2+ae,12ai,2ai,1+ai,12ae,2ae,11+aei,11+aei,2VeVi+ai,1ai,2+1+ae,1ai,1ai21+aei,11+aei,2Vi2cei1+aei,2VeVi2.

Then, we can utilize the expression above for EV2 together with the stationary mean formula

E[V]=ae,1Ve+ai,1Vi1+aei,1 (K1)

to write the variance V[V]=EV2-E[V]2 as

V[V]=ae,1-ae,21+ai,12+ai,1-ai,2ae,121+aei,121+aei,2Ve2-ae,1-ae,2ae,11+ae,1+ai,1ai,1-ai,21+ai,11+aei,121+aei,2VeVi+ai,1-ai,21+ae,12+ae,1-ae,2ai,121+aei,121+aei,2Vi2-cei1+aei,2Ve-Vi2.

To factorize the above expression, let us reintroduce ae,12=ae,1-ae,2 and ai,12=ai,1-ai,2 and collect the terms where these two coefficients occur. This yields

VV=ae,121+aei,121+aei,21+ai,12Ve2ai,11+ae,12VeVi+ae,12Vi2+ai,121+aei,121+aei,21+ae,12Vi2ae,11+ai,12VeVi+ai,12Ve2cei1+aei,2VeVi2=ae,121+aei,21+ai,1Veae,1Vi1+aei,12+ai,121+aei,21+ai,eViai,1Ve1+aei,12cei1+aei,2VeVi2.

Finally, injecting the expression of stationary mean Eq. (K1) in both parentheses above produces the compact formula

V[V]=ae,121+aei,2Ve-E[V]2+ai,121+aei,2Vi-E[V]2-cei1+aei,2Ve-Vi2, (K2)

which is the same as the one given in Eq. (16).

APPENDIX L: FACTORIZED VARIANCE EXPRESSION

In this appendix, we reshape the variance expression given in Eq. (K2) under a form that is clearly non-negative. To this end, let us first remark that the calculation in Appendix J shows that

ae,12-cei=bτ2EeiWe2We+Wi21+e-We+Wi2.

Then, setting Ve-Vi2=Ve-E[V]-Vi-E[V]2=Ve-E[V]2-2Ve-E[V]Vi-E[V]+Vi-E[V]2 in Eq. (K2), we obtain

VV=11+aei,2ae,12VeEV2+ai,12ViEV2ceiVeVi2=11+aei,2ae,12ceiVeEV2+2ceiVeEVViEV+ai,12ceiViEV2=bτ1+aei,2EeiWe2VeEV22We+Wi2+2WeVeEVWiViEV2We+Wi2+Wi2ViEV22We+Wi21eWe+Wi2=bτ21+aei,2EeiWeVeEV+WiViEV2We+Wi21eWe+Wi2. (L1)

Note that the above quantity is clearly non-negative as any variance shall be. From there, one can include the impact of the injected current I by further considering all the terms in Eq. (K1), including the linear and inhomogeneous current-dependent terms. Similar algebraic manipulations confirm that Eq. (L1) remains valid so that the only impact of I is via altering the expression E[V], so that we ultimately obtain the following explicit compact form:

V[V]=EeiWeVe+WiViWe+Wi-E[V]21-e-We+Wi22/(bτ)+Eei1-e-2We+WiwithE[V]=bτEeiWeVe+WiViWe+Wi1-e-We+Wi+I/G1+bτEei1-e-We+Wi.

The above expression shows that as expected V[V]0 and that the variability vanishes if and only if We/Wi=E[V]-Vi/Ve-E[V] with probability one. In turn, plugging this relation into the mean voltage expression and solving for E[V] reveals that we necessarily have E[V]=I/G. This is consistent with the intuition that variability can vanish only if excitation and inhibition perfectly cancel one another.

APPENDIX M: VARIANCE IN THE SMALL-WEIGHT APPROXIMATION

In this appendix, we compute the simplified expression for the variance V[V] obtained via the small-weight approximation. Second, let us compute the small-weight approximation of the second-order efficacy

cei=bτ2EeiWeWiWe+Wi21-e-We+Wi2bτ2EeiWeWi=bτ2wewiEeikeki,

which amounts to computing the expectation of the cross product of the jumps ke and ki. To estimate the above approximation, it is important to remember that first that pe and pi are not defined as the marginals of pei but as conditional marginals, for which we have pe,k=b/bel=0Kipei,kl and pi,l=b/bik=0Kepei,kl. Then, by the definition of the correlation coefficient ρei in Eq. (4), we have

ρei=bEeikekiKebEeikeKibEeiki=bEeikekiKebeEekeKibiEiki=bEeikekiKeKireri,

as the rates be and bi are such that beEeke=Kere and biEeki=Kiri. As a result, we obtain a simplified expression for the cross-correlation coefficient:

cei=ρeireriτ/2KeweKiwi.

Observe that, as expected, cei vanishes when ρei=0. Second, let us compute the small-weight approximation of the second-order efficacy

ae,12=bτ2EeiWeWe+Wi1-e-We+Wi2bτ2EeiWeWe+Wi=bτ2we2Eeike2+wewiEeikeki.

To estimate the above approximation, we use the definition of the correlation coefficient ρe in Eq. (8):

ρe=beEekeke-1beEekeKe-1=bEeikeke-1KeKe-1re,

as the rate be is such that beEeke=Kere. This directly implies that

bEeike2=bEeikeke-1+bEeike=ρeKeKe-1re+Kere=Kere1+ρeKe-1.

so that we evaluate

ae,12=bτ2we2Eeike2+wewiEeikeki=reτ2Ke1+ρeKe-1we2+ρeirireτ2KeweKiwi,

which simplifies to ae,12=reτ/2Ke1+ρeKe-1we2 when excitation and inhibition act independently. A symmetric expression holds for the inhibitory efficacy ai,12. Plugging the above expressions for synaptic efficacies into the variance expression Eq. (16) yields the small-weight approximation

VV1+ρeKe1Kerewe2VeEV2+1+ρiKi1Kiriwi2ViEV221/τ+Kerewe+Kiriwi+ρeireriKeweKiwiVeEV2+ViEV2VeVi221/τ+Kerewe+Kiriwi.

Let us note that the first term in the right-hand side above represents the small-weight approximation of the voltage variance in the absence of correlation between excitation and inhibition, i.e., for ρei=0. Denoting the latter approximation by V[V]ρei=0 and using the fact that the small-weight expression for the mean voltage

E[V]=KereweVe+KiriwiVi1/τ+Kerewe+Kiriwi

is independent of correlations, we observe that, as intuition suggests, synchrony-based correlation between excitation and inhibition results in a decrease of the neural variability:

ΔV[V]ρei=V[V]-V[V]ρei=0-ρeireriKeweKiwiVe-E[V]E[V]-Vi1/τ+Kerewe+Kiriwi0.

However, the overall contribution of correlation is to increase variability in the small-weight approximation. This can be shown under the assumptions that Ke1 and Ki1, by observing that

ΔV[V]ρei,ρe/i=VVVVρe/i=ρei=0ρereKeweVeEVρiriKiwiViEV221/τ+Kerewe+Kiriwi+ρeρiρeireriKeweKiwiVeEVEVVi1/τ+Kerewe+Kiriwi0,

where both terms are positive since we always have 0ρeiρeρi.

APPENDIX N: VALIDITY OF THE SMALL-WEIGHT APPROXIMATION

Biophysical estimates of the synaptic weights we<0.01 and wi<0.04 and the synaptic input numbers Ke<10000 and Ki<2500 suggest that neurons operates in the small-weight regime. In this regime, we claim that exponential corrections due to finite-size effect can be neglected in the evaluation of synaptic efficacies, as long as the spiking correlations remains weak. Here, we make this latter statement quantitative by focusing on the first-order efficacies in the case of excitation alone. The relative error due to neglecting exponential corrections can be quantified as

=EeWe-Ee1-e-WeEe1-e-We0.

Let us evaluate this relative error, assumed to be small, when correlations are parametrized via beta distributions with parameter βe=1/ρe-1. Assuming correlations to be weak, ρe1, amounts to assuming large, βe1. Under the assumptions of small error, we can compute

Ee1-e-WeEeWe=weEekeandEeWe-1+e-WeEeWe2/2=we2Eeke2/2.

By the calculations carried out in Appendix M, we have

beEeke=KereandbeEeke2=Kere1+ρeKe-1.

Remembering that βe=1/ρe-1, this implies that we have

EeWe2/2EeWe-EeWe2/2we1+ρeKe-1/21-we1+ρeKe-1/2.

For a correlation coefficient ρe0.05, this means that neglecting exponential corrections incurs less than e=3% error if the number of inputs is smaller than Ke1000 for moderate synaptic weight we=0.001 or than Ke100 for large synaptic weight we=0.01.

APPENDIX O: INFINITE-SIZE LIMIT WITH SPIKING CORRELATIONS

The computation of the first two moments E[V] and EV2 requires one to evaluate various efficacies as expectations. Upon inspection, these expectations are all of the form bEeifWe,Wi, where f is a smooth positive function that is bounded on R+×R+ with f(0,0)=0. Just as for the Lévy-Khintchine decomposition of stable jump processes [78,79], this observation allows one to generalize our results to processes that exhibit and countable infinity of jumps over finite, nonzero time intervals. For our parametric forms based on beta distributions, such processes emerge in the limit of an arbitrary large number of inputs, i.e., for Ke,Ki. Let us consider the case of excitation alone for simplicity. Then, we need to make sure that all expectations of the form beEeifWe remain well posed in the limit Ke for smooth, bounded test function f with f(0)=0. To check this, observe that, for all 0<kKe, we have by Eqs. (7) and (9) that

bepe,k=βreKekBk,β+Kek=βreΓKe+1Γk+1ΓKek+1ΓkΓβ+Kek+1Γβ+Ke,

where we have introduced the Gamma function Γ. Rearranging terms and using the fact that Γ(z+1)=zΓ(z) for all z>0, we obtain

bepe,k=βrekKeΓKeΓβ+KeΓβ+KekKekΓKek=βrek1kKeβ1+o1Ke,

where the last equality is uniform in k and follows from the fact that, for all x>0, we have

limzΓ(z+x)Γ(z)=zx1+x21z+o1z.

From there, given a test function f, let us consider

beEefWe=k=1Kebepe,kδWekΩeKefWedWe=k=1Kebepe,kfkΩeKe=rek=1Keβk1kKeβ1fkΩeKe+o1.

The order zero term above can be interpreted as a Riemann sum so that one has

limKebeEefWe=relimKe1Kek=1KeβKek1kKefkΩeKe=re01βθ1(1θ)β1fθΩedθ=re0Ωeβw1wΩeβ1fwdw.

Thus, the jump densities is specified via the Lévy-Khintchine measure

νew=βw1-wΩeβ-1,

which is a deficient measure for admitting a pole in zero. This singular behavior indicates that the limit jump process obtained when Ke has a countable infinity of jumps within any finite, nonempty time interval. Generic stationary jump processes with independent increments, as is the case here, are entirely specified by their Lévy-Khintchine measure νe [78,79]. Moreover, one can check that, given knowledge of νe, one can consistently estimate the corresponding pairwise spiking correlation as

ρe=limKeEekeke1EekeKe1=limKebeEeke/Ke2beEeke/Ke=0Ωew2νewdwΩe0Ωewνewdw.

Performing integral with respect to the Lévy-Khintchine measure νe instead of the evaluating the expectation Ee[] in Eqs. (14) and (16) yields

E[V]=Ve0Ωe1-e-wνe(dw)1/τ+0Ωe1-e-wνe(dw)andV[V]=Ve-E[V]20Ωe1-e-w2νe(dw)2/τ+0Ωe1-e-2wνe(dw).

Observe that, as 1-e-w2w2 for all w0, the definition of the spiking correlation and voltage variance implies that we have V[V]=Oρe so that neural variability consistently vanishes in the absence of correlations.

References

  • [1].Churchland MM et al. , Stimulus onset quenches neural variability: A widespread cortical phenomenon, Nat. Neurosci 13, 369 EP (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [2].Tolhurst D, Movshon JA, and Thompson I, The dependence of response amplitude and variance of cat visual cortical neurones on stimulus contrast, Exp. Brain Res 41, 414 (1981). [DOI] [PubMed] [Google Scholar]
  • [3].Tolhurst DJ, Movshon JA, and Dean AF, The statistical reliability of signals in single neurons in cat and monkey visual cortex, Vision Res. 23, 775 (1983) [DOI] [PubMed] [Google Scholar]
  • [4].Churchland MM, Byron MY, Ryu SI, Santhanam G, and Shenoy KV, Neural variability in premotor cortex provides a signature of motor preparation, J. Neurosci 26, 3697 (2006). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [5].Rickert J, Riehle A, Aertsen A, Rotter S, and Nawrot MP, Dynamic encoding of movement direction in motor cortical neurons, J. Neurosci 29, 13870 (2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [6].Stevens CF and Zador AM, Input synchrony and the irregular firing of cortical neurons, Nat. Neurosci 1, 210 (1998). [DOI] [PubMed] [Google Scholar]
  • [7].Lampl I, Reichova I, and Ferster D, Synchronous membrane potential fluctuations in neurons of the cat visual cortex, Neuron 22, 361 (1999). [DOI] [PubMed] [Google Scholar]
  • [8].Ecker AS, Berens P, Cotton RJ, Subramaniyan M, Denfield GH, Cadwell CR, Smirnakis SM, Bethge M, and Tolias AS, State dependence of noise correlations in macaque primary visual cortex, Neuron 82, 235 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [9].Poulet JF and Petersen CC, Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice, Nature (London) 454, 881 (2008). [DOI] [PubMed] [Google Scholar]
  • [10].Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, and Harris KD, The asynchronous state in cortical circuits, Science 327, 587 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [11].Ecker AS, Berens P, Keliris GA, Bethge M, Logothetis NK, and Tolias AS, Decorrelated neuronal firing in cortical microcircuits, Science 327, 584 (2010). [DOI] [PubMed] [Google Scholar]
  • [12].Cohen MR and Kohn A, Measuring and interpreting neuronal correlations, Nat. Neurosci 14, 811 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [13].Braitenberg V and Schüz A, Cortex: Statistics and Geometry of Neuronal Connectivity (Springer Science & Business Media, New York, 2013). [Google Scholar]
  • [14].Softky WR and Koch C, Cortical cells should fire regularly, but do not, Neural Comput. 4, 643 (1992). [Google Scholar]
  • [15].Bell A, Mainen ZF, Tsodyks M, and Sejnowski TJ, “Balancing” of conductances may explain irregular cortical spiking, La Jolla, CA, Institute for Neural Computation Technical; Report No. INC-9502, 1995. [Google Scholar]
  • [16].Amit DJ and Brunel N, Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex, Cereb. Cortex 7, 237 (1997). [DOI] [PubMed] [Google Scholar]
  • [17].Brunel N, Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons, J. Comput. Neurosci 8, 183 (2000). [DOI] [PubMed] [Google Scholar]
  • [18].Ahmadian Y and Miller KD, What is the dynamical regime of cerebral cortex?, Neuron 109, 3373 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [19].Sompolinsky H, Crisanti A, and Sommers HJ, Chaos in random neural networks, Phys. Rev. Lett 61, 259 (1988). [DOI] [PubMed] [Google Scholar]
  • [20].van Vreeswijk C and Sompolinsky H, Chaos in neuronal networks with balanced excitatory and inhibitory activity, Science 274, 1724 (1996). [DOI] [PubMed] [Google Scholar]
  • [21].Vreeswijk C. v. and Sompolinsky H, Chaotic balanced state in a model of cortical circuits, Neural Comput. 10, 1321 (1998). [DOI] [PubMed] [Google Scholar]
  • [22].Ahmadian Y, Rubin DB, and Miller KD, Analysis of the stabilized supralinear network, Neural Comput. 25, 1994 (2013) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [23].Rubin DB, Van Hooser SD, and Miller KD, The stabilized supralinear network: A unifying circuit motif underlying multi-input integration in sensory cortex, Neuron 85, 402 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [24].Hennequin G, Ahmadian Y, Rubin DB, Lengyel M, and Miller KD, The dynamical regime of sensory cortex: Stable dynamics around a single stimulus-tuned attractor account for patterns of noise variability, Neuron 98, 846 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [25].Haider B, Häusser M, and Carandini M, Inhibition dominates sensory responses in the awake cortex, Nature (London) 493, 97 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [26].Tan AYY, Andoni S, and Priebe NJ, A spontaneous state of weakly correlated synaptic excitation and inhibition in visual cortex, Neuroscience 247, 364 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [27].Tan AYY, Chen Y, Scholl B, Seidemann E, and Priebe NJ, Sensory stimulation shifts visual cortex from synchronous to asynchronous states, Nature (London) 509, 226 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [28].Okun M, Steinmetz NA, Cossell L, Iacaruso MF, Ko H, Barthó P, Moore T, Hofer SB, Mrsic-Flogel TD, Carandini M et al. , Diverse coupling of neurons to populations in sensory cortex, Nature (London) 521, 511 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [29].Hansel D and van Vreeswijk C, The mechanism of orientation selectivity in primary visual cortex without a functional map, J. Neurosci 32, 4049 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [30].Pattadkal JJ, Mato G, van Vreeswijk C, Priebe NJ, and Hansel D, Emergent orientation selectivity from random networks in mouse visual cortex, Cell Rep. 24, 2042 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [31].Shadlen MN and Newsome WT, The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding, J. Neurosci 18, 3870 (1998). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [32].Chen Y, Geisler WS, and Seidemann E, Optimal decoding of correlated neural population responses in the primate visual cortex, Nat. Neurosci 9, 1412 (2006). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [33].Polk A, Litwin-Kumar A, and Doiron B, Correlated neural variability in persistent state networks, Proc. Natl. Acad. Sci. U.S.A 109, 6295 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [34].Yu J and Ferster D, Membrane potential synchrony in primary visual cortex during sensory stimulation, Neuron 68, 1187 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [35].Arroyo S, Bennett C, and Hestrin S, Correlation of synaptic inputs in the visual cortex of awake, behaving mice, Neuron 99, 1289 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [36].Okun M and Lampl I, Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities, Nat. Neurosci 11, 535 (2008). [DOI] [PubMed] [Google Scholar]
  • [37].Zerlaut Y, Zucca S, Panzeri S, and Fellin T, The spectrum of asynchronous dynamics in spiking networks as a model for the diversity of non-rhythmic waking states in the neocortex, Cell Rep. 27, 1119 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [38].Sanzeni A, Histed MH, and Brunel N, Emergence of irregular activity in networks of strongly coupled conductance-based neurons, Phys. Rev. X 12, 011044 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [39].Stein RB, A theoretical analysis of neuronal variability, Biophys. J 5, 173 (1965). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [40].Tuckwell HC, Introduction to Theoretical Neurobiology: Linear Cable Theory and Dendritic Structure (Cambridge University Press, Cambridge, England, 1988), Vol. 1. [Google Scholar]
  • [41].Richardson MJE, Effects of synaptic conductance on the voltage distribution and firing rate of spiking neurons, Phys. Rev. E 69, 051918 (2004). [DOI] [PubMed] [Google Scholar]
  • [42].Richardson MJ and Gerstner W, Synaptic shot noise and conductance fluctuations affect the membrane voltage with equal significance, Neural Comput. 17, 923 (2005). [DOI] [PubMed] [Google Scholar]
  • [43].Richardson MJ and Gerstner W, Statistics of subthreshold neuronal voltage fluctuations due to conductance-based synaptic shot noise, Chaos 16, 026106 (2006). [DOI] [PubMed] [Google Scholar]
  • [44].Marcus S, Modeling and analysis of stochastic differential equations driven by point processes, IEEE Trans. Inf. Theory 24, 164 (1978). [Google Scholar]
  • [45].Marcus SI, Modeling and approximation of stochastic differential equations driven by semimartingales, Stochastics 4, 223 (1981). [Google Scholar]
  • [46].Destexhe A, Rudolph M, Fellous J-M, and Sejnowski T, Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons, Neuroscience (Oxford) 107, 13 (2001). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [47].Meffin H, Burkitt AN, and Grayden DB, An analytical model for the 'large, fluctuating synaptic conductance state' typical of neocortical neurons in vivo, J. Comput. Neurosci 16, 159 (2004). [DOI] [PubMed] [Google Scholar]
  • [48].Doiron B, Litwin-Kumar A, Rosenbaum R, Ocker GK, and Josić K, The mechanics of state-dependent neural correlations, Nat. Neurosci 19, 383 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [49].Ocker GK, Hu Y, Buice MA, Doiron B, Josić K, Rosenbaum R, and Shea-Brown E, From the statistics of connectivity to the statistics of spike times in neuronal networks, Curr. Opin. Neurobiol 46, 109 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [50].Rall W, Time constants and electrotonic length of membrane cylinders and neurons, Biophys. J 9, 1483 (1969). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [51].Daley DJ and Vere-Jones D, An Introduction to the Theory of Point Processes. Vol. I. Probability and Its Applications (Springer-Verlag, New York, 2003). [Google Scholar]
  • [52].Daley DJ and Vere-Jones D, An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure (Springer Science & Business Media, New York, 2007). [Google Scholar]
  • [53].Knight BW, The relationship between the firing rate of a single neuron and the level of activity in a population of neurons: Experimental evidence for resonant enhancement in the population response, J. Gen. Physiol 59, 767 (1972). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [54].Knight B, Dynamics of encoding in a population of neurons, J. Gen. Physiol 59, 734 (1972). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [55].Kingman JF, Uses of exchangeability, Ann. Probab 6, 183 (1978). [Google Scholar]
  • [56].Aldous DJ, Exchangeability and related topics, in Proceedings of the École d'Été de Probabilités de Saint-Flour XIII—1983 (Springer, New York, 1985), pp. 1–198. [Google Scholar]
  • [57].De Finetti B, Funzione caratteristica di un fenomeno aleatorio, in Atti del Congresso Internazionale dei Matematici: Bologna del 3 al 10 de settembre di 1928 (1929), pp. 179–190, arXiv:1512.01229. [Google Scholar]
  • [58].Gupta AK and Nadarajah S, Handbook of Beta Distribution and Its Applications (CRC Press, Boca Raton, 2004). [Google Scholar]
  • [59].Macke JH, Berens P, Ecker AS, Tolias AS, and Bethge M, Generating spike trains with specified correlation coefficients, Neural Comput. 21, 397 (2009). [DOI] [PubMed] [Google Scholar]
  • [60].Hjort NL, Nonparametric Bayes estimators based on beta processes in models for life history data, Ann. Stat 18, 1259 (1990). [Google Scholar]
  • [61].Thibaux R and Jordan MI, Hierarchical beta processes and the indian buffet process, in Artificial Intelligence and Statistics (Proceedings of Machine Learning Research, Cambridge, MA, 2007), pp. 564–571. [Google Scholar]
  • [62].Broderick T, Jordan MI, and Pitman J, Beta processes, stick-breaking and power laws, Bayesian Anal. 7, 439 (2012). [Google Scholar]
  • [63].Berkes P, Wood F, and Pillow J, Characterizing neural dependencies with copula models, Adv. Neural Inf. Process. Syst 21, 129 (2008). [Google Scholar]
  • [64].Balakrishnan N and Lai CD, Continuous Bivariate Distributions (Springer Science & Business Media, New York, 2009). [Google Scholar]
  • [65].Stratonovich R, A new representation for stochastic integrals and equations, SIAM J. Control 4, 362 (1966). [Google Scholar]
  • [66].Chechkin A and Pavlyukevich I, Marcus versus Stratonovich for systems with jump noise, J. Phys. A 47, 342001 (2014). [Google Scholar]
  • [67].Matthes K, Zur Theorie der Bedienungsprozesse, in Transactions of the Third Prague Conference on Information Theory, Statistical Decision Functions, Random Processes (Liblice, 1962) (Czech Academy of Science, Prague, 1964), pp. 513–528. [Google Scholar]
  • [68].Arieli A, Sterkin A, Grinvald A, and Aertsen A, Dynamics of ongoing activity: Explanation of the large variability in evoked cortical responses, Science 273, 1868 (1996). [DOI] [PubMed] [Google Scholar]
  • [69].Mainen Z and Sejnowski T, Reliability of spike timing in neocortical neurons, Science 268, 1503 (1995). [DOI] [PubMed] [Google Scholar]
  • [70].Rieke F, Warland D, de Ruyter van Steveninck R, and Bialek W, Spikes, A Bradford Book (MIT Press, Cambridge, MA, 1999), pp. xviii+395, exploring the neural code. [Google Scholar]
  • [71].Softky WR and Koch C, The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPS, J. Neurosci 13, 334 (1993). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [72].Bruno RM and Sakmann B, Cortex is driven by weak but synchronously active thalamocortical synapses, Science 312, 1622 (2006). [DOI] [PubMed] [Google Scholar]
  • [73].Jouhanneau J-S, Kremkow J, Dorrn AL, and Poulet JF, In vivo monosynaptic excitatory transmission between layer 2 cortical pyramidal neurons, Cell Rep. 13, 2098 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [74].Pala A and Petersen CC, In vivo measurement of cell-type-specific synaptic connectivity and synaptic transmission in layer 2/3 mouse barrel cortex, Neuron 85, 68 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [75].Seeman SC, Campagnola L, Davoudian PA, Hoggarth A, Hage TA, Bosma-Moody A, Baker CA, Lee JH, Mihalas S, Teeter C et al. , Sparse recurrent excitatory connectivity in the microcircuit of the adult mouse and human cortex, eLife 7, e37349 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [76].Campagnola L et al. , Local connectivity and synaptic dynamics in mouse and human neocortex, Science 375, eabj5861 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [77].Destexhe A, Rudolph M, and Paré D, The high-conductance state of neocortical neurons in vivo, Nat. Rev. Neurosci 4, 739 (2003). [DOI] [PubMed] [Google Scholar]
  • [78].Khintchine A, Korrelationstheorie der stationären stochastischen prozesse, Math. Ann 109, 604 (1934). [Google Scholar]
  • [79].Lévy P and Lévy P, Théorie de l'Addition des Variables Aléatoires (Gauthier-Villars, Paris, 1954). [Google Scholar]
  • [80].Qaqish BF, A family of multivariate binary distributions for simulating correlated binary variables with specified marginal means and correlations, Biometrika 90, 455 (2003). [Google Scholar]
  • [81].Niebur E, Generation of synthetic spike trains with defined pairwise correlations, Neural Comput. 19, 1720 (2007). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [82].Macke JH, Buesing L, Cunningham JP, Yu BM, Shenoy KV, and Sahani M, Empirical models of spiking in neural populations, Adv. Neural Inf. Process. Syst 24, 1350 (2011). [Google Scholar]
  • [83].Pillow JW, Shlens J, Paninski L, Sher A, Litke AM, Chichilnisky E, and Simoncelli EP, Spatio-temporal correlations and visual signalling in a complete neuronal population, Nature (London) 454, 995 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [84].Park IM, Archer EW, Latimer K, and Pillow JW, Universal models for binary spike patterns using centered dirichlet processes, Adv. Neural Inf. Process. Syst 26, 2463 (2013). [Google Scholar]
  • [85].Theis L, Chagas AM, Arnstein D, Schwarz C, and Bethge M, Beyond GLMs: A generative mixture modeling approach to neural system identification, PLoS Comput. Biol 9, e1003356 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [86].Schneidman E, Berry MJ, Segev R, and Bialek W, Weak pairwise correlations imply strongly correlated network states in a neural population, Nature (London) 440, 1007 (2006). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [87].Granot-Atedgi E, Tkačik G, Segev R, and Schneidman E, Stimulus-dependent maximum entropy models of neural population codes, PLoS Comput. Biol 9, e1002922 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [88].Bohté SM, Spekreijse H, and Roelfsema PR, The effects of pair-wise and higher-order correlations on the firing rate of a postsynaptic neuron, Neural Comput. 12, 153 (2000). [DOI] [PubMed] [Google Scholar]
  • [89].Amari S.-i., Nakahara H, Wu S, and Sakai Y, Synchronous firing and higher-order interactions in neuron pool, Neural Comput. 15, 127 (2003). [DOI] [PubMed] [Google Scholar]
  • [90].Kuhn A, Aertsen A, and Rotter S, Higher-order statistics of input ensembles and the response of simple model neurons, Neural Comput. 15, 67 (2003). [DOI] [PubMed] [Google Scholar]
  • [91].Staude B, Rotter S, and Grün S, Cubic: Cumulant based inference of higher-order correlations in massively parallel spike trains, J. Comput. Neurosci 29, 327 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [92].Staude B, Rotter S et al. , Higher-order correlations in non-stationary parallel spike trains: Statistical modeling and inference, Front. Comput. Neurosci 4, 1228 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [93].Bäuerle N and Grübel R, Multivariate counting processes: Copulas and beyond, ASTIN Bull.: J. IAA 35, 379 (2005). [Google Scholar]
  • [94].Trousdale J, Hu Y, Shea-Brown E, and Josić K, A generative spike train model with time-structured higher order correlations, Front. Comput. Neurosci 7, 84 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [95].Shelley M, McLaughlin D, Shapley R, and Wielaard J, States of high conductance in a large-scale model of the visual cortex, J. Comput. Neurosci 13, 93 (2002). [DOI] [PubMed] [Google Scholar]
  • [96].Rudolph M and Destexhe A, Characterization of subthreshold voltage fluctuations in neuronal membranes, Neural Comput. 15, 2577 (2003). [DOI] [PubMed] [Google Scholar]
  • [97].Kumar A, Schrader S, Aertsen A, and Rotter S, The high-conductance state of cortical networks, Neural Comput. 20, 1 (2008). [DOI] [PubMed] [Google Scholar]
  • [98].Van Kampen NG, Stochastic Processes in Physics and Chemistry (Elsevier, New York, 1992), Vol. 1. [Google Scholar]
  • [99].Risken H, Fokker-Planck equation, in The Fokker-Planck Equation (Springer, New York, 1996), pp. 63–95. [Google Scholar]
  • [100].Itô K, 109. Stochastic integral, Proc. Imp. Acad. (Tokyo) 20, 519 (1944). [Google Scholar]
  • [101].Baccelli F and Taillefumier T, Replica-mean-field limits for intensity-based neural networks, SIAM J. Appl. Dyn. Syst 18, 1756 (2019). [Google Scholar]
  • [102].Baccelli F and Taillefumier T, The pair-replica-mean-field limit for intensity-based neural networks, SIAM J. Appl. Dyn. Syst 20, 165 (2021). [Google Scholar]
  • [103].Yu L and Taillefumier TO, Metastable spiking networks in the replica-mean-field limit, PLoS Comput. Biol 18, e1010215 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [104].Baccelli F and Brémaud P, The palm calculus of point processes, in Elements of Queueing Theory (Springer, New York, 2003), pp. 1–74. [Google Scholar]
  • [105].London M, Roth A, Beeren L, Häusser M, and Latham PE, Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex, Nature (London) 466, 123 EP (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [106].Abbott LF and van Vreeswijk C, Asynchronous states in networks of pulse-coupled oscillators, Phys. Rev. E 48, 1483 (1993). [DOI] [PubMed] [Google Scholar]
  • [107].Baladron J, Fasoli D, Faugeras O, and Touboul J, Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and Fitzhugh-Nagumo neurons, J. Math. Neurosci 2, 10 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [108].Touboul J, Hermann G, and Faugeras O, Noise-induced behaviors in neural mean field dynamics, SIAM J. Appl. Dyn. Syst 11, 49 (2012). [Google Scholar]
  • [109].Robert P and Touboul J, On the dynamics of random neuronal networks, J. Stat. Phys 165, 545 (2016). [Google Scholar]
  • [110].Averbeck BB, Latham PE, and Pouget A, Neural correlations, population coding and computation, Nat. Rev. Neurosci 7, 358 EP (2006). [DOI] [PubMed] [Google Scholar]
  • [111].Ecker AS, Berens P, Tolias AS, and Bethge M, The effect of noise correlations in populations of diversely tuned neurons, J. Neurosci 31, 14272 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [112].Hu Y, Zylberberg J, and Shea-Brown E, The sign rule and beyond: Boundary effects, flexibility, and noise correlations in neural population codes, PLoS Comput. Biol 10, e1003469 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [113].Buzsáki G and Mizuseki K, The log-dynamic brain: How skewed distributions affect network operations, Nat. Rev. Neurosci 15, 264 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [114].Iyer R, Menon V, Buice M, Koch C, and Mihalas S, The influence of synaptic weight distribution on neuronal population dynamics, PLoS Comput. Biol 9, e1003248 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [115].Amarasingham A, Geman S, and Harrison MT, Ambiguity and nonidentifiability in the statistical analysis of neural codes, Proc. Natl. Acad. Sci. U.S.A 112, 6455 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [116].Scholl B, Thomas CI, Ryan MA, Kamasawa N, and Fitzpatrick D, Cortical response selectivity derives from strength in numbers of synapses, Nature (London) 590, 111 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [117].Smith MA and Kohn A, Spatial and temporal scales of neuronal correlation in primary visual cortex, J. Neurosci 28, 12591 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [118].Smith MA and Sommer MA, Spatial and temporal scales of neuronal correlation in visual area v4, J. Neurosci 33, 5422 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [119].Litwin-Kumar A and Doiron B, Slow dynamics and high variability in balanced cortical networks with clustered connections, Nat. Neurosci 15, 1498 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [120].de la Rocha J, Doiron B, Shea-Brown E, Josić K, and Reyes A, Correlation between neural spike trains increases with firing rate, Nature (London) 448, 802 EP (2007). [DOI] [PubMed] [Google Scholar]
  • [121].Rosenbaum R, Smith MA, Kohn A, Rubin JE, and Doiron B, The spatial structure of correlated neuronal variability, Nat. Neurosci 20, 107 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [122].Sznitman A-S, Topics in propagation of chaos, in Proceedings of the École d'Été de Probabilités de Saint-Flour XIX—1989, Lecture Notes Math. Vol. 1464 (Springer, Berlin, 1991), pp. 165–251. [Google Scholar]
  • [123].Erny X, Löcherbach E, and Loukianova D, Conditional propagation of chaos for mean field systems of interacting neurons, Electron. J. Pro 26, 1 (2021). [Google Scholar]

RESOURCES