Author manuscript; available in PMC: 2022 Aug 2.
Published in final edited form as: Phys Rev X. 2022 Mar 8;12(1):011044. doi: 10.1103/physrevx.12.011044

Emergence of Irregular Activity in Networks of Strongly Coupled Conductance-Based Neurons

A Sanzeni 1,2,3, M H Histed 3, N Brunel 2,4,*
PMCID: PMC9344604  NIHMSID: NIHMS1802482  PMID: 35923858

Abstract

Cortical neurons are characterized by irregular firing and a broad distribution of rates. The balanced state model explains these observations with a cancellation of mean excitatory and inhibitory currents, which makes fluctuations drive firing. In networks of neurons with current-based synapses, the balanced state emerges dynamically if coupling is strong, i.e., if the mean number of synapses per neuron K is large and synaptic efficacy is of the order of 1/√K. When synapses are conductance-based, current fluctuations are suppressed when coupling is strong, questioning the applicability of the balanced state idea to biological neural networks. We analyze networks of strongly coupled conductance-based neurons and show that asynchronous irregular activity and broad distributions of rates emerge if synaptic efficacy is of the order of 1/log(K). In such networks, unlike in the standard balanced state model, current fluctuations are small and firing is maintained by a drift-diffusion balance. This balance emerges dynamically, without fine-tuning, if inputs are smaller than a critical value, which depends on synaptic time constants and coupling strength, and is significantly more robust to connection heterogeneities than the classical balanced state model. Our analysis makes experimentally testable predictions of how the network response properties should evolve as input increases.

Subject Areas: Biological Physics, Complex Systems, Interdisciplinary Physics

I. INTRODUCTION

Each neuron in the cortex receives inputs from hundreds to thousands of presynaptic neurons. If these inputs were to sum to produce a large net current, the central limit theorem implies that fluctuations should be small compared to the mean, leading to regular firing, as observed during in vitro experiments under constant current injection [1,2]. Cortical activity, however, is highly irregular, with a coefficient of variation of interspike intervals (CV of ISI) close to one [3,4]. To explain the observed irregularity, it has been proposed that neural networks operate in a balanced state, where strong feedforward and recurrent excitatory inputs are canceled by recurrent inhibition and firing is driven by fluctuations [5,6]. At the single-neuron level, in order for this state to emerge, input currents must satisfy two constraints. First, excitatory and inhibitory currents must be fine-tuned to produce an average input below threshold. Specifically, if K and J represent the average number of input connections per neuron and the synaptic efficacy, respectively, the difference between excitatory and inhibitory presynaptic inputs must be of the order of 1/(KJ). Second, input fluctuations should be large enough to drive firing.

It has been shown that the balanced state emerges dynamically (without fine-tuning) in randomly connected networks of binary units [7,8] and networks of current-based spiking neurons [9,10], provided that coupling is strong, and recurrent inhibition is powerful enough to counterbalance instabilities due to recurrent excitation. However, these results are all derived assuming that the firing of a presynaptic neuron produces a fixed amount of synaptic current, hence neglecting the dependence of synaptic current on the membrane potential, a key aspect of neuronal biophysics. In real synapses, synaptic inputs are mediated by changes in conductance, due to opening of synaptic receptor channels on the membrane, and synaptic currents are proportional to the product of synaptic conductance and a driving force which depends on the membrane potential. Models that incorporate this description are referred to as “conductance-based synapses”.

Large synaptic conductances have been shown to have major effects on the stationary [11] and dynamical [12] response of single cells and form the basis of the "high-conductance state" [13–19] that has been argued to describe well in vivo data [20–22] (but see Ref. [23] and Sec. IX). At the network level, conductance modulation plays a role in controlling signal propagation [24], input summation [25], interactions between traveling waves [26], and firing statistics [27]. However, most of the previously mentioned studies rely exclusively on numerical simulations, and, in spite of a few attempts at analytical descriptions of networks of conductance-based neurons [17,28–32], an understanding of the behavior of such networks when coupling is strong is still lacking.

Here, we investigate networks of strongly coupled conductance-based neurons. We find that, for synapses of the order of 1/√K, fluctuations are too weak to sustain firing, questioning the relevance of the balanced state idea to cortical dynamics. Our analysis, on the other hand, shows that stronger synapses [of the order of 1/log(K)] generate irregular firing when coupling is strong. We characterize the properties of networks with such a scaling, showing that they match properties observed in the cortex, and discuss constraints induced by the synaptic time constants. The model generates qualitatively different predictions compared to the current-based model, which could be tested experimentally.

II. MODELS OF SINGLE-NEURON AND NETWORK DYNAMICS

A. Membrane potential dynamics

We study the dynamics of networks of leaky integrate-and-fire (LIF) neurons with conductance-based synaptic inputs. The membrane potential Vj of the jth neuron in the network follows the equation

C_j \frac{dV_j}{dt} = -\sum_{A=L,E,I} g_A^j \left(V_j - E_A\right), \quad (1)

where Cj is the neuronal capacitance; EL, EE, and EI are the reversal potentials of the leak, excitatory, and inhibitory currents, respectively; while gLj, gEj, and gIj are the leak, excitatory, and inhibitory conductances, respectively. Assuming instantaneous synapses (the case of finite synaptic time constants is discussed in Sec. VIII), excitatory and inhibitory conductances are given by

\frac{g_{E,I}^j}{g_L^j} = \tau_j \sum_m a_{jm} \sum_n \delta\!\left(t - t_m^n\right). \quad (2)

In Eq. (2), τ_j = C_j/g_L^j is the single-neuron membrane time constant, a_jm are dimensionless measures of synaptic strength between neuron j and neuron m, and Σ_n δ(t − t_m^n) represents the sum of all the spikes generated at times t_m^n by neuron m. Every time the membrane potential V_j reaches the firing threshold θ, the jth neuron emits a spike, and its membrane potential is set to a reset V_r and stays at that value for a refractory period τ_rp; after this time, the dynamics resumes, following Eq. (1).

We use a_jm = a (ag) for all excitatory (inhibitory) synapses. In the homogeneous case, each neuron receives synaptic inputs from KE = K (KI = γK) excitatory (inhibitory) cells. In the network case, each neuron receives additional KX = K excitatory inputs from an external population firing with Poisson statistics with rate νX. We use excitatory and inhibitory neurons with the same biophysical properties; hence, the above assumptions imply that the firing rates of excitatory and inhibitory neurons are equal: ν = νE = νI. Models taking into account the biophysical diversity between the excitatory and inhibitory populations are discussed in Appendix D. When heterogeneity is taken into account, the above-defined values of KE,I,X represent the means of Gaussian distributions. We use the following single-neuron parameters: τrp = 2 ms, θ = −55 mV, Vr = −65 mV, EE = 0 mV, EI = −75 mV, EL = −80 mV, and τj = τL = 20 ms. We explore various scalings of a with K, and, in all cases, we assume that a ≪ 1. When a ≪ 1, an incoming spike produced by an excitatory presynaptic neuron produces a jump in the membrane potential of amplitude a(EE − V), where V is the voltage just before spike arrival. In the cortex, V ~ −60 mV and average amplitudes of postsynaptic potentials are on the order of 0.5–1.0 mV [33–39]. Thus, we expect realistic values of a to be on the order of 0.01.
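The single-neuron dynamics of Eqs. (1) and (2) can be illustrated with a minimal forward-Euler simulation in which each incoming spike moves V toward the corresponding reversal potential by a fraction a (or ag for inhibition). This is a sketch, not the code used for the figures; the input rates, the value of a, and the simulation settings are illustrative choices, and spikes arriving within one time step are applied together, which is accurate for a ≪ 1.

```python
import numpy as np

# Minimal sketch of Eqs. (1)-(2): one conductance-based LIF neuron driven
# by Poisson inputs with instantaneous synapses. Parameter values follow
# Sec. II.A where given; g, gamma, eta, nu_x, and a are illustrative.
def simulate_lif_cond(a=0.01, K=1000, g=12.0, gamma=0.25, eta=1.8,
                      nu_x=10.0, T=10.0, dt=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    tau_L, theta, V_r, tau_rp = 20e-3, -55.0, -65.0, 2e-3
    E_L, E_E, E_I = -80.0, 0.0, -75.0
    r_e, r_i = nu_x, eta * nu_x              # presynaptic E/I rates
    V, t_last, spikes = E_L, -1.0, []
    for step in range(int(T / dt)):
        t = step * dt
        if t - t_last < tau_rp:              # refractory: V held at reset
            continue
        V += dt / tau_L * (E_L - V)          # leak current
        n_e = rng.poisson(K * r_e * dt)          # E spikes in this step
        n_i = rng.poisson(gamma * K * r_i * dt)  # I spikes in this step
        V += n_e * a * (E_E - V) + n_i * a * g * (E_I - V)
        if V >= theta:                       # threshold crossing
            spikes.append(t)
            V, t_last = V_r, t
    return np.array(spikes)
```

With these illustrative values the neuron fires irregularly at a low rate; the rate is a steep function of a, K, and the input rates, which is the sensitivity analyzed in Secs. III and IV.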

B. Diffusion and effective time constant approximations

We assume that each cell receives projections from a large number of cells (K ≫ 1), neurons are sparsely connected and fire approximately as Poisson processes, each incoming spike provides a small change in conductance (a ≪ 1), and temporal correlations in synaptic inputs can be neglected. Under these assumptions, we can use the diffusion approximation and approximate the conductances as

\frac{g_E}{g_L} = a\,\tau_L\!\left[K r_E + \sqrt{K r_E}\,\zeta_E\right], \qquad \frac{g_I}{g_L} = a\,g\,\tau_L\!\left[\gamma K r_I + \sqrt{\gamma K r_I}\,\zeta_I\right], \quad (3)

where rE and rI are the firing rates of presynaptic E and I neurons, respectively, and ζE and ζI are independent Gaussian white noise terms with zero mean and unit variance density. In the single-neuron case, we take rE = νX, rI = ηνX, where η represents the ratio of I/E input rate. In the network case, rE = νX + ν, rI = ν, where νX is the external rate, while ν is the firing rate of excitatory and inhibitory neurons in the network, determined self-consistently (see below). We point out that, for some activity levels, the assumption of Poisson presynaptic firing made in the derivation of Eq. (3) breaks down, as neurons in the network show interspike intervals with CV significantly different from one [e.g., see Fig. 3(c)]. However, comparisons between mean field results and numerical simulations (see Appendix E) show that neglecting non-Poissonianity [as well as other contributions discussed above Eq. (3)] generates quantitative but not qualitative discrepancies, with magnitude that decreases with coupling strength. Moreover, in Appendix B, we show that if a ≪ 1, the firing of neurons in the network matches that of a Poisson process with a refractory period and, hence, when ν ≪ 1/τrp, deviations from Poissonianity become negligible.

FIG. 3.


Response of networks of conductance-based neurons for large K. (a) Scaling relation defined by the self-consistency condition given by Eqs. (14) and (19) (black line); values of parameters used in (b)–(d) (colored dots). Constant scaling (a ~ K^0, dotted line) and the scaling of the balanced state model (a ~ 1/√K, dashed line) are shown for comparison. (b),(c) Firing rate and CV of ISI as a function of the external input, obtained from Eqs. (10) and (15) (colored lines), with the strong-coupling limit solution of Eqs. (20) and (16) (black line). (d) Probability distribution of the membrane potential obtained from Eq. (17). In (b)–(d), dotted and dashed lines represent quantities obtained with the scalings J ~ K^0 and J ~ 1/√K, respectively, for the values of K and J indicated in (a) (black dots). Parameters: γ = 1/4 and g = 30.

Using the diffusion approximation, Eq. (1) reduces to

\tau \frac{dV}{dt} = -V + \mu + \sigma(V)\sqrt{\tau}\,\zeta, \quad (4)

where ζ is a white noise term, with zero mean and unit variance density, while

\tau^{-1} = \tau_L^{-1} + aK\left(r_E + g\gamma\, r_I\right),
\mu = \tau\left\{E_L/\tau_L + aK\left[r_E E_E + g\gamma\, r_I E_I\right]\right\},
\sigma^2(V) = a^2 K \tau \left[r_E (V - E_E)^2 + g^2\gamma\, r_I (V - E_I)^2\right]. \quad (5)

In Eq. (4), τ is an effective membrane time constant, while μ and σ2(V) represent the average and the variance of the synaptic current generated by incoming spikes, respectively.

The noise term in Eq. (4) can be decomposed into an additive and a multiplicative component. The latter has an effect on membrane voltage statistics that is of the same order as the contribution coming from synaptic shot noise [40], a factor which is neglected in deriving Eq. (3). Therefore, for a consistent analysis, we neglect the multiplicative component of the noise in the above derivation; this leads to an equation of the form of Eq. (4) with the substitution

\sigma(V) \to \sigma(\mu). \quad (6)

This approach is termed the effective time constant approximation [40]. Note that the substitution of Eq. (6) greatly simplifies mathematical expressions, but it is not a necessary ingredient for the results presented in this paper. In fact, all our results can be obtained without having to resort to this approximation (see Appendixes A, B, and D).
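Under the effective time constant approximation, the three quantities entering Eq. (4) are elementary functions of the rates and coupling parameters. A minimal sketch, with the single-neuron parameters of Sec. II.A and illustrative values for g, γ, and the rates:

```python
import math

# Moments entering Eq. (4): effective time constant and mean from Eq. (5),
# with the noise evaluated at V = mu as in Eq. (6). Parameter values other
# than the Sec. II.A constants are illustrative.
def effective_moments(a, K, r_e, r_i, g=12.0, gamma=0.25,
                      tau_L=20e-3, E_L=-80.0, E_E=0.0, E_I=-75.0):
    tau = 1.0 / (1.0 / tau_L + a * K * (r_e + g * gamma * r_i))
    mu = tau * (E_L / tau_L + a * K * (r_e * E_E + g * gamma * r_i * E_I))
    sigma = math.sqrt(a * a * K * tau * (r_e * (mu - E_E) ** 2
                                         + g * g * gamma * r_i * (mu - E_I) ** 2))
    return tau, mu, sigma
```

Increasing K at fixed a shrinks τ roughly as 1/K once synaptic conductances dominate the leak, while μ approaches a K-independent equilibrium value and σ stays of order √a; this is the behavior analyzed in Sec. III.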

C. Current-based model

The previous definitions and results translate directly to current-based models, with the only difference that the dependency of excitatory and inhibitory synaptic currents on the membrane potential is neglected (see Ref. [10] for more details). Therefore, Eq. (1) becomes

\tau_j \frac{dV_j}{dt} = -V_j + I_E^j - I_I^j, \quad (7)

where

I_A^j = \tau_j \sum_m J_{jm} \sum_n \delta\!\left(t - t_m^n\right)

represent the excitatory and inhibitory input currents. Starting from Eq. (7), making assumptions analogous to those discussed above and using the diffusion approximation [10], the dynamics of current-based neurons is given by an equation of the form of Eq. (4) with

\tau = \tau_L, \qquad \mu = \tau J K\left[r_E - g\gamma\, r_I\right], \qquad \sigma^2 = \tau J^2 K\left[r_E + g^2\gamma\, r_I\right]. \quad (8)

Note that, unlike what happens in conductance-based models, τ is a fixed parameter and does not depend on the network firing rate or external drive. Another difference between the current-based and conductance-based models is that in the latter, but not the former, model σ depends on V; as we discuss above, this difference is neglected in the main text, where we use the effective time constant approximation.
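The current-based moments of Eq. (8) make the classical balanced scaling transparent: τ is fixed, the mean grows as KJ, and the noise as √K J, so J ~ 1/√K leaves σ unchanged. A companion sketch, with illustrative efficacies and rates:

```python
import math

# Current-based moments, Eq. (8): tau is a fixed parameter; mu ~ K*J and
# sigma ~ sqrt(K)*J. Values of J, g, gamma, and the rates are illustrative.
def current_moments(J, K, r_e, r_i, g=4.0, gamma=0.25, tau_L=20e-3):
    tau = tau_L
    mu = tau * J * K * (r_e - g * gamma * r_i)
    sigma = math.sqrt(tau * J ** 2 * K * (r_e + g ** 2 * gamma * r_i))
    return tau, mu, sigma
```

Multiplying K by 100 while dividing J by 10 leaves σ exactly invariant but multiplies μ by 10, which is why the rate difference between excitation and inhibition must also be fine-tuned toward zero, as discussed in Sec. III.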

III. BEHAVIOR OF SINGLE-NEURON RESPONSE FOR LARGE K

We start our analysis by investigating the effects of synaptic conductance on single-neuron response. We consider a neuron receiving K (γK) excitatory (inhibitory) inputs, each with synaptic efficacy J (gJ), from cells firing with Poisson statistics with a rate

r_E = \nu_X, \qquad r_I = \eta\,\nu_X, \quad (9)

and analyze its membrane potential dynamics in the frameworks of current-based and conductance-based models. In both models, the membrane potential V follows a stochastic differential equation of the form of Eq. (4); differences emerge in the dependency of τ, μ, and σ on the parameters characterizing the connectivity, K and J. In particular, in the current-based model, the different terms in Eq. (8) can be written as

\tau \sim \tau_0^{\mathrm{curr}}, \qquad \mu \sim KJ\,\mu_0^{\mathrm{curr}}, \qquad \sigma \sim \sqrt{K}\,J\,\sigma_0^{\mathrm{curr}},

where τ0^curr, μ0^curr, and σ0^curr are independent of J and K. In the conductance-based model, the efficacy of excitatory and inhibitory synapses depends on the membrane potential as J = a(EE,I − V); the different terms in Eq. (4), under the assumption that Ka ≫ 1, become of the order of

\tau \sim \frac{\tau_0^{\mathrm{cond}}}{Ka}, \qquad \mu \sim \mu_0^{\mathrm{cond}}, \qquad \sigma \sim \sqrt{a}\,\sigma_0^{\mathrm{cond}}.

Here, all these terms depend on parameters in a completely different way than in the current-based case. As we show below, these differences drastically modify how the neural response changes as K and J are varied and, hence, the size of J ensuring a finite response for a given value of K.

The dynamics of a current-based neuron is shown in Fig. 1(a)(i), with parameters leading to irregular firing. Because of the chosen parameter values, the mean excitatory and inhibitory inputs approximately cancel each other, generating a subthreshold average input and fluctuation-driven spikes, which leads to irregularity of firing. If all parameters are fixed while K is increased (J ~ K^0), the response changes drastically [Fig. 1(a)(ii)], since the mean input becomes much larger than threshold and firing becomes regular. To understand this effect, we analyze how the terms in Eq. (4) are modified as K increases. The evolution of the membrane potential in time is determined by two terms: a drift term −(V − μ)/τ, which drives the membrane potential toward its mean value μ, and a noise term σζ/√τ, which leads to fluctuations around this mean value. Increasing K modifies the equilibrium value μ of the drift force and the input noise, which increase proportionally to KJ(1 − γgη) and √K J √(γg²η + 1), respectively [Figs. 1(b) and 1(c)]. This observation suggests that, to preserve irregular firing as K is increased, two ingredients are needed. First, the rates of excitatory and inhibitory inputs must be fine-tuned to maintain a mean input below threshold; this can be achieved by choosing γgη − 1 ~ 1/(KJ). Second, the amplitude of input fluctuations should be preserved; this can be achieved by scaling synaptic efficacy as J ~ 1/√K. Once these two conditions are met, irregular firing is restored [Fig. 1(a)(iii)]. Importantly, in a network with J ~ 1/√K, irregular firing emerges without fine-tuning, since rates dynamically adjust to balance excitatory and inhibitory inputs and maintain mean inputs below threshold [7,8].

FIG. 1.


Effects of coupling strength on the firing behavior of current-based and conductance-based neurons. (a) Membrane potential of a single current-based neuron for (i) J = 0.3 mV, K = 10^3, g = γ = 1, and η such that 1 − gγη = 0.075; (ii) with K = 5 × 10^4; (iii) with K = 5 × 10^4 and scaled synaptic efficacy (J ~ 1/√K, which gives J = 0.04 mV) and input difference 1 − gγη = 0.01. (b),(c) Effect of coupling strength on drift force and input noise in a current-based neuron. (d) Membrane potential of a single conductance-based neuron for fixed input difference (1 − gγη = −2.8) and (i) a = 0.01, K = 10^3; (ii) K = 5 × 10^4; (iii) K = 5 × 10^4 and scaled synaptic efficacy (a ~ 1/√K, a = 0.001). (e),(f) Effect of coupling strength on drift force and input noise in a conductance-based neuron. In (a) and (d), dashed lines represent the threshold and reset (black) and the equilibrium value of the membrane potential (green). In (a)(ii) and (d)(ii), light purple traces represent dynamics in the absence of a spiking mechanism. Input fluctuations in (c) and (f) represent input noise per unit time, i.e., the integral of σ√τ ζ of Eq. (4) computed over an interval Δt and normalized by Δt.

We now show that the above solution does not work once synaptic conductance is taken into account. The dynamics of a conductance-based neuron in response to the inputs described above is shown in Fig. 1(d)(i). As in the current-based neuron, it features irregular firing, with mean input below threshold and spiking driven by fluctuations, and firing becomes regular for larger K, leaving all other parameters unchanged [Fig. 1(d)(ii)]. However, unlike the current-based neuron, the input remains below threshold at large K; regular firing is produced by large fluctuations, which saturate the response and produce spikes that are regularly spaced because of the refractory period. These observations can be understood by inspecting the equation for the membrane potential dynamics [Eq. (4)]: increasing K leaves invariant the equilibrium value of the membrane potential μ but increases the drift force and the input noise amplitude as Ka and a√K, respectively [Figs. 1(e) and 1(f)]. Since the equilibrium membrane potential is fixed below threshold, response properties are determined by the interplay between drift force and input noise, which have opposite effects on the probability of spike generation. The response saturation observed in Fig. 1(d)(ii) shows that, as K increases at fixed a, fluctuations dominate over the drift force. On the other hand, using the scaling a ~ 1/√K leaves the amplitude of fluctuations unchanged but generates a restoring force of the order of √K [Fig. 1(e)], which dominates and completely abolishes firing at strong coupling [Fig. 1(d)(iii)].

Results in Fig. 1 show that the response of a conductance-based neuron when K is large depends on the balance between drift force and input noise. The scalings a ~ O(1) and a ~ 1/√K leave one of the two contributions dominant, suggesting that an intermediate scaling could keep a balance between them. Below, we derive such a scaling, showing that it preserves firing rate and CV of ISI when K becomes large.

IV. A SCALING RELATION THAT PRESERVES SINGLE-NEURON RESPONSE FOR LARGE K

We analyze under what conditions the response of a single conductance-based neuron is preserved when K is large. For a LIF neuron described by Eqs. (4)–(6), the single-cell transfer function, i.e., the dependency of the firing rate ν on the external drive νX, is given by [41,42]

\nu = \left[\tau_{rp} + \tau\sqrt{\pi} \int_{v_{min}}^{v_{max}} dx\, e^{x^2}\,[1 + \mathrm{erf}(x)]\right]^{-1}, \quad (10)

with

v(x) = \frac{x - \mu}{\sigma}, \qquad v_{min} = v(V_r), \qquad v_{max} = v(\theta). \quad (11)

In the biologically relevant case of a ≪ 1, Eq. (10) simplifies significantly, using the fact that vmax, the distance between the average membrane potential and the threshold in units of input noise, is of the order of 1/√a. Therefore, vmax is large when a is small; in this limit, the firing rate is given by the Kramers escape rate [43], and Eq. (10) becomes

\nu = \frac{1}{\tau_{rp} + Q/\nu_X}, \qquad Q = \frac{\sqrt{\pi}\,\bar{\tau}}{\sqrt{a}\,K\,\bar{v}}\,\exp\!\left(\frac{\bar{v}^2}{a}\right), \quad (12)

where we define v̄² = a v_max² and τ̄ = aKνXτ. The motivation to introduce v̄ and τ̄ is that they remain of the order of 1 in the small-a limit, provided the external inputs νX are at least of the order of 1/(aKτL). When the external inputs are such that νX ≫ 1/(aKτL), these quantities become independent of νX, a, and K and are given by

\bar{\tau} = (1 + g\gamma\eta)^{-1}, \qquad \bar{v} = \frac{\theta - \bar{\mu}}{\bar{\sigma}}, \qquad \bar{\mu} = \bar{\tau}\left(E_E + g\gamma\eta\, E_I\right), \qquad \bar{\sigma}^2 = \bar{\tau}\left[(\bar{\mu} - E_E)^2 + g^2\gamma\eta\,(\bar{\mu} - E_I)^2\right]. \quad (13)

The firing rate given by Eq. (12) remains finite when a is small and/or K is large if Q remains of the order of one; this condition leads to the following scaling relationship:

K \sim \frac{\bar{\tau}}{\sqrt{a}\,\bar{v}}\,\exp\!\left(\frac{\bar{v}^2}{a}\right); \quad (14)

i.e., a should be of the order of 1/log(K).
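As an illustration (not part of the original analysis), the scaling relation Eq. (14) can be inverted numerically for a at fixed K: its right-hand side decreases monotonically in a, so bisection in log space suffices. The sketch below uses τ̄ and v̄ from Eq. (13) with the single-neuron values quoted for Fig. 2 (g = 12, η = 1.8, γ = 1/4); since Eq. (14) is a scaling relation, the prefactor (and hence the absolute value of a) is only indicative.

```python
import math

# Inverting Eq. (14), K = (tau_bar / (sqrt(a) * v_bar)) * exp(v_bar^2 / a),
# for a at fixed K. tau_bar, mu_bar, sigma_bar, v_bar follow Eq. (13).
def a_of_K(K, g=12.0, gamma=0.25, eta=1.8,
           theta=-55.0, E_E=0.0, E_I=-75.0):
    tau_bar = 1.0 / (1.0 + g * gamma * eta)
    mu_bar = tau_bar * (E_E + g * gamma * eta * E_I)
    sigma_bar = math.sqrt(tau_bar * ((mu_bar - E_E) ** 2
                                     + g * g * gamma * eta * (mu_bar - E_I) ** 2))
    v_bar = (theta - mu_bar) / sigma_bar

    def log_K_of_a(a):  # log of the right-hand side of Eq. (14)
        return math.log(tau_bar / v_bar) - 0.5 * math.log(a) + v_bar ** 2 / a

    lo, hi = 1e-6, 10.0                    # bisection on a monotone map
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if log_K_of_a(mid) > math.log(K):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Consistent with the 1/log(K) scaling, the ratio a_of_K(10^6)/a_of_K(10^3) is roughly log(10^3)/log(10^6) = 1/2.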

In Appendix C, we show that expressions analogous to Eq. (12) can be derived in integrate-and-fire neuron models which feature additional intrinsic voltage-dependent currents, as long as synapses are conductance based and input noise is small (a ≪ 1). Examples of such models include the exponential integrate-and-fire neuron, with its spike-generating exponential current [44], and models with voltage-gated subthreshold currents [23]. Moreover, we show that, in these models, firing remains finite if a ~ 1/log(K), and voltage-dependent currents generate corrections to the logarithmic scaling which are negligible when coupling is strong.

In Fig. 2(a), we compare the scaling defined by Eq. (14) with the a ~ 1/√K scaling of current-based neurons. At low values of K, the values of a obtained with the two scalings are similar; at larger values of K, the synaptic strength defined by Eq. (14) decays as a ~ 1/log(K), i.e., synapses are stronger in the conductance-based model than in the current-based model. Examples of the single-neuron transfer function computed from Eq. (10) for different coupling strengths are shown in Figs. 2(b) and 2(c). Responses are nonlinear at onset and close to saturation. As predicted by the theory, scaling a with K according to Eq. (14) preserves the firing rate over a region of inputs that increases with the coupling strength [Figs. 2(c) and 2(d)], while the average membrane potential remains below threshold [Fig. 2(d)]. The quantity v̄/√a represents the distance from threshold of the equilibrium membrane potential in units of input fluctuations; Eq. (14) implies that this distance increases with the coupling strength. When K is very large, the effective membrane time constant, which is of the order of τ ~ 1/(aKνX), becomes small and firing is driven by fluctuations that, on the timescale of this effective membrane time constant, are rare.
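The transfer function Eq. (10) and its small-a Kramers form Eq. (12) can also be compared numerically. The sketch below evaluates the integral in Eq. (10) by trapezoidal quadrature, with moments from Eqs. (5) and (6); the Kramers rate is written in terms of v_max = v̄/√a = (θ − μ)/σ, which is equivalent to Eq. (12). Parameter values (g = 12, η = 1.8, γ = 1/4, and the chosen a, K, νX) are illustrative.

```python
import math

def lif_moments(a, K, nu_x, eta=1.8, g=12.0, gamma=0.25, tau_L=20e-3,
                E_L=-80.0, E_E=0.0, E_I=-75.0):
    # moments of Eq. (5), with the noise evaluated at V = mu as in Eq. (6)
    r_e, r_i = nu_x, eta * nu_x
    tau = 1.0 / (1.0 / tau_L + a * K * (r_e + g * gamma * r_i))
    mu = tau * (E_L / tau_L + a * K * (r_e * E_E + g * gamma * r_i * E_I))
    sigma = math.sqrt(a * a * K * tau * (r_e * (mu - E_E) ** 2
                                         + g * g * gamma * r_i * (mu - E_I) ** 2))
    return tau, mu, sigma

def rate_full(a, K, nu_x, tau_rp=2e-3, theta=-55.0, V_r=-65.0, n=20_000):
    # Eq. (10): trapezoidal quadrature of exp(x^2) * (1 + erf(x))
    tau, mu, sigma = lif_moments(a, K, nu_x)
    v_min, v_max = (V_r - mu) / sigma, (theta - mu) / sigma
    h = (v_max - v_min) / n
    f = [math.exp((v_min + h * k) ** 2) * (1.0 + math.erf(v_min + h * k))
         for k in range(n + 1)]
    integral = h * (sum(f) - 0.5 * (f[0] + f[-1]))
    return 1.0 / (tau_rp + tau * math.sqrt(math.pi) * integral)

def rate_kramers(a, K, nu_x, tau_rp=2e-3, theta=-55.0):
    # Eq. (12), rewritten with v_max = v_bar / sqrt(a) = (theta - mu)/sigma
    tau, mu, sigma = lif_moments(a, K, nu_x)
    v_max = (theta - mu) / sigma
    return 1.0 / (tau_rp + tau * math.sqrt(math.pi)
                  * math.exp(v_max ** 2) / v_max)
```

For a = 0.01 the two rates agree to within tens of percent, and the agreement improves as a decreases, as expected from the Kramers limit.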

FIG. 2.


The scaling of Eq. (14) preserves the response of a single conductance-based neuron for large K. (a) The scaling relation preserving firing in conductance-based neurons [Eq. (14), solid line]; constant scaling (a ~ K^0, dotted line) and the scaling of the balanced state model (a ~ 1/√K, dashed line) are shown as a comparison. Colored dots indicate values of a and K used subsequently. (b)–(h) Response of conductance-based neurons for different values of the coupling strength and synaptic efficacy (colored lines). The scaling of Eq. (14) preserves how the firing rate (b),(c), the equilibrium value of the membrane potential (d), and the CV of the interspike interval distribution (e) depend on the external input rate νX. This invariance is achieved by increasing the drift force (f) and input fluctuations (g) in a way that weakly decreases (logarithmically in K) membrane potential fluctuations (h). Different scalings either saturate or suppress the response [(b); black lines correspond to K = 10^5 and a values as in (a)]. Parameters: a = 0.01 for K = 10^3, g = 12, η = 1.8, and γ = 1/4.

We next investigate if the above scaling preserves irregular firing by analyzing the CV of interspike intervals. This quantity is given by [10]

\mathrm{CV}^2 = 2\pi\,\nu^2\tau^2 \int_{v_{min}}^{v_{max}} dx\, e^{x^2} \int_{-\infty}^{x} dy\, e^{y^2}\,[1 + \mathrm{erf}(y)]^2 \quad (15)

and, for the biologically relevant case of a ≪ 1 and μ < θ, reduces to (see Appendix B for details)

\mathrm{CV} = 1 - \tau_{rp}\,\nu; \quad (16)

i.e., the CV is close to one at low rates, and it decays monotonically as the neuron approaches saturation. Critically, Eq. (16) depends on the coupling strength only through ν; hence, any scaling relation preserving the firing rate also produces a CV of the order of one at low rates. We validate this result numerically in Fig. 2(e).
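The logic behind Eq. (16) (see Appendix B) can be checked directly: for a Poisson process with a refractory period, each ISI is τrp plus an exponential waiting time, so the ISI mean is τrp + 1/λ while its standard deviation is 1/λ, giving CV = 1 − τrp ν exactly. A sketch with an illustrative value for the exponential rate λ:

```python
import random, statistics

# Empirical check of Eq. (16) for a Poisson process with refractory period:
# sample ISIs as tau_rp plus an exponential waiting time and compare the
# sample CV with 1 - tau_rp * nu. The rate of the exponential part is an
# illustrative choice.
def cv_refractory_poisson(rate_out_of_rp, tau_rp=2e-3, n=200_000, seed=1):
    rng = random.Random(seed)
    isis = [tau_rp + rng.expovariate(rate_out_of_rp) for _ in range(n)]
    mean_isi = statistics.fmean(isis)
    cv = statistics.pstdev(isis) / mean_isi
    return cv, 1.0 / mean_isi            # empirical CV and firing rate nu
```

At saturation (ν → 1/τrp) the CV goes to zero, and at low rates it approaches one, as stated in the text.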

We now investigate how Eq. (14) preserves irregular firing in conductance-based neurons. We have shown that increasing K at fixed a produces large input and membrane fluctuations, which saturate firing; the scaling a ~ 1/√K preserves input fluctuations but, because of the strong drift force, suppresses membrane potential fluctuations and, hence, firing. The scaling of Eq. (14), at every value of K, yields the value of a that balances the contribution of drift and input fluctuations, so that membrane fluctuations are of the right size to preserve the rate of threshold crossing. Note that, unlike what happens in the current-based model, both input fluctuations and drift force increase with K [Figs. 2(f) and 2(g)], while the membrane potential distribution, which is given by [45]

P(V) = \frac{2\nu\tau}{\sigma} \int_{v(V)}^{v_{max}} dx\, \Theta[x - v(V_r)]\, e^{x^2 - v(V)^2}, \quad (17)

slowly becomes narrower [Fig. 2(h)]. This result can be understood by noticing that, when a ≪ 1 and neglecting the contribution due to the refractory period, Eq. (17) reduces to

P(V) = \frac{1}{\sigma\sqrt{\pi}}\, \exp\!\left(-\frac{(V-\mu)^2}{\sigma^2}\right). \quad (18)

Hence, the probability distribution becomes Gaussian when coupling is strong, with a variance proportional to σ2 ~ a. We note that, since a is of the order of 1/ log K, the width of the distribution becomes small only for unrealistically large values of K.

V. ASYNCHRONOUS IRREGULAR ACTIVITY IN NETWORK RESPONSE AT STRONG COUPLING

We have so far considered the case of a single neuron subjected to stochastic inputs. We now show how the above results generalize to the network case, where inputs to a neuron are produced by a combination of external and recurrent inputs.

We consider networks of recurrently connected excitatory and inhibitory neurons, firing at rate ν, stimulated by an external population firing with Poisson statistics with firing rate νX. Using again the diffusion approximation, the response of a single neuron in the network is given by Eq. (10) [and, hence, Eq. (12)] with

r_E = \nu_X + \nu, \qquad r_I = \nu. \quad (19)

If all neurons in a given population are described by the same single-cell parameters and the network is in an asynchronous state in which cells fire at a constant rate, Eq. (10) provides an implicit equation whose solution is the network transfer function. Example solutions are shown in Fig. 3(b) (numerical validation of the mean field results is provided in Appendix E). In Appendix D, we prove that firing in the network is preserved when coupling is strong if parameters are rescaled according to Eq. (14). Moreover, we show that response nonlinearities are suppressed and that the network response in the strong-coupling limit (i.e., when K goes to infinity) is given, up to saturation, by

\nu = \rho\,\nu_X. \quad (20)

The parameter ρ, which is obtained by solving Eq. (12) self-consistently (see Appendix D for details), is the response gain in the strong-coupling limit. Finally, our derivation implies that Eq. (14) preserves irregular firing and creates a probability distribution of membrane potential whose width decreases only logarithmically as K increases [Figs. 3(c) and 3(d) and numerical validation in Appendix E], as in the single-neuron case. While this logarithmic decrease is a qualitative difference with the current-based balanced state in which the width stays finite in the large K limit, in practice, for realistic values of K, realistic fluctuations of membrane potential (a few mV) can be observed in both cases.
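The self-consistency condition can be illustrated with a simple damped fixed-point iteration: the single-cell rate, in the small-a Kramers form of Eq. (12), is evaluated with rE = νX + ν and rI = ν [Eq. (19)] until ν stops changing. This is a sketch under the effective time constant approximation; g = 30 and γ = 1/4 follow Fig. 3, while a, K, and νX are illustrative.

```python
import math

# Single-cell rate in the Kramers form of Eq. (12), written with
# v_max = (theta - mu)/sigma; moments from Eqs. (5)-(6).
def cell_rate(a, K, r_e, r_i, g=30.0, gamma=0.25, tau_L=20e-3, tau_rp=2e-3,
              E_L=-80.0, E_E=0.0, E_I=-75.0, theta=-55.0):
    tau = 1.0 / (1.0 / tau_L + a * K * (r_e + g * gamma * r_i))
    mu = tau * (E_L / tau_L + a * K * (r_e * E_E + g * gamma * r_i * E_I))
    sigma = math.sqrt(a * a * K * tau * (r_e * (mu - E_E) ** 2
                                         + g * g * gamma * r_i * (mu - E_I) ** 2))
    v_max = (theta - mu) / sigma
    if v_max <= 0.0:                 # mean input above threshold
        return 1.0 / tau_rp
    if v_max > 25.0:                 # escape rate numerically zero
        return 0.0
    return 1.0 / (tau_rp + tau * math.sqrt(math.pi)
                  * math.exp(v_max ** 2) / v_max)

# Damped fixed-point iteration for the network rate, Eqs. (10) and (19).
def network_rate(a, K, nu_x, iters=2000, damp=0.1, **kw):
    nu = 0.0
    for _ in range(iters):           # nu <- (1-d)*nu + d*Phi(nu_x+nu, nu)
        nu = (1.0 - damp) * nu + damp * cell_rate(a, K, nu_x + nu, nu, **kw)
    return nu
```

The converged rate satisfies the self-consistency condition to high accuracy and grows with νX, qualitatively consistent with the linear response of Eq. (20) away from saturation.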

We now turn to the question of what happens in networks with different scalings between a and K. Our analysis of single-neuron response described above shows that scalings different from that of Eq. (14) fail to preserve firing for large K, as they let either input noise or drift dominate. However, the situation in networks might be different, since recurrent interactions could, in principle, adjust the statistics of input currents such that irregular firing at low rates is preserved when coupling becomes strong. Thus, we turn to the analysis of the network behavior when a scaling a ~ K^(−α) is assumed. For α ≤ 0, the dominant contribution of input noise at the single-neuron level (Figs. 1 and 2) generates saturation of the response and regular firing in the network (Fig. 3). This can be understood by noticing that, for large K, the factor Q in Eq. (12) becomes negligible and the self-consistency condition defining the network rate is solved by ν = 1/τrp. For α > 0, the network response for large K is determined by two competing elements. On the one hand, input drift dominates and tends to suppress firing (Figs. 1 and 2). On the other hand, for the network to be stable, inhibition must dominate recurrent interactions [9]. Hence, any suppression in network activity reduces recurrent inhibition and tends to increase neural activity. When these two elements conspire to generate a finite network response, the factor Q in Eq. (12) must be of the order of one and v̄ ~ √a ~ K^(−α/2). In this scenario, the network activity exhibits the following features (Fig. 3): (i) the mean inputs drive neurons very close to threshold (θ − μ̄ ~ √a σ̄ ~ K^(−α/2)); (ii) the response of the network to external inputs is linear and, up to corrections of the order of K^(−α/2), given by

\nu = \frac{(E_E - \theta)\,\nu_X}{\theta(1 + g\gamma) - E_E - g\gamma\, E_I}; \quad (21)

(iii) firing is irregular [because of Eq. (16)]; (iv) the width of the membrane potential distribution is of the order of √a ~ K^(−α/2) [because of Eq. (18)]. Therefore, scalings different from that in Eq. (14) can produce asynchronous irregular activity in networks of conductance-based neurons, but this leads to networks with membrane potentials narrowly distributed close to threshold, a property which seems at odds with what is observed in the cortex [46–51].

VI. ROBUST LOG-NORMAL DISTRIBUTION OF FIRING RATES IN NETWORKS WITH HETEROGENEOUS CONNECTIVITY

Up to this point, we have assumed a number of connections equal for all neurons. In real networks, however, this number fluctuates from cell to cell. The goal of this section is to analyze the effects of heterogeneous connectivity in networks of conductance-based neurons.

We investigate numerically the effects of connection heterogeneity as follows. We choose a Gaussian distribution of the number of connections per neuron, with mean K and variance ΔK² for excitatory connections and mean γK and variance γ²ΔK² for inhibitory connections. The connectivity matrix is constructed by first drawing random E and I in-degrees K^i_{E,X,I} from these Gaussian distributions for each neuron i and then selecting K^i_{E,X,I} presynaptic neurons at random from the corresponding populations. We then simulate the network dynamics and measure the distribution of rates and CV of the ISI in the population. Results for different values of CVK ≡ ΔK/K are shown in Figs. 4(a)–4(c). For small and moderate values of connection heterogeneity, increasing CVK broadens the distribution of rates and CV of the ISI, but both distributions remain peaked around a mean rate that is close to that of homogeneous networks [Figs. 4(a) and 4(b)]. For larger CVK, on the other hand, the distribution of rates changes its shape, with a large fraction of neurons moving to very low rates while others increase their rates [Fig. 4(a)], and the distribution of the CV of ISI becomes bimodal, with a peak at low CV corresponding to the high-rate neurons, while the peak at a CV close to 1 corresponds to neurons with very low firing rates [Fig. 4(b)].

FIG. 4.


Effects of heterogeneous connectivity on the network response. (a),(b) Distribution of ν and CV of ISI computed from network simulations (dots) and from the mean field analysis [(a), black lines] for different values of CVK [values are indicated by dots in (c)]. (c) Δν/ν (green, left axis) and fraction of quiescent cells (brown, right axis) computed from network simulations as a function of CVK. For CVK below CVK*, Δν/ν increases linearly, as predicted by the mean field analysis; deviations from linear scaling emerge for CVK above CVK*, when a significant fraction of cells becomes quiescent. The deviation from linear scaling at low CVK is due to a sampling error in estimating the firing rate from simulations. (d) CVK* as a function of K computed from the mean field theory (green, left axis), with a rescaled according to Eq. (14). For large K, CVK* decays proportionally to a (brown, right axis). When K is too low, the network is silent and CVK* = 0. In (a)–(c), K = 10^3, g = 20, a = 1.6 × 10^(−3), NE = NX = NI/γ = 10K, and νX = 0.05/τrp. In network simulations, the dynamics is run for 20 s using a time step of 50 μs. Parameters in (d) are as in Fig. 3.

To characterize more systematically the change in the distribution of rates with CVK, we measure, for each value of CVK, the fraction of quiescent cells, defined as the fraction of cells that do not spike during the 20 s of simulated dynamics [Fig. 4(c)]. This analysis shows that the fraction of quiescent cells, and, hence, the distribution of rates, changes abruptly when CVK exceeds a critical value CVK*. Importantly, unlike our definition of the fraction of quiescent cells, this abrupt change is a property of the network that is independent of the duration of the simulation.
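The quiescent-cell measure can be illustrated with a toy calculation (our sketch, not the authors' analysis): for Poisson-like firing, a cell with rate ν is silent over a window T with probability e^(−νT), so the quiescent fraction grows sharply once a sizable mass of the rate distribution moves below ~1/T.

```python
import numpy as np

rng = np.random.default_rng(1)

def quiescent_fraction(rates, T):
    """Fraction of cells emitting zero spikes in a window of length T,
    assuming Poisson spike counts at the given rates (spk/s)."""
    counts = rng.poisson(np.asarray(rates) * T)
    return np.mean(counts == 0)

# Log-normal rates with unit median and increasing spread: over the 20 s
# window used in the simulations, the quiescent fraction stays near zero,
# then grows quickly once many low-rate cells appear.
for spread in (0.5, 2.0, 4.0):
    rates = np.exp(spread * rng.normal(size=10_000))
    print(spread, quiescent_fraction(rates, T=20.0))
```

The dependence on T in this toy model is exactly the caveat noted in the text: the measured fraction depends on the simulation duration, while the abrupt change at CVK* does not.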

To understand these numerical results, we perform a mean field analysis of the effects of connection heterogeneity on the distribution of rates (Appendix F). This analysis quantitatively captures the numerical simulations [Fig. 4(a)] and shows that, in the limit of small CVK and a, the rates in the network are given by

νi = ν0 exp[Ω (CVK/a) zi], (22)

where ν0 is the population average in the absence of heterogeneity, zi is a Gaussian random variable, and the prefactor Ω is independent of a, K, and νX. The exponent in Eq. (22) represents quenched disorder in the distance from threshold of the single-cell mean potential μi in units of input noise. As shown in Appendix F, Eq. (22) implies that the distribution of rates is log-normal, a feature consistent with experimental observations [52–54] and with the distributions of rates found in networks of current-based LIF neurons [55]. It also implies that the width of the distribution, Δν/ν, should increase linearly with CVK, a prediction confirmed by numerical simulations [Fig. 4(c)]. The derivation in Appendix F also explains the change in the shape of the distribution at larger CVK: There, the small-CVK approximation is no longer valid, and fluctuations in input connectivity produce cells for which μi is far from θ, which fire either at an extremely low rate (μi < θ) or regularly (μi > θ). The latter generate the peak at low values of the CV of the ISI seen for large values of CVK.
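Equation (22) can be checked with a direct sampling experiment (a sketch with arbitrary illustrative values of ν0, Ω, and a; the actual prefactor Ω comes from the mean field calculation in Appendix F):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_rates(nu0, Omega, cv_K, a, n=100_000):
    """Rates nu_i = nu0 * exp(Omega * (cv_K / a) * z_i), z_i ~ N(0, 1),
    i.e., a log-normal distribution with log-std s = Omega * cv_K / a."""
    s = Omega * cv_K / a
    return nu0 * np.exp(s * rng.standard_normal(n))

# For small cv_K, the relative width of a log-normal distribution,
# Delta_nu/nu = sqrt(exp(s^2) - 1) ~ s, grows linearly with cv_K.
for cv_K in (0.01, 0.02, 0.04):
    nu = sample_rates(nu0=5.0, Omega=1.0, cv_K=cv_K, a=0.1)
    print(cv_K, nu.std() / nu.mean())
```

Doubling cv_K in this range roughly doubles the measured Δν/ν, matching the linear scaling seen in Fig. 4(c).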

The quantity CVK* represents the level of connection heterogeneity above which significant deviations from the asynchronous irregular state emerge, i.e., above which large fractions of neurons show extremely low or regular firing. Equation (22) suggests that CVK* should increase linearly with a. We validate this prediction with our mean field model by computing the minimal value of CVK at which 1% of the cells fire at a rate of 10−3 spk/s [Fig. 4(d)]. Note that the derivation of Eq. (22) assumes only that a is small and does not depend on the scaling relation between a and K. On the other hand, the fact that CVK* increases linearly with a makes the state emerging in networks of conductance-based neurons with a ~ 1/log(K) significantly more robust to connection fluctuations than that emerging with a ~ K−α, for which CVK* ~ K−α, and than that of current-based neurons, where CVK* ~ 1/√K [56]. Note that, while in randomly connected networks CVK ~ 1/√K, a larger degree of heterogeneity is observed in cortical networks [50,56–62]. Our results show that networks of conductance-based neurons could potentially be much more robust to such heterogeneities than networks of current-based neurons.
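The three robustness scalings can be compared numerically (a sketch with all prefactors arbitrarily set to 1; only the asymptotic trends in K are meaningful):

```python
import numpy as np

def cv_k_star(K, scaling, alpha=0.25):
    """Critical heterogeneity CV_K* (up to prefactors) under the scalings
    discussed in the text."""
    if scaling == "log":     # conductance-based, a ~ 1/log(K)
        return 1.0 / np.log(K)
    if scaling == "power":   # conductance-based, a ~ K^(-alpha)
        return K ** -alpha
    if scaling == "sqrt":    # current-based
        return 1.0 / np.sqrt(K)
    raise ValueError(scaling)

# The 1/log(K) scaling decays far more slowly than either power law.
for K in (1e3, 1e4, 1e5, 1e6):
    print(int(K), cv_k_star(K, "log"), cv_k_star(K, "power"), cv_k_star(K, "sqrt"))
```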

VII. COMPARISON WITH EXPERIMENTAL DATA

The relation between synaptic efficacy and the number of connections per neuron has recently been studied experimentally using a culture preparation [63]. That study found that cultures in which K is larger have weaker synapses than cultures with smaller K (Fig. 5). In what follows, we compare these data with the scalings expected in networks of current-based and conductance-based neurons and discuss implications for in vivo networks.

FIG. 5.


Comparison of predictions of the current-based and the conductance-based models in describing experimental data from cultures. (a) Strengths of excitatory (EPSP) and inhibitory (IPSP) postsynaptic potentials recorded in Ref. [63], compared with best fits using scaling relationships derived from networks with current-based synapses (dashed line) and conductance-based synapses (continuous line). Root-mean-square errors and best-fit parameters are rms = 2.2 mV, g = 1.1, and J0 = 20 mV for the current-based model and rms = 2.4 mV, g = 3.4, and v̄ = 0.08 for the conductance-based model. (b) Value of v̄/a predicted by the conductance-based model as a function of K. (c) Ratio between excitatory and leak conductance as a function of K, for νE = νI = νX = 1 spk/s (black) and νE = νI = νX = 5 spk/s (gray), obtained with a rescaled as in Eq. (14) (continuous line) and as 1/√K (dashed line). (d) Ratio between τ and τL as a function of K; parameters and scaling as in (c).

In the current-based model, the strength of excitatory and inhibitory postsynaptic potentials as a function of K can be written as JE = J0/√K and JI = gJE, respectively. In the conductance-based model, these quantities become JE = (EE − V)a and JI = g(V − EI)a, where a = a(K, v̄) is given by Eq. (14) while, for the dataset of Ref. [63], V ~ −60 mV, JE ~ JI, EE ~ 0 mV, and EI ~ −80 mV. For each model, we infer free parameters from the data with a least-squares optimization in logarithmic scale (best fit, g = 1.1 and J0 = 20 mV in the current-based model; g = 3.4 and v̄ = 0.08 in the conductance-based model) and compute the expected synaptic strength as a function of K [lines in Fig. 5(a)]. Our analysis shows that the performance of the current-based and the conductance-based models in describing the data, over the range of K explored in the experiment, is similar, with the former being slightly better than the latter (root mean square error 2.2 vs 2.4 mV). This result is consistent with the observation made in Ref. [63] that, when the data are fitted with a power law J ~ K−β, they are best described by β = 0.59 but are compatible with a broad range of values (95% confidence interval [0.47, 0.70]). Note that, even though both models give similar results for PSP amplitudes in the range of values of K present in cultures (approximately 50–1000), they give significantly different predictions for larger values of K. For instance, for K = 10 000, JE is expected to be approximately 0.2 mV in the current-based model and approximately 0.7 mV in the conductance-based model.
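The log-scale least-squares procedure can be sketched as follows (shown for a power-law fit on synthetic data, since the culture data of Ref. [63] are not reproduced here):

```python
import numpy as np

def fit_power_law(K, J):
    """Least-squares fit of log J = log J1 - beta * log K, i.e.,
    J ~ J1 * K^(-beta), performed in logarithmic scale."""
    A = np.column_stack([np.ones_like(K, dtype=float), -np.log(K)])
    coef, *_ = np.linalg.lstsq(A, np.log(J), rcond=None)
    logJ1, beta = coef
    return np.exp(logJ1), beta

# Synthetic data following the current-based scaling J = J0 / sqrt(K):
# the fit recovers J0 = 20 mV and beta = 0.5.
K = np.array([50.0, 100.0, 200.0, 500.0, 1000.0])
J1, beta = fit_power_law(K, 20.0 / np.sqrt(K))
```

Fitting the conductance-based form instead amounts to replacing the design matrix with the K dependence implied by Eq. (14).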

In Fig. 5(b), we plot the distance between the equilibrium membrane potential μ and threshold θ in units of input fluctuations, v̄/a, as a function of K, using the value of v̄ obtained above, and find that the expected value in vivo, where K ~ 103–104, is in the range 2–3. In Figs. 5(c) and 5(d), we plot how the total synaptic excitatory conductance and the effective membrane time constant change as a function of K. Both quantities change significantly faster using the conductance-based scaling [gE/gL ~ K/log(K); τ/τL ~ log(K)/K] than expected from the scaling of the current-based model (gE/gL ~ √K; τ/τL ~ 1/√K). For K in the range 103–104 and mean firing rates in the range 1–5 spk/s, the total synaptic conductance is found to be in a range from about 2 to 50 times the leak conductance, while the effective membrane time constant is found to be smaller than the membrane time constant by a factor of 2–50. We compare these values with available experimental data in Sec. IX.
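The different growth of the total conductance under the two scalings can be illustrated with rough order-of-magnitude estimates (a sketch; the prefactor of a and the value of τL are arbitrary illustrative choices, and only the K dependence follows the text):

```python
import numpy as np

tau_L = 20e-3  # leak membrane time constant (s), illustrative value

def g_over_gL(K, nu, scaling):
    """Synaptic-to-leak conductance ratio ~ a*K*nu*tau_L, with unitary
    conductance a ~ 1/log(K) (conductance-based scaling) or a ~ 1/sqrt(K)
    (current-based scaling); the prefactor of a is set to 1."""
    a = 1.0 / np.log(K) if scaling == "log" else 1.0 / np.sqrt(K)
    return a * K * nu * tau_L

def tau_over_tauL(K, nu, scaling):
    """Effective membrane time constant: tau/tau_L = 1/(1 + g_syn/g_L)."""
    return 1.0 / (1.0 + g_over_gL(K, nu, scaling))

for K in (1e3, 1e4):
    print(int(K), g_over_gL(K, 5.0, "log"), tau_over_tauL(K, 5.0, "log"))
```

With the conductance-based scaling, g_over_gL grows as K/log(K), i.e., markedly faster than the √K growth of the current-based scaling.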

VIII. EFFECTS OF FINITE SYNAPTIC TIME CONSTANTS

Results discussed in previous sections show that the effective membrane time constant τ decreases with presynaptic activity and with coupling strength. This observation raises the question of whether the assumption of negligible synaptic time constants we have made in our analysis is reasonable. Synaptic decay time constants of experimentally recorded postsynaptic currents range from a few milliseconds (for AMPA and GABAA receptor-mediated currents) to tens of milliseconds (for GABAB and NMDA receptor-mediated currents; see, e.g., Ref. [64]); i.e., they are comparable to the membrane time constant already at weak coupling, where τ ~ τL is typically in the range 10–30 ms [65]. Interestingly, experiments suggest that synaptic dynamics might be faster in physiological conditions (e.g., Ref. [66] finds a 0.5 ms decay time constant for AMPA receptor-mediated currents at 35 °C). Nonetheless, in the strong-coupling limit, the effective membrane time constant goes to zero, so our assumption of a negligible synaptic time constant clearly breaks down in that limit. In this section, we analyze models with finite coupling strength and show that synaptic dynamics modifies the drift-diffusion balance characteristic of conductance-based models, making it input dependent. At the end of the section, we discuss how this input-dependent drift-diffusion balance can be preserved in the strong-coupling limit.

With finite synaptic time constants, the temporal evolution of conductances in Eq. (2) is replaced by

τE,I (dgE,I^j/dt) = −gE,I^j + gL τE,I Σ_m a_m^j Σ_n δ(t − t_m^n), (23)

where τE and τI are the decay time constants of E and I synaptic conductances, respectively. The single-neuron membrane potential dynamics is described by Eqs. (1) and (23). Here, for simplicity, we take excitatory and inhibitory synaptic currents to have the same decay time constant: τE = τI = τS. Figure 6(a) shows how the synaptic time constant modifies the mean firing rate of a single integrate-and-fire neuron in response to K (γK) excitatory (inhibitory) inputs with synaptic strength a (ga) and frequency νX (ηνX). The figure shows that, although the mean firing rate is close to predictions obtained with instantaneous synapses for low νX, deviations emerge as the input increases, and firing is strongly suppressed for large νX. To understand these numerical results, we resort again to the diffusion approximation [67,68], together with the effective time constant approximation [11,69], to derive a simplified expression of the single-neuron membrane potential dynamics with a finite synaptic time constant (details in Appendix G):

τ (dV/dt) = −(V − μ) + σ √(τ/τS) z, (24)

where τ, μ, and σ are as in the case of negligible synaptic time constant [Eq. (5)] while z is an Ornstein-Uhlenbeck process with correlation time τS. Thus, compared to the instantaneous synapse case [Eq. (4)], input fluctuations with frequency larger than 1/τS are suppressed, and, for large τS/τ, the membrane potential dynamics is given by

V(t) = μ + σ √(τ/τS) z(t); (25)

i.e., the membrane potential is essentially slaved to a time-dependent effective reversal potential given by the rhs of Eq. (25) [14]. Note that Eq. (25) is valid only in the subthreshold regime. When the rhs of Eq. (25) exceeds the threshold, the neuron fires a burst of action potentials whose frequency, in the strong-coupling limit, is close to the inverse of the refractory period [70]. As νX increases, the equilibrium value μ remains constant while τ decreases, leading to a suppression of membrane fluctuations [Figs. 6(a) and 6(c)] and, in turn, to the suppression of response observed in Fig. 6(a). Therefore, the filtering of synaptic input induced by synaptic dynamics breaks the drift-diffusion balance which supports firing in conductance-based neurons. In Appendix H, we show that the suppression of the single-neuron firing rate described here cannot be prevented by short-term synaptic plasticity.
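The suppression mechanism can be reproduced with a direct Euler-Maruyama integration of Eq. (24) without a spiking mechanism (our sketch; parameter values are illustrative). For this linear system, the stationary variance of V is σ²τ/(τ + τS), so shrinking τ at fixed τS, as happens when νX grows, suppresses the membrane fluctuations:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_V(mu, sigma, tau, tau_s, T=2.0, dt=2e-5):
    """Euler-Maruyama integration of
    tau * dV/dt = -(V - mu) + sigma*sqrt(tau/tau_s)*z  [Eq. (24)],
    with z a unit-variance Ornstein-Uhlenbeck process of correlation
    time tau_s, and no spiking mechanism."""
    n = int(T / dt)
    noise = np.sqrt(2 * dt / tau_s) * rng.standard_normal(n)
    c = sigma * np.sqrt(tau / tau_s)
    V = np.empty(n)
    z, v = 0.0, mu
    for i in range(n):
        z += -z * dt / tau_s + noise[i]          # OU input, Var(z) -> 1
        v += (-(v - mu) + c * z) * dt / tau       # filtered by the membrane
        V[i] = v
    return V

# Shrinking tau at fixed tau_s reduces the standard deviation of V.
for tau in (20e-3, 2e-3):
    V = simulate_V(mu=-60.0, sigma=5.0, tau=tau, tau_s=5e-3)
    print(tau, V.std())
```

In the limit τ ≪ τS, the measured standard deviation approaches σ√(τ/τS), reproducing Eq. (25).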

FIG. 6.


Effects of synaptic time constant on single-neuron and network response. (a) Single-neuron response as a function of input rate νX, computed numerically from Eqs. (1) and (23). Different colors correspond to different values of τS (purple, 1 ms; blue, 2 ms; red, 5 ms). Firing rates (first row) match predictions obtained for instantaneous synapses (lines) for small τS/τ; significant deviations and response suppression emerge for larger τS/τ. The effective membrane time constant (τ, second row) decreases with the input rate and reaches the value τS/τ ~ 1 (dashed line) for lower levels of external drive when τS is larger. The equilibrium value of the membrane potential (μ, third row) increases with the input rate and is independent of τS (black dotted line represents the spiking threshold). The magnitude of fluctuations of the membrane potential (σV, fourth row) has a nonmonotonic relationship with the input rate and peaks at a value of νX for which τ is of the same order as τS. (b) Analogous to (a) but in the network case. Firing rates are no longer suppressed as τS/τ increases but approach the response scaling predicted by Eq. (21) (dashed line). As discussed in the text, high firing rates are obtained by increasing the value of μ toward threshold. (c) Examples of membrane potential dynamics for a single neuron in the absence of spiking mechanisms and for two different values of τS. Colors correspond to increasing νX = 5 (blue), 40 (orange), and 100 spk/s (green), respectively. High-frequency fluctuations are suppressed as νX increases. (d) Analogous to (c) but in the network case and for νX = 5, 40, and 100 spk/s. Increasing νX reduces recurrent inhibition and produces membrane potential trajectories which are increasingly closer to the firing threshold. Simulation parameters are K = 103, a = 0.01, g = 12, η = 1.4, and γ = 1/4 (single neuron); K = 103, a = 0.002, g = 22, and γ = 1/4 (network).
Simulations are performed with the simulator brian2 [71], with neurons receiving inputs from independent Poisson units firing at rates KνX (excitatory) and γKηνX (inhibitory) in the single-neuron case, or KνX in the network case. Network simulations use NE,I = 10K excitatory and inhibitory neurons.

We next examine the effect of a finite synaptic time constant on the network response. Numerically computed responses in networks of neurons with a finite synaptic time constant are shown in Fig. 6(b). The network response is close to the prediction obtained with instantaneous synapses for small τS/τ, and deviations emerge for τS/τ ~ 1. Hence, analogously to the single-neuron case, network properties discussed in the case of instantaneous synapses remain valid for low inputs. However, unlike in the single-neuron case, no suppression appears for larger τS/τ. This lack of suppression in the network response, analogous to the one we discuss in networks with instantaneous synapses and a ~ K−α, is a consequence of the fact that, to have stable dynamics when K is large, inhibition must dominate recurrent interactions [9]. In this regime, any change which would produce suppression of the single-neuron response (e.g., an increase of νX) lowers recurrent inhibition and increases the equilibrium value of the membrane potential μ [Figs. 6(b) and 6(d)]. The balance between these two effects determines the network firing rate and, when τS/τ ≫ 1, generates a response which (see the derivation in Appendix G), up to corrections of the order of 1/(KτS), is given by Eq. (21) [dashed line in Fig. 6(b)]. Similarly to what happens in networks with instantaneous synapses and a ~ K−α, this finite response emerges because recurrent interactions set μ very close to threshold, at a distance θ − μ ~ 1/√K that matches the size of the membrane potential fluctuations [Eq. (25), σ√(τ/τS) ~ 1/√K]. Hence, as the input to the network increases, recurrent interactions restore the drift-diffusion balance by adjusting the membrane potential mean μ close to threshold, so that fluctuations can sustain firing. Moreover, the single-neuron membrane potential correlation time approaches τS, and firing becomes bursty, with periods of regular spiking randomly interspersed in time.

We next discuss how the values of τS and the coupling strength affect the way the model response evolves with inputs; this discussion is relevant for both the single-neuron and the network model. In Appendix G, using existing analytical expansions [67,68,70,72] and numerical simulations, we show that neural responses obtained with finite τS are in good agreement with predictions obtained using a short synaptic time constant approximation for τS/τ ≲ 0.1 and are captured by predictions obtained with a large synaptic time constant approximation for τS/τ ≳ 1. The input value at which τS/τ ~ 1, i.e., νX ~ 1/(aKτS), determines the input range over which the model expresses one of the two behaviors. Therefore, models with larger (smaller) τS or coupling strength have a smaller (larger) region of inputs in which their response is captured by results obtained with instantaneous synapses (Figs. 6 and 7). Importantly, when biologically relevant parameters are considered (e.g., Fig. 6), both the small and the large τS/τ behaviors are expected to appear. In fact, biological synapses span a wide range of parameters, and most neuron types typically express both fast and slow synaptic receptors; in this condition, fast synapses (characterized by τS of a few milliseconds) are the ones that drive rapid membrane potential fluctuations and, hence, firing. Assuming aK ~ 10, we find that the transition from small to large τS/τ in the cortex is expected to appear for inputs νX ~ 1/(aKτS) ~ 10–100 spk/s, which is compatible with experimentally observed firing rates [23,46–54].
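The order-of-magnitude estimate in the last sentence can be made explicit (illustrative numbers only):

```python
def transition_input(aK, tau_s):
    """Input rate at which tau_S / tau ~ 1, i.e., nu_X ~ 1 / (a*K*tau_S);
    aK is the (dimensionless) product of synaptic strength and in-degree."""
    return 1.0 / (aK * tau_s)

# With aK ~ 10 and fast synapses, tau_S = 1-10 ms, the transition falls
# at nu_X ~ 10-100 spk/s, within the range of observed cortical rates.
print(transition_input(10.0, 1e-3))   # ~ 100 spk/s
print(transition_input(10.0, 10e-3))  # ~ 10 spk/s
```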

FIG. 7.


Single-neuron and network response with finite synaptic time constants, when both a and τS are rescaled with K. (a) Single-neuron response as a function of input rate νX, computed numerically from Eqs. (1) and (23). Different colors correspond to different values of K (103, purple; 104, light blue; 105, yellow; 106, red) with a and τS scaled as in Eqs. (14) and (26); for K = 103, a = 0.01 and τS = 1 ms (i.e., τS*=10 ms). The scaling relation described in the main text preserves the response properties observed in Fig. 6. (b) Analogous to (a) but in the network case; colors correspond to K = 500, 103, 2 × 103, and 4 × 103. For K = 103, a = 0.002 and τS = 1 ms.

We next investigate if and under which conditions the input-dependent behavior described in this section is preserved in the strong-coupling limit. For large inputs, the membrane potential dynamics of Eq. (25) becomes independent of a for large K, and, hence, the model behavior is independent of the scaling relation used. For low inputs and finite coupling, the model behaves as in the case of instantaneous synapses, and, therefore, response properties can be preserved in the strong-coupling limit only if a ~ 1/log(K). With this scaling, if τS is held fixed, the value of νX separating the low and large input regimes decreases with coupling strength as log(K)/(KτS). This is problematic because, as coupling increases, the model loses its low-input behavior and converges to a pathological state in which, for all inputs, membrane potential fluctuations become small, the single-neuron response is suppressed, and, in the network case, the membrane potential is squeezed close to threshold. Thus, to preserve the input-dependent behavior in the strong-coupling limit, the synaptic time constant should decrease with coupling strength as

τS = τS*/(aK) ~ log(K)/K, (26)

where τS* is a constant independent of a and K. In Fig. 7, we show that the scaling of Eq. (26) preserves the input-dependent response as coupling increases.
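The purpose of the rescaling in Eq. (26) can be verified in a few lines (a sketch; the prefactor of a stands in for Eq. (14) and is an arbitrary choice):

```python
import numpy as np

def a_of_K(K, c=0.1):
    """Unitary synaptic strength a = c / log(K); the prefactor c is an
    arbitrary illustrative choice standing in for Eq. (14)."""
    return c / np.log(K)

def tau_s_of_K(K, tau_s_star=10e-3, c=0.1):
    """Synaptic time constant rescaled as tau_S = tau_S* / (a*K) [Eq. (26)]."""
    return tau_s_star / (a_of_K(K, c) * K)

# The input separating the two regimes, nu_X ~ 1/(a*K*tau_S), is then
# pinned at 1/tau_S* for every K, so the input-dependent behavior
# survives in the strong-coupling limit.
for K in (1e3, 1e4, 1e5, 1e6):
    a, ts = a_of_K(K), tau_s_of_K(K)
    print(int(K), 1.0 / (a * K * ts))
```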

The activity-dependent drift-diffusion balance described here produces features that are not present in models with instantaneous synapses and that can be tested experimentally (see Table I for a summary). First, the increase of μ with inputs is absent in strongly coupled networks with instantaneous synapses and is consistent with the membrane potential depolarization observed in cortical circuits as the strength of sensory stimuli increases [23,49]. Second, with instantaneous synapses, the decay time constant of the autocorrelation of the membrane potential is of the order of τ and, hence, decreases without bound, as 1/νX, with inputs. A finite synaptic time constant modifies the input dependence of the autocorrelation time constant: It decreases with τ for low inputs and becomes constant (of the order of τS) for larger inputs. Third, with a finite synaptic time constant, firing becomes more bursty as input increases; this effect should be more prominent in networks with stronger coupling (e.g., prefrontal cortex). Fourth, synaptic dynamics makes the robustness of the network response to connection heterogeneity input dependent: For small inputs, τS/τ ≪ 1 and CVK* ~ 1/log(K); for large inputs, τS/τ ≫ 1 and CVK* ~ 1/√K (derivation in Appendix G). Therefore, the model predicts that networks of neurons with heterogeneous connections and a log-normal distribution of rates at low inputs (e.g., Refs. [52–54]) should show an increasing number of silent and regular-spiking cells as the input strength increases.

TABLE I.

Overview of networks of current-based and conductance-based neurons. The synaptic time constant strongly affects response properties in networks of conductance-based neurons. Properties similar to those observed in the cortex emerge in these networks if a ~ 1/log K and input rates are lower than or comparable to 1/τS* [defined in Eq. (26)]. The model predicts that response properties should gradually change as the input to the network increases and, for large inputs, should coincide with those indicated in the last line of the table. In the table, the different quantities related to the membrane potential are the mean distance from threshold (θ − μ), the size of temporal fluctuations (σV), and the membrane potential correlation time constant (τV).

Synaptic model | Ratio of synaptic and membrane time constants (τS/τ) | Synaptic strength | Membrane potential statistics | Activity structure | Heterogeneity of in-degree supported (CVK*)
Current-based (balanced state model) | Constant, independent of νX, a, and K | J ~ 1/√K | θ − μ ~ σV ~ 1; τV ~ τL | Irregular firing, CV of ISI ~ 1 | ~1/√K
Conductance-based | ≪ 1 for νX ≪ 1/τS*; always satisfied for instantaneous synapses (τS* = 0) | a ~ 1/log K | θ − μ ~ 1; σV ~ 1/√(log K); τV ~ log(K)/K | Irregular firing, CV of ISI ~ 1 | ~1/log K
Conductance-based | ≪ 1 for νX ≪ 1/τS*; always satisfied for instantaneous synapses (τS* = 0) | a ~ K−α, α > 0 | θ − μ ~ σV ~ K−α/2; τV ~ Kα−1 | Irregular firing, CV of ISI ~ 1 | ~K−α
Conductance-based | ≫ 1 for νX ≫ 1/τS* | Any scaling | θ − μ ~ σV ~ 1/√K; τV ~ τS | Irregular bursting | ~1/√K

IX. DISCUSSION

In this work, we analyzed networks of strongly coupled conductance-based neurons. The study of this regime is motivated by the experimental observation that in the cortex K is large, with single neurons typically receiving inputs from thousands of presynaptic cells. We showed that the classical balanced state idea [5,6], which was developed in the context of current-based models and features synaptic strengths of the order of 1/√K [7,8], results in current fluctuations of very small amplitude in conductance-based networks, which can generate firing only if the mean membrane potential is extremely close to threshold. This is inconsistent with intracellular recordings in the cortex, which show large membrane potential fluctuations (see, e.g., Refs. [21,46–51]). To overcome this problem, we introduced a new scaling relation which, in the case of instantaneous synaptic currents, maintains firing by preserving the balance of input drift and diffusion at the single-neuron level. With this scaling, the network response automatically shows multiple features that are observed in the cortex in vivo: irregular firing, a wide distribution of rates, and a membrane potential with non-negligible distance from threshold and fluctuation size. When finite synaptic time constants are included in the model, we showed that these properties are preserved for low inputs but are gradually modified as inputs increase: The membrane potential mean approaches threshold, while its fluctuations decrease in size and develop non-negligible temporal correlations. These properties, which are summarized in Table I, provide a list of predictions that could be tested experimentally by analyzing the membrane potential dynamics as a function of the input strength in cortical neurons.

When synaptic time constants are negligible with respect to the membrane time constant, our theory shows properties that are analogous to those of the classical balanced state model: linear transfer function, CV of the order of one, and distribution of membrane potentials with finite width. However, these properties emerge from a different underlying dynamics than in the current-based model. In current-based models, the mean input current is at a distance of the order of one from threshold in units of input fluctuations. In conductance-based models, this distance increases with coupling strength, and firing is generated by large fluctuations at strong coupling. The different operating mechanism manifests itself in two ways: the strength of synapses needed to sustain firing and the robustness to connection heterogeneity, as we discuss in the next paragraphs.

The scaling relation determines how strong synapses should be to sustain a given firing rate at a given value of K. In current-based neurons, irregular firing is produced as long as synaptic strengths are of the order of 1/√K. In conductance-based neurons, stronger synapses are needed, with a scaling which approaches 1/log(K) for large K. We showed that both scaling relations are in agreement with data obtained from culture preparations [63], which are limited to relatively small networks, and argued that the differences might be important in vivo, where K should be larger.

In current-based models, the mean input current must be set at an appropriate level to produce irregular firing; this constraint is realized by recurrent dynamics in networks with random connectivity and strong enough inhibition [7–9]. However, in networks with structural heterogeneity, with connection heterogeneity larger than 1/√K, the variability in mean input currents produces significant departures from the asynchronous irregular state, with large fractions of neurons that become silent or fire regularly [56]. This problem is relevant in cortical networks [56], where significant heterogeneity of in-degrees has been reported [50,57–62], and different mechanisms have been proposed to solve it [56]. Here, we showed that networks of conductance-based neurons also generate irregular activity without any need for fine-tuning and, furthermore, can support irregular activity with substantial structural heterogeneity, up to the order of 1/log(K). Therefore, these networks are more robust to connection heterogeneity than the current-based model and do not need additional mechanisms to sustain the asynchronous irregular state.

When the synaptic time constant is much larger than the effective membrane time constant, we showed that, regardless of synaptic strength, the size of membrane potential fluctuations decreases and firing in the network is preserved by a reduction of the distance from threshold of the mean membrane potential. Moreover, the robustness to heterogeneity in connections decreases substantially (the maximum supported heterogeneity becomes of the order of 1/√K), and the membrane potential dynamics becomes correlated over a timescale fixed by the synaptic time constant. The network response at low rates is well approximated by that of networks with instantaneous synapses, and the regime of large synaptic time constant is reached gradually as the input to the network increases (Fig. 6). This observation provides a list of experimentally testable predictions on how properties of cortical networks should evolve with input strength (summary in Table I). While some of these predictions require new experiments to be validated, we point out that one of them, namely that the equilibrium value of the membrane potential should increase with inputs, is consistent with the increased membrane potential observed in cortical circuits with the strength of sensory stimuli [23,49].

In conductance-based models, we showed that response properties observed at finite coupling survive in the strong-coupling K → ∞ limit only if unitary conductances obey a specific scaling law [Eq. (14)], and synaptic time constants also obey a scaling law [Eq. (26)]. While there is evidence in cortical cultures that average synaptic strengths do decay with increasing connectivity [63], no such evidence exists to our knowledge to support decreasing synaptic time constants with increasing connectivity. However, it is well known that synaptic decay time constants depend on subunit composition of the receptors (see, e.g., Ref. [73] for GABA receptors, Ref. [74] for NMDA receptors, and Ref. [75] for AMPA receptors), and subunit composition can depend on synaptic activity (e.g., Ref. [76]). It is thus tempting to speculate that both scaling laws could be implemented in neurobiological circuits. If such plasticity exists, our theory predicts that it should produce smaller synaptic time constants in networks with larger K.

In our analytical calculations, we have neglected correlations between neurons and assumed that the network operates in the asynchronous regime. This assumption is consistent with observations that correlations between cells in the cortex in vivo can in some cases be small, i.e., of the order of 0.01 [77,78]. It is also consistent with the results of our numerical simulations, which show good agreement with the calculations in networks with connection probabilities of 0.1, on the same order of magnitude as observed connection probabilities in the cortex. However, correlations between neurons can vary significantly across cortical states, layers, and firing rates, with many studies finding average correlation coefficients of the order of 0.1 or more (e.g., Ref. [79]). Intriguingly, weak but nonzero correlations between inputs, of the order of 0.1, have been argued to be necessary to quantitatively capture the amplitude of membrane potential fluctuations observed in the cat cortex [21]. Understanding how correlations affect the results obtained in our work is an important problem which should be addressed in the future.

Experimental evidence suggests that the response to multiple inputs in the cortex is nonlinear (for an overview, see Ref. [80]). Such nonlinearities, which are thought to be fundamental to perform complex computations, cannot be captured by the classical balanced state model, as it features a linear transfer function [7,8]. Several studies have shown how relaxing assumptions underlying the classical balanced state model can lead to nonlinear responses. In particular, moderate coupling combined with a power-law single-neuron input-output function [80–82], short-term plasticity [83], and differential inputs to subsets of excitatory neurons [84] can lead to nonlinearities. We have recently shown [85] that nonlinear responses appear in networks of current-based spiking neurons when coupling is moderate, and only at response onset or close to single-neuron saturation. Here, we have shown that response onset and saturation nonlinearities appear also in networks of conductance-based neurons when coupling is moderate. In addition, we have found that synaptic time constants provide an additional source of nonlinearity, with nonlinear responses emerging as the network transitions between response onset and saturation. A full classification of the nonlinearities generated in these networks is outside the scope of this work but could be performed by generalizing the approach developed in Ref. [85].

The strength of coupling in a network, both in the current-based model [81,85] and in the conductance-based model (e.g., Fig. 3), determines the structure of its response and, hence, the computations it can implement. Recent theoretical work, analyzing experimental data in the framework of current-based models, has suggested that the cortex operates in a regime of moderate coupling [82,86], where response nonlinearities are prominent. In conductance-based models, the effective membrane time constant can be informative on the strength of coupling in a network, as it decreases with coupling strength. Results from in vivo recordings in the cat parietal cortex [21] showed evidence that single-neuron response is sped up by network interactions. In particular, measurements are compatible with inhibitory conductance approximately 3 times larger than leak conductance and support the idea that the cortex operates in a "high-conductance state" [22]. This limited increase in conductance supports the idea of moderate coupling in cortical networks, in agreement with what was found in previous work [82,86]. More recent studies have, however, obtained results that seem at odds with the high-conductance state idea. Recent whole cell recordings have reported that an intrinsic voltage-gated conductance, whose strength decreases with membrane potential, contributes to the modulation of neuronal conductance of cells in the primary visual cortex of awake macaques and anesthetized mice [23]. For spontaneous activity, this intrinsic conductance is the dominant contribution to the cell conductance and drives its (unexpected) decrease with increased depolarization. For activity driven by sensory stimuli, on the other hand, modulations coming from synaptic interactions overcome the effect of the intrinsic conductance, and neuronal conductance increases with increased depolarization. The decrease in conductance observed during spontaneous activity in Ref. [23] seems incompatible with previous experimental results [22], and it is still unclear which differences between experimental preparations underlie this disagreement. While a resolution of this discrepancy will require additional experimental work, we point out that our work is relevant for both scenarios. In fact, our analysis shows that voltage-dependent currents, such as those produced by voltage-gated channels [23] or during spike generation [44], affect quantitatively, but not qualitatively, the single-neuron response. Moreover, our theory explains the mechanisms shaping response properties at finite coupling and identifies a scaling relation that preserves these properties in the strong-coupling limit. Therefore, the results described in this contribution appear to be a general property of networks of spiking neurons with conductance-based synapses, and they should be relevant for a wide range of single-neuron models and coupling strengths.

Understanding the dynamical regime of operation of the cortex is an important open question in neuroscience, as it constrains which computations can be performed by a network [81]. Most of the theories of neural networks have been derived using rate models or current-based spiking neurons. Our work provides theoretical tools to investigate the dynamics of strongly coupled conductance-based neurons, and it suggests predictions that could be tested experimentally.

ACKNOWLEDGMENTS

We thank Alex Reyes for providing his data and Vincent Hakim and Magnus Richardson for comments on the manuscript. This work was supported by the NIMH Intramural Research Program and by NIH BRAIN U01 NS108683 (to M. H. H. and N. B.). A. S. was partially supported by the Gatsby Charitable Foundation GAT3708 and by NSF Grant DBI-1707398. This work used the computational resources of the NIH HPC Biowulf cluster and the Duke Compute Cluster.

APPENDIX A: CALCULATIONS IN THE MULTIPLICATIVE NOISE CASE

In the main text, we analyze the distribution of membrane potential, firing rate, and CV using the effective time constant approximation, which neglects the dependence of the noise amplitude on the membrane potential. This approximation is motivated by the fact that corrections to this approximation are of the same order of shot noise corrections to the diffusion approximation used to describe synaptic inputs [87]. In this section, we derive results without resorting to the effective time constant approximation (i.e., keeping the voltage dependence of the noise term) and show that the results derived in the main text remain valid, even though it complicates the calculations. The inclusion of shot noise corrections is outside the scope of this contribution.

1. Equations for arbitrary drift and diffusion terms

In this section, we compute the probability distribution of the membrane potential, the firing rate, and the CV of ISI of a neuron whose membrane potential follows the equation

dV/dt = A(V) + B(V)ζ. (A1)

Equation (4) of the main text is a special form of Eq. (A1) with

A(V) = (μ − V)/τ,    B(V) = σ(V)/√τ. (A2)

The Fokker-Planck equation associated with Eq. (A1), in the Stratonovich regularization scheme, is given by

∂P/∂t = −∂J/∂V,

where P is the probability of finding a neuron with membrane potential V and J is the corresponding probability current given by

J = [A + (1/2)B ∂B/∂V] P − (1/2) ∂(B²P)/∂V. (A3)

We are interested in the stationary behavior of the system, in which P does not depend on time and the current J is piecewise constant. In particular, for V between the activation threshold θ and the reset potential V_r, J is equal to the neuron firing rate ν, and the normalization condition implies

∫_{−∞}^{θ} P(V) dV + ντ_rp = 1,

where τrp is the refractory period.

To derive the probability distribution of the neuron potential, we introduce in Eq. (A3) the integrating factor

W(V) = exp[−2 ∫^V du (A(u) + (1/2)B(u) ∂B(u)/∂u)/B(u)²]

and obtain

−2νW(V)θ(V − V_r) = ∂/∂V [W(V)B(V)²P(V)].

Using the boundary condition P(θ) = 0, we find

P(V) = [2ν/(W(V)B(V)²)] ∫_V^θ du W(u) θ(u − V_r) (A4)

and

1/ν = τ_rp + 2 ∫_{−∞}^{θ} dx [1/(W(x)B(x)²)] ∫_x^θ du W(u) θ(u − V_r).

Integrating by parts, we obtain

1/ν = τ_rp + 2 ∫_{V_r}^{θ} dv W(v) ∫_{−∞}^{v} dx 1/(W(x)B(x)²). (A5)

This solution is obtained in general form in Ref. [41] and for the specific form of Eq. (A2) in Ref. [11].
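The nested quadrature in Eq. (A5) is straightforward to evaluate numerically. The sketch below (trapezoidal integration on a truncated voltage grid; all parameter values in the usage example are illustrative, not the paper's) computes ν for arbitrary drift A(V) and noise B(V):

```python
import math

# Numerical sketch of Eq. (A5): firing rate of a neuron obeying
# dV/dt = A(V) + B(V)*zeta, with threshold theta, reset Vr, and refractory
# period tau_rp. The lower bound -infinity is truncated at V_low.
def firing_rate(A, B, dB, theta, Vr, tau_rp, V_low, n=400):
    """1/nu = tau_rp + 2 int_{Vr}^{theta} dv W(v) int_{V_low}^{v} dx/(W(x)B(x)^2),
    with the integrating factor W(V) = exp[-2 int^V (A + B dB/2)/B^2 du]."""
    V = [V_low + (theta - V_low) * i / n for i in range(n + 1)]
    h = (theta - V_low) / n
    # cumulative exponent of the integrating factor (trapezoid rule)
    f = [2 * (A(v) + 0.5 * B(v) * dB(v)) / B(v) ** 2 for v in V]
    expo = [0.0]
    for i in range(n):
        expo.append(expo[-1] + 0.5 * (f[i] + f[i + 1]) * h)
    W = [math.exp(-e) for e in expo]
    # inner integral G(v) = int_{V_low}^{v} dx / (W(x) B(x)^2)
    g = [1.0 / (W[i] * B(V[i]) ** 2) for i in range(n + 1)]
    G = [0.0]
    for i in range(n):
        G.append(G[-1] + 0.5 * (g[i] + g[i + 1]) * h)
    # outer integral over v in [Vr, theta]
    T = sum(0.5 * (W[i] * G[i] + W[i + 1] * G[i + 1]) * h
            for i in range(n) if V[i] >= Vr)
    return 1.0 / (tau_rp + 2 * T)
```

For constant B (a current-based LIF), the rate computed this way grows monotonically with the mean input μ, as expected.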

We now compute the coefficient of variation of the interspike interval. The moments Tk of the interspike intervals of the stochastic process defined by Eq. (A1) are given by (see Ref. [43])

[B(x)²/2] d²T_k(x)/dx² + [A(x) + (1/2)B(x) ∂B(x)/∂x] dT_k(x)/dx = −k T_{k−1}(x)

with boundary conditions

T_k(θ) = 0,    dT_k(b)/dx = 0;

i.e., θ is an absorbing boundary and b is a reflective boundary (we then consider the limit b → −∞). The general solution of an equation of the form

d²f(x)/dx² + P(x) df(x)/dx = Q(x) (A6)

is

f(x) = ∫_θ^x dt ∫_{−∞}^t dz Q(z) exp(−∫_z^t dw P(w)).

For T1(x), we have

P(x) = [2A(x) + B(x) ∂B(x)/∂x]/B(x)²,    Q(x) = −2/B(x)².

For T2(x), we look for a solution of the form

T_2(x) = T_1(x)² + R(x)

and find that R obeys an equation of the form of Eq. (A6) with

P(x) = [2A(x) + B(x) ∂B(x)/∂x]/B(x)²,    Q(x) = −2[dT_1(x)/dx]².

Combining the previous results, the CV of ISI is obtained as

CV² = R(x)/T_1(x)²; (A7)

the explicit expression of the CV is given in the following section.

Equations (17), (10), and (15) of the main text are obtained from Eqs. (A4), (A5), and (A7), respectively, using Eq. (A2).

2. Equations for conductance-based LIF neurons

Starting from Eqs. (4) and (5) of the main text, we write the different terms as

τ⁻¹ = τ_L⁻¹ + aKω⁻¹,    μ = τ{E_L/τ_L + aK[r_E E_E + r_I gγ E_I]},    σ² = (a²Kτ/χ)[(V − E_S)² + E_D²], (A8)

where, to shorten the expressions, we introduce two auxiliary variables with time dimension:

ω⁻¹ = r_E + r_I gγ,    χ⁻¹ = r_E + r_I g²γ, (A9)

as well as two variables with voltage dimensions:

E_S = χ(r_E E_E + r_I g²γ E_I),    E_D = χ √(r_E r_I g²γ) (E_E − E_I). (A10)

The terms −(V − μ)/τ and σ(V)ζ/√τ in Eq. (4) represent the input drift and noise to the membrane dynamics, respectively. The voltage dependence of these terms is sketched in Fig. 8.

In the large K limit, the different terms in Eq. (4) scale as

τ ~ ω/(aK),    μ ~ ω(r_E E_E + r_I gγ E_I),    σ√τ ~ [ω/√(χK)] √[(V − E_S)² + E_D²]; (A11)

while the values of ω, μ, E_S, and E_D are independent of K. It follows that the noise term σ√τ and the time constant τ in Eq. (4) become small in the strong-coupling limit. This result is analogous to what we obtain in the main text with the effective time constant approximation, since this approximation does not change how these terms scale with a and K.

We now insert the drift and diffusion terms of the conductance-based LIF neuron in Eqs. (A4), (A5), and (A7) and obtain

P(V) = {2νχE_D/(a²K[(V − E_S)² + E_D²])} e^{−F(u(V))/a} ∫_{u(V)}^{v_max} dx θ[x − u(V_r)] e^{F(x)/a}, (A12)
1/ν = τ_rp + (2χ/(a²K)) ∫_{v_min}^{v_max} dv ∫_{−∞}^{v} dx [1/(x² + 1)] exp{[F(v) − F(x)]/a}, (A13)

and

CV² = [8χ²ν²/(a⁴K²)] ∫_{v_min}^{v_max} dv ∫_{−∞}^{v} dz exp{[F(v) − F(z)]/a} × {∫_{−∞}^{z} dw [1/(w² + 1)] exp{[F(z) − F(w)]/a}}², (A14)

where

F(x) = [2χ/(aKτ)]{(1/2)[1 − a²Kτ/(2χ)] log(x² + 1) − α arctan(x)},    u(V) = (V − E_S)/E_D,    v_min = u(V_r),    v_max = u(θ),    α = u(μ). (A15)

Equations (A12) and (A13) are analogous to those derived in Ref. [11]. To simplify the following analysis, we neglect the contribution of the term a²Kτ/2χ, which derives from the regularization scheme. This assumption is justified by the fact that, for large K, τ ~ 1/aK and the factor a²Kτ/2χ is of the order of a ≪ 1.

FIG. 8.

FIG. 8.

Drift and diffusion terms of Eq. (4) as a function of the voltage. (a) Input drift as a function of membrane potential V produced with both inhibitory and excitatory inputs (black line), excitatory inputs only (red dotted line), or inhibitory inputs only (blue dotted line). The drift term decreases monotonically with V, and it is zero at V = μ, which is a stable fixed point of the deterministic dynamics. (b) The noise variance is quadratic in V. Its minimum, attained at V = E_S, is equal to (a²Kτ/χ)E_D². Note that the minimum amplitudes of drift and variance are obtained at different values of V.

APPENDIX B: CALCULATIONS IN THE STRONG-COUPLING REGIME—SINGLE NEURONS

In the main text, we derive a simplified expression for the single-neuron response neglecting the dependency of noise on membrane potential. In this section, we generalize this result to the case in which the full noise expression is considered. We compute simplified expressions of the single-neuron transfer function and CV, in both the subthreshold regime μ < θ and the suprathreshold regime μ > θ. These expressions are validated numerically in Fig. 9 and used in the last part of this section to define a scaling relation between a and K which preserves single-neuron firing in the strong-coupling limit.

1. Single-neuron transfer function at strong coupling

The starting point of our analysis is the observation that the integrand in Eq. (A13) depends exponentially on 1/a ≫ 1. This suggests performing the integration through a perturbative expansion of the exponent. We show below that, since the exponent has a stationary point at x = v = α (see Fig. 10), the integration gives two qualitatively different results depending on whether α is larger or smaller than the upper bound of the integral v_max. Moreover, since the condition α ≷ v_max corresponds to μ ≷ θ, the two behaviors correspond to the supra- and subthreshold regimes, respectively.

FIG. 9.

FIG. 9.

Response of a single conductance-based neuron to noisy inputs. Estimates of firing rate [(a),(b),(e),(f)], μ [(c),(g)], and CV [(d),(h)] obtained with numerical integration of Eqs. (A13), (13), and (A14) for different values of a and K (colored dots). For the two regimes μ < θ (first row) and μ > θ (second row), the transfer function saturates as K increases. Note that the same change in a has a more drastic effect if μ < θ; this is due to the exponential dependence that appears in Eq. (B6). The approximated expressions (continuous lines) capture the properties of the transfer function [(a) Eq. (B6) and (e) Eq. (B4)] and CV [(d) Eq. (B17) and (h) Eq. (B9)]. For small inputs (f), Eq. (B4) fails to describe the transfer function for some values of K, because the corresponding μ is below threshold. Simulation parameters are g = 12, γ = 1/4, and η = 1.5 (top) or 0.6 (bottom).

FIG. 10.

FIG. 10.

Graphical representation of the exponent in Eq. (A13). The function F(v) − F(x) is stationary at x = v = α; this point is a maximum in x and a minimum in v. Parameters are as in Fig. 8. In this figure, α = 1.2 (black diamond).

a. Suprathreshold regime vmax < α (θ < μ)

The exponent in Eq. (A13) is negative for every value of x, except for x = v, in which it is zero. The integral in x can be written as

I = ∫_{−∞}^{v} dx g(x) e^{f_v(x)/a} = ∫_{−∞}^{v} dx g(x) e^{(1/a)[f_v′(v)(x − v) + (1/2)f_v″(v)(x − v)² + …]}, (B1) where f_v(x) = F(v) − F(x) and primes denote derivatives with respect to x.

With a change of variable z = (xv)/a, we obtain

I = a ∫_{−∞}^{0} dz g(v + az) e^{f_v′(v)z + a f_v″(v)z²/2 + …}. (B2)

Neglecting all the terms of the order of a, we get

I = a g(v)/f_v′(v). (B3)

Performing the integration in v, we obtain

1/ν = τ_rp + τ log[(μ − V_r)/(μ − θ)]. (B4)

Equation (B4) is the transfer function of a deterministic conductance-based neuron with the addition of the refractory period. This is not surprising, since the noise term becomes negligible compared to mean inputs in the small a limit. In Fig. 9(e), we show that Eq. (B4) gives a good description of the transfer function predicted by the mean field theory in the suprathreshold regime.
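Equation (B4) is cheap to evaluate directly; the sketch below (illustrative parameter values, not the paper's) also makes its two limits explicit, ν → 0 as μ → θ⁺ and ν → 1/τ_rp as μ → ∞:

```python
import math

# Sketch of Eq. (B4): deterministic (noise-free) LIF transfer function with a
# refractory period. All parameter values are illustrative.
def lif_rate(mu, theta=1.0, Vr=0.0, tau=0.02, tau_rp=0.002):
    if mu <= theta:
        return 0.0  # subthreshold mean input: no deterministic firing
    return 1.0 / (tau_rp + tau * math.log((mu - Vr) / (mu - theta)))
```

As μ grows, the logarithm vanishes and the rate saturates at 1/τ_rp, the absolute refractory bound.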

b. Subthreshold regime vmax > α (θ > μ)

First, we consider α < vmin (μ < Vr). For every value of v, the integral in x in Eq. (A13) has a maximum in the integration interval; hence, it can be performed through the saddle-point method and gives

1/ν − τ_rp = √[4πχτ/(a²K(α² + 1))] ∫_{v_min}^{v_max} dv exp{[F(v) − F(α)]/a}. (B5)

In the last equation, the exponent in the integrand has a minimum at v = α and is maximal at v = v_max; we expand the exponent around v = v_max and, keeping terms up to first order, obtain

1/ν − τ_rp = τ √[πa²Kτ/(χ(α² + 1))] [(v_max² + 1)/|v_max − α|] exp{[F(v_max) − F(α)]/a}. (B6)

In the regime vmin < α < vmax, the integral in v of Eq. (A13) can be divided into three parts:

∫_{v_min}^{v_max} dv = ∫_{v_min}^{α−ϵ} dv + ∫_{α−ϵ}^{α+ϵ} dv + ∫_{α+ϵ}^{v_max} dv; (B7)

the third integral is analogous to the case α < vmin, and, hence, it has an exponential dependency on the parameters and dominates the other terms. In Fig. 9(a), we show that Eq. (B6) gives a good description of the transfer function predicted by the mean field theory for μ < θ.

2. Single-neuron CV of ISI at strong coupling

In this section, we provide details of the derivation of the approximated expressions of the response CV. Starting from the mean field result of Eq. (A14), we compute integrals using the approach discussed above.

a. Suprathreshold regime vmax < α (θ < μ)

The inner integral in Eq. (A14) yields in the small a limit

∫_{−∞}^{z} dw [1/(w² + 1)] exp{[F(z) − F(w)]/a} = −[a/(z² + 1)] [1/(dF(z)/dz)], (B8)

from which we obtain

CV² = [aν²(aKτ)³/(a²K²χ)] {log[(v_min − α)/(v_max − α)] + [3α² + 4αv_max + 1]/[2(α − v_max)²] − [3α² + 4αv_min + 1]/[2(α − v_min)²]}; (B9)

hence, the rescaling needed to preserve the deterministic component, a ~ 1/K, produces CV² ~ a ≪ 1. We validate this result numerically in Figs. 9(h) and 11(d).

b. Subthreshold regime vmax > α (θ > μ)

The integral defining the CV [Eq. (A14)] can be expressed as

∫_{−∞}^{v} dz exp{[F(v) − F(z)]/a} g(z) = ∫_{−∞}^{v*} dz exp{[F(v) − F(z)]/a} g(z) + ∫_{v*}^{v} dz exp{[F(v) − F(z)]/a} g(z) (B10)

with

g(z) = {∫_{−∞}^{z} dw [1/(w² + 1)] exp{[F(z) − F(w)]/a}}²,    v* = α − ϵ. (B11)

The first integral gives

∫_{−∞}^{v*} dz exp{[F(v) − F(z)]/a} g(z) = −{a³/[(v*² + 1)²(dF(v*)/dz)³]} exp{[F(v) − F(v*)]/a}. (B12)

In the second integral,

g(z) = [2πa/((α² + 1)² d²F(α)/dz²)] exp{[2F(z) − 2F(α)]/a}, (B13)

from which we get

[2πa/((α² + 1)² d²F(α)/dz²)] ∫_{v*}^{v} dz exp{[F(v) + F(z) − 2F(α)]/a}. (B14)

Integrating in z, we obtain

∫_{v*}^{v} dz exp[F(z)/a] = [a/(dF(v)/dz)] exp[F(v)/a]. (B15)

Integrating in v, we obtain

CV² = [8πχ²ν²/((α² + 1)² (d²F(α)/dz²) (dF(v_max)/dz)²)] {exp([F(v_max) − F(α)]/a)/(aK)}². (B16)
FIG. 11.

FIG. 11.

Scaling relationships preserving firing in the large K limit. Colored dots represent mean field transfer function [(a),(b)], CV [(c),(d)], and membrane potential [(e),(f)] obtained from Eqs. (A13), (A14), and (A8), respectively. Different colors correspond to different values of a and K which are scaled according to Eqs. (B22) (first row) and (B21) (second row). Mean field predictions are well described by the relevant approximated expressions (continuous lines). For μ < θ, the transfer function and CV are described by Eqs. (B22) (a) and (B17) (c); both quantities are invariant as K increases. For μ > θ, the transfer function and CV are described by Eqs. (B21) (b) and (B9) (d); note that, as explained in the text, the firing is preserved while the CV becomes smaller as K increases (different line colors correspond to different values of K). Parameters: g = 12 and γ = 1/4.

Using Eq. (B6), we obtain

CV = 1 − ντ_rp, (B17)

which corresponds to the CV of the ISI of a Poisson process with dead time, with rate ν and refractory period τ_rp. We validate this result numerically in Figs. 9(d) and 11(c).
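Equation (B17) can be checked directly by sampling a Poisson process with dead time; the parameter values below are illustrative:

```python
import math, random

# Monte Carlo check of Eq. (B17): for ISIs of the form tau_rp + Exp(lam)
# (a Poisson process with dead time), the CV of the ISI equals 1 - nu*tau_rp,
# where nu = 1/<ISI> is the output rate. Values are illustrative.
random.seed(1)
tau_rp, lam = 0.002, 100.0
isis = [tau_rp + random.expovariate(lam) for _ in range(200_000)]
mean = sum(isis) / len(isis)
var = sum((t - mean) ** 2 for t in isis) / len(isis)
nu, cv = 1.0 / mean, math.sqrt(var) / mean
```

With these values, ν ≈ 83 Hz and CV ≈ 1 − ντ_rp ≈ 0.83.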

3. Scaling relations preserving firing in the strong-coupling limit

In this section, we use the simplified expressions derived above to define scaling relations of a with K which preserve the neural response in the strong-coupling limit. Importantly, the scaling defined here depends on the operating regime of the neuron, i.e., on the asymptotic value of μ.

In the limit of large K, terms in Eq. (A8) can be written as

τ⁻¹ = aKν_X(1 + ηgγ),    ω⁻¹ = ν_X(1 + ηgγ),    χ⁻¹ = ν_X(1 + ηg²γ), (B18)

while μ, E_D, E_S, v_max, α, and the function F(x) are independent of K, a, and ν_X. We showed in the previous section that the single-neuron transfer function is given by

1/ν = τ_rp + Q/ν_X (B19)

with

Q = (1/aK) exp{[F(v_max) − F(α)]/a} √{π(1 + ηg²γ)/[(1 + ηgγ)³(α² + 1)]} (v_max² + 1)/|v_max − α|    for μ < θ;
Q = [1/(aK(1 + ηgγ))] log[(μ − V_r)/(μ − θ)]    for μ > θ. (B20)

For μ > θ, the parameters a and K in Eq. (B20) appear only in the combination aK. It follows that a rescaling

a ~ 1/K (B21)

leaves invariant the neural response for large K. For μ < θ, Eq. (B20), and hence the transfer function, is invariant under the rescaling

K ~ (1/a) exp{[F(v_max) − F(α)]/a}. (B22)

In Figs. 11(a) and 11(b), we show neural responses computed for different values of K with a rescaled according to Eqs. (B21) or (B22); as predicted, the transfer function remains invariant as K increases. Note that the response remains nonlinear in the limit of large K; we show in the next section that, in the network case, because of the self-consistency relation, nonlinearities are suppressed by the scaling relation.

Finally, from Figs. 11(c) and 11(d), we see that the rescaling preserves the CV for μ < θ and suppresses it for μ > θ. In the case μ < θ, the CV is given by Eq. (B17). This expression shows that the scaling relation of Eq. (B22) also leaves invariant the CV. Interestingly, in some parameter regimes, the CV in Figs. 9(d) and 11(c) shows a nonmonotonic behavior with νX which is not captured by Eq. (B17). In particular, a CV above one is observed when μ is below the reset V_r. As pointed out in Ref. [88], this supra-Poissonian firing is explained by the fact that, when μ < V_r, spiking probability is higher just after firing than it is later. In agreement with this interpretation, we find that the nonmonotonic behavior of the CV disappears in the large K limit, where the region of inputs for which μ < V_r becomes negligible. Thus, our analysis shows that the irregularity of firing is preserved in the strong-coupling limit of a single neuron with μ < θ.

In the case μ > θ, the CV is given by Eq. (B9). This expression shows that the scaling relation of Eq. (B21) produces a CV which decreases as 1/K in the strong-coupling limit. It follows that, in a single neuron with μ > θ, the strong-coupling limit produces finite firing that is regular.

Starting from the next section, we focus our attention on networks of conductance-based neurons. Since we are interested in describing the irregular firing observed in the cortex, we focus our study on networks with μ < θ.

APPENDIX C: FIRING RATE AND SCALING RELATION IN LEAKY INTEGRATE-AND-FIRE NEURON MODELS WITH VOLTAGE-DEPENDENT CURRENTS

In the main text, we show that, when coupling is strong and a ≪ 1, the response of a single LIF neuron with conductance-based synapses is well approximated by Eq. (12), i.e., the Kramers escape rate. Using this expression, we show that the scaling relation of Eq. (14) allows finite firing in a single neuron and in networks of neurons. Here, we show that the first-order approximation of this scaling, i.e., a ~ 1/log(K), appears also in neuron models with additional biophysical details, such as spike-generating currents [44] and voltage-gated subthreshold currents [23], as long as coupling is strong, a is small, and synapses are conductance based.
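The leading-order statement a ~ 1/log(K) can be made concrete by inverting K ~ (1/a) exp(Δ/a) numerically. In the sketch below, Δ stands in for the K-independent barrier term [F(v_max) − F(α) in the LIF case]; its value is illustrative:

```python
import math

# Invert the scaling K ~ (1/a) exp(Delta/a) for a by bisection, and compare
# with the leading-order solution a ~ Delta/log(K). Delta is illustrative.
def a_of_K(K, delta):
    # g(a) = delta/a - log(a) - log(K) is strictly decreasing in a
    lo, hi = 1e-9, 1e3
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if delta / mid - math.log(mid) - math.log(K) > 0:
            lo = mid  # a still too small: exponent too large
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The relative error of the approximation a ≈ Δ/log K shrinks only slowly as K grows, consistent with the logarithmic scaling being just the first-order term.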

We consider integrate-and-fire models featuring voltage-dependent currents, indicated here as ψ(V), and conductance-based synapses. In these models, the membrane potential dynamics can be written as

C_j dV_j/dt = −Σ_{A=L,E,I} g_A^j (V_j − E_A) + ψ(V_j). (C1)

In the LIF, ψ(V) = 0 and Eq. (C1) reduces to Eq. (1) analyzed in the main text. In the exponential integrate-and-fire model (EIF) [44], the function ψ(V) = ΔTgL exp[(Vθ)/ΔT] describes the spike generation current; in this model, once the membrane potential crosses the threshold θ, it diverges to infinity in finite time. The current generated by inward-rectifier voltage-gated channels, such as the one recently reported in Ref. [23], is captured by an expression of the form ψ(V) = −gin(V)(VEin), where gin(V) and Ein represent the conductance and the reversal potential of the channels, respectively; in the case of Ref. [23], 1/gin(V) is shown to be well approximated by a linear increasing function of V.

The dynamics Eq. (C1), following an approach analogous to the one we use for the derivation of Eq. (4), can be approximated by

τ dV/dt = −∂H(V)/∂V + σ√τ ζ,    H(V) = (1/2)(V − μ)² − [τ/(τ_L g_L)] ∫^V ψ(x) dx, (C2)

where ζ is a white noise term, with zero mean and unit variance density, while τ, μ, and σ(V) are as in Eq. (5). In what follows, as in the main text, we use the effective time constant approximation [40]—i.e., we neglect the multiplicative component of the noise term in Eq. (C2)—and make the substitution σ(V) → σ(μ*), where μ* is the mean value of the membrane potential dynamics.

The firing rate of a neuron following Eq. (C2) can be computed exactly using Eq. (A5) and is given by

ν = {τ_rp + (2τ/σ²) ∫_{−∞}^{θ} dx ∫_{max(V_r,x)}^{θ} exp(2[H(z) − H(x)]/σ²) dz}⁻¹. (C3)

In what follows, we provide a more intuitive derivation of the single-neuron response, which is valid in the biologically relevant case of a ≪ 1. The function H in Eq. (C2) can be thought of as an energy function which drives the dynamics of the membrane potential. In the case of LIF neurons, H is a quadratic function with a minimum at V = μ. In neuron models with a spike generation current, such as the EIF model [44], the shape of the function H far from threshold is qualitatively similar to that of the LIF model (with a minimum at V = μ*) but becomes markedly different close to threshold, where the potential energy has a maximum at V = θ* and goes to −∞ for V > θ*. Here, we focus on the case in which additional subthreshold voltage-gated currents do not lead to additional minima of the energy function, a scenario that can happen with potassium inward-rectifier currents (e.g., see Ref. [89], Chap. 4.4.3). In models in which H has a single minimum in the subthreshold range at μ* and a maximum at θ*, the firing rate of a neuron when input noise is small (i.e., when a ≪ 1) can again be computed using the Kramers escape rate, which gives the average time it takes for the membrane potential to go from μ* to θ* (see Ref. [43], Sec. V.5.3):

1/ν − τ_rp = [2π τ̄ Ῡ/(aKν_X)] exp(Δ̄/a), (C4)

where

Ῡ = [−(d²H/dV²)|_{θ*} (d²H/dV²)|_{μ*}]^{−1/2},    Δ̄ = 2[H(θ*) − H(μ*)]/σ̄²,    τ̄ = aKν_X τ,    σ̄ = σ/√a, (C5)

while the bar indicates quantities that remain of order 1 in the small a limit, provided the external inputs ν_X are at least of the order of 1/(aKτ_L). Equation (C4) is the generalization of Eq. (12) to the case of integrate-and-fire neuron models with voltage-dependent currents; it shows that, at the dominant order, finite firing emerges if a ~ 1/log K. Moreover, Eq. (C4) shows that corrections to the logarithmic scaling depend on the specific type of voltage-dependent currents used in the model.
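The quantities entering Eq. (C4) are easy to extract numerically once ψ(V) is specified. The sketch below does this for an EIF-type ψ (all parameters illustrative; voltages in units of θ = 1, and the prefactor τ/(τ_L g_L) assumed to be 1), locating μ* and θ* as the two roots of H′(V) and evaluating the barrier H(θ*) − H(μ*):

```python
import math

# Effective potential H(V) of Eq. (C2) for an EIF spike-generating current
# psi(V) = DeltaT*gL*exp((V - theta)/DeltaT). All parameters are illustrative.
mu, theta, DeltaT = 0.8, 1.0, 0.05
c = 1.0  # stands in for tau/(tau_L*gL), assumed O(1) here

def H(V):   # H(V) = (V - mu)^2/2 - c * DeltaT^2 * exp((V - theta)/DeltaT)
    return 0.5 * (V - mu) ** 2 - c * DeltaT ** 2 * math.exp((V - theta) / DeltaT)

def dH(V):  # H'(V) = (V - mu) - c * DeltaT * exp((V - theta)/DeltaT)
    return (V - mu) - c * DeltaT * math.exp((V - theta) / DeltaT)

def bisect(f, lo, hi, n=100):
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

mu_star = bisect(dH, 0.0, 0.95)     # minimum of H (stable fixed point)
theta_star = bisect(dH, 0.95, 2.0)  # maximum of H (effective threshold)
barrier = H(theta_star) - H(mu_star)
```

With ΔT → 0 the roots approach μ and θ, recovering the hard-threshold LIF picture.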

APPENDIX D: CALCULATIONS IN THE STRONG-COUPLING REGIME—NETWORKS

In this section, we show how the results on the strong-coupling limit of single-neuron response can be generalized to the network case. First, we analyze the problem in the case in which excitatory and inhibitory neurons have the same biophysical properties (model A). In this model, we start by discussing the results using the effective time constant approximation and then discuss the full results. Then, we study the case in which excitatory and inhibitory neurons have different biophysical properties (model B).

1. Model A, effective time constant approximation

As discussed in the main text, the network response in model A with the effective time constant approximation is obtained by solving the self-consistency condition given by Eqs. (19) and (10). At strong coupling, this condition can be simplified to the form of Eq. (12). In the strong-coupling limit, when νX ≫ 1/aKτL and ν ≫ 1/τrp, the right-hand side of Eq. (10) depends on ν and νX only through their ratio. Therefore, we look for solutions of the simplified self-consistency condition with a Taylor expansion

ν/ν_X = Σ_{k=1}^{∞} ρ_k x^{k−1},    x = τ_rp ν_X. (D1)

Keeping only terms up to first order in x, the self-consistency condition becomes

1/ρ_1 − (1 + ρ_2/ρ_1²) x = Q(ρ_1) + ρ_2 [dQ(y)/dy]|_{y=ρ_1} x,

from which we find

ρ_1 = 1/Q(ρ_1). (D2)

The solution of Eq. (D2) provides the linear component of the network response; this is preserved in the strong-coupling limit with an expression analogous to Eq. (14) but with

r_E/ν_X = 1 + ρ_1,    r_I/ν_X = ρ_1.

This uniquely defines a scaling between a and K [see Fig. 3(a) for an example of the scaling function]. We test the validity of our result in Fig. 3(b). The numerical analysis shows that, as K increases, the scaling relation prevents saturation and suppression of the network response. However, unlike what happens in the single-neuron case, the shape of the transfer function is not preserved and becomes increasingly linear as K becomes larger. This is analogous to what happens in the balanced state model [7,8,10,85], where the network transfer function becomes linear in the strong-coupling limit. For the case under investigation here, we can understand this suppression of nonlinearities by looking at the second-order terms in the expansion of Eq. (D1). Keeping the dominant contribution in a, we find

ρ_2 ~ (2aρ_1 v̄_max/σ̄²) [σ̄ dμ/dy + (θ − μ) dσ̄/dy]. (D3)

Hence, ρ_2 goes to zero as a decreases, producing a linear transfer function. This follows directly from the self-consistency relation and is not present in the single-neuron case, where, in fact, a nonlinear transfer function is observed in the large K limit. Figure 3(b) shows that linearity is reached very slowly with K; this follows directly from Eq. (D3), where the suppression of nonlinear terms is controlled by a, which goes to zero only slowly (approximately logarithmically) with K.
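The linear (gain) term defined by Eq. (D2) can be obtained by standard one-dimensional root finding. The sketch below uses a toy monotonically increasing Q, a hypothetical stand-in for the full expression, to illustrate the fixed-point structure:

```python
import math

# Solve the self-consistency condition rho1 = 1/Q(rho1) of Eq. (D2).
# Q below is a hypothetical increasing function, NOT the paper's expression.
def Q(y, g_eff=3.0):
    return g_eff * y / (1.0 + y)

def solve_gain(Q, lo=1e-9, hi=100.0, n=200):
    # root of y*Q(y) = 1 by bisection; y*Q(y) is increasing when Q >= 0 increases
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if mid * Q(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rho1 = solve_gain(Q)  # linear network gain nu/nu_X at leading order
```

For this toy Q, the fixed point can also be found analytically (3ρ₁² = 1 + ρ₁), which provides a check on the bisection.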

2. Model A, multiplicative noise

In this section, we generalize the approach used above, relaxing the effective time constant approximation. As discussed in Appendix B, Eq. (A13) in the strong-coupling limit becomes

1/ν = τ_rp + Q/ν_X (D4)

with

Q = (1/aK) exp{[F(v_max) − F(α)]/a} √(π[1 + (ν/ν_X)(1 + g²γ)]/{[1 + (ν/ν_X)(1 + gγ)]³(α² + 1)}) (v_max² + 1)/|v_max − α| (D5)

and

τ⁻¹ = aKω⁻¹,    ω⁻¹ = ν_X[1 + (ν/ν_X)(1 + gγ)],    χ⁻¹ = ν_X[1 + (ν/ν_X)(1 + g²γ)],    μ = [E_E + (ν/ν_X)(E_E + gγE_I)]/[1 + (ν/ν_X)(1 + gγ)],    E_S = [E_E + (ν/ν_X)(E_E + g²γE_I)]/[1 + (ν/ν_X)(1 + g²γ)],    E_D = (E_E − E_I)√[(1 + ν/ν_X)(ν/ν_X)g²γ]/[1 + (ν/ν_X)(1 + g²γ)]. (D6)

Here, we assume aK ≫ 1/(τ_Lν_X) so that the function Q depends on ν and ν_X only through the combination ν/ν_X. We show below that a scaling relation analogous to that of single neurons holds; hence, for K large enough, aK ≫ 1/(τ_Lν_X) is automatically satisfied. To solve the self-consistency condition, we express the firing rate ν as a Taylor expansion

τ_rp ν = Σ_{k=1}^{∞} ρ_k x^k,    x = τ_rp ν_X. (D7)

Note that in Eq. (D7) we assume ρ_0 = 0; we come back to this point at the end of the section. Under this assumption, y ≡ ν/ν_X = Σ_{k=1}^{∞} ρ_k x^{k−1}, and the function Q depends only on powers of the dimensionless variable x. Keeping only terms up to first order in x, Eq. (D4) becomes

1/ρ_1 − (1 + ρ_2/ρ_1²) x = Q(ρ_1) + ρ_2 [dQ(y)/dy]|_{y=ρ_1} x, (D8)

from which we find

ρ_1 = 1/Q(ρ_1). (D9)

The solution of Eq. (D9) provides the linear component of the network response, i.e., its gain; we discuss this function in more detail at the end of this section.

From Eq. (D9), we find that the network gain ρ1 is preserved in the strong-coupling limit if the factor

(1/aK) exp{[F(v_max) − F(α)]/a} (D10)

is constant. Equation (D10) uniquely defines a scaling between a and K [see Fig. 12(c) for an example of the scaling function]. We test the validity of the scaling in Fig. 12 as follows: Given a set of parameters a, K, and ρ1, we compute numerically the transfer function from Eq. (A13); then we increase K, determine the corresponding change in a using Eq. (D10), and compute again the transfer function—results of this procedure are shown in Fig. 12(a). The numerical analysis shows that, as K increases, our scaling relation prevents saturation and the network response remains finite.

As in the case with the effective time constant approximation, the shape of the transfer function is not preserved by the scaling, and an increasingly linear response is observed. We can understand this suppression of nonlinearities by looking at the second-order terms in the expansion of Eq. (D4); we find

ρ_2 = −ρ_1²/{ρ_1 (d log[Q(y)]/dy)|_{y=ρ_1} + 1}, (D11)

and, keeping the dominant contribution in 1/a at the denominator,

ρ_2 ~ −aρ_1/{(dF[v_max(y), y]/dy)|_{ρ_1} − (dF[α(y), y]/dy)|_{ρ_1}}. (D12)

Hence, ρ_2 goes to zero as a decreases, producing a linear transfer function. The nonlinearities at low rate in Fig. 12(a) (e.g., see red and yellow lines) show that our assumption ρ_0 = 0 is not valid, in general. However, it turns out that the above-defined scaling relation also suppresses these nonlinearities in the limit of strong coupling (e.g., blue and cyan lines).

FIG. 12.

FIG. 12.

Strong-coupling limit of networks of conductance-based neurons in model A. Numerically computed network transfer function (a), CV (b), and probability distribution of the membrane potential (d) obtained from Eqs. (D4), (A12), and (B17). Different colors correspond to different values of a and K which are changed according to the scaling relation (D10) (c). As K increases, the network transfer function and CV converge to the expressions derived in the main text (black lines). Note that, unlike the case of a single neuron, the network transfer functions become linear. The probability distribution of the membrane potential becomes Gaussian and slowly converges to a delta function. (e) and (f) show the network gain and membrane potential, respectively, for different values of a at fixed K. Note that, unlike what happens in current-based networks (black dashed lines), the gain is not monotonic with g. Simulation parameters are as in Fig. 9; in (a)–(d), g = 20.

We now characterize the dependency of the transfer function gain, i.e., its slope, on network parameters. For fixed network parameters, the network gain ρ_1 is defined as the solution of Eq. (D9); solutions as a function of a and g are shown in Fig. 12(e). At fixed values of a, the gain initially decreases as g increases, and, for g large enough, the opposite trend appears. This behavior is due to two different effects which are produced by the increase of g: On one hand, it increases the strength of recurrent inhibition; on the other hand, it decreases the equilibrium membrane potential μ and brings it closer to the inhibitory reversal potential E_I, which, in turn, weakens inhibition [see Fig. 12(f)]. Figure 12(e) shows that the gain is finite only for a finite range of the parameter g; divergences appear because recurrent inhibition is not sufficiently strong to balance excitation. At small g, the unbalance is produced by weak efficacy of inhibitory synapses; at large g, inhibition is suppressed by the approach of the membrane potential to the reversal potential of inhibitory synapses. Increasing the value of a produces an upward shift in the curve and, at the same time, decreases the range of values in which the gain is finite. The decrease in gain generated at low values of g is also observed in networks of current-based neurons [10], where the gain is found to be 1/(gγ − 1). Finally, we note that the difference between conductance- and current-based models decreases with a.

To conclude this analysis, we give an approximated expression of the probability distribution of the membrane potential of Eq. (A12) which, in the strong-coupling limit, becomes

P(V) = [νω/(aK E_D |v_max − α|)] {[u(V_max)² + 1]/[u(V)² + 1]} e^{[F(v_max) − F(u(V))]/a}, (D13)

where Vmax is the value of the membrane potential V which maximizes the integrand of Eq. (A12) while the function u() is defined in Eq. (A15). Examples of the probability distribution and the corresponding approximated expressions are given in Fig. 12(d).

3. Model B, multiplicative noise

In this section, we generalize the results obtained so far to the case of networks with excitatory and inhibitory neurons with different biophysical properties.

a. Model definition

Here, we take into account the diversity of the two types of neurons with

τ_j = τ_E,    a_{jm} ∈ {a_{EX}, a_{EE}, a_{EI}}, (D14)

for excitatory neurons and

τ_j = τ_I,    a_{jm} ∈ {a_{IX}, a_{IE}, a_{II}}, (D15)

for inhibitory neurons. We use the parametrization

a_{EX} = a_E,    a_{EE} = a_E,    a_{EI} = g_E a_E,    a_{IX} = a_I,    a_{IE} = a_I,    a_{II} = g_I a_I, (D16)

and

K_{EX} = K_E,    K_{EE} = K_E,    K_{EI} = γ_E K_E,    K_{IX} = K_I,    K_{IE} = K_I,    K_{II} = γ_I K_I. (D17)

Equation (1) becomes

τ_E dV_E/dt = −(V_E − μ_E) + σ_E(V_E)√τ_E ζ_E,    τ_I dV_I/dt = −(V_I − μ_I) + σ_I(V_I)√τ_I ζ_I. (D18)

The expressions for excitatory neurons are

τ_E⁻¹ = τ_{L,E}⁻¹ + a_E K_E ω_E⁻¹,    ω_E⁻¹ = ν_EX + ν_E + g_Eγ_E ν_I,    μ_E = τ_E{E_L/τ_{L,E} + a_EK_E[ν_EX E_E + ν_E E_E + ν_I g_Eγ_E E_I]},    σ_E² = (a_E²K_Eτ_E/χ_E)[(V − E_{S,E})² + E_{D,E}²],    χ_E⁻¹ = ν_EX + ν_E + g_E²γ_E ν_I,    E_{S,E} = χ_E[ν_EX E_E + ν_E E_E + ν_I g_E²γ_E E_I],    E_{D,E} = χ_E √[(ν_EX + ν_E) g_E²γ_E ν_I] (E_E − E_I); (D19)

analogous expressions are valid for inhibitory neurons.

The firing rate is given by solving a system of two equations:

1/ν_E − τ_rp = (2χ_E/(a_E²K_E)) ∫_{v_{min,E}}^{v_{max,E}} dv ∫_{−∞}^{v} dx [1/(x² + 1)] exp{[F_E(v) − F_E(x)]/a_E},    1/ν_I − τ_rp = (2χ_I/(a_I²K_I)) ∫_{v_{min,I}}^{v_{max,I}} dv ∫_{−∞}^{v} dx [1/(x² + 1)] exp{[F_I(v) − F_I(x)]/a_I}, (D20)

with

F_E(x) = [2χ_E/(a_EK_Eτ_E)][(1/2) log(x² + 1) − α_E arctan(x)],    v_{min,E} = (V_r − E_{S,E})/E_{D,E},    v_{max,E} = (θ − E_{S,E})/E_{D,E},    α_E = (μ_E − E_{S,E})/E_{D,E}, (D21)

and analogous expressions for the inhibitory population. The probability distribution of the membrane potential and the CV are straightforward generalizations of Eqs. (A12) and (A14).

b. Scaling analysis

We parametrize inputs to the two populations as ν_EX and ν_IX = ην_EX. Using an analysis analogous to the one described above, we obtain a simplified expression for the self-consistency Eq. (D20), that is,

1νEτrp=QE(νE/νEX,νI/νEX)νEX,    1νIτrp=QI(νE/νEX,νI/νEX)νEX, (D22)

where

QE=[1aEKEexpFE(vmax,E)FE(αE)aE]π[1+νEνEX+gE2γEνIνEX][1+νEνEX+gEγEνIνEX]3(αE2+1)vmax,E2+1|vmax,EαE| (D23)

and

QI=[1aIKIexpFI(vmax,I)FI(αI)aI]π[η+νEνEX+gI2γIνIνEX][η+νEνEX+gIγIνIνEX]3(αI2+1)vmax,I2+1|vmax,IαI|. (D24)

We investigate the solution in the strong-coupling limit using an expansion:

τrpνE = ∑k=1∞ ρkE xk,    τrpνI = ∑k=1∞ ρkI xk,    x = τrpνEX, (D25)

and obtain

ρ1E = 1/QE(ρ1E, ρ1I),    ρ1I = 1/QI(ρ1E, ρ1I). (D26)

Equation (D26) defines the gain of the excitatory and inhibitory populations. As for model A, requiring that the network gain is preserved in the large-K limit is equivalent to assuming that the products

(1/(aj√Kj)) exp{[Fj(vmax,j) − Fj(αj)]/aj} (D27)

remain constant; these constraints define how the synaptic strength should scale with K to preserve the response gain. We note that, since Fj(vmax,j) − Fj(αj) is different for the two populations, in the general case there are two different scalings for the two populations; this prediction is verified in Fig. 13.
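The constraint of holding the product (D27) constant can be turned into a numerical scaling a(K). The sketch below reads (D27) as (1/(a√K)) exp(ΔF/a) with ΔF = Fj(vmax,j) − Fj(αj) > 0; the prefactor, the values of dF, and the constant C are illustrative assumptions used only to exhibit the trend. The resulting a(K) decreases only logarithmically with K, in line with the a ~ 1/log(K) scaling discussed in the main text.

```python
import math

def efficacy_for_K(K, dF=1.0, C=7000.0):
    """Solve exp(dF/a) / (a * sqrt(K)) = C for a by bisection in log space.
    dF plays the role of F(v_max) - F(alpha) > 0; dF and C are illustrative."""
    target = math.log(C)
    # log of the product (D27): dF/a - log(a) - (1/2) log(K); monotonically decreasing in a
    f = lambda a: dF / a - math.log(a) - 0.5 * math.log(K)
    lo, hi = 1e-4, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# synaptic efficacy preserving the product (D27) across four decades of K
a_of_K = {K: efficacy_for_K(K) for K in (10**3, 10**4, 10**5, 10**6)}
```

Because the exponential dominates, a thousandfold increase in K changes a only by a modest factor, which is the hallmark of the logarithmic scaling.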

FIG. 13.

Limit of large K for networks, model B. Firing rate and CV of excitatory and inhibitory neurons predicted by the mean field model, for different values of the inputs and of K; the expected asymptotic behavior is shown in black. In (c),(f), we show the corresponding scaling relations, with dots corresponding to the connectivity parameters. Simulation parameters: the two populations have gE = 20.0 and gI = 19.0; for both populations, a = 0.0005 for K = 105; other parameters are as in Fig. 9.

APPENDIX E: SIMULATIONS VS THEORY

All the results shown in the main text are based on the mean field analysis of the network dynamics. In this section, we investigate how the predictions of the mean field theory compare to numerical simulations of networks of conductance-based neurons.

Using the simulator Brian2 [71], we simulate the dynamics of networks of spiking neurons defined by Eq. (1). We investigate networks of NE excitatory and NI inhibitory neurons; the two groups are driven by two populations of Poisson units of size NEX and NIX, respectively. Simulations are performed for NE = NI = NEX = NIX = 10K and 100K, with no significant differences between the two. We use uniformly distributed delays of excitatory and inhibitory synapses: delays are drawn randomly and independently at each existing synapse from uniform distributions in the range [0, 10] ms (E synapses) and [0, 1] ms (I synapses). For fixed network parameters, the dynamics are simulated for 10 s with a time step of 10 μs. We perform simulations for different values of K; the value of a is rescaled according to the scaling relation of Eq. (D10). From the resulting activity, we measure the firing rate, CV, and probability distribution of the membrane potential; results are shown in Fig. 14. Mean field predictions are in qualitative agreement with numerical simulations, and the agreement improves as a decreases. Deviations from mean field predictions can arise from three factors: (i) the finite size of conductance jumps due to presynaptic action potentials; (ii) correlations in synaptic inputs to different neurons due to recurrent connectivity; (iii) temporal correlations in synaptic inputs due to non-Poissonian firing. In our simulations, deviations due to (i) and (ii) become small when both a and the connection probability are small. Deviations due to (iii) become small when ν ≪ 1/τrp since, as shown in Eq. (B17) in Appendix B, the firing statistics of presynaptic neurons tend to those of a Poisson process.
As predicted by the mean field analysis, with increasing K (and decreasing a) the network response becomes linear and approaches the asymptotic scaling; the firing remains irregular, as shown by the CV, and the membrane potential becomes Gaussian distributed.
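For readers who want to reproduce the measurement pipeline without Brian2, the sketch below performs an Euler–Maruyama integration of a leaky integrate-and-fire neuron driven by white noise (the additive-noise limit of the effective dynamics) and extracts the firing rate and CV of ISI. All parameter values are illustrative, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the paper's values); V is measured relative to threshold
tau, mu, sigma = 20e-3, -5e-3, 4e-3        # membrane time constant (s), mean and noise (V)
theta, v_reset, tau_rp = 0.0, -10e-3, 2e-3  # threshold, reset, refractory period
dt, T = 0.05e-3, 50.0                       # integration step and total time (s)
n = int(T / dt)

drift = dt / tau
noise_amp = sigma * np.sqrt(dt / tau)       # tau dV/dt = -(V - mu) + sigma*sqrt(tau)*zeta
xi = rng.standard_normal(n)

v, refr = v_reset, 0.0
spikes = []
for i in range(n):
    if refr > 0.0:                          # absolute refractory period
        refr -= dt
        continue
    v += drift * (mu - v) + noise_amp * xi[i]
    if v >= theta:
        spikes.append(i * dt)
        v = v_reset
        refr = tau_rp

isi = np.diff(spikes)
rate = len(spikes) / T                      # firing rate (Hz)
cv = isi.std() / isi.mean()                 # CV of the ISI distribution
```

With the mean input held below threshold, firing is fluctuation driven and the measured CV is of order one, the irregular regime discussed in the main text.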

FIG. 14.

Comparison of mean field theory and numerical simulations. Network transfer function (first row), CV of the ISI distribution (second row), and probability distribution of the membrane potential at νE = 0.05τrp (third row). In each panel, we show the mean field prediction (green), results from numerical simulations (red), and the value expected in the strong-coupling limit (black). Different columns correspond to different values of K, with a scaled according to Eq. (D10). The agreement between network simulations (red) and mean field predictions (green) improves as a decreases, as expected, since the results are derived in the diffusion approximation. Simulation parameters are g = 20 and NE = NI = NEX = NIX = 100K.

APPENDIX F: EFFECTS OF HETEROGENEITY IN THE CONNECTIVITY BETWEEN NEURONS

In this section, we describe how fluctuations in single-cell properties modify the expressions derived above; in particular, we investigate the effect of heterogeneities in the number of connections per neuron in the simplified framework of model A. The formalism described here generalizes to networks of conductance-based neurons the analysis done in Refs. [55,90] for networks of current-based neurons.

We assume that the ith neuron in the network receives projections from KXi, KEi, and KIi external, excitatory, and inhibitory neurons, respectively. These numbers are drawn randomly from Gaussian distributions with mean K (γK) and variance ΔK2 (γ2ΔK2) for excitatory (inhibitory) synapses. Note that ΔK2 is assumed to be sufficiently small that the probability of generating a negative number is negligible. Fluctuations in the number of connections are expected to produce a distribution of rates in the population, characterized by mean ν and variance Δν2. As a result, the rates of incoming excitatory and inhibitory spikes differ from cell to cell and become

KEirEi = K(rE + ΔEzEi),    KIirIi = γK(rI + ΔIzIi),    rE = ν + νX,    rI = ν,    ΔE2 = CVK2(ν2 + νX2) + Δν2/K ≈ CVK2(ν2 + νX2),    ΔI2 = CVK2ν2 + Δν2/(γK) ≈ CVK2ν2, (F1)

where rE,I are the average presynaptic rates and zE,Ii are realizations of a quenched normal noise with zero mean and unit variance, fixed in a given realization of the network connectivity. Starting from Eq. (F1), the rate νi of the cell is derived as in the case without heterogeneities; the main difference is that it is now a function of the particular realizations of zEi and zIi. The quantities ν and Δν2 are obtained from population averages through the self-consistency relations

ν = ⟨ν(zE, zI)⟩,    Δν2 = ⟨ν(zE, zI)2⟩ − ν2, (F2)

where 〈·〉 represents the Gaussian average over the variables zE and zI. Once ν and Δν2 are known, the probability distribution of firing rate in the population is given by

P(ν) = (1/2π) ∫ dzE dzI e−zE2/2 e−zI2/2 δ[ν − ν(zE, zI)]. (F3)

As shown in the main text [Fig. 4(a)], Eq. (F3) captures quantitatively the heterogeneity in rates observed in numerical simulations.
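The input statistics of Eq. (F1) can be checked directly by sampling in-degrees. The sketch below draws independent external and recurrent excitatory in-degrees for each neuron and compares the resulting input-rate variance with ΔE2 = CVK2(ν2 + νX2); the population size and rates are illustrative, and rate heterogeneity (the Δν2/K term) is neglected.

```python
import numpy as np

rng = np.random.default_rng(1)

K, cv_k = 1000, 0.1           # mean in-degree and its coefficient of variation
nu_x, nu = 5.0, 3.0           # external and recurrent rates (Hz); illustrative
n_cells = 200_000

# independent external and recurrent excitatory in-degrees per neuron
K_x = rng.normal(K, cv_k * K, n_cells)
K_e = rng.normal(K, cv_k * K, n_cells)

# total excitatory input rate per neuron, normalized by the mean in-degree K
r_i = (K_x * nu_x + K_e * nu) / K
delta2_mc = r_i.var()
delta2_th = cv_k**2 * (nu**2 + nu_x**2)   # Delta_E^2 from Eq. (F1)
```

The two independent in-degree draws are what produce the sum ν2 + νX2 in the variance, rather than (ν + νX)2.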

In the large K (small a) limit, the mathematical expressions derived above simplify significantly. First, as long as the parameter μi of the ith neuron is below threshold, its rate is given by an expression analogous to Eq. (12) which, for small ΔE,I, can be written

Qi = Q exp(Γzi),    Γ2 = (ΔE ∂vmax2/∂rE)2 + (ΔI ∂vmax2/∂rI)2, (F4)

where zi is generated from a Gaussian random variable with zero mean and unit variance. Moreover, if responses are far from saturation, the single rate can be written as

νi = νXQi = ν0 exp(Γzi),    Γ2 = Ω2CVK2/a2,    Ω2 = [a ∂vmax2/∂(rE/νX)]2(ρ2 + 1) + [a ∂vmax2/∂(rI/νX)]2ρ2, (F5)

where ν0 is the rate in the absence of quenched noise [i.e., Eq. (20) in the main text]. It is easy to show that, in Eq. (F5), Ω2 is independent of a, K, and νX in the large K (small a) limit. Finally, as noted in Ref. [55], if the single-neuron rate can be expressed as an exponential function of a quenched variable z, Eq. (F3) can be integrated exactly and the distribution of rates is log-normal and given by

P(ν) = [1/(√(2π)Γν)] exp{−[log(ν) − log(ν0)]2/(2Γ2)}. (F6)

Therefore, when the derivation of Eq. (F5) is valid, rates in the network should follow a log-normal distribution, with parameters given by

ν = ν0 exp(Γ2/2),    Δν2 = ν2[exp(Γ2) − 1]. (F7)

For Γ2 ≪ 1, we find Δν/ν ≈ Γ, which scales linearly with CVK, consistent with the numerical results shown in Fig. 4(c).
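The moment relations (F7) of the log-normal rate distribution can be verified by direct sampling of Eq. (F5); the values of ν0 and Γ below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

nu0, gamma = 2.0, 0.5        # baseline rate (Hz) and quenched-noise width; illustrative
z = rng.standard_normal(1_000_000)
rates = nu0 * np.exp(gamma * z)               # log-normal rates, Eq. (F5)

mean_th = nu0 * np.exp(gamma**2 / 2.0)        # population mean, Eq. (F7)
var_th = mean_th**2 * (np.exp(gamma**2) - 1)  # population variance, Eq. (F7)
```

The sampled mean and variance match the closed-form expressions to within Monte Carlo error, illustrating how a Gaussian quenched variable in the exponent produces the broad, skewed rate distributions discussed in the main text.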

APPENDIX G: FINITE SYNAPTIC TIME CONSTANTS

In this section, we discuss the effect of the synaptic time constant on single-neuron and network responses. First, we derive an approximate expression for the single-neuron membrane time constant; we then compute approximate expressions for the firing rate, valid in different regimes of the ratio τS/τ; at the end of the section, we discuss the response of networks of neurons with large τS/τ.

The single-neuron membrane potential dynamics is given by

CjV̇j(t) = −gLj(Vj − EL) − ∑A=E,I gAj(t)(Vj − EA),    τEġEj = −gEj + gLjτE ∑m ajm ∑n δ(t − tmn − D),    τIġIj = −gIj + gLjτI ∑m ajm ∑n δ(t − tmn − D). (G1)

Using the effective time constant approximation [40], we have

CV̇ = −g0(V − μ) − gEF(μ − EE) − gIF(μ − EI),    τEġEF = −gEF + gLσE√τE ζE,    τIġIF = −gIF + gLσI√τI ζI, (G2)

where gAF represents the fluctuating component of the conductance gA, i.e.,

gA(t)=gA0+gAF(t) (G3)

and

⟨ζA(t)ζB(t′)⟩ = δA,Bδ(t − t′),    g0 = gL + gE0 + gI0,    gA0 = gLaAτARA,    σA2 = aA2τARA. (G4)

We are interested in the stationary response, so we introduce the variable

z = (μ − EE)gEF + (μ − EI)gIF (G5)

with derivative

ż = (μ − EE)[−gEF + gLσE√τE ζE]/τE + (μ − EI)[−gIF + gLσI√τI ζI]/τI. (G6)

Since we are interested in understanding the effect of an additional timescale, we simplify the analysis by assuming a single synaptic timescale τE = τI = τS, obtaining

τSż = −z + σz√τS ζ,    σz2 = gL2[σE2(μ − EE)2 + σI2(μ − EI)2]. (G7)

To have the correct limit for τS → 0, we impose aA = aA0τL/τS, where aA0 is the value of the synaptic efficacy in the limit of instantaneous synapses. With these assumptions, the system of equations becomes

τ dV/dt = −(V − μ) + σ√(τ/τS) z,    τS dz/dt = −z + √τS ζ. (G8)

One can check that, in the limit τS → 0, these equations become analogous to those of the main text, with η = z/√τS. In what follows, we provide approximate expressions for the single-neuron transfer function in three regimes: small synaptic time constant [67,68], large synaptic time constant [70], and intermediate values [72]. We also note that a numerical procedure to compute the firing rate exactly for any value of the synaptic time constant was introduced recently, using Fredholm theory [91].
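The suppression of firing at large τS/τ analyzed in the next subsections can be illustrated by integrating Eq. (G8) directly, with the colored noise z generated as an Ornstein–Uhlenbeck process. The Euler–Maruyama sketch below compares the rate for a short and a long synaptic time constant; all parameter values are illustrative, not the paper's.

```python
import numpy as np

def lif_rate(tau_s_over_tau, seed=0, tau=20e-3, mu=-4e-3, sigma=4e-3,
             theta=0.0, v_reset=-10e-3, tau_rp=2e-3, dt=0.02e-3, T=15.0):
    """Euler-Maruyama integration of Eq. (G8): an LIF neuron driven by an
    Ornstein-Uhlenbeck variable z with correlation time tau_s."""
    rng = np.random.default_rng(seed)
    tau_s = tau_s_over_tau * tau
    n = int(T / dt)
    xi = rng.standard_normal(n)
    amp = sigma * np.sqrt(tau / tau_s)      # input amplitude sigma * sqrt(tau / tau_s)
    sq = np.sqrt(dt / tau_s)
    v, z, refr, n_spikes = v_reset, 0.0, 0.0, 0
    for i in range(n):
        z += -z * dt / tau_s + sq * xi[i]   # tau_s dz/dt = -z + sqrt(tau_s) * zeta
        if refr > 0.0:                      # refractory period: hold the neuron
            refr -= dt
            continue
        v += ((mu - v) + amp * z) * dt / tau
        if v >= theta:
            n_spikes += 1
            v = v_reset
            refr = tau_rp
    return n_spikes / T

rate_fast = lif_rate(0.05)   # tau_S << tau: close to the white-noise prediction
rate_slow = lif_rate(5.0)    # tau_S >> tau: strongly suppressed firing
```

For τS ≫ τ the membrane potential tracks the slow input, threshold crossings become rare events, and the measured rate collapses, in line with the expressions derived below from Refs. [70,72].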

1. Single-neuron transfer function for different values of τS/τ

For τS/τ ≪ 1, as shown in Refs. [67,68], the firing rate can be computed with a perturbative expansion and is given by

1/ν = τ√π ∫ṽminṽmax dx ex2[1 + erf(x)],    ṽ(x) = (x − μ)/σ + α̃√(τS/τ), (G9)

with α̃ = −ζ(1/2) ≈ 1.46, where ζ denotes the Riemann zeta function. As shown in Fig. 15, Eq. (G9) generates small corrections around the prediction obtained with instantaneous synapses and captures well the response for values τS/τ ≲ 0.1.

FIG. 15.

Synaptic time constant suppresses single-neuron response in the strong-coupling limit. Single-neuron response for different values of K, with a rescaled according to Eq. (14). Rates are plotted as a function of K (first row) and τS/τ (second row); different columns correspond to different synaptic time constants τS (titles). As K increases, because the synaptic time constant τS becomes non-negligible compared to the membrane time constant τ, rates computed numerically from Eq. (G1) (black dots) depart from the prediction of Eq. (10) (green). The dependency of the rate on K is captured by Eq. (G9) (blue) for small values of τS/τ and by Eq. (G11) (red) for large values of τS/τ. This decay cannot be prevented by a new scaling relation of a with K and provides an upper bound on how much coupling can be increased while preserving the response. Simulation parameters: a = 0.006 for K = 103, g = 12, and η = 1.46.

For τS/τ ≈ 1, as shown in Ref. [72] using the Rice formula [92], the single-neuron firing rate is well approximated by the rate of upward threshold crossings of the membrane potential dynamics without reset. Starting from Eq. (G8) and using the results of Ref. [72], we obtain

ν = [1/(2π√(ττS))] exp[−vmax2(1 + τS/τ)]. (G10)

For τS/τ ≫ 1, as shown in Ref. [70], the neuron fires only when fluctuations of z are large enough for V to be above threshold; the corresponding rate is given by

ν = ∫vmax/ϵ∞ dw (e−w2/√π)[τrp + τ log((vmin − ϵw)/(vmax − ϵw))]−1,    ϵ = √(τ/τS). (G11)

As shown in Fig. 15, Eq. (G11) captures the response for values τS/τ ≳ 1 and predicts a strong suppression of response at larger τS/τ.

Higher-order terms in the τS/τ expansion could be computed using the approach described in Ref. [91]. However, Fig. 15 shows that Eqs. (G9)–(G11) are sufficient to capture quantitatively the responses observed in numerical simulations in the different regimes of τS/τ. Equations (G9)–(G11) show that the single-neuron response is a nonlinear function of the input rates; this nonlinearity implies that no scaling relation between a and K can rescue the response from the suppression observed in Figs. 15 and 6(a).

2. Network response for τS/τ larger than one

In this section, we study responses in networks of neurons with large τS/τ. As in the case of instantaneous synapses, the network response can be obtained by solving the self-consistency relation given by the single-neuron transfer function using input rates

rE=νX+ν,    rI=ν.

In particular, solutions of the implicit equation generated by Eq. (G11) give the network response in the region of inputs for which τS/τ ≫ 1. In this region of inputs, assuming coupling to be strong, the implicit equation becomes

ν = [√(τ/τS)/(τrpvmax√π)] exp(−vmax2τS/τ). (G12)

Equation (G12), which is validated numerically in Fig. 16, implies that firing is preserved if vmax√(τS/τ) is of order one, i.e., if

μ ~ θ − σ√(τ/τS) ~ θ − (1/√K)·σ/√(aτS[νX + ν(1 + gγ)]). (G13)

Combining the above equation with the definition of μ, we obtain Eq. (21), which captures the behavior of network response observed in numerical simulations for τS/τ ≫ 1 (Figs. 6 and 16).

FIG. 16.

Approximation of network response for large τS/τ. Plots analogous to Figs. 6(b) and 6(c) of the main text. Dots represent the network response as a function of the input rate νX, computed numerically from Eqs. (1) and (23) for τS = 1 ms (green) and τS = 10 ms (red). Continuous lines correspond to the prediction obtained with instantaneous synapses (black) and for large synaptic time constant [Eqs. (G12), (5), and (G13), colored lines]. As explained in the text, the latter predictions are valid only for large τS/τ; because of this, we plot only values obtained for τS/τ > 1. For τS/τ ≫ 1, the network response is well described by Eq. (21) of the main text.

Equation (G12) can be used to understand the effect of connection heterogeneity in networks with large τS/τ. In particular, generalizing the analysis of Appendix F, we find that rates in the network, in the limit of small CVK and large K, are given by

νi = ν0 exp(ΩSCVK√K zi), (G14)

where ν0 is the population average in the absence of heterogeneity [i.e., the solution of Eq. (G12)] and zi is a Gaussian random variable of zero mean and unit variance. The prefactor ΩS, which is independent of a and K, is given by

ΩS2 = (∂f/∂rE)2(ν2 + νX2) + (∂f/∂rI)2ν2,    f(rE, rI) = vmax2τS/(√K τ). (G15)

Equation (G15) is a generalization of Eq. (22) to the case of large τS/τ. It shows that, in this limit, the state of the network is preserved under connection fluctuations up to CVK ~ 1/√K.

FIG. 17.

Effect of short-term plasticity on single-neuron response. (a) Dependence of the depression variable xE on the presynaptic rate, given by Eq. (H4), for different values of U. (b) Effective membrane time constant τx as a function of νX, with the depression variable as in (a). Short-term plasticity limits the decrease of τ with inputs. (c) Firing rate of conductance-based neurons computed numerically as in Fig. 6(a), but with the depression variable as in (a). Even with short-term plasticity, the single-neuron response is suppressed for large νX. Parameters are τD = 100 ms, K = 103, a = 0.01, τS = 0.1 ms, g = 12, η = 1.4, and γ = 1/4.

APPENDIX H: SHORT-TERM PLASTICITY

In the main text, we show that the finite synaptic time constant generates suppression of single-neuron response for large inputs. In this section, we investigate how this suppression is modified when short-term plasticity is taken into account. We focus our analysis on short-term depression, since this type of plasticity is the most commonly observed in cortical neurons when the presynaptic firing rate is large [93].

We consider a neuron receiving K (γK) excitatory (inhibitory) inputs, each with synaptic time constant τS, from cells firing with Poisson statistics at rates rE = νX and rI = ηνX. Following the approach of Refs. [94,95], we include synaptic depression in Eq. (G1) by assuming that a spike of the mth presynaptic neuron generates an input to the jth postsynaptic neuron given by gLτSajmxjm(t), where xjm(t) ∈ [0, 1] is the depression variable representing the fraction of available neurotransmitter at the synapse. We assume that xjm(t) evolves in time as

dxjmdt=1xjmτD, (H1)

in the absence of presynaptic spikes, and as

xjm → xjm(1 − U), (H2)

in response to a presynaptic spike. In the model, U ∈ [0, 1] describes the fraction of available resources used to produce the postsynaptic input, while τD indicates the timescale over which such resources are regenerated. As shown in Refs. [94,95], xjm satisfies the recursive relation

xjm(tmn+1) = 1 + [xjm(tmn)(1 − U) − 1] e−(tmn+1 − tmn)/τD, (H3)

where xjm(tmn+1) and xjm(tmn) indicate the values of xjm just before the (n + 1)th and the nth presynaptic spikes, respectively. With such synaptic dynamics, the statistics of inputs to single neurons are given by Eq. (G8) with the following substitutions:

τ−1 → τx−1 = τL−1 + aK(xErE + gγxIrI),    μ → μx = τx{EL/τL + aK[xErEEE + gγxIrIEI]},    σ2(V) → σx2(V) = a2Kτx[yErE(V − EE)2 + g2γyIrI(V − EI)2],    xE,I = 1/(1 + UrE,IτD),    yE,I = xE,I/[1 + U(1 − U/2)rE,IτD]. (H4)

Equation (H4) shows that short-term depression affects the single-neuron response for rates rE,I ~ 1/τDU or larger. In particular, the effective time constant τx decreases monotonically with input firing rate and, for νX ≫ 1/τDU, plateaus at

τx* = UτD/[aK(1 + gγ)]. (H5)

In the main text, we show that the ratio τ/τS determines the neural response: Activity increases (decreases) with inputs for τ/τS ≫ 1 (τ/τS ≪ 1). In the absence of short-term depression, τ ~ 1/aKνX, and the regime τ/τS ≪ 1 is always reached for large inputs, regardless of the value of τS. Equation (H5) shows that, with short-term depression and for certain parameters, the regime τ/τS ≪ 1 is not reached; this suggests that short-term depression might prevent the suppression of the single-neuron response at large inputs. To validate this intuition, we compute numerically the response of neurons with conductance-based synapses, a finite synaptic time constant, and short-term depression; results are shown in Fig. 17. Simulations show that, for parameters in which short-term depression prevents the regime τ/τS ≪ 1 from appearing, the single-neuron response is still suppressed for large νX. This numerical result can be understood by noticing that, with short-term depression and for rE,I ≫ 1/τDU, the equilibrium value of the membrane potential μx remains constant, while the variance of the synaptic input σx2 decreases with presynaptic input as 1/(rXτD)2 [Eq. (H4)]. For parameters such that τx*/τS ≳ 1, these properties lead to an exponential suppression of the single-neuron firing [Eq. (12)] when inputs are large.
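The stationary value xE,I = 1/(1 + UrE,IτD) entering Eq. (H4) can be checked by iterating the recursion (H3) over a Poisson presynaptic spike train and averaging x at spike times; the values of U, τD, and the rate below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

U, tau_d = 0.5, 0.1          # release fraction and recovery time constant (s); illustrative
rate = 30.0                  # presynaptic Poisson rate (Hz)
n_spk = 200_000

isi = rng.exponential(1.0 / rate, n_spk)   # Poisson train: exponential ISIs
x, xs = 1.0, np.empty(n_spk)
for n, dt in enumerate(isi):
    # Eq. (H3): release by a factor (1 - U), then recovery toward 1 over tau_d
    x = 1.0 + (x * (1.0 - U) - 1.0) * np.exp(-dt / tau_d)
    xs[n] = x

x_mc = xs[1000:].mean()                    # spike-time average, transient discarded
x_th = 1.0 / (1.0 + U * rate * tau_d)      # stationary value from Eq. (H4)
```

The agreement confirms that, once the presynaptic rate exceeds 1/(UτD), the available resources scale as 1/(UrτD), which is the origin of the plateau (H5) of the effective time constant.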

Results described in this section show that short-term plasticity suppresses the neural response for large inputs. This suppression, unlike that generated by a finite synaptic time constant, emerges because synaptic current fluctuations become small while the effective time constant remains finite. It follows that, in models with short-term plasticity, the autocorrelation time of the membrane potential at large inputs can be of the order of τx* and can be larger than the synaptic time constant. Finally, we point out that, analogously to models without short-term plasticity, response suppression does not appear in networks of neurons, as it is prevented by recurrent interactions.

References

  • [1].Douglas RJ and Martin KAC, A Functional Microcircuit for Cat Visual Cortex, J. Physiol 440, 735 (1991). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [2].Mainen ZF and Sejnowski TJ, Reliability of Spike Timing in Neocortical Neurons, Science 268, 1503 (1995). [DOI] [PubMed] [Google Scholar]
  • [3].Softky WR and Koch C, The Highly Irregular Firing of Cortical Cells Is Inconsistent with Temporal Integration of Random EPSPS, J. Neurosci 13, 334 (1993). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [4].Compte A, Constantinidis C, Tegnér J, Raghavachari S, Chafee MV, Goldman-Rakic PS, and Wang X-J, Temporally Irregular Mnemonic Persistent Activity in Prefrontal Neurons of Monkeys during a Delayed Response Task, J. Neurophysiol 90, 3441 (2003). [DOI] [PubMed] [Google Scholar]
  • [5].Shadlen MN and Newsome WT, Noise, Neural Codes and Cortical Organization, Curr. Opin. Neurobiol 4, 569 (1994). [DOI] [PubMed] [Google Scholar]
  • [6].Shadlen MN and Newsome WT, The Variable Discharge of Cortical Neurons: Implications for Connectivity, Computation, and Information Coding, J. Neurosci 18, 3870 (1998). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [7].van Vreeswijk C and Sompolinsky H, Chaos in Neuronal Networks with Balanced Excitatory and Inhibitory Activity, Science 274, 1724 (1996). [DOI] [PubMed] [Google Scholar]
  • [8].van Vreeswijk C and Sompolinsky H, Chaotic Balanced State in a Model of Cortical Circuits, Neural Comput 10, 1321 (1998). [DOI] [PubMed] [Google Scholar]
  • [9].Amit DJ and Brunel N, Model of Global Spontaneous Activity and Local Structured Activity during Delay Periods in the Cerebral Cortex, Cereb. Cortex 7, 237 (1997). [DOI] [PubMed] [Google Scholar]
  • [10].Brunel N, Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Neurons, J. Comput. Neurosci 8, 183 (2000). [DOI] [PubMed] [Google Scholar]
  • [11].Richardson MJE, Effects of Synaptic Conductance on the Voltage Distribution and Firing Rate of Spiking Neurons, Phys. Rev. E 69, 051918 (2004). [DOI] [PubMed] [Google Scholar]
  • [12].Capaday C and van Vreeswijk C, Direct Control of Firing Rate Gain by Dendritic Shunting Inhibition, J. Integr. Neurosci 05, 199 (2006). [DOI] [PubMed] [Google Scholar]
  • [13].Destexhe A, Rudolph M, Fellous JM, and Sejnowski TJ, Fluctuating Synaptic Conductances Recreate in vivo-like Activity in Neocortical Neurons, Neuroscience 107, 13 (2001). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [14].Shelley M, McLaughlin D, Shapley R, and Wielaard J, States of High Conductance in a Large-Scale Model of the Visual Cortex, J. Comput. Neurosci 13, 93 (2002). [DOI] [PubMed] [Google Scholar]
  • [15].Rudolph M and Destexhe A, Characterization of Subthreshold Voltage Fluctuations in Neuronal Membranes, Neural Comput 15, 2577 (2003). [DOI] [PubMed] [Google Scholar]
  • [16].Lerchner A, Ahmadi M, and Hertz J, High-Conductance States in a Mean-Field Cortical Network Model, Neurocomputing 58–60, 935 (2004). [Google Scholar]
  • [17].Meffin H, Burkitt AN, and Grayden DB, An Analytical Model for the “Large, Fluctuating Synaptic Conductance State” Typical of Neocortical Neurons in vivo, J. Comput. Neurosci 16, 159 (2004). [DOI] [PubMed] [Google Scholar]
  • [18].Rudolph M, Pelletier JG, Paré D, and Destexhe A, Characterization of Synaptic Conductances and Integrative Properties during Electrically Induced EEG-Activated States in Neocortical Neurons in vivo, J. Neurophysiol 94, 2805 (2005). [DOI] [PubMed] [Google Scholar]
  • [19].Kumar A, Schrader S, Aertsen A, and Rotter S, The High-Conductance State of Cortical Networks, Neural Comput 20, 1 (2008). [DOI] [PubMed] [Google Scholar]
  • [20].Pare D, Shink E, Gaudreau H, Destexhe A, and Lang EJ, Impact of Spontaneous Synaptic Activity on the Resting Properties of Cat Neocortical Pyramidal Neurons in vivo, J. Neurophysiol 79, 1450 (1998). [DOI] [PubMed] [Google Scholar]
  • [21].Destexhe A and Paré D, Impact of Network Activity on the Integrative Properties of Neocortical Pyramidal Neurons in vivo, J. Neurophysiol 81, 1531 (1999). [DOI] [PubMed] [Google Scholar]
  • [22].Destexhe A, Rudolph M, and Paré D, The High-Conductance State of Neocortical Neurons in vivo, Nat. Rev. Neurosci 4, 739 (2003). [DOI] [PubMed] [Google Scholar]
  • [23].Li B, Routh BN, Johnston D, Seidemann E, and Priebe NJ, Voltage-Gated Intrinsic Conductances Shape the Input-Output Relationship of Cortical Neurons in Behaving Primate v1, Neuron 107, 185 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [24].Vogels TP and Abbott LF, Signal Propagation and Logic Gating in Networks of Integrate-and-Fire Neurons, J. Neurosci 25, 10786 (2005). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [25].Histed MH, Feedforward Inhibition Allows Input Summation to Vary in Recurrent Cortical Networks, eNeuro 5, ENEURO.0356–17.2018 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [26].Chemla S, Reynaud A, di Volo M, Zerlaut Y, Perrinet L, Destexhe A, and Chavane F, Suppressive Traveling Waves Shape Representations of Illusory Motion in Primary Visual Cortex of Awake Primate, J. Neurosci 39, 4282 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [27].Cavallari S, Panzeri S, and Mazzoni A, Comparison of the Dynamics of Neural Interactions between Current-Based and Conductance-Based Integrate-and-Fire Recurrent Networks, Front. Neural Circuits 8, 12 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [28].Brunel N and Wang X-J, Effects of Neuromodulation in a Cortical Network Model of Object Working Memory Dominated by Recurrent Inhibition, J. Comput. Neurosci 11, 63 (2001). [DOI] [PubMed] [Google Scholar]
  • [29].Zerlaut Y, Chemla S, Chavane F, and Destexhe A, Modeling Mesoscopic Cortical Dynamics Using a Mean-Field Model of Conductance-Based Networks of Adaptive Exponential Integrate-and-Fire Neurons, J. Comput. Neurosci 44, 45 (2018). [DOI] [PubMed] [Google Scholar]
  • [30].Ebsch C and Rosenbaum R, Imbalanced Amplification: A Mechanism of Amplification and Suppression from Local Imbalance of Excitation and Inhibition in Cortical Circuits, PLoS Comput. Biol 14, e1006048 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [31].Capone C, di Volo M, Romagnoni A, Mattia M, and Destexhe A, State-Dependent Mean-Field Formalism to Model Different Activity States in Conductance-Based Networks of Spiking Neurons, Phys. Rev. E 100, 062413 (2019). [DOI] [PubMed] [Google Scholar]
  • [32].di Volo M, Romagnoni A, Capone C, and Destexhe A, Biologically Realistic Mean-Field Models of Conductance-Based Networks of Spiking Neurons with Adaptation, Neural Comput 31, 653 (2019). [DOI] [PubMed] [Google Scholar]
  • [33].Bruno RM and Sakmann B, Cortex Is Driven by Weak but Synchronously Active Thalamocortical Synapses, Science 312, 1622 (2006). [DOI] [PubMed] [Google Scholar]
  • [34].Markram H, Lubke J, Frotscher M, and Sakmann B, Regulation of Synaptic Efficacy by Coincidence of Postsynaptic APs and EPSPs, Science 275, 213 (1997). [DOI] [PubMed] [Google Scholar]
  • [35].Sjöström PJ, Turrigiano GG, and Nelson SB, Rate, Timing, and Cooperativity Jointly Determine Cortical Synaptic Plasticity, Neuron 32, 1149 (2001). [DOI] [PubMed] [Google Scholar]
  • [36].Holmgren C, Harkany T, Svennenfors B, and Zilberter Y, Pyramidal Cell Communication within Local Networks in Layer 2/3 of Rat Neocortex, J. Physiol 551, 139 (2003). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [37].Lefort S, Tomm C, Sarria JCF, and Petersen CCH, The Excitatory Neuronal Network of the C2 Barrel Column in Mouse Primary Somatosensory Cortex, Neuron 61, 301 (2009). [DOI] [PubMed] [Google Scholar]
  • [38].Perin R, Berger TK, and Markram H, A Synaptic Organizing Principle for Cortical Neuronal Groups, Proc. Natl. Acad. Sci. U.S.A 108, 5419 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [39].Jiang X, Shen S, Cadwell CR, Berens P, Sinz F, Ecker AS, Patel S, and Tolias AS, Principles of Connectivity among Morphologically Defined Cell Types in Adult Neocortex, Science 350, aac9462 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [40].Richardson MJE and Gerstner W, Synaptic Shot Noise and Conductance Fluctuations Affect the Membrane Voltage with Equal Significance, Neural Comput 17, 923 (2005). [DOI] [PubMed] [Google Scholar]
  • [41].Siegert AJF, On the First Passage Time Probability Problem, Phys. Rev 81, 617 (1951). [Google Scholar]
  • [42].Ricciardi LM, Diffusion Processes and Related Topics in Biology (Springer-Verlag, Berlin, 1977). [Google Scholar]
  • [43].Gardiner C, Stochastic Methods, 4th ed. (Springer, New York, 2009). [Google Scholar]
  • [44].Fourcaud-Trocmé N, Hansel D, van Vreeswijk C, and Brunel N, How Spike Generation Mechanisms Determine the Neuronal Response to Fluctuating Inputs, J. Neurosci 23, 11628 (2003). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [45].Brunel N and Hakim V, Fast Global Oscillations in Networks of Integrate-and-Fire Neurons with Low Firing Rates, Neural Comput 11, 1621 (1999). [DOI] [PubMed] [Google Scholar]
  • [46].Anderson J, Lampl I, Reichova I, Carandini M, and Ferster D, Stimulus Dependence of Two-State Fluctuations of Membrane Potential in Cat Visual Cortex, Nat. Neurosci 3, 617 (2000). [DOI] [PubMed] [Google Scholar]
  • [47].Crochet S and Petersen CCH, Correlating Whisker Behavior with Membrane Potential in Barrel Cortex of Awake Mice, Nat. Neurosci 9, 608 (2006). [DOI] [PubMed] [Google Scholar]
  • [48].Poulet JFA and Petersen CCH, Internal Brain State Regulates Membrane Potential Synchrony in Barrel Cortex of Behaving Mice, Nature (London) 454, 881 (2008). [DOI] [PubMed] [Google Scholar]
  • [49].Tan AY, Chen Y, Scholl B, Seidemann E, and Priebe NJ, Sensory Stimulation Shifts Visual Cortex from Synchronous to Asynchronous States, Nature (London) 509, 226 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [50].Okun M, Steinmetz N, Cossell L, Iacaruso MF, Ko H, Bartho P, Moore T, Hofer SB, Mrsic-Flogel TD, Carandini M, and Harris KD, Diverse Coupling of Neurons to Populations in Sensory Cortex, Nature (London) 521, 511 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
[51] Yu J, Gutnisky DA, Hires SA, and Svoboda K, Layer 4 Fast-Spiking Interneurons Filter Thalamocortical Signals during Active Somatosensation, Nat. Neurosci. 19, 1647 (2016).
[52] Hromádka T, DeWeese MR, and Zador AM, Sparse Representation of Sounds in the Unanesthetized Auditory Cortex, PLoS Biol. 6, e16 (2008).
[53] O’Connor DH, Peron SP, Huber D, and Svoboda K, Neural Activity in Barrel Cortex Underlying Vibrissa-Based Object Localization in Mice, Neuron 67, 1048 (2010).
[54] Buzsáki G and Mizuseki K, The Log-Dynamic Brain: How Skewed Distributions Affect Network Operations, Nat. Rev. Neurosci. 15, 264 (2014).
[55] Roxin A, Brunel N, Hansel D, Mongillo G, and van Vreeswijk C, On the Distribution of Firing Rates in Networks of Cortical Neurons, J. Neurosci. 31, 16217 (2011).
[56] Landau ID, Egger R, Dercksen VJ, Oberlaender M, and Sompolinsky H, The Impact of Structural Heterogeneity on Excitation-Inhibition Balance in Cortical Networks, Neuron 92, 1106 (2016).
[57] Maçarico da Costa N and Martin KAC, How Thalamus Connects to Spiny Stellate Cells in the Cat Visual Cortex, J. Neurosci. 31, 2925 (2011).
[58] Furuta T, Deschênes M, and Kaneko T, Anisotropic Distribution of Thalamocortical Boutons in Barrels, J. Neurosci. 31, 6432 (2011).
[59] Ko H, Hofer SB, Pichler B, Buchanan KA, Sjöström PJ, and Mrsic-Flogel TD, Functional Specificity of Local Synaptic Connections in Neocortical Networks, Nature (London) 473, 87 (2011).
[60] Xue M, Atallah BV, and Scanziani M, Equalizing Excitation-Inhibition Ratios across Visual Cortical Neurons, Nature (London) 511, 596 (2014).
[61] Schoonover CE, Tapia J-C, Schilling VC, Wimmer V, Blazeski R, Zhang W, Mason CA, and Bruno RM, Comparative Strength and Dendritic Organization of Thalamocortical and Corticocortical Synapses onto Excitatory Layer 4 Neurons, J. Neurosci. 34, 6746 (2014).
[62] Znamenskiy P, Kim M-H, Muir DR, Iacaruso MF, Hofer SB, and Mrsic-Flogel TD, Functional Selectivity and Specific Connectivity of Inhibitory Neurons in Primary Visual Cortex, bioRxiv:294835.
[63] Barral J and Reyes A, Synaptic Scaling Rule Preserves Excitatory-Inhibitory Balance and Salient Neuronal Network Dynamics, Nat. Neurosci. 19, 1690 (2016).
[64] Destexhe A, Mainen Z, and Sejnowski T, Kinetic Models of Synaptic Transmission (MIT Press, Cambridge, MA, 1998), Vol. 2.
[65] McCormick DA, Connors BW, Lighthall JW, and Prince DA, Comparative Electrophysiology of Pyramidal and Sparsely Spiny Stellate Neurons of the Neocortex, J. Neurophysiol. 54, 782 (1985).
[66] Postlethwaite M, Hennig MH, Steinert JR, Graham BP, and Forsythe ID, Acceleration of AMPA Receptor Kinetics Underlies Temperature-Dependent Changes in Synaptic Strength at the Rat Calyx of Held, J. Physiol. 579, 69 (2007).
[67] Brunel N and Sergi S, Firing Frequency of Leaky Integrate-and-Fire Neurons with Synaptic Current Dynamics, J. Theor. Biol. 195, 87 (1998).
[68] Oleskiw TD, Bair W, Shea-Brown E, and Brunel N, Firing Rate of the Leaky Integrate-and-Fire Neuron with Stochastic Conductance-Based Synaptic Inputs with Short Decay Times, arXiv:2002.11181.
[69] Johannesma PIM, Diffusion Models for the Stochastic Activity of Neurons (Springer, Berlin, 1968), pp. 116–144.
[70] Moreno-Bote R and Parga N, Role of Synaptic Filtering on the Firing Response of Simple Model Neurons, Phys. Rev. Lett. 92, 028102 (2004).
[71] Stimberg M, Brette R, and Goodman DF, Brian 2, an Intuitive and Efficient Neural Simulator, eLife 8, e47314 (2019).
[72] Badel L, Firing Statistics and Correlations in Spiking Neurons: A Level-Crossing Approach, Phys. Rev. E 84, 041919 (2011).
[73] Eyre MD, Renzi M, Farrant M, and Nusser Z, Setting the Time Course of Inhibitory Synaptic Currents by Mixing Multiple GABAA Receptor α Subunit Isoforms, J. Neurosci. 32, 5853 (2012).
[74] Paoletti P, Bellone C, and Zhou Q, NMDA Receptor Subunit Diversity: Impact on Receptor Properties, Synaptic Plasticity and Disease, Nat. Rev. Neurosci. 14, 383 (2013).
[75] Greger IH, Watson JF, and Cull-Candy SG, Structural and Functional Architecture of AMPA-Type Glutamate Receptors and Their Auxiliary Proteins, Neuron 94, 713 (2017).
[76] Liu SJ and Cull-Candy SG, Activity-Dependent Change in AMPA Receptor Properties in Cerebellar Stellate Cells, J. Neurosci. 22, 3881 (2002).
[77] Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, and Harris KD, The Asynchronous State in Cortical Circuits, Science 327, 587 (2010).
[78] Ecker AS, Berens P, Keliris GA, Bethge M, Logothetis NK, and Tolias AS, Decorrelated Neuronal Firing in Cortical Microcircuits, Science 327, 584 (2010).
[79] Cohen MR and Kohn A, Measuring and Interpreting Neuronal Correlations, Nat. Neurosci. 14, 811 (2011).
[80] Rubin DB, VanHooser SD, and Miller KD, The Stabilized Supralinear Network: A Unifying Circuit Motif Underlying Multi-input Integration in Sensory Cortex, Neuron 85, 402 (2015).
[81] Ahmadian Y, Rubin DB, and Miller KD, Analysis of the Stabilized Supralinear Network, Neural Comput. 25, 1994 (2013).
[82] Ahmadian Y and Miller KD, What Is the Dynamical Regime of Cerebral Cortex?, Neuron 109, 3373 (2021).
[83] Mongillo G, Hansel D, and van Vreeswijk C, Bistability and Spatiotemporal Irregularity in Neuronal Networks with Nonlinear Synaptic Transmission, Phys. Rev. Lett. 108, 158101 (2012).
[84] Baker C, Zhu V, and Rosenbaum R, Nonlinear Stimulus Representations in Neural Circuits with Approximate Excitatory-Inhibitory Balance, PLoS Comput. Biol. 16, e1008192 (2020).
[85] Sanzeni A, Histed MH, and Brunel N, Response Nonlinearities in Networks of Spiking Neurons, PLoS Comput. Biol. 16, e1008165 (2020).
[86] Sanzeni A, Akitake B, Goldbach HC, Leedy CE, Brunel N, and Histed MH, Inhibition Stabilization Is a Widespread Property of Cortical Networks, eLife 9, e54875 (2020).
[87] Richardson MJE, Firing-Rate Response of Linear and Nonlinear Integrate-and-Fire Neurons to Modulated Current-Based and Conductance-Based Synaptic Drive, Phys. Rev. E 76, 021919 (2007).
[88] Barbieri F and Brunel N, Irregular Persistent Activity Induced by Synaptic Excitatory Feedback, Front. Comput. Neurosci. 1, 5 (2007).
[89] Ermentrout B and Terman D, The Mathematical Foundations of Neuroscience (Springer, New York, 2010), Vol. 35.
[90] Amit DJ and Brunel N, Dynamics of a Recurrent Network of Spiking Neurons before and following Learning, Network 8, 373 (1997).
[91] van Vreeswijk C and Farkhooi F, Fredholm Theory for the Mean First-Passage Time of Integrate-and-Fire Oscillators with Colored Noise Input, Phys. Rev. E 100, 060402(R) (2019).
[92] Rice SO, Mathematical Analysis of Random Noise, Bell Syst. Tech. J. 23, 282 (1944).
[93] Markram H, Wang Y, and Tsodyks M, Differential Signaling via the Same Axon of Neocortical Pyramidal Neurons, Proc. Natl. Acad. Sci. U.S.A. 95, 5323 (1998).
[94] Tsodyks MV and Markram H, The Neural Code between Neocortical Pyramidal Neurons Depends on Neurotransmitter Release Probability, Proc. Natl. Acad. Sci. U.S.A. 94, 719 (1997).
[95] Lindner B, Gangloff D, Longtin A, and Lewis JE, Broadband Coding with Dynamic Synapses, J. Neurosci. 29, 2076 (2009).