PLOS Computational Biology. 2024 Jun 27;20(6):e1012190. doi: 10.1371/journal.pcbi.1012190

The stabilized supralinear network accounts for the contrast dependence of visual cortical gamma oscillations

Caleb J Holt 1, Kenneth D Miller 2, Yashar Ahmadian 3,*
Editor: Tatiana Engel
PMCID: PMC11236182  PMID: 38935792

Abstract

When stimulated, neural populations in the visual cortex exhibit fast rhythmic activity with frequencies in the gamma band (30-80 Hz). The gamma rhythm manifests as a broad resonance peak in the power-spectrum of recorded local field potentials, which exhibits various stimulus dependencies. In particular, in macaque primary visual cortex (V1), the gamma peak frequency increases with increasing stimulus contrast. Moreover, this contrast dependence is local: when contrast varies smoothly over visual space, the gamma peak frequency in each cortical column is controlled by the local contrast in that column’s receptive field. No parsimonious mechanistic explanation for these contrast dependencies of V1 gamma oscillations has been proposed. The stabilized supralinear network (SSN) is a mechanistic model of cortical circuits that has accounted for a range of visual cortical response nonlinearities and contextual modulations, as well as their contrast dependence. Here, we begin by showing that a reduced SSN model without retinotopy robustly captures the contrast dependence of gamma peak frequency, and provides a mechanistic explanation for this effect based on the observed non-saturating and supralinear input-output function of V1 neurons. Given this result, the local dependence on contrast can trivially be captured in a retinotopic SSN which however lacks horizontal synaptic connections between its cortical columns. However, long-range horizontal connections in V1 are in fact strong, and underlie contextual modulation effects such as surround suppression. We thus explored whether a retinotopically organized SSN model of V1 with strong excitatory horizontal connections can exhibit both surround suppression and the local contrast dependence of gamma peak frequency. We found that retinotopic SSNs can account for both effects, but only when the horizontal excitatory projections are composed of two components with different patterns of spatial fall-off with distance: a short-range component that only targets the source column, combined with a long-range component that targets columns neighboring the source column. We thus make a specific qualitative prediction for the spatial structure of horizontal connections in macaque V1, consistent with the columnar structure of cortex.

Author summary

When large populations of the brain’s neurons fire in synchrony, the resulting electrical signals, which are measurable even at the scalp, exhibit oscillatory behaviour. Gamma rhythms are a fast sub-type of such oscillations generated by neural populations in visual processing areas of the cerebral cortex. The characteristics of gamma oscillations depend on the visual scene observed by the animal. For example, visual stimuli with stronger contrast evoke higher-frequency oscillations. Moreover, when contrast varies over the visual scene, signals measured at different cortical locations oscillate at frequencies determined by the local contrasts in the corresponding visual scene locations. The circuit mechanisms underlying these phenomena are largely unknown. Here, we show how a model of cortical circuits explains the contrast dependence of gamma frequency as arising from the empirical observation that a given percent increase in the input to a cortical neuron results in a higher percent increase in its output. Furthermore, the model imputes the local contrast dependence of gamma frequency to a certain spatial pattern of connectivity between cortical neurons. Relating properties of an easily measurable brain signal to features of brain circuits can link anomalies in the two, which may potentially be exploited in the diagnosis of brain disorders.

Introduction

When presented with a stimulus, populations of neurons within visual cortices exhibit elevated rhythmic activity with frequencies in the so-called gamma band (30–80 Hz) [1, 2]. These gamma oscillations can be observed in local field potential (LFP) or electroencephalogram (EEG) recordings and, when present, manifest as peaks in the LFP/EEG power-spectra. It has been proposed that gamma oscillations perform key functions in neural processing such as feature binding [3], dynamic communication or routing between cortical areas [4–7], or as a timing or “clock” mechanism that can enable coding by spike timing [8–12]. These proposals remain controversial [1, 13].

While the computational role of gamma rhythms is not well understood, much is known about their phenomenology. For example, defining characteristics of gamma oscillations, such as the width and height of the spectral gamma peak, as well as its location on the frequency axis (peak frequency), exhibit systematic dependencies on various stimulus parameters [1, 2, 14, 15]. In particular, in the primary visual cortex (V1) of macaque monkeys, the power-spectrum gamma peak moves to higher frequencies as the contrast of a large and uniform grating stimulus is increased [1, 2]. This establishes a monotonic relationship between gamma peak frequency and the grating contrast. We will refer to this contrast-frequency relationship, obtained using a grating stimulus with uniform contrast, as the “contrast dependence” of gamma peak frequency.

Moreover, when animals are presented with a stimulus with non-uniform contrast that varies over the visual field (and hence over nearby cortical columns in V1), it is the local stimulus contrast that determines the peak frequency of gamma oscillations at a cortical location [1]. Specifically, [1] used a Gabor stimulus (which has smoothly decaying contrast with increasing distance from the stimulus center), and found that the gamma peak frequency of different V1 recording sites match the predictions resulting from the frequency-contrast relationship obtained from the uniform grating experiment, but using the local Gabor contrast in that site’s receptive field. We refer to this second effect as the “local contrast dependence” of gamma peak frequency.

It is well-known that networks of excitatory and inhibitory neurons with biologically realistic neural and synaptic time-constants can exhibit oscillations with frequency in the gamma band (e.g., [16, 17]; see [18] for a review). However, no mechanistic circuit model of visual cortex has been proposed which can robustly and comprehensively account for the contrast dependence of gamma oscillations. [2] did propose a rate model that accounts for the increase of gamma peak frequency with increasing global contrast. Their treatment only modeled the interactions between a single excitatory and a single inhibitory population, which is sufficient for spatially uniform stimuli, but cannot explain the local contrast dependence of the gamma peak frequency. Moreover, even in the case of a uniform-contrast stimulus, this model could only produce very weak contrast-dependence of peak frequency, and further required a contrast-dependent scaling of the intrinsic time-constant of excitatory neurons. Here, we develop a parsimonious and self-contained mechanistic model (with fixed neural and network parameters) which accounts for the global as well as local contrast dependence of the gamma peak.

It is not clear how the local contrast dependence of gamma oscillations can be reconciled with key features of cortical circuits. This locality would trivially emerge if cortical columns were non- or weakly interacting; in that case each column’s oscillation properties would be determined by its feedforward input (controlled by the local contrast). However, nearby cortical columns do interact strongly via the prominent horizontal connections connecting them [19]. These interactions manifest, e.g., in contextual modulations of V1 responses, such as in surround suppression [20], which are thought to be partly mediated by horizontal connections [21].

Surround suppression is the phenomenon wherein stimuli outside the classical receptive field (RF) of a V1 neuron, which by themselves cannot drive the cell to respond, nevertheless modulate the cell’s response, typically by suppressing it. Surround suppression results in a non-monotonic “size tuning curve”, which is obtained by measuring a cell’s response to circular gratings of varying sizes centered on that cell’s RF: the response first increases with increasing stimulus size, but then decreases as the grating increasingly covers regions surrounding the RF. Here we test whether a model of V1, featuring biologically plausible horizontal connections, can capture both surround suppression and the local contrast dependence of gamma oscillations.

A parsimonious, biologically plausible model of cortical circuitry which has successfully accounted for a range of cortical contextual modulations and their contrast dependence is the stabilized supralinear network (SSN) [22, 23]. In particular, the SSN robustly captures the contrast dependencies of surround suppression, e.g., that size tuning curves peak at smaller stimulus sizes with increasing stimulus contrast [22]. Being a recurrently connected firing rate model with excitatory and inhibitory neurons, we expect the SSN to be able to exhibit oscillations similar to gamma rhythms. However, to capture fast dynamical phenomena, and in particular the gamma band resonance frequency, it is key to properly account for fast synaptic filtering as provided by the fast ionotropic receptors, AMPA and GABAA [17, 24, 25]. At the same time, it is useful to include the slower NMDA conductances, to help stabilize the network dynamics given strong overall recurrent excitation. We thus started by extending the SSN model to properly account for input currents through different synaptic receptor types, with different filtering timescales.

The synchrony and coherence characteristics of gamma oscillations depend considerably on the stimulus condition. In some conditions, such as for large high-contrast gratings, these oscillations are very pronounced and result in rather sharp peaks in the power spectrum of LFP or multi-unit activity [2, 15]. However, the gamma phase is only auto-coherent over relatively short intervals (not lasting more than a few periods of oscillation) and their timing and duration vary stochastically [13, 26]. These characteristics are consistent with selective amplification by the network of gamma frequencies in the input noise, rather than a noise-free periodic oscillation, and gamma oscillations have been modelled as such [2630]. We therefore used a noise-driven SSN with multiple synaptic receptor types to model gamma oscillations.

We start the Results section by developing an extension of the SSN that models the dynamics of input currents through different synaptic receptor types, with different timescales. We then study a reduced noise-driven SSN composed of two units representing excitatory (E) and inhibitory (I) sub-populations. We show that, for a wide range of biological parameters, this reduced SSN model generates gamma oscillations with peak frequency that robustly increases with increasing external drive to the network. We show that this robust contrast dependence is a consequence of a key feature of the SSN: the supralinear input-output (I/O) function of its neurons (which is known to fit well the non-saturating and expansive relationship between the firing rate and membrane voltage of V1 neurons [31, 32]). We next investigate the gamma peak’s local contrast dependence using an expanded retinotopically organized SSN model of V1, with E and I units in different cortical columns. We show that this network is capable of reproducing the local contrast dependence of gamma peak frequency while exhibiting realistic surround suppression. However, as we show, this is only possible when the spatial fall-off of excitatory connection strengths has two distinct components: a sharp immediate fall across a cortical column’s width, and a slower fall off that can range over several columns. This “local plus long-range” spatial structure of horizontal connections, which we will more shortly refer to as “columnar structure”, balances the trade-off between capturing local contrast dependence (requiring short-range or weak horizontal connections) and surround suppression (requiring the opposite). We show that achieving this balance does not require fine-tuning of parameters and is robust to considerable parameter variations. We end by providing a mathematical explanation of the mechanism underlying local contrast dependence reconciled with strong surround suppression in this model, based on the structure of its normal oscillatory modes. Finally, in the Discussion, we conclude by discussing the implications of our findings for the structure of cortical horizontal connections and the shape of neural input/output nonlinearities.

Results

Noise-driven SSN with multiple synaptic currents

As motivated in the Introduction, and with the aim of modeling gamma oscillations, we started by extending the SSN model to properly account for synaptic currents through different receptor types with different kinetics. In its original form, the SSN’s activity dynamics are governed by standard firing rate equations [33], in which each neuron is described by a single dynamical variable: either its output firing rate [22, 23] or its total input current [34]. In the extended model, by contrast, each neuron will have more than one dynamical variable, corresponding to its input currents through different synaptic receptor types. Concretely, we will include the three main ionotropic synaptic receptors in the model: AMPA and NMDA receptors which mediate excitatory inputs, and GABAA (henceforth abbreviated to GABA) receptors which mediate the inhibitory input. For a network of N neurons, we will arrange these input currents to different neurons into three N-dimensional vectors, $\mathbf{h}^\alpha$, where α ∈ {AMPA, NMDA, GABA} denotes the receptor type. To model the kinetics of different receptors, we will ignore the very fast rise-times of all receptor types (as the corresponding timescales are much faster than the characteristic timescales of gamma oscillations), and only account for the receptor decay-times, which we denote by $\tau_\alpha$. With this assumption, the dynamics of $\mathbf{h}_t^\alpha$ are governed by (see Methods for a derivation)

$$\tau_\alpha \frac{d\mathbf{h}_t^\alpha}{dt} + \mathbf{h}_t^\alpha = W^\alpha \mathbf{r}_t + \mathbf{I}_t^\alpha \qquad (1)$$

where $\mathbf{r}_t$ is the vector of firing rates, $W^\alpha \mathbf{r}_t$ and $\mathbf{I}_t^\alpha$ denote the recurrent and external inputs to the network mediated by receptor α, respectively, and $W^\alpha$ are N × N matrices denoting the contributions of different receptor types to the recurrent connectivity weights; the total recurrent connectivity weight matrix is thus given by $W \equiv \sum_\alpha W^\alpha$. As in the cortex, the external input to the network is excitatory, and for simplicity we further assume that it only enters through AMPA receptors (i.e. $\mathbf{I}_t^\alpha$ is nonzero only for α = AMPA, and we will thus drop this superscript and denote this input by $\mathbf{I}_t$); including an NMDA component would not affect our results, as NMDA is slow relative to gamma-band timescales. To close the system for the dynamical variables $\mathbf{h}_t^\alpha$, we have to relate the output rate of a neuron to its total input current. The fast synaptic filtering provided by AMPA and GABA allows for a static (or instantaneous) approximation to the input-output (I/O) transfer function of neurons [35, 36] (see Methods for further justification of this approximation):

$$\mathbf{r}_t = F\big(\mathbf{h}_t^{\mathrm{total}}\big) = F\Big(\sum_\beta \mathbf{h}_t^\beta\Big), \qquad (2)$$

where the I/O function F(⋅) acts element-wise on its vector argument. As in the original SSN, we take this I/O transfer function to be a supralinear rectified power-law, which is the essential ingredient of the SSN (see Fig 1A inset): $F(v) \equiv k[v]_+^n$, where k is a positive constant, n > 1 (corresponding to supralinearity), and $[x]_+ \equiv \max(0, x)$ denotes rectification. While the I/O function of biological neurons saturates at high firing rates (e.g., due to refractoriness), throughout the natural dynamic range of cortical neurons firing rates stay relatively low. In fact, in V1 neurons the relationship between the firing rate and the mean membrane potential (an approximate surrogate for the neuron’s net input) shows no saturation throughout the entire range of firing rates driven by visual stimuli, and is well approximated by a supralinear rectified power-law [31, 32].
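
As a concrete illustration, the following is a minimal sketch (not the authors' code) of Eqs 1 and 2 for a two-unit (E, I) network, integrated with a simple Euler scheme. All weights, time-constants, and I/O parameters are illustrative placeholders rather than the values used in Methods, and the input noise is taken to be white for brevity (the paper uses temporally correlated noise; see below).

```python
# Minimal sketch of the multi-receptor SSN dynamics (Eqs 1-2) for a 2-unit (E, I) network.
# All parameter values are illustrative placeholders, not the paper's fitted values.
import numpy as np

rng = np.random.default_rng(0)

tau = {"AMPA": 0.004, "NMDA": 0.100, "GABA": 0.005}   # receptor decay times (s)

# W[a][i, j] is the weight from unit j to unit i (0 = E, 1 = I); signs follow Dale's law
W_exc = np.array([[1.00, 0.0],
                  [1.25, 0.0]])
W_inh = np.array([[0.0, -0.75],
                  [0.0, -0.50]])
nmda_frac = 0.3                                       # NMDA fraction of excitatory weights
W = {"AMPA": (1 - nmda_frac) * W_exc, "NMDA": nmda_frac * W_exc, "GABA": W_inh}

k, n = 0.04, 2.0                                      # rectified power-law I/O parameters
f = lambda v: k * np.maximum(v, 0.0) ** n             # Eq 2: rates from total input

def simulate(I_dc, T=2.0, dt=1e-4, noise_std=0.3):
    """Euler integration of Eq 1; white input noise enters through AMPA only."""
    steps = int(T / dt)
    h = {a: np.zeros(2) for a in tau}                 # synaptic input currents h^alpha
    lfp = np.empty(steps)                             # LFP proxy: total input to the E unit
    for t in range(steps):
        r = f(sum(h.values()))
        for a in tau:
            if a == "AMPA":
                I_ext = I_dc + noise_std * rng.standard_normal(2) / np.sqrt(dt)
            else:
                I_ext = 0.0
            h[a] = h[a] + dt * (-h[a] + W[a] @ r + I_ext) / tau[a]
        lfp[t] = sum(h.values())[0]
    return lfp

lfp_trace = simulate(I_dc=np.array([20.0, 20.0]))     # a stronger I_dc stands in for higher contrast
```

Plotting lfp_trace for increasing values of I_dc should qualitatively resemble Fig 1B, with faster fluctuations at stronger drive.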

Fig 1. Contrast dependence of the gamma peak frequency in the 2-population model.


A: Schematic of the 2-population Stabilized Supralinear Network (SSN). Excitatory (E) connections end in a circle; inhibitory (I) connections end in a line. Each unit represents a sub-population of V1 neurons of the corresponding E /I type. Both receive inputs from the stimulus, as well as noise input. Inset: The rectified power-law Input/Output transfer function of SSN units (black). Red lines indicate the slope of the I/O function at particular locations. B: Local field potential (LFP) traces, modeled as total net input to the E unit, from the stochastic model simulations under four different stimulus contrasts (c): 0% (black) equivalent to no stimulus or spontaneous activity, 25% (blue), 50% (green), 100% (red). The same color scheme for stimulus contrast is used throughout the paper. (Note that we take stimulus strength (input firing rate) to be proportional to contrast, although in reality it is monotonic but sublinear in contrast, [38]). C: Mean firing rates of the excitatory (orange) and inhibitory (cyan) units as a function of contrast, from the stochastic simulations (dots) and the noise-free approximation of the fixed point, Eq 3 (stars). Note that the dots and stars closely overlap. D: Reproduction of figure 1I from [1] (with permission) showing the average of experimentally measured LFP power-spectra in Macaque V1. The inset shows the dependence of gamma peak frequency on the contrast of the grating stimulus covering the recording site’s receptive field. E: LFP power-spectra for c = 0%, 25%, 50%, 100% (black, blue, green, and red curves, respectively) calculated from the noise-driven stochastic SSN simulations (dots), or using the linearized approximation (solid lines). F: Gamma peak frequency as a function of contrast, obtained from power-spectra calculated using stochastic simulations (dots and dashed line) or the linearized approximation (stars and solid line).

Two-population model

We start by studying a reduced two-population model of V1 consisting of two units (or representative mean-field neurons): one excitatory and one inhibitory unit, respectively representing the excitatory and inhibitory neural sub-populations in the retinotopically relevant region of V1. This reduced model is appropriate for studying conditions in which the spatial profile of the activity is irrelevant, e.g., for a full-field grating stimulus where we can assume the relevant V1 network is uniformly activated by the stimulus. Both units receive external inputs, and make reciprocal synaptic connections with each other as well as themselves (Fig 1A).

As pointed out in the Introduction, empirical evidence is most consistent with visual cortical gamma oscillations resulting from noise-driven fluctuations [13, 26, 27, 29]. To model such noise-driven oscillations using the SSN, as in [34], we assumed the external input consists of two terms It = IDC + ηt, where IDC represents the feedforward stimulus drive to the network (by a steady time-independent stimulus) and scales with the contrast of the visual stimulus, and ηt represents the stochastic noise input to the network. This input noise could be attributed to several sources, including sources that are (biologically) external or internal to V1. External noise can originate upstream in the lateral geniculate nucleus (LGN) of thalamus, or in feedback from higher areas. Internally generated noise results from the network’s own irregular spiking (not explicitly modeled) which survives mean-field averaging as a finite-size effect (given the finite size of the implicitly-modeled spiking neuron sub-populations underlying the SSN’s units). For parsimony, we assumed noise statistics are independent of stimuli, and thus of contrast. (Internally generated spiking noise is expected to have power that grows with the emerging firing rates in the network, as in a Poisson process. Since we are not interested in modeling changes in the gamma power —as opposed to peak frequency— with increasing contrast, we ignored this scaling, as it would not qualitatively affect the contrast dependence of peak frequency). More specifically, we assumed that noise inputs to different neurons are independent, and each component is temporally correlated pink noise with a correlation time on the order of a few milliseconds (our main results are robust to changes in this parameter, as well as to the introduction of input noise correlation across neurons).
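
One simple way to generate such temporally correlated input noise is sketched below, using an Ornstein-Uhlenbeck process with a few-millisecond correlation time as a stand-in for the pink noise specified in Methods; all values are illustrative.

```python
# Sketch: temporally correlated input noise with a ~5 ms correlation time, modeled here
# as an Ornstein-Uhlenbeck process (a simple single-timescale stand-in for pink noise).
import numpy as np

def correlated_noise(n_units, T=2.0, dt=1e-4, tau_corr=0.005, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    decay = np.exp(-dt / tau_corr)                     # exact one-step OU decay factor
    eta = np.zeros((steps, n_units))
    for t in range(1, steps):
        eta[t] = decay * eta[t - 1] + sigma * np.sqrt(1 - decay**2) * rng.standard_normal(n_units)
    return eta                                         # independent across units, correlated in time

eta = correlated_noise(n_units=2)                      # noise traces for the E and I units
```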

For the first results shown in Fig 1, we directly simulated the stochastic Eq 1. Fig 1C (dots) shows the average firing rates found in these simulations, and their contrast-dependence. The LFP signal is thought to result primarily from inputs to pyramidal cells, as they have relatively large dipole moments [37]; we therefore took the net input to the E sub-population to represent the LFP signal. Fig 1B shows examples of raw simulated LFP traces for different stimulus contrasts. For high enough contrast (including all nonzero contrasts shown), the LFP signal exhibits oscillatory behavior. These oscillations can be studied via their power-spectra (Fig 1E, dots; see Methods). As Fig 1F shows, the peak frequencies of the simulated LFP power-spectra shift to higher frequencies with increasing contrast. The two-population SSN model thus captures the empirically observed contrast dependence of gamma peak frequency (compare Fig 1F with Fig 1I of [1] reproduced here as Fig 1D).
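
The spectral analysis can be sketched as follows, using Welch's method from SciPy on a simulated LFP trace; the window length and the simple in-band maximum used to locate the peak are illustrative simplifications of the procedure described in Methods.

```python
# Sketch: power-spectrum estimation and gamma-peak extraction for a simulated LFP trace.
import numpy as np
from scipy.signal import welch

def lfp_power_spectrum(lfp, dt=1e-4, seg_sec=0.5):
    fs = 1.0 / dt                                      # sampling rate of the simulation
    return welch(lfp, fs=fs, nperseg=int(seg_sec * fs))

def gamma_peak_frequency(freqs, psd, band=(20.0, 100.0)):
    """Frequency of maximum power inside the band (a simplified peak finder)."""
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[in_band][np.argmax(psd[in_band])]
```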

Linearized approximation

To understand this behavior better, we employed a linearization scheme for calculating the LFP power-spectra. The linearization method allows for faster numerical computation of the LFP power-spectra, without the need to simulate the stochastic system Eq 1. More importantly, the linearized framework allows for analytical approximations and insights, which as we show below, elucidate the mechanism underlying the contrast dependence of the gamma peak. We thus explain this approximation with some detail here (see Methods for further details).

First, we note that the linearized approximation scheme is only meaningful when the noise-free network is in a regime of damped (as opposed to sustained) oscillations that decay to a stable steady state (fixed point) with constant neural activity. While in the absence of noise, damped oscillations are transient, input noise constantly rekindles them (cf. Fig 1B). When input noise fluctuations are sufficiently fast, noise-driven damped oscillations manifest as a resonance peak in the power spectrum of network activity (as in Fig 1E). Empirically recorded gamma peaks are consistent with such a mechanism and have been modelled as such [27, 29]. Alternatively, the noise-free network can be in a regime of sustained oscillations. Changes in the network’s connectivity parameters or stimulus input can lead to a transition, a so-called Hopf bifurcation, between the regimes of damped vs. sustained oscillations. With weak noise, sustained oscillations create very sharp peaks in the LFP power spectrum, followed by trailing peaks at subsequent harmonics which are rarely visible in LFP recordings. However, given strong enough noise, sustained oscillations above a Hopf bifurcation can also lead to realistic gamma peaks [28, 30]. Indeed, in the presence of noise, the Hopf bifurcation ceases to be a well-defined sharp transition, and the network behavior just below or just above the transition point (as defined in the noise-free network) can be very similar (we will provide examples of this below, in the context of the retinotopic SSN model). Thus, because of the theoretical and computational benefits of the linearized approximation noted above, we limited our main explorations and analyses to the regime of damped oscillations below the Hopf bifurcation. We briefly examine the regime above the Hopf bifurcation at the end of the Results.

Assuming the network is below the Hopf bifurcation, the linearization scheme proceeds as follows. In any stimulus condition (corresponding to a given $\mathbf{I}^{\mathrm{DC}}$), we first find the network’s steady state in the absence of noise, by numerically solving the noise-free version of Eq 1 (without linearization). The corresponding fixed-point equations can be simplified if we sum them over α, and define $\mathbf{h}^* \equiv \sum_\alpha \mathbf{h}^{*\alpha}$ and $\mathbf{I}^{\mathrm{DC}} \equiv \sum_\alpha \mathbf{I}^{\mathrm{DC},\alpha}$. We then arrive at the same fixed-point equation for $\mathbf{h}^*$ as in the original SSN [23]:

$$\mathbf{h}^* = W F(\mathbf{h}^*) + \mathbf{I}^{\mathrm{DC}}. \qquad (3)$$

After numerically finding $\mathbf{h}^*$, we then expand Eq 1 to first order in the noise and the noise-driven deviations around the fixed point, $\delta\mathbf{h}_t^\alpha \equiv \mathbf{h}_t^\alpha - \mathbf{h}^{*\alpha}$, to obtain

$$\tau_\alpha \frac{d\,\delta\mathbf{h}_t^\alpha}{dt} = -\delta\mathbf{h}_t^\alpha + \widetilde{W}^\alpha \sum_\beta \delta\mathbf{h}_t^\beta + \boldsymbol{\eta}_t^\alpha \qquad (4)$$

where we defined

$$\widetilde{W}^\alpha \equiv W^\alpha\, \mathrm{diag}\big(F'(\mathbf{h}^*)\big), \qquad (5)$$

where $F'(\mathbf{h}^*)$ denotes the vector of gains (slopes) of the I/O functions of different neurons at the operating point $\mathbf{h}^*$ (see the red tangent lines in Fig 1A inset), and diag constructs a diagonal matrix from the vector. As we explain in the next subsection, the neural gains and their dependence on the operating point rates (themselves dependent on the stimulus $\mathbf{I}^{\mathrm{DC}}$, via Eq 3) play a crucial role in the contrast dependence of gamma peak frequency.

Technically, the linear approximation is valid for small noise strengths, but we found that for the noise levels that elicited fluctuations with realistic sizes, the approximation was very good. As shown in Fig 1E and 1F, the LFP power-spectra and their peak frequencies obtained using the linear approximation agree very well with those estimated from the direct stochastic simulations of Eq 1. The firing rates of E and I units at the fixed-point solution Eq 3 also provide a very good approximation to their mean steady-state rates, at different contrasts, as obtained from direct stochastic simulations of Eq 1 (Fig 1C). Below, we will thus calculate all power-spectra using the computationally faster noise-free determination of the fixed point, Eq 3, and the linear approximation, Eq 4, instead of stochastic simulations of Eq 1.
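
The two-step procedure can be sketched as follows, reusing the toy two-unit network of the earlier simulation sketch: the fixed point of Eq 3 is found by relaxing the noise-free dynamics, and the LFP power spectrum is then computed from the frequency-domain response of the linear system Eq 4. A flat (white) input-noise spectrum is used here as a placeholder for the paper's temporally correlated noise.

```python
# Sketch of the linearized calculation: noise-free fixed point (Eq 3) plus the linear
# frequency-domain response of Eq 4. Parameter values are illustrative placeholders.
import numpy as np

def fixed_point(W_total, I_dc, f, dt=1e-4, T=5.0, tau_relax=0.01):
    """Relax to a solution of h = W_total F(h) + I_dc (Eq 3) by forward integration."""
    h = np.zeros_like(I_dc)
    for _ in range(int(T / dt)):
        h = h + (dt / tau_relax) * (-h + W_total @ f(h) + I_dc)
    return h

def lfp_psd_linearized(freqs_hz, W, tau, gains, noise_psd=1.0):
    """
    Power spectrum of the model LFP (total input to unit 0, the E unit) from the
    linearized dynamics of Eq 4. W and tau are dicts over receptor types; 'gains'
    is the vector F'(h*) of neural gains at the fixed point.
    """
    types = list(tau)
    N = len(gains)
    M = len(types) * N
    G = np.diag(gains)
    J = np.zeros((M, M))                       # Jacobian of the stacked delta-h variables
    for i, a in enumerate(types):
        for j, b in enumerate(types):
            block = W[a] @ G
            if a == b:
                block = block - np.eye(N)
            J[i*N:(i+1)*N, j*N:(j+1)*N] = block / tau[a]
    i_ampa = types.index("AMPA")
    B = np.zeros((M, N))                       # noise (independent per unit) enters via AMPA only
    B[i_ampa*N:(i_ampa+1)*N, :] = np.eye(N) / tau["AMPA"]
    v = np.zeros(M)
    v[0::N] = 1.0                              # read out the sum over receptor types, unit 0
    psd = np.empty(len(freqs_hz))
    for idx, f_hz in enumerate(freqs_hz):
        resp = v @ np.linalg.solve(2j * np.pi * f_hz * np.eye(M) - J, B)
        psd[idx] = noise_psd * np.sum(np.abs(resp) ** 2)
    return psd

# toy usage with the same 2-unit weights and time-constants as in the earlier sketch
tau = {"AMPA": 0.004, "NMDA": 0.100, "GABA": 0.005}
W_exc = np.array([[1.0, 0.0], [1.25, 0.0]]); W_inh = np.array([[0.0, -0.75], [0.0, -0.5]])
W = {"AMPA": 0.7 * W_exc, "NMDA": 0.3 * W_exc, "GABA": W_inh}
k, n = 0.04, 2.0
f = lambda v: k * np.maximum(v, 0.0) ** n
h_star = fixed_point(sum(W.values()), np.array([20.0, 20.0]), f)
gains = k * n * np.maximum(h_star, 0.0) ** (n - 1)     # F'(h*)
freqs = np.arange(1.0, 120.0, 1.0)
psd = lfp_psd_linearized(freqs, W, tau, gains)
```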

Robustness of the two-population model

To demonstrate that the SSN robustly produces the contrast dependence of gamma peak frequency, we simulated 1000 different instances of the 2-population network with parameters randomly drawn from wide but biologically plausible ranges. The sampled parameters were the weights of the connections between the two units (EI, IE) and their self-connections (EE, II), the relative strength of input to the excitatory and inhibitory units, and the NMDA fraction of excitatory synaptic weights.

The parameters were sampled independently except for the enforcement of two inequality constraints which previous work has shown to be necessary for ensuring the network’s dynamical stability without strong inhibition domination leading to very weak excitatory activity (see Methods and S1 Table for details). The parameter set was also rejected if the resulting SSN did not reach a stable fixed point for all studied stimulus conditions (this corresponds to our modelling choice to have the SSN in a damped oscillation regime).
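
The sampling procedure can be sketched as a simple rejection loop; the parameter ranges and the two inequality constraints below are placeholders standing in for those specified in Methods and S1 Table, and the stability check is only indicated by a comment.

```python
# Sketch of the rejection-sampling scheme: draw parameters from broad ranges, keep only
# samples satisfying the constraints and reaching stable fixed points in all conditions.
# Ranges and constraints below are hypothetical placeholders, not the paper's values.
import numpy as np

rng = np.random.default_rng(1)

def sample_parameters():
    return {
        "W_EE": rng.uniform(0.5, 2.0), "W_EI": rng.uniform(0.5, 2.0),
        "W_IE": rng.uniform(0.5, 2.0), "W_II": rng.uniform(0.25, 1.5),
        "input_ratio_I_to_E": rng.uniform(0.5, 1.5),
        "nmda_fraction": rng.uniform(0.0, 0.7),
    }

def satisfies_constraints(p):
    # placeholder stand-ins for the paper's two inequality constraints
    strong_enough_inhibition = p["W_EI"] * p["W_IE"] > p["W_EE"] * p["W_II"]
    not_inhibition_dominated = p["W_EE"] > 0.5 * p["W_II"]
    return strong_enough_inhibition and not_inhibition_dominated

accepted = []
while len(accepted) < 1000:
    p = sample_parameters()
    if not satisfies_constraints(p):
        continue
    # in the full pipeline, the network would also be solved here and the sample rejected
    # if it fails to reach a stable fixed point in any stimulus condition
    accepted.append(p)
```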

The majority of randomly sampled models produced steady-state excitatory and inhibitory firing rates that were within biologically plausible ranges, across all contrasts (Fig 2A and 2D). Furthermore, many two-population networks produced peak frequencies that were in the gamma band (30–80 Hz) for all contrast conditions, though some produced peaks at higher frequencies for the highest contrasts (Fig 2B). The distributions also shift towards higher frequencies with increasing contrast, suggesting that the two-population SSN is indeed able to robustly reproduce the contrast dependence of gamma peak frequency. To demonstrate this more directly, we show the distributions of the changes in peak frequency normalized by the change in contrast in Fig 2E. No sampled network produced a negative change in peak frequency with increasing stimulus contrast. As a further corroboration of our model, we also studied how the width of the gamma peak changed with increasing contrast. While [1, 2] did not quantify changes in their gamma peak width with increasing contrast, their results suggest that no significant systematic change in width was observed (Fig 1D). Similarly, in our two-population network, changes in the half-width of the gamma peak are relatively small, and the direction of change can be positive or negative with similar probability (Fig 2F).

Fig 2. Robustness of the contrast-dependence of gamma peak frequency to network parameter variations.


One thousand 2-population SSNs were simulated with randomly sampled parameters (but conditioned on producing stable noise-free steady-states), across wide biologically plausible ranges. All histograms show counts of sampled networks; the total numbers (n’s) vary across different histograms, as different subsets of networks produced the corresponding feature or value in the corresponding condition (e.g., a gamma peak at 50% contrast). A: Distributions of the excitatory unit’s firing rate in response to 25%, 50%, and 100% contrast stimuli (blue, green, red), plotted on a logarithmic scale. 100% of networks shown across all contrasts. B: Distributions of the gamma peak frequencies at different stimulus contrasts. The n’s (upper right) give the number of networks with a power spectrum peak above 20 Hz. C: Distributions of the gamma peak widths at different stimulus contrasts. D: Same as panel A, but for the inhibitory unit. E: Distributions of the change in gamma peak-frequency normalized by the change in stimulus contrast, between 25% and 50% contrast (cyan) or between 50% and 100% contrast (yellow). F: Same as panel E, but for gamma peak-width.

Mechanism underlying the contrast dependence of gamma peak frequency

As we will now show, the SSN sheds light on the mechanism underlying the contrast dependence of the gamma peak, and specifically pins it to the increasing neuronal gain with increasing neuronal activity, due to the expansive, supralinear nature of the neuronal I/O transfer function. This also explains the robustness of the effect to changes in connectivity and external input parameters as demonstrated in the previous subsection.

In the linearized approximation, the LFP power-spectrum (see Eqs 24–26 in Methods) can be expressed in terms of the so-called Jacobian matrix, i.e. the matrix of couplings of the dynamical variables, $\delta\mathbf{h}_t^\alpha$, in the linear system Eq 4; thus, for a network of N neurons, the Jacobian is a 3N × 3N matrix, or 6 × 6 in the two-population model (see Eq 33 for the explicit form). The existence of damped oscillations and the value of their frequency (which is the resonance frequency manifesting as a peak in the power-spectrum) are in turn determined by the existence of complex eigenvalues of the Jacobian matrix, and the values of their imaginary parts. Previously, the eigenvalues of the Jacobian for a standard E-I firing rate network, without different synaptic current types, were analyzed by [16], and conditions for the emergence of damped or sustained oscillations were found. In Methods (see Eigenvalue spectra of rate-based and multi-receptor SSNs in the presence and absence of NMDA), we show that, given the slowness of NMDA receptors relative to gamma timescales, the effect of NMDA receptors on the relevant complex eigenvalues of the Jacobian can be safely ignored, and as a result, only two (out of 6) eigenvalues of the resulting Jacobian can become complex (and thus able to create a gamma peak). Moreover, this pair (which we denote by $\lambda_\pm$) corresponds to the two eigenvalues of a standard E-I rate model [16] whose E and I neural time-constants are given, respectively, by the AMPA and GABA decay times:

$$2\lambda_\pm = \gamma_E\big(\widetilde{W}_{EE}-1\big) - \gamma_I\big(\widetilde{W}_{II}+1\big) \pm \sqrt{\Big[\gamma_E\big(\widetilde{W}_{EE}-1\big) + \gamma_I\big(\widetilde{W}_{II}+1\big)\Big]^2 - 4\gamma_E\gamma_I \widetilde{W}_{EI}\widetilde{W}_{IE}} \qquad (6)$$

where $\gamma_E = \tau_{\mathrm{AMPA}}^{-1}$ and $\gamma_I = \tau_{\mathrm{GABA}}^{-1}$. Here we defined

$$\widetilde{W}_{ab} \equiv W_{ab}\, F'(h_a^*) \qquad (a, b \in \{E, I\}), \qquad (7)$$

where $W_{ab} \equiv \sum_\alpha W_{ab}^\alpha$ is the total synaptic weight from unit b to unit a, and (as in Eq 5) $F'(h_a^*)$ is the gain of unit a, i.e. the slope of its I/O function, at the operating point set by the stimulus. We refer to the $\widetilde{W}_{ab}$ as effective synaptic connection weights. Unlike raw synaptic weights, these effective weights are modulated by the neural gains, and thereby by the activity levels in the steady-state operating point, which is in turn controlled by the stimulus.

As mentioned, network oscillations emerge when the above eigenvalues are complex (in which case $\lambda_+$ and $\lambda_-$ are complex conjugates). This happens when the expression under the radical in Eq 6 is negative, i.e.

$$4\gamma_E\gamma_I \widetilde{W}_{EI}\widetilde{W}_{IE} > \Big[\gamma_E\big(\widetilde{W}_{EE}-1\big) + \gamma_I\big(\widetilde{W}_{II}+1\big)\Big]^2. \qquad (8)$$

Qualitatively, the left-hand side of the above inequality is a measure of the strength of the effective negative feedback between the E and I sub-populations, while the right-hand side is a measure of the positive feedback in the network (arising from the network’s recurrent excitation and disinhibition). Oscillations thus emerge when the negative feedback loop between E and I is sufficiently strong, in the precise sense of Eq 8.

During spontaneous activity (when the external input is zero or very weak), the rates of both E and I populations are very small. This means that the spontaneous activity operating point sits near the rectification of the neuronal I/O transfer function, where the neural gains are very low (Fig 1A left). Thus in the spontaneous activity state, the dimensionless effective connections are relatively small. In the limit $\widetilde{W}_{ab} \to 0$, the left-hand side of Eq 8 goes to zero, while its right-hand side goes to $(\gamma_I - \gamma_E)^2$, which is generically positive; hence the inequality is not satisfied. This shows that the spontaneous activity state generically does not exhibit oscillations, in agreement with the lack of empirically observed gamma oscillations during spontaneous activity.

On the other hand, when condition Eq 8 does hold, the frequency of the oscillations is given by the imaginary part of the eigenvalues, i.e.

$$\text{resonance frequency} = \frac{1}{2\pi}\sqrt{\gamma_E\gamma_I \widetilde{W}_{EI}\widetilde{W}_{IE} - \Big[\gamma_E\big(\widetilde{W}_{EE}-1\big)/2 + \gamma_I\big(\widetilde{W}_{II}+1\big)/2\Big]^2} \qquad (9)$$

(the division by 2π is because the eigenvalues’ imaginary parts give the angular frequency). As we will discuss further below, the resonance frequency (approximately the gamma peak frequency; Fig 3B) thus depends on the effective connection weights and is thereby modulated by the neural gains.

Fig 3. The supralinear nature of the neural transfer function can explain the contrast dependence of gamma frequency.


A: Schematic diagrams of the 2-population SSN (see Fig 1A) receiving a low (left) or high (right) contrast stimulus. The thickness of connection lines represents the strength of the corresponding effective connection weight, which is the product of the anatomical weight and the input/output gain of the presynaptic neuron. The gain is the slope (red line) of the neural supralinear transfer function (black curve), shown inside the circles representing the E (orange) and I (cyan) units. A resonance frequency exists when the effective “negative feedback” (gray arrow enclosing a minus sign) dominates the effective “positive feedback” (gray arrows enclosing positive signs), in the sense of the inequality Eq 8. As the stimulus drive (c) increases (right panel), the neurons’ firing rates at the network’s operating point increase. As the transfer function is supralinear, this translates to higher neural gains and stronger effective connections. When a resonance frequency already exists at the lower contrast, this strengthening of effective recurrent connections leads to an increase in the gamma peak frequency, approximately given by the imaginary part of the linearized SSN’s complex eigenvalue, Eq 6. B: The eigenvalue formula Eq 9 provides an excellent approximation to the gamma peak frequency across sampled networks and contrasts simulated in Fig 2; correlation coefficient = 0.98 (p < 10^−6), for all data points combined across 25% (blue), 50% (green), and 100% (red) contrasts. C: The negative feedback loop contribution to the resonance frequency (Eq 9 with the second term under the square root neglected) overestimates gamma peak frequency but is positively and significantly correlated with it; correlation coefficient = 0.67 (p < 10^−6).

(Note, however, that the scale or order of magnitude of this frequency is set by $\gamma_E$ and $\gamma_I$, i.e., by the decay times of AMPA and GABA, as the effective connection weights are dimensionless and cannot determine the dimensionful scale of the gamma frequency; ignoring the “positive feedback” contribution in Eq 9, we find resonance frequency $\sim \sqrt{\gamma_E\gamma_I}/(2\pi)$, which for $\tau_{\mathrm{AMPA}} \sim \tau_{\mathrm{GABA}} \sim$ 4–6 ms, is on the order of 30–40 Hz).

Eq 9 provides insight into the contrast dependence of the gamma peak frequency (see Fig 3). As contrast increases, the fixed-point firing rates increase (Fig 1C). Because the SSN I/O transfer function is non-saturating and supralinear, as the rates increase, the gains (i.e., the slopes of the I/O transfer functions) of the E and I cells are also guaranteed to increase (Fig 3A, right vs. left). The increase in the gains leads in turn to the strengthening of the effective connection weights, Eq 7, and therefore of the network’s negative E-I feedback loop. When Eq 8 is satisfied, a rough approximation (Fig 3B vs. Fig 3C) to the resonance frequency is obtained by ignoring the positive feedback contribution (the second term under the square root in Eq 9). With only the negative feedback contribution retained, it is clear that an increase in neural gains leads to an increase in the resonance frequency (the precise conditions for this to occur are given in Methods, under Eigenvalue spectra of rate-based and multi-receptor SSNs in the presence and absence of NMDA). Thus as contrast increases, we expect the gamma peak in the LFP power-spectrum to move to higher frequencies, due to increasing neural gains and effective connectivity, as dictated by the supralinear neural I/O transfer function.
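
For illustration, a direct numerical evaluation of Eqs 6–9 makes this mechanism explicit. In the sketch below, the raw weights and operating-point inputs are placeholders, and the supralinear gain F′(v) = k n [v]₊^(n−1) is used to build the effective weights of Eq 7; the resonance frequency then rises with the operating point.

```python
# Numerical illustration of Eqs 6-9 (not the paper's code): the supralinear gain grows with
# the operating-point input, strengthening the effective weights and raising the resonance
# frequency. Raw weights and operating-point inputs below are illustrative placeholders.
import numpy as np

tau_ampa, tau_gaba = 0.005, 0.005                       # s
gamma_E, gamma_I = 1.0 / tau_ampa, 1.0 / tau_gaba
k, n = 0.04, 2.0
gain = lambda v: k * n * np.maximum(v, 0.0) ** (n - 1)  # F'(v) for the rectified power law

W = {"EE": 1.0, "EI": 0.75, "IE": 1.25, "II": 0.5}      # raw (unsigned) synaptic weights

def resonance_frequency_hz(h_E, h_I):
    """Eq 9, with effective weights given by raw weights scaled by the neural gains (Eq 7)."""
    gE, gI = gain(h_E), gain(h_I)
    wEE, wII = W["EE"] * gE, W["II"] * gI               # effective self-connections
    wEIwIE = W["EI"] * W["IE"] * gE * gI                # product of effective E<->I weights
    pos = gamma_E * (wEE - 1) + gamma_I * (wII + 1)     # "positive feedback" term
    neg = 4 * gamma_E * gamma_I * wEIwIE                # "negative feedback" term
    if neg <= pos ** 2:
        return None                                     # Eq 8 violated: no resonance
    return np.sqrt(neg - pos ** 2) / (4 * np.pi)        # Im(lambda_+)/(2*pi), Eq 9

# operating-point inputs standing in for a lower vs a higher contrast stimulus
print(resonance_frequency_hz(20.0, 20.0))               # ~31 Hz with these placeholders
print(resonance_frequency_hz(40.0, 40.0))               # ~62 Hz: frequency rises with drive
```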

Retinotopic SSN

We next investigated whether the SSN can account for the locality of contrast dependence of gamma peak frequency, when V1 receives a stimulus with a spatially varying contrast profile. To this end, we expanded our network from two units representing global E and I populations to many units that are retinotopically organized. We thus model the cortex as a two-dimensional grid that has an E and I sub-population (corresponding to SSN units) at each grid location, corresponding to a cortical column (Fig 4A).

Fig 4. A retinotopically-structured SSN model of V1, with a boost to local, intra-columnar excitatory connectivity, exhibits a local contrast-dependence of gamma peak frequency, as well as robust surround suppression of firing rates.


A: Schematic of the model’s retinotopic grid, horizontal connectivity, and stimulus inputs. Each cortical column has an excitatory and an inhibitory sub-population (orange and cyan balls) which receive feedforward inputs (green arrows) from the visual stimulus, here a grating, according to the column’s retinotopic location. Orange lines show horizontal connections projecting from two E units; note boost to local connectivity represented by larger central connection. Inhibitory connections (cyan lines) only targeted the same column, to a very good approximation. B: LFP power-spectra in the center column evoked by flat gratings of contrasts 25% (blue), 50% (green), and 100% (red). C: Gamma peak frequency as a function of flat grating contrast. Note that peaks were defined as local maxima of the relative power-spectrum, i.e., the point-wise ratio of the absolute power-spectrum (as shown in B), to the power-spectrum at zero contrast; see Methods. D: Firing rate responses of E (orange) and I (cyan) center sub-populations as a function of grating contrast. E: The Gabor stimulus with non-uniform contrast (falling off from center according to a Gaussian). The colored circles show the five different cortical locations (retinotopically mapped to the visual field) probed by the LFP “electrodes”. The orange probe was at the center and the distance between adjacent probe locations was 0.2° of visual angle (corresponding to 0.4 mm in V1, the width of the model columns). F: LFP power-spectra evoked by the Gabor stimulus at different probe locations (legend shows the probe distances from the Gabor center). G: Gamma peak frequency of the power-spectra at increasing distance from the Gabor center. The golden curve is the prediction for peak frequency in the displaced probe location based on the Gabor contrast in that location and the gamma peak frequency obtained in the center location for the flat grating of the same contrast. The predictor’s fit to actual Gabor frequencies is very tight (R2 = 0.98), exhibiting local gamma contrast dependence. H: Size tuning curves of the center E (orange) and I (cyan) subpopulations, at full contrast. E and I firing rates vary non-monotonically with grating size and exhibit surround suppression (suppression indices were 0.33 and 0.15, respectively).

In the retinotopic SSN, the stimulus input can vary across the network: each column can receive a different input proportional to the contrast within its receptive field. We presented this network with uniform-contrast grating stimuli of various sizes and contrasts (the stimulus in Fig 4A), as well as a Gabor stimulus (Fig 4E), similar to the one used in [1], with a contrast profile that decays smoothly with deviation from the stimulus center according to a Gaussian profile (see Eq 54). Gamma peak frequency shows only a weak dependence on stimulus orientation [2], possibly due to the averaging of LFP over an area larger than the size of orientation minicolumns. To keep our model parsimonious and computationally more tractable, we thus chose the size of our cortical columns to be roughly half the hypercolumn size in Macaque, and neglected the orientation map structure, and the dependence of external inputs and horizontal connections on preferred orientation.

We wished to study the trade-off in this model between capturing surround suppression of firing rates and capturing the local dependence of gamma peak frequency, and asked whether parameter choices exist for which the model can capture both of these effects. In particular, we studied the effect of the spatial profile of the horizontal recurrent connections between and within different cortical columns on this trade-off. In one extreme, we can consider a network in which long-range connections between different columns are very weak, and thus cortical columns are weakly interacting and can be approximated by independent two-population networks which were studied above (Figs 1 and 2). In this case, the frequency of the gamma resonance in each column depends only on the gains and activity levels in that column, which are in turn set by the feedforward input to that column, controlled by the local stimulus contrast. Therefore a network with such a connectivity structure would trivially reproduce the local contrast-dependence of gamma peak frequency. However, due to the lack of strong inter-columnar interactions, such a network would fail to produce significant surround suppression of firing rates. In the other extreme, inter-columnar connections are strong and, importantly, have a smooth fall-off (e.g. an exponential fall) with growing distance between pre- and post-synaptic columns; this is the case in most cortical network models, including the SSN model of [22] that captures surround suppression and its various contrast-dependencies. However, as we will show below, in such networks, when horizontal connections are strong enough to produce surround suppression, the gamma peak frequency is typically shared across all activated columns, regardless of the spatial contrast profile of the stimulus, and thus this connectivity structure cannot capture the local contrast-dependence of gamma peak frequency. Indeed, as shown below, within this class of networks (i.e., those with a smooth spatial connectivity profile), we did not find connectivity parameters (controlling the range and strength of horizontal connections) for which the network could produce significant surround suppression and yet capture the local contrast-dependence of gamma (see Figs 5 and 6 below).

Fig 5. Behaviour of retinotopic V1 models with and without boosted intra-columnar recurrent excitatory connectivity (columnar vs. non-columnar models, respectively) across their parameter space.


We simulated 2000 different networks of each type with parameters (11 in total) randomly sampled across wide, biologically plausible ranges. All histograms show counts of sampled networks; the total number of samples varies across histograms, as only subsets of networks exhibited the corresponding feature with a value in the shown range. Panels A-E and F-J show results for the columnar and non-columnar models, respectively. A & F: Distributions of gamma peak frequency, recorded at stimulus center, for different contrasts of the uniform grating (histograms for 25%, 50%, and 100% contrasts in blue, green and red, respectively). D & I: Distributions of the change in gamma peak-frequency normalized by the change in grating contrast, changing from 25% to 50% (cyan) or from 50% to 100% (yellow). B & G: Distributions of the suppression index for the center E (orange) and I (cyan) sub-populations. C & H: The distributions of the coefficient of determination R2, as a measure of the locality of gamma peak contrast dependence. The R2 quantifies the goodness-of-fit of predicted gamma peak frequency based on local Gabor contrast (see Fig 4G showing such a fit in an example network). E & J: The joint distribution of R2 and the suppression index of the center E sub-population. Only a very small minority of sampled non-columnar networks produced R2 > −1 and R2 > 0 to appear in H and J; hence the small n’s, corresponding to 1.4% and 1% of samples, respectively.

Fig 6. Gamma peak contrast dependence and surround suppression in an example non-columnar retinotopic SSN without a boost in intra-columnar E-connections.


Panel descriptions are the same as for the right three columns in Fig 4. Among 2000 sampled non-columnar networks, this network was the best sample we found in terms of capturing the local contrast dependence of gamma peak frequency (subject to having a nonzero suppression index and producing realistic gamma peaks in the LFP power-spectra, i.e., single peaks that are not unbiologically sharp). However, as seen in panel D, this network only yielded an R2 of 0.61, as an index of locality for the gamma peak’s contrast dependence, and gamma peak frequency stayed roughly constant over most of the area covered by the Gabor stimulus (panel B).

We then asked whether connectivity structures which involve a sum of strong, spatially smooth long-range connections and an additional boost to local, intra-columnar connectivity could produce both of these effects. In such a structure, the connection strength between two units first undergoes a sharp drop when the distance between the units exceeds the width of a column, and then falls off smoothly over a longer distance, possibly ranging over several columns. This modification can be thought of as adding a local, intra-columnar-only component to a connectivity profile with smooth fall-off. Specifically, we let the excitatory horizontal connections in our model have such a form. Denoting the strength of connection from the unit of type b at location y to the unit of type a at location x by Wx,a|y,b (with a, b ∈ {E, I}), we thus chose:

$$W_{\mathbf{x},a|\mathbf{y},E} \propto \lambda_{a,E}\,\delta_{\mathbf{x},\mathbf{y}} + (1-\lambda_{a,E})\, e^{-\|\mathbf{x}-\mathbf{y}\|/\sigma_{a,E}} \qquad (a \in \{E, I\}), \qquad (10)$$

where δx,y (the local component) is the Kronecker delta: 1 when x = y and zero otherwise. The λE,E and λI,E parameters lie between 0 and 1, and interpolate between the two extremes of connectivity structure: for λa,E = 0 the horizontal connectivity profile has only one spatial scale and falls off smoothly with distance, while for λa,E = 1 connectivity is purely local and intra-columnar. The orange lines in Fig 4A show examples of this connectivity profile. Below we will refer to horizontal excitatory connectivity structures with nonzero (and significant) λa,E as columnar, and to those with λa,E = 0 as non-columnar; we will also refer to SSN models with these connectivity types as “columnar” and “non-columnar” models or networks, respectively, for short. As in previous work [22], we chose a smooth Gaussian profile for inhibitory connections (see Eq 51), with a relatively short range (see the cyan lines in Fig 4A); thus inhibitory projections essentially only targeted the source column. We also explored networks with longer-range inhibitory connections, and found that our results were not sensitive to this choice (see S1 Fig, to be compared with Fig 5 which is discussed below). Horizontal projections of cortical inhibitory neurons indeed have a shorter range compared to projections of excitatory neurons, whose axons arborize over long distances with a characteristic patchy pattern [39].
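
A sketch of this connectivity profile (Eq 10) on a retinotopic grid is given below; the grid size, column width, and the λ and σ values are illustrative, and an overall connection-strength factor is omitted.

```python
# Sketch of the horizontal excitatory connectivity profile of Eq 10: a purely intra-columnar
# (Kronecker-delta) component plus a smoothly decaying long-range component.
import numpy as np

def excitatory_profile(grid_shape=(16, 16), column_width_mm=0.4, lam=0.7, sigma_mm=1.0):
    """Distance-dependent E-projection profile between all pairs of columns (up to a strength factor)."""
    ny, nx = grid_shape
    ys, xs = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1) * column_width_mm
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    local = (dist == 0).astype(float)              # Kronecker delta: same column only
    long_range = np.exp(-dist / sigma_mm)          # smooth exponential fall-off with distance
    return lam * local + (1 - lam) * long_range    # Eq 10

W_columnar = excitatory_profile(lam=0.7)           # "columnar": boosted intra-columnar weight
W_noncolumnar = excitatory_profile(lam=0.0)        # "non-columnar": single smooth fall-off
```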

In Fig 4B–4H, we show the behavior of firing rates and the gamma peak in an example columnar network (with λE,E = 0.72 and λI,E = 0.70) in response to different stimuli. We first presented this network with flat gratings of varying sizes and contrasts, and measured the LFP power-spectrum and the firing rate responses of the E and I sub-populations at the “center” column, i.e., at the retinotopic location on which the grating was centered. Firing rates of center E and I both increased with contrast (Fig 4D), and for large enough gratings, we verified that the gamma peak frequency also increases with increasing contrast (Fig 4B and 4C), matching the results of the reduced 2-population model and our previously built intuition. To study surround suppression, we formed the so-called size-tuning curve of the center E and I populations, based on their responses to full-contrast flat gratings of different sizes (Fig 4H). Both E and I responses showed surround suppression: the response first grows but then drops with increasing grating size. The center E sub-population had a suppression index (SI, see Methods for the definition; SI = 0 is no suppression, = 1 is complete suppression) of 0.33 consistent with biologically reported values of suppression indices [15].
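
For reference, a suppression index can be computed from a size-tuning curve as sketched below; the paper's exact definition is given in its Methods, and the convention used here, SI = (peak response − response at the largest size) / peak response, is only one common choice.

```python
# Sketch of a suppression-index calculation from a size-tuning curve (SI = 0: no suppression,
# SI = 1: complete suppression). This uses one common convention, not necessarily the paper's.
import numpy as np

def suppression_index(size_tuning_curve):
    r = np.asarray(size_tuning_curve, dtype=float)
    r_peak = r.max()
    return (r_peak - r[-1]) / r_peak if r_peak > 0 else 0.0

# example: response rises with grating size, then is suppressed for large gratings
print(suppression_index([5.0, 20.0, 30.0, 27.0, 22.0, 20.0]))   # -> ~0.33
```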

To study the locality of contrast dependence, we modeled the experiment of [1], and presented this network with a Gabor stimulus, which has spatially varying contrast with a Gaussian profile. We then computed the LFP spectrum at five locations (“columns”) of increasing distance from the center of the Gabor stimulus (the colored circles in Fig 4E), with the farthest one lying at 0.8 degrees of visual angle from the Gabor center (compare with the Gabor’s σ, which was 0.5° as in [1]). The LFP power-spectra at all locations are shown in Fig 4F. As seen, the gamma peak moves to lower frequencies with increasing distance from the Gabor center, which is accompanied by a decrease in the local contrast (i.e., the contrast of the Gabor stimulus at the receptive field location of the recording site). To quantify the locality of this contrast dependence, we again followed [1], by comparing the actual peak frequency at location x with a prediction that solely depended on the local contrast, c(x), of the Gabor stimulus at x. The prediction was the peak frequency of gamma recorded at the center location when the network is presented with a large flat grating of (uniform) contrast equal to c(x). We found that the prediction was in very close agreement with the actual peak frequencies at all distances (Fig 4G). As a measure of the locality of gamma contrast-dependence, we used the corresponding coefficient of determination, R2, which quantifies the agreement between the predicted and actual peak frequencies. (By definition, $R^2 = 1 - \mathrm{SSE}/\mathrm{Var}$, where SSE denotes the sum of squared differences between the predicted and the actual gamma peak frequencies, and Var denotes the variance of the latter; R2 is thus bounded above by 1, which is attained when the prediction perfectly matches the actual data.) In the example shown in Fig 4G, we found R2 = 0.98.
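
The locality measure can be sketched as follows; the contrast-frequency calibration curve and the "measured" peak frequencies below are hypothetical numbers used only to show the computation of R2 = 1 − SSE/Var.

```python
# Sketch of the locality measure: compare gamma peak frequencies measured at probe locations
# on the Gabor stimulus with predictions from the local contrast via the flat-grating
# contrast-frequency relation. All numerical values here are hypothetical illustrations.
import numpy as np

def locality_r2(actual_peak_hz, predicted_peak_hz):
    actual = np.asarray(actual_peak_hz, dtype=float)
    predicted = np.asarray(predicted_peak_hz, dtype=float)
    sse = np.sum((actual - predicted) ** 2)          # sum of squared prediction errors
    var = np.sum((actual - actual.mean()) ** 2)      # total sum of squares of the actual peaks
    return 1.0 - sse / var

# flat-grating calibration: peak frequency as a function of (uniform) contrast
grating_contrast = np.array([0.25, 0.5, 1.0])
grating_peak_hz = np.array([40.0, 50.0, 62.0])
local_contrast = np.array([1.0, 0.85, 0.55, 0.3, 0.25])   # Gabor contrast at each probe
predicted = np.interp(local_contrast, grating_contrast, grating_peak_hz)
measured = np.array([61.0, 58.0, 52.0, 43.0, 41.0])       # hypothetical measured peaks
print(locality_r2(measured, predicted))
```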

To investigate whether the above behavior did or did not require fine tuning of network parameters, we simulated 2000 networks with randomly picked parameters. There were 11 parameters in total, characterising the strengths and ranges of horizontal connections (including λa,E), the strength of feedforward connections, and the ratio of NMDA to AMPA in recurrent excitatory synapses. These parameters were picked randomly and independently (except for the enforcement of three inequality constraints, similar to the samplings of the two-population model in Fig 2) across wide ranges of values consistent with biological estimates; see Methods and S1 Table for details. Again, similar to the sampling of the two-population model, sampled networks which failed to reach a stable steady-state in any stimulus condition were discarded.

The vast majority of sampled networks produced biologically plausible center excitatory and inhibitory firing rates which increased with increasing contrast (E and I firing rates did not exceed 100 Hz in, respectively, 90% and 81% of networks). The majority of networks produced surround suppression in excitatory (70% of samples) and inhibitory (53% of samples) populations, with many networks yielding strong suppression in both populations (Fig 5B). In addition, in response to grating stimuli (with uniform contrast profile), many networks produced gamma-band peaks in the LFP power spectrum that moved upward in frequency with increasing contrast (Fig 5A and 5D; the number of networks yielding gamma peak for each nonzero contrast is denoted in panel A). Finally, for each network, we again quantified the locality of contrast dependence of peak frequency, using the R2 coefficient for the match between peak frequencies obtained at different recording locations on the Gabor stimulus, and their predictions based on the local Gabor contrast and the peak frequency obtained using the flat grating with that contrast. A sizable fraction of networks resulted in a high R2 signifying local contrast-dependence of gamma peak frequency (Fig 5C). Moreover, many of these networks exhibited strong surround suppression as well (Fig 5E). Sixty six networks (4.2% of samples) yielded an R2 > 0.8 and SIE > 0.25.

In sum, the columnar model, which emphasizes the intra-columnar excitatory connectivity (Eq 10), can robustly exhibit strong surround suppression in conjunction with gamma peak frequencies controlled by the local contrast, as observed empirically, without requiring a fine tuning of parameters.

Retinotopic SSNs with non-columnar excitatory connectivity do not account for local contrast dependence of gamma frequency

To further show the importance of a boost in intra-columnar excitatory connectivity for obtaining local contrast dependence despite strong surround suppression, we next sampled retinotopic SSN models without this structure in their horizontal connections (the “non-columnar” model). In these models horizontal excitatory connections fall off smoothly with the distance between the source and target columns (corresponding to λa,E = 0 in the notation of Eq 10). We found that while many sampled non-columnar models exhibited strong surround suppression (on average stronger than in the sampled columnar models), none of the sampled models exhibited gamma peak frequencies with sufficiently local contrast-dependence.

The sampled non-columnar networks robustly exhibited surround suppression which was strong in a large fraction of these networks, and, especially in excitatory units, was on average stronger than in the sampled columnar models (Fig 5G vs. Fig 5B). Many networks produced LFP power-spectrum peaks in the gamma band with frequency increasing with contrast (Fig 5F and 5I), but sampled networks with these properties were about half as common as in the case of the columnar model (Fig 5A and 5D). Moreover, when presented with the Gabor stimulus, we found that the contrast dependence of gamma peak frequency in the vast majority of non-columnar networks was far from local, and the same peak frequency was shared across most of the retinotopic region stimulated by the Gabor stimulus (see the power spectra of an example sampled non-columnar network in Fig 6A and 6B). Only 1% of sampled non-columnar networks exhibited a positive R2 (our measure of local contrast dependence), compared to 15% of columnar networks (Fig 5H and 5J). Only 4 non-columnar networks (out of 2000 sampled networks) exhibited an R2 > 0.6 in conjunction with any degree of surround suppression (positive suppression index) of E firing rates (Fig 5J). However, three of these networks which had the highest R2’s exhibited non-biological gamma-band power spectra, featuring either multiple gamma peaks or unrealistically sharp ones. The fourth network produced realistic gamma peaks and achieved an R2 = 0.61; the LFP power spectra, gamma frequency contrast dependence, and size tuning curves for this example network are shown in Fig 6.

We conclude that the non-columnar model class cannot robustly exhibit both surround suppression of firing rates and local contrast dependence of gamma peak frequency.

Contrast dependence of gamma frequency above the Hopf bifurcation

As mentioned above, under Linearized approximation, and for reasons discussed there, we have heretofore explored the behaviour of the SSN in a regime of noise-driven damped oscillations, which, in the absence of noise, would decay to a fixed point of neural activity. Changes in connectivity or stimulus parameters can nevertheless lead to a Hopf bifurcation (as defined in the noise-free network), namely a transition to a regime of sustained oscillations. In the presence of noise, however, this regime change ceases to be a well-defined and sharp transition. Therefore, given sufficiently strong noise, we expect, on theoretical grounds, that the contrast dependence of gamma oscillations would be qualitatively similar just below or just above the Hopf bifurcation (also note that the same network could be above or below this bifurcation in different stimulus conditions). We directly verified this prediction in the two example networks presented in Figs 4 and 6. Theoretically we expect that a sufficient increase in the strength of recurrent excitation (i.e., the strength of the E to E connections) would result in a Hopf bifurcation. As shown in S2 Fig, we first mapped out this Hopf bifurcation, as we increase the strength of E to E connections, in the noise-free versions of the two, columnar and non-columnar, example networks. We then simulated (as opposed to using the linearized approximation) the two noise-driven networks at a value of recurrent E to E connections above the Hopf bifurcation, and calculated the LFP power spectra in these networks in the different stimulus conditions considered above. The results are presented in S3 and S4 Figs. A comparison of those figures with the corresponding panels in Figs 4 and 6, respectively, shows that the qualitative behavior is indeed very similar in the versions of each network above the Hopf bifurcation (i.e., with connectivity parameters for which the networks exhibit sustained periodic oscillations in the absence of input noise). In particular, the gamma peak frequencies of the columnar example model (see S3 Fig panel E) still exhibit a local contrast-dependence (albeit with a decreased R2 compared to Fig 4G), while the non-columnar model (see S4 Fig panel E) exhibits a virtually constant gamma peak frequency at different locations, despite the stimulus contrast varying strongly over space.

Mechanism underlying the local contrast dependence of gamma peak frequency in the columnar SSN

We can understand the mechanism underlying the local contrast dependence of gamma frequency in the columnar model, and its failure in the non-columnar model, by looking at the spatial profile of the normal oscillatory modes of these networks. Normal modes are the eigenvectors of the Jacobian matrix, the effective connectivity matrix of the linearized network introduced before Eq 6, and normal oscillatory modes are the Jacobian eigenvectors with complex eigenvalues. (As described above, the oscillation frequency is given by the imaginary part of the eigenvalue, and thus we are particularly interested in the eigenvectors of eigenvalues with imaginary part in the gamma band). The Jacobian is dependent on the operating point of the linearization, which is in turn set by the stimulus. The relevant stimulus condition for us is the Gabor stimulus, or more generally a stimulus with non-uniform contrast.

As we discussed above, gamma peak frequency would be trivially determined by local contrast in a network with only local connectivity (corresponding to λa,E = 1 in our model) and disconnected cortical columns. The disconnected columns act like the 2-population model of the first part, and can oscillate independently of other columns, at a frequency set by the operating point of that column, which is in turn set by the stimulus input to that column and thus the local contrast. In such a network, all Jacobian eigenvectors are completely localized spatially at a single column. Since the mode is localized, its eigenvalue, and hence its natural frequency, are entirely determined by the stimulus contrast over that column.

By contrast, in the model with long-range connections and λaE = 0 (the non-columnar model) the eigenvectors can spatially cover a large region of retinotopic space, and lead to coherent and synchronous oscillations at the same frequency (set by that mode’s eigenvalue) across many columns. To see this, consider the case of a stimulus with uniform contrast. Such a stimulus does not break the network’s translational symmetry (since recurrent connections do not care about absolute location, and only depend on relative distance of pre- and post-synaptic columns). Due to this symmetry all eigenmodes are completely delocalized and have (sinusoidal) plane wave spatial profiles extending over the entire retinotopic space. A non-uniform stimulus does break the translational symmetry and leads to relative localization of eigenvectors. However, the scale of this localization is set by the scale over which the stimulus contrast varies appreciably. If this variation is smooth, the eigenvectors can still cover a large region. In the case of the Gabor stimulus, the σ of the Gaussian profile of this stimulus sets this length scale. Thus eigenvectors tend to cover much of the space covered by the stimulus. This can be seen in Fig 7D showing oscillatory eigenvectors of an example non-columnar network, for modes which maximally contribute to the gamma peaks recorded at different locations. When such an eigenvector has a complex oscillatory eigenvalue (which is sufficiently close to the imaginary axis so that its oscillations are not strongly damped), it will give rise to coherent gamma oscillations across the area covered by the stimulus, at the same frequency set by the mode’s eigenvalue. The corresponding peak will thus appear at the same frequency in the power-spectra of the LFP recorded across this space, despite smooth variations in local contrast (Fig 7C top). This mechanism thus breaks the local contrast dependence of peak frequency.

Fig 7. Mechanism of local contrast dependence in the retinotopic SSN with columnar structure and its failure in the non-columnar model.


The left and right columns correspond to the columnar (retinotopic SSN with boosted local, intra-columnar excitatory connectivity) and non-columnar (retinotopic SSN without the boost in local connectivity) example networks in Figs 4 and 6, respectively. A: Top: relative LFP power spectra recorded at different probe locations in the Gabor stimulus condition (see Fig 4E; same colors are used here to denote the different LFP probes). Relative power spectrum is the pointwise ratio of the evoked power spectrum (evoked by the Gabor stimulus) to the spontaneous power spectrum in the absence of visual stimulus (the absolute power spectrum for the same conditions was given in Fig 4F). Bottom: the eigenvalue spectrum in the complex plane, with real and imaginary axis exchanged so that the imaginary axis aligns with the frequency axis on top (eigenvalues are also scaled by 1/(2π) to correspond to non-angular frequency). The eigenvalues were weighted separately for each probe, according to Eq 11, and the eigenvalue with the highest weight was circled with the probe’s color (see Fig 4E). This eigenvalue contributes the strongest peak to the power spectrum at that probe’s location. B: Each sub-panel corresponds to one of the probe locations (as indicated by the frame color), and plots the absolute value of the highest-weight eigenvector (more precisely, the function |Ra(x)| defined in Eq 37) over cortical space. Thus, this is the eigenvector corresponding to the circled eigenvalue in panel A, bottom. The λ-rank in each sub-panel is the order (counting from 0) of the eigenvalue according to decreasing imaginary part, which is the eigen-mode’s natural frequency. The green dot in each sub-panel shows the location of the LFP probe. C-D: Same as A and B, but for the retinotopic SSN model with no columnar structure.

Our columnar model, via its parameters λa,E, interpolates between disconnected networks and the kind of network just discussed. With a sufficient boost of intra-columnar connectivity (i.e., with sufficiently large λa,E) the eigenvectors of this model become approximately localized, not to single columns, but to a small number of columns receiving similar stimulus contrasts. This is shown for an example columnar network in Fig 7B, showing different Jacobian eigenvectors (all for the Gabor stimulus condition) which are approximately localized at different locations. Even those eigenvectors that are relatively more spread out tend to extend over rings encircling the Gabor’s center, and thus cover an area receiving the same contrast. The eigenvalue and natural frequency of such modes are thus largely controlled by that contrast value: modes that do not extend to a given location do not contribute to the power-spectrum recorded at that location, while modes that are localized nearby only “see” the local contrast.

This observation further explains the typical shape of the eigenvalue spectrum observed in columnar networks. As seen in Fig 7A (bottom) the eigenvalue spectrum consists of a near-continuum of eigenvalues extending along the imaginary axis. Eigenvalues with higher (lower) imaginary parts (i.e., the corresponding modes’ natural frequency) have eigenvectors localized at regions of higher (lower) contrast. In this way, eigenvalues with different imaginary parts roughly correspond to different locations that have different local contrasts, with imaginary part increasing with local contrast. By contrast, oscillatory eigenvalues in non-columnar networks, especially eigenvalues near the imaginary axis within the gamma band, tend to be isolated (Fig 7C bottom shows this in an example non-columnar network); the corresponding mode cannot be associated with a given location or contrast. Indeed since the eigenvectors of these modes extend over many columns receiving varying contrasts, the mode’s eigenvalue and oscillation frequency are not determined by stimulus contrast at any single column, but rather by the entire spatial profile of contrast, in a complex manner.

The above qualitative discussion can be made quantitative using the linearized approximation. In Methods (see Eqs 39 and 41), we derive an expression for the power-spectrum at a location as a sum of individual contributions by different eigenmodes. (The sum also includes terms that are contributions of different pairs of modes, which, depending on whether the two modes interfere constructively or destructively, can be positive or negative, see Eq 43; we did not take into account pair contributions in the weighting of eigenmodes). As shown in Eq 41, mode a, with eigenvalue λa, contributes a peak to the power spectrum located at the frequency given by the imaginary part of λa (the mode’s natural frequency). The half-width of the peak is given by minus the real part of the eigenvalue, which we denote by γa. Finally, the peak amplitude is proportional to

$\frac{|R_a(x)|^2}{\gamma_a^2}, \qquad (11)$

where Ra(x) is the component of the mode’s right eigenvector at column x’s E population (after summing the components corresponding to different synaptic receptors; see Eq 37). Thus this peak only leaves an imprint on the LFP power-spectrum in locations where this eigenvector has appreciable components. The amplitude is also inversely related to the square of the peak’s half-width, γa, which by definition measures the distance between the eigenvalue and the imaginary axis. Thus eigenvalues closer to this axis produce stronger and sharper peaks, which appear in the LFP spectrum probed at location x, only if the corresponding right eigenvector has strong (excitatory) components there. In Fig 7, separately for each of the five LFP probe locations on the Gabor stimulus, we picked the mode with the highest amplitude as defined in Eq 11. The corresponding eigenvalue is circled in the eigenvalue spectrum plots (bottom plots in Fig 7A and 7C for the columnar and non-columnar example models, respectively). In Fig 7B and 7D we then plot the corresponding eigenvectors; more precisely, we have plotted |Ra(x)| which, according to Eq 11, controls the strength of a mode’s contribution at different locations x. As observed, in the non-columnar model, the eigenvectors spread over the entire region covered by the Gabor stimulus. Thus the same mode makes the strongest contribution to the LFP spectra (shown in Fig 7A and 7C, top plots) at all probe locations, except for the one that is farthest from the Gabor center. Since this best mode is shared across the first four probe locations, its fixed eigenvalue (Fig 7C, bottom plot) determines the location of the power-spectrum peak (Fig 7C, top plot) in all but the farthest probe location.

By contrast, in the columnar model, eigenvectors cover a considerably smaller area within which contrast varies little. As a result, each mode only affects the LFP power spectrum locally, and when the probe moves, the best mode changes quickly, as if the best eigenvector “moves” with the probe (Fig 7B). In turn, the corresponding best eigenvalues also move to lower frequencies along the imaginary axis (Fig 7A, top), as the probe moves farther from the Gabor center, according to the local contrast “seen” by their eigenvector.

Inter-columnar projections in the columnar model are nevertheless sufficiently strong to be able to give rise to strong surround suppression, as evidenced above. It is also worth noting that for large gratings that give rise to surround suppression, the contrast is uniform over a broad area, in which case even the eigenvectors of the columnar model tend to cover a broad area (mathematically, this is because when stimulated with a uniform stimulus, the columnar model also has approximate translational invariance, and therefore its eigenvectors tend towards delocalized approximate plane waves).

In summary, the columnar model can balance the requirements for locality of gamma contrast dependence and strong surround suppression, because of the intermediate spatial spread of its eigenvectors, which tend to cover relatively small areas with roughly uniform contrast.

Discussion

In this work we have shown that the expanded SSN is able to robustly display the contrast dependence of gamma peak frequency in both a two-population and a retinotopic network. The retinotopic model successfully balances the trade-off in horizontal connection strength such that both the local contrast dependence of the gamma peak frequency and the surround suppression of firing rates are observed robustly. In order to capture gamma oscillations using the SSN, we expanded the model beyond an E-I network to a varied synaptic network model. Crucially, the SSN account sheds light on the mechanism underlying the contrast dependence of gamma peak frequency and points to the key role of the non-saturating and expansive neural transfer function, observed empirically [40, 41], in giving rise to this effect.

Finding the power-spectra using the linearization of Eq 1 helped us make analytic simplifications. From these simplifications, we gained insights into how the SSN captures the gamma contrast dependence. As contrast increases, firing rates increase, which, due to the supralinear neural transfer function of the SSN, leads to increasing neural gains. This in turn strengthens effective connectivity, leading to faster oscillations. Moreover, by finding the power-spectra via linearization, we were able to rapidly compute power-spectra, which allowed for extensive explorations of the model’s parameter space. Nevertheless, we also explored the behavior of LFP power spectra in our example networks using direct simulations of the networks’ stochastic dynamics. These simulations validated the accuracy of the linearized approximation below the Hopf bifurcation (i.e., the transition to a regime of sustained periodic oscillations in the noise-free network), and also showed that the contrast-dependence of gamma frequency behaves similarly above and below the Hopf bifurcation (which could occur in the same network in different stimulus conditions), as expected on theoretical grounds.

In this work, for simplicity, we assumed an instantaneous I/O function between net synaptic input (∑β hβ) and the output rate. This is based on the approximations discussed in [36], which is valid when the fast synaptic filtering time-constants (τAMPA and τGABA) are much smaller than the neuronal membrane time-constants. However, our framework can easily be generalized beyond this approximation by using the full neuronal linear response filter obtained from the Fokker-Planck treatment of [36]. The main change due to such a modification would be to render the neural gains frequency dependent. We expect this dependence to be weak because we are in the regime of fast synaptic filtering as compared to the neuronal membrane time constant, and so we expect the transfer function to be approximately instantaneous [36]. Therefore we do not expect that including the full neuronal linear response filter would change our qualitative results.

As we have shown, retinotopic SSN networks account for the co-existence of the local contrast dependence of gamma peak frequency and strong surround suppression by balancing long-range horizontal connection strength that decreases exponentially with distance with an additional strengthening of very local excitatory connections. One possible scenario is that the additional local connection strength is needed to compensate for the possible effects of the model’s coarse retinotopic grid which might have distorted the functional effects of a connectivity profile with a smooth, single-scale fall-off, without a local boost. Alternatively, this may represent a measurable increase in connection strength above an exponential function of distance at short distances, below ∼200 − 400 μm. Indeed, it has been previously noted that anatomical findings on the spatial profile of horizontal connections in the macaque cortex point to such a mixture of short-range or local and long-range connections, with the local component not extending beyond 400 μm (the size of our model’s columns) [42].

It is notable that, in modeling surround suppression in V1, [43] also found that they needed to increase the central weight strengths, relative to an exponential fall-off of strength on a grid, for their SSN model to account for two other observations. These observations were the decrease in inhibition received by a cell when it is surround suppressed [44, 45], and the fact that the strongest surround suppression occurs when surround orientation matches center orientation, even if the center orientation is not the cell’s preferred orientation [46, 47]. Other phenomena they addressed could be explained by their SSN model with or without this extra local strength. We believe that other visual cortical phenomena previously addressed with the SSN model [22, 34, 45, 48] would not be affected by such boosting, as the mechanisms inferred behind them appear independent of these connectivity details.

Recently some evidence of enhanced connectivity at very short distances (∼20 μm) has been found in mouse V1 [49]. Optogenetic stimulation of ten cells found excitation of nearby cells only at such short distances from one of the stimulated cells, with suppression at longer distances. In a model, this required an extra component of connectivity that decreased with distance on a very short length scale, in addition to one with a longer length scale. As they point out, such extra local strength might account for the observation that preferred orientations in mouse visual cortex are correlated on a similar very short length scale [50]. It will be interesting to see whether such a short-length-scale component of connectivity is also evident in monkeys, where the local contrast dependence of gamma was measured, and conversely to see if mice show a similar local contrast dependence of gamma.

Methods

Stabilized supralinear network (SSN) with different synaptic receptor types

In its original form, the Stabilized Supralinear Network (SSN) is a firing rate network of excitatory and inhibitory neurons that have a supralinear rectified power-law input/output (I/O) transfer function:

$F(h) = k\,[h]_+^n \qquad (12)$

where n > 1 and [h]+ ≡ max(0, h) denotes rectification of h. The dynamics can either be formulated in terms of the inputs to the units [34] or in terms of their output firing rates [22, 23]. Here we adopt the former case for which the dynamical state of the network, in a network of N neurons, is given by the N-dimensional vector of inputs ht, which evolves according to the dynamical system

$T\,\frac{dh_t}{dt} + h_t = W F(h_t) + I_t. \qquad (13)$

Here, I is the external input vector, T = diag(τ1, …, τN) is the diagonal matrix of synaptic time constants, and F acts element-wise. Finally, W is the N × N matrix of recurrent connection weights between the units in the network. This connectivity matrix observes Dale’s law, meaning that all weights in a given column of W (i.e., all weights from a given pre-synaptic neuron) share the same sign. If we order neurons such that excitatory neurons appear first and inhibitory neurons second, this matrix takes the form

$W = \begin{pmatrix} W_{EE} & -W_{EI} \\ W_{IE} & -W_{II} \end{pmatrix} \qquad (14)$

where WXY (X, Y ∈ [E, I]) have non-negative elements.

The above model does not take into account the distinct dynamics of currents through different synaptic receptor channels: AMPA, GABAA (henceforth GABA), and NMDA. Only the fast receptors, AMPA and GABA, have timescales relevant to gamma band oscillations. These receptors have very fast rise times (on the order of 1 millisecond), which correspond to frequencies much higher than the gamma band. We therefore ignored the rise times of all receptors. The slow decay time of NMDA makes the portion of fluctuating excitatory inputs filtered by this receptor have a negligible contribution to power within the gamma band. (A nonzero NMDA rise time would not alter this conclusion. Therefore for simplicity and uniformity we also neglected the rise time of NMDA receptors in our model). The reason we nevertheless include NMDA in the model is that the ratio of NMDA to AMPA connectivity (see Eqs 16 and 17 below) controls the portion of excitatory connection strengths that contributes to gamma oscillations (see Eigenvalue spectra of rate-based and multi-receptor SSNs in the presence and absence of NMDA in Methods for an analytic exposition of this point).

With these assumptions, upon arrival of an action potential in a pre-synaptic terminal at time t = 0, the post-synaptic current through receptor channel α (α ∈ {A = AMPA, G = GABA, N = NMDA}) with decay time $\tau_\alpha$ is given by $w_\alpha \frac{\theta(t)}{\tau_\alpha}\, e^{-t/\tau_\alpha}$, where θ(t) is the Heaviside step function and $w_\alpha$ is the contribution of receptor α to the synaptic weight. This is the impulse response solution to the differential equation $\tau_\alpha \frac{dh^\alpha}{dt} + h^\alpha = w\,\delta(t)$, where δ(t) is the Dirac delta representing the spike at time t = 0. In the mean-field firing rate treatment, the delta function is averaged and is replaced by a smooth rate function r(t). Extending this to cover post-synaptic currents from all synapses into all neurons we obtain the equation

$\tau_\alpha\,\frac{dh_t^\alpha}{dt} + h_t^\alpha = W^\alpha r_t \qquad (15)$

where $r_t$ and $h_t^\alpha$ are N-dimensional vectors of the neurons’ firing rates and input currents of type α, respectively, and $W^\alpha$ are N × N matrices containing the contribution of receptor type α to the recurrent synaptic weights. If we add an external input to the right side (before filtering by the synaptic receptors), we obtain Eq 1. Since AMPA and NMDA only contribute to excitatory synapses, and GABA only to inhibitory ones, in general the $W^\alpha$ have the following block structure

$W^A = \begin{pmatrix} W^A_{EE} & 0 \\ W^A_{IE} & 0 \end{pmatrix}, \quad W^N = \begin{pmatrix} W^N_{EE} & 0 \\ W^N_{IE} & 0 \end{pmatrix}, \quad W^G = \begin{pmatrix} 0 & -W^G_{EI} \\ 0 & -W^G_{II} \end{pmatrix}. \qquad (16)$

For simplicity, we further assumed that the fraction of NMDA and AMPA is the same in all excitatory synapses. In this case all Wα can be written in terms of the four blocks of the full connectivity matrix W ≡ ∑α Wα, introduced in Eq 14:

$W^N = \frac{\rho_N}{1-\rho_N}\, W^A = \rho_N \begin{pmatrix} W_{EE} & 0 \\ W_{IE} & 0 \end{pmatrix}, \quad W^G = \begin{pmatrix} 0 & -W_{EI} \\ 0 & -W_{II} \end{pmatrix}, \qquad (17)$

where the scalar ρN is the fractional contribution of NMDA to excitatory synaptic weights.

As noted in Results, to close the system of equations for the dynamical variables $h_t^\alpha$, we have to relate the output rate of a neuron to its total input current, $h_t^{\rm total} = \sum_\beta h_t^\beta$. In general, the relationship between the total input and the firing rate of a neuron, or the mean firing rate of a population of statistically equivalent neurons, is nonlinear and dynamical, meaning the rate at a given instant depends on the preceding history of input, and not just on the instantaneous input. However, as shown by [35, 36], the firing rate of spiking neurons receiving low-pass filtered noise with fast auto-correlation timescales is approximately a function of the instantaneous input. The fast filtered noise is exactly what irregular spiking of the spiking network generates after synaptic filtering (as in Eq 15) by the fast AMPA and GABA receptors. (While our rate model does not explicitly model (irregular) spiking, it can be thought of as a mean-field approximation to a spiking network where each SSN unit or “neuron” represents a sub-population of spiking neurons, with the rate of that unit representing the average firing rate of the underlying spiking population). We thus use this static approximation to the I/O transfer function and assume the firing rates of our model units are given by Eq 2: $r_t = F(h_t^{\rm total}) = F\!\left(\sum_\beta h_t^\beta\right)$, where F(.) is the rectified power-law function of Eq 12.

We do note, however, that this static approximation can be lifted in a straightforward manner at the level of our linearized approximation (which underlies our qualitative understanding of the contrast-dependence of the gamma peak): upon linearization, a dynamic neural transfer function would result in the neural gain variables (see Eq 22) becoming frequency-dependent gain filters. However, as long as those gain filters are feature-less over the gamma band (i.e., they vary sufficiently slowly over this band of frequencies, and in particular do not have features such as peaks within this band), their frequency dependence would not qualitatively affect the location of the gamma peak and its stimulus dependence. Thus we expect that the static I/O approximation will not alter our qualitative results.
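To make these definitions concrete, the following Python sketch integrates a toy two-unit (one E, one I) version of the receptor-resolved dynamics of Eq 15, with the excitatory weights split into AMPA and NMDA components as in Eq 17 and the static power-law transfer function of Eqs 2 and 12. It is a minimal illustration only: all numerical values (weights, inputs, time constants) are placeholders chosen so the toy network settles to a stable fixed point, not the parameter values used in the paper.

```python
import numpy as np

# Toy 2-unit (1 E, 1 I) multi-receptor SSN; all numbers are illustrative placeholders.
k, n = 0.04, 2.0                                  # rectified power-law I/O (Eq 12)
tau = {'A': 0.004, 'G': 0.005, 'N': 0.100}        # receptor decay times (s)
rho_N = 0.4                                       # NMDA fraction of excitatory weights

W = np.array([[1.25, -0.65],                      # full connectivity (Eq 14): rows = targets (E, I)
              [1.20, -0.50]])
W_exc = np.where(W > 0, W, 0.0)                   # excitatory (E) columns
W_inh = np.where(W < 0, W, 0.0)                   # inhibitory (I) columns (already negative)
W_rec = {'A': (1 - rho_N) * W_exc,                # split E weights into AMPA/NMDA (Eq 17)
         'N': rho_N * W_exc,
         'G': W_inh}

def F(h):
    """Supralinear rectified power-law transfer function, F(h) = k [h]_+^n."""
    return k * np.maximum(h, 0.0) ** n

def simulate(I_dc, T=1.0, dt=1e-4):
    """Forward-Euler integration of the receptor-resolved dynamics (Eq 15)."""
    h = {a: np.zeros(2) for a in tau}             # per-receptor input currents
    rates = []
    for _ in range(int(T / dt)):
        r = F(sum(h.values()))                    # static I/O acting on total input (Eq 2)
        for a in tau:
            drive = W_rec[a] @ r
            if a == 'A':                          # feedforward (stimulus) drive enters via AMPA
                drive = drive + I_dc
            h[a] += (dt / tau[a]) * (-h[a] + drive)
        rates.append(r)
    return np.array(rates)

rates = simulate(I_dc=np.array([10.0, 8.0]))      # placeholder feedforward drive
print("steady-state rates (E, I):", rates[-1])
```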

Modelling of gamma oscillations and local field potential

As discussed in the Introduction and Results, gamma oscillations are most consistent with noise-driven damped oscillations. We thus assumed the external input consisted of a time-independent term representing the feedforward drive due to a static stimulus, and dynamical noise:

$I_t^\alpha = I^\alpha_{\rm DC} + \eta_t^\alpha. \qquad (18)$

Given that external inputs to cortex are excitatory and only fast noise is relevant to gamma oscillations we assumed that ηtα was only nonzero for α = AMPA. We took ηt to have independent and identically distributed components (with zero mean) across our sub-population units, and took it to be temporally pink noise, with fast correlation time, τcorr:

$\langle \eta_i(t_1)\,\eta_j(t_2)\rangle = \delta_{ij}\,\sigma_\eta^2\, e^{-\frac{|t_1 - t_2|}{\tau_{\rm corr}}}. \qquad (19)$
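For concreteness, the sketch below generates discrete-time noise with this exponential auto-correlation using a standard Ornstein–Uhlenbeck (AR(1)) update; the amplitude and correlation time in the example are placeholders, not the values used in the paper.

```python
import numpy as np

def correlated_noise(n_units, T, dt, sigma_eta, tau_corr, rng=None):
    """Zero-mean noise with the exponential auto-correlation of Eq 19,
    <eta_i(t1) eta_j(t2)> = delta_ij * sigma_eta^2 * exp(-|t1-t2|/tau_corr),
    generated with an Ornstein-Uhlenbeck-style (AR(1)) update."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(T / dt)
    eta = np.empty((n_steps, n_units))
    decay = np.exp(-dt / tau_corr)
    inc_std = sigma_eta * np.sqrt(1.0 - decay**2)   # keeps the stationary variance at sigma_eta^2
    eta[0] = sigma_eta * rng.standard_normal(n_units)
    for t in range(1, n_steps):
        eta[t] = decay * eta[t - 1] + inc_std * rng.standard_normal(n_units)
    return eta

# example with a placeholder 5 ms correlation time
eta = correlated_noise(n_units=3, T=2.0, dt=1e-4, sigma_eta=1.0, tau_corr=0.005)
print(eta.shape, eta.var(axis=0))   # variances should be close to sigma_eta^2 = 1
```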

We assume that local field potential (LFP) recordings predominantly measure the inputs to the surrounding pyramidal neurons [37], and thus in our model use the current input into our excitatory units as the surrogate for LFP. More precisely we take the LFP signal at location x to be the total current input, htotal, averaged over the E neurons within a given distance of x (the average could be weighted with weights that decrease with distance). This can be written as the inner product of htotal with an x-dependent weight vector:

$\mathrm{LFP}_t(x) \propto e_x \cdot h_t^{\rm total} = \sum_\alpha e_x \cdot h_t^\alpha, \qquad (20)$

where the weight vector $e_x$ only has nonzero components for E neurons that are within a given radius of location x. In particular, in the two-population model, which lacks retinotopy, $e_x \equiv e = (1, 0)^T$. In the retinotopic model, we assumed that the spatial range of the LFP recording does not exceed the half-width of the model’s cortical columns (0.2 mm), and therefore we took $e_x$ to be a one-hot vector with the component for the E unit at location x equal to one, and the rest zero.

LFP power-spectra in the linearized approximation

In order to study the power-spectra, and gain intuition about them, we linearized the dynamics around the noise-free fixed point. (Recall that we are modelling gamma as noise-driven damped oscillations, i.e. the network is in a regime where without noise it reaches a stable fixed point). As shown in Results, the fixed point satisfies Eq 3. The linear approximation consists of a first-order Taylor expansion in powers of the noise, $\eta_t^\alpha$, and noise-driven fluctuations, $\delta h_t^\alpha \equiv h_t^\alpha - h_*^\alpha$, around the stable fixed point. (Note that while the fixed point equation only involves the total current $h_* \equiv \sum_\alpha h_*^\alpha$, after numerically finding $h_*$, we can obtain the fixed-point value of the receptor-specific currents via $h_*^\alpha = W^\alpha F(h_*) + I^\alpha_{\rm DC}$). This yields

$\tau_\alpha\,\frac{d\,\delta h_t^\alpha}{dt} = -\delta h_t^\alpha + W^\alpha \Phi \sum_\beta \delta h_t^\beta + \eta_t, \qquad (21)$

where we defined the gain matrix Φ as a diagonal matrix whose diagonal entries are

$\Phi_{ii} \equiv F'(h_i^*) = n\,k\,[h_i^*]_+^{\,n-1} = n\,k^{\frac{1}{n}}\,(r_i^*)^{1 - \frac{1}{n}}. \qquad (22)$

Taking the Fourier transform of Eq 21, and solving for $\delta\tilde{h}_f^\alpha$ (the Fourier transform of $\delta h_t^\alpha$, where f denotes frequency), we obtain

$\delta\tilde{h}_f^\alpha = \sum_\beta G^{\alpha\beta}(f)\,\tilde{\eta}_f^\beta, \qquad (23)$

where the Green’s function, $G^{\alpha\beta}(f)$, is given by

$[G(f)^{-1}]^{\alpha\beta} \equiv (-i 2\pi f\,\tau_\alpha + 1)\,\delta^{\alpha\beta}\, I_{N\times N} - W^\alpha \Phi, \qquad (24)$

where $I_{N\times N}$ is the N × N identity matrix.

Since, by Eq 20, the LFP is a linear function of the $h^\alpha$, the power-spectrum of the LFP can be written in terms of the cross-spectrum matrix of $\delta\tilde{h}_f^\alpha$, which we denote by $C_h^{\alpha\beta}(f)$. Specifically

$P_{\rm LFP}(f; x) = \sum_{\alpha,\beta} e_x^T\, C_h^{\alpha\beta}(f)\, e_x. \qquad (25)$

Using Eq 23 and $C_h^{\alpha\beta}(f) \equiv \langle \delta\tilde{h}_f^\alpha\, \delta\tilde{h}_f^{\beta\,\dagger}\rangle$ we have

$C_h^{\alpha\beta}(f) = \sum_{\gamma,\delta} G^{\alpha\gamma}(f)\, C_\eta^{\gamma\delta}(f)\, G^{\beta\delta}(f)^\dagger = G^{\alpha A}(f)\, C_\eta^{AA}(f)\, G^{\beta A}(f)^\dagger. \qquad (26)$

Here, $C_\eta^{\gamma\delta}(f)$ is the cross-spectrum of the input noise, and in the second equality we relied on our assumption that noise only enters the AMPA channel, and thus only γ = δ = A ≡ AMPA contribute to the sums. From Eq 19, we have $C_\eta^{AA}(f) = P_{\rm noise}(f)\, I_{N\times N}$, where

$P_{\rm noise}(f) = \frac{2\,\tau_{\rm corr}\,\sigma_\eta^2}{|1 - 2\pi i\,\tau_{\rm corr} f|^2} \qquad (27)$

is the power-spectrum of noise. We finally obtain

$P_{\rm LFP}(f; x) = P_{\rm noise}(f)\,\| u_x(f)\|^2, \qquad (28)$

where we defined

$u_x(f) \equiv \sum_\beta G^{\beta A}(f)^T e_x. \qquad (29)$
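A direct numerical transcription of Eqs 24–29 is sketched below: it assembles the inverse Green’s function at each frequency, inverts it, and reads out the LFP power at a probe through the vector $e_x$. The function signature, the placeholder matrices in the usage example, and the choice to store receptor matrices in a Python dictionary are ours for illustration; they are not taken from the paper’s code.

```python
import numpy as np

def lfp_power_spectrum(freqs, W_rec, tau, Phi, e_x, sigma_eta, tau_corr):
    """Linearized LFP power spectrum of Eqs 24-29.
    W_rec : dict of N x N receptor weight matrices W^alpha (keys 'A', 'G', 'N');
    tau   : dict of receptor decay time constants;
    Phi   : diagonal gain matrix at the fixed point (Eq 22);
    e_x   : LFP read-out vector over the N units (Eq 20)."""
    alphas = list(W_rec)                      # receptor ordering, e.g. ['A', 'G', 'N']
    N, q = Phi.shape[0], len(W_rec)
    iA = alphas.index('A')                    # noise enters through AMPA only
    power = np.zeros(len(freqs))
    for i, f in enumerate(freqs):
        # assemble the qN x qN inverse Green's function, Eq 24
        Ginv = np.zeros((q * N, q * N), dtype=complex)
        for a, alpha in enumerate(alphas):
            for b in range(q):
                block = -W_rec[alpha] @ Phi
                if a == b:
                    block = block + (1.0 - 2j * np.pi * f * tau[alpha]) * np.eye(N)
                Ginv[a*N:(a+1)*N, b*N:(b+1)*N] = block
        G = np.linalg.inv(Ginv)
        # u_x(f) = sum_beta G^{beta A}(f)^T e_x  (Eq 29)
        u = sum(G[b*N:(b+1)*N, iA*N:(iA+1)*N].T @ e_x for b in range(q))
        P_noise = 2 * tau_corr * sigma_eta**2 / np.abs(1 - 2j * np.pi * tau_corr * f)**2
        power[i] = P_noise * np.sum(np.abs(u)**2)          # Eq 28
    return power

# tiny illustrative usage (2 units, placeholder matrices and gains)
Phi = np.diag([1.5, 1.8])
W_rec = {'A': np.array([[0.75, 0.0], [0.72, 0.0]]),
         'N': np.array([[0.50, 0.0], [0.48, 0.0]]),
         'G': np.array([[0.0, -0.65], [0.0, -0.50]])}
tau = {'A': 0.004, 'G': 0.005, 'N': 0.1}
freqs = np.linspace(1.0, 120.0, 120)
P = lfp_power_spectrum(freqs, W_rec, tau, Phi, np.array([1.0, 0.0]), 1.0, 0.005)
print(freqs[np.argmax(P)])   # frequency of maximum LFP power for these placeholder values
```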

Definition of suppression index, and gamma peak frequency and width

Suppression index was based on the size tuning curve, r(R), of the center E or I units, measured for gratings of 100% contrast. It was defined by

$\mathrm{SI} = 1 - \frac{r(R_{\max})}{\max_R r(R)} \qquad (30)$

where R is the grating radius, r(R) is the size tuning curve, and Rmax is the maximum grating radius used.

As in [1], we identified the gamma peak frequency with the frequency (within the extended gamma band 10–100 Hz) at which the difference of the evoked (c > 0) and spontaneous (c = 0) LFP log-spectra (or the ratio of those spectra) is maximized:

$f_{\rm peak}(c) = \underset{f}{\arg\max}\,\left[\log P_{\rm LFP}(f; c) - \log P_{\rm LFP}(f; c=0)\right]. \qquad (31)$

As a measure of gamma peak width (or half-width) at contrast c, we used the half-width at half-height of the relative power spectrum $P_{\rm LFP}(f; c)/P_{\rm LFP}(f; 0)$.
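A minimal sketch of these definitions (the suppression index of Eq 30, the peak frequency of Eq 31, and the half-width at half-height just described), applied to spectra sampled on a discrete frequency grid; the synthetic spectra at the end are purely illustrative.

```python
import numpy as np

def suppression_index(radii, rates):
    """Suppression index from a size tuning curve r(R), Eq 30."""
    return 1.0 - rates[np.argmax(radii)] / np.max(rates)

def gamma_peak(freqs, P_evoked, P_spont, band=(10.0, 100.0)):
    """Gamma peak frequency (Eq 31) and half-width at half-height of the
    relative spectrum P_evoked / P_spont within the extended gamma band."""
    rel = P_evoked / P_spont
    band_idx = np.where((freqs >= band[0]) & (freqs <= band[1]))[0]
    i_peak = band_idx[np.argmax(np.log(rel[band_idx]))]
    half = rel[i_peak] / 2.0
    left, right = i_peak, i_peak
    while left > 0 and rel[left - 1] >= half:
        left -= 1
    while right < len(freqs) - 1 and rel[right + 1] >= half:
        right += 1
    return freqs[i_peak], (freqs[right] - freqs[left]) / 2.0

# synthetic example: a Lorentzian gamma bump riding on a 1/f-like background
f = np.linspace(1.0, 150.0, 600)
P_spont = 1.0 / f
P_evoked = P_spont * (1.0 + 5.0 / (1.0 + ((f - 45.0) / 8.0) ** 2))
print(gamma_peak(f, P_evoked, P_spont))    # peak near 45 Hz
```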

The eigen-decomposition of the LFP power-spectrum

The linearized dynamics of Eq 4 can be written in terms of the Jacobian matrix, J as

$\frac{d\,\delta h_t^\alpha}{dt} = \sum_\beta J^{\alpha\beta}\,\delta h_t^\beta \qquad (32)$

where

$J^{\alpha\beta} \equiv \tau_\alpha^{-1}\left(-\delta^{\alpha\beta}\, I_{N\times N} + W^\alpha \Phi\right) \qquad (33)$

(with α, β ∈ {AMPA, GABA, NMDA}) is the (α, β) block (an N × N matrix) of the full 3N × 3N Jacobian matrix. The normal modes of this linear system correspond to eigenvectors of the Jacobian, which evolve in time according to $e^{\lambda t}$, which for $\lambda = \gamma + i\omega_0$ with real γ and $\omega_0$ can be written $e^{-|\gamma| t}(\cos \omega_0 t + i \sin \omega_0 t)$ (we used the fact that stability requires that the eigenvalue real part, γ, is negative). We thus see that this mode oscillates at (angular) frequency $\omega_0 = \mathrm{Im}\,\lambda$, and decays at the rate given by $|\gamma| = -\mathrm{Re}\,\lambda$. (For brevity, in this section we will use the angular frequency ω = 2πf instead of f). Comparison of Eqs 4 and 24 shows that the inverse Green’s function can be written in terms of the Jacobian as $[G(f)^{-1}]^{\alpha\beta} = \tau_\alpha\,(-i\omega I - J)^{\alpha\beta}$. Defining a diagonal matrix T with the first, second, and last third of its diagonal elements given by $\tau_{\rm AMPA}$, $\tau_{\rm GABA}$ and $\tau_{\rm NMDA}$, respectively, we can write

$G(f) = (-i\omega I - J)^{-1}\, T^{-1}. \qquad (34)$

We start by rewriting the Green’s function in terms of the eigen-decomposition of the Jacobian, $J = V \Lambda V^{-1}$, where Λ is the diagonal matrix of eigenvalues, $\lambda_a$, and V is a matrix with columns given by corresponding (right) eigenvectors. Equivalently we can write $J = \sum_a \lambda_a R_a L_a$, where the right eigenvectors $R_a$ are the columns of V and the left eigenvectors, $L_a$, are the rows of $V^{-1}$. Using this decomposition we obtain

$G(f) = \sum_a \frac{1}{-i\omega - \lambda_a}\, R_a L_a\, T^{-1}. \qquad (35)$

We can then rewrite Eq 29 as

$u_x(f) = \tau_{\rm AMPA}^{-1}\, \sum_a \frac{R_a(x)}{-i\omega - \lambda_a}\, (L_a^A)^T \qquad (36)$

where LaA is the row-vector formed by the AMPA components of La, and we defined

$R_a(x) = \sum_\beta [R_a]_{\beta, E, x}, \qquad (37)$

namely, the (possibly complex) scalar function Ra(x) is the E, x component (component on E subpopulation at column x) of the right-eigenvector after summing over receptor indices. Here we have assumed that the LFP probe is completely local and reflects total current (hence the sum over β) into E (pyramidal) neurons of the recorded column x. The limitation to AMPA components of the left eigenvectors, on the other hand, reflects our assumption that external noise is entering only via AMPA receptors. Substituting in Eq 28, we then obtain

$\frac{P_{\rm LFP}(f; x)}{P_{\rm noise}(f)} = \| u_x(f)\|^2 \qquad (38)$
$= \sum_{a,b} A_{ab}(x)\, \frac{1}{i\omega - \lambda_b^*}\, \frac{1}{-i\omega - \lambda_a} \qquad (39)$

where we defined

$A_{ab}(x) = \tau_{\rm AMPA}^{-2}\, \langle L_b^A, L_a^A\rangle\, R_b^*(x)\, R_a(x) \qquad (40)$

and $\langle L_b^A, L_a^A\rangle \equiv L_a^A\,(L_b^A)^\dagger$ is the Hermitian inner product of the two left eigenvectors, within the AMPA subspace. The key factor here is $R_b^*(x)\, R_a(x)$, which determines the x-dependence and can affect the local dependence of the power spectrum on information at point x.

Eqs 39 and 40 constitute our main result here. They express the ratio of LFP to noise power spectrum as a sum of contributions by pairs of eigen-modes, and contributions by individual eigen-modes (the latter corresponding to the diagonal summands with a = b).

In particular, the individual contribution of mode a is

$P_a(f; x) \equiv \frac{A_{aa}(x)}{|-i\omega - \lambda_a|^2} = \frac{A_{aa}(x)}{\gamma_a^2 + (\omega - \omega_a)^2}, \qquad (41)$

where $\gamma_a \equiv -\mathrm{Re}\,\lambda_a$ and $\omega_a \equiv \mathrm{Im}\,\lambda_a$ are minus the real part and the imaginary part of $\lambda_a$, respectively, and $f = \omega/(2\pi)$. This contribution is a Lorentzian function that peaks at the natural frequency $\omega_a$ and has half-width $\gamma_a$. The amplitude of this peak is given by

$\frac{A_{aa}(x)}{\gamma_a^2} \propto \frac{|R_a(x)|^2}{\gamma_a^2}, \qquad (42)$

where we used Eq 40. This amplitude is thus proportional to |Ra(x)|2, i.e., the squared absolute value of the corresponding right eigenvector component at location x. It is also inversely proportional to the squared half-width which measures the distance between the eigenvalue and the imaginary axis. Thus eigenvalues closer to this axis produce stronger and sharper peaks, which appear in the LFP spectrum probed at location x if the corresponding right eigenvector has strong components at that location.

The sum in Eq 39 also contains terms each of which can be interpreted as the contribution of a pair of (distinct) modes. When the left-eigenvectors of different modes are orthogonal (according to the inner product defined after Eq 40, which corresponds to the orthogonality of the AMPA components of the vectors under the common inner product) these contributions vanish. More generally, the contribution of the pair (a, b) can be written as (making use of Aba(x) = Aab(x)*)

$P_{ab}(f; x) \equiv 2\,\mathrm{Re}\!\left[ A_{ab}(x)\, \frac{1}{i\omega - \lambda_b^*}\, \frac{1}{-i\omega - \lambda_a}\right] \qquad (43)$
$= 2\,\frac{N^R_{ab}(f; x) + N^I_{ab}(f; x)}{D_{ab}(f)}, \qquad (44)$

where we defined

$D_{ab}(f) = \left(\gamma_a^2 + (\omega - \omega_a)^2\right)\left(\gamma_b^2 + (\omega - \omega_b)^2\right), \qquad (45)$

and

$N^R_{ab}(f; x) = \mathrm{Re}[A_{ab}(x)]\,\left(\gamma_a\gamma_b + (\omega - \omega_a)(\omega - \omega_b)\right), \qquad (46)$
$N^I_{ab}(f; x) = \mathrm{Im}[A_{ab}(x)]\,\left(\gamma_a(\omega - \omega_b) - \gamma_b(\omega - \omega_a)\right). \qquad (47)$

Alternatively, the pair contribution $P_{ab}(f; x)$ is given by the product of the individual contributions $P_a(f; x)$ and $P_b(f; x)$, with a correction factor given by $2\, A_{aa}^{-1}(x)\, A_{bb}^{-1}(x)\,\left(N^R_{ab}(f; x) + N^I_{ab}(f; x)\right)$.
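The sketch below turns this decomposition into code: it builds the Jacobian of Eq 33, eigendecomposes it, and evaluates the single-mode Lorentzian contributions of Eq 41 with the amplitudes of Eq 40 (pair terms of Eqs 43–47 are omitted, as in our mode-weighting). The inputs mirror those of the earlier power-spectrum sketch and are again placeholders, not the paper’s fitted parameters.

```python
import numpy as np

def mode_contributions(freqs, W_rec, tau, Phi, e_x):
    """Single-mode contributions P_a(f;x) of Eq 41 (pair terms omitted)."""
    alphas = list(W_rec)
    N, q = Phi.shape[0], len(W_rec)
    # assemble the qN x qN Jacobian of Eq 33
    J = np.zeros((q * N, q * N))
    for a, alpha in enumerate(alphas):
        for b in range(q):
            block = W_rec[alpha] @ Phi
            if a == b:
                block = block - np.eye(N)
            J[a*N:(a+1)*N, b*N:(b+1)*N] = block / tau[alpha]
    lam, V = np.linalg.eig(J)                 # right eigenvectors R_a = columns of V
    L = np.linalg.inv(V)                      # left eigenvectors L_a = rows of V^{-1}
    iA = alphas.index('A')
    omega = 2 * np.pi * np.asarray(freqs)
    P = np.zeros((len(lam), len(omega)))
    for a in range(len(lam)):
        R_ax = sum(V[b*N:(b+1)*N, a] @ e_x for b in range(q))      # R_a(x), Eq 37
        L_aA = L[a, iA*N:(iA+1)*N]                                  # AMPA part of L_a
        A_aa = np.abs(R_ax)**2 * np.vdot(L_aA, L_aA).real / tau['A']**2   # Eq 40 with a = b
        gamma_a, omega_a = -lam[a].real, lam[a].imag
        P[a] = A_aa / (gamma_a**2 + (omega - omega_a)**2)           # Eq 41
    return lam, P

# usage sketch: rank modes by their peak contribution at a probe (cf. Eq 42)
# lam, P = mode_contributions(freqs, W_rec, tau, Phi, e_x)
# best_mode = np.argmax(P.max(axis=1))
```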

Parametrization of the two-population and retinotopic models

The 2x2 (full) connectivity matrix of the 2-population model is parametrized by the four parameters Jab (a, b ∈ {E, I}) as follows:

$W = \begin{pmatrix} J_{EE} & -J_{EI} \\ J_{IE} & -J_{II} \end{pmatrix}. \qquad (48)$

The DC stimulus input corresponds to feedforward excitatory inputs from LGN and targets both sub-populations only via the AMPA channel (since this input is time-independent, its distribution across NMDA and AMPA channels is actually of no consequence). As in the original SSN, we assumed this input scales linearly with contrast, c, but with varying relative strengths to the E and I captured by the two parameters gE and gI:

$I_{\rm DC} = c \begin{pmatrix} g_E \\ g_I \end{pmatrix}. \qquad (49)$

In the retinotopic model we index the neurons by their E/I type and retinotopic location. We parametrized the recurrent connection weight from the pre-synaptic E and I units at location y to the type a (a ∈ [E, I]) post-synaptic unit at location x by

$W_{x,a|y,E} \propto J_{a,E}\left[\lambda_{a,E}\,\delta_{x,y} + (1 - \lambda_{a,E})\, e^{-\frac{\|x - y\|}{\sigma_{a,E}}}\right] \qquad (50)$

for excitatory projections, and

$W_{x,a|y,I} \propto J_{a,I}\, e^{-\frac{\|x - y\|^2}{2\sigma_{a,I}^2}}, \qquad (51)$

for inhibitory ones. We are using proportionality instead of equal signs in the above equations, because a normalization was done such that the total weight of each type received by a unit was given by the corresponding Jab (independent of the σab and λab parameters). Recurrent connectivity was thus parametrized by the 2x2 matrices Jab and σab, the two λa,E, and the NMDA fraction, ρN, 11 parameters in total. For σII and σEI we used values (see S1 Table) small compared to the distance between neighboring columns (0.4 mm) so that inhibition was effectively local (i.e., intra-columnar); we did not vary σII and σEI across our randomly sampled networks.
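As an illustration of Eqs 50 and 51 (on a one-dimensional grid of columns for simplicity), the sketch below builds normalized horizontal weight matrices in which each row sums to the corresponding $J_{ab}$, including the intra-columnar boost $\lambda_{a,E}$; the grid spacing, ranges and total weights are placeholder values.

```python
import numpy as np

def horizontal_weights(xs, J, sigma, lam=0.0, excitatory=True):
    """Weights from columns y (matrix columns) to columns x (matrix rows),
    following Eq 50 (excitatory) or Eq 51 (inhibitory), normalized so that
    the total weight received by each column equals J."""
    d = np.abs(xs[:, None] - xs[None, :])                 # pairwise distances
    if excitatory:
        profile = lam * np.eye(len(xs)) + (1.0 - lam) * np.exp(-d / sigma)
    else:
        profile = np.exp(-d**2 / (2.0 * sigma**2))
    return J * profile / profile.sum(axis=1, keepdims=True)

# 1-D grid of columns, 0.4 mm apart (all numbers are placeholders)
xs = np.arange(0.0, 6.0, 0.4)
W_EE = horizontal_weights(xs, J=200.0, sigma=0.3, lam=0.5, excitatory=True)   # columnar boost
W_EI = horizontal_weights(xs, J=100.0, sigma=0.09, excitatory=False)          # short-range inhibition
print(W_EE.shape, W_EE.sum(axis=1)[:3])   # every row sums to J_EE = 200
```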

We modeled the external stimulus input to the type a unit at x by

$I^{\rm DC}_{a,x} = c\, g_a\, I_x \qquad (a \in \{E, I\}), \qquad (52)$

where gE and gI reflect the relative strengths of feedforward connections received by V1’s excitatory and inhibitory networks, and Ix captures the spatial profile of the visual stimulus. For a grating of radius rgrat we modeled the spatial contrast profile, Ix, as

$I_x = \frac{1}{1 + \exp\!\left(\frac{\|x\| - r_{\rm grat}}{w_{\rm RF}}\right)}. \qquad (53)$

The parameter $w_{\rm RF}$ smooths the edges of the grating (due to feedforward filtering by receptive fields of width ∼ $w_{\rm RF}$). Note that for $r_{\rm grat} \gg w_{\rm RF}$ (which was true for most grating sizes employed), local contrast is nearly uniform under the support of the external input (except within a “boundary layer” of width $w_{\rm RF}$ near $\|x\| = r_{\rm grat}$). For the Gabor stimulus we have

$I_x = e^{-\frac{\|x\|^2}{2\sigma_{\rm Gabor}^2}}. \qquad (54)$

We took σGabor = 0.5°, as in [1], and the peak contrast (c in Eq 52) was always 100% for this stimulus.
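A short sketch of this stimulus model (Eqs 52–54), evaluated on a one-dimensional grid of retinotopic positions; the contrast, gains and receptive-field width below are placeholders.

```python
import numpy as np

def grating_profile(x, r_grat, w_RF):
    """Soft-edged contrast profile of a grating of radius r_grat, Eq 53."""
    return 1.0 / (1.0 + np.exp((np.abs(x) - r_grat) / w_RF))

def gabor_profile(x, sigma_gabor=0.5):
    """Gaussian contrast profile of the Gabor stimulus, Eq 54."""
    return np.exp(-x**2 / (2.0 * sigma_gabor**2))

def feedforward_input(c, g_E, g_I, I_x):
    """Feedforward drive to the E and I units at each location, Eq 52."""
    return c * g_E * I_x, c * g_I * I_x

x = np.linspace(-3.0, 3.0, 61)                 # retinotopic positions (degrees)
I_E, I_I = feedforward_input(c=50.0, g_E=20.0, g_I=15.0,
                             I_x=grating_profile(x, r_grat=1.0, w_RF=0.1))
I_E_gabor, _ = feedforward_input(c=100.0, g_E=20.0, g_I=15.0, I_x=gabor_profile(x))
```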

Model parameters and parameter sampling

See S1 Table below for the values of all parameters or parameter ranges for the models used in different figures. For the models used in Figs 1 and 4, we found their parameters (Jab and ga, which are shared in both figures, and σa,E and λa,E for Fig 4) using random sampling (as further described below), searching for networks that would exhibit the local contrast-dependence of the gamma peak together with strong surround suppression.

For studying the robustness of the contrast dependence of gamma frequency in the two-population model in Fig 2, and in the case of the retinotopic SSN with a smooth fall-off of excitatory horizontal connectivity in Fig 6, we sampled parameters from wide biologically plausible ranges. To determine these ranges for the recurrent and feedforward weights, we first made rough biological estimates for the recurrent E and I weights (i.e., JaE and JaI, respectively, for a ∈ {E, I}), as well as the (excitatory) feedforward weights (gE and gI); we denote these estimates by JE*, JI* and g*, respectively. We then independently varied parameters controlling each type of weight between 0.5 to 1.5 times those estimates (see S1 Table for the actual values).

To come up with the mid-range estimates, JE*, JI* and g*, we relied on empirical estimates of the effect of recurrent and feedforward inputs on the membrane voltage of a post-synaptic neuron. Note that while Jab (and thus JE* and JI*) have dimensions of voltage (such that the recurrent input W r has our units of current), ga (and thus g*) have dimensions of current. In our model, we measured the currents in units of mV/s, by including an implicit factor of membrane capacitance in them. The membrane potential response to a unit current is normally given by the membrane resistance, which in our units becomes the membrane time constant, which we take to be τm = 0.01 s. So to obtain an estimate of g* from voltage measurements, we need to divide the estimate by τm = 0.01 s.

We estimated the effect of feedforward inputs on membrane voltage using measurements in cats and mice [41, 51–54] (see [55] for a review and discussion of these measurements). Based on these measurements, we estimate the maximum feedforward input, achieved for 100% contrast, to be on the order of the rest-to-threshold distance, which is around 20 mV [56]. This yields $g^* \approx \tau_m^{-1} \times 20\,\mathrm{mV}/(100\%) = 20$ mV/s per 1% contrast. The Jab parameters measure the total synaptic weight, which biologically is given by a unitary excitatory or inhibitory (depending on b) post-synaptic potential (EPSP or IPSP) times the total number of pre-synaptic V1 neurons, Kb, of type b. Based on anatomical measurements for sensory cortex (reviewed in [55]) we estimate the effective KE to be ∼400 (with a wide margin of uncertainty). And based on electrophysiological measurements we assume the median EPSP amplitude to be ∼0.5 mV. This yields $J_E^* = 0.5 \times 400 = 200$ mV. For the unitary IPSP amplitude, we used the same value of 0.5 mV, but assumed half as many inhibitory pre-synaptic inputs, due to the smaller number of inhibitory cells in the circuit. We thus took $J_I^* = J_E^*/2$.

In Fig 5, the four extra parameters controlling the spatial profile of excitatory horizontal connections in the retinotopic SSN were additionally sampled randomly as well. These parameters are λEE and λIE quantifying the intra-columnar excess connectivity, and σIE and σEE quantifying the length-scale (range) of the long-range components of excitatory recurrent connections. We sampled the two σ’s uniformly between 150 μm and 600 μm. The non-columnar model had λEE = λIE = 0, while for the columnar model we sampled these uniformly from the interval [0.25, 0.75].

Finally, we assumed that recurrent V1 excitatory synapses are dominated by AMPA, rather than NMDA, and therefore sampled ρN uniformly at random in the interval [0.3, 0.5].

All parameters were sampled uniformly and independently over their ranges, except for enforcement (by sample rejection) of three inequality constraints:

$J_{EI}\, J_{IE} > J_{EE}\, J_{II}, \qquad J_{II}\, g_E > J_{EI}\, g_I, \qquad \sigma_{IE} > \sigma_{EE}.$

Previous work has shown that the first inequality promotes stability (almost a necessary condition) [23, 57], the second inequality ensures that the network is not too strongly inhibition-dominated such that excitatory rates become too small [23, 57]. The last inequality is necessary for obtaining considerable surround suppression [22].
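The sampling procedure amounts to straightforward rejection sampling, sketched below; the numerical ranges given here are placeholders standing in for those listed in S1 Table.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_network(ranges, max_tries=10_000):
    """Draw parameters uniformly and independently, rejecting sets that violate
    J_EI*J_IE > J_EE*J_II, J_II*g_E > J_EI*g_I, or sigma_IE > sigma_EE."""
    for _ in range(max_tries):
        p = {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}
        if (p['J_EI'] * p['J_IE'] > p['J_EE'] * p['J_II']
                and p['J_II'] * p['g_E'] > p['J_EI'] * p['g_I']
                and p['sigma_IE'] > p['sigma_EE']):
            return p
    raise RuntimeError("no accepted parameter set")

# placeholder ranges (roughly 0.5-1.5 times mid-range estimates; see S1 Table for actual values)
ranges = {'J_EE': (100, 300), 'J_IE': (100, 300), 'J_EI': (50, 150), 'J_II': (50, 150),
          'g_E': (10, 30), 'g_I': (10, 30),
          'sigma_EE': (0.15, 0.6), 'sigma_IE': (0.15, 0.6)}
print(sample_network(ranges))
```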

Eigenvalue spectra of rate-based and multi-receptor SSNs in the presence and absence of NMDA

Here we prove that the spectrum of a linearized synaptic model without NMDA is the same as the spectrum of a linearized E/I rate model, with the exchange $\tau_{\rm AMPA} \to \tau_E$ and $\tau_{\rm GABA} \to \tau_I$. This means that, in particular, the formulae of [16] for eigenvalues in a 2-neuron/population model still hold for this model with the above replacements.

We start by rewriting the inverse Green’s function, using the Green’s function defined in Eq 24, which we now write in full matrix form. We will also keep the treatment general, allowing for q different receptor types (we also write in terms of the angular frequency ω = 2πf).

$G(\omega)^{-1} = A - \mathbf{W}\,\Phi\, P \qquad (55)$

where we define

$A \equiv -i\omega T + I \;\in\; \mathbb{C}^{qN\times qN} \qquad (56)$
$\mathbf{W} \equiv \begin{pmatrix} W^A \\ W^G \\ W^N \\ \vdots \end{pmatrix} \;\in\; \mathbb{R}^{qN\times N} \qquad (57)$
$P \equiv I_{N\times N} \otimes \mathbf{1}_q^T = \left(I_{N\times N},\, I_{N\times N},\, I_{N\times N},\, \ldots\right) \;\in\; \mathbb{R}^{N\times qN} \qquad (58)$

where $T = \mathrm{diag}(\tau_s) \otimes I_{N\times N}$ with $\tau_s \in \mathbb{R}^q$ (in our case $\tau_s = (\tau_A, \tau_G, \tau_N)$ or $(\tau_A, \tau_G)$), and $\mathbf{1}_q^T = (1, \ldots, 1) \in \mathbb{R}^q$.

The eigenvalue spectrum corresponds to the values of $z = -i\omega$ which make the determinant of $G(\omega)^{-1}$ vanish. Noting that the second term in Eq 55 is rank-deficient (it has at most rank N, instead of full rank qN), we make use of the “matrix determinant lemma” to write:

$\det(G^{-1}(\omega)) = \det(A)\,\det\!\left(I_{N\times N} - P A^{-1} \mathbf{W}\,\Phi\right) \qquad (59)$

It is not hard to see that

$P A^{-1} = \left(\frac{I_{N\times N}}{-i\omega\tau_A + 1},\; \frac{I_{N\times N}}{-i\omega\tau_G + 1},\; \frac{I_{N\times N}}{-i\omega\tau_N + 1},\; \ldots\right) \qquad (60)$

and therefore

$P A^{-1} \mathbf{W} = \sum_{\alpha=1}^{q} \frac{1}{-i\omega\tau_\alpha + 1}\, W^\alpha \qquad (61)$

We now limit to q = 2 with only AMPA and GABA.

$P A^{-1} \mathbf{W} = \frac{1}{-i\omega\tau_A + 1}\, W^A + \frac{1}{-i\omega\tau_G + 1}\, W^G = W \tilde{A}^{-1} \qquad (62)$

where we have made use of the specific forms of WA and WG (namely, that they have zero columns for inhibitory and excitatory neurons, respectively) from Eq 17, and where we have defined

$W = \sum_\alpha W^\alpha \;\in\; \mathbb{R}^{N\times N} \qquad (63)$
$\tilde{A} \equiv z\tilde{T} + I_{N\times N} \qquad (64)$

with $\tilde{T} = \mathrm{diag}(\tilde{\tau}) \in \mathbb{R}^{N\times N}$, where $\tilde{\tau} = (\tau_A, \ldots, \tau_G, \ldots) \in \mathbb{R}^N$ is the N-dimensional vector with first $N_E$ components equal to $\tau_A$ and the last $N_I$ components equal to $\tau_G$. After identifying $\tau_{A/G}$ with $\tau_{E/I}$, we thus see that $\tilde{T}$ is the same as the T matrix of the r-model (which is N-dimensional), as is W its connectivity matrix. Also noting that $(z\tilde{T} + I_{N\times N})^{-1}$ and Φ are both diagonal, we can commute them in Eq 62 to obtain:

$\det(G^{-1}(\omega)) = \det(A)\,\det\!\left(I_{N\times N} - W\Phi\tilde{A}^{-1}\right) \qquad (65)$
$= \frac{\det(A)}{\det(\tilde{A})}\,\det\!\left(z\tilde{T} + I_{N\times N} - W\Phi\right) \qquad (66)$
$= \frac{\det(A)}{\det(\tilde{A})}\,\det\!\left(z\tilde{T} + I_{N\times N} - \Phi W\right) \qquad (67)$

(to get the last line, do a similarity transform with Φ, of the matrix in the last determinant).

Now it is explicit that the zeros of the last determinant factor are the eigenvalues of the N-dimensional r-system (after τA/GτE/I identification).

The first factor, on the other hand, can be written as:

$\frac{\det(A)}{\det(\tilde{A})} = \frac{(z\tau_A + 1)^N (z\tau_G + 1)^N}{(z\tau_A + 1)^{N_E}(z\tau_G + 1)^{N_I}} = (z\tau_A + 1)^{N_I}(z\tau_G + 1)^{N_E} \qquad (68)$

So the spectrum also has N additional real eigenvalues (in addition to those of the r-model) with values $-\tau_A^{-1}$ and $-\tau_G^{-1}$, and multiplicities $N_I$ and $N_E$, respectively. (Thus in total we have 2N eigenvalues, as we should).

In particular, all oscillatory/complex eigenvalues are exactly those of the r-model in the no-NMDA case, which in the 2-neuron case are given by the formulae in [16].
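This correspondence is easy to verify numerically. The sketch below builds a small random Dale-respecting network, assembles both the two-receptor (AMPA/GABA) Jacobian of Eq 33 and the N-dimensional r-model Jacobian with $\tau_E = \tau_A$, $\tau_I = \tau_G$, and checks that their spectra agree up to the extra real eigenvalues $-\tau_A^{-1}$ and $-\tau_G^{-1}$; the matrices are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
N_E, N_I = 3, 2
N = N_E + N_I
tau_A, tau_G = 0.004, 0.005

# random Dale-respecting connectivity and gains (placeholders)
W = np.abs(rng.normal(size=(N, N)))
W[:, N_E:] *= -1.0                                   # inhibitory columns are negative
Phi = np.diag(np.abs(rng.normal(size=N)))

# r-model Jacobian with tau_E = tau_A, tau_I = tau_G
tau_tilde = np.array([tau_A] * N_E + [tau_G] * N_I)
J_r = (-np.eye(N) + W @ Phi) / tau_tilde[:, None]

# two-receptor (AMPA/GABA) Jacobian, Eq 33
W_A, W_G = np.where(W > 0, W, 0.0), np.where(W < 0, W, 0.0)
J = np.zeros((2 * N, 2 * N))
for a, (W_a, tau_a) in enumerate([(W_A, tau_A), (W_G, tau_G)]):
    for b in range(2):
        block = W_a @ Phi - (np.eye(N) if a == b else 0.0)
        J[a*N:(a+1)*N, b*N:(b+1)*N] = block / tau_a

eigs_full = np.sort_complex(np.linalg.eigvals(J))
eigs_pred = np.sort_complex(np.concatenate(
    [np.linalg.eigvals(J_r), [-1/tau_A] * N_I + [-1/tau_G] * N_E]))
print(np.allclose(eigs_full, eigs_pred))             # expected: True (up to numerical error)
```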

When the model has NMDA (or other) receptors (i.e., for q > 2), the above exact correspondence will break down. However, as we now show, the relative slowness of NMDA allows for approximate reductions to the case of two receptor model in different frequency regimes. We consider two regimes for the effect of NMDA:

  1. when |z| or ω is very small compared to the inverse NMDA time-constant: $\omega \ll \tau_N^{-1}$;

  2. when |z| or ω is very large compared to the inverse NMDA time-constant: $\omega \gg \tau_N^{-1}$.

The first regime is relevant for DC response and DC properties (such as surround suppression of steady-state rates). The second regime is approximately valid for gamma oscillations, thanks to the relatively high frequency of those.

In regime 1, it is obvious that the breakdown of E weights into the two types does not have any effects, simply because (setting ω to 0) time-scales do not play any role here. So the parameter ρN makes no difference to fixed point response properties.

In regime 2, looking at Eq 61, we note that the prefactor $\frac{1}{-i\omega\tau_\alpha + 1}$ for NMDA is very small and can be ignored. This means that for high frequencies (e.g., approximately frequencies around gamma) we can simply kill all NMDA weights, and only consider the AMPA weight matrix, $W^A$. In particular, in our model, where $W^A \propto W^N$ (Eq 17), the effect of NMDA on the gamma peak is approximately equivalent to reducing the total excitatory weights (which are what affect DC properties) by a scalar factor (which in our formalism is 1 − ρN) when it comes to gamma properties.

Analysis of gamma peak frequency in the two-population model

We consider now the case of the two-population model presented in Two-population model. We will also assume no NMDA contribution (or equivalently work in the very slow NMDA regime, and replace all excitatory weights with their AMPA part, as explained at the end of the previous subsection).

In this case the gamma peak frequency is closely approximated by the imaginary part of the eigenvalues of the Jacobian matrix:

$J = -T^{-1} + T^{-1} W \Phi \qquad (69)$
$= \begin{pmatrix} \gamma_E(-1 + W_{EE}\Phi_E) & -\gamma_E W_{EI}\Phi_I \\ \gamma_I W_{IE}\Phi_E & \gamma_I(-1 - W_{II}\Phi_I) \end{pmatrix} \qquad (70)$

where we defined $\gamma_E \equiv \tau_{\rm AMPA}^{-1}$ and $\gamma_I \equiv \tau_{\rm GABA}^{-1}$. Noting that the trace and determinant of J yield the sum and product of the eigenvalues, respectively, we obtain the expression (see [16])

$2\lambda_{1,2} = \gamma_E(W_{EE}\Phi_E - 1) - \gamma_I(W_{II}\Phi_I + 1) \pm \sqrt{\left[\gamma_E(W_{EE}\Phi_E - 1) + \gamma_I(W_{II}\Phi_I + 1)\right]^2 - 4\,\gamma_E\gamma_I W_{EI}W_{IE}\Phi_E\Phi_I} \qquad (71)$

A gamma peak exists only if the expression under the square root is negative, i.e.

$4\,\gamma_E\gamma_I W_{EI}W_{IE}\Phi_E\Phi_I > \left[\gamma_E(W_{EE}\Phi_E - 1) + \gamma_I(W_{II}\Phi_I + 1)\right]^2, \qquad (72)$

in which case, for the gamma peak angular frequency ω0, we (approximately) have

$4\omega_0^2 = 4\,\beta_E\beta_I W_{EI}W_{IE} - \left(\beta_E W_{EE} + \beta_I W_{II} + \gamma_I - \gamma_E\right)^2 \qquad (73)$

where we defined $\beta_X \equiv \gamma_X \Phi_X$ for X ∈ {E, I}.
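For a quick numerical illustration of Eq 73, the sketch below computes the predicted gamma peak frequency from the gains of Eq 22 at a few fixed-point rates that increase with contrast; the connection weights, time constants and rates are placeholders, chosen only so that the oscillatory condition of Eq 72 is satisfied.

```python
import numpy as np

# placeholder two-population parameters (not the paper's fitted values)
k, n = 0.04, 2.0
gamma_E, gamma_I = 1.0 / 0.004, 1.0 / 0.005        # 1/tau_AMPA, 1/tau_GABA
W_EE, W_EI, W_IE, W_II = 2.0, 1.2, 2.4, 1.0

def gamma_peak_freq(r_E, r_I):
    """Approximate gamma peak frequency (Hz) from Eq 73, given fixed-point rates."""
    Phi_E = n * k**(1.0/n) * r_E**(1.0 - 1.0/n)    # neural gains, Eq 22
    Phi_I = n * k**(1.0/n) * r_I**(1.0 - 1.0/n)
    beta_E, beta_I = gamma_E * Phi_E, gamma_I * Phi_I
    disc = 4*beta_E*beta_I*W_EI*W_IE - (beta_E*W_EE + beta_I*W_II + gamma_I - gamma_E)**2
    return None if disc <= 0 else np.sqrt(disc) / (2.0 * 2.0 * np.pi)   # omega_0 / (2 pi)

# fixed-point rates that grow with contrast (placeholders): the predicted peak moves upward
for r_E, r_I in [(5.0, 10.0), (15.0, 30.0), (30.0, 60.0)]:
    print(r_E, r_I, gamma_peak_freq(r_E, r_I))
```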

We will now obtain a simplified expression for the derivative of $\omega_0^2$ with respect to the contrast c, using the rectified supralinear nonlinearity of the SSN. Using $\Phi_* = n\,k^{\frac{1}{n}}\, r_*^{1-\frac{1}{n}}$ (where $r_*$ is the firing rate at the fixed point) we obtain

$\frac{d\beta_*}{dc} = \frac{n-1}{n}\,\beta_*\,\frac{d\ln r_*}{dc} \qquad (74)$

Then using Eq 73, and defining

$A \equiv \beta_E W_{EE} + \beta_I W_{II} + \gamma_I - \gamma_E \qquad (75)$

and $(\cdot)' \equiv \frac{d(\cdot)}{dc}$, we find:

$\frac{n}{n-1}\,\frac{d\omega_0^2}{dc} = \beta_E\beta_I W_{EI}W_{IE}\left[(\ln r_E)' + (\ln r_I)'\right] - \frac{2A}{4}\left[\beta_E W_{EE}(\ln r_E)' + \beta_I W_{II}(\ln r_I)'\right] \qquad (76)$
$= \omega_0^2\left[(\ln r_E)' + (\ln r_I)'\right] + \frac{1}{2}A^2\left[\frac{(\ln r_E)' + (\ln r_I)'}{2} - \frac{\sum_a w_a (\ln r_a)'}{\gamma_I - \gamma_E + \sum_a w_a}\right] \qquad (77)$

where the sums are over a ∈ {E, I} and we defined

$w_a \equiv \beta_a W_{aa}, \qquad a \in \{E, I\}. \qquad (78)$

Let us now focus on the sign of $\frac{d\omega_0^2}{dc}$. Assuming that we are in the gamma oscillatory regime (i.e., $\omega_0$ is real) and that the fixed point rates increase with contrast, then from Eq 77 we find that a sufficient condition for $\frac{d\omega_0^2}{dc} > 0$ is that the factor in the square brackets in Eq 77 is positive. In the solutions of the SSN most relevant to cortical biology, $(\ln r_I)'$ tends to be larger than $(\ln r_E)'$ (because excitatory rates tend to saturate or supersaturate earlier). We thus consider two extreme cases: $(\ln r_E)' = (\ln r_I)'$ and $(\ln r_E)' = 0$.

In the first case, the bracket becomes $(\ln r_I)'\left[1 - \frac{\sum_a w_a}{\gamma_I - \gamma_E + \sum_a w_a}\right] = (\ln r_I)'\,\frac{\gamma_I - \gamma_E}{\gamma_I - \gamma_E + \sum_a w_a}$, which is positive as long as $\gamma_I > \gamma_E$ (which is unfortunately not the case for GABA and AMPA).

In the second case, the bracket factor becomes $(\ln r_I)'\,\frac{\gamma_I - \gamma_E + \sum_a w_a - 2 w_I}{2\left(\gamma_I - \gamma_E + \sum_a w_a\right)} = (\ln r_I)'\,\frac{\gamma_I - \gamma_E + w_E - w_I}{2\left(\gamma_I - \gamma_E + \sum_a w_a\right)}$. This is positive (as long as the denominator is positive, which is true as long as $\gamma_I > \gamma_E$) if

$w_E - \gamma_E + \gamma_I > w_I. \qquad (79)$

But the stability of the fixed point dictates that the term outside the square root in Eq 71 (which equals twice the real part of the complex eigenvalues) has to be negative, and thus

$w_I > w_E - \gamma_E - \gamma_I. \qquad (80)$

Supporting information

S1 Table. Parameters of models used in different figures of the main text.

In Figs 2, 3 and 5, parameters were sampled independently and uniformly from the ranges given in the table, except for enforcing three inequality constraints (i.e., sampled parameter sets violating any of these inequalities were rejected). See the main text (Methods) for details. †: these were the ranges for sampled λEE and λIE of the columnar model; these parameters were zero in the non-columnar model.

S1 Fig. Behaviour of retinotopic V1 models with long-range inhibitory connections.

The format of the figure is exactly the same as in Fig 5 of the main text (and the reader is referred to the caption of that figure for the detailed guide). Similar to that main figure, this figure compares the locality of gamma contrast dependence in models with and without boosted intra-columnar recurrent excitatory connectivity (columnar vs. non-columnar models, respectively) across their parameter space. However, unlike those in the main figure, the sampled models here had long-range inhibitory connections. Specifically, in each sample the ranges of IE and II connections were set to two-thirds of the randomly-sampled ranges of the EE and EI excitatory connections, respectively (as in the main text, the excitatory ranges, alongside other parameters, were sampled randomly over a broad range). Thus, while I connections were 33% shorter than E connections, they had long and variable ranges across samples; by contrast, in the main Fig 5, both IE and II connections had a constant range of 0.09 mm (cf. our mini-column size of 0.4 mm), across all sampled models. As evident, e.g., from the stark contrast between the behaviour of samples in panels E vs. J (compare with the same panels in Fig 5 of the main text), the qualitative difference between the columnar vs. non-columnar models (in accounting for the local contrast dependence of gamma frequency) remains unchanged in the presence of long-range inhibition; our conclusions are thus robust with respect to the assumption of very short inhibitory connections.

S2 Fig. Hopf bifurcation diagram.

The left and right columns show the Hopf bifurcation diagrams for the two example models used in Figs 4 and 6 of the main text, respectively, when stimulated with gratings of full contrast. As the strength of recurrent excitation, JEE, is increased beyond a critical value, the network undergoes a Hopf bifurcation, switching from a state of damped oscillations to a state with sustained oscillations. To obtain these plots, at each value of JEE, the networks were simulated without noise for long enough to reach steady state: below the Hopf bifurcation the steady state corresponds to a stable fixed point, while above the bifurcation it corresponds to a stable limit cycle (2 seconds of network dynamics were simulated, containing many tens of oscillation cycles). In the plots of the top row the lower and upper branches of the red lines show the minimum and maximum excitatory firing rates (of the E unit in the center of the model’s retinotopic grid) throughout the steady state oscillations (below the bifurcation these lines overlap, as the oscillation amplitude is zero and maximum and minimum rates are equal). In the bottom row plots, the firing rate is replaced with the LFP signal recorded at the center of the grid. The values on the x axes show the factor by which JEE was amplified over the original values in Figs 4 and 6 of the main text. The vertical solid blue lines in the left and right columns correspond to the values of JEE used in the S3 and S4 Figs, respectively (the dashed lines correspond to the original values used in the main figures).

S3 Fig. Behaviour of noise-driven oscillations in the model of Fig 4 (main text) when pushed above the Hopf bifurcation.

Except for the bottom-right plot, the format of the rest of the figure is the same as in panels B-D and F-G of Fig 4 of the main text (and the reader is referred to the caption of that figure for the detailed guide). The parameters of the model are also the same as those in Fig 4, except for JEE which has been strengthened by a relative factor of 1.112. Unlike in Fig 4, the full stochastic dynamics of the model network were simulated (for 60 seconds, in order to allow for accurate estimation of power-spectra). In particular, the LFP power-spectra shown in the left-column plots were obtained from these simulations using the Welch periodogram method, followed by a Gaussian smoothing with a σ of 5 Hz (the dashed lines show the Welch periodogram without smoothing; conclusions are not sensitive to this smoothing). The peak frequencies were obtained from these power-spectra using the methods described in the main text. The bottom-right plot shows a portion of the simulated trajectory in the plane of the E and I firing rates in the center of the retinotopic grid. These show many cycles of the noise-driven oscillations (without input noise, the same plots would have shown overlapping deterministic trajectories going around a diagonally elongated oval-shaped limit cycle).

S4 Fig. Behaviour of the model of Fig 6 (main text) when pushed above the Hopf bifurcation.

Except for the bottom-right plot, the format of the rest of the figure is the same as in Fig 6 of the main text (and the reader is referred to the caption of that figure for the detailed guide). The parameters of the model are also the same as those in Fig 6, except for JEE which has been strengthened by a relative factor of 1.037. The simulation procedure and the description of the bottom-right plot are as given in the caption of S3 Fig.


Acknowledgments

We thank Takafumi Arakaki for technical help throughout this research, Luca Mazzucato for valuable comments on the manuscript, and Guillaume Hennequin for generously sharing LaTeX code. CH acknowledges technical help and invaluable feedback from Gabriel Barello, Elliott Abe, and David Wyrick.

Data Availability

A code package sufficient for producing the results of this paper (including a script for the samplings of models and calculations of their LFP power-spectra) can be found at https://gitlab.com/ahmadianlab/ssn_gamma.

Funding Statement

YA is supported by UK Research and Innovation, Biotechnology and Biological Sciences Research Council (www.ukri.org/councils/bbsrc) research grant BB/X013235/1. KDM is supported by National Science Foundation (www.nsf.gov) grant DBI-1707398, National Institutes of Health (www.nih.gov) grants U01NS108683, R01EY029999, and U19NS107613, Simons Foundation (www.simonsfoundation.org) award 543017, and the Gatsby Charitable Foundation (www.gatsby.org.uk). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Ray S, Maunsell JHR. Differences in gamma frequencies across visual cortex restrict their possible use in computation. Neuron. 2010;67(5):885–896. doi: 10.1016/j.neuron.2010.08.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2. Jia X, Xing D, Kohn A. No consistent relationship between gamma power and peak frequency in macaque primary visual cortex. Journal of Neuroscience. 2013;33(1):17–25. doi: 10.1523/JNEUROSCI.1687-12.2013 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Singer W. Neuronal synchrony: a versatile code for the definition of relations? Neuron. 1999;24(1):49–65. [DOI] [PubMed] [Google Scholar]
  • 4. Fries P. A mechanism for cognitive dynamics: Neuronal communication through neuronal coherence. Trends in Cognitive Sciences. 2005;9(10):474–480. doi: 10.1016/j.tics.2005.08.011 [DOI] [PubMed] [Google Scholar]
  • 5. Fries P. Rhythms for Cognition: Communication through Coherence. Neuron. 2015;88(1):220–235. doi: 10.1016/j.neuron.2015.09.034 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6. Ni J, Wunderle T, Lewis CM, Desimone R, Diester I, Fries P. Gamma-Rhythmic Gain Modulation. Neuron. 2016;92(1):240–251. doi: 10.1016/j.neuron.2016.09.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Palmigiano A, Geisel T, Wolf F, Battaglia D. Flexible information routing by transient synchrony. Nature neuroscience. 2017;20:1014–1022. doi: 10.1038/nn.4569 [DOI] [PubMed] [Google Scholar]
  • 8. Buzsáki G, Chrobak JJ. Temporal structure in spatially organized neuronal ensembles: a role for interneuronal networks. Current Opinion in Neurobiology. 1995;5(4):504–510. doi: 10.1016/0959-4388(95)80012-3 [DOI] [PubMed] [Google Scholar]
  • 9. Jefferys JGR, Traub RD, Whittington MA. Neuronal networks for induced ’40 Hz’ rhythms. Trends in Neurosciences. 1996;19(5):202–208. [DOI] [PubMed] [Google Scholar]
  • 10. Draguhn A, Buzsáki G. Neuronal Oscillations in Cortical Networks. Science. 2004;304:1926–1930. doi: 10.1126/science.1099745 [DOI] [PubMed] [Google Scholar]
  • 11. Hopfield JJ. Encoding for computation: Recognizing brief dynamical patterns by exploiting effects of weak rhythms on action-potential timing. Proceedings of the National Academy of Sciences of the United States of America. 2004;101(16):6255–6260. doi: 10.1073/pnas.0401125101 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Fries P, Nikolić D, Singer W. The gamma cycle. Trends in Neurosciences. 2007;30(7):309–316. doi: 10.1016/j.tins.2007.05.005 [DOI] [PubMed] [Google Scholar]
  • 13. Burns SP, Xing D, Shelley MJ, Shapley RM. Searching for autocoherence in the cortical network with a time-frequency analysis of the local field potential. Journal of Neuroscience. 2010;30(11):4033–4047. doi: 10.1523/JNEUROSCI.5319-09.2010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14. Henrie JA, Shapley R. LFP power spectra in V1 cortex: The graded effect of stimulus contrast. Journal of Neurophysiology. 2005;94(1):479–490. doi: 10.1152/jn.00919.2004 [DOI] [PubMed] [Google Scholar]
  • 15. Gieselmann MA, Thiele A. Comparison of spatial integration and surround suppression characteristics in spiking activity and the local field potential in macaque V1. Eur J Neurosci. 2008;28(3):447–459. doi: 10.1111/j.1460-9568.2008.06358.x [DOI] [PubMed] [Google Scholar]
  • 16. Tsodyks MV, Skaggs WE, Sejnowski TJ, McNaughton BL. Paradoxical effects of external modulation of inhibitory interneurons. J Neurosci. 1997;17:4382–4388. doi: 10.1523/JNEUROSCI.17-11-04382.1997 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17. Brunel N, Wang XJ. What determines the frequency of fast network oscillations with irregular neural discharges? I. Synaptic dynamics and excitation-inhibition balance. J Neurophysiol. 2003;90(1):415–430. doi: 10.1152/jn.01095.2002 [DOI] [PubMed] [Google Scholar]
  • 18. Buzsáki G, Wang XJ. Mechanisms of Gamma Oscillations. Annual Review of Neuroscience. 2012;35(1):203–225. doi: 10.1146/annurev-neuro-062111-150444 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19. Gilbert CD, Wiesel TN. Columnar specificity of intrinsic horizontal and corticocortical connections in cat visual cortex. Journal of Neuroscience. 1989;9(7):2432–2442. doi: 10.1523/JNEUROSCI.09-07-02432.1989 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Cavanaugh JR, Bair W, Movshon JA. Nature and interaction of signals from the receptive field center and surround in macaque V1 neurons. Journal of Neurophysiology. 2002;88(5):2530–2546. doi: 10.1152/jn.00692.2001 [DOI] [PubMed] [Google Scholar]
  • 21. Schwabe L, Ichida JM, Shushruth S, Mangapathy P, Angelucci A. Contrast-dependence of surround suppression in Macaque V1: Experimental testing of a recurrent network model. NeuroImage. 2010;52(3):777–792. doi: 10.1016/j.neuroimage.2010.01.032 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Rubin DB, Van Hooser SD, Miller KD. The stabilized supralinear network: a unifying circuit motif underlying multi-input integration in sensory cortex. Neuron. 2015;85(2):402–417. doi: 10.1016/j.neuron.2014.12.026 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Ahmadian Y, Rubin DB, Miller KD. Analysis of the stabilized supralinear network. Neural Computation. 2013;25(8):1994–2037. doi: 10.1162/NECO_a_00472 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24. Ledoux E, Brunel N. Dynamics of Networks of Excitatory and Inhibitory Neurons in Response to Time-Dependent Inputs. Frontiers in Computational Neuroscience. 2011;5:1–17. doi: 10.3389/fncom.2011.00025 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25. Barbieri F, Mazzoni A, Logothetis NK, Panzeri S, Brunel N. Stimulus dependence of local field potential spectra: Experiment versus theory. Journal of Neuroscience. 2014;34(44):14589–14605. doi: 10.1523/JNEUROSCI.5365-13.2014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26. Burns SP, Xing D, Shapley RM. Is gamma-band activity in the local field potential of V1 cortex a “clock” or filtered noise? J Neurosci. 2011;31(26):9658–9664. doi: 10.1523/JNEUROSCI.0660-11.2011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27. Kang K, Shelley M, Henrie JA, Shapley R. LFP spectral peaks in V1 cortex: network resonance and cortico-cortical feedback. J Comput Neurosci. 2010;29(3):495–507. doi: 10.1007/s10827-009-0190-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28. Wallace E, Benayoun M, van Drongelen W, Cowan JD. Emergent Oscillations in Networks of Stochastic Spiking Neurons. PLOS ONE. 2011;6(5):e14804. doi: 10.1371/journal.pone.0014804 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29. Xing D, Shen Y, Burns S, Yeh CI, Shapley R, Li W. Stochastic generation of gamma-band activity in primary visual cortex of awake and anesthetized monkeys. Journal of Neuroscience. 2012;32(40):13873–13880. doi: 10.1523/JNEUROSCI.5644-11.2012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30. Dumont G, Northoff G, Longtin A. A Stochastic Model of Input Effectiveness during Irregular Gamma Rhythms. Journal of Computational Neuroscience. 2016;40(1):85–101. doi: 10.1007/s10827-015-0583-3 [DOI] [PubMed] [Google Scholar]
  • 31. Anderson J, Lampl I, Gillespie D, Ferster D. The Contribution of Noise to Contrast Invariance of Orientation Tuning in Cat Visual Cortex. Science (New York, NY). 2000;290(5498):1968–1972. doi: 10.1126/science.290.5498.1968 [DOI] [PubMed] [Google Scholar]
  • 32. Priebe NJ, Ferster D. Inhibition, Spike Threshold, and Stimulus Selectivity in Primary Visual Cortex. Neuron. 2008;57(4):482–497. doi: 10.1016/j.neuron.2008.02.005 [DOI] [PubMed] [Google Scholar]
  • 33. Dayan P, Abbott L. Theoretical Neuroscience. Cambridge: MIT Press; 2001. [Google Scholar]
  • 34. Hennequin G, Ahmadian Y, Rubin DB, Lengyel M, Miller KD. The Dynamical Regime of Sensory Cortex: Stable Dynamics around a Single Stimulus-Tuned Attractor Account for Patterns of Noise Variability. Neuron. 2018;98(4):846–860.e5. doi: 10.1016/j.neuron.2018.04.017 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35. Brunel N, Chance FS, Fourcaud N, Abbott LF. Effects of synaptic noise and filtering on the frequency response of spiking neurons. Physical Review Letters. 2001;86(10):2186–2189. doi: 10.1103/PhysRevLett.86.2186 [DOI] [PubMed] [Google Scholar]
  • 36. Fourcaud N, Brunel N. Dynamics of the firing probability of noisy integrate-and-fire neurons. Neural Computation. 2002;14:2057–2110. doi: 10.1162/089976602320264015 [DOI] [PubMed] [Google Scholar]
  • 37. Einevoll GT, Kayser C, Logothetis NK, Panzeri S. Modelling and Analysis of Local Field Potentials for Studying the Function of Cortical Circuits. Nature Reviews Neuroscience. 2013;14(11):770–785. doi: 10.1038/nrn3599 [DOI] [PubMed] [Google Scholar]
  • 38. Kaplan E, Purpura K, Shapley RM. Contrast affects the transmission of visual information through the mammalian lateral geniculate nucleus. The Journal of physiology. 1987;391:267–288. doi: 10.1113/jphysiol.1987.sp016737 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39. Lund JS, Yoshioka T, Levitt JB. Comparison of Intrinsic Connectivity in Different Areas of Macaque Monkey Cerebral Cortex. Cerebral Cortex. 1993;3(2):148–162. doi: 10.1093/cercor/3.2.148 [DOI] [PubMed] [Google Scholar]
  • 40. Priebe NJ, Ferster D. Direction selectivity of excitation and inhibition in simple cells of the cat primary visual cortex. Neuron. 2005;45:133–45. doi: 10.1016/j.neuron.2004.12.024 [DOI] [PubMed] [Google Scholar]
  • 41. Finn IM, Priebe NJ, Ferster D. The emergence of contrast-invariant orientation tuning in simple cells of cat visual cortex. Neuron. 2007;54(1):137–152. doi: 10.1016/j.neuron.2007.02.029 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42. Voges N, Schüz A, Aertsen A, Rotter S. A modeler’s view on the spatial structure of intrinsic horizontal connectivity in the neocortex. Progress in Neurobiology. 2010;92(3):277–292. doi: 10.1016/j.pneurobio.2010.05.001 [DOI] [PubMed] [Google Scholar]
  • 43. Obeid D, Miller KD. Stabilized Supralinear Network: Model of Layer 2/3 of the Primary Visual Cortex. bioRxiv [Preprint]. 2021. bioRxiv 424892. [Google Scholar]
  • 44. Ozeki H, Finn IM, Schaffer ES, Miller KD, Ferster D. Inhibitory stabilization of the cortical network underlies visual surround suppression. Neuron. 2009;62:578–592. doi: 10.1016/j.neuron.2009.03.028 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45. Adesnik H. Synaptic Mechanisms of Feature Coding in the Visual Cortex of Awake Mice. Neuron. 2017;95:1147–1159. doi: 10.1016/j.neuron.2017.08.014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46. Shushruth S, Mangapathy P, Ichida JM, Bressloff PC, Schwabe L, Angelucci A. Strong recurrent networks compute the orientation tuning of surround modulation in the primate primary visual cortex. J Neurosci. 2012;32:308–321. doi: 10.1523/JNEUROSCI.3789-11.2012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47. Trott AR, Born RT. Input-gain control produces feature-specific surround suppression. J Neurosci. 2015;35:4973–4982. doi: 10.1523/JNEUROSCI.4000-14.2015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48. Liu LD, Miller KD, Pack CC. A Unifying Motif for Spatial and Directional Surround Suppression. J Neurosci. 2018;38(4):989–999. doi: 10.1523/JNEUROSCI.2386-17.2017 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49. Oldenburg IA, Hendricks WD, Handy G, Shamardani K, Bounds HA, Doiron B, et al. The Logic of Recurrent Circuits in the Primary Visual Cortex. Nature Neuroscience. 2024;27(1):137–147. doi: 10.1038/s41593-023-01510-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50. Kondo S, Yoshida T, Ohki K. Mixed functional microarchitectures for orientation selectivity in the mouse primary visual cortex. Nature communications. 2016;7:13210. doi: 10.1038/ncomms13210 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51. Ferster D, Chung S, Wheat H. Orientation selectivity of thalamic input to simple cells of cat visual cortex. Nature. 1996;380:249–252. doi: 10.1038/380249a0 [DOI] [PubMed] [Google Scholar]
  • 52. Chung S, Ferster D. Strength and orientation tuning of the thalamic input to simple cells revealed by electrically evoked cortical suppression. Neuron. 1998;20:1177–89. doi: 10.1016/S0896-6273(00)80498-5 [DOI] [PubMed] [Google Scholar]
  • 53. Lien AD, Scanziani M. Tuned thalamic excitation is amplified by visual cortical circuits. Nat Neurosci. 2013;16:1315–1323. doi: 10.1038/nn.3488 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54. Li LY, Li YT, Zhou M, Tao HW, Zhang LI. Intracortical multiplication of thalamocortical signals in mouse auditory cortex. Nat Neurosci. 2013;16:1179–1181. doi: 10.1038/nn.3493 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55. Ahmadian Y, Miller KD. What Is the Dynamical Regime of Cerebral Cortex? Neuron. 2021;109(21):3373–3391. doi: 10.1016/j.neuron.2021.07.031 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56. Constantinople CM, Bruno RM. Deep cortical layers are activated directly by thalamus. Science. 2013;340:1591–1594. doi: 10.1126/science.1236425 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57. van Vreeswijk C, Sompolinsky H. Chaotic balanced state in a model of cortical circuits. Neural Computation. 1998;10:1321–1371. doi: 10.1162/089976698300017214 [DOI] [PubMed] [Google Scholar]
PLoS Comput Biol. doi: 10.1371/journal.pcbi.1012190.r001

Decision Letter 0

Thomas Serre, Tatiana Engel

28 Nov 2023

Dear Dr. Ahmadian,

Thank you very much for submitting your manuscript "The stabilized supralinear network accounts for the contrast dependence of visual cortical gamma oscillations" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Tatiana Engel

Guest Editor

PLOS Computational Biology

Thomas Serre

Section Editor

PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: The paper provides a novel mechanistic explanation for the relationship between visual stimulus contrast and gamma oscillation frequency in neural activity recorded in macaque primary visual cortex. A stabilized supralinear network (SSN) robustly captures the contrast dependence of gamma peak frequency, and a retinotopically organized SSN model with both short-range and long-range excitatory horizontal connections can exhibit both surround suppression and the local contrast dependence of gamma peak frequency. The paper is clearly written. The use of linearized equations to draw insights into the SSN's behavior aids understanding, providing further insights into the proposed cortical circuit mechanisms. The work has the potential to bridge the gap between empirical neurophysiological observations and computational models in the field of visual neuroscience.

Questions.

Could the authors elaborate on whether their results, especially their prediction about excitatory connectivity, depend on their chosen inhibitory connections? Is their choice supported by some experimental evidence? Did the authors explore inhibitory connections with a range longer than the source column but still shorter than the excitatory range?

Would the results differ qualitatively if some structure in the connectivity was included, for instance due to orientation-selective columns? I think the paper would be strengthened by a discussion of this topic.

Minor suggestions and typos:

1) Page 20. Left column, first paragraph. “Only the fast receptors, AMPA and GABA, have timescales relevant to gamma band oscillations. These receptors have very fast rise times, which correspond to frequencies much higher than the gamma band. We therefore ignored the rise times of all receptors.”

This sentence sounds a bit unclear. The authors may want to mention, as they do in the main text, that they also ignored the rise times of NMDA currents since they are significantly faster than the characteristic timescales of gamma oscillations.

2) Page 20, right column. In the first paragraph, the authors clearly justify the reasons why a static input-output transfer function is a good approximation, given the synaptic dynamics they consider. However, at the end of the following paragraph, they add:

“However, as long as those gain filters are feature-less over the gamma band, their frequency dependence would not qualitatively affect the location of the gamma peak and its stimulus dependence. Thus we expect that the static I/O approximation will not alter our qualitative results.”

The authors might provide clearer insights into what they mean by 'feature-less over the gamma band'. Is this in reference to the weak dependence of responses on frequency due to the amplitude of synaptic noise and its decay time?

3) Fig. 2. Caption: The statement 'Panels A-E and F-J show results for the columnar and non-columnar models, respectively' seems to be misplaced. Figure 2 only showcases panels A-E, which are related to the non-retinotopic model.

4) Fig. 5. Caption. Last sentence. Typo: ….H and;

5) Methods, two lines above eq. 13. Typo: “N -dimensional vector of inputs vvt”

6) Methods, two lines below eq. 13. Typo: “and f acts element-wise”. Is it f or F?

Reviewer #2: This paper investigates a model of V1 with a supralinear gain function (same for all E, I populations), featuring both intra-column and inter-column horizontal connections, which captured the contrast dependence of gamma peak frequency, surround suppression and the local contrast dependence of gamma oscillations. They consider a retinotopic E-I network, with each pair of E-I populations representing a hypercolumn in the visual cortex. Thus, the eigenmode of one column could affect the spectrum of other columns depending on the inter-column connectivity, which has been studied explicitly in the method section. In addition to the receptive field distance-dependent inter-column connectivity, a strong intra-column connection is added, which they show is critical to capturing the local contrast dependence, such that it is the local stimulus contrast that determines the peak frequency of gamma oscillations at a cortical location under a stimulus with non-uniform contrast. The paper is written clearly and is easy to read. The figures are presented clearly and the analytical work is laid out reasonably well. The effort of modeling retinotopic cortical space is a necessary extension to existing models in order to both explain existing data and make any novel predictions to test the model.

This reviewer is however not convinced that there are clear predictions from the study that help us validate or reject the underlying mechanism of increased gamma power observed in LFPs. The study considers an E-I network which, even under the strong external input caused by increasing contrast, reaches a stable state in the absence of noise while exhibiting transient (damped) oscillations under noise. This requires the system to be close to, but below, a Hopf bifurcation without noise. The authors argue that 'Gamma oscillations do not behave like sustained oscillations,… as they are not auto-coherent and their timing and duration vary stochastically, resulting in a single broad peak in the power-spectrum, with no visible higher harmonics, consistent with transient (damped) and noise driven oscillations'. However, there is a body of literature that models gamma oscillations as noisy ISN limit cycles in stochastic models (Benayoun et al. 2010, Wallace et al. 2011, Dumont et al. 2016, Li et al. 2022, etc.). These models involve a Hopf bifurcation and capture many statistical properties of gamma oscillations. It is possible to see broad peaks in model networks with noisy limit cycles in an ISN/SSN (see above), which can vary in strength and central frequency as a function of not only network parameters but also input to the network: the imaginary part of the eigenvalues varies with increasing external input, similar to this proposal. These properties appear to capture many aspects of visually evoked gamma when the neural transfer function is operating in a region of accelerating nonlinearity (Veit et al., Jadi & Sejnowski). In the current study, the authors analyze the power-spectrum by applying a linearization scheme: they first find the stable point of the noise-free system and then perturb it with noise, analyzing the noise-driven deviations. Then, by analyzing the corresponding Fourier spectrum with Green's functions, they calculate the contribution of each individual eigenmode to the power spectrum, characterized by the ratio of the LFP to the noise power spectrum, demonstrating that modes with eigenvalues having a less negative real part make a stronger contribution. In the framework of noisy limit cycles in an SSN/ISN, by contrast, the more positive the real part of the eigenvalue is, the greater the amplitude. It is not clear if a systematic analysis for the detection of higher harmonics has been conducted on electrophysiological data (visible harmonics?). Thus it is not clear to the reviewer what makes the proposed underlying mechanism a better choice. This is a reasonable alternate model for the observed peaks in the power spectrum of the LFP in visual cortex. What the reviewer would like to see additionally is a proposal for one or more experiments to test model predictions that can help support or reject this or the above-mentioned alternate underlying mechanisms in a concrete way. Alternately, the authors could discuss how a Hopf-bifurcation-based mechanism in this retinotopic model would or would not recapitulate the experimental findings. What would possibly need to change in terms of parameters or connectivity? It is possible that either mechanism for broad peaks (damped oscillations/noisy limit cycles in superlinear ISNs) in this retinotopic model would work. It is possible that depending on stimulus properties (large vs. small, spatial frequency, etc.) the underlying mechanism could switch (network on either side of the Hopf bifurcation).
If there are distinct model predictions for, say, what would happen when you change the contrast of small vs. large stimuli, this would be very testable in current experiments. This is important because we still don’t have a clear understanding of a functional role for the gamma dynamics that give these broad peaks. They could very well be diagnostic of the underlying operating regime (Ray & Maunsell, 2010). The modeling will then be valuable (as something not possible experimentally) and shed light on this issue. Future studies could then explore other non-oscillatory implications of these visual cortical networks going in and out of regimes under different patterns of stimulation.

There is at least one typo in the Methods section (p. 20, left column, 16th line: change 'w' to 'w_\alpha').

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No: The code is not available yet, but the authors say that a cloud repository containing all used code will be made fully available before publication and referenced in the manuscript.

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc.. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1012190.r003

Decision Letter 1

Thomas Serre, Tatiana Engel

23 May 2024

Dear Dr. Ahmadian,

We are pleased to inform you that your manuscript 'The stabilized supralinear network accounts for the contrast dependence of visual cortical gamma oscillations' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Tatiana Engel

Guest Editor

PLOS Computational Biology

Thomas Serre

Section Editor

PLOS Computational Biology

***********************************************************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: My questions raised in the previous review have been satisfactorily addressed. This version of the manuscript incorporating new analysis on the network dynamics above the Hopf bifurcation represents a significant improvement over the original submission.

Reviewer #2: The authors have satisfactorily addressed the concerns raised in the prior review. No further comments on the revision.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: None

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1012190.r004

Acceptance letter

Thomas Serre, Tatiana Engel

17 Jun 2024

PCOMPBIOL-D-23-01209R1

The stabilized supralinear network accounts for the contrast dependence of visual cortical gamma oscillations

Dear Dr Ahmadian,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Zsofia Freund

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Table. Parameters of models used in different figures of the main text.

    In Figs 2, 3 and 5, parameters were sampled independently and uniformly from the ranges given in the table, except for enforcing three inequality constraints (i.e., sampled parameter sets violating any of these inequalities were rejected). See the main text (Methods) for details. †: these were the ranges for sampled λEE and λIE of the columnar model; these parameters were zero in the non-columnar model.

    (PDF)

    pcbi.1012190.s001.pdf (257.9KB, pdf)
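    The parameter sampling described in the S1 Table caption is a simple rejection-sampling procedure. The Python sketch below illustrates the idea; the parameter ranges and the satisfies_constraints predicate are placeholders, and do not reproduce the actual ranges in S1 Table or the three inequality constraints specified in Methods.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ranges for illustration; the actual ranges are those listed in S1 Table.
PARAM_RANGES = {
    "J_EE": (0.1, 3.0), "J_EI": (0.1, 3.0),
    "J_IE": (0.1, 3.0), "J_II": (0.1, 3.0),
}

def satisfies_constraints(p):
    # Placeholder for the three inequality constraints enforced in the paper
    # (this particular inequality is illustrative only).
    return p["J_EE"] * p["J_II"] < p["J_EI"] * p["J_IE"]

def sample_parameter_set():
    """Draw each parameter independently and uniformly from its range,
    rejecting any parameter set that violates the inequality constraints."""
    while True:
        p = {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}
        if satisfies_constraints(p):
            return p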
    S1 Fig. Behaviour of retinotopic V1 models with long-range inhibitory connections.

    The format of the figure is exactly the same as in Fig 5 of the main text (and the reader is referred to the caption of that figure for the detailed guide). Similar to that main figure, this figure compares the locality of gamma contrast dependence in models with and without boosted intra-columnar recurrent excitatory connectivity (columnar vs. non-columnar models, respectively) across their parameter space. However, unlike those in the main figure, the sampled models here had long-range inhibitory connections. Specifically, in each sample the ranges of the IE and II connections were set to two-thirds of the randomly sampled ranges of the EE and EI excitatory connections, respectively (as in the main text, the excitatory ranges, alongside other parameters, were sampled randomly over a broad range). Thus, while I connections were 33% shorter than E connections, they had long and variable ranges across samples; by contrast, in the main Fig 5, both the IE and II connections had a constant range of 0.09 mm (cf. our mini-column size of 0.4 mm) across all sampled models. As evident, e.g., from the stark contrast between the behaviour of samples in panels E vs. J (compare with the same panels in Fig 5 of the main text), the qualitative difference between the columnar and non-columnar models (in accounting for the local contrast dependence of gamma frequency) remains unchanged in the presence of long-range inhibition; our conclusions are thus robust with respect to the assumption of very short inhibitory connections.

    (PDF)

    pcbi.1012190.s002.pdf (46.1KB, pdf)
    Attachment

    Submitted filename: rebuttal_letter.pdf

    pcbi.1012190.s006.pdf (110.3KB, pdf)


