Abstract
Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity.
Introduction
When arriving at the cortex from the sensory periphery, sensory signals are further processed by local recurrent networks. Indeed, the vast majority of the connections a cortical neuron receives originate from the cortical network within which it is embedded, and only a small fraction comes from feedforward afferents: the fraction of recurrent connections has been estimated to be as large as 80% [1]. The precise role of this recurrent network in sensory processing, however, is not yet fully clear.
In the primary visual cortex of mammals like carnivores and primates, for instance, it has been proposed that the recurrent network might be mainly responsible for the amplification of orientation selectivity [2], [3]. A small bias provided by the feedforward afferents would suffice, and selectivity would then be amplified by a non-linear mechanism implemented by the recurrent network. This mechanism relies on the feature-specific connectivity assumed in the model, where neurons with similar input selectivities are connected to each other with a higher probability. This, in turn, could follow from the arrangement of neurons in orientation maps [4]–[6], which implies that nearby neurons have similar preferred orientations. As nearby neurons are also connected with a higher likelihood than distant neurons, feature-specific connectivity is a straightforward consequence in this scenario.
Feature-specific connectivity is not evident in all species, however. In rodent visual cortex, for instance, a salt-and-pepper organization of orientation selectivity has been reported, with no apparent spatial clustering of neurons according to their preferred orientations [6]. As a result, each neuron receives a heterogeneous input from pre-synaptic sources with different preferred orientations [7].
Although an over-representation of connections between neurons of similar preferred orientations has been reported in rodents [8]–[12], presumably as a result of a Hebbian growth process during a later stage of development [13], such feature-specific connectivity is not yet statistically significant immediately after eye opening [10]. A comparable level of orientation selectivity, however, has indeed been reported already at this stage [10]. If cortical recurrent networks make a contribution to sensory processing at this stage, random recurrent networks should be chosen as a model [14]–[16]. Activity-dependent reorganization of the network, however, may still refine the connectivity and improve the performance of the processing later during development.
Here we study the distribution of orientation selectivity in random recurrent networks with heterogeneous synaptic projections, i.e. networks where the recurrent connectivity does not depend on the preferred feature of the input to the neurons. We show that in structurally homogeneous networks, the heterogeneity in functional connectivity, i.e. the heterogeneity in preferred orientations of recurrently connected neurons, is indeed responsible for a broad distribution of selectivities. A linear analysis of the network operation can account quite precisely for this distribution, for a wide range of network topologies including Erdős-Rényi random networks and networks with distance-dependent connectivity.
Methods
Network Model
In this study, we consider networks of leaky integrate-and-fire (LIF) neurons. For this spiking neuron model, the sub-threshold dynamics of the membrane potential V_i(t) of neuron i is described by the leaky-integrator equation
(1)  $\tau \, \frac{dV_i(t)}{dt} = -V_i(t) + R \, I_i(t)$
The current I_i(t) represents the total input to neuron i, the integration of which is governed by the leak resistance R and the membrane time constant τ. When the voltage reaches the threshold at V_th, a spike is generated and transmitted to all postsynaptic neurons, and the membrane potential is reset to the resting potential at V_reset. It remains at this level for a short absolute refractory period, t_ref, during which all synaptic currents are shunted.
The response statistics of a LIF neuron driven by randomly arriving input spikes can be computed analytically in the stationary case. Assuming a fixed voltage threshold, V_th, the solution of the first-passage time problem in response to randomly and rapidly fluctuating input yields explicit expressions for the moments of the inter-spike interval distribution [17], [18]. In particular, the mean response rate of the neuron, r = F(μ, σ²), is obtained in terms of the mean, μ, and variance, σ², of the fluctuating input as
(2)  $r = F(\mu, \sigma^2) = \left[\, t_{ref} + \tau \sqrt{\pi} \int_{y_r}^{y_\theta} e^{u^2}\,\big(1 + \operatorname{erf}(u)\big)\, du \,\right]^{-1}$
with $y_\theta = (V_{th} - \mu)/\sigma$ and $y_r = (V_{reset} - \mu)/\sigma$.
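For concreteness, the following minimal sketch evaluates Eq. (2) numerically in Python; all parameter values (membrane time constant, refractory period, threshold, reset) are illustrative placeholders, not necessarily those used in our simulations.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

def siegert_rate(mu, sigma, tau=0.02, t_ref=0.002, v_th=20.0, v_reset=0.0):
    """Stationary firing rate of an LIF neuron (Eq. (2)), in spikes/s.

    mu, sigma: mean and standard deviation of the fluctuating input,
    in the same (voltage) units as v_th and v_reset.
    """
    y_th = (v_th - mu) / sigma      # upper integration bound
    y_r = (v_reset - mu) / sigma    # lower integration bound
    # note: exp(u**2) overflows for strongly subthreshold inputs;
    # this direct form is fine for moderate bounds only
    integrand = lambda u: np.exp(u**2) * (1.0 + erf(u))
    integral, _ = quad(integrand, y_r, y_th)
    return 1.0 / (t_ref + tau * np.sqrt(np.pi) * integral)
```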
Employing a mean-field ansatz, the above theory can be applied to networks of identical pulse-coupled LIF neurons, randomly connected with homogeneous in-degrees, and driven by external excitatory input of the same strength. Under these conditions, all neurons exhibit the same mean firing rate, which can be determined by a straightforward self-consistency argument [19], [20]: the firing rate is a function of the first two cumulants of the input fluctuations, μ and σ², which are, in turn, functions of the input. If s is the input (stimulus) firing rate and r is the mean response rate of all neurons in the network, we have the relation
(3)  $\mu = \tau\,\big(J_s s + J K_E r - g J K_I r\big), \qquad \sigma^2 = \tau\,\big(J_s^2 s + J^2 K_E r + g^2 J^2 K_I r\big)$
Here J_s denotes the amplitude of an excitatory post-synaptic potential (EPSP) of external inputs, and J denotes the amplitude of recurrent EPSPs. The factor g is the inhibition-excitation ratio, which fixes the strength of inhibitory post-synaptic potentials (IPSPs) to −gJ. Synapses are modeled as brief current pulses, such that the pre-synaptic current is delivered to the post-synaptic neuron instantaneously, after a fixed transmission delay t_d.
The remaining structural parameters are the total number of neurons in the network, N, the connection probability, ε, and the fraction β of neurons in the network that are excitatory (N_E = βN), implying that a fraction 1 − β is inhibitory (N_I = (1 − β)N). For all networks considered here, N and β were held fixed, and the connection probability was always ε = 0.1. For all network connectivities, we fix the in-degrees, separately for the excitatory and the inhibitory population: each neuron, be it excitatory or inhibitory, receives exactly K_E = εN_E connections randomly sampled from the excitatory population and K_I = εN_I connections randomly sampled from the inhibitory population. Multiple synaptic contacts and self-contacts are excluded.
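The fixed in-degree scheme is straightforward to implement; the sketch below builds such a weight matrix in Python. Apart from ε = 0.1, all numerical values (N, β, J, g) are illustrative assumptions, not the parameters of the paper.

```python
import numpy as np

def fixed_indegree_weights(N=1000, beta=0.8, eps=0.1, J=0.1, g=8.0, seed=0):
    """Weight matrix W with exactly K_E excitatory and K_I inhibitory
    inputs per neuron; no self-contacts, no multiple contacts."""
    rng = np.random.default_rng(seed)
    NE = int(beta * N)                        # neurons 0..NE-1 are excitatory
    KE, KI = int(eps * NE), int(eps * (N - NE))
    exc, inh = np.arange(NE), np.arange(NE, N)
    W = np.zeros((N, N))
    for i in range(N):
        pre_e = rng.choice(exc[exc != i], size=KE, replace=False)
        pre_i = rng.choice(inh[inh != i], size=KI, replace=False)
        W[i, pre_e] = J                       # excitatory EPSP amplitude
        W[i, pre_i] = -g * J                  # inhibitory amplitude, -gJ
    return W
```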
In our simulations, inputs are stationary and independent Poisson processes, described by a vector s of average firing rates. Its i-th entry, s_i, the average firing rate of the input to the i-th neuron, depends on the stimulus orientation θ and the input preferred orientation (PO) θ*_i of the neuron according to
(4)  $s_i(\theta) = s_b + s_m \cos\!\big(2(\theta - \theta^*_i)\big)$
The baseline s_b is the level of input common to all orientations, and the peak input is s_b + s_m. The input PO θ*_i is assigned randomly and independently to each neuron in the population. To measure the output tuning curves in numerical simulations, we stimulated the networks at different stimulus orientations, covering the full range between 0° and 180° in equal steps. The stimulation at each orientation was run for a fixed period with a fixed simulation time step, and onset transients were discarded.
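As an illustration, the input rate vectors of Eq. (4) can be generated as follows; the baseline and modulation values here are placeholders, not the rates used in the simulations.

```python
import numpy as np

def input_rates(theta, theta_star, s_b, s_m):
    """Eq. (4): s_i(theta) = s_b + s_m * cos(2*(theta - theta*_i))."""
    return s_b + s_m * np.cos(2.0 * (theta - theta_star))

rng = np.random.default_rng(1)
theta_star = rng.uniform(0.0, np.pi, size=1000)        # random input POs
thetas = np.linspace(0.0, np.pi, 12, endpoint=False)   # stimulus orientations
# rows: stimulus orientations, columns: neurons
S = np.stack([input_rates(th, theta_star, s_b=8000.0, s_m=400.0)
              for th in thetas])
```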
Linearized Rate Equations
To quantify the response of a network to tuned input, we first compute its baseline (untuned) output firing rate, r_b. This procedure is described in detail elsewhere [16], and we only recapitulate the main steps and equations here. If the attenuation of the baseline and the amplification of the modulation are performed by two essentially independent processing channels in the network [16], the baseline firing rate can be computed from the fixed-point equation
(5)  $r_b = F\big(\mu(s_b, r_b),\, \sigma^2(s_b, r_b)\big)$
the root of which can be found numerically [16], [20].
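Numerically, the root of Eq. (5) can be found, for instance, by damped fixed-point iteration, reusing siegert_rate from the sketch above; all parameter values are again illustrative.

```python
import numpy as np

def baseline_rate(s_b, J_s, J, g, K_E, K_I, tau=0.02, r0=1.0, n_iter=500):
    """Self-consistent baseline rate r_b of Eq. (5), via damped iteration."""
    r = r0
    for _ in range(n_iter):
        mu = tau * (J_s * s_b + J * K_E * r - g * J * K_I * r)           # Eq. (3)
        var = tau * (J_s**2 * s_b + J**2 * K_E * r + g**2 * J**2 * K_I * r)
        r_new = siegert_rate(mu, np.sqrt(var), tau=tau)
        r += 0.1 * (r_new - r)               # damping stabilizes the iteration
    return r
```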
Now we linearize the network dynamics about an operating point defined by the baseline. First, we write the full nonlinear rate equation of the network as r = F(μ, σ²). Here, the mean and the variance of the input are expressed, in matrix-vector notation, as
(6)  $\mu = \tau\,\big(J_s\, s + W r\big), \qquad \sigma^2 = \tau\,\big(J_s^2\, s + \tilde{W} r\big)$
where s and r are N-dimensional column vectors of input and output firing rates, respectively, and W is the weight matrix of the network. Its entry W_ij, the weight of a synaptic connection from neuron j to neuron i, is either 0 if there is no synapse, J if there is an excitatory synapse, or −gJ if there is an inhibitory synapse. The matrix W̃ is the element-wise square of W, that is, W̃_ij = W_ij².
The extra firing rate of all neurons, δr (output modulation), in response to a small perturbation of their inputs, δs (input modulation), is obtained by linearizing the dynamics about the baseline, i.e. about μ_b and σ_b² (obtained from Eq. (3) evaluated at s_b and r_b)
(7)  $\delta r = \alpha\, \delta\mu + \eta\, \delta\sigma^2$
The partial derivatives of F at this operating point can be computed from Eq. (2) as
(8)  $\alpha = \frac{\partial F}{\partial \mu}\bigg|_b = r_b^2\, \tau \sqrt{\pi}\; \frac{f(y_\theta) - f(y_r)}{\sigma_b}$
and, in a similar fashion,
(9)  $\eta = \frac{\partial F}{\partial \sigma^2}\bigg|_b = r_b^2\, \tau \sqrt{\pi}\; \frac{y_\theta f(y_\theta) - y_r f(y_r)}{2\sigma_b^2}$
where f(y) = e^{y²}(1 + erf(y)), and y_θ, y_r and σ_b are the corresponding quantities evaluated at the baseline (for further details on this derivation, see [21]).
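Instead of evaluating the closed-form expressions of Eqs. (8) and (9), the two derivatives can also be obtained by numerically differentiating the rate function, which is a convenient cross-check; the sketch below does so with central differences, reusing siegert_rate from above.

```python
import numpy as np

def linear_gains(mu_b, var_b, d_mu=1e-3, d_var=1e-3, **lif_params):
    """alpha = dF/dmu and eta = dF/dsigma^2 at the baseline (Eqs. (8), (9))."""
    alpha = (siegert_rate(mu_b + d_mu, np.sqrt(var_b), **lif_params)
             - siegert_rate(mu_b - d_mu, np.sqrt(var_b), **lif_params)) / (2 * d_mu)
    eta = (siegert_rate(mu_b, np.sqrt(var_b + d_var), **lif_params)
           - siegert_rate(mu_b, np.sqrt(var_b - d_var), **lif_params)) / (2 * d_var)
    return alpha, eta
```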
We also need to express δμ and δσ² in terms of the input perturbations. In fact, they can be written in terms of δs and δr from Eq. (6) as:
(10)  $\delta\mu = \tau\,\big(J_s\, \delta s + W \delta r\big), \qquad \delta\sigma^2 = \tau\,\big(J_s^2\, \delta s + \tilde{W} \delta r\big)$
For the total output perturbation, δr, we therefore obtain
(11)  $\delta r = \alpha \tau\,\big(J_s\, \delta s + W \delta r\big) + \eta \tau\,\big(J_s^2\, \delta s + \tilde{W} \delta r\big)$
With the simulation parameters used here, our network typically operates in a fluctuation-driven regime of activity with a comparable level of input mean and fluctuations, μ_b ≈ σ_b. As a result, the contribution of the mean, α δμ, to the output modulation in Eq. (11) is larger than the contribution of the variance, η δσ². In the noise-dominated regime, y_θ and y_r are small, and hence in Eq. (9) we can write y_θ f(y_θ) − y_r f(y_r) ≈ y_θ − y_r, yielding η ≈ r_b² τ √π (V_th − V_reset)/(2σ_b³). In fact, the more the network operates in the noise-dominated regime, the more α becomes dominant over η, making the second term on the right-hand side of Eq. (11) negligible.
For the network shown in Fig. 1 and Fig. 2, for instance, evaluating α and η at the baseline operating point shows that, in response to feedforward input perturbations, the contribution of the mean term (ατJ_s δs) exceeds the contribution of the variance term (ητJ_s² δs) by a large factor. In response to recurrent perturbation vectors with zero mean, both the mean term and the variance term would respond, on average, with zero output. Their variances, in contrast, are not zero; a similar computation as in Eq. (3) shows that the mean contribution is again dominant by a comparable factor.
In the rest of our computation we therefore ignore the second term on the right-hand side of Eq. (11) and approximate the output modulation as:
(12)  $\delta r = \alpha \tau\,\big(J_s\, \delta s + W \delta r\big)$
We call
(13)  $\zeta = \alpha \tau$
the “linearized gain” and write the linearized rate equation of the network in response to small input perturbations as:
(14)  $\delta r = \zeta\,\big(J_s\, \delta s + W \delta r\big)$
Linear and Supralinear Gains
The gain ζ is the linearized gain in the firing rate of a single LIF neuron in response to small changes in its mean input, while it is embedded in a recurrent network operating in its baseline AI state. That is, it is the extra firing rate, δr, of a neuron in response to a perturbation of its input, δs, when all other neurons receive the same, untuned input as before, divided by the input modulation weighted by its effect on the postsynaptic membrane, J_s δs.
As an alternative to the analytic derivation pursued above, this gain can also be evaluated numerically by perturbing the baseline input firing rate with an extra input, δs:
(15)  $\zeta(\delta s) = \frac{\delta r}{J_s\, \delta s}$
(Note that, as this is the response gain of an individual neuron to an individual perturbation of its input while all other neurons receive the same baseline input, it is not necessary to consider the perturbation of the recurrent firing rate, W δr, in the baseline state.)
If this procedure is repeated for each δs, a numerical δs–δr gain curve is obtained. This is the curve plotted in Fig. 3A as "Numerical perturbation". If this curve were perfectly linear, it would not differ much from the result of our analytical perturbation (Eq. (13), denoted by "Linearized gain" in Fig. 3A). The results of the numerical perturbation, however, show some supralinear behavior, i.e. larger perturbations lead to a higher input-output gain. As a result, if we compute the gain at a perturbation size equal to the input modulation (δs = s_m), a different gain is obtained. We use the term "stimulus gain", ζ*, to refer to this supralinear gain at the modulation size of the input (i.e. when δs = s_m):
(16)  $\zeta^* = \zeta(s_m) = \frac{\delta r(s_m)}{J_s\, s_m}$
This is shown by the red line in Fig. 3A.
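In a rate-based surrogate of this procedure, the perturbation enters the input statistics of one neuron while the recurrent input keeps its baseline statistics; the following is a hedged sketch along these lines (not the spiking-network measurement itself), reusing siegert_rate from above.

```python
import numpy as np

def numerical_gain(delta_s, mu_b, var_b, r_b, J_s, tau=0.02, **lif_params):
    """Gain of Eq. (15) at perturbation size delta_s; Eq. (16) is the
    special case delta_s = s_m."""
    mu = mu_b + tau * J_s * delta_s           # extra mean input
    var = var_b + tau * J_s**2 * delta_s      # extra input variance
    dr = siegert_rate(mu, np.sqrt(var), tau=tau, **lif_params) - r_b
    return dr / (J_s * delta_s)
```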
Linear Tuning in Recurrent Networks
Once the linearized gains at the baseline state of network operation have been obtained, the linearized rate equation of the network for modulations about the baseline activity follows. Each neuron responds to the aggregate perturbation of its input with a gain obtained by the linearization formalism employed above. The total perturbation consists of a feedforward component, which is the modulation of the input (stimulus) firing rate of the neuron, and a recurrent component, which is a linear sum of the respective output perturbations of the pre-synaptic neurons in the recurrent network. This can therefore be written, in vector-matrix notation, as:
(17)  $\delta r = \zeta\,\big(J_s\, \delta s + W \delta r\big)$
If $(\mathbb{1} - \zeta W)$ is invertible, the output firing rates can be computed directly as
(18)  $\delta r = \zeta J_s\, (\mathbb{1} - \zeta W)^{-1}\, \delta s$
which can be further expanded into
(19)  $\delta r = \zeta J_s\, (\mathbb{1} + \zeta W + \zeta^2 W^2 + \ldots)\, \delta s$
Ignoring contributions of second and higher order in ζW, Eq. (19) can be approximated as
(20)  $\delta r \approx \zeta J_s\, (\mathbb{1} + \zeta W)\, \delta s$
Eq. (20), evaluated for each stimulus orientation, returns the modulation of the output firing rates of all neurons in the network in response to a given input modulation.
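In code, the linear prediction of Eq. (20) is a single matrix-vector product; W is a weight matrix as constructed above, and ζ the linearized gain:

```python
import numpy as np

def predicted_modulation(delta_s, W, zeta, J_s):
    """Eq. (20): delta_r ~ zeta * J_s * (1 + zeta*W) @ delta_s."""
    return zeta * J_s * (delta_s + zeta * (W @ delta_s))
```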
We then assume that all inputs are linearly tuned to the stimulus orientation θ according to
(21)  $s_i(\theta) = s_b + b_i^{\mathsf{T}} v(\theta), \qquad v(\theta) = \big(\cos(2\theta), \sin(2\theta)\big)^{\mathsf{T}}$
where s_b is the baseline rate in the absence of stimulation and b_i is the preferred feature vector of the i-th neuron. The length |b_i| of the vector that represents the preferred feature is the tuning strength. To ensure the linearity of operation, the firing rate should always remain positive
(22)  $|b_i| \leq s_b \quad \text{for all } i$
If this condition is satisfied, the linearity of the tuning and positivity of firing rates remain compatible. If the condition is violated, partial rectification of the neuronal tuning curve follows and the linear analysis does not fully hold.
To obtain the operation of the network on the input preferred feature vectors, we can write Eq. (20) for the input tuning curves
(23)  $\delta r(\theta) \approx \zeta J_s\, (\mathbb{1} + \zeta W)\, B\, v(\theta)$
Here B is the matrix whose rows are given by the transposed preferred feature vectors b_i^T. Therefore, all neurons in the recurrent network are again linearly tuned, with preferred features encoded by the rows of the matrix ζJ_s(𝟙 + ζW)B. From here we can compute the matrix of output feature vectors, B_out, as
(24)  $B_{out} = \zeta J_s\, (\mathbb{1} + \zeta W)\, B = \zeta J_s B + \zeta^2 J_s\, W B$
The first term on the right-hand side is the weighted tuning vector of the feedforward input each neuron receives, and the second term is the mixture of the tuning vectors of its pre-synaptic neurons in the recurrent network.
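Equivalently, one can propagate the feature vectors directly; the sketch below computes B_out of Eq. (24) for cosine-tuned inputs and reads off the output PO and tuning strength as the angle and length of each row.

```python
import numpy as np

def output_features(theta_star, s_m, W, zeta, J_s):
    """Output feature vectors of Eq. (24) for cosine-tuned inputs."""
    B = s_m * np.column_stack([np.cos(2 * theta_star),
                               np.sin(2 * theta_star)])    # rows: b_i^T
    B_out = zeta * J_s * B + zeta**2 * J_s * (W @ B)       # Eq. (24)
    po_out = 0.5 * np.arctan2(B_out[:, 1], B_out[:, 0]) % np.pi
    strength = np.linalg.norm(B_out, axis=1)               # predicted F2
    return po_out, strength
```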
Distribution of Orientation Selectivity
The length of the output feature vector, ρ_i = |b_i^out|, represents the amplitude of the modulation component of the output tuning curve. This is our measure of orientation selectivity, and we compute its distribution here.
Orientation is a two-dimensional feature, and the input feature matrix (B in Eq. (24)) is a collection of two-dimensional input feature vectors, one per neuron. Each entry, the input orientation selectivity vector of one neuron, is therefore determined by a length and a direction. The length of all vectors is s_m, as all inputs have the same modulation, and the direction is twice the input PO of the neuron (see Eq. (4)); the POs are drawn independently from a uniform distribution on [0, π). They are assumed to be independent of the weight matrix W, implying the absence of feature-specific connectivity.
The feedforward tuning vector of each neuron is accompanied by a contribution from the recurrent network (Eq. (24)). For each neuron, the recurrent contribution is a vectorial sum of the input tuning vectors of its pre-synaptic neurons. According to the multivariate Central Limit Theorem, the summation of a large number of independent random variables leads to an approximately multivariate normal distribution of the output features. The tuning strength is given by the length of the output tuning vector, ρ = |b^out|. For a bivariate normal distribution with mean vector of length ν and per-dimension variance σ_F², we can compute the distribution of this length
(25)  $p(\rho) = \frac{\rho}{\sigma_F^2} \exp\!\left(-\frac{\rho^2 + \nu^2}{2\sigma_F^2}\right) I_0\!\left(\frac{\rho\,\nu}{\sigma_F^2}\right)$
where $I_0(x) = \frac{1}{2\pi}\int_0^{2\pi} e^{x \cos\phi}\, d\phi$ is the modified Bessel function of the first kind and zeroth order. Therefore, we only need to compute the mean and the variance of the resulting distribution.
The mean ν of the distribution is equal to the length of the feedforward feature vector, ζJ_s s_m. This is because the expected value of the contribution of the recurrent network vanishes in each direction
(26)  $\mathrm{E}\big[(W B)_{i1}\big] = \sum_j \mathrm{E}[W_{ij}]\; s_m\, \mathrm{E}\big[\cos(2\theta^*_j)\big] = 0$
Here w and θ* denote, respectively, the random variables from which the weights and the input POs are drawn. A similar computation yields E[(WB)_{i2}] = 0. We have used the property that the two random variables w and θ* are independent, and that all orientations are uniformly represented in the input (E[cos(2θ*)] = E[sin(2θ*)] = 0). As a result, we obtain
(27)  $\nu = \zeta J_s\, s_m$
The recurrent contribution does not, on average, change the length of output feature vectors. However, it creates a distribution of selectivity, which can be quantified by its variance
(28)  $\sigma_F^2 = \mathrm{Var}\big[(\zeta^2 J_s\, W B)_{i1}\big] = \zeta^4 J_s^2 s_m^2 \sum_j \mathrm{E}[W_{ij}^2]\; \mathrm{E}\big[\cos^2(2\theta^*_j)\big] = \frac{\zeta^4 J_s^2 s_m^2}{2} \sum_j \mathrm{E}[W_{ij}^2]$
Again, we have exploited the independence of the random variables w and θ*, and the uniform representation of input POs (E[cos(2θ*)] = 0), to factorize the variance, i.e. Var[w cos(2θ*)] = E[w²] E[cos²(2θ*)] = E[w²]/2. A similar computation yields the same variance for the second dimension.
For our random networks, each row of the weight matrix contains a fixed number of non-zero elements placed at random positions. The number of non-zero elements is determined by the connection probability (K_E = εN_E for excitation and K_I = εN_I for inhibition, respectively), and each non-zero entry is weighted by the synaptic strength (J for excitation and −gJ for inhibition, respectively). The variance can therefore be computed explicitly:
(29)  $\sum_j \mathrm{E}[W_{ij}^2] = K_E J^2 + K_I\, g^2 J^2 = \varepsilon N J^2 \big(\beta + g^2 (1 - \beta)\big)$
For more complex connectivities, the variance can be numerically computed from the weight matrix. For our networks here, the mean and the variance of the distribution of output tuning vectors can, therefore, be expressed as
(30)  $\nu = \zeta J_s\, s_m, \qquad \sigma_F^2 = \frac{\zeta^4 J_s^2 s_m^2}{2}\, \varepsilon N J^2 \big(\beta + g^2 (1 - \beta)\big)$
For an output tuning curve with a cosine shape, r_i(θ) = F0 + F2 cos(2(θ − θ̂_i)), the tuning strength ρ introduced above corresponds to the modulation (F2) component of the tuning curve. The mean component F0 is obtained as the baseline firing rate of the network, r_b, from Eq. (5). To compare the prediction with the result of our simulations, we compute the mean and the modulation of individual output tuning curves from the simulated data; they are taken as the zeroth and the second Fourier components of each tuning curve (F0 and F2 components), respectively. The distribution given by Eq. (25) should therefore precisely match the distribution of the modulation (F2) components of output tuning curves obtained from simulations, if our linear analysis captures the essential mechanisms of orientation selectivity in model recurrent networks.
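As a consistency check of Eqs. (25)–(30), the following sketch draws surrogate output feature vectors for a random weight ensemble and compares their lengths with the corresponding Rice density (scipy's rice with b = ν/σ_F and scale = σ_F matches Eq. (25)); all numerical values are illustrative, and the binomial mix of weights is an approximation with the same second moment as the fixed in-degree scheme.

```python
import numpy as np
from scipy.stats import rice

N, eps, beta, J, g = 2000, 0.1, 0.8, 0.1, 8.0     # illustrative values
zeta, J_s, s_m = 0.05, 0.1, 400.0

nu = zeta * J_s * s_m                              # mean length, Eq. (27)
sigma_F = np.sqrt(0.5 * zeta**4 * J_s**2 * s_m**2
                  * eps * N * J**2 * (beta + g**2 * (1 - beta)))  # Eq. (30)

# Monte Carlo: recurrent term = weighted sum of presynaptic feature vectors
rng = np.random.default_rng(2)
K = int(eps * N)
po = rng.uniform(0, np.pi, size=(5000, K))         # presynaptic input POs
w = np.where(rng.random((5000, K)) < beta, J, -g * J)  # E or I weights
bx = nu + zeta**2 * J_s * s_m * (w * np.cos(2 * po)).sum(axis=1)
by = zeta**2 * J_s * s_m * (w * np.sin(2 * po)).sum(axis=1)
rho = np.hypot(bx, by)                             # sampled tuning strengths

x = np.linspace(0, rho.max(), 200)
pdf = rice.pdf(x, b=nu / sigma_F, scale=sigma_F)   # predicted density, Eq. (25)
```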
Results
Erdős-Rényi Random Networks
We first study excitatory-inhibitory Erdős-Rényi random networks of LIF neurons (Eq. (1)) with doubly fixed in-degrees, namely where both the excitatory in-degree and the inhibitory in-degree are fixed for excitatory and inhibitory neurons alike. Figs. 1A–C show the response of an example network to a stimulus of fixed orientation. The network with these parameters operates in the fluctuation-driven regime and shows asynchronous-irregular (AI) dynamics (Fig. 1A), with low firing rates (Fig. 1B) and a high variance of inter-spike intervals (ISI) (Fig. 1C). The network in this regime is capable of amplifying the weak tuning of the input, as reflected both in the network tuning curve in response to one orientation (Fig. 1B) and in individual tuning curves in response to different stimulus orientations (Fig. 1D).
The joint distribution of the modulation (F2) component of individual output tuning curves and the respective baseline (F0) component (Fig. 1E) shows that the average values of these two components have become comparable after network operation. The F2 component, however, has a much broader distribution (Fig. 1E, inset). The distribution predicted by our theory (Eq. (25)) matches the distribution measured in the simulations only partially (Fig. 1F). The degree of match is quantified by an index that assesses the overlap area of the two probability distributions.
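One plausible form of such an overlap index is the common area under the two normalized densities (1 for identical distributions, 0 for disjoint ones); the exact definition used for the figures is assumed to be of this kind.

```python
import numpy as np

def overlap_index(a, b, bins=50):
    """Common area under two empirical densities, evaluated on shared bins."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    pa, edges = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    pb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    return np.sum(np.minimum(pa, pb)) * (edges[1] - edges[0])
```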
As our analysis is based on the assumption of linear network interactions, our theoretical prediction holds only if the network operates in the linear regime; any violation of the linear scheme leads to a deviation of the prediction from the measured distribution. The remaining discrepancy should therefore be attributed to factors that invalidate our approximation scheme. Possible contributing factors of this sort in our networks are partial rectification of tuning curves, correlations and synchrony in the activity providing the recurrent input, and the supralinearity of neuronal gains.
Partial rectification of firing rates is obvious in Fig. 1B. However, it does not seem to be a very prominent effect: only a small fraction of the population is strictly silent, as is evident in the distribution of firing rates (Fig. 1B, bottom). Correlations, in contrast, seem to be a more important contributor, as reflected in the raster plot of network activity (Fig. 1A).
To investigate the possible contribution of correlations to the distribution of orientation selectivity, we plotted the distribution of pairwise correlations in the network (Fig. 2). Although this distribution has a very long tail, correlations are on average very small (Fig. 2A). This holds for excitatory-excitatory, excitatory-inhibitory, and inhibitory-inhibitory correlations, and the same trend is observed when spike counts are computed with different bin widths (Fig. 2A, insets). Low pairwise correlations in the network are a result of recurrent inhibitory feedback, which actively decorrelates the network activity [22]–[24]. As illustrated in Fig. 2B, upsurges in the population activity of excitatory neurons are tightly coupled to a corresponding increase in the activity of the inhibitory population. The cancellation is, however, not always exact, and some residual correlations remain.
Since each neuron receives random inputs from 10% of the population, approximately the same correlation of excitation and inhibition is, on average, also expected in the recurrent input to each neuron. Note that, as our networks are inhibition-dominated, the net recurrent inhibition is stronger than the net recurrent excitation (indeed twice as strong, given the parameters we have used). Altogether, this implies that inhibition is capable of rapidly tracking excitatory upsurges (Fig. 2B), such that fast fluctuations in the population activity are not seen in the recurrent input from the network.
Finally, the single-neuron gain that we computed by linearization (Eq. (13)) could be a source of mismatch, as for a highly non-linear system it might only be valid for small perturbations of the input, and not for stronger modulations. This is shown in Fig. 3A, where the linearized gain, ζ, from Eq. (13) is compared with the numerically obtained neuronal gain (see Eq. (15) in Methods) when the perturbation has the size of the input modulation, δs = s_m. This gain could be approximated analytically by expanding Eq. (5) to higher-order terms. Here, however, we have computed this gain, ζ*, numerically (Eq. (16)).
When the prediction of Eq. (25) is repeated with this new gain (ζ* instead of ζ), a great improvement in the match between the measured and predicted distributions is indeed observed (Fig. 3B). We therefore conclude that the main source of mismatch in our prediction was a misestimate of the actual neuronal gains. Other sources of nonlinearity, like rectification and correlations, could be responsible for the remaining discrepancy of the distributions (less than 5% in the regime considered here). Given the many possible sources of nonlinearity in our networks, both at the level of spiking neurons and of network interactions, it is indeed quite surprising that a linear prediction works so well.
A remark about rectification in our networks should be made at this point. In the type of networks we are considering here, rectification is in fact not a single-neuron property, i.e. it is not merely the result of the spike threshold of the LIF neuron. Indeed, the linearized gain of neurons within the network (Eq. (13)) implies a non-zero response even to small perturbations of the input. This is a result of (internally generated) noise within the recurrent network, a consequence of the balance of excitation and inhibition, which smoothens the effective f-I curve [25], [26]. Rectification can therefore only happen at the level of the network, e.g. by increasing the amount of inhibition.
As our networks are inhibition-dominated, increasing the recurrent coupling is one way to increase the inhibitory feedback within the network. This can be done in two different ways, either by increasing the connection density or by increasing the weights of synaptic connections. The first strategy is tried in Fig. 4A, where the connection probability ε has been increased. The second strategy is added to the first in Fig. 4B, where the increase in connection density is accompanied by an increase in the synaptic weight J. In both cases, however, no significant rectification of tuning curves resulted, and the prediction of our linear theory still holds.
This unexpected effect can be explained intuitively as follows: an increase in recurrent coupling not only decreases the baseline firing rate of the network, it also changes the neuronal gains (ζ and ζ*). A crucial factor in determining these gains is the average membrane potential of neurons in the network, which in turn sets the mean distance to threshold. The larger the mean distance to threshold, the smaller the neuronal gain. This in turn decreases the mean F2 component of the output tuning curves. As a result, even with a reduced baseline firing rate, no significant rectification of tuning curves follows, as the output modulation components have been scaled down by a comparable factor. This is indeed the case in the networks of Fig. 4, where the mean (over neurons) membrane potential (temporally averaged) and the neuronal gains are both decreased compared to the network of Fig. 1 (results not shown; for a detailed analysis, see [16]).
Networks With Distance-Dependent Connectivity
To extend the scope of the linear analysis, we asked whether our theory can also account for networks with different statistically defined topologies. In particular, we considered networks with a more realistic pattern of distance-dependent connectivity: each neuron is assigned a random position in a two-dimensional rectangle representing a flat sheet of cortex (Fig. 5A). The probability of a connection from a pre-synaptic excitatory (inhibitory) neuron to a given post-synaptic neuron falls off as a Gaussian function of distance, with width σ_E (σ_I). Similar to the Erdős-Rényi random networks considered before, we fix the in-degrees, i.e. each neuron receives exactly K_E excitatory and K_I inhibitory connections. Multiple synaptic contacts and self-contacts are not allowed.
The connectivity profile is illustrated in Figs. 5B, C. The pre-synaptic sources of a sample neuron are plotted in Fig. 5B for one example value of σ_E. The resulting distribution of the distances of connected neurons, for the example neuron and for the entire population, is shown in Fig. 5C.
Note that the connectivity depends only on the physical distance. As input preferred orientations are assigned randomly and independently of the actual position of neurons in space, distance-dependent connectivity does not imply any feature-specific connectivity. That is, neither a spatial nor a functional map of orientation selectivity is present here.
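A hedged sketch of this wiring scheme follows, assuming, for simplicity, periodic boundary conditions; all numerical values are placeholders.

```python
import numpy as np

def distance_dependent_adjacency(N=1000, beta=0.8, eps=0.1,
                                 sigma_E=0.2, sigma_I=0.2, L=1.0, seed=0):
    """Boolean adjacency A[i, j] (synapse j -> i): Gaussian fall-off with
    distance, fixed in-degrees, no self- or multiple contacts."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, L, size=(N, 2))            # random positions on sheet
    NE = int(beta * N)
    KE, KI = int(eps * NE), int(eps * (N - NE))
    A = np.zeros((N, N), dtype=bool)
    for i in range(N):
        d = np.abs(pos - pos[i])
        d = np.minimum(d, L - d)                    # torus (periodic) distance
        dist = np.hypot(d[:, 0], d[:, 1])
        for pool, K, sig in ((np.arange(NE), KE, sigma_E),
                             (np.arange(NE, N), KI, sigma_I)):
            p = np.exp(-dist[pool]**2 / (2 * sig**2))
            p[pool == i] = 0.0                      # exclude self-contacts
            pre = rng.choice(pool, size=K, replace=False, p=p / p.sum())
            A[i, pre] = True                        # exactly K inputs per pool
    return A, pos
```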
Before discussing the simulations of the spiking networks, it is informative to look at the eigenvalue spectrum of the associated weight matrix, W. It is plotted, for one example combination of σ_E and σ_I, in Fig. 5D. Each entry of the matrix is normalized by the reset voltage for the eigenvalue spectrum shown in the main panel; the effective firing rate equation of the network can then be written in terms of this normalized weight matrix. The exceptional eigenvalue (green cross) corresponding to the uniform eigenvector (inset, top) and the bulk of eigenvalues (orange dots) are the structural properties that this network has in common with the previous Erdős-Rényi networks (not shown). There is, however, a small number of additional eigenvalues in between, which are the consequence of the specific realization of our distance-dependent connectivity here. The corresponding eigenmodes will, in principle, affect the response of the network, both in its spontaneous state and in response to stimulation.
All these eigenvalues have, however, negative real parts. They will, therefore, not compromise the stability of the linearized network dynamics, as far as these eigenmodes are concerned. The bulk of the spectrum, in contrast, also comprises eigenvalues with real parts larger than one, which would imply an instability. An alternative normalization of the weight matrix according to the neuronal gain (Fig. 5, inset, bottom; see also [16]), however, does not render these modes unstable.
Here, we resort to a linearized rate equation describing the response of the network to (small) perturbations, δr = ζ(J_s δs + W δr) (see Eq. (17) in Methods). The eigendynamics corresponding to the common mode (green cross) is faster, and hence relaxes to the fixed point more rapidly than the other eigenmodes. The common mode effectively leads to the uniform, baseline state of the network (reflected in the baseline firing rate, r_b), about which the network dynamics has been linearized in our prediction. The effect of the other eigenmodes, in the stationary state, should therefore be computed by considering the linearized gain about that uniform baseline state.
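The corresponding stability check is then a one-liner on the gain-normalized weight matrix: eigenmodes of ζW with real part exceeding one would destabilize the linearized dynamics.

```python
import numpy as np

def unstable_modes(W, zeta):
    """Eigenvalues of zeta*W with real part > 1 (instability of the
    linearized dynamics about the uniform baseline state)."""
    lam = np.linalg.eigvals(zeta * W)
    return lam[lam.real > 1.0]   # empty array -> linearized dynamics stable
```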
Simulation results for a network with this connectivity are illustrated in Fig. 6. Inspection of the spiking activity of the network (Fig. 6A) does not suggest a behavior very different from that of the random networks shown in Fig. 1. The irregularity of firing is, however, more pronounced, as the variance of inter-spike intervals is larger (Fig. 6C); the ISI CV has a distribution centered about 1, more similar to the strongly coupled networks described in Fig. 4.
Similar to Erdős-Rényi networks, networks with distance-dependent local connectivity are capable of amplifying the weak tuning of the input signal, and comparable levels of baseline (F0) and modulation (F2) components emerge (Fig. 6E). When the predicted distribution of F2 components is obtained using the stimulus gain ζ*, a very good match to the measured distribution results (Fig. 6F), comparable to the predictions in Fig. 4, and only slightly worse than the prediction in Fig. 1.
Although partial rectification of tuning curves seems to be negligible in the example shown (Fig. 6B), correlations in the network could still be responsible for the remaining discrepancy. Moreover, the size and structure of correlations might differ here from random networks, due to the non-homogeneous connectivity. Distance-dependent connectivity implies that connectivity is locally dense, which can lead to more shared input and thereby impose stronger correlations on the output.
In fact, however, pairwise correlations do not seem to be systematically larger than in random networks (Fig. 2A), as judged by the distribution of Pearson correlation coefficients (Fig. 7A). In contrast, the fluctuations in the activity of the excitatory and inhibitory populations seem to be even less correlated (compare Fig. 7B with Fig. 2B). Occasional partial imbalance of excitatory and inhibitory input may therefore cause systematic distortions of our linear prediction.
Another potential contributor to the discrepancy of predictions is the different structural properties of these networks, reflected among other things in their respective eigenvalue spectra. It is therefore informative to look more carefully into the eigenvalues which mark the difference to Erdős-Rényi networks, i.e. the ones localized between the bulk spectrum and the exceptional eigenvalue corresponding to the common mode. To evaluate this, the first ten eigenvectors (corresponding to the ten largest eigenvalues, sorted by magnitude) of the network are plotted (Fig. 8A). The first eigenvector is the uniform vector (common mode), and the tenth one is hardly distinguishable from noise. (Note that the corresponding eigenvalue is already part of the bulk.) In between, there are eight eigenvectors with non-random spatial structure.
These eigenvectors reflect the specific sample from the network ensemble we are considering here, and they can, in principle, prefer a specific pattern of stimulation in the input. While other patterns of input stimulation would be processed by the network with a small gain, any input pattern matching these special eigenmodes would experience the highest gain (in absolute terms) from the network. The corresponding eigenvalues have, however, a negative real part, and these modes would therefore in this case be attenuated: the corresponding eigenvalues of the operator that yields the stationary firing rate vector, namely ζJ_s(𝟙 − ζW)^{-1}, would then be very small.
We do not, however, explicitly represent any of these patterns in our stimuli. The stimuli considered in this work can be decomposed into a linear sum of the common mode (i.e. the first eigenvector) and the modulation component (i.e. a random pattern, as preferred orientations are assigned randomly and independently to all neurons, irrespective of their position in space). The modulation component therefore has only a very small component in the direction of each special eigenmode. It is, however, possible that for non-stationary inputs to the network, transient patterns with a bias for selected eigenmodes resonate more than others.
The question arises whether the spatially structured eigenmodes (cf. Fig. 8A) have an impact on the observed pattern of spontaneous and evoked neuronal activity. Plotting the response of the network to a stimulus of one particular orientation, as well as the mean activity of neurons over different orientations, does not reveal any visible structure (Fig. 8B, C). The baseline activity of the network seems to be quite uniform, and the response to a given orientation does not reveal any structure beyond the random spatial pattern one would expect from the random assignment of input preferred orientations. This is further supported by visual inspection of the map of output preferred orientations (Fig. 8D) and of the orientation selectivity index (Fig. 8E) in the network.
In principle, it is conceivable that spatially structured eigenmodes could affect the response of the network by setting the operating point of the network differently at different positions in space, as a result of the selective attenuation of certain eigenmodes. However, we have never observed such phenomena in our simulations. The fact that these structured modes are attenuated (and not amplified) might be one reason; another might be that the eigenmodes are typically heterogeneous and non-local, which makes the selection of a corresponding overall preferred pattern unlikely. The spatial structure of the network, and of its built-in linear eigenmodes, is therefore not dominant in determining the distribution of orientation selectivity. It could, however, contribute to the small deviation of the predicted distribution from the measured one.
Spatial Imbalance of Excitation and Inhibition
To test the robustness of our predictions, we went beyond the case of spatial balance of excitation and inhibition, and also simulated networks with different spatial extents of connectivity. Roughly the same overall behavior of the network, and the same accuracy of our predictions, were observed for the case of more localized inhibition and less localized excitation (σ_I < σ_E, Fig. 9A), as well as for the case of more localized excitation and less localized inhibition (σ_E < σ_I, Fig. 9B).
This trend was further corroborated when we systematically scanned the accuracy of our predictions for a large set of different networks, covering the (σ_E, σ_I) parameter space (Fig. 10A). Indeed, for most of the parameters studied, the predicted distribution of orientation selectivity matched the actual distribution very well (more than 90% overlap). For the more "extreme" combinations of parameters, however, where the spatial extents of excitation and inhibition were highly out of balance, the quality of the match degraded. The deviation was more significant when excitation was more local and inhibition more global (Fig. 10A, upper left portion). Note that, even for the most extreme cases of local excitation, the accuracy of our prediction is still fairly good, as long as inhibition has a similar spatial extent.
To investigate what happens in each extreme case, we chose two examples (marked in Fig. 10A) for further analysis: one with localized inhibition and one with localized excitation ((σ_E, σ_I) = (0.25, 0.75) for the latter). Their connectivity patterns are illustrated in Fig. 10B, C, respectively, and the eigenvalue spectra of the corresponding weight matrices are shown in Fig. 10D, E. When the weights are normalized with respect to the reset voltage (upper panels), both spectra suggest an unstable linearized dynamics, as both have eigenvalues with a real part larger than one.
The picture changes, however, when a normalization according to the effective neuronal gain is performed. While the network with local excitation still has several clearly unstable eigenmodes (Fig. 10E, bottom), the spectrum of the network with local inhibition comprises only one positive eigenvalue, which is only slightly larger than one (Fig. 10D, bottom). Some of the eigenvectors corresponding to the largest positive eigenvalues are plotted for both networks in Fig. 10F, G, respectively. It therefore seems possible that the source of deviation from the linear prediction is indeed an instability of the linearized dynamics (namely an instability of the uniform asynchronous-irregular state about which we perform the linearization) for these extreme parameter settings. When this instability is more pronounced, i.e. for the network with local excitation, the deviation is largest. When the network is at the edge of instability, i.e. for the network with local inhibition, our predictions show only a modest deviation.
To further test this hypothesis, namely that an instability of the linearized dynamics is the source of the mismatch between the linear prediction and the actual distribution of orientation selectivity, we need to scrutinize the response behavior of the two sample networks. The outcome is shown in Fig. 11. While the network with local inhibition does not look very different from the other examples considered before (Fig. 11A), the network with local excitation clearly shows deviating behavior (Fig. 11B). First, firing rates are much higher than in the less extreme cases, for both excitatory and inhibitory populations (Fig. 11B, first column). Moreover, the activities of the excitatory and inhibitory populations are not as well correlated in time as is the case for the other networks (Fig. 11B, first column, bottom). The firing rate distribution has a very long tail, and the tail is longer for the excitatory than for the inhibitory population (Fig. 11B, second column). The long tail is accompanied by a peculiar peak at zero firing rate (cut off for illustration purposes in Fig. 11B, second column, bottom); it reflects the fact that most of the neurons in the network are actually silent, while a small fraction of the population is highly active. The average irregularity of spike trains (the CV of the inter-spike intervals) in the network is reduced compared to our previous examples (Fig. 11B, third column). All these properties are consistent with the presumed instability of the linearized dynamics, as inferred from the eigenvalue spectrum.
In terms of the functional properties of the network, the output tuning curves are much more scattered when aligned to the respective preferred orientations of the inputs (Fig. 11B, fourth column, upper panel). In fact, the mean output tuning curve over all neurons of the network does not show any amplification when aligned at the input PO (Fig. 11B, fourth column, lower panel). The picture changes, however, if the tuning curves are aligned according to their output PO (Fig. 11B, fifth column). Here a clear amplification of the modulation is evident in the output tuning curves, although the relation to the feedforward input is lost. Also, the average output tuning curve is not smooth, i.e. not all orientations are uniformly represented in the distribution of output preferred orientations.
This breaking of the symmetry becomes even more obvious when we look at the responses of the two networks to stimuli of different orientations (Fig. 12A, B). While both networks show some degree of inhomogeneity in the spatial pattern of their firing rate responses, the response pattern of the second network is much more clustered (Fig. 12B). In fact, it seems that the internal connectivity structure of the network determines the positions of a discrete set of potential activity bumps, and the orientation bias in the input can only choose between these bumps. As the nonlinear dynamics of the unstable network crucially affects the activity in response to stimuli, it is not surprising that the distribution of orientation selectivity does not match the prediction, which relies on a linearization about the uniform asynchronous-irregular state (compare Fig. 12C and D, first columns).
In fact, this internal structure is even reflected in the pattern of baseline firing rates (the mean of the tuning curves over orientations). While for the network with local inhibition this pattern is covert and ineffective (Fig. 12C, second column), in the network with local excitation clear clusters of activity, resembling the ones in Fig. 12B, are evident (Fig. 12D, second column). One may therefore expect a corresponding pattern in the spatial organization of orientation selectivity: larger domains of neighboring neurons, which are activated together, also exhibit the same selectivity. This is reflected in the clustering of output preferred orientations (Fig. 12D, third column) and of the orientation selectivity index (Fig. 12D, fourth column).
Note that a consequence of this clustering of POs is a degenerate representation of orientation, i.e. not all orientations are represented equally in the network. While the distribution of output POs is almost uniform in the network with local inhibition (inset in Fig. 12C, third column), clear peaks are present in the distribution of output POs in the network with local excitation (inset in Fig. 12D, third column). This is in line with the broken symmetry described before, reflected in the pattern of the mean output tuning curve in Fig. 11B.
Discussion
We presented a linear analysis capable of predicting the distribution of orientation selectivity in networks with different patterns of random connectivity, including some degree of spatial organization, and for a wide range of parameters. The effective strength of excitation and inhibition in the network (Fig. 1 and Fig. 4), as well as the spatial extent of excitatory and inhibitory connectivity (Fig. 10), did not affect the prediction accuracy very strongly, as long as the linearized dynamics remained stable. We therefore conclude that linear mechanisms are the major network operations that explain amplification and attenuation, and hence the distribution of the resulting orientation selectivity in our networks, within the stable regimes of their linearized dynamics.
Operating Regime of Orientation Selectivity
Note that even in networks with localized connectivity of excitation and/or inhibition, the linearized dynamics remained stable for a vast set of parameter combinations. Even when excitation was highly local and clustered, stability of the network was guaranteed as long as inhibition had the same spatial connectivity profile. A similar conclusion has recently been obtained from an analysis of spatially embedded balanced networks [27]. It has also been shown before that networks with distance-dependent connectivity can show macroscopic behavior similar to that of random networks without local connectivity [28].
The asynchronous irregular (AI) state has been argued to best match the activity of cortical networks in vivo (see e.g. [19], [29], [30]). The relevance of this regime has, however, only been discussed for cortical networks in response to uniform stimulation. With regard to the processing of a non-uniformly modulated input, in contrast, it has been claimed that a "marginal state of recurrent dynamics" might be the relevant regime of operation for the processing of weakly tuned inputs [2]. Also, it has recently been suggested that a recurrent regime with "macroscopic chaos" (probably corresponding to our regime of unstable dynamics) might be advantageous for sensory processing, as it may support a better separation of trajectories [31].
In contrast to these proposals, the results of our study suggest that a stable AI state might indeed be the relevant regime of operation also for sensory processing in cortical networks in response to tuned inputs. Notably, the dense and local pattern of inhibition in real cortical circuits [8], [32], [33] is consistent with our proposal. It might indeed be a general strategy that biological networks of spiking neurons exploit to ensure their overall stability in response to modulated inputs. We note again that we are speaking of dynamic stability here, where the network dynamics is linearized about the uniform asynchronous-irregular state, and the effective coupling weights linearized about this baseline state are considered.
Distribution of Orientation Selectivity
A broad distribution of orientation selectivity has been reported across all cortical layers in the primary visual cortex of macaque monkeys [34], as well as in mice [35] (for a comparison of the distributions, see panel C in Fig. S2 therein). Although we chose random connectivities with fixed in-degrees for all neurons (which we refer to as "structural homogeneity"), a broad distribution of orientation selectivities emerged in all our networks. The main contributor to this broad distribution was, therefore, not the structural heterogeneity of synaptic connectivity; in fact, there is no heterogeneity at all if one only considers the number of connections each neuron receives from pre-synaptic excitatory and inhibitory neurons. Nor were the temporal fluctuations of activity generated by our networks a major source of this variability, although the networks mostly operated in the fluctuation-driven regime with high amounts of temporal and trial-by-trial variability. As we have generally chosen a homogeneous connectivity pattern, this temporal variance would be essentially the same for all neurons, at least in the baseline state. (This also justifies the mean-field ansatz we have employed for our analysis.) This is again reflected in the narrow distribution of F0 components in all our networks.
The main source of variability in orientation selectivity is rather the “functional heterogeneity” in synaptic connectivity, namely heterogeneous preferred features (here, preferred orientations of inputs) of the pre-synaptic sources within the recurrent network. Receiving input from neurons with different preferred features may be a computational strategy to integrate the information, and help to remove distractive correlations in the activity. The fact that each neuron within the recurrent network receives input from a heterogeneous pool of neurons with a wide range of preferred orientations leads to a random “summation” of pre-synaptic preferred orientations, which eventually changes the output preferred orientation of the post-synaptic neuron [16].
The quenched noise of preferred orientations, and not structural or dynamic fluctuations, is therefore the main mechanism responsible for the distribution of orientation selectivity in our networks. We showed that even with this most conservative estimate of neuronal heterogeneity, consistent with recent experiments [7], a broad distribution of neuronal selectivities can be obtained. However, we cannot rule out a possible contribution of other sources of heterogeneity, like heterogeneous connectivity and the heterogeneous amounts of excitation and inhibition different neurons may receive in their baseline state (leading to different levels of spontaneous activity, see e.g. [34]), as well as variability in neuron parameters [36] and synaptic noise. Also, heterogeneity in the pattern of feedforward projections to neurons in V1 could be a prominent source of the distribution of orientation selectivity. However, if the distribution of orientation selectivity is mainly dominated by feedforward heterogeneity, and/or if single-neuron heterogeneities like variability in threshold and synaptic noise are its main source, the distribution should not change much when the recurrent network is absent. On the other hand, if functional heterogeneity resulting from recurrent interactions is a major contributor, the distribution should become narrower when the intra-cortical circuitry is deactivated. Which mechanisms dominate in creating the distribution of feature selectivity in the cortex therefore awaits further experimental tests.
Future Directions
There are several ways in which the current study could be expanded. First, sticking to a linear framework of analysis enabled us to analytically compute the distribution of orientation selectivity. In this simplified framework, however, we neglected several nonlinearities, both at the level of neuronal properties and of network interactions. These nonlinearities are deemed to be more prominent in biological networks, for instance in the form of rectification [37], [38], or an expansive-compressive transfer nonlinearity [26], [39], [40]. Such mechanisms might play a major role in the sharpening and amplification of orientation selectivity. A more complete theoretical treatment of the problem should therefore consider the contribution of nonlinear mechanisms as well, although this may come at the expense of less rigorous analytical predictions.
One way to embrace additional nonlinear mechanisms that are effective in biological networks, at least at the level of simulations, is to use a more realistic and more detailed neuron model. In our simulations here we used the current-based LIF neuron model. Simulating networks of more realistic neuron models, like conductance-based LIF neurons, may change certain behaviors of the network [41], [42]. For instance, increasing the recurrent coupling in our inhibition-dominated networks can decrease the mean membrane potential of neurons in the network to very negative values, as there is no reversal potential limiting it. This is not the case in a conductance-based neuron model, and therefore a network of that sort might show a different behavior, especially when operated in extreme regimes.
Finally, it would be interesting to see how the predictions of our current theory change when one considers networks with feature-specific connectivity. This scenario might correspond to species with orientation maps, where neighboring neurons tend to have similar preferred orientations [4]–[6], or to species without a spatial map of selectivity but with feature-specific functional connectivity [8]–[12]. A linear amplification of feedforward input, for instance, has recently been reported in cortical circuits of mice [43]–[45]. How this effect could be modeled within our theoretical framework, and how it affects the distribution of orientation selectivity, should therefore be a next step in our research.
Acknowledgments
We thank the developers of the simulation software NEST (see http://www.nest-initiative.org) and the maintainers of the BCF computing facilities for their support throughout this study.
Data Availability
The authors confirm that all data underlying the findings are fully available without restriction. All relevant data are within the paper.
Funding Statement
Funding by the German Ministry of Education and Research (BMBF; BFNT-F*T, grant 01GQ0830) and the German Research Foundation (DFG; grant EXC 1086) is gratefully acknowledged. The article processing charge was covered by the open access publication fund of the University of Freiburg. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
References
1. Peters A, Payne BR (1993) Numerical relationships between geniculocortical afferents and pyramidal cell modules in cat primary visual cortex. Cerebral Cortex 3:69–78.
2. Ben-Yishai R, Bar-Or RL, Sompolinsky H (1995) Theory of orientation tuning in visual cortex. Proceedings of the National Academy of Sciences 92:3844–3848.
3. Somers DC, Nelson SB, Sur M (1995) An emergent model of orientation selectivity in cat visual cortical simple cells. The Journal of Neuroscience 15:5448–5465.
4. Blasdel GG, Salama G (1986) Voltage-sensitive dyes reveal a modular organization in monkey striate cortex. Nature 321:579–585.
5. Bonhoeffer T, Grinvald A (1991) Iso-orientation domains in cat visual cortex are arranged in pinwheel-like patterns. Nature 353:429–431.
6. Ohki K, Chung S, Kara P, Hübener M, Bonhoeffer T, et al. (2006) Highly ordered arrangement of single neurons in orientation pinwheels. Nature 442:925–928.
7. Jia H, Rochefort NL, Chen X, Konnerth A (2010) Dendritic organization of sensory input to cortical neurons in vivo. Nature 464:1307–1312.
8. Hofer SB, Ko H, Pichler B, Vogelstein J, Ros H, et al. (2011) Differential connectivity and response dynamics of excitatory and inhibitory neurons in visual cortex. Nature Neuroscience 14:1045–1052.
9. Ko H, Hofer SB, Pichler B, Buchanan KA, Sjöström PJ, et al. (2011) Functional specificity of local synaptic connections in neocortical networks. Nature 473:87–91.
10. Ko H, Cossell L, Baragli C, Antolik J, Clopath C, et al. (2013) The emergence of functional microcircuits in visual cortex. Nature 496:96–100.
11. Ko H, Mrsic-Flogel TD, Hofer SB (2014) Emergence of feature-specific connectivity in cortical microcircuits in the absence of visual experience. Journal of Neuroscience 34:9812–9816.
12. Ishikawa AW, Komatsu Y, Yoshimura Y (2014) Experience-dependent emergence of fine-scale networks in visual cortex. Journal of Neuroscience 34:12576–12586.
13. Harris KD, Mrsic-Flogel TD (2013) Cortical connectivity and sensory coding. Nature 503:51–58.
14. Hansel D, van Vreeswijk C (2012) The mechanism of orientation selectivity in primary visual cortex without a functional map. The Journal of Neuroscience 32:4049–4064.
15. Pehlevan C, Sompolinsky H (2014) Selectivity and sparseness in randomly connected balanced networks. PLoS ONE 9:e89992.
16. Sadeh S, Cardanobile S, Rotter S (2014) Mean-field analysis of orientation selectivity in inhibition-dominated networks of spiking neurons. SpringerPlus 3:148.
17. Siegert AJF (1951) On the first passage time probability problem. Physical Review 81:617–623.
18. Ricciardi LM (1977) Diffusion Processes and Related Topics in Biology. Berlin: Springer-Verlag.
19. Amit DJ, Brunel N (1997) Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex 7:237–252.
20. Brunel N (2000) Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience 8:183–208.
21. Helias M, Deger M, Rotter S, Diesmann M (2010) Instantaneous non-linear processing by pulse-coupled threshold units. PLoS Computational Biology 6.
22. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, et al. (2010) The asynchronous state in cortical circuits. Science 327:587–590.
23. Pernice V, Staude B, Cardanobile S, Rotter S (2011) How structure determines correlations in neuronal networks. PLoS Computational Biology 7:e1002059.
24. Tetzlaff T, Helias M, Einevoll GT, Diesmann M (2012) Decorrelation of neural-network activity by inhibitory feedback. PLoS Computational Biology 8:e1002596.
25. Hansel D, van Vreeswijk C (2002) How noise contributes to contrast invariance of orientation tuning in cat visual cortex. The Journal of Neuroscience 22:5118–5128.
26. Miller KD, Troyer TW (2002) Neural noise can explain expansive, power-law nonlinearities in neural response functions. Journal of Neurophysiology 87:653–659.
27. Rosenbaum R, Doiron B (2014) Balanced networks of spiking neurons with spatially dependent recurrent connections. Physical Review X 4:021039.
28. Yger P, El Boustani S, Destexhe A, Frégnac Y (2011) Topologically invariant macroscopic statistics in balanced networks of conductance-based integrate-and-fire neurons. Journal of Computational Neuroscience 31:229–245.
29. van Vreeswijk C, Sompolinsky H (1996) Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274:1724–1726.
30. van Vreeswijk C, Sompolinsky H (1998) Chaotic balanced state in a model of cortical circuits. Neural Computation 10:1321–1371.
31. Ostojic S (2014) Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nature Neuroscience 17:594–600.
32. Fino E, Yuste R (2011) Dense inhibitory connectivity in neocortex. Neuron 69:1188–1203.
33. Packer AM, Yuste R (2011) Dense, unspecific connectivity of neocortical parvalbumin-positive interneurons: a canonical microcircuit for inhibition? The Journal of Neuroscience 31:13260–13271.
34. Ringach DL, Shapley RM, Hawken MJ (2002) Orientation selectivity in macaque V1: diversity and laminar dependence. The Journal of Neuroscience 22:5639–5651.
35. Niell CM, Stryker MP (2008) Highly selective receptive fields in mouse visual cortex. The Journal of Neuroscience 28:7520–7536.
36. Yim M, Aertsen A, Rotter S (2013) Impact of intrinsic biophysical diversity on the activity of spiking neurons. Physical Review E 87:032710.
37. Carandini M, Ferster D (2000) Membrane potential and firing rate in cat primary visual cortex. The Journal of Neuroscience 20:470–484.
38. Priebe NJ, Ferster D (2008) Inhibition, spike threshold, and stimulus selectivity in primary visual cortex. Neuron 57:482–497.
39. Anderson JS, Lampl I, Gillespie DC, Ferster D (2000) The contribution of noise to contrast invariance of orientation tuning in cat visual cortex. Science 290:1968–1972.
40. Priebe NJ, Ferster D (2012) Mechanisms of neuronal computation in mammalian visual cortex. Neuron 75:194–208.
41. Kuhn A, Aertsen A, Rotter S (2004) Neuronal integration of synaptic input in the fluctuation-driven regime. The Journal of Neuroscience 24:2345–2356.
42. Kumar A, Rotter S, Aertsen A (2008) Conditions for propagating synchronous spiking and asynchronous firing rates in a cortical network model. The Journal of Neuroscience 28:5268–5280.
43. Lien AD, Scanziani M (2013) Tuned thalamic excitation is amplified by visual cortical circuits. Nature Neuroscience 16:1315–1323.
44. Li LY, Li YT, Zhou M, Tao HW, Zhang LI (2013) Intracortical multiplication of thalamocortical signals in mouse auditory cortex. Nature Neuroscience 16:1179–1181.
45. Li YT, Ibrahim LA, Liu BH, Zhang LI, Tao HW (2013) Linear transformation of thalamocortical input by intracortical excitation. Nature Neuroscience 16:1324–1330.