Abstract
In this work, certain aspects of the structure of the overlapping groups of neurons encoding specific signals are examined. Individual neurons are assumed to respond stochastically to an input signal. Identification of a particular signal is assumed to result from the aggregate activity of a group of neurons, which we call an information pathway. Conditions for definite response and for non-interference of pathways are derived. These conditions constrain the response properties of individual neurons and the allowed overlap among pathways. Under these constraints, and under the simplifying assumption that all pathways have similar structure, the information capacity of the system is derived. Furthermore, we show that there is a definite advantage in information capacity if pathway neurons are interspersed among the neuron assembly.
1. Introduction
Visual cortex neurons fire action potentials when visual stimuli appear within their receptive fields, and visual information is encoded in real time via the joint firing of multiple neurons. Although much is known about the properties of single neuronal units, the rules by which cortical neurons coordinate their activity to represent information about stimuli remain elusive. To understand why, one must consider that the responses of single units are both noisy and ambiguous [1], [2], with large trial-to-trial variability in response strength and probability [3], [4]. In other words, repeated responses to the same stimulus vary considerably, and responses to multiple different stimuli can be the same. To achieve optimal real-time performance, these ambiguities must be resolved at the level of neuronal populations by the coordinated firing of distinct neuronal ensembles. However, it is not clear how these ensembles must behave in order to allow stable unambiguous percepts to emerge. In contrast to the typical variability of single neuron firing, typical stimulus-induced percepts (e.g., barring bi-stability phenomena and other ambiguous types of stimuli) appear to be definite, effectively noise-free, representations of the stimulus. This suggests that the coordinated firing of appropriate neuronal ensembles, here called "information pathways", is able to represent and transmit information about a wide class of stimuli with a low degree of uncertainty.
One important question is how the pathways, where definite information about the stimulus is encoded, are implemented in the brain. The existence of super-specialized cells that represent a specific stimulus class or even unique stimuli has been postulated. Such "grandmother-like cells" that respond reliably to increasingly complex arrangements of the stimuli (objects) are found in higher associative cortical areas of primates [5], [6]. They are thought to receive unreliable input from multiple lower-level neurons and integrate it into a definite representation. However, such cells are not found in early visual areas, such as area V1, and they are elusive even in higher areas. Hence, it is likely that earlier visual areas represent definite information in the aggregate, nearly simultaneous, firing of ensembles of neurons whose collective output reliably represents a given stimulus (e.g., an oriented bar in area V1). In agreement with this, it has been shown that recurring recruitment of feature-selective cells into co-firing neuronal ensembles occurs during natural visual stimulation by visual scenes in area V1 [7], [8]. Moreover, neurons with similar feature-selectivity have an increased probability of being wired together, even though in rodents they are distributed in a salt-and-pepper fashion [9], [10]. Such neuronal ensembles likely represent, in part, "information pathways" that encode specific visual stimulus features.
In addition to variability, V1 neurons have three other computationally important properties. First, their activity under natural visual conditions is sparse, with low noise correlations [11], [12], [13]. Second, V1 possesses a retinotopic map, in which nearby neurons share receptive field locations. Finally, in rodents, despite this topographic organization, feature selectivity and direction/orientation tuning are not organized in columns but are distributed across V1 in a salt-and-pepper manner [9], [10]. Diffuse localization of sparsely firing feature-selective cells may enable efficient encoding of local features in the scene: sparsification resulting in low correlated noise enables fine feature discrimination even though the tuning of individual units is relatively broad [3], [18], [19]. In what follows, we assume that neurons in a pathway respond probabilistically to the stimulus encoded by the pathway, and that different pathways form overlapping information-representing sets. Neuron responses are taken to be independent, reflecting the low noise correlations reported in many vertebrates [14], [15], [13]. Early theoretical considerations [16], [17] suggest that optimality is achieved when there is little or no coordinated response of neurons beyond the jointly independent response of neurons to a particular signal. There is also evidence of increased decorrelation of pyramidal neurons in mice at adulthood [20], suggesting that as more information is encoded in the brain, neurons decorrelate.
In what follows, we use simple models to identify the basic principles underlying the information content of information pathways. Specifically, we explore the computational properties that may allow overlapping sparsely firing neuronal assemblies to create unambiguous definite representations of visual stimuli. To achieve this, two conditions have to be satisfied: (i) The encoding must be definite: the probability of an information pathway being active when its stimulus is present should be close to 1, while in the absence of the stimulus the probability should be close to 0. (ii) There should be no significant interference between different overlapping pathways. We examine the implications of conditions (i) and (ii) on the degree of pathway overlap and on neuronal assembly architecture. Further, we evaluate the information capacity of three incrementally more plausible architectures of the information pathways, and find that, in the most plausible architecture, the number of non-interfering, definitely responding pathways that can co-exist increases exponentially with the maximum allowed overlap. This in turn is determined by the response probabilities of the individual neurons. Finally, we examine the validity of our analysis by analyzing a dataset obtained by two-photon imaging of layer 2/3 neurons in area V1 of adult mice.
The structure of the paper is the following: In Section 2, we analyse the conditions of definite response and of non-interference of overlapping bimodal pathways. In Section 3, three models for overlapping pathway organization are examined, namely the “Dense Neighbourhood Pathway Model”, the “Random Selection Model”, and the “Locality Preserving Random Selection Pathway Model”. The implications of the organization principle on the information capacity of the system are then assessed. In Section 4, we test the main findings of the previous sections in the context of the Interneuron Pyramidal Partner Groups in adult mouse V1 cortical area, as identified by [21], using the “Locality Preserving Random Selection Pathway Model” (Section 3), since it appears to be the most relevant for a topographically mapped area like V1. Section 5 concludes with our main remarks. In the appendix, we analyse a variant of the Locality Preserving Random Selection Pathway Model, where the probability that a neuron belongs to a given pathway varies according to the distance from the pathway center.
2. Overlapping Information Pathways with Bimodal Probabilistically Responding Neurons
In this section, two important requirements of overlapping pathways are examined, namely the condition of definite response to a preferred signal, encoded by each pathway, and the condition of non-interference among overlapping pathways. These conditions are examined both in the absence and in the presence of spontaneous firing.
2.1. No Spontaneous Firing
Here we assume no spontaneous firing and derive two necessary constraints that allow a multi-neuronal ensemble (pathway) to effectively transmit information. The first is that the pathway needs to give a definite response when the signal it is supposed to transmit is present, despite the fact that its individual constituent units often fail to respond. The second is that the activation of a pathway should not induce activation of another pathway whose signal is not present.
Two overlapping information pathways:
Let us suppose that we have two information pathways of n1 and n2 neurons, and that there are two distinct signals S1, S2 that activate them respectively. Here it is assumed that each neuron in pathway i has probability pi of firing if Si is present and probability 0 of firing if Si is not present. The two pathways are assumed to have an overlap of n12 neurons. To decide whether a pathway is active or not, we need to set a threshold Ki on the number of active neurons. If more than Ki neurons are firing in pathway i, the pathway is considered active; otherwise it is considered inactive.
Let us now consider the condition for pathway i to be active given that Si is present. Since the neurons are bimodal, the probability of more than Ki neurons firing is given by the binomial distribution:
$$P(F_i > K_i \mid S_i) \;=\; \sum_{k=K_i+1}^{n_i} \binom{n_i}{k}\, p_i^{k}\, q_i^{\,n_i-k} \qquad (1)$$
where qi = 1 − pi is the probability of a neuron not firing when the corresponding signal Si is present. This is the probability that pathway i is active when Si is present.
To facilitate the calculation we are going to use the De Moivre-Laplace theorem to approximate the binomial distribution with the normal distribution. This approximation is considered to be good for ni > 30 and for a range of values of k in the sum that is of order ni so as to avoid discreteness error. The De Moivre-Laplace theorem tells us:
$$\sum_{k=K_i+1}^{n_i} \binom{n_i}{k}\, p_i^{k}\, q_i^{\,n_i-k} \;\approx\; 1 - \Phi\!\left(\frac{K_i - n_i p_i}{\sqrt{n_i p_i q_i}}\right) \qquad (2)$$
where Φ(z) is the normal cumulative distribution function.
Let us now focus on the condition of definite response (i) of pathway i to signal Si. As in any probabilistic response system, we have to set a level of certainty above which we consider the system to give a definite response. Let us call this level 1 − ϵ. The condition of definite response then assumes the form:
$$\Phi\!\left(\frac{n_i p_i - K_i}{\sqrt{n_i p_i q_i}}\right) \;\geq\; 1 - \epsilon \qquad (3)$$
Condition (3) admits the following interpretation: nipi is the expected number of neurons firing in pathway i. Condition (3) is satisfied when the expected number of firing neurons is much larger (in units of the standard deviation $\sqrt{n_i p_i q_i}$) than the threshold Ki.
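As a numerical illustration (not from the paper; the parameter values are hypothetical), the following Python sketch compares the exact binomial tail of equation (1) with the De Moivre-Laplace approximation of equation (2):

```python
from math import comb, erf, sqrt

def p_active_exact(n, p, K):
    """Exact binomial tail: probability that more than K of n neurons fire."""
    q = 1.0 - p
    return sum(comb(n, k) * p**k * q**(n - k) for k in range(K + 1, n + 1))

def p_active_normal(n, p, K):
    """De Moivre-Laplace approximation: 1 - Phi((K - n*p)/sqrt(n*p*q))."""
    q = 1.0 - p
    z = (K - n * p) / sqrt(n * p * q)
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# Illustrative values: a pathway of n = 1000 neurons, response probability
# p = 0.1, activation threshold K = 80.
exact = p_active_exact(1000, 0.1, 80)
approx = p_active_normal(1000, 0.1, 80)
```

For these values the two probabilities agree to within about one percent, which is why the normal approximation is used throughout this section.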
The condition of no interference between the overlapping pathways S1, S2 is somewhat more delicate, since one must set the thresholds carefully so as to achieve maximal separability of the pathways. The probability that Sj, j ≠ i, causes activity in pathway i is given by
$$P(F_i > K_i \mid S_j) \;=\; \sum_{k=K_i+1}^{n_{ij}} \binom{n_{ij}}{k}\, p_j^{k}\, q_j^{\,n_{ij}-k} \qquad (4)$$
This probability has to be kept below the confidence level ϵ, and this leads to the condition:
$$\Phi\!\left(\frac{K_i - n_{ij} p_j}{\sqrt{n_{ij} p_j q_j}}\right) \;\geq\; 1 - \epsilon \qquad (5)$$
The interpretation of this equation is that the expected number of firing neurons in the overlap (nijpj) under presentation of signal Sj is much smaller than the threshold Ki of pathway i.
The threshold Ki should be such that the expected number of firing neurons upon presentation of signal Si is well above Ki, while the expected number of firing neurons in pathway i upon presentation of signal Sj due to the pathway intersection is well below Ki. One could take Ki to be the midpoint of the two expected numbers of firing neurons, however this would ignore possible differences in the standard deviation of the number of firing neurons in the two cases. Instead one should take a weighted average of the two expected numbers by the standard deviations as threshold. This is given by
$$K_i \;=\; \frac{n_i p_i\,\sqrt{n_{ij} p_j q_j} \;+\; n_{ij} p_j\,\sqrt{n_i p_i q_i}}{\sqrt{n_i p_i q_i} \;+\; \sqrt{n_{ij} p_j q_j}} \qquad (6)$$
Suppose now that we set ϵ = 0.01. Then the condition of definite response (3) can be solved by using the normal cumulative distribution table to give
$$\frac{n_i p_i - K_i}{\sqrt{n_i p_i q_i}} \;\geq\; 2.33 \qquad (7)$$
Furthermore, the condition of no interference gives similarly
$$\frac{K_i - n_{ij} p_j}{\sqrt{n_{ij} p_j q_j}} \;\geq\; 2.33 \qquad (8)$$
Setting the threshold to the optimal value (6) the conditions (7) and (8) collapse to the single condition
$$\frac{n_i p_i - n_{ij} p_j}{\sqrt{n_i p_i q_i} + \sqrt{n_{ij} p_j q_j}} \;\geq\; 2.33 \qquad (9)$$
In the brain there are multiple pathways, consisting in principle of different numbers of constituent neurons, each of which fires with a different probability when the optimal stimulus is present. To simplify the modeling, we consider pathways consisting of n neurons, each of which fires with probability p to its optimal signal (the signal that activates the pathway).
In this case, the optimal threshold for two overlapping pathways, with overlap m, is given by
$$K \;=\; \frac{np\sqrt{mpq} + mp\sqrt{npq}}{\sqrt{npq} + \sqrt{mpq}} \;=\; p\sqrt{nm} \qquad (10)$$
Under this choice of optimal threshold, the conditions of definite response and no interference (7) and (8) collapse to the single condition
$$\sqrt{\frac{p}{q}}\,\bigl(\sqrt{n} - \sqrt{m}\bigr) \;\geq\; 2.33 \qquad (11)$$
To have the maximum number of encoding pathways, it is necessary to increase the overlap m to a maximum value m0 without violating condition (11). This is achieved when
$$m_0 \;=\; \left[\left(\sqrt{n} - 2.33\,\sqrt{\frac{q}{p}}\right)^{\!2}\,\right] \qquad (12)$$
where the brackets here denote integer part.
Fig.1 shows how the value of the maximum overlap m0 varies as a function of the probability of neuronal response p, for a fixed number n = 1000 of neurons in a pathway. Already at p = 0.06 a 50% overlap is possible. Hence, for a maximal number of non-interfering, definitely responding pathways of size n = 1000, high overlaps are possible even at very low response probabilities.
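This value can be checked directly from equation (12). A minimal sketch (illustrative; it uses the z-value 2.33 for ϵ = 0.01, as in the text):

```python
from math import floor, sqrt

Z = 2.33  # normal quantile used in the text for epsilon = 0.01

def max_overlap(n, p):
    """Maximum allowed overlap m0 from condition (12)."""
    q = 1.0 - p
    root = sqrt(n) - Z * sqrt(q / p)
    return floor(root * root) if root > 0 else 0

m0 = max_overlap(1000, 0.06)   # pathway of n = 1000 neurons, p = 0.06
```

With n = 1000 and p = 0.06 this gives m0 = 501, i.e., roughly a 50% overlap, consistent with Fig.1.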
Figure 1: The pathway overlap threshold m0 is plotted against the probability p of a neuron in a pathway responding to its signal. Here the number of neurons in the pathway is taken to be n = 1000.
Hence, we can say that, even when individual neuron responses occur with very low probability, high pathway overlaps are possible while still achieving a near maximal number of non-interfering, definite pathway responses. A downstream neuron connected to such a pathway and thresholding the overall input at the optimal threshold then successfully receives the information that the pathway carries. Such an information transmission system is, in principle, implementable in real biological neuronal networks.
In sum, in the absence of spontaneous activity equation (12) determines the maximum allowed overlap (m0) between two pathways as a function of the neuronal response probability (p, q = 1 − p) to the optimal signal that the pathway carries and the pathway constituent neuron number (n). In the next section we examine the effect of spontaneous activity on the maximum allowed pathway overlap.
2.2. Overlapping Pathways in the Presence of Spontaneous Firing
Let us now suppose that each pathway has n bimodal neurons and that a pair of pathways has m neuron overlap, as before. In this case, we will assume that there is a spontaneous probability of firing p0 for a neuron not participating in a pathway whose signal is present. There is also a stimulated probability of firing p when the neuron is in a pathway whose associated signal is present.
The spontaneous firing may arise in many ways. It may, for example, be built into the network so as to maintain dynamic equilibrium. After all, excess firing will generate too much input to the network, which may lead to seizure if network control fails, while absence of firing may provide too little input to the network, leading to global silencing. Another source of spontaneous firing may be the parallel operation of overlapping pathways: a neuron may belong not only to the pair of pathways considered here, but also to a third pathway that may or may not be active in parallel with our pair. It is quite possible that both sources of spontaneous firing exist and cooperate in the brain.
A further source of the "spontaneous" firing (probability p0) may simply be the signal of the second pathway causing an increased response probability in the first pathway. This may happen, for example, if the signals are gratings of nearby orientations. In this case both orientation gratings evoke increased responses in the neurons of a pathway; however, the preferred orientation evokes a response with higher probability p than the nearby orientation, which evokes a response with probability p0. In this case we will refer to the pathways as close-signal pathways and to the difference dp = p − p0 as the probability resolution of the two pathways.
As before, there is a threshold K for the number of active neurons in a pathway, above which the pathway is considered active. The condition of definite response again assumes the form
$$\Phi\!\left(\frac{np - K}{\sqrt{npq}}\right) \;\geq\; 1 - \epsilon \qquad (13)$$
However, in this case, there is one further condition, the condition of non-spontaneous response
$$\Phi\!\left(\frac{K - np_0}{\sqrt{np_0 q_0}}\right) \;\geq\; 1 - \epsilon, \qquad q_0 = 1 - p_0 \qquad (14)$$
Fortunately, this condition is weaker than the non-interference condition and is therefore automatically satisfied.
The non-interference condition in this case assumes the form
$$P(F_i > K \mid S_j) \;=\; \sum_{k_1 + k_2 > K} \binom{m}{k_1}\, p^{k_1} q^{\,m-k_1}\, \binom{n-m}{k_2}\, p_0^{k_2}\, q_0^{\,n-m-k_2} \qquad (15)$$
It is not too difficult to show, following a proof similar to that of the De Moivre-Laplace theorem, that in the limit n ≫ m, m ≫ 1, P(Fi > K|Sj) can be approximated by
$$P(F_i > K \mid S_j) \;\approx\; 1 - \Phi\!\left(\frac{K - mp - (n-m)p_0}{\sqrt{mpq + (n-m)p_0 q_0}}\right) \qquad (16)$$
This leads to the non-interference condition
$$\Phi\!\left(\frac{K - mp - (n-m)p_0}{\sqrt{mpq + (n-m)p_0 q_0}}\right) \;\geq\; 1 - \epsilon \qquad (17)$$
In this case, the optimal choice of threshold is
$$K \;=\; \frac{np\,\sqrt{mpq + (n-m)p_0 q_0} \;+\; \bigl(mp + (n-m)p_0\bigr)\sqrt{npq}}{\sqrt{npq} \;+\; \sqrt{mpq + (n-m)p_0 q_0}} \qquad (18)$$
As before, setting the confidence limit ϵ = 0.01, the two conditions collapse to the following condition:
$$\frac{np - mp - (n-m)p_0}{\sqrt{npq} + \sqrt{mpq + (n-m)p_0 q_0}} \;\geq\; 2.33 \qquad (19)$$
It is this condition that determines the maximum allowed overlap m0 of distinct pathways in the presence of spontaneous firing.
Condition (19) also admits a different interpretation. Suppose that we have two overlapping pathways A, B and two corresponding stimuli A, B. If the two signals are close to each other (e.g., nearby orientations), then signal A may evoke a response in pathway B neurons with a probability p0 only slightly smaller than the probability p with which pathway B neurons respond to signal B, and correspondingly for pathway A. In this setup, the overlap threshold m0 admits the interpretation of the maximum overlap allowed so that the two pathways can resolve the two signals at probability difference dp = p − p0. The value of m0 as a function of p is shown in Fig.2.
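The maximum overlap in the presence of spontaneous firing can be obtained by scanning condition (19) directly, since its left-hand side decreases monotonically in m. A minimal sketch (parameter values illustrative, not from the paper):

```python
from math import sqrt

Z = 2.33  # normal quantile for epsilon = 0.01, as in the text

def max_overlap_spont(n, p, p0):
    """Largest overlap m satisfying condition (19), found by a simple scan
    (the left-hand side of (19) decreases monotonically in m)."""
    q, q0 = 1.0 - p, 1.0 - p0
    m0 = 0
    for m in range(n):
        signal = n * p - m * p - (n - m) * p0
        noise = sqrt(n * p * q) + sqrt(m * p * q + (n - m) * p0 * q0)
        if signal / noise >= Z:
            m0 = m
        else:
            break
    return m0

m_no_spont = max_overlap_spont(1000, 0.06, 0.0)   # reduces to condition (11)
m_spont = max_overlap_spont(1000, 0.06, 0.02)     # probability resolution dp = 0.04
```

For n = 1000 and p = 0.06, a spontaneous rate p0 = 0.02 roughly halves the allowed overlap relative to the noiseless case, illustrating how m0 shrinks as the probability resolution dp decreases.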
Figure 2: The overlap threshold m0 is plotted against the probability of response to signal p, for various values of the probability resolution dp = p − p0. Here the pathway neuron number is n = 1000.
What is important in Fig.2 is that there is a minimum overlap threshold for a given probability resolution dp, irrespective of the precise values of p, p0, as long as dp = p − p0 remains fixed. Hence, if the overlap is kept below this minimum threshold, the two pathways can resolve the two signals while keeping the individual neuron response probability well below 1. This allows extra freedom in the operation of the information pathways, which may be necessary to accommodate firing-rate fluctuations in the brain, as well as extra constraints on the firing probabilities of pathway neurons arising from information-processing requirements.
In the next section we are going to address the question of packing, that is we will assume that we have an assembly of pathways, and that each pathway encodes a different signal. We will ask how many different signals can be encoded in non-interfering pathways under various pathway architectures. In all these models a maximum overlap m0 will be allowed. This m0 will be the one determined in this section through equation (19).
3. Number of Signals Encoded
Let us now address the question of how many signals can be encoded independently in a set of N neurons, given that the pathway size is n and the maximal overlap allowed is m0 < n.
3.1. Dense Neighbourhood Pathway Model
In this model, we assume that the neurons are irregularly and randomly placed in a cortical layer. We furthermore assume that pathways are constructed from geometrically adjacent neurons placed closest together.
2D Case:
In this case neurons are irregularly and randomly placed on a surface modeling a cortical layer. Pathways form discs, with all neurons in the disc participating in the pathway. Hence, the number of neurons in this model is proportional to the surface area they occupy. We assign a surface density d to the number of neurons per unit area. Hence, the areas associated with N, n and with the overlap m are AN = N/d, An = n/d and Am = m/d. Since the pathways are circular, we can associate with each pathway a radius $R_n = \sqrt{A_n/\pi} = \sqrt{n/(\pi d)}$. The overlap area Am is the overlap of two discs, and its size is fully determined by the radius of the pathway discs and the distance D between the centers of the pathways. In fact it is easy to show that
$$A_m \;=\; 2R_n^2\,\cos^{-1}\!\left(\frac{D}{2R_n}\right) \;-\; \frac{D}{2}\,\sqrt{4R_n^2 - D^2} \qquad (20)$$
By inverting relation (20) it is possible to determine the minimum distance allowed for a maximum overlap that corresponds to m0 neurons.
Defining the regularized distances with respect to the density, $\tilde{R}_n = \sqrt{d}\,R_n = \sqrt{n/\pi}$ and $\tilde{D} = \sqrt{d}\,D$, (20) assumes the form
$$m \;=\; 2\tilde{R}_n^2\,\cos^{-1}\!\left(\frac{\tilde{D}}{2\tilde{R}_n}\right) \;-\; \frac{\tilde{D}}{2}\,\sqrt{4\tilde{R}_n^2 - \tilde{D}^2} \qquad (21)$$
Consider now the question of how many pathways of size n fit in N neurons if the maximal allowed overlap is m0 neurons. This corresponds to the question of how many circles of radius Rn fit within an area AN of neurons if the nearest allowed distance of their centers is $D_{\min}$.
This question can be answered easily if we ignore insignificant edge effects associated with the exact shape of the area AN. For closest packing, the centers of the pathway discs form a triangular lattice of edge $\tilde{D}_{\min}$ (in regularized units), and the area of the lattice cell per center is $\frac{\sqrt{3}}{2}\tilde{D}_{\min}^2$. Hence, we get that the number of pathways Np is
$$N_p \;=\; \frac{2N}{\sqrt{3}\,\tilde{D}_{\min}^2} \qquad (22)$$
From equation (22) we see that the number of pathways Np increases linearly with N, but with a proportionality coefficient that depends on both n, m0.
The graph of this proportionality coefficient for maximum overlap m0 is depicted in Fig.3 for n = 1000 pathway neurons. As can be seen, Np/N increases with increasing m0; however, only for very large overlaps does it become comparable to 1.
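The 2D packing argument can be reproduced numerically by inverting the overlap relation (21) by bisection for the minimum centre distance and applying the triangular-lattice count (22). A sketch in the same regularized units (parameter values illustrative):

```python
from math import acos, pi, sqrt

def overlap_neurons(n, Dt):
    """Expected neurons in the lens-shaped overlap of two pathway discs of n
    neurons whose centres are a dimensionless distance Dt apart (eq. 21)."""
    Rt = sqrt(n / pi)                      # dimensionless pathway radius
    if Dt >= 2 * Rt:
        return 0.0
    return 2 * Rt**2 * acos(Dt / (2 * Rt)) - (Dt / 2) * sqrt(4 * Rt**2 - Dt**2)

def pathway_fraction(n, m0):
    """Np/N for closest (triangular) packing of pathway centres (eq. 22),
    with the minimum centre distance found by bisection on eq. (21)."""
    Rt = sqrt(n / pi)
    lo, hi = 0.0, 2 * Rt                   # overlap decreases with distance
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if overlap_neurons(n, mid) > m0:
            lo = mid
        else:
            hi = mid
    return 2.0 / (sqrt(3.0) * hi**2)

frac = pathway_fraction(1000, 500)         # n = 1000, 50% allowed overlap
```

Even at a 50% allowed overlap the resulting pathway fraction Np/N remains far below 1, consistent with the small, linear capacity of the dense model.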
Figure 3: (a) The 2D dense pathway model. Overlapping pathways are dense in the sense that they consist of all neurons within overlapping circles. Two pathways are indicated by the blue and red colors. Here blue dots indicate neurons in pathway 1 and red dots neurons in pathway 2, while blue dots with red circles indicate the overlap neurons. (b) The pathway fraction Np/N is plotted against the maximum number of overlap neurons m0. Here the number of neurons in the pathway is taken to be n = 1000.
3D Case:
Here, the neurons are irregularly and randomly placed in three dimensions, essentially considering the cortical layer to be thick, hence allowing three-dimensional structure. We also assume, as in the 2D case, that pathways are constructed from adjacent neurons placed closest together, forming overlapping spheres that are closest packed at the permitted overlap. Hence the centers of the pathways form a closest-packed sphere lattice (hexagonal close-packed or cubic close-packed). In this case, a volume neuron density d associates volumes with neuron numbers, giving VN = N/d for the volume of the aggregate of neurons, Vn = n/d for the volume of the neurons in a pathway and Vm = m/d for the volume of the neurons in the overlap of two pathways. The radius of the pathway sphere is $R_n = (3n/4\pi d)^{1/3}$. The overlap volume of two spheres a distance D apart is given by
$$V_m \;=\; \frac{\pi}{12}\,(4R_n + D)\,(2R_n - D)^2 \qquad (23)$$
Regularizing by setting $\tilde{R}_n = d^{1/3} R_n = (3n/4\pi)^{1/3}$ and $\tilde{D} = d^{1/3} D$, (23) assumes the form
$$m \;=\; \frac{\pi}{12}\,(4\tilde{R}_n + \tilde{D})\,(2\tilde{R}_n - \tilde{D})^2 \qquad (24)$$
Suppose now that m0 is the maximum allowed overlap, so that pathways do not interfere with each other. By numerically inverting equation (24) it is possible to obtain the minimum allowed distance $D_{\min}$ between the centers of the pathways. The maximum number of pathways that can be packed at this minimum distance is equal to the number of auxiliary hard spheres of radius $D_{\min}/2$ that can be packed in the volume VN. A theorem of Gauss tells us that the maximum fraction of volume that can be occupied by closely packed hard spheres is $\pi/\sqrt{18}$. Hence the volume of the auxiliary spheres is $V_{aux} = \frac{\pi}{\sqrt{18}}\,V_N$, and their number is the quotient of $V_{aux}$ by the hard-sphere volume $\frac{4}{3}\pi(D_{\min}/2)^3$. Since this is the number of pathways, we get that
$$N_p \;=\; \frac{\pi}{\sqrt{18}}\;\frac{V_N}{\frac{4}{3}\pi\,(D_{\min}/2)^3} \;=\; \frac{\sqrt{2}\,N}{\tilde{D}_{\min}^3} \qquad (25)$$
Equations (24,25) implicitly determine the number of pathways in terms of the number of pathway neurons n and the maximum allowed number of overlap neurons m0. Again, the number of pathways increases linearly in the number of neurons N, but with a higher proportionality coefficient than in the 2D case, as can be seen in Fig.3. It is important to note that in either the 2D or the 3D case the Dense Neighborhood Pathway Model can encode only a relatively small number of signals, linear in the total number of neurons (N). Furthermore, to achieve maximum encoding, a regular topography of pathways needs to be implemented. Such a regular connectivity topography may be difficult to maintain in detail and does not seem to be implemented in mouse V1. In the next section, we will see that more dilute and intermixed pathway models significantly increase the capacity for information transmission.
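The 3D count of equations (24) and (25) follows the same pattern as the 2D case, with the sphere-lens volume replacing the disc-lens area and the Gauss packing fraction π/√18 entering the count. A sketch in regularized units (parameter values illustrative):

```python
from math import pi, sqrt

def overlap_neurons_3d(n, Dt):
    """Expected neurons in the lens-shaped overlap of two pathway spheres of n
    neurons whose centres are a dimensionless distance Dt apart (eq. 24)."""
    Rt = (3 * n / (4 * pi)) ** (1.0 / 3.0)  # dimensionless pathway radius
    if Dt >= 2 * Rt:
        return 0.0
    return (pi / 12) * (4 * Rt + Dt) * (2 * Rt - Dt) ** 2

def pathways_per_neuron_3d(n, m0):
    """Np/N for closest sphere packing (eq. 25, Gauss fraction pi/sqrt(18))."""
    Rt = (3 * n / (4 * pi)) ** (1.0 / 3.0)
    lo, hi = 0.0, 2 * Rt                    # overlap decreases with distance
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if overlap_neurons_3d(n, mid) > m0:
            lo = mid
        else:
            hi = mid
    return sqrt(2.0) / hi**3

frac3d = pathways_per_neuron_3d(1000, 500)  # n = 1000, 50% allowed overlap
```

As in 2D, Np remains linear in N; only the proportionality coefficient improves.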
3.2. Random Selection Model
In the previously described model, the information pathways formed were local, in the sense that neurons in a pathway were neighbouring neurons. They were also dense, in the sense that all neurons within a radius from the pathway center belong to the pathway. Both restrictions severely limit the number of non-interfering pathways. To better understand these limitations, suppose the pathway neurons are chosen from the full aggregate. Suppose that we are given a set of N neurons, and that each neuron is chosen at random with probability p = n/N to participate in a particular pathway. This does not quite fix the number of pathway neurons to be n, but rather demands that the expected number of pathway neurons be n. Let us say that the non-interference condition allows m0-neuron overlaps among distinct pathways, and that activation of a pathway by co-activation of two other pathways is a rare event that can be ignored. A sketch of the pathway structure of the model is depicted in Fig.4.
Figure 4: (a) On the right: The Random Selection Model. Neurons that form a pathway are selected at random from the neuron aggregate for every pathway. Here blue dots indicate neurons in pathway 1 and red dots neurons in pathway 2, while blue dots with red circles indicate the overlap neurons. (b) On the left: The Locality Preserving Random Selection Pathway Model. Here two specific pathways are indicated by the blue and red circles. Black dots within the blue and red pathways indicate neurons that do not belong to either pathway. Neurons in the overlap are again blue dots with red circles.
Since the overlap neuron number m determines whether there is interference among pathways, we need to calculate the overlap probability P(O = m), where O is the two-pathway overlap random variable. To do this, suppose that we have fixed the n neurons of pathway Pi, and count the ways we can choose the neurons of pathway Pj so as to have an m-neuron overlap. This number of ways is $\binom{n}{m}\binom{N-n}{n-m}$. Hence, the probability of an m-neuron overlap between the pathways Pi, Pj is
$$P(O = m) \;=\; \frac{\binom{n}{m}\binom{N-n}{\,n-m}}{\binom{N}{n}} \qquad (26)$$
Suppose now that we work in the large N limit, that is assume N ≫ n. Then we can apply Stirling’s formula to get the N dependence of the overlap probability. Doing this we get
$$P(O = m) \;\approx\; \frac{1}{m!}\left(\frac{n^2}{N}\right)^{\!m} e^{-n^2/N} \qquad (27)$$
The importance of this formula is that P (O = m) ~ N−m. Suppose now that the condition of no interference is O < m0. Then the probability of interference of the two pathways is
$$P_{\mathrm{int}} \;=\; P(O \geq m_0) \;\approx\; \frac{1}{m_0!}\left(\frac{n^2}{N}\right)^{\!m_0} \qquad (28)$$
To determine the number of non-interfering pathways Np in this model, we have to set a level of tolerance, since the pathway overlap is stochastic. A rather strict tolerance level, which guarantees that the pathways operate properly, is to demand that interfering pairs are unlikely: the expected number of interfering pairs should be a small fraction ϵ of the number of pathways.
$$\frac{N_p (N_p - 1)}{2}\, P_{\mathrm{int}} \;\leq\; \epsilon\, N_p \qquad (29)$$
This means that the allowed number of non-interfering pathways is
$$N_p \;\approx\; 2\epsilon\, m_0!\left(\frac{N}{n^2}\right)^{\!m_0} \qquad (30)$$
Observe that the situation here is very different from that of the 2D or 3D dense neighbourhood pathway models. Here the number of non-interfering pathways increases like a power of the number of neurons, $N_p \sim N^{m_0}$, while in either of the geometric models it increases linearly in the number of neurons, Np ~ N. Hence, the random selection model can store much more information than the dense neighbourhood pathway models, suggesting that the brain architecture may drop locality when there are many distinct signals to be encoded.
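The Poisson form (27) of the overlap distribution is easy to verify by simulation. The sketch below (illustrative; it uses fixed-size pathways, i.e., the hypergeometric variant of the model) estimates the mean overlap of two random pathways and compares it to the mean n²/N of the Poisson limit:

```python
import random

def sampled_mean_overlap(N, n, trials=2000, seed=0):
    """Monte Carlo estimate of the mean overlap of two pathways of exactly
    n neurons each, drawn uniformly at random from N neurons."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        a = set(rng.sample(range(N), n))
        b = set(rng.sample(range(N), n))
        total += len(a & b)
    return total / trials

# Illustrative sizes (not from the paper).
N, n = 5000, 100
mean_overlap = sampled_mean_overlap(N, n)
poisson_mean = n * n / N   # mean of the Poisson limit in eq. (27)
```

The sampled mean overlap closely matches n²/N, and large overlaps (m of order m0 ≫ n²/N) are correspondingly rare, which is the source of the N^m0 capacity.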
Nevertheless, there is also a penalty to pay. This model of pathway construction is not appropriate when a topographical mapping has to be maintained. Since this is the case in the early visual areas, the model may not be appropriate for pathways in V1. What we need in V1 is a model that combines the advantages of dilute pathways with the locality of the dense neighbourhood pathway models. Such a model is the Locality Preserving Random Selection Pathway Model.
3.3. Locality Preserving Random Selection Pathway Model
This model assumes that the n pathway neurons are uniformly distributed within a distance Rn from the pathway center. The purpose of this restriction is to allow for spatially localized pathways, that are necessary for the encoding of local features in topographically mapped areas like V1.
2D Case:
Two densities are associated with this model. One is the neuron density d = N/Area, and the other is the pathway neuron density $d_n = n/(\pi R_n^2)$. In terms of these densities, the number $N_n$ of neurons within radius Rn from the pathway center is $N_n = \pi R_n^2 d$, which is greater than the number of pathway neurons n. The probability p that a neuron within radius Rn from the pathway center belongs to the pathway is $p = d_n/d = n/N_n$. We will also assume that the maximum overlap permitted between pathways is m0 neurons, which is determined by the response properties of the neurons. The structure of the pathways of the model is depicted in Fig.4.
If the pathways form a closest packed lattice, as in the 2D geometric model, then the adjacent pathway overlap area is again given by (20). However the expected number of overlap neurons m is modified by the ratio of the two densities:
$$m \;=\; \frac{d_n}{d}\left[\,2\tilde{R}_n^2\,\cos^{-1}\!\left(\frac{\tilde{D}}{2\tilde{R}_n}\right) \;-\; \frac{\tilde{D}}{2}\,\sqrt{4\tilde{R}_n^2 - \tilde{D}^2}\,\right] \qquad (31)$$
Here $\tilde{R}_n = \sqrt{d_n}\,R_n = \sqrt{n/\pi}$ and $\tilde{D} = \sqrt{d_n}\,D$ are again the appropriate dimensionless radius and pathway center distance. Solving (31) implicitly for $\tilde{D}$ after substituting the maximum overlap m0 for m gives us the minimum pathway distance $\tilde{D}_{\min}$.
If $\tilde{D}_{\min}$ is different from zero, this is sufficient to give the maximum number of pathways that can be packed in our neuron area:
$$N_p \;=\; \frac{2\,(d_n/d)\,N}{\sqrt{3}\,\tilde{D}_{\min}^2} \qquad (32)$$
In this case, the maximum number of pathways Np increases linearly with the number of neurons present.
The situation can change when $\tilde{D}_{\min} = 0$. From (31) it is easy to see that this happens when
$$\frac{n^2}{N_n} \;\leq\; m_0 \qquad (33)$$
In this case, two pathways can operate without interference at any distance D. However, if too many overlapping pathways are present, some pairs will interfere. In this case, a closest-packed structure for the pathway centers is unlikely to be maintained. Let us denote by $\langle N_{\mathrm{int}}\rangle$ the expected number of interfering pairs. As in the Random Selection Model, we will demand that the expected number of interfering pairs is small compared to the number of pathways,
$$\langle N_{\mathrm{int}} \rangle \;=\; \frac{N_p (N_p - 1)}{2}\, P_{\mathrm{conf}} \;\leq\; \epsilon\, N_p \qquad (34)$$
Let us consider now two adjacent pathways. The probability pb that a random neuron belongs to the overlap of these pathways is given by
$$p_b \;=\; \frac{A_m}{\mathrm{Area}}\; p^2 \qquad (35)$$
where Am is the overlap area and Area is the area of all the neurons. The ratio Am/Area represents the probability that the neuron picked is in the overlap area, and p2 represents the probability that it belongs to both pathways.
Consider the interference probability PI, which is the probability that the overlap of two adjacent pathways is greater than or equal to m0. Then
| $P_I = \sum_{m=m_0}^{N} \binom{N}{m}\,p_b^m\,(1-p_b)^{N-m}$ | (36) |
Since pb is small in the regime in which we are working, we can apply the Poisson approximation to the binomial distribution and get
| $P_I \approx \sum_{m=m_0}^{\infty} e^{-Np_b}\,\frac{(Np_b)^m}{m!}$ | (37) |
The ratio of two successive terms in this sum is given by
| $\frac{Np_b}{m+1}$ | (38) |
Recall that we are in the regime where the expected number of overlap neurons Npb, when two pathways overlap completely, is less than m0, so that the ratio (38) is smaller than one and the terms of the sum decrease. In our case we do not have complete overlap, hence the condition Npb ≪ m0 is a rather mild condition to impose. Hence, in (37) we can keep only the first term to get
| $P_I \approx e^{-Np_b}\,\frac{(Np_b)^{m_0}}{m_0!}$ | (39) |
Using now (36) and (20) it is easy to show that
| $Np_b = \frac{n^2}{N_n}\,f(D)$ | (40) |
Here $f(D) = A_m(D)/\pi R_n^2$ is a geometric factor taking values in the interval [0, 1], which is zero when D > 2Rn. By applying Stirling’s formula to the factorial in (39) we get
| $P_I \approx \frac{1}{\sqrt{2\pi m_0}}\;e^{-m_0\,g(r(D))}$ | (41) |
where r(D) = n²f(D)/(Nn m0) < 1 in our regime of interest. Noticing that the function g(r) = r − ln(r) − 1 is decreasing for r < 1 and that g(1) = 0, we get that g(r(D)) > 0. Hence, the exponent of m0 in the pathway interference probability is negative.
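The chain of approximations above (binomial tail, Poisson tail, first Poisson term, and the Stirling-based exponential bound) can be verified numerically. The parameter values below are hypothetical, chosen to sit in the regime Npb < m0:

```python
import math

# Hypothetical parameters with N*pb = 10 < m0, so r < 1.
N, pb, m0 = 20_000, 5e-4, 25
lam = N * pb                      # expected number of overlap neurons
r = lam / m0
g = r - math.log(r) - 1           # g(r) = r - ln(r) - 1 > 0 for r < 1

def log_binom_pmf(m):
    # log of the exact binomial probability mass function
    return (math.lgamma(N + 1) - math.lgamma(m + 1) - math.lgamma(N - m + 1)
            + m * math.log(pb) + (N - m) * math.log(1 - pb))

cut = m0 + 300                    # terms beyond this are negligible
binom_tail = sum(math.exp(log_binom_pmf(m)) for m in range(m0, cut))
poisson_tail = sum(math.exp(-lam + m * math.log(lam) - math.lgamma(m + 1))
                   for m in range(m0, cut))
first_term = math.exp(-lam + m0 * math.log(lam) - math.lgamma(m0 + 1))
stirling = math.exp(-m0 * g) / math.sqrt(2 * math.pi * m0)

print(binom_tail, poisson_tail, first_term, stirling)
```

For these values the Poisson tail tracks the binomial tail closely, the first term captures the tail to within a factor of about two, and the Stirling expression matches the first term almost exactly.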
If we assume random positioning of the pathways, then the probability that two pathways are a distance in [D, D + dD] apart is
| $P(D)\,dD = \frac{2\pi D\,dD}{\mathrm{Area}}$ | (42) |
This means that the confusion probability Pc, which is now independent of the pathway distance D, is
| $P_c = \int_0^{2R_n} P_I(D)\,\frac{2\pi D\,dD}{\mathrm{Area}}$ | (43) |
This is expected to retain the exponential behaviour in m0 with an effective coefficient g(rmax), where rmax = maxD r(D) = n²/(Nn m0). In fact, it is easy to see that
| $g(r(D)) \ge g(r_{max})$ | (44) |
hence we get the bound
| $P_c \le \frac{4\pi R_n^2}{\mathrm{Area}}\,\frac{1}{\sqrt{2\pi m_0}}\;e^{-m_0\,g(r_{max})}$ | (45) |
In this case, the tolerance condition (34) assumes the form
| $\frac{N_P (N_P - 1)}{2}\,P_c \le \epsilon\,N_P$ | (46) |
This is guaranteed if
| $N_P\,P_c \le 2\epsilon$ | (47) |
hence
| $N_P \le \frac{\epsilon\,\sqrt{2\pi m_0}}{2}\,\frac{N}{N_n}\;e^{m_0\,g(r_{max})}$ | (48) |
This model has a linear dependence of the number of pathways NP on the number of neurons N; however, there is an exponential dependence of the maximum pathway number on the maximum overlap m0. This means that once the pathways are dilute enough for condition (33) to be satisfied, the number of allowed, non-interfering pathways increases rapidly. Furthermore, once the diluteness of the pathways is regulated in the brain, pathways can be randomly placed in almost arbitrarily large numbers without significant interference. In practice, however, there are probably other restrictions, not studied here, that limit the number of pathways allowed, for example the ability of the brain to address these pathways.
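To illustrate this exponential dependence, the following sketch evaluates only the exponential factor exp(m0 g(rmax)) with rmax = n²/(Nn m0); prefactors are omitted, and n and Nn are hypothetical values:

```python
import math

# Growth of the capacity bound's exponential factor with m0.
# n, Nn are hypothetical; the dilute regime requires m0 > n^2 / Nn.
n, Nn = 80, 19_635

def growth_factor(m0):
    r = n * n / (Nn * m0)             # rmax = n^2 / (Nn * m0)
    assert r < 1                      # dilute regime
    return math.exp(m0 * (r - math.log(r) - 1))

factors = [growth_factor(m0) for m0 in (5, 10, 20, 40)]
print([f"{f:.2e}" for f in factors])
```

Even for these modest overlaps the factor spans tens of orders of magnitude, which is the "rapid increase" described above.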
3D Case:
The situation in 3D differs only by the geometric factors. In this case
| $m = \frac{d_n}{d}\,d_n V_m = \frac{n^2}{N_n}\,\frac{V_m(\tilde\rho,\tilde\delta)}{\tfrac{4}{3}\pi R_n^3}$ | (49) |
Here $\tilde\rho$ and $\tilde\delta$ are again the appropriate dimensionless radius and pathway center distance. Solving (49) implicitly for $\tilde\delta$, after substituting m0 for m, gives us $D_{min}$. As in the 3D Dense Neighbourhood Pathway Model, the number of pathways that can be packed at this closest distance is
| $N_P = \frac{\sqrt{2}\,V}{D_{min}^3}$ | (50) |
The situation is again expected to change when $D_{min} = 0$. This happens, as in the 2D model, when m0 > n²/Nn. In this case
| $P_I \approx \frac{1}{\sqrt{2\pi m_0}}\;e^{-m_0\,g(r(D))}$ | (51) |
Again the bound for g(r(D)) is given by (44), as in the 2D case. There is a difference, however, in the bound on Pc for disordered pathways, because now
| $P(D)\,dD = \frac{4\pi D^2\,dD}{V}$ | (52) |
The new 3D bound is given by
| $P_c \le \frac{32\pi R_n^3}{3V}\,\frac{1}{\sqrt{2\pi m_0}}\;e^{-m_0\,g(r_{max})}$ | (53) |
where rmax is as in the 2D case.
This model, in both the 2D and 3D case, should really be thought of as a hybrid model of the Dense Neighborhood Pathway Model and the Random Selection Model, where good features are kept from each model, always at a cost. Locality is kept from the Dense Neighborhood Pathway Model, but it is less strict since there is random neuron selection within local neighborhoods. This allows for an increased number of pathways encoding distinct signals to be implemented, albeit not quite as many as in the Random Selection Model. This model exhibits two limiting conditions: 1) If the minimum distance between the pathway centers remains significantly greater than zero then the number of pathways increases linearly in N, similar to the Dense Neighborhood Pathway Model. However, 2) if the minimum distance allowed can approach zero, which means that pathway center locations may overlap without too many common neurons, then the number of pathways encoded increases exponentially in the maximum allowed overlap m0, though only linearly in the overall number of neurons N. In this limit, this model can still encode a vast number of distinct signals, while maintaining a local structure that does not require a regular pathway topography. Of course, in real animals, though neuronal pathways are likely to be intermixed they are unlikely to represent a realization of the idealized models we consider here. Realistic connectivity profiles are more likely to peak at close distances and then drop as a function of distance rather than be truly random within a localized domain of a certain radius. However, the effect of these differences is not expected to alter the basic exponential dependence described.
4. Effect of Simultaneous Activity of Many Pathways
Up to this stage we have not considered the simultaneous activation of multiple pathways; in real animals, however, many pathways are typically active at the same time. Hence it is possible that an inactive pathway P receives interference from more than one overlapping pathway. Here we estimate the effect that the parallel activation of multiple pathways has on the interference an inactive pathway receives. We find that the Dense Neighborhood Pathway Model, and the Locality Preserving Random Selection Pathway Model when pathway centers are forced to remain at a significant distance from each other (first case), receive little interference from multiple pathway activation unless a large fraction of existing pathways is simultaneously activated. In contrast, the Random Selection Model, and the Locality Preserving Random Selection Pathway Model when pathway centers can approach each other (second case), generally exhibit significant interference from the simultaneous activation of multiple pathways. The reason is that in these cases much larger overlaps between pathways are allowed (Section 3), so a small fraction of active pathways may be able to activate the majority of neurons in pathways that should otherwise be silent. Nevertheless, if this fraction is kept small, the number of pathways that can be activated in parallel without producing interference increases exponentially in the number of neurons that must be firing within a pathway to activate it.
Let us consider the effect of two coactive pathways that overlap with a given pathway P, in either the Dense Neighbourhood Pathway Model or the Locality Preserving Random Selection Pathway Model in the first case. If Pp ≪ 1 is the fraction of pathways that are active at a particular time, and the number OP of pathways overlapping with P is of order 1, then the probability that two overlapping pathways are active together is of order $O_P^2 P_p^2 \ll 1$. Since the probability that two pathways overlapping with P are coactive is small, the probability of interference on P by the coactivity of pairs of overlapping pathways is also expected to be small, assuming of course that there is no bias on which pathways are active at a particular time.
The situation is different when one considers the Random Selection Model or the Locality Preserving Random Selection Pathway Model in its second case. The reason is that a particular pathway P may overlap with a large number of pathways, hence the effect of multiple active pathways is not negligible. To estimate this effect, we will assume that a fraction f (significantly smaller than 1) of the N neurons available is active at a particular time due to the activation of multiple pathways. Since the construction of pathways is random, we can assume that these active neurons are uniformly distributed in the aggregate. Now the probability that the pathway P receives input from m aggregate firing neurons is
| $P(I_A = m) = \binom{n}{m}\,f^m\,(1-f)^{n-m}$ | (54) |
and the probability that m exceeds the pathway activation threshold, m ≥ K, is bounded by
| $P(I_A \ge K) \lesssim e^{-nf}\,\frac{(nf)^K}{K!} \approx \frac{1}{\sqrt{2\pi K}}\;e^{-K\,g(nf/K)}$ | (55) |
provided that the expected number of aggregate firing neurons within our pathway P is small, nf ≪ K. If we now demand that the expected number NpP(IA ≥ K) of pathways suffering interference is smaller than ϵ, then we have
| $N_p \le \frac{\epsilon}{P(I_A \ge K)} \approx \epsilon\,\sqrt{2\pi K}\;e^{K\,g(nf/K)}$ | (56) |
Hence, in this case the number of non-interfering pathways increases exponentially in the minimum number of active neurons necessary to activate the pathway P.
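The suppression of aggregate-activity interference can be illustrated by computing the binomial tail P(IA ≥ K) directly for hypothetical values of n and f, across several thresholds K with nf ≪ K:

```python
import math

# With a fraction f of all neurons active, a silent pathway of n neurons
# crosses its threshold K only if at least K of its own neurons happen to
# be active.  The values of n, f, K below are hypothetical (n*f = 1.6).
n, f = 80, 0.02

def binom_tail(n, p, K):
    """P(Binomial(n, p) >= K), computed in log space for stability."""
    def log_pmf(m):
        return (math.lgamma(n + 1) - math.lgamma(m + 1) - math.lgamma(n - m + 1)
                + m * math.log(p) + (n - m) * math.log(1 - p))
    return sum(math.exp(log_pmf(m)) for m in range(K, n + 1))

tails = {K: binom_tail(n, f, K) for K in (5, 10, 20)}
print(tails)
```

The tail drops by many orders of magnitude as K grows past nf, which is the exponential suppression that (55) expresses.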
It is important to note that the number of non-interfering pathways allowed can increase towards the prior limit (Section 3) if there is a mechanism to diminish the probability of activation in potentially interfering parallel pathways, when a particular pathway is active. It is possible that a mechanism to reduce parallel-pathway interfering activity may be provided by attention: for example, the neural pathway that encodes an attended object tends to be enhanced while pathways that encode unattended objects are relatively silenced.
5. Test case: Interneuron Pyramidal Partner Groups in mouse V1
We will try to test the implications of the above calculations with two-photon data collected from layer II/III of the V1 area of adult mice [21]. In [21], Palagina et al. raise the possibility that pyramidal neurons connected to certain types of interneurons may form distinct interconnected information pathways. We note that this is not a universally accepted assumption, as there is evidence that various types of interneurons also exert more global, non-specific control over their neighboring neuronal circuit. Nevertheless, we believe that there may be room for both hypotheses: for example, under certain conditions, global quenching of the network activity may be required, while under others it may be beneficial to have the capacity to specifically quench the activity of one pathway while sparing activity in nearby distinct pathways. The latter can be efficiently accomplished if there is a “local controller” of information pathways, and this role may be played by specific interneurons. Accordingly, there is evidence that pyramidal neurons functionally connected to a particular interneuron tend to have similar tuning properties [21], which is compatible with the possibility that they form a joint information-processing group [21],[22],[23].
Such groups of pyramidal neurons linked to local interneurons may represent a framework of information pathways. In such a cluster, the coordinated firing of functionally similar pyramidal cells may be controlled by the partner interneuron(s). Of course it is unlikely that there is an interneuron dedicated to every pathway, because this would demand too many interneurons. Indeed, many types of interneurons were shown to form synapses with a majority of pyramidal cells in the vicinity [24],[25],[26],[27],[28]. The coexistence of this blanket anatomical connectivity and feature-selective interneuron–pyramidal cell functional clusters likely results from the graded synaptic connectivity strength between an individual interneuron and the pyramidal cells that surround it [26],[29],[30],[31], and also from the graded synaptic connectivity strength between pyramidal cells themselves and from the specificity of their connections [32],[33],[34],[35],[36],[37]. Thus, during operation under natural viewing conditions, when the activity of V1 neurons is sparse [11],[39],[12],[38], a specific feature likely recruits only the subset of pyramidal cells and interneurons with the strongest reciprocal connections. Such subsets may represent largely separated feature-selective information pathways with low inter-pathway interference.
Here we proceed under the assumption that the interneuron-pyramidal-partner cluster pathways identified by Palagina et al. [21] (see also [22]) constitute basic information processing pathways, and we apply the strategy outlined above to check whether such pathways have internal consistency with regard to their information encoding properties. As the groups identified have limited spatial extent, we will use the locality preserving random selection pathway model after extrapolating the data, which were acquired in-plane, to 3 dimensions. To perform the extrapolation we use the fact that the neuronal diameter is about 10 microns, so in-plane two-photon recordings typically sample neurons over a thickness of dt ≈ 20 microns. The cut-off radius for extrapolation was taken to be Rn ≈ 250 microns, the distance over which functional connectivity of pyramidal neurons to the “parent” interneuron decays [21]. We assume constant pyramidal neuron density within Rn. Although approximate, this should capture the basic structure of the pathways described in [42],[40],[41]. Note that we do not in any way imply that the pathways have a strictly spherical shape. The actual pathways have significant directional properties and their extent is in any case limited by the border of the mouse cortical layer (layer 2/3).
The overlap between different pathways is estimated as follows. First, two interneurons i, j are selected. With the interneuron positions as pathway centers, the area Aij of the intersection of the two circles of radius Rn that lies within our recording area is computed, and the pathway-intersection neurons within Aij are counted. Table 1 lists these overlaps from an experiment whose field of view contained 5 interneurons. Each count is associated with the cortical volume Vij = Aij × dt, and can be extrapolated to the overlap volume of the two spheres of radius Rn by proportionality (results listed in Table 1). The diagonal elements of Table 1 correspond to the number of pyramidal neurons that constitute the information processing pathway associated with each interneuron, and the last column (Nn) is the total number of pyramidal neurons contained within the sphere of radius Rn, whether they belong to the corresponding pathway or not.
Table 1: Interneuron Pathway Overlap Data.
Real overlap data is the interneuron pathway overlap data observed, while the projected overlap data is the projected overlap data of spherical interneuron pathways of radius Rn. The diagonal elements are the (real or projected) pathway neurons. Nn is the projected number of pyramidal neurons within the radius Rn, no matter whether they belong to a pathway or not.
| 1 | 2 | 3 | 4 | 5 | 1 | 2 | 3 | 4 | 5 | Nn |
|---|---|---|---|---|---|---|---|---|---|---|
| 73 | 32 | 21 | 42 | 13 | 1972 | 642 | 267 | 468 | 178 | 3694 |
| 32 | 72 | 42 | 52 | 31 | 642 | 2014 | 657 | 756 | 365 | 3842 |
| 21 | 42 | 50 | 44 | 27 | 267 | 657 | 1653 | 1359 | 506 | 3202 |
| 42 | 52 | 44 | 81 | 34 | 468 | 756 | 1359 | 2274 | 714 | 3282 |
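The slab-to-sphere extrapolation step can be sketched as follows. The center distance Dij and the in-plane count below are hypothetical stand-ins, since the actual per-pair geometry is not reproduced here, while Rn = 250 and dt = 20 microns follow the text:

```python
import math

# A count made in a slab of thickness dt over the in-plane intersection
# area Aij is scaled, by volume proportionality, to the overlap (lens)
# volume of two spheres of radius Rn at center distance Dij.
Rn, dt = 250.0, 20.0              # microns, as in the text
Dij = 300.0                       # hypothetical distance between centers
real_count = 32                   # hypothetical in-plane overlap count

# In-plane intersection area of two equal circles (standard lens formula).
Aij = (2 * Rn**2 * math.acos(Dij / (2 * Rn))
       - (Dij / 2) * math.sqrt(4 * Rn**2 - Dij**2))
V_slab = Aij * dt
# Overlap volume of two equal spheres of radius Rn at distance Dij.
V_lens = math.pi * (4 * Rn + Dij) * (2 * Rn - Dij) ** 2 / 12.0

projected = real_count * V_lens / V_slab
print(f"{projected:.0f} projected overlap neurons")
```

For this geometry the volume ratio is roughly an order of magnitude, comparable in spirit to the real-to-projected inflation visible in Table 1.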
We then ask whether the extrapolated intersection numbers Nij (Table 1) are compatible with the definite-response non-interference condition computed in equation (19). In order for pathways not to interfere with each other at confidence limits ϵ = 0.01, ϵ = 0.05 and ϵ = 0.1, the discriminant (D) given by
| (57) |
should be D > 2.33, D > 1.65 or D > 1.29, respectively. For orientation tuning, p stands for the probability that a neuron responds at its preferred orientation, while p0 is the probability of response at the resolution angle away from the preferred orientation. Since typical values of both p and p0 vary considerably depending on the tuning of particular neurons, we fix only the probability resolution to a typical value, dp = p − p0 = 0.1, and we take a near worst-case scenario by assuming p = 0.5, p0 = 0.4. This maximizes the potential for interference of two overlapping pathways (neuron number n, overlap neurons m), tuned a resolution angle apart, each responding to its own orientation tuning with probability p and to the orientation tuning of the other with probability p0 (dp = 0.1; see fig.2).
Fig.5 plots the values of D for all pairs of interneuron pathways obtained from 4 different animals. Horizontal lines represent confidence limit thresholds. Note that all but one pathway pair (x-axis) satisfy the non-interference definite-response condition at the ϵ = 0.05 confidence limit. This result is encouraging. Naturally, much more work is needed to test empirically how the predictions of the model fit experimental data in different information encoding situations. In the future, we plan to check this for different types of information processing pathways in different V1 cortical layers.
Figure 5:

The discriminant D is plotted for every pathway (interneuron) pair. The value of D should be above the cutoff line for every pair of pathways to avoid interference. The three cutoff lines correspond to the confidence limits ϵ = 0.01, ϵ = 0.05 and ϵ = 0.1. The top left panel corresponds to the table dataset. Interneuron pair (3,4) appears to have interference. Note that these two interneurons are also very close together. The other three panels correspond to three more adult mice.
6. Conclusion
We examined the properties of neuronal information encoding pathways under a series of simplifying assumptions. Individual neurons were assumed to respond stochastically to input signals and definite encoding of a particular signal was assumed to result from the aggregate near synchronous activity of a set of neurons, which form an information encoding pathway. We then derived conditions for the response of this pathway to be (i) definite, and (ii) not to interfere with the spontaneous firing of neighboring distinct pathways.
The following conclusions can be drawn: (a) Information capacity of neuron “information pathway” ensembles is severely limited if pathways are dense. Hence, it is not favored to have pathways that include all neurons within a region around the pathway center. The reason for this is that such pathways cannot come close together without causing interference. Maximization of the information capacity in this case forces the pathways to maintain a lattice geometric structure (ordered phase), which is not observed in mouse V1. In contrast, intermixed pathways allow random spatial placement of pathway centers (disordered phase), achieving much larger information capacity. (b) For pathways in the ordered phase, and assuming that only pairwise overlaps cause confusion, the number of pathways that fit in an N-neuron ensemble increases linearly in N. In the random selection model without distance restriction, this linear behavior changes to a power-law behavior governed by m0, the maximum allowed overlap between two pathways. However, this model is not realistic since its pathways are formed by random selection of neurons from the whole neuron assembly. A more realistic hybrid model that maintains locality while allowing “dilute” pathways is the locality preserving random selection pathway model with a sharp cutoff. For this model, once the density of neurons engaged in the formation of pathways is low enough to allow complete overlap of the pathways (in the geometric sense), the pathway organization enters the disordered phase. In this phase, the number of pathways increases linearly in N but exponentially in m0, allowing a large number of non-interfering pathways to coexist. Here m0 is expected to be a significant fraction of the number of pathway neurons.
(c) A similar behavior is expected to occur for a more realistic model that does not exhibit a sharp cutoff or a completely random connectivity distribution, but has the probability that a neuron belongs to a pathway drop with distance from the pathway center (see Appendix), assuming this probability is small enough to ensure pathways are “dilute”. This occurs because for sufficiently “dilute” pathways it is possible to bring two pathway centers on top of each other without exceeding the allowed overlap m0, leading to an exponential increase in the number of pathways. The exponential increase in the number of non-interfering pathways arises from counting the geometrically overlapping pathways that are “dilute” enough not to interfere with each other. (d) We used our model to study a particular type of postulated information processing pathway, consisting of pyramidal neurons functionally connected to particular “partner” interneurons in layer 2/3 of area V1 of adult mice [21]. The pathways identified were found to satisfy the constraints imposed by our models. However, the analysis we propose here is more general and should apply to differently identified neuronal ensembles that constitute information encoding pathways. (e) Finally, we did not attempt to model in a realistic way the molecular, synaptic, cellular and multi-cellular biological components which store, address or transmit information. The advantage of this is that it allowed us to extract from first principles basic constraints that are generally applicable for transmitting information without interference in neural network systems, under the simplifying assumptions we state above. As such, the work applies to multiple different neural systems and organisms that may have very different biological implementations. Studying specific biological implementations of the above principles is an important direction for future experimental and modelling work.
The most important simplifications we made in drawing our conclusions are the following: (a) Definite information is carried by pathways that are either collectively active or inactive, based on whether a signal is present or not (“bimodal pathways”). Such bimodal pathways seem to be well suited for object and discrete signal recognition, such as parallel (pop-up) object recognition. Similar pathways have been theorized to contribute to neural computations [42],[41] and bear resemblance to ensembles of neurons that encode odors in the olfactory system of the fruit fly, particularly in the Kenyon cell group [43],[44],[45],[40],[41]. Information from the olfactory receptors converges to projection neurons and is then vastly sparsified by projecting to approximately 2500 Kenyon cells through random connections [44],[43]. Simulations carried out in [45] indicate that the formation of sparse probabilistic information pathways (termed odor hashes) allows odor similarity to be maintained efficiently, according to the overlap of the sets of pathway neurons encoding the distinct odors. This suggests the formation of sparse pathways that are not only non-interfering, since they encode distinct odors, but that also maintain an information-carrying overlap structure. (b) We took all pathways to have equal size and neurons to be randomly placed within the physical region where the pathway lies. Size equality is similar to the ‘Equal Citizen’ principle proposed by Valiant [42]. We do not expect these assumptions to lead to qualitative differences in the conclusions we draw. (c) The response properties of individual neurons are stochastic and were taken to have similar probability for all neurons in a given pathway. It is evident that in real mice not all neurons have the same response properties; however, this simplifying assumption still captures the essential qualitative features that emerge at our level of analysis.
(d) Another implicit assumption is that the neuron or neurons that “read” the output of an “information pathway” have the ability to adjust their threshold, or the synaptic inputs they receive from the pathway, so as to achieve maximal discriminability. This amounts to saying that if the pathway can distinguish between two signals, then the read-out neuron can also read this distinction. This implies a sigmoid response function of adjustable slope and seems physiologically plausible.
Note that, although the information capacity increases drastically in the disordered phase, the ability to manipulate and address this information can be problematic. For example, let us assume that there is direct control of pathways by interneurons, as argued in [21]. The number of interneurons is much smaller (at best on the order of 15%) than the number of neurons N, hence this would likely limit the information capacity too severely unless combinations of interneurons are used to address pathways. However, one pyramidal neuron is functionally connected with only a small number of interneurons. Nevertheless, in some situations it would be advantageous to exert direct control over the pathways in a way in which the organism can turn one pathway on or off selectively, without interfering with distinct neighboring pathways that encode other signals. Hence there must be a mechanism to address individual information pathways for information processing and memory retrieval. In this work, we do not attempt to answer the important question of pathway addressing. It might well be that interneuron control is not exerted over single pathways, but rather over specific “root” pathways that may control the thread of information processing across multiple more elementary pathways.
The way information pathways are packed together can teach us about the structure of the firing patterns of neurons that encode and transmit information. Experiments that concentrate on extracting information from multi-neuronal population activity are expected to reveal the structure of the information pathways that operate in neocortical areas and whether their properties conform with our predictions, as well as with the predictions of other groups (e.g., [42], [40], [41],[45]) that study associations and memory. Further experimental and theoretical work on the nature of information pathways and how they get modified by learning has a lot to teach us about how computations are performed in the brain.
Supplementary Material
Acknowledgements
This work was supported in part by the Simons Foundation Research Award #402047 and the NEI RO1 award EY024019 to SS.
This work was also funded by the Hellenic Foundation for Research & Innovation (HFRI) and General Secretariat for Research & Technology (GSRT), Project # 2285 (neuronXnet).
We would like to thank Thomais Asvestopoulou for her editorial review.
7. Appendix: The Case where the Pathway Inclusion Probability Depends on Distance from Pathway Center
7.1. 2D Case
Let us now suppose that there is a probability P (|r − rA|) for a neuron at position r to belong to the pathway A, whose center is located at rA. Each neuron is assumed to have uniform probability to be anywhere in the area considered. Since in this model the number of neurons in a pathway can vary, we will normalize the number of such neurons by the expectation value
| $n = \frac{N}{\mathrm{Area}}\int P(|\mathbf{r}-\mathbf{r}_A|)\,d^2r$ | (58) |
This also determines the expected distance of the neurons in a pathway from its center, $\langle |\mathbf{r}-\mathbf{r}_A| \rangle = \frac{N}{n\,\mathrm{Area}}\int |\mathbf{r}-\mathbf{r}_A|\,P(|\mathbf{r}-\mathbf{r}_A|)\,d^2r$. The probability that a neuron belongs to both pathways A and B is given by
| $p_b = \frac{1}{\mathrm{Area}}\int P(|\mathbf{r}-\mathbf{r}_A|)\,P(|\mathbf{r}-\mathbf{r}_B|)\,d^2r$ | (59) |
This probability is naturally expressed in elliptical coordinates since it involves distance from two poles. If we set the distance of the two pathway centers to be |rA − rB| = 2a then equation (59) becomes
| (60) |
The probability of m neuron overlap is given by
| $P(m) = \binom{N}{m}\,p_b^m\,q_b^{N-m}$ | (61) |
where qb = 1 − pb.
The probability of interference of two pathways is given by
| $P_I = \sum_{m=m_0}^{N}\binom{N}{m}\,p_b^m\,q_b^{N-m} \approx \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{m_0 - Np_b}{\sqrt{2Np_b q_b}}\right)$ | (62) |
where m0 is determined by the probability that a neuron fires given the pathway signal and the probability that it fires spontaneously, for the two pathways. For the last equality we have used the normal approximation to the binomial distribution.
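As a worked example of (58)–(59), consider an assumed Gaussian inclusion profile P(r) = P0 exp(−r²/2σ²); this specific profile is our choice for illustration, not taken from the text. The overlap probability pb then has a closed form that can be checked by direct quadrature:

```python
import math

# Gaussian profile P(r) = P0 * exp(-r^2 / (2 sigma^2)) centered on each
# pathway; centers at (-a, 0) and (+a, 0).  All parameters hypothetical.
P0, sigma, area = 0.3, 100.0, 1.0e6      # microns-based units
a = 75.0                                  # half-distance between centers

# Closed form: |r-rA|^2 + |r-rB|^2 = 2|r-rc|^2 + 2a^2 (rc = midpoint), so
# the product of the two Gaussians integrates to pi*sigma^2*exp(-a^2/sigma^2).
pb_exact = P0**2 * math.pi * sigma**2 * math.exp(-(a / sigma) ** 2) / area

# Direct 2D midpoint-rule quadrature on a grid around the midpoint.
h, R = 2.0, 800.0                         # grid step and half-extent
steps = int(2 * R / h)
s = 0.0
for i in range(steps):
    x = -R + (i + 0.5) * h
    for j in range(steps):
        y = -R + (j + 0.5) * h
        rA2 = (x + a) ** 2 + y ** 2
        rB2 = (x - a) ** 2 + y ** 2
        s += math.exp(-(rA2 + rB2) / (2 * sigma**2))
pb_num = P0**2 * s * h * h / area

print(f"{pb_exact:.3e} vs {pb_num:.3e}")
```

The overlap probability falls off as a Gaussian in the center separation 2a, so "dilute" profiles (small P0 or small σ relative to the patch) give small pb even for fully overlapping centers.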
7.2. 3D Case
In the 3D case the pathways are assumed to have spherical shape, with a neuron density that varies with distance from the center. Since the probability that a neuron is located in d3r is d3r/V, the probability that a given neuron belongs to pathway A is $\frac{1}{V}\int P(|\mathbf{r}-\mathbf{r}_A|)\,d^3r$. This leads to the neuron number expectation value normalization
| $n = \frac{N}{V}\int P(|\mathbf{r}-\mathbf{r}_A|)\,d^3r$ | (63) |
As in the 2D case, the probability for a neuron to belong to both pathways A, located at rA, and pathway B located at rB is given by
| $p_b = \frac{1}{V}\int P(|\mathbf{r}-\mathbf{r}_A|)\,P(|\mathbf{r}-\mathbf{r}_B|)\,d^3r$ | (64) |
This overlap probability simplifies again if we use elliptical coordinates on a plane through the two pathway centers and then rotate about the axis joining the two pathway centers. In this way we get
| (65) |
where y in the above formula stands for the Cartesian y coordinate, and w1 = cosh u while w2 = cos v. Again, the probability of interference of two pathways is given in terms of this 3D overlap probability pb by equation (62).
As in the locality preserving random selection pathway model, we expect two phases when maximizing the pathway number. One is the ordered phase, where pathway centers are not allowed to overlap, since in this phase two overlapping pathways share enough neurons to cause interference in their operation. This phase occurs when the pathways are “dense”. The other phase is the disordered phase. In this case, even when the pathway centers overlap, the number of common neurons in the two pathways is small and not sufficient to cause interference. A lattice structure is then difficult to maintain, since two pathways can, on their own, come as close as necessary. In this phase, pathways can be randomly placed on the plane or in 3D space up to a certain density of pathways that makes interference likely. This second phase occurs when pathways are “dilute” in the sense that, within a neighbourhood of the pathway center, only a small fraction of the neurons belongs to the pathway. As has become clear from the locality preserving random selection pathway model, in this phase the information capacity is very large, increasing exponentially in the maximum allowed overlap m0.
References
- [1]. Tolhurst DJ, Movshon JA, Dean AF. The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Res. 1983; 23:775–785.
- [2]. Shadlen MN, Newsome WT. The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J. Neurosci. 1998; 18:3870–3896.
- [3]. Averbeck BB, Latham PE, Pouget A. Neural correlations, population coding and computation. Nat. Rev. Neurosci. 2006.
- [4]. Pruszynski JA, Zylberberg J. The language of the brain: real-world neural population codes. Curr. Opin. Neurobiol. 2019.
- [5]. Quiroga RQ, Reddy L, Kreiman G, Koch C, Fried I. Invariant visual representation by single neurons in the human brain. Nature 2005.
- [6]. Perrett DI, Rolls ET, Caan W. Visual neurons responsive to faces in the monkey temporal cortex. Exp. Brain Res. 1982; 47(3):329–342.
- [7]. Miller JK, Ayzenshtat I, Carrillo-Reid L, Yuste R. Visual stimuli recruit intrinsically generated cortical ensembles. PNAS 2014; 111(38):E4053–E4061.
- [8]. Carrillo-Reid L, Miller JK, Hamm JP, Jackson J, Yuste R. Endogenous sequential cortical activity evoked by visual stimuli. J. Neurosci. 2015; 35(23):8813–8828.
- [9]. Ohki K, Chung S, Ch’ng YH, Kara P, Reid RC. Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex. Nature 2005; 433:597–603.
- [10]. Dräger UC. Receptive fields of single cells and topography in mouse visual cortex. J. Comp. Neurol. 1975.
- [11]. Olshausen BA, Field DJ. Sparse coding of sensory inputs. Curr. Opin. Neurobiol. 2004.
- [12]. Vinje WE, Gallant JL. Sparse coding and decorrelation in primary visual cortex during natural vision. Science 2000.
- [13]. Cohen MR, Kohn A. Measuring and interpreting neuronal correlations. Nat. Neurosci. 2011; 14(7):811–819.
- [14]. Ecker AS, Berens P, Cotton RJ, Subramaniyan M, Denfield GH, Cadwell CR, Smirnakis SM, Bethge M, Tolias AS. State dependence of noise correlations in macaque primary visual cortex. Neuron 2014; 82:235–248.
- [15]. Kanitscheider I, Coen-Cagli R, Pouget A. Origin of information-limiting noise correlations. PNAS 2015; 112(50):E6973–E6982.
- [16]. Zohary E, Shadlen MN, Newsome WT. Correlated neuronal discharge rate and its implications for psychophysical performance. Nature 1994; 370(6485):140–143.
- [17]. Abbott LF, Dayan P. The effect of correlated variability on the accuracy of a population code. Neural Comput. 1999; 11(1):91–101.
- [18]. Andermann ML, Kerlin AM, Reid RC. Chronic cellular imaging of mouse visual cortex during operant behavior and passive viewing. Front. Cell. Neurosci. 2010; 4:3.
- [19]. Seriès P, Latham PE, Pouget A. Tuning curve sharpening for orientation selectivity: coding efficiency and the impact of correlations. Nat. Neurosci. 2004; 7(10):1129–1135.
- [20]. Golshani P, Goncalves JT, Khoshkhoo S, Mostany R, Smirnakis S, Portera-Cailliau C. Internally mediated developmental desynchronization of neocortical network activity. J. Neurosci. 2009; 29(35):10890–10899.
- [21]. Palagina G, Meyer JF, Smirnakis SM. Inhibitory units: an organizing nidus for feature-selective subnetworks in area V1. J. Neurosci. 2019; 39(25):4931–4944.
- [22]. Bonifazi P, Goldin M, Picardo MA, Jorquera I, Cattani A, Bianconi G, Represa A, Ben-Ari Y, Cossart R. GABAergic hub neurons orchestrate synchrony in developing hippocampal networks. Science 2009; 326(5958):1419–1424.
- [23]. Roux L, Buzsáki G. Tasks for inhibitory interneurons in intact brain circuits. Neuropharmacology 2015; 88:10–23.
- [24]. Fino E, Packer AM, Yuste R. The logic of inhibitory connectivity in the neocortex. The Neuroscientist 2012; 19(3):228–237.
- [25].Hofer Sonja B, Ko Ho, Pichler Bruno, Vogelstein Joshua, Ros Hana, Zeng Hongkui, Lein Ed, Lesica Nicholas A, Mrsic-Flogel Thomas D Differential connectivity and response dynamics of excitatory and inhibitory neurons in visual cortex Nat Neurosci 14, 1045–1052 (2011) [DOI] [PMC free article] [PubMed] [Google Scholar]
- [26].Znamenskiy Petr, Kim Mean-Hwan, Muir Dylan R., Iacaruso Maria Florencia, Hofer Sonja B., Thomas D. Mrsic-Flogel Functional selectivity and specific connectivity of inhibitory neurons in primary visual cortex Advance online publication, doi: 10.1101/294835, (2018) [DOI] [Google Scholar]
- [27].Packer AM and Yuste R Dense, unspecific connectivity of neocortical parvalbumin-positive interneurons: a canonical microcircuit for inhibition? Journal of Neuroscience, 31 (37) 13260–13271, (2011) [DOI] [PMC free article] [PubMed] [Google Scholar]
- [28].Oláh S, Füle M, Komlósi G, et al. Regulation of cortical microcircuits by unitary gaba-mediated volume transmission Nature. 2009;461(7268):1278–1281. doi: 10.1038/nature08503, (2009) [DOI] [PMC free article] [PubMed] [Google Scholar]
- [29].Yoshimura Y and Callaway E. Fine-scale specificity of cortical networks depends on inhibitory cell type and connectivity Nat Neurosci.8(11):1552–9, (2005) [DOI] [PubMed] [Google Scholar]
- [30].Safari M, Mirnajafi-Zadeh J, Hioki H et al. Parvalbumin-expressing interneurons can act solo while somatostatinexpressing interneurons act in chorus in most cases on cortical pyramidal cells Sci Rep 7, 12764 (2017) [DOI] [PMC free article] [PubMed] [Google Scholar]
- [31].Kwan AC, Dan Y Dissection of cortical microcircuits by single-neuron stimulation in vivo Curr Biol. 2012August21;22(16):1459–67, (2012) [DOI] [PMC free article] [PubMed] [Google Scholar]
- [32].Yoshimura Y, Dantzker J and Callaway E Excitatory cortical neurons form fine-scale functional networks Nature 433, 868–873 (2005) [DOI] [PubMed] [Google Scholar]
- [33].Wertz Adrian, Trenholm Stuart, Yonehara Keisuke, Hillier Daniel, Raics Zoltan, Leinweber Marcus, Szalay Gergely, Ghanem Alexander, Keller Georg, Rózsa Balázs, Conzelmann Karl-Klaus, Roska Botond Single-cell initiated monosynaptic tracing reveals layer-specific cortical network modules Science, Vol. 349, Issue 6243, pp. 70–74 (2015) [DOI] [PubMed] [Google Scholar]
- [34].Song S, Sjöström PJ, Reigl M, Nelson S, Chklovskii DB Highly nonrandom features of synaptic connectivity in local cortical circuits PLOS Biology 3(10): e350 (2005) [DOI] [PMC free article] [PubMed] [Google Scholar]
- [35].Lee W, Bonin V, Reed M et al. Anatomy and function of an excitatory network in the visual cortex Nature 532, 370–374 (2016) [DOI] [PMC free article] [PubMed] [Google Scholar]
- [36].Ko H, Hofer S, Pichler B et al. Functional specificity of local synaptic connections in neocortical networks Nature 473, 87–91 (2011) [DOI] [PMC free article] [PubMed] [Google Scholar]
- [37].Ko H, Cossell L, Baragli C et al. The emergence of functional microcircuits in visual cortex Nature 496, 96–100 (2013) [DOI] [PMC free article] [PubMed] [Google Scholar]
- [38].Buzsaki Gyorgy Neural Syntax: Cell Assemblies,Synapsembles, and Readers Neuron 68, 362–385, Issue 3, November4, 2010 [DOI] [PMC free article] [PubMed] [Google Scholar]
- [39].Rolls ET and Tovee MJ Sparseness of the neuronal representation of stimuli in the primate temporal visual cortex J. Neurophysiol 73, 713–726 (1995) [DOI] [PubMed] [Google Scholar]
- [40].Papadimitriou Christos H., Vempala Santosh S. Random Projection in the Brain and Computation with Assemblies of Neurons10th Innovations in Theoretical Computer Science Conference, ITCS 2019, January 10–12, 2019, San Diego, California, USA, pages 57:1–57:19, (2019) [Google Scholar]
- [41].Papadimitriou Christos H., Vempala Santosh S., Mitropolsky Daniel, Collins Michael, Maass Wolfgang Brain computation by assemblies of neurons doi: 10.1101/869156, (2020) [DOI] [PMC free article] [PubMed] [Google Scholar]
- [42].Valiant LGl. Memorization and association on a realistic neural model Neural Comput. 17(3):527–55 (2005) [DOI] [PubMed] [Google Scholar]
- [43].Yang Guangyu Robert, Wang Peter Yiliu, Sun Yi, Litwin-Kumar Ashok, Axel Richard, Abbott LF, Evolving the Olfactory System 2019 Conference on Cognitive Computational Neuroscience, (2019) [Google Scholar]
- [44].Vosshall LB, Wong AM, Axel R An olfactory sensory map in the fly brain Cell, 102(2), 147–159, (2000) [DOI] [PubMed] [Google Scholar]
- [45].Dasgupta Sanjoy, Stevens Charles F., Navlakha Saket A neural algorithm for a fundamental computing problem Science 10 Vol. 358, Issue 6364, pp. 793–796, (2017) [DOI] [PubMed] [Google Scholar]