PLOS Computational Biology. 2014 Feb 27;10(2):e1003469. doi: 10.1371/journal.pcbi.1003469

The Sign Rule and Beyond: Boundary Effects, Flexibility, and Noise Correlations in Neural Population Codes

Yu Hu 1,*, Joel Zylberberg 1, Eric Shea-Brown 1,2,3
Editor: Jonathan W. Pillow
PMCID: PMC3937411  PMID: 24586128

Abstract

Over repeat presentations of the same stimulus, sensory neurons show variable responses. This “noise” is typically correlated between pairs of cells, and a question with a rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem – neural tuning curves, etc. – held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR) — if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. This generality is true for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all.

Author Summary

Sensory systems communicate information to the brain — and brain areas communicate between themselves — via the electrical activities of their respective neurons. These activities are “noisy”: repeat presentations of the same stimulus do not yield identical responses every time. Furthermore, the neurons' responses are not independent: the variability in their responses is typically correlated from cell to cell. How does this change the impact of the noise — for better or for worse? Our goal here is to classify (broadly) the sorts of noise correlations that are either good or bad for enabling populations of neurons to transmit information. This is helpful as there are many possibilities for the noise correlations, and the set of possibilities becomes large for even modestly sized neural populations. We prove mathematically that, for larger populations, there are many highly diverse ways that favorable correlations can occur. These often differ from the noise correlation structures that are typically identified as beneficial for information transmission – those that follow the so-called “sign rule.” Our results help in interpreting some recent data that seem puzzling from the perspective of this rule.

Introduction

Neural populations typically show correlated variability over repeated presentations of the same stimulus [1]–[4]. These are called noise correlations, to differentiate them from correlations that arise when neurons respond to similar features of a stimulus. Such signal correlations are measured by observing how pairs of mean (averaged over trials) neural responses co-vary as the stimulus is changed [3], [5].

How do noise correlations affect the population's ability to encode information? This question is well-studied [3], [5]–[16], and prior work indicates that the presence of noise correlations can either improve stimulus coding, diminish it, or have little effect (Fig. 1). Which case occurs depends richly on details of the signal and noise correlations, as well as the specific assumptions made. For example, [8], [9], [14] show that a classical picture — wherein positive noise correlations prevent information from increasing linearly with population size — does not generalize to heterogeneously tuned populations. Similar results were obtained by [17], and these examples emphasize the need for general insights.

Figure 1. Different structures of correlated trial-to-trial variability lead to different coding accuracies in a neural population.


(Modified and extended from [5].) We illustrate the underlying issues via a three neuron population, encoding two possible stimulus values (yellow and blue). Neurons' mean responses are different for each stimulus, representing their tuning. Trial-to-trial variability (noise) around these means is represented by the ellips(oid)s, which show 95% confidence levels. This noise has two aspects: for each individual neuron, its trial-to-trial variance; and at the population level, the noise correlations between pairs of neurons. We fix the former (as well as mean stimulus tuning), and ask how noise correlations impact stimulus coding. Different choices (A–D) of noise correlations affect the orientation and shape of response distributions in different ways, yielding different levels of overlap between the full (3D) distributions for each stimulus. The smaller the overlap, the more discriminable are the stimuli and the higher the coding performance. We also show the 2D projections of these distributions, to facilitate the comparison with the geometrical intuition of [5]. First, Row A is the reference case where neurons' noise is independent: zero noise correlations. Row B illustrates how noise correlations can increase overlap and worsen coding performance. Row C demonstrates the opposite case, where noise correlations are chosen consistently with the sign rule (SR) and information is enhanced compared to the independent noise case. Intriguingly, Row D demonstrates that there are more favorable possibilities for noise correlations: here, these violate the SR, yet improve coding performance vs. the independent case. Detailed parameter values are listed in Methods Section “Details for numerical examples and simulations”.

Thus, we study a more general mathematical model, and investigate how coding performance changes as the noise correlations are varied. Figure 1, modified and extended from [5], illustrates this process. In this figure, the only aspect of the population responses that differs from case to case is the noise correlations, resulting in differently shaped distributions. These different noise structures lead to different levels of stimulus discriminability, and hence coding performance. The different cases illustrate our approach: given any set of tuning curves and noise variances, we study how encoded stimulus information varies with respect to the set of all pairwise noise correlations.
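To make this comparison concrete, the sketch below (our own toy numbers, not the parameters used in Fig. 1) holds the mean responses and variances of a three-neuron population fixed and evaluates a standard linear discriminability measure, d'^2 = Δμᵀ C⁻¹ Δμ, for several choices of noise correlations. With these made-up values, the sign-rule-consistent choice improves discriminability severalfold relative to independent noise, while the opposite choice reduces it, mirroring rows B and C of the figure.

```python
import numpy as np

mu_A = np.array([1.0, 2.0, 1.5])        # mean responses to stimulus A (arbitrary)
mu_B = np.array([1.8, 1.4, 2.1])        # mean responses to stimulus B (arbitrary)
dmu = mu_B - mu_A

def corr_to_cov(rho12, rho13, rho23, variances=(1.0, 1.0, 1.0)):
    """Build a covariance matrix from fixed variances and pairwise correlations."""
    R = np.array([[1.0,   rho12, rho13],
                  [rho12, 1.0,   rho23],
                  [rho13, rho23, 1.0]])
    sd = np.sqrt(np.array(variances))
    return R * np.outer(sd, sd)

cases = {
    "independent noise":       corr_to_cov(0.0,  0.0,  0.0),
    "correlations that hurt":  corr_to_cov(-0.4, 0.4, -0.4),
    "correlations that help":  corr_to_cov(0.4, -0.4,  0.4),   # signs opposite to the signal correlations
}
for name, C in cases.items():
    d2 = dmu @ np.linalg.solve(C, dmu)   # linear discriminability of the two stimuli
    print(f"{name:>24s}: d'^2 = {d2:.2f}")
```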

Compared to previous work in this area, there are two key differences that make our analysis novel: we make no particular assumptions on the structure of the tuning curves; and we do not restrict ourselves to any particular correlation structure such as the “limited-range” correlations often used in prior work [5], [7], [8]. Our results still apply to the previously-studied cases, but also hold much more generally. This approach leads us to derive mathematical theorems relating encoded stimulus information to the set of pairwise noise correlations. We prove the same theorems for several common measures of coding performance: the linear Fisher information, the precision of the optimal linear estimator (OLE [18]), and the mutual information between Gaussian stimuli and responses.

First, we prove that coding performance is always enhanced – relative to the case of independent noise – when the noise and signal correlations have opposite signs for all cell pairs (see Fig. 1). This “sign rule” (SR) generalizes prior work. Importantly, the converse is not true: noise correlations that perfectly violate the SR – and thus have the same signs as the signal correlations – can yield better coding performance than does independent noise. Thus, as previously observed [8], [9], [14], the SR does not provide a necessary condition for correlations to enhance coding performance.

Since experimentally observed noise correlations often have the same signs as the signal correlations [3], [6], [19], new theoretical insights are needed. To that effect, we develop a new organizing principle: optimal coding will always be obtained on the boundary of the set of allowed correlation coefficients. As we discuss, this boundary can be defined in flexible ways that incorporate constraints from statistics or biological mechanisms.

Finally, we identify conditions under which appropriately chosen noise correlations can yield coding performance as good as would be obtained with deterministic neural responses. For large populations, these conditions are satisfied with high probability, and the set of such correlation matrices is very high-dimensional. Many of them also strongly violate the SR.

Results

The layout of our Results section is as follows. We will begin by describing our setup, and the quantities we will be computing, in Section “Problem setup”.

In Section “The sign rule revisited”, we will then discuss our generalized version of the “sign rule”, Theorem 1: noise correlations that have the opposite sign to the corresponding signal correlations, for each pair of neurons, will always improve encoded information compared with the independent case. Next, in Section “Optimal correlations lie on boundaries”, we use the fact that all of our information quantities are convex functions of the noise correlation coefficients to conclude that the optimal noise correlation structure must lie on the boundary of the allowed set of correlation matrices, Theorem 2.

In Section “Heterogeneously tuned neural populations”, a numerical example of a heterogeneously tuned neural population, we will further observe that there is typically a large set of correlation matrices that all yield optimal (or near-optimal) coding performance.

We prove that these observations are general in Section “Noise cancellation” by studying the noise canceling correlations (those that yield the same high coding fidelity as would be obtained in the absence of noise). We will provide a set of necessary and sufficient conditions for correlations to be “noise canceling”, Theorem 3, and for a system to allow for these noise canceling correlations, Theorem 4. Finally, we will prove a result that suggests that, in large neural populations with randomly chosen stimulus response characteristics, these conditions are likely to be satisfied, Theorem 5.

A summary of the most frequently used notation is given in Table 1.

Table 1. Notations.

Inline graphic: stimulus
Inline graphic: response of neuron Inline graphic
Inline graphic: mean response of neuron Inline graphic
Inline graphic: derivative with respect to Inline graphic, Eq. (6)
Inline graphic: covariance between Inline graphic and Inline graphic, Eq. (11)
Inline graphic: noise covariance matrix (averaged or conditional, Section “Summary of the problem set-up”)
Inline graphic: covariance of the mean response, Eq. (10)
Inline graphic, Inline graphic: (a matrix is) positive definite, positive semidefinite
Inline graphic: total covariance, Eq. (10)
Inline graphic: optimal readout vector of the OLE, Eq. (9)
Inline graphic: noise correlations, Eq. (15)
Inline graphic: signal correlations, Eq. (16)
Inline graphic: linear Fisher information, Eq. (5)
Inline graphic: OLE information (accuracy of the OLE), Eq. (12)
Inline graphic: mutual information for Gaussian distributions, Eq. (13)

Problem setup

We will consider populations of neurons that generate noisy responses Inline graphic in response to a stimulus Inline graphic. The responses, Inline graphic – wherein each component Inline graphic represents one cell's response – can be considered to be continuous-valued firing rates, discrete spike counts, or binary “words”, wherein each neuron's response is a 1 (“spike”) or 0 (“not spike”). The only exception is that, when we consider Inline graphic (discussed below), the responses must be continuous-valued. We consider arbitrary tuning for the neurons; Inline graphic. For scalar stimuli, this definition of “tuning” corresponds to the notion of a tuning curve. In the case of more complex stimuli, it is similar to the typical notion of a receptive field. Recall that the signal correlations are determined by the co-variation of the mean responses of pairs of neurons as the stimulus is varied, and thus they are determined by the similarity in the tuning functions.

As for the structure of noise across the population, our analysis allows for the general case in which the noise covariance matrix Inline graphic (superscript Inline graphic denotes “noise”) depends on the stimulus Inline graphic. This generality is particularly interesting given the observations of Poisson-like variability [20], [21] in neural systems, and that correlations can vary with stimuli [3], [16], [19], [22]. We will assume that the diagonal entries of the conditional covariance matrix – which describe each cell's variance – are fixed, and then ask how coding performance changes as we vary the off-diagonal entries, which describe the covariances between the cells' responses (recall that the noise correlations are the pairwise covariances, divided by the geometric mean of the relevant variances Inline graphic).
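As a minimal illustration of this setup (with arbitrary values of our own choosing, not taken from the paper), one can fix the per-neuron variances, propose a set of pairwise noise correlations, assemble the corresponding noise covariance matrix, and check whether it is a valid, i.e. positive semidefinite, covariance matrix:

```python
import numpy as np

variances = np.array([1.0, 0.5, 2.0])           # fixed trial-to-trial variances (the diagonal)
rho = np.array([[ 1.0,  0.3, -0.2],
                [ 0.3,  1.0,  0.1],
                [-0.2,  0.1,  1.0]])            # candidate noise-correlation matrix

sd = np.sqrt(variances)
C_noise = rho * np.outer(sd, sd)                # covariance_ij = rho_ij * sigma_i * sigma_j

eigenvalues = np.linalg.eigvalsh(C_noise)
print("noise covariance:\n", C_noise)
print("physically realizable (positive semidefinite)?", bool(np.all(eigenvalues >= -1e-12)))
```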

We quantify the coding performance with the following measures, which are defined more precisely in the Methods Section “Defining the information quantities, signal and noise correlations”, below. First, we consider the linear Fisher information (Inline graphic, Eq. (5)), which measures how easy it is to separate the response distributions that result from two similar stimuli, with a linear discriminant. This is equivalent to the quantity used by [11] and [10] (where Fisher information reduces to Inline graphic). While Fisher information is a measure of local coding performance, we are also interested in global measures.

We will consider two such global measures, the OLE information Inline graphic (Eq. (12)) and the mutual information for Gaussian stimuli and responses Inline graphic (Eq. (13)). Inline graphic quantifies how well the optimal linear estimator (OLE) can recover the stimulus from the neural responses: large Inline graphic corresponds to small mean squared error of the OLE and vice versa. For the OLE, there is one set of read-out weights used to estimate the stimulus, and those weights do not change as the stimulus is varied. By contrast, with linear Fisher information, there is generally a different set of weights used for each (small) range of stimuli within which the discrimination is being performed.

Consequently, in the case of Inline graphic and Inline graphic, we will be considering the average noise covariance matrix Inline graphic, where the expectation is taken over the stimulus distribution. Here we overload the notation Inline graphic to denote the covariance matrix that one chooses during the optimization, which will be either local (conditional covariances at a particular stimulus) or global depending on the information measure we consider.

While Inline graphic and Inline graphic are concerned with the performance of linear decoders, the mutual information Inline graphic between stimuli and responses describes how well the optimal read-out could recover the stimulus from the neural responses, without any assumptions about the form of that decoder. However, we emphasize that our results for Inline graphic only apply to jointly Gaussian stimulus and response distributions, which is a less general setting than the conditionally Gaussian cases studied in many places in the literature. An important exception is that Theorem 2 additionally applies to the case of conditionally Gaussian distributions (see discussion in Section “Convexity of information measures”).

For simplicity, we describe most results for a scalar stimulus Inline graphic unless stated otherwise, but the theory holds for multidimensional stimuli (see Methods Section “Defining the information quantities, signal and noise correlations”).
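The short sketch below (our own toy tuning curves and noise covariance, not the paper's examples) illustrates two of these quantities: the local linear Fisher information, computed with the usual expression f'(s)ᵀ C⁻¹ f'(s), and the performance of the OLE, estimated by Monte Carlo as the mean-squared error of the best fixed linear readout. Note that the paper's I_OLE (Eq. (12)) is a specific decreasing function of this error; here we simply report the error itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def tuning(s):                                   # toy tuning curves for 3 neurons
    return np.array([np.sin(s), np.cos(s), 0.5 * s])

def tuning_deriv(s):
    return np.array([np.cos(s), -np.sin(s), 0.5])

C_noise = np.array([[ 1.0, 0.2, -0.1],
                    [ 0.2, 1.0,  0.3],
                    [-0.1, 0.3,  1.0]])          # stimulus-independent noise covariance

# local measure: linear Fisher information at one stimulus value
s0 = 0.7
fprime = tuning_deriv(s0)
I_fisher = fprime @ np.linalg.solve(C_noise, fprime)
print("linear Fisher information at s0:", I_fisher)

# global measure: OLE mean-squared error over a stimulus distribution
s = rng.normal(0.0, 1.0, size=20000)
noise = rng.multivariate_normal(np.zeros(3), C_noise, size=s.size)
x = np.array([tuning(si) for si in s]) + noise

X = np.column_stack([x, np.ones(s.size)])        # responses plus a constant offset
w, *_ = np.linalg.lstsq(X, s, rcond=None)        # one fixed linear readout (the OLE)
print("OLE mean-squared error:", np.mean((X @ w - s) ** 2))
```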

The sign rule revisited

Arguments about pairs of neurons suggest that coding performance is improved – relative to the case of independent, or trial-shuffled data – when the noise correlations have the opposite sign from the signal correlations [5], [7], [10], [13]: we dub this the “sign rule” (SR). This notion has been explored and demonstrated in many places in the experimental and theoretical literature, and formally established for homogeneous positive correlations [10]. However, its applicability in general cases is not yet known.

Here, we formulate this SR property as a theorem without restrictions on homogeneity or population size.

Theorem 1. If, for each pair of neurons, the signal and noise correlations have opposite signs, the linear Fisher information is greater than in the case of independent noise (trial-shuffled data). In the opposite situation, where the signs are the same, the linear Fisher information is decreased compared to the independent case, in a regime of very weak correlations. Similar results hold for Inline graphic and Inline graphic , with a modified definition of signal correlations given in Section “Defining the information quantities, signal and noise correlations”.

In the case of Fisher information, the signal correlation between two neurons is defined as Inline graphic (Section “Defining the information quantities, signal and noise correlations”). Here, the derivatives are taken with respect to the stimulus. This definition recalls the notion of the alignment in the change in the neurons' mean responses in, e.g., [11]. It is important to note that this definition for signal correlation is locally defined near a stimulus value; thus, it differs from some other notions of “signal correlation” in the literature, that quantify how similar the whole tuning curves are for two neurons (see discussion on the alternative Inline graphic in Section “Defining the information quantities, signal and noise correlations”). We choose to define signal correlations for Inline graphic, Inline graphic and Inline graphic as described in Section “Defining the information quantities, signal and noise correlations” to reflect precisely the mechanism behind the examples in [5], among others.

It is a consequence of Theorem 1 that the SR holds pairwise: different pairs of neurons may have noise correlations of different signs, so long as each is opposite in sign to its (pairwise) signal correlation. The result holds as well for heterogeneous populations. The essence of our proof of Theorem 1 is to calculate the gradient of the information function in the space of noise correlations. We compute this gradient at the point representing the case where the noise is independent. The gradient itself is determined by the signal correlations, and will have a positive dot product with any direction of changing noise correlations that obeys the sign rule. Thus, information is increased by following the sign rule, and the gradient points to (locally) the direction for changing noise correlations that maximally improves the information, for a given strength of correlations. A detailed proof is included in Methods Section “Proof of Theorem 1: the generality of the sign rule”; this includes a formula for the gradient direction (Remark 1 in Section “Proof of Theorem 1: the generality of the sign rule”). We have proven the same result for all three of our coding metrics, and for both scalar and multi-dimensional stimuli.
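The gradient calculation at the heart of this argument is easy to reproduce numerically. The sketch below (arbitrary tuning slopes and variances of our own choosing) estimates, by finite differences, the gradient of the linear Fisher information with respect to each pairwise noise correlation at the point of zero correlations; its sign is opposite to that of the corresponding signal correlation (here proportional to f'_i f'_j), so stepping in the sign-rule direction increases information.

```python
import numpy as np

fprime = np.array([1.0, -0.5, 0.8, 0.3])        # tuning-curve slopes at the stimulus
sigma  = np.array([1.0,  0.7, 1.2, 0.9])        # fixed noise standard deviations
iu = np.triu_indices(4, k=1)                    # the 6 neuron pairs

def fisher(rho_offdiag):
    R = np.eye(4)
    R[iu] = rho_offdiag                          # fill upper triangle
    R[(iu[1], iu[0])] = rho_offdiag              # mirror to lower triangle
    C = R * np.outer(sigma, sigma)
    return fprime @ np.linalg.solve(C, fprime)

eps = 1e-6
grad = np.array([(fisher(eps * e) - fisher(-eps * e)) / (2 * eps)
                 for e in np.eye(len(iu[0]))])   # central differences at zero correlations

signal_sign = np.sign(fprime[iu[0]] * fprime[iu[1]])
print("gradient at zero correlations:", np.round(grad, 3))
print("gradient signs opposite to signal correlations?",
      bool(np.all(np.sign(grad) == -signal_sign)))
```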

Intriguingly, there exists an asymmetry between the result on improving information (above), and the (converse) question of what noise correlations are worst for population coding. As we will show later, the information quantities are convex functions of the noise correlation coefficients (see Fig. 2). As a consequence, performance will keep increasing as one continues to move along a “good” direction, for example indicated by the SR. This is what one expects when climbing a bowl-shaped landscape in which the second derivative is always nonnegative. The same convexity result indicates that the performance need not decrease monotonically along a “bad” direction, such as the anti-SR direction. For example, if, while following the anti-SR direction, the system passed by the minimum of the information quantity, then continued increases in correlation magnitude would yield increases in the information. In fact, it is even possible for anti-SR correlations to yield better coding performance than would be achieved with independent noise. An example of this is shown in Fig. 2, where the arrow points in the direction in correlation space predicted by the SR, but performance that is better than with independent noise can also be obtained by choosing noise correlations in the opposite direction.

Figure 2. The “sign rule” may fail to identify the globally optimal correlations.


The optimal linear estimator (OLE) information Inline graphic (Eq. (12)), which is maximized when the OLE produces minimum-variance signal estimates, is shown as a function of all possible choices of noise correlations (enclosed within the dashed line). These values are Inline graphic (x-axis) and Inline graphic (y-axis) for a 3-neuron population. The bowl shape exemplifies the general fact that Inline graphic is a convex function and thus must attain its maximum on the boundary (Theorem 2) of the allowed region of noise correlations. The independent noise case and global optimal noise correlations are labeled by a black dot and triangle respectively. The arrow shows the gradient vector of Inline graphic, evaluated at zero noise correlations. It points to the quadrant in which noise correlations and signal correlations have opposite signs, as suggested by Theorem 1. Note that this gradient vector, derived from the “sign rule”, does not point towards the global maximum, and actually misses the entire quadrant containing that maximum. This plot is a two-dimensional slice of the cases considered in Fig. 3, while restricting Inline graphic (see Methods Section “Details for numerical examples and simulations” for further parameters).

Thus, the result that anti-SR noise correlations harm coding is only a “local” result – near the point of zero correlations – and therefore requires the assumption of weak correlations. We emphasize that this asymmetry of the SR is intrinsic to the problem, due to the underlying convexity.

One obvious limitation of Theorem 1 and the “sign rule” results in general is that they only compare information in the presence of correlated noise with the baseline case of independent noise. This approach does not address the issue of finding the optimal noise correlations, nor does it provide much insight into experimental data that do not obey the SR. Does the sign rule describe optimal configurations? What are the properties of the global optima? How should we interpret noise correlations that do not follow the SR? We will address these questions in the following sections.

Optimal correlations lie on boundaries

Let us begin by considering a simple example to see what can happen for the optimization problem we described in Section “Problem setup”, when the baseline of comparison is no longer restricted to the case of independent noise. This example is for a population of 3 neurons. In order to better visualize the results, we further require that Inline graphic. The space of correlation configurations is therefore two-dimensional. In Fig. 2, we plot the information Inline graphic as a function of the two free correlation coefficients (in this example the variances are all Inline graphic, thus Inline graphic).

First, notice that there is a parabola-shaped region of all attainable correlations (in Fig. 2, enclosed by black dashed lines and the upper boundary of the square). The region is determined not only by the entry-wise constraint Inline graphic (the square), but also by a global constraint that the covariance matrix Inline graphic must be positive semidefinite. For linear Fisher information and mutual information for Gaussian distributions, we further assume Inline graphic (i.e. Inline graphic is positive definite) so that Inline graphic and Inline graphic remain finite (see also Section “Defining the information quantities, signal and noise correlations”). As we will see again below, this important constraint leads to many complex properties of the optimization problem. This constraint can be understood by noting that correlations must be chosen to be “consistent” with each other and cannot be freely and independently chosen. For example, if Inline graphic are large and positive, then cells 2 and 3 will be positively correlated – since they both covary positively with cell 1 – and Inline graphic may thus not take negative values. In the extreme of Inline graphic, Inline graphic is fully determined to be 1. Cases like this are reflected in the corner shape in the upper right of the allowed region in Fig. 2.

The case of independent noise is denoted by a black dot in the middle of Fig. 2, and the gradient vector of Inline graphic points to a quadrant that is guaranteed to increase information vs. the independent case (Theorem 1). The direction of this gradient satisfies the sign rule, as also guaranteed by Theorem 1. However, the gradient direction and the quadrant of the SR both fail to capture the globally optimal correlations, which lie at the upper right corner of the allowed region and are indicated by the red triangle. This is typically what happens for larger and less symmetric populations, as we will demonstrate next.
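A brute-force version of this picture (our own construction, with arbitrary tuning slopes, unit variances, and the linear Fisher information standing in for the I_OLE plotted in Fig. 2) is easy to set up: grid over the two-dimensional slice with ρ13 = ρ23, discard points where the correlation matrix is not positive definite, and record where the information is largest. The best grid point sits next to the boundary of the allowed region, as Theorem 2 predicts; for Fisher information the value keeps growing as that boundary is approached, which is why the text requires the covariance to remain positive definite.

```python
import numpy as np

fprime = np.array([1.0, 0.6, -0.8])             # tuning slopes (arbitrary), unit variances
grid = np.linspace(-0.99, 0.99, 199)
best = None
for r12 in grid:
    for r in grid:                               # r = rho_13 = rho_23
        C = np.array([[1.0, r12, r],
                      [r12, 1.0, r],
                      [r,   r,   1.0]])
        min_eig = np.linalg.eigvalsh(C)[0]
        if min_eig < 1e-6:                       # outside (or on) the allowed region
            continue
        info = fprime @ np.linalg.solve(C, fprime)
        if best is None or info > best[0]:
            best = (info, r12, r, min_eig)

info, r12, r, min_eig = best
print(f"best grid point: rho_12 = {r12:.2f}, rho = {r:.2f}, info = {info:.3g}")
print(f"smallest eigenvalue there: {min_eig:.1e} (small, i.e. near the PSD boundary)")
```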

Since the sign rule cannot be relied upon to indicate the global optimum, what other tools do we have at hand? A key observation, which we prove in the Methods Section “Proof of Theorem 2: optima lie on boundaries”, is that information is a convex function of the noise correlations (off-diagonal elements of Inline graphic). This immediately implies:

Theorem 2. The optimal Inline graphic that maximize information must lie on the boundary of the region of correlations considered in the optimization.

As we saw in Fig. 2, mathematically feasible noise correlations may not be chosen arbitrarily but are constrained by the fact that the noise covariance matrix must be positive semidefinite. We denote this condition by Inline graphic, and recall that it is equivalent to all of its eigenvalues being non-negative. According to our problem setup, the diagonal elements of Inline graphic, which are the individual neurons' response variances, are fixed. It can be shown that this diagonal constraint specifies a linear slice through the cone of all Inline graphic, resulting in a bounded convex region in Inline graphic called a spectrahedron, for a population of Inline graphic neurons. These spectrahedra are the largest possible regions of noise correlation matrices that are physically realizable, and are the set over which we optimize, unless stated otherwise.

Importantly for biological applications, Theorem 2 will continue to apply, when additional constraints define smaller allowed regions of noise correlations within the spectrahedron. These constraints may come from circuit or neuron-level factors. For example, in the case where correlations are driven by common inputs [22], [23], one could imagine a restriction on the maximal value of any individual correlation value. In other settings, one might consider a global constraint by restricting the maximum Euclidean norm (2-norm) of the noise correlations (defined in Eq. (18) in Methods).

For a population of Inline graphic neurons, there are Inline graphic possible correlations to consider; naturally, as Inline graphic increases, the optimal structure of noise correlations can therefore become more complex. Thus we illustrate the theorem above with an example of 3 neurons encoding a scalar stimulus, in which there are 3 noise correlations to vary. In Fig. 3, we demonstrate two different cases, each with a distinct Inline graphic matrix and vector Inline graphic (values are given in Methods Section “Details for numerical examples and simulations”). In the first case, there is a unique optimum (panel A, largest information is associated with the lightest color). In the second case, there are 4 disjoint optima (panel B), all of which lie on the boundary of the spectrahedron.

Figure 3. Optimal coding is obtained on the boundary of the allowed region of noise correlations.


For fixed neuronal response variances and tuning curves, we compute coding performance – quantified by Inline graphic information values – for different values of the pair-wise noise correlations. To be physically realizable, the correlation coefficients must form a positive semi-definite matrix. This constraint defines a spectrahedron, or a swollen tetrahedron, for the Inline graphic cells used. The colors of the points represent Inline graphic information values. With different parameters Inline graphic and Inline graphic (see values in Methods Section “Details for numerical examples and simulations”), the optimal configuration can appear at different locations, either unique (A) or attained at multiple disjoint places (B), but always on the boundary of the spectrahedron. In both panels, plot titles give the maximum value of Inline graphic attained over the allowed space of noise correlations, and the value of Inline graphic that would be obtained with the given tuning curves and perfectly deterministic neural responses. This provides an upper bound on the attainable Inline graphic (see text Section “Noise cancellation”). Interestingly, in panel (A), the noisy population achieves this upper bound on performance, but this is not the case in (B). Details of parameters used are in Methods Section “Details for numerical examples and simulations”.

In the next section, we will build from this example to a more complex one including more neurons. This will suggest further principles that govern the role of noise correlations in population coding.

Heterogeneously tuned neural populations

We next follow [8], [9], [15] and study a numerical example of a larger (Inline graphic) heterogeneously tuned neural population. The stimulus encoded is the direction of motion, which is described by a 2-D vector Inline graphic. We used the same parameters and functional form for the shape of tuning curves as in [8], the details of which are provided in Methods Section “Details for numerical examples and simulations”. The tuning curve for each neuron was allowed to have randomly chosen width and magnitude, and the trial-to-trial variability was assumed to be Poisson: the variance is equal to the mean. As shown in Fig. 4 A, under our choice of parameters the neural tuning curves – and by extension, their responses to the stimuli – are highly heterogeneous. Once again, we quantify coding by Inline graphic (see definition in Section “Problem setup” or Eq. (12) in Methods).

Figure 4. Heterogeneous neural population and violations of the sign rule with increasing correlation strength.


We consider signal encoding in a population of 20 neurons, each of which has a different dependence of its mean response on the stimulus (heterogeneous tuning curves shown in A). We optimize the coding performance of this population with respect to the noise correlations, under several different constraints on the magnitude of the allowed noise correlations. Panel (B) shows the resultant – optimal given the constraint – values of OLE information Inline graphic, with different noise correlation strengths (blue circles). The strength of correlations is quantified by the Euclidean norm (Eq. (18)). For comparison, the red crosses show information obtained for correlations that obey the sign rule (in particular, pointing along the gradient giving greatest information for weak correlations); this information is always less than or equal to the optimum, as it must be. Note that correlations that follow the sign rule fail to exist for large correlation strengths, as the defining vector points outside of the allowed region (spectrahedron) beyond a critical length (labeled (ii)). For correlation strengths beyond this point, distinct optimized noise correlations continue to exist; the information values they obtain eventually saturate at noise-free levels (see text), which is Inline graphic for the example shown here. This occurs for a wide range of correlation strengths. Panel (C) shows how well these optimized noise correlations are predicted from the corresponding signal correlations (by the sign rule), as quantified by the Inline graphic statistic (between 0 and 1, see Fig. 5). For small magnitudes of correlations, the Inline graphic values are high, but these decline when the noise correlations are larger.

Our goal with this example is to illustrate two distinct regimes, with different properties of the noise correlations that lead to optimal coding. In the first regime, which occurs closest to the case of independent noise, the SR determines the optimal correlation structure. In the second, moving further away from the independent case, the optimal correlations may disobey the SR. (A related effect was found by [8]; we return to this in the Discussion.) We accomplish this in a very direct way: we gradually increase the (additional) constraint on the Euclidean norm of correlations (Eq. (18) in Methods Section “Defining the information quantities, signal and noise correlations”), numerically search for optimal noise correlation matrices at each constraint level, and compare them to predictions from the SR.

In Fig. 4 B we show the results, comparing the information attained with noise correlations that obey the sign rule with those that are optimized, for a variety of different noise correlation strengths. As they must be, the optimized correlations always produce information values as high as, or higher than, the values obtained with the sign rule.

In the limit where the correlations are constrained to be small, the optimized correlations agree with the sign rule; an example of these “local” optimized correlations is shown in Fig. 5 ADG, corresponding to the point labeled Inline graphic in Fig. 4 BC. This is predicted by Theorem 1. In this “local” region of near-zero noise correlations, we see a linear alignment of signal and noise correlations (Fig. 5 D). As larger correlation strengths are reached (points Inline graphic and Inline graphic in Fig. 4 BC), we observe a gradual violation of the sign rule for the optimized noise correlations. This is shown by the gradual loss of the linear relationship between signal and noise correlations in Fig. 5 D vs. E vs. F, as quantified by the Inline graphic statistic. Interestingly, this can happen even when the correlation coefficients continue to have reasonably small values, and are broadly consistent with the ranges of noise correlations seen in physiology experiments [3], [8], [24].
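A schematic version of this optimization procedure is sketched below. It is our own simplified stand-in, not the paper's computation: a small population with made-up tuning slopes and variances, the linear Fisher information in place of I_OLE, and a generic local constrained optimizer (scipy's SLSQP) instead of a dedicated method. For each allowed Euclidean norm c it searches for information-maximizing correlations and compares them with a step of length c along the sign-rule (gradient-at-zero) direction, which for large enough c may leave the allowed region, as in Fig. 4.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N = 6
fprime = rng.normal(size=N)                      # heterogeneous tuning slopes (arbitrary)
sigma = rng.uniform(0.5, 1.5, size=N)            # fixed noise standard deviations
iu = np.triu_indices(N, k=1)
n_pairs = len(iu[0])

def cov(rho_vec):
    R = np.eye(N)
    R[iu] = rho_vec
    R[(iu[1], iu[0])] = rho_vec
    return R * np.outer(sigma, sigma)

def info(rho_vec):
    return fprime @ np.linalg.solve(cov(rho_vec), fprime)

# sign-rule direction = gradient of the information at zero correlations
eps = 1e-6
g = np.array([(info(eps * e) - info(-eps * e)) / (2 * eps) for e in np.eye(n_pairs)])
g_unit = g / np.linalg.norm(g)

for c in [0.1, 0.5, 1.5, 3.0]:                   # allowed Euclidean norm of the correlations
    cons = [{'type': 'ineq', 'fun': lambda r, c=c: c - np.linalg.norm(r)},
            {'type': 'ineq', 'fun': lambda r: np.linalg.eigvalsh(cov(r))[0] - 1e-6}]
    res = minimize(lambda r: -info(r), x0=np.zeros(n_pairs),
                   constraints=cons, method='SLSQP', options={'maxiter': 500})
    sr_point = c * g_unit
    sr_feasible = np.linalg.eigvalsh(cov(sr_point))[0] > 0
    sr_info = info(sr_point) if sr_feasible else float('nan')
    print(f"norm <= {c:3.1f}: optimized info = {-res.fun:8.3f}, "
          f"sign-rule step info = {sr_info:8.3f}"
          + ("" if sr_feasible else " (sign-rule step leaves the allowed region)"))
```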

Figure 5. In our larger neural population, the sign rule governs optimal noise correlations only when these correlations are forced to be very small in magnitude; for stronger correlations, optimized noise correlations have a diverse structure.


Here we investigate the structure of the optimized noise correlations obtained in Fig. 4; we do this for three examples with increasing correlation strength, indicated by the labels Inline graphic in that figure. Panels (ABC) show scatter plots of the noise correlations of the neural pairs, as a function of their signal correlations (defined in Methods Section “Defining the information quantities, signal and noise correlations”). For each example, we also show (DEF) a version of the scatter plot where the signal correlations have been rescaled in a manner discussed in Section “Parameters for Fig. 1, Fig. 2 and Fig. 3”, which highlights the linear relationship (wherever it exists) between signal and noise correlations. In both sets of panels, we see the same key effect: the sign rule is violated as the (Euclidean) strength of noise correlations increases. In (ABC), this is seen by noting the quadrants where the dots are located: the sign rule predicts they should only be in the second and fourth quadrants. In (DEF), we quantify agreement with the sign rule by the Inline graphic statistic. Finally, (GHI) display histograms of the noise correlations; these are concentrated around 0, with low average values in each case.

The two different regimes of optimized noise correlations arise because, at a certain correlation strength, the correlation strength can no longer be increased along the direction that defines the sign rule without leaving the region of positive semidefinite covariance matrices. However, correlation matrices still exist that allow for more informative coding with larger correlation strengths. This reflects the geometrical shape of the spectrahedron, wherein the optima may lie in the “corners”, as shown in Fig. 3. For these larger-magnitude correlations, the sign rule no longer describes optimized correlations, as shown with an example of optimized correlations in Fig. 5 CF .

Fig. 4 B illustrates another interesting feature. There is a diverse set of correlation matrices, with different Euclidean norms beyond the value of (roughly) 1.2, that all achieve the same globally optimal information level. As we see in the next section, this phenomenon is actually typical for large populations, and can be described precisely.

Noise cancellation

For certain choices of tuning curves and noise variances, including the examples in Fig. 3 A and Section “Heterogeneously tuned neural populations”, we can tell precisely the value of the globally optimized information quantities — that is, the information levels obtained with optimal noise correlations. For the OLE, this global optimum is the upper bound on Inline graphic. This is shown formally in Lemma 8, but it simply translates to an intuitive lower bound of the OLE error, similar to the data processing inequality for mutual information. This bound states that the OLE error cannot be smaller than the OLE error when there is no noise in the responses, i.e. when the neurons produce a deterministic response conditioned on the stimulus. This upper bound may — and often will (Theorem 5) — be achievable by populations of noisy neurons.

Let us first consider an extremely simple example. Consider the case of two neurons with identical tuning curves, so that their responses are Inline graphic, where Inline graphic is the noise in the response of neuron Inline graphic, and Inline graphic is the same mean response under stimulus Inline graphic. In this case, the “noise free” coding is when Inline graphic on all trials, and the inference accuracy is determined by the shape of the tuning curve Inline graphic (whether or not it is invertible, for example). Now let us consider the case where the noise in the neurons' responses is non-zero but perfectly anti-correlated, so that Inline graphic on all trials. We can then choose the read-out as Inline graphic to cancel the noise and achieve the same coding accuracy as the “noise free” case.
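This two-neuron construction can be checked directly. The simulation below (our own toy tuning curve) draws trials with perfectly anti-correlated noise and shows that the averaged readout reproduces the noise-free response exactly, whereas averaging two independently noisy neurons leaves a residual error.

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda s: 2.0 * s + 1.0                      # simple invertible tuning curve
s = rng.uniform(-1, 1, size=10000)
n = rng.normal(0.0, 0.5, size=s.size)

x1, x2 = f(s) + n, f(s) - n                      # perfectly anti-correlated noise
readout = 0.5 * (x1 + x2)
print("anti-correlated noise: max |readout - f(s)| =", np.max(np.abs(readout - f(s))))

m = rng.normal(0.0, 0.5, size=s.size)            # independent noise, same variance
readout_ind = 0.5 * ((f(s) + n) + (f(s) + m))
print("independent noise:     mean squared readout error =",
      np.mean((readout_ind - f(s)) ** 2))
```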

The preceding example shows that, at least in some cases, one can choose noise correlations in such a way that a linear decoder achieves “noise-free” performance. One is naturally left to wonder whether this observation applies more generally.

First, we state the conditions on the noise covariance matrices under which the noise-free coding performance is obtained. We will then identify the conditions on parameters of the problem, i.e. the tuning curves (or receptive fields) and noise variances, under which this condition can be satisfied. Recall that the OLE is based on a fixed (across stimuli) linear readout coefficient vector Inline graphic defined in Eq. (9).

Theorem 3. A covariance matrix Inline graphic attains the noise-free bound for OLE information (and hence is optimal), if and only if Inline graphic . Here Inline graphic is the cross-covariance between the stimulus and the responses (Eq. (11)), Inline graphic is the covariance of the mean response (Eq. (10)), and Inline graphic is the linear readout vector for the OLE, which is the same as in the noise-free case — that is, Inline graphic — when the condition is satisfied.

We note that when the condition is satisfied, the conditional variance of the OLE is Inline graphic. This indicates that all the error comes from the bias, if we write, as usual, the mean squared error (for scalar Inline graphic) as the sum of two parts, Inline graphic. The condition obtained here can also be interpreted as “the signal/readout being orthogonal to the noise.” While this perspective gives useful intuition about the result, we find that other ideas are more useful for constructing proofs of this and other results. We discuss this issue more thoroughly in Section “The geometry of the covariance matrix”.

In general, this condition will not be satisfied by arbitrary choices of pairwise correlations. The above theorem implies that, given the tuning curves, whether or not such “noise free” coding is achievable will be determined only by the relative magnitude, or heterogeneity, of the noise variances for each neuron – the diagonal entries of Inline graphic. The following theorem outlines precisely the conditions under which such “noise-free” coding performance is possible, a condition that can be easily checked for given parameters of a model system, or for experimental data.

Theorem 4. For scalar stimulus, let Inline graphic , Inline graphic , where Inline graphic is the readout vector for OLE in the noise-free case. Noise correlations may be chosen so that coding performance matches that which could be achieved in the absence of noise if and only if

[Eq. (1)]

When “ Inline graphic ” is satisfied, all optimal correlations attaining the maximum form a Inline graphic dimensional convex set on the boundary of the spectrahedron. When “ Inline graphic ” is attained, the dimension of that set is Inline graphic , where Inline graphic is the number of zeros in Inline graphic .

We pause to make three observations about this Theorem. First, the set of optimal correlations, when it occurs, is high-dimensional. This bears out the notion that there are many different, highly diverse noise correlation structures that all give the same (optimal) level of the information metrics. Second, and more technically, we note that the (convex) set of optimal correlations is flat (contained in a hyperplane of its dimension), as viewed in the higher dimensional space Inline graphic. A third intriguing implication of the theorem is that when noise-cancellation is possible, all optimal correlations are connected, as the set is convex (any two points are connected by a linear segment that also lies in the set), and thus the case of disjoint optima as in Fig. 3 B will never happen when optimal coding achieves noise-free levels. Indeed, in Fig. 3 B , the noise-free bound is not attained.

The high dimension of the convex set of noise-canceling correlations explains the diversity of optimal correlations seen in Fig. 4 B (i.e., with different Euclidean norms). Such a property is nontrivial from a geometric point of view. One may conclude prematurely that the dimension result is obvious if one considers algebraically the number of free variables and constraints in the condition of Theorem 3. This argument would give the dimension of the resulting linear space. However, as shown in the proof, there is another nontrivial step to show that the linear space has some finite part that also satisfies the positive semidefinite constraint. Otherwise, many dimensions may shrink to zero in size, as happens at the corner of the spectrahedron, resulting in a small dimension.

The optimization problem can be thought of as finding the level set of the information function associated with as large a value as possible while still intersecting the spectrahedron. The level sets are collections of all points where the information takes the same value. These form high dimensional surfaces, and contain each other, much as layers of an onion. Here these surfaces are also guaranteed to be convex, as the information function itself is. Next, note from Fig. 3 that we have already seen that the spectrahedron has sharp corners. Combining this with our view of the level sets, one might guess that the set of optimal solutions — i.e. the intersection — should be very low dimensional. Such intuition is often used in mathematics and computer science, e.g. with regards to the sparsity promoting tendency of L1 optimization. The high dimensionality shown by our theorem therefore reflects a nontrivial relationship between the shape of the spectrahedron and the level sets of the information quantities.

Although our theorem only characterizes the abundance of the set of exactly optimal noise correlations, it is not hard to imagine that the same, if not greater, abundance should also hold for correlations that approximately achieve the maximal information level. This is indeed what we see in numerical examples. For example, note the long, curved level-set curves in Fig. 2 near the boundaries of the allowed region. Along these lines lie many different noise correlation matrices that all achieve the same nearly-optimal values of Inline graphic. The same is true of the many dots in Fig. 3 A that all share a similar “bright” color corresponding to large Inline graphic.

One may worry that the noise cancellation discussed above is rarely achievable, and thus somewhat spurious. The following theorem suggests that the opposite is true. In particular, it gives one simple condition under which the noise cancellation phenomenon, and resultant high-dimensional sets of optimal noise correlation matrices, will almost surely be possible in large neural populations.

Theorem 5. If the Inline graphic defined in Theorem 4 are independent and identically distributed (i.i.d.) as a random variable Inline graphic on Inline graphic with Inline graphic , then the probability

[Eq. (2)]

In actual populations, the Inline graphic might not be well described as i.i.d. However, we believe that the inequality condition of Eq. (1) is still likely to be satisfied, as the contrary seems to require one neuron with a highly outlying combination of tuning and noise variance (a few comparable outliers will not necessarily violate the condition, as their magnitudes will enter on the right hand side of the condition; thus the condition breaks only with a single “outlier of outliers”).

Discussion

Summary

In this paper, we considered a general mathematical setup in which we investigated how coding performance changes as noise correlations are varied. Our setup made no assumptions about the shapes (or heterogeneity) of the neural tuning curves (or receptive fields), or the variances in the neural responses. Thus, our results – which we summarize below – provide general insights into the problem of population coding. These are as follows:

  • We proved that the sign rule — if signal and noise correlations have opposite signs, then the presence of noise correlations will improve encoded information vs. the independent case — holds for any neural population. In particular, we showed that this holds for three different metrics of encoded information, and for arbitrary tuning curves and levels of heterogeneity. Furthermore, we showed that, in the limit of weak correlations, the sign rule predicts the optimal structure of noise correlations for improving encoded information.

  • However, as also found in the literature (see below), the sign rule is not a necessary condition for good coding performance to be obtained. We observed that there will typically be a diverse family of correlation matrices that yield good coding performance, and these will often violate the sign rule.

  • There is significantly more structure to the relationship between noise correlations and encoded information than that given by the sign rule alone. The information metrics we considered are all convex functions with respect to the entries in the noise correlation matrix. Thus, we proved that the optimal correlation structures must lie on boundaries of any allowed set. These boundaries could come from mathematical constraints – all covariance matrices must be positive semidefinite – or mechanistic/biophysical ones.

  • Moreover, boundaries containing optimal noise correlations have several important properties. First, they typically contain correlation matrices that lead to the same high coding fidelity that one could obtain in the absence of noise. Second, when this occurs there is a high-dimensional set of different correlation matrices that all yield the same high coding fidelity – and many of these matrices strongly violate the sign rule.

  • Finally, for reasonably large neural populations, we showed that both the noise-free, and more general SR-violating optimal, correlation structures emerge while the average noise correlations remain quite low — with values comparable to some reports in the experimental literature.

Convexity of information measures

Convexity of information with respect to noise correlations arises conceptually throughout the paper, and specifically in Theorem 2. We have shown that such convexity holds for all three particular measures of information studied above (Inline graphic, Inline graphic, and Inline graphic). Here, we show that these observations may reflect a property intrinsic to the concept of information, so that our results could apply more generally.

It is well known that mutual information is convex with respect to conditional distributions. Specifically, consider two random variables (or vectors) X and Y, each with conditional distribution p_X(·|s) and p_Y(·|s) (with respect to the random “stimulus” variable(s) s). Suppose another variable Z has a conditional distribution given by a nonnegative linear combination of the two, p_Z(·|s) = λ p_X(·|s) + (1−λ) p_Y(·|s), with 0 ≤ λ ≤ 1. The mutual information must satisfy I(s;Z) ≤ λ I(s;X) + (1−λ) I(s;Y). Notably, this fact can be proved using only the axiomatic properties of mutual information (the chain rule for conditional information and nonnegativity) [25].

It is easy to see how this convexity in conditional distributions is related to the convexity in noise correlations we use. To do this, we further assume that the two conditional means are the same, E[X|s] = E[Y|s], and let X and Y be random vectors. Introduce an auxiliary Bernoulli random variable B that is independent of s, with probability λ of being 1. The variable Z can then be explicitly constructed using B: for any s, draw Z according to p_X(·|s) if B = 1 and according to p_Y(·|s) otherwise. Using the law of total covariance, the covariance (conditioned on s) between the i-th and j-th elements of Z is

Cov(Z_i, Z_j | s) = E[ Cov(Z_i, Z_j | s, B) ] + Cov( E[Z_i | s, B], E[Z_j | s, B] | s )
= λ Cov(X_i, X_j | s) + (1−λ) Cov(Y_i, Y_j | s) + 0
= λ Cov(X_i, X_j | s) + (1−λ) Cov(Y_i, Y_j | s).

This shows that the noise covariances are expressed accordingly as linear combinations. If the information depends only on covariances (besides the fixed means), as for the three measures we consider, the two notions of convexity become equivalent. A direct corollary of this argument is that the convexity result of Theorem 2 also holds in the case of mutual information for conditionally Gaussian distributions (i.e., such that Inline graphic given Inline graphic is Gaussian distributed).
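The sketch below (our own construction, with arbitrary parameters) checks this convexity numerically: for random pairs of covariance matrices sharing the same fixed diagonal, midpoint convexity requires that the information at the average matrix not exceed the average of the informations. We test the linear Fisher information and, for the jointly Gaussian (linear tuning) setting, the Gaussian mutual information written via the matrix determinant lemma as 0.5·log(1 + σ_s² aᵀC⁻¹a); both inequalities hold in every trial.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
a = rng.normal(size=N)                           # linear tuning slopes
sigma = rng.uniform(0.5, 1.5, size=N)            # fixed noise standard deviations
var_s = 1.0                                      # stimulus variance

def random_cov():
    A = rng.normal(size=(N, N))
    C = A @ A.T                                  # random positive definite matrix
    d = np.sqrt(np.diag(C))
    return (C / np.outer(d, d)) * np.outer(sigma, sigma)   # rescale to the fixed diagonal

def fisher(C):
    return a @ np.linalg.solve(C, a)

def mutual_gauss(C):
    return 0.5 * np.log(1.0 + var_s * fisher(C))

ok_fisher = ok_mi = True
for _ in range(2000):
    C0, C1 = random_cov(), random_cov()
    Cm = 0.5 * (C0 + C1)
    ok_fisher &= fisher(Cm) <= 0.5 * (fisher(C0) + fisher(C1)) + 1e-9
    ok_mi &= mutual_gauss(Cm) <= 0.5 * (mutual_gauss(C0) + mutual_gauss(C1)) + 1e-9

print("linear Fisher information midpoint-convex in all trials?", bool(ok_fisher))
print("Gaussian mutual information midpoint-convex in all trials?", bool(ok_mi))
```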

Sensitivity and robustness of the impact of correlations on encoded information

One obvious concern about our results, especially those related to the “noise-free” coding performance, is that this performance may not be robust to small perturbations in the covariance matrix – and thus, for example, real biological systems might be unable to exploit noise correlations in signal coding. This issue was recently highlighted, in particular, by [26].

At first, concerns about robustness might appear to be alleviated by our observation that there is typically a large set of possible correlation structures that all yield similar (optimal) coding performance (Theorem 4). However, if the correlation matrix were perturbed along a direction orthogonal to the level set of the information quantity at hand, this could still lead to arbitrary changes in information. To address this matter directly, we explicitly calculated the following upper bound for the sensitivity of information, or condition number Inline graphic, with respect to (sufficiently small) perturbations. The condition number Inline graphic is defined as the ratio of the relative change in the function to that in its variables. For example, the condition number corresponding to perturbing Inline graphic is the smallest number Inline graphic that satisfies Inline graphic. Similarly one can define a condition number Inline graphic for perturbing the tuning of neurons Inline graphic.

Proposition 6. The local condition number of Inline graphic under perturbations of Inline graphic (where magnitude is quantified by 2-norm) is bounded by

[Eq. (3)]

where Inline graphic and Inline graphic are the largest and smallest eigenvalue of Inline graphic respectively. Here Inline graphic is the condition number with respect to the 2-norm, as defined in the above equation.

Similarly, the condition number for perturbing Inline graphic is bounded by

[Eq. (4)]

where Inline graphic is the Inline graphic -th column of Inline graphic , and we assume Inline graphic for all Inline graphic . Here Inline graphic is the dimension of the stimulus Inline graphic .

Though stated for Inline graphic, the same results also hold for Inline graphic when replacing Inline graphic by Inline graphic in Eqs. (3) and (4). We believe that a similar property is possible to derive for the mutual information Inline graphic, but the expression could be quite cumbersome; we do not pursue this further here.

To interpret this Proposition, we make the following observations which explain when the sensitivity or condition numbers will (or will not) be themselves reasonable in size, for given noise correlations Inline graphic. In our setup, the diagonal of Inline graphic (or Inline graphic for OLE) is fixed, and therefore Inline graphic is bounded (Gershgorin circle theorem). As long as Inline graphic (or Inline graphic) is not close to singular, the information should therefore be robust, i.e. with a reasonably bounded condition number. For OLE, as Inline graphic, we always have a universal bound of Inline graphic determined only by Inline graphic. For the linear Fisher information, however, nearly singular Inline graphic can more typically occur near optimal solutions; in these cases, the condition numbers will be very large.
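As a rough empirical counterpart to these bounds (our own numerical probe, not the expressions in Proposition 6), one can perturb a noise covariance matrix directly and record the largest observed ratio of relative change in information to relative change in the matrix. Doing this for a well-conditioned and a nearly singular covariance, with the linear Fisher information standing in for the information measure, illustrates the contrast described above.

```python
import numpy as np

rng = np.random.default_rng(4)
fprime = np.array([1.0, -0.4, 0.7])

def fisher(C):
    return fprime @ np.linalg.solve(C, fprime)

def estimate_condition_number(C, n_samples=2000, scale=1e-6):
    """Largest observed (|dI|/I) / (||dC||_2 / ||C||_2) over small random perturbations."""
    I0 = fisher(C)
    kappa = 0.0
    for _ in range(n_samples):
        D = rng.normal(size=C.shape)
        D = scale * (D + D.T) / 2.0              # small symmetric perturbation
        rel_dI = abs(fisher(C + D) - I0) / abs(I0)
        rel_dC = np.linalg.norm(D, 2) / np.linalg.norm(C, 2)
        kappa = max(kappa, rel_dI / rel_dC)
    return kappa

C_good = np.array([[1.0, 0.2, 0.1],
                   [0.2, 1.0, 0.3],
                   [0.1, 0.3, 1.0]])
C_near_singular = np.array([[1.0, 0.98, 0.0],
                            [0.98, 1.0, 0.0],
                            [0.0,  0.0, 1.0]])

for name, C in [("well-conditioned", C_good), ("nearly singular", C_near_singular)]:
    lam = np.linalg.eigvalsh(C)
    print(f"{name:>16s}: eigenvalue spread = {lam[-1] / lam[0]:6.1f}, "
          f"estimated kappa = {estimate_condition_number(C):6.1f}")
```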

Relationship to previous work

Much prior work has investigated the relationship between noise correlations and the fidelity of signal coding [3], [5]–[11], [13]–[16]. Two aspects of our current work complement and generalize those studies.

The first are our results on the sign rule (Section “The sign rule revisited”). Here, we find that, if each cell pair has noise correlations that have the opposite sign vs. their signal correlations, the encoded information is always improved, and that, at least in the case of weak noise correlations, noise correlations that have the same sign as the signal correlations will diminish encoded information. This effect was observed by [6] for neural populations with identically tuned cells. Since the tuning was identical in their work, all signal correlations were positive. Thus, their observation that positive noise correlations diminish encoded information is consistent with the SR results described above.

Relaxing the assumption of identical tuning, several studies followed [6] that used cell populations with tuning that differed from cell to cell, but maintained some homogeneous structure – i.e., identically shaped, and evenly spaced (along the stimulus axis) tuning curves, e.g., [5], [7]. The models that were investigated then assumed that the noise correlation between each cell pair was a decaying function of the displacement between the cells' tuning curve peaks. The amplitude of the correlation function – which determines the maximal correlation over all cell pairs, attained for “nearby” cells – was the independent variable in the numerical experiments. Recall that these nearby (in tuning-curve space) cells, with overlapping tuning curves, will have positive signal correlations. These authors found that positive signs of noise correlations diminished encoded information, while negative noise correlations enhanced it. This is once again broadly consistent with the sign rule, at least for nearby cells which have the strongest correlation. Finally, we note that [5], [10], [12] give a crisp geometrical interpretation of the sign rule in the case of Inline graphic cells.

At the same time, experiments typically show noise correlations that are stronger for cell pairs with higher signal correlations [3], [6], [19], which is certainly not in keeping with the sign rule. This underscores the need for new theoretical insights. To this effect, we demonstrated that, while noise correlations that obey the sign rule are guaranteed to improve encoded information relative to the independent case, this improvement can also occur for a diverse range of correlation structures that violate it. (Recall the asymmetry of our findings for the sign rule: noise correlations that violate the sign rule are only guaranteed to diminish encoded information if those noise correlations are very weak).

This finding is anticipated by the work of [8], [9], [14], who used elegant analytical and numerical studies to reveal improvements in coding performance in cases where the sign rule was violated. They studied heterogeneous neural populations, with, for example, different maximal firing rates for different neurons. In particular, these authors show how heterogeneity can simultaneously improve the accuracy and capacity of stimulus encoding [14], or can create coding subspaces that are nearly orthogonal to directions of noise covariance [8], [9]. Taken together, these studies show that the same noise correlation structure discussed above – with nearby cells correlated – could lead to improved population coding, so long as the noise correlations are sufficiently strong. [8] also demonstrated that the magnitude of correlations needed to satisfy the “sufficiently strong” condition decreases as the population size increases, and that in the large Inline graphic limit, certain coding properties become invariant to the structure of noise correlations. Overall, these findings agree with our observations about a large diversity of SR-violating noise correlation structures that improve encoded information.

One final study requires its own discussion. Whereas the current study (and those discussed above) investigated how coding relates to noise correlations without regard for the biophysical origin of those correlations, [17] studied a semi-mechanistic model in which noise correlations were generated by inter-neuronal coupling. They observed that coupling that generates anti-SR correlations is beneficial for population coding when the noise level is very high, but that at low noise levels, the optimal population would follow the SR. Understanding why different mechanistic models can display different trends in their noise correlations is important, and we are currently investigating that issue.

The geometry of the covariance matrix

One geometrical, and intuitively helpful, way to think about problems involving noise correlations is to ask when the noise is “orthogonal to the signal”: in these cases, the noise can be separated from, or is orthogonal to, the signal, and high coding performance is obtained. This geometrical view is equally valid for the cases we study (e.g., the conditions we derive in Theorem 3), and is implicit in the diagrams in Figure 1. To make the approach explicit, one could perform an eigenvector analysis on the covariance matrices at hand, where quantities like the linear Fisher information are rewritten as a sum of projections of the tuning vector onto the eigenbasis of the covariance matrix, weighted by the appropriate eigenvalues.
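As a concrete illustration, here is a minimal Python sketch of that decomposition, assuming the standard scalar-stimulus form of the linear Fisher information, I = f'^T (Cn)^{-1} f' (cf. Eq. (5)), with a hypothetical tuning-derivative vector f' and a randomly generated unit-variance noise covariance Cn; it checks that the direct evaluation agrees with the sum of squared projections of f' onto the eigenvectors of Cn, each weighted by the inverse eigenvalue.

import numpy as np

rng = np.random.default_rng(1)
N = 4
fprime = rng.normal(size=N)                  # hypothetical tuning-derivative vector f'(s)

# a random valid noise covariance, normalized to unit variances (a correlation matrix)
A = rng.normal(size=(N, N))
C = A @ A.T
d = np.sqrt(np.diag(C))
Cn = C / np.outer(d, d)

# direct evaluation of f'^T (Cn)^{-1} f'
I_direct = fprime @ np.linalg.solve(Cn, fprime)

# the same quantity as projections onto the eigenbasis of Cn, weighted by 1/eigenvalue
lam, V = np.linalg.eigh(Cn)
I_eig = np.sum((V.T @ fprime) ** 2 / lam)

print(I_direct, I_eig)                       # the two agree up to round-off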

This invites the question of whether a simpler way to obtain the results in our paper wouldn't be to consider how covariance eigenvectors and eigenvalues could be manipulated more directly. For example, if one could simply “rotate” the eigenvectors of the covariance matrix out of the signal direction – or shrink the eigenvalues in that direction – one would necessarily improve coding performance. So why don't we simply do this when exploring spaces of covariance matrices? The reason is that these eigenvalue and eigenvector manipulations are not as easy to enact as they might at first sound (to us, and possibly to the reader). Recall that we asked how noise correlations affect coding subject to the specific constraint that the noise variance of each neuron is fixed, which translates in general to rather complex constraints on the eigenvalues and eigenvectors. For example, the eigenvalues of a fixed-diagonal covariance matrix cannot be equivalently described by simply having a fixed sum (which is a necessary condition for the diagonals to be constant, but is not a sufficient one). These facts limit the insights that a direct approach to adjusting eigenvalues and eigenvectors can have for our problem, and emphasize the non-trivial nature of our results.

An exception comes, for example, in special cases when the covariance matrix has a circulant structure, and consequently always has the Fourier basis for eigenvectors. These cases include many situations considered in the literature [8], [10]. For contrast, the covariance matrices we studied were allowed to change freely, as long as the diagonals remained fixed.

Limitations and extensions

We have developed a rich picture of how correlated noise impacts population coding. For our results on noise cancellation in particular, this was done by allowing noise correlations to be chosen from the largest mathematically possible space (i.e., the entire spectrahedron). This describes the fundamental structure of the problem at hand, but are conclusions derived in this way important for biology? It is not hard to imagine many biological constraints that may further limit the range of possible noise correlations (e.g., limits on the strength of recurrent connections or shared inputs). On the one hand, the likelihood that the underlying phenomena could be found in biological systems seems increased by the fact that many different correlation matrices will suffice for noise free coding and that, as we discuss in Proposition 6, information levels appear to have some robustness under perturbations of the underlying correlation matrices.

However, care must still be taken in interpreting what we mean by “noise free.” As emphasized by, e.g., [8], [27], noise upstream from the neural population in question can never be removed in subsequent processing. Therefore, the “noise free” bound we discuss in Lemma 8 should not allow for a higher information level than that determined by this upstream noise. In some cases, this fact could lead to a consistency requirement on either the set of signal correlations Inline graphic, the set of allowed noise correlations Inline graphic, or both. To specify these constraints and avoid over-interpreting the abstract coding model that we study, one could combine an explicit mechanistic model with the present approach.

On another note, we have asked what noise correlations allow for linear decoders to best recover the stimulus from the set of neural population responses. At the same time, there is reason to be wary of linear decoders [28] (see also [16]), as they might miss significant information that is only accessible via a non-linear read-out. Furthermore, given the non-linearity inherent in dendritic processing and spike generation [29], there is added motivation to consider information without assuming linearity.

Furthermore, we have herein restricted ourselves to asking about pairwise noise correlations, while there are many studies that identify higher-order correlations (HOC) in neural data [30], [31], and some numerical results [32] that hint at when those HOC are beneficial for coding. In light of this study, it is interesting to ask whether we can derive a similarly general theory for HOC, and to investigate how the optimal pairwise and higher-order correlations interrelate. Note that this issue is closely related to the type of decoder that is assumed: the performance of a linear decoder (as measured by mean squared error) depends on the pairwise correlations, but not on HOC. Therefore the effect of HOC must be studied in the context of nonlinear coding.

Finally, we note that here we used an abstract coding model that evaluates information based on the statistics Inline graphic and so on. For generality, we made no assumptions on the structure of these statistics, or on any links among them. This suggests two questions for future work: whether an arbitrary set of such statistics is realizable in a constructive model of random variables, and whether there are any typical relationships between these statistics when they arise from tuned neural populations. As a preliminary investigation, we partially confirmed a positive answer to the first question, up to a “zero measure” set of statistics, under generic assumptions (data not shown).

Experimental implications

Recall that we observed that, in general, for a given set of tuning curves and noise variances, there will be a diverse family of noise correlation matrices that will yield good (optimal, or near-optimal) performance. This effect can be observed in Figs. 2, 3, and 5, as well as in our result about the dimension of the set of correlation matrices that yield (when it is possible) noise-free coding performance (Theorem 4).

At least compared with the alternative of a unique optimal noise correlation structure, our findings imply that it could be relatively “easy” for the biological system to find good correlation matrices. At the same time, since the set of good solutions is so large, we should not be surprised to see heterogeneity in the correlation structures exhibited by biological systems. Similar observations have previously been made in the context of neural oscillators: Prinz and colleagues [33] observed that neuronal circuits with a variety of different parameter values could produce the types of rhythmic activity patterns displayed by the crab stomatogastric ganglion. Consequently, there is much animal-to-animal variability in this circuit [34], even though the system's performance is strongly conserved.

At the same time, the potential diversity of solutions could present a serious challenge for analyzing data (cf. [26]). Notice how much, at least in the Inline graphic cases of Figs. 2 and 3, the performance can vary as one of the correlation coefficients is changed while the others are held fixed. If this phenomenon is general, it means that, in an experiment where we observe a (possibly small) subset of the correlation coefficients, it may be very hard to know how those correlations actually affect coding: the answer to that question depends strongly on all of the other (unobserved) correlation coefficients. As our recording technologies improve [35], and we make more use of optical methods, these “gaps” in our datasets will get smaller, and this issue may be resolved; further theoretical work to gauge the seriousness of the underlying issue is also needed. In the meanwhile, caution seems wise when analyzing noise correlations in sparsely sampled data.

Finally, recall that the optimal noise correlations will always lie on the boundary of the allowed region of such correlations. Importantly, what we mean by that boundary is flexible. It may be the mathematical requirement of positive semidefinite covariance matrices – the loosest possible requirement – or there may be tighter constraints that restrict the set of correlation coefficients. Since biophysical mechanisms determine noise correlations, we expect that there will be identifiable regions of correlation coefficients that are possible in a given circuit/system. Understanding those “allowed” regions will, we anticipate, be important for attempts to relate noise correlations to coding performance, and ultimately to help untangle the relationship between structure and function in sensory systems.

Methods

In the Methods below, we will first revisit the problem set-up, and define our metrics of coding quality. We will then prove the theorems from the main text. Finally, we will provide the details of our numerical examples. A summary of our most frequently used notation is listed in Table 1.

Summary of the problem set-up

We consider populations of neurons that encode a stimulus Inline graphic by their noisy responses Inline graphic. For simplicity, we will suppress the vector notation in the Methods. Unless otherwise stated, most of our results apply equally well to either scalar or multi-dimensional stimuli.

The mean activity or “tuning” of the neurons is described by Inline graphic. In the case of scalar stimuli, this corresponds to the notion of a tuning curve. For more complex stimuli, this is more aligned with the idea of a receptive field.

The trial-to-trial noise in Inline graphic, given a fixed stimulus, is described by the conditional covariance Inline graphic (the superscript Inline graphic denotes “noise”). In particular, Inline graphic are the noise variances of the individual neurons.

We ask questions of the following type: given fixed tuning curves Inline graphic and noise variances Inline graphic, how does the choice of noise covariance structure Inline graphic, Inline graphic affect linear Fisher information Inline graphic (see Section “Defining the information quantities, signal and noise correlations”)?

Besides the local information measure Inline graphic that quantifies coding near a specific stimulus, we also considered global measures that describe overall coding of the entire ensemble of stimuli. These are Inline graphic and Inline graphic, described in Section “Defining the information quantities, signal and noise correlations”. For these quantities, the relevant noise covariance is Inline graphic. We overload the notation with Inline graphic in these global coding contexts. The optimization problem can then be identically stated for Inline graphic and Inline graphic.

Defining the information quantities, signal and noise correlations

Linear Fisher information

Linear Fisher information quantifies how accurately the stimulus near a value Inline graphic can be decoded by a local linear unbiased estimator, and is given by

graphic file with name pcbi.1003469.e243.jpg (5)

In the case of a Inline graphic-dimensional stimulus, the same definition holds, with

graphic file with name pcbi.1003469.e245.jpg (6)

In order for Inline graphic to be defined by Eq. (5), we assume Inline graphic is invertible and hence positive definite: Inline graphic. It can be shown that Inline graphic is the (attainable) lower bound of the covariance matrix of the error of any local linear unbiased estimator. Here the term lower bound is used in the sense of positive semidefiniteness, that is the ordering Inline graphic if and only if Inline graphic. To obtain a scalar information quantity, we consider Inline graphic and also denote this by Inline graphic if not stated otherwise.
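As a numerical check of this interpretation, the short Python sketch below simulates Gaussian responses with a hypothetical tuning-derivative vector and noise covariance (assuming the standard scalar-stimulus form I = f'^T (Cn)^{-1} f' of Eq. (5)), decodes them with a locally optimal unbiased linear readout, and verifies that the estimator's variance matches the bound 1/I.

import numpy as np

rng = np.random.default_rng(2)
N, trials = 6, 200_000
fprime = rng.normal(size=N)                            # hypothetical local tuning slopes f'(s0)
A = rng.normal(size=(N, N))
Cn = A @ A.T + N * np.eye(N)                           # a positive definite noise covariance

I_lin = fprime @ np.linalg.solve(Cn, fprime)           # linear Fisher information
w = np.linalg.solve(Cn, fprime) / I_lin                # locally optimal unbiased linear readout

# simulate zero-mean response fluctuations at s0 and decode the deviation (true value: 0)
noise = rng.multivariate_normal(np.zeros(N), Cn, size=trials)
s_hat = noise @ w
print("1/I_F,lin:", 1.0 / I_lin, " empirical estimator variance:", s_hat.var())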

Optimal linear estimator

To quantify the global ability of the population to encode the stimulus (instead of locally, as for discrimination tasks involving small deviations from a particular stimulus value), we follow [18] and consider a linear estimator of the stimulus, given responses Inline graphic:

graphic file with name pcbi.1003469.e255.jpg (7)

with fixed parameters Inline graphic and Inline graphic unchanged with Inline graphic. The set of readout coefficients Inline graphic that minimize the mean square error for a scalar random stimulus Inline graphic, i.e.

graphic file with name pcbi.1003469.e261.jpg (8)

can be solved analytically as in [18], yielding:

graphic file with name pcbi.1003469.e262.jpg (9)

where

graphic file with name pcbi.1003469.e263.jpg (10)

and Inline graphic is a column vector with entries Inline graphic. Here the expectation Inline graphic generally means averaging over both noise and stimulus (except in Inline graphic, where averaging is only over the stimulus).

For multidimensional stimuli Inline graphic, similar to the case for linear Fisher information, the lower bound (in sense of positive semidefiniteness) of the error covariance Inline graphic is given by Inline graphic. Here Inline graphic is extended to form a matrix

graphic file with name pcbi.1003469.e272.jpg (11)

Furthermore, a corresponding lower bound for the sum of squared errors Inline graphic is the scalar version Inline graphic.

When minimizing the OLE error with respect to noise correlations, Inline graphic, Inline graphic and Inline graphic are constants with respect to the optimization. Minimizing OLE error is therefore equivalent to maximizing the second term above, given by Inline graphic. This motivates us to define what we call “the information for OLE”, which is simply the second term (above) — i.e., the term that is subtracted from the signal variance to yield the OLE error. Specifically,

graphic file with name pcbi.1003469.e279.jpg (12)

Thus, when Inline graphic is large, the decoding error is small, and vice versa. Comparing with the expression for Inline graphic, we see a similar mathematical structure, which will enable almost identical proofs of our theorems for both of these measures of coding performance.

Similar to Inline graphic, we need Inline graphic to be invertible in order to calculate Inline graphic. Since the signal covariance matrix Inline graphic does not change as we vary Inline graphic, this requirement is easy to satisfy. In particular, we assume Inline graphic is invertible (Inline graphic), and thus for all consistent – i.e. positive semidefinite – Inline graphic, Inline graphic, so that Inline graphic is invertible.
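For a self-contained check of these formulas, the Python sketch below simulates a hypothetical linear-Gaussian encoding model (a construction of ours, chosen only for illustration), forms the OLE readout A = (Cn + Cmu)^{-1} L as in Eq. (9), and verifies that the empirical mean squared error matches Var(s) minus I_OLE, taking I_OLE = L^T (Cn + Cmu)^{-1} L as our reading of Eq. (12).

import numpy as np

rng = np.random.default_rng(3)
N, trials = 4, 500_000

# hypothetical encoding model: mean response mu(s) = b*s, additive Gaussian noise
b = rng.normal(size=N)                                 # tuning slopes
var_s = 1.5                                            # stimulus variance
A_n = rng.normal(size=(N, N))
Cn = A_n @ A_n.T + N * np.eye(N)                       # noise covariance

s = rng.normal(0.0, np.sqrt(var_s), size=trials)
r = np.outer(s, b) + rng.multivariate_normal(np.zeros(N), Cn, size=trials)

Cmu = var_s * np.outer(b, b)                           # covariance of the mean response
L = var_s * b                                          # cross-covariance Cov(r, s)

A = np.linalg.solve(Cn + Cmu, L)                       # OLE readout (the intercept is zero here)
I_ole = L @ A                                          # "information for OLE", cf. Eq. (12)

mse_empirical = np.mean((r @ A - s) ** 2)
print("Var(s) - I_OLE:", var_s - I_ole, " empirical OLE mean squared error:", mse_empirical)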

Mutual information for Gaussian distributions

While the OLE and the linear Fisher information assume that a linear read-out of the population responses is used to estimate the stimulus, one may also be interested in how well the stimulus could be recovered by more sophisticated, nonlinear estimators. Mutual information, based on Shannon entropy, is a useful quantity of this sort. It has many desirable properties consistent with the intuitive notion of “information”, and we will use it to quantify how well a non-linear estimator could recover the stimulus.

Assuming that the joint distribution of Inline graphic is Gaussian (Inline graphic can be multidimensional), the mutual information has a simple expression

graphic file with name pcbi.1003469.e294.jpg (13)

The quantities above are the same as in the definitions of Inline graphic. Moreover, Inline graphic is taken to base Inline graphic, and hence the information is in units of nats. To convert to bits, one must simply divide our Inline graphic values by Inline graphic.

There is a consistency constraint that must be satisfied by any joint distribution of Inline graphic, namely that

graphic file with name pcbi.1003469.e301.jpg (14)

This guarantees that Inline graphic is always defined and real (but could be Inline graphic). To keep Inline graphic finite, one needs to further assume Inline graphic, which is equivalent to Inline graphic. This can be seen by rewriting mutual information while exchanging the position of the two variables (since mutual information is symmetric),

graphic file with name pcbi.1003469.e307.jpg

It is easy to see that the formula contains terms similar to those in Inline graphic and Inline graphic. In the scalar stimulus case, since Inline graphic is an increasing function, maximizing Inline graphic is equivalent to maximizing Inline graphic. In fact, the leading term in the Taylor expansion of Inline graphic with respect to Inline graphic is Inline graphic, which is proportional to Inline graphic. In the case of multivariate stimuli Inline graphic, we note that the operation Inline graphic preserves the ordering defined in the positive semidefinite sense, i.e. Inline graphic. This close relationship suggests a way of transforming Inline graphic to a scale of information in nats (or bits) comparable with Inline graphic.
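The following Python sketch evaluates the Gaussian mutual information from both sides for a hypothetical jointly Gaussian stimulus-response pair, assuming the standard expression I = (1/2) ln det(Cmu + Cn) - (1/2) ln det(Cn) for a stimulus-independent noise covariance (the exact form of Eq. (13) is not reproduced here); it illustrates the symmetry invoked above and the conversion from nats to bits.

import numpy as np

rng = np.random.default_rng(4)
N = 4

# hypothetical jointly Gaussian (s, r): scalar s, linear tuning, additive noise
b = rng.normal(size=N)
var_s = 2.0
A_n = rng.normal(size=(N, N))
Cn = A_n @ A_n.T + N * np.eye(N)
Cmu = var_s * np.outer(b, b)
L = var_s * b

# I(s; r) computed from the response side: H(r) - H(r|s), in nats
_, ld_tot = np.linalg.slogdet(Cmu + Cn)
_, ld_n = np.linalg.slogdet(Cn)
I_resp = 0.5 * (ld_tot - ld_n)

# ... and from the stimulus side: H(s) - H(s|r)
var_s_given_r = var_s - L @ np.linalg.solve(Cmu + Cn, L)
I_stim = 0.5 * np.log(var_s / var_s_given_r)

print(I_resp, I_stim, "in bits:", I_resp / np.log(2))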

Signal and noise correlations

Given the noise covariance matrix Inline graphic one can normalize it as usual by its diagonal elements (variances) to obtain correlation coefficients

graphic file with name pcbi.1003469.e323.jpg (15)

We next discuss signal correlations, which describe how similar the tuning of a pair of neurons is. For linear Fisher information, we define signal correlations as

graphic file with name pcbi.1003469.e324.jpg (16)

Here Inline graphic is the sensitivity vector describing how the mean response of neuron Inline graphic changes with Inline graphic. With the above normalization, Inline graphic takes values between Inline graphic and Inline graphic.

For the other two information measures we use, Inline graphic and Inline graphic, a similar signal correlation can be defined. Here, we first define analogous tuning sensitivity vectors Inline graphic for each neuron, which will replace Inline graphic in Eq. (16). These vectors are

graphic file with name pcbi.1003469.e335.jpg (17)

for Inline graphic and Inline graphic respectively. Here Inline graphic is the diagonal matrix of noise variances, and Inline graphic.

The definitions of signal correlations above are chosen so that they are tied directly to the concept of the sign rule, as demonstrated in the proof of Theorem 1. As a consequence, for the case of Inline graphic and Inline graphic, signal correlations are defined through the population readout vector. This has an important implication that we note here. Consider a case where only a subset of the total population is “read out” to decode a stimulus. Then, the population readout vector — and hence the signal correlations defined above — could vary in magnitude and even possibly change signs depending on which neurons are included in the subset.

A different definition of signal correlations for OLE is sometimes used in the literature, which we denote by Inline graphic. Naturally, one should not expect our sign rule results to apply exactly under this definition. However, when we redid our plots of signal vs. noise correlations using Inline graphic for our major numerical example (Fig. 5 ABC), we observed the same qualitative trend (data not shown). This reflects the fact that, at least in this specific example, the signal correlations defined in the two ways are positively correlated. Understanding how general this phenomenon is would require further studies taking into account how the relevant statistics (Inline graphic, L, etc.) are generated from tuning curves or neuron models.

We next define the notion of the magnitude, or strength, of correlations, which came up throughout the paper. In particular, in Section “Heterogeneously tuned neural populations”, we considered restrictions on the magnitudes of noise correlations when finding their optimal values. We proceed as follows. Since Inline graphic, the list of all pairwise correlations of the population can be regarded as a single point in Inline graphic. If not stated otherwise, the vector 2-norm in that space (Euclidean norm) is what we call the “strength of correlations:”

graphic file with name pcbi.1003469.e347.jpg (18)
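As a concrete illustration of these definitions, the Python sketch below converts a noise covariance matrix into correlation coefficients (cf. Eq. (15)), computes signal correlations as normalized inner products of hypothetical sensitivity vectors (in the spirit of Eq. (16); the exact normalization used there is not reproduced), and evaluates the Euclidean “strength of correlations” over the distinct pairs (cf. Eq. (18)).

import numpy as np

rng = np.random.default_rng(5)
N, K = 5, 2

# noise covariance -> correlation coefficients, cf. Eq. (15)
A = rng.normal(size=(N, N))
Cn = A @ A.T + N * np.eye(N)
d = np.sqrt(np.diag(Cn))
rho_noise = Cn / np.outer(d, d)

# hypothetical sensitivity vectors (rows: neurons, columns: stimulus dimensions)
F = rng.normal(size=(N, K))
norms = np.linalg.norm(F, axis=1)
rho_signal = (F @ F.T) / np.outer(norms, norms)        # normalized inner products

# "strength of correlations": Euclidean norm over the N(N-1)/2 distinct pairs, cf. Eq. (18)
iu = np.triu_indices(N, k=1)
strength = np.linalg.norm(rho_noise[iu])
print(strength)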

Proof of Theorem 1: The generality of the sign rule

We will now restate and then prove Theorem 1, first for Inline graphic and then for Inline graphic and Inline graphic.

Theorem 1. If, for each pair of neurons, the signal and noise correlations have opposite signs, the linear Fisher information is greater than the case of independent noise (trial-shuffled data). In the opposite situation where the signs are the same, the linear Fisher information is decreased compared to the independent case, in a regime of very weak correlations. Similar results hold for I OLE and I mut;G, with a modified definition of signal correlations given in Section “Defining the information quantities, signal and noise correlations”.

The proof proceeds by showing that information increases along the direction indicated by the sign rule, and that the information quantities are convex, so that information is guaranteed to increase monotonically along that direction.

Proof. Consider linear Fisher information

graphic file with name pcbi.1003469.e351.jpg (19)

Let Inline graphic be the diagonal part of Inline graphic, corresponding to (noise) variance for each neuron. We change the off-diagonal entries of Inline graphic along a certain direction Inline graphic in Inline graphic and consider a parameterization of the resultant covariance matrix, with parameter Inline graphic: Inline graphic. We evaluate the directional derivative (Inline graphic) of Inline graphic at Inline graphic,

graphic file with name pcbi.1003469.e362.jpg (20)

Here Inline graphic, and we have used the identity Inline graphic and the fact Inline graphic. Recalling the definition of signal correlations in Eq. (16), if the sign of Inline graphic is chosen to be opposite to the sign of Inline graphic for all Inline graphic, then Eq. (20) ensures that the directional derivative Inline graphic at Inline graphic.

We now derive a global consequence of this local derivative calculation. Inline graphic as a function of Inline graphic has Inline graphic. Since Inline graphic is smooth, there exists Inline graphic, such that for Inline graphic, Inline graphic. For corresponding Inline graphic, applying the mean value theorem, we have Inline graphic. Similarly, for the opposite case where all the signs of the noise correlations are the same as the signs of Inline graphic, the information will be smaller than the independent case (at least for weak enough correlations). This proves the local “sign rule”.

Thus, at least for small noise correlations, choosing noise correlations that oppose signal correlations will always yield higher information values than the case of uncorrelated noise. To prove the “global” version of this theorem — that opponent signal and noise correlations always yield better coding than does independent noise — we will need to establish the convexity of Inline graphic. This is done in Theorem 2.

Note that, as we will soon prove, Inline graphic is a convex function of Inline graphic, and hence Inline graphic is increasing with Inline graphic. This means that the Inline graphic from our prior argument can be made arbitrarily large, and the same result – that performance improves when noise correlations are added, so long as they lie along this direction – will hold, provided that Inline graphic is still physically realizable. Thus, the improvement over the independent case is guaranteed globally for any magnitude of noise correlations.

Note that the arguments above do not guarantee that the globally optimal noise correlation structure will follow the sign rule. Indeed, we have seen concrete examples of this in Figs. 2 and 3.

Remark 1. From Eq. (20), the gradient (steepest uphill direction) of Inline graphic evaluated with independent noise Inline graphic is Inline graphic.

Remark 2. The same result can be shown for Inline graphic and Inline graphic, replacing Inline graphic with Inline graphic and Inline graphic, respectively, in the definition of Inline graphic in Eq. (16). The gradients are Inline graphic and Inline graphic, respectively, where Inline graphic is Inline graphic-th row of Inline graphic, and Inline graphic.
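The sign rule is also easy to check numerically. The sketch below assumes I = f'^T (Cn)^{-1} f' with unit noise variances, and takes the signal correlation between neurons i and j to have the sign of f'_i f'_j (as implied by the normalization in Eq. (16) for a scalar stimulus). Weak noise correlations with signs opposite to the signal correlations raise the information above the independent case, while same-sign correlations of the same magnitude lower it.

import numpy as np

rng = np.random.default_rng(6)
N = 6
fprime = rng.normal(size=N)

def fisher(C):
    return fprime @ np.linalg.solve(C, fprime)

I_indep = fisher(np.eye(N))

# weak correlations opposite in sign to sgn(f'_i f'_j) (sign-rule-obeying)
eps = 0.05
C_sr = -eps * np.sign(np.outer(fprime, fprime))
np.fill_diagonal(C_sr, 1.0)

# the same magnitudes with the opposite, sign-rule-violating signs
C_anti = eps * np.sign(np.outer(fprime, fprime))
np.fill_diagonal(C_anti, 1.0)

assert np.all(np.linalg.eigvalsh(C_sr) > 0) and np.all(np.linalg.eigvalsh(C_anti) > 0)
print("independent:  ", I_indep)
print("SR-obeying:   ", fisher(C_sr))      # larger than the independent case
print("SR-violating: ", fisher(C_anti))    # smaller, for weak correlations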

Proof of Theorem 2: Optima lie on boundaries

We begin by restating Theorem 2, which we then prove first for Inline graphic and then for Inline graphic and Inline graphic.

Theorem 2. The optimal Cn that maximize information must lie on the boundary of the region of correlations considered in the optimization.

We will show that Inline graphic is a convex function of Inline graphic and hence it will either attain its maximum value only on the boundary of the allowed region, or it will be uniformly constant. The latter is a trivial case that only happens when Inline graphic, as we see below.

Proof. To show that a function is convex, it is sufficient to show its second derivative along any linear direction is non-negative. For any constant direction Inline graphic of changing (off-diagonal entries of) Inline graphic, we consider a straight-line perturbation, Inline graphic parameterized by Inline graphic. Taking the derivative of Inline graphic with respect to Inline graphic,

graphic file with name pcbi.1003469.e415.jpg (21)

We have used that Inline graphic. Let Inline graphic. Taking another derivative gives

graphic file with name pcbi.1003469.e418.jpg (22)

The inequality follows from Lemma 7 (see below) and the fact that Inline graphic is positive definite. Also, note that Inline graphic.

For the case when Inline graphic is constant over the region, using Proposition 10 (below), Inline graphic for any direction of change Inline graphic. Letting Inline graphic, Inline graphic, we see that the Inline graphic-th row of Inline graphic must be 0. This leads to Inline graphic and, since Inline graphic, to Inline graphic. This was the claim in the beginning. In other words, in the case where Inline graphic is constant with respect to the noise correlations, the optimal read-out is zero, regardless of the neurons' responses. With the exception of this (trivial) case, the optimal coding performance is obtained when the noise correlation matrix lies on a boundary of the allowed region.

Lemma 7. (Linear algebra fact) For any positive semidefinite matrix Inline graphic , and any matrix Inline graphic , Inline graphic (assuming the dimensions match for matrix multiplications) is positive semidefinite and hence Inline graphic . If “ = ” is attained, then Inline graphic .

Remark 3. When Inline graphic i.e. positive definite, Inline graphic leads to Inline graphic as Inline graphic is invertible.

Proof. For any vector Inline graphic (with the same dimension as the number of columns in Inline graphic), Inline graphic since Inline graphic. Thus, by definition, Inline graphic, and therefore Inline graphic.

For the second part, if Inline graphic, all the eigenvalues of Inline graphic must be 0 (since none of them can be negative as Inline graphic), hence Inline graphic. This in fact requires Inline graphic. To see this, let Inline graphic be an orthogonal diagonalization of Inline graphic. For any vector Inline graphic as above, Inline graphic. Since the eigenvalues Inline graphic are non-negative, let Inline graphic be the diagonal matrix with the square roots of Inline graphic. We have

graphic file with name pcbi.1003469.e459.jpg (23)

Therefore the vector Inline graphic and Inline graphic. Since Inline graphic can be any vector, we must have Inline graphic.

Remark 4. Because of the similarities in the formulae for Inline graphic and Inline graphic, the same property can be shown for Inline graphic. In order for Inline graphic to be invertible, Inline graphic is only defined over the open set of positive definite Inline graphic. We therefore assume the closure of the allowed region is contained within this open set Inline graphic to state the boundary result.

A parallel version of Theorem 2 can also be established for Inline graphic, as we next show.

Proof of Theorem 2 for I mut;G. Again consider the linear parameterization Inline graphic along a direction Inline graphic, as defined above. Let Inline graphic. The consistency constraint in Eq. (14) assures Inline graphic. To keep Inline graphic finite, we further assume Inline graphic. Then, the derivative of Inline graphic with respect to Inline graphic is

graphic file with name pcbi.1003469.e480.jpg (24)

where we have used the identity Inline graphic. The second derivative is thus

graphic file with name pcbi.1003469.e482.jpg
graphic file with name pcbi.1003469.e483.jpg
graphic file with name pcbi.1003469.e484.jpg
graphic file with name pcbi.1003469.e485.jpg

Here Inline graphic is the identity matrix, Inline graphic, Inline graphic and Inline graphic as defined below Eq. (9). Inline graphic being positive definite allows us to split it into its square root Inline graphic. Moreover, the identity Inline graphic, for any matrices Inline graphic, and Inline graphic, is used in deriving the last line in the above equation. For the last inequality, we apply Lemma 7 to the two terms with Inline graphic and Inline graphic being positive semidefinite.

We have thus shown that Inline graphic is convex. For the special case that Inline graphic is constant, Proposition 10 shows Inline graphic. With the same argument as for Inline graphic, we observe that, in this (trivial) case Inline graphic.
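A numerical counterpart of the convexity argument is sketched below (again assuming the scalar-stimulus form I = f'^T (Cn)^{-1} f' and a hypothetical tuning vector): scanning a line of unit-diagonal covariance matrices, the discrete second differences of the information are non-negative, so its maximum over the scanned interval is attained at one of the endpoints.

import numpy as np

rng = np.random.default_rng(10)
N = 3
fprime = rng.normal(size=N)

def fisher(C):
    return fprime @ np.linalg.solve(C, fprime)

# a random symmetric direction B with zero diagonal, scaled so that I + t*B stays
# positive definite over the whole scanned interval
B = rng.normal(size=(N, N))
B = (B + B.T) / 2
np.fill_diagonal(B, 0.0)
B /= np.linalg.norm(B, 2)

ts = np.linspace(-0.5, 0.5, 201)
I_vals = np.array([fisher(np.eye(N) + t * B) for t in ts])

print("min second difference:", np.diff(I_vals, 2).min())        # >= 0 up to round-off
print("argmax index:", np.argmax(I_vals), "(an endpoint is index 0 or", len(ts) - 1, ")")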

Proof of Theorem 3: Conditions on the noise covariance matrix, under which noise-free coding is possible

We begin by showing that, for a given set of tuning curves, the maximum possible information – which may or may not be attainable in the presence of noise – is that which would be achieved if there were no noise in the responses. This is the content of Lemma 8. Next, we will introduce Lemma 9, which is a useful linear-algebraic fact that we will use repeatedly in our proofs.

We will then prove Theorem 3, which provides the conditions under which such noise-free performance can be obtained. One direction of the proof of Theorem 3 (sufficiency) is straightforward, while the other direction (necessity) relies on the observation of several conditions that are equivalent to the one in the theorem. We prove these equivalences in Proposition 10.

For Theorem 3, we will only consider Inline graphic, since Inline graphic and Inline graphic will typically be infinity in the noise-free case (Inline graphic becomes singular). If one takes all instances of infinite information as “equally optimal,” a version of Theorem 3 can also be obtained; moreover, the condition in Theorem 3 becomes a sufficient but not necessary condition for infinite information.

Lemma 8 (Upper bound by noise-free information).

graphic file with name pcbi.1003469.e506.jpg (25)

Here the noise-free information Inline graphic refers to that which is obtained when plugging in Inline graphic in place of Inline graphic in Eq. (12).

Proof. This follows essentially from the consistency between the information quantity and the positive semidefinite ordering of covariance matrices. First, we write

graphic file with name pcbi.1003469.e510.jpg (26)

Then, we note the fact that for two positive definite matrices Inline graphic, Inline graphic if and only if Inline graphic. From this, we have Inline graphic. Finally, applying Lemma 7 yields Inline graphic.

Lemma 9 ( Useful linear algebra fact ). If, for any Inline graphic , Inline graphic , and Inline graphic , Inline graphic , then Inline graphic .

Proof. Inline graphic.

Proposition 10. (Equivalent conditions used in proving the noise-free coding Theorem 3).

Along a certain direction Inline graphic , the following conditions are equivalent.

graphic file with name pcbi.1003469.e523.jpg (27)

The same also holds for Inline graphic and Inline graphic .

Proof for I OLE. “Inline graphic”:

We again consider parametrized deviations from Inline graphic, Inline graphic for some constant matrix B. Let Inline graphic, and recall (Eq. (22)),

graphic file with name pcbi.1003469.e530.jpg (28)

Since Inline graphic is positive definite, according to the remark after Lemma 7, we have Inline graphic.

Inline graphic)”: If Inline graphic, by Lemma 9, Inline graphic. We have Inline graphic, for all Inline graphic in the allowed region, and hence Inline graphic.

Inline graphic”: immediate.

This concludes the proof for Inline graphic.

Proof for I F,lin. For Inline graphic, we further assume Inline graphic to avoid infinite information. Identical arguments will prove the properties above, where Inline graphic is replaced by Inline graphic.

Proof for I mut, G. For Inline graphic, we similarly assume Inline graphic (as defined in the proof of Theorem 2). Let Inline graphic, then Inline graphic,

graphic file with name pcbi.1003469.e549.jpg
graphic file with name pcbi.1003469.e550.jpg

It is easy to see Inline graphic. When Inline graphic holds, each of the two terms must be 0 (using Lemma 7): as we discussed in the proof of Theorem 2 for Inline graphic (above), each of the terms is non-negative, so if their sum is Inline graphic, then each term must individually be Inline graphic. According to the remark after Lemma 7, the second term being 0 indicates that Inline graphic or Inline graphic, which is Inline graphic.

If Inline graphic holds, by Lemma 9, we have Inline graphic. We have Inline graphic, for all Inline graphic in the allowed region, and hence Inline graphic. Similarly Inline graphic. This proves the property for Inline graphic.

Theorem 3. A covariance matrix Cn attains the noise-free bound for OLE information (and hence is optimal), if and only if CnA = Cn(Cμ)−1L = 0. Here L is the cross-covariance between the stimulus and the responses (Eq. (11)), Cμ is the covariance of the mean response (Eq. (10)), and A is the linear readout vector for OLE, which, when the condition is satisfied, is the same as in the noise-free case: A = (Cn+Cμ)−1L = (Cμ)−1L.

Proof. If Inline graphic, then Lemma 9 implies that Inline graphic, which means that Inline graphic, using the definition in Eq. (12).

For the other direction of the theorem, consider a function of Inline graphic, Inline graphic, whose values at the endpoints are equal, according to saturation of the information bound. The mean value theorem assures that there exists a Inline graphic such that

graphic file with name pcbi.1003469.e572.jpg (29)

Since Inline graphic is positive semidefinite, according to Lemma 7, Inline graphic. Now using Lemma 9, we have that Inline graphic, and the readout vector Inline graphic.
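A small worked example of the noise-canceling condition may be useful. In the hypothetical three-neuron construction below, Cmu and L are chosen so that the noise-free readout is A = (Cmu)^{-1} L = (1, 1, 1)^T; a unit-variance noise covariance with all pairwise correlations equal to -1/2 then satisfies Cn A = 0, and I_OLE with this noise (with the same reading of Eq. (12) as in the earlier sketch) equals its noise-free value, as Theorem 3 asserts.

import numpy as np

# hypothetical 3-neuron example with noise-free OLE readout A = (Cmu)^{-1} L = (1, 1, 1)
Cmu = np.array([[2.0, 0.5, 0.5],
                [0.5, 2.0, 0.5],
                [0.5, 0.5, 2.0]])
A = np.ones(3)
L = Cmu @ A                                            # so that (Cmu)^{-1} L = A exactly

# a unit-variance noise covariance with Cn @ A = 0: all pairwise correlations equal -1/2
Cn = 1.5 * np.eye(3) - 0.5 * np.ones((3, 3))
assert np.min(np.linalg.eigvalsh(Cn)) > -1e-12         # positive semidefinite
print("Cn @ A =", Cn @ A)                              # the zero vector: Theorem 3's condition

I_noise_free = L @ np.linalg.solve(Cmu, L)             # I_OLE with no noise at all
I_with_noise = L @ np.linalg.solve(Cmu + Cn, L)        # I_OLE with this particular noise
print(I_noise_free, I_with_noise)                      # equal: noise-free performance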

Proof of Theorem 4: Conditions on tuning curves and variance, under which noise-free coding performance is possible

Next, we will restate, and then prove, Theorem 4. The proof will require using geometric ideas in Lemma 11, which we will state and prove below.

Theorem 4. For a scalar stimulus, let qi = Inline graphic, i = 1,…,N, where A = (Cμ)−1L is the readout vector for OLE in the noise-free case. Noise correlations may be chosen so that coding performance matches that which could be achieved in the absence of noise if and only if

graphic file with name pcbi.1003469.e578.jpg (1)

When “<” is satisfied, all optimal correlations attaining the maximum form a Inline graphic-dimensional convex set on the boundary of the spectrahedron. When “=” is attained, the dimension of that set is Inline graphic, where N0 is the number of zeros in {qi}.

The proof is based on the condition in Theorem 3. After taking several invertible transforms of the equation, the problem of finding a noise-canceling Inline graphic is transformed to that of finding a set of Inline graphic vectors, whose lengths are specified by Inline graphic, that sum to zero (the vectors form a closed loop when connected consecutively). This allows us to take a geometrical point of view, in which inequality Eq. (1) becomes the triangle inequality. This will prove the “necessary” part of the Theorem. Lemma 11 shows the opposite direction, by inductively constructing the set of vectors that sum to zero.

This procedure will yield one “particular” Inline graphic with the noise-canceling property. Very much like finding all general solutions of an ODE, we then add to our particular solution an arbitrary homogeneous solution, which belongs to a vector space of dimension Inline graphic. In order for our perturbed solution, at least for small enough perturbations, to still be positive semidefinite, the particular Inline graphic we start with must be generic. In other words, it must satisfy a rank condition, which is guaranteed by the construction in Lemma 11. We can then conclude that the set of all noise canceling Inline graphic forms a linear segment with the dimension of the space of homogeneous solutions.

Finally, special treatments are given for the cases of “Inline graphic” in Eq. (1), as well as cases where some Inline graphic are 0.

Proof. To establish the necessity direction of the Theorem, first let Inline graphic be a diagonal matrix with Inline graphic or Inline graphic, where vector Inline graphic. Note that

graphic file with name pcbi.1003469.e594.jpg (30)

Let Inline graphic, a positive semidefinite matrix with diagonal Inline graphic.

Inline graphic can be diagonalized by an orthogonal matrix Inline graphic, Inline graphic. Without loss of generality, further assume that the first Inline graphic diagonal elements of Inline graphic are positive, with the rest being 0, where Inline graphic. Let Inline graphic be the first Inline graphic block of Inline graphic, and Inline graphic be the first Inline graphic rows of Inline graphic. Then we have

graphic file with name pcbi.1003469.e609.jpg (31)

Let Inline graphic, a Inline graphic matrix, and Inline graphic be the Inline graphic-th column. As Inline graphic, the 2-norm of vector Inline graphic is Inline graphic. Let Inline graphic be the maximum of Inline graphic,

graphic file with name pcbi.1003469.e619.jpg (32)

This concludes the necessary direction of our proof.

To establish sufficiency, we first focus on the case of “Inline graphic” and all Inline graphic. We will construct a generic Inline graphic that has rank Inline graphic, satisfying Inline graphic. We will essentially reverse the direction of the arguments in Eqs. (30)–(32). We will later deal with the “Inline graphic” case, and the case of Inline graphic for some Inline graphic.

Lemma 11 Let Inline graphic , Inline graphic be an orthonormal basis of Inline graphic . Given a set of positive Inline graphic satisfying “ Inline graphic ” in Eq. (1) , there exist Inline graphic vectors Inline graphic , such that Inline graphic , Inline graphic and the spanned linear subspace Inline graphic .

Proof. We prove this by induction. Inline graphic has to be at least 3 for the inequality to hold. For Inline graphic, this is the case of a triangle. There is a (unique) triangle Inline graphic, for which the length of the three sides Inline graphic, Inline graphic, Inline graphic are Inline graphic respectively. The altitude from Inline graphic intersects the line of Inline graphic at Inline graphic. Let Inline graphic be the origin of the coordinate system, with Inline graphic being the x-axis and aligned with Inline graphic, and the altitude Inline graphic being the y-axis aligned with Inline graphic. From such a picture, it is easy to verify the following: Inline graphic, Inline graphic, Inline graphic satisfies the lemma, where Inline graphic if Inline graphic lies within Inline graphic and Inline graphic otherwise.

For the case of Inline graphic, assume that Inline graphic is the largest of the Inline graphic. Because of the inequality, there will always exist some non-negative real number Inline graphic (not necessarily one of the Inline graphic) such that

graphic file with name pcbi.1003469.e665.jpg (33)

We can verify that the set Inline graphic satisfies the inequality as well. By the induction hypothesis, there exist vectors Inline graphic that span the space of Inline graphic, such that Inline graphic and Inline graphic.

Note the choice of Inline graphic also guarantees that Inline graphic can be the edge lengths of a triangle. Applying the result at Inline graphic, the three sides Inline graphic, Inline graphic, Inline graphic correspond to Inline graphic respectively. Let Inline graphic, Inline graphic. It is easy to verify that these Inline graphic satisfy the lemma.

Using the lemma, we have a set of Inline graphic. Stacking them as column vectors gives a matrix Inline graphic; moreover, Inline graphic. Let Inline graphic, which is positive semidefinite with diagonals Inline graphic. It is easy to show that Inline graphic, by comparing the null spaces of the matrices. Let Inline graphic, where Inline graphic is defined as above. Then Inline graphic.

Now consider the case where there are zeros in Inline graphic. Assume that the first Inline graphic entries contain all of the non-zero values. We apply the construction above for the first Inline graphic dimensions, and get a Inline graphic matrix such that Inline graphic, Inline graphic, where Inline graphic is part of Inline graphic with the first Inline graphic elements. The following block diagonal matrix

graphic file with name pcbi.1003469.e699.jpg (34)

satisfies Inline graphic and Inline graphic.

We have shown that for the “Inline graphic” case in the theorem, there is always a noise canceling Inline graphic. Consider the direction Inline graphic, in which off-diagonal elements of Inline graphic vary, while keeping Inline graphic (temporarily ignoring the positive semidefinite constraint). The set of all such Inline graphic form a linear subspace Inline graphic of Inline graphic, determined by the linear system Inline graphic. Since there are Inline graphic equations, the dimension of Inline graphic is at least Inline graphic.

In the “Inline graphic” case, there must be at least 3 non-zero Inline graphic in order for the triangle inequality to be satisfied in Eq. 1. We will choose these three Inline graphic to be Inline graphic. Consider a block of the coefficient matrix associated with the system Inline graphic (note that the entries of Inline graphic are considered to be unknown variables), that are columns corresponding to variables Inline graphic

graphic file with name pcbi.1003469.e721.jpg (35)

Performing Gaussian elimination on the columns of this matrix, we obtain the following matrix, which will have the same rank.

graphic file with name pcbi.1003469.e722.jpg (36)

This matrix – which determines the number of constraints that must be satisfied in order for Inline graphic – has rank Inline graphic, and hence Inline graphic is exactly Inline graphic.

For any direction in Inline graphic, we can always perturb the generic Inline graphic we found above by some finite amount Inline graphic, and still have Inline graphic be positive semidefinite. Let Inline graphic be the smallest non-zero eigenvalue of Inline graphic. Take any Inline graphic. For any vector Inline graphic, let Inline graphic be an orthogonal decomposition where Inline graphic is the projection along the direction of Inline graphic. Then

graphic file with name pcbi.1003469.e738.jpg (37)

This shows that the Inline graphic are positive semidefinite and that they form a set of the same dimension as Inline graphic. We can always take the admissible Inline graphic values to their extremes, and the resulting matrices are all the possible noise-canceling Inline graphic. For any Inline graphic, Inline graphic, and Inline graphic must be in Inline graphic. Note that the sets of positive semidefinite Inline graphic (spectrahedra) are convex. As a consequence, any point along the segment Inline graphic will be positive semidefinite. This shows we must have encompassed Inline graphic when considering the largest possible perturbations of Inline graphic, in any direction Inline graphic. Moreover, we note that the set of all noise-canceling Inline graphic is convex: if Inline graphic, Inline graphic, Inline graphic for any Inline graphic and Inline graphic is positive semidefinite, with the diagonal matching Inline graphic.

Thus, we have proved the claim about the dimension and convexity of the set of optimal correlations for the case of “Inline graphic” in Eq. (1).

Finally, for the special case of “Inline graphic” in Eq. (1), again first consider the case where all Inline graphic. As before, solving Inline graphic is equivalent to solving Inline graphic and there is a one-to-one correspondence between the two. Revisiting Eq. (32) in the proof above, the equality condition in the triangle inequality implies that Inline graphic all point along the same direction, and that Inline graphic is in the opposite direction, in order to cancel their sum. This fully determines Inline graphic, where Inline graphic, and

graphic file with name pcbi.1003469.e768.jpg (38)

It is easy to verify that Inline graphic, and hence there is a unique noise canceling Inline graphic.

For the case when there are Inline graphic 0's among the Inline graphic, assume that the first Inline graphic coordinates are non-zero, so that Inline graphic. Next, we write Inline graphic in block matrix form, with blocks of dimension Inline graphic and Inline graphic:

graphic file with name pcbi.1003469.e778.jpg (39)

Applying the previous argument from the Inline graphic case, there is a unique Inline graphic. Moreover, note that Inline graphic, following from the fact that Inline graphic in Eq. (38) has rank 1. Let Inline graphic be the orthogonal diagonalization and Inline graphic. Let Inline graphic be the identity matrix of dimension Inline graphic. Then we can take an orthogonal transform:

graphic file with name pcbi.1003469.e787.jpg
graphic file with name pcbi.1003469.e788.jpg

With the notation Inline graphic, the original problem Inline graphic is therefore equivalent to finding all Inline graphic and Inline graphic such that,

graphic file with name pcbi.1003469.e793.jpg (40)

while keeping the matrix in this equation positive semidefinite.

For any positive semidefinite matrix Inline graphic, it is easy to show that Inline graphic by considering the principal minor with indices Inline graphic, which must be non-negative. Note that since Inline graphic has only one non-zero diagonal entry, this forces the first Inline graphic columns of Inline graphic to be entirely 0. So we can rewrite the block matrix by dimension Inline graphic and Inline graphic as

graphic file with name pcbi.1003469.e802.jpg (41)

where Inline graphic is the Inline graphic-th column of Inline graphic. Since Inline graphic, we have Inline graphic. It can be verified that, as long as the block structure of Eq. (41) is satisfied, Eq. (40) is always true. The positive semidefinite constraint becomes the constraint that the lower block be positive semidefinite; in turn, this corresponds to a spectrahedron (and hence a convex set) of dimension Inline graphic. Note that this dimensionality and convexity will be preserved when we undo the invertible linear transforms performed in prior steps to obtain the noise-canceling Inline graphic.

Proof of Theorem 5: Probability that noise-free coding is possible

In this subsection, we will restate, and then prove, Theorem 5.

Theorem 5. If the Inline graphic defined in Theorem 4 are independent and identically distributed (i.i.d.) as a random variable X on Inline graphic then the probability

graphic file with name pcbi.1003469.e812.jpg (2)

Proof. We will use the following fact to establish a lower bound for the probability of the event in the theorem (below, we denote this event as Inline graphic).

graphic file with name pcbi.1003469.e814.jpg (42)

We choose the two events Inline graphic and Inline graphic as Inline graphic and Inline graphic. Note that Inline graphic implies C,

graphic file with name pcbi.1003469.e820.jpg (43)

the event of interest. We will then show that, for large populations, Inline graphic and Inline graphic, and thus Inline graphic.

For Inline graphic, by the law of large numbers, the average should converge to the expectation (which is a positive number), hence

graphic file with name pcbi.1003469.e825.jpg (44)

We next consider event B. Let the cumulative distribution function of Inline graphic be Inline graphic. Then the cumulative distribution function for Inline graphic is Inline graphic, by the assumption that these variables are drawn i.i.d. It follows that

graphic file with name pcbi.1003469.e830.jpg
graphic file with name pcbi.1003469.e831.jpg

Here, the first inequality is obtained via the lower bound of Inline graphic over the interval of integration, and the second uses the fact Inline graphic.

As Inline graphic, the last integral converges to 0 because of the fact that Inline graphic, together with the Lebesgue dominated convergence theorem. Hence Inline graphic as Inline graphic.

Combining the limits of Inline graphic and Inline graphic using Eq. (42), together with the fact Inline graphic, we conclude that Inline graphic must approach 1 as Inline graphic.
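The limit in Theorem 5 can also be probed by simulation. The sketch below assumes that the condition in Eq. (1) is the polygon inequality max_i q_i <= (1/2) sum_i q_i, as suggested by the triangle-inequality reading of the proof of Theorem 4, and estimates its probability for i.i.d. positive q_i drawn (for illustration only) from an exponential distribution; the estimate approaches 1 as N grows.

import numpy as np

rng = np.random.default_rng(7)

def prob_noise_free_possible(N, trials=20_000):
    # Monte Carlo estimate of P( max_i q_i <= 0.5 * sum_i q_i ) for i.i.d. positive q_i
    q = rng.exponential(scale=1.0, size=(trials, N))    # hypothetical distribution of the q_i
    return np.mean(q.max(axis=1) <= 0.5 * q.sum(axis=1))

for N in (3, 5, 10, 20, 50):
    print(N, prob_noise_free_possible(N))               # approaches 1 as N grows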

Proof of Proposition 6: Sensitivity to perturbations

Here, we will prove Proposition 6, which puts bounds on the condition numbers that define the sensitivity of our coding metrics to perturbations in noise correlations or the tuning curves. For our proof, we will require three different lemmas. We state and prove these, before moving on to Proposition 6.

Here, we will first consider the condition number for the case of a scalar stimulus Inline graphic, when Inline graphic is a vector. In the proof of the proposition, we show how to extend the results to the case of multivariate Inline graphic. As we mentioned in Section “Sensitivity and robustness of the impact of correlations on encoded information”, the same proof works for Inline graphic as well as Inline graphic.

Lemma 12. For any submultiplicative matrix norm Inline graphic and Inline graphic ,

graphic file with name pcbi.1003469.e850.jpg (45)

Proof. Since Inline graphic, Inline graphic exists and

graphic file with name pcbi.1003469.e853.jpg (46)

Lemma 13. For any positive definite matrix Inline graphic , vectors Inline graphic and Inline graphic such that Inline graphic ,

graphic file with name pcbi.1003469.e858.jpg (47)

Proof.

graphic file with name pcbi.1003469.e859.jpg
graphic file with name pcbi.1003469.e860.jpg
graphic file with name pcbi.1003469.e861.jpg
graphic file with name pcbi.1003469.e862.jpg
graphic file with name pcbi.1003469.e863.jpg
graphic file with name pcbi.1003469.e864.jpg

Here, we have used Inline graphic, Inline graphic, and the assumed condition in the last line.

Lemma 14. For any positive definite matrix Inline graphic , vector Inline graphic and matrix Inline graphic where Inline graphic,

graphic file with name pcbi.1003469.e871.jpg (48)

Proof.

graphic file with name pcbi.1003469.e872.jpg
graphic file with name pcbi.1003469.e873.jpg
graphic file with name pcbi.1003469.e874.jpg
graphic file with name pcbi.1003469.e875.jpg
graphic file with name pcbi.1003469.e876.jpg

Here we have used Inline graphic. As Inline graphic, Lemma 12 is applied to obtain the last line.

Proposition 6. The local condition number of IF;lin under perturbations of Cn (where magnitude is quantified by 2-norm) is bounded by

graphic file with name pcbi.1003469.e879.jpg (3)

where Inline graphic max and Inline graphic min are the largest and smallest eigenvalues of Cn, respectively. Here Inline graphic is the condition number with respect to the 2-norm, as defined in the above equation.

Similarly, the condition number for perturbing Inline graphic is bounded by

graphic file with name pcbi.1003469.e884.jpg (4)

where Inline graphic is the i-th column of Inline graphic, and we assume Inline graphic for all i. Here K is the dimension of the stimulus s.

Proof. Note that

graphic file with name pcbi.1003469.e888.jpg (49)

where Inline graphic is the Inline graphic-th unit vector (Inline graphic). Since the bound in Lemma 14 does not depend on Inline graphic, we apply the Lemma for Inline graphic and each Inline graphic respectively. For any perturbation Inline graphic satisfying Inline graphic, we have

graphic file with name pcbi.1003469.e897.jpg
graphic file with name pcbi.1003469.e898.jpg

Here Inline graphic. We then note that for positive semidefinite matrices Inline graphic, Inline graphic, where Inline graphic and Inline graphic are the smallest and largest eigenvalues of Inline graphic. This proves the bound on the condition number for perturbing Inline graphic.

Similarly, for a perturbation of Inline graphic with Inline graphic, Inline graphic. This guarantees that

graphic file with name pcbi.1003469.e909.jpg (50)

Applying Lemma 13 for each Inline graphic and Inline graphic, we have

graphic file with name pcbi.1003469.e912.jpg
graphic file with name pcbi.1003469.e913.jpg
graphic file with name pcbi.1003469.e914.jpg

Here Inline graphic is the Frobenius norm, and we have used the fact that for any matrix Inline graphic, Inline graphic. The last inequality follows from the definition of Inline graphic.
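An empirical counterpart of this bound can be obtained as follows (a sketch of ours, assuming I = f'^T (Cn)^{-1} f' for a scalar stimulus and a hypothetical tuning vector): apply a small symmetric perturbation to the off-diagonal of Cn and compare the relative change of the information with the relative size of the perturbation; the ratio remains moderate whenever Cn is well conditioned, consistent with the role of the 2-norm condition number in Eq. (3).

import numpy as np

rng = np.random.default_rng(0)

def fisher_lin(fprime, Cn):
    return fprime @ np.linalg.solve(Cn, fprime)

N = 5
fprime = rng.normal(size=N)                        # hypothetical tuning-curve slopes
A = rng.normal(size=(N, N))
Cn = A @ A.T + N * np.eye(N)                       # a well-conditioned noise covariance

# small symmetric perturbation of the off-diagonal entries only
B = rng.normal(size=(N, N))
B = (B + B.T) / 2
np.fill_diagonal(B, 0.0)

eps = 1e-5
I0 = fisher_lin(fprime, Cn)
I1 = fisher_lin(fprime, Cn + eps * B)

rel_change_info = abs(I1 - I0) / I0
rel_change_cov = eps * np.linalg.norm(B, 2) / np.linalg.norm(Cn, 2)
print("empirical sensitivity ratio:", rel_change_info / rel_change_cov)
print("2-norm condition number of Cn:", np.linalg.cond(Cn))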

Details for numerical examples and simulation

Here, we describe the parameters of our numerical models, and the numerical methods we used.

Parameters for Fig. 1, Fig. 2 and Fig. 3

All parameters we use are dimensionless, unless stated otherwise.

In Fig. 1, the mean response for the three neurons under stimulus 1 (red) and 2 (blue) is Inline graphic and Inline graphic respectively:

graphic file with name pcbi.1003469.e921.jpg (51)

For each case of correlation structure (i.e., for each row in Figure 1), the noise covariance matrix is the same for the two stimuli, and all neuron variances Inline graphic. In detail:

[Equations (52) and (53)]

The confidence circles and spheres are calculated based on a Gaussian assumption for the response distributions.
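As an illustration of this Gaussian construction, the short sketch below (Python) builds a one-standard-deviation ellipsoid for a three-neuron response distribution from its mean and covariance. The mean vector and correlation value here are placeholders, not the values in Eqs. (51)–(53).

    import numpy as np

    # Placeholder values for illustration only.
    mu = np.array([1.0, 2.0, 1.5])                       # mean response under one stimulus
    rho = 0.3
    C = np.full((3, 3), rho) + (1 - rho) * np.eye(3)     # unit variances, common correlation
    L = np.linalg.cholesky(C)

    # Parameterize the unit sphere, then map it through the Cholesky factor.
    u = np.linspace(0, 2 * np.pi, 40)
    v = np.linspace(0, np.pi, 20)
    sphere = np.stack([np.outer(np.cos(u), np.sin(v)),
                       np.outer(np.sin(u), np.sin(v)),
                       np.outer(np.ones_like(u), np.cos(v))])   # shape (3, 40, 20)

    # One-standard-deviation ellipsoid of the Gaussian response distribution.
    ellipsoid = mu[:, None, None] + np.einsum('ij,jkl->ikl', L, sphere)

The resulting surface can be rendered with any 3-D plotting routine; for two stimuli with the same covariance, the two ellipsoids differ only in their centers.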

In Fig. 2, the noise variances Inline graphic are all set to 1. Additionally,

[Equation (54)]

In Fig. 3, the noise variances Inline graphic are all set to 1. In panel A

[Equation (55)]

For panel B

[Equation (56)]

Heterogeneous tuning curves

For the results in Section “Heterogeneously tuned neural populations”, we use the same model and parameters as in [8] to set up a heterogeneous population with tuning curves of random amplitude and width. For completeness, we include the details of this setup as follows:

The shape of each tuning curve (specifying firing rates) is modeled by a von Mises distribution. This is an analog of the Gaussian distribution on the unit circle:

[Equation (57)]

The parameters Inline graphic respectively control the magnitude, width, and preferred direction of each neuron's tuning curve. We set Inline graphic to be equally spaced along Inline graphic and Inline graphic. The Inline graphic are independently chosen from a chi-square distribution with 3 degrees of freedom, scaled to a mean of 19. Inline graphic is similarly drawn from a log-normal distribution with parameters giving mean 2 and standard deviation 2 (for the underlying normal distribution).

We assume Poisson firing variability, so that Inline graphic, and use a spike-count window of Inline graphic ms in Figs. 4 and 5.
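For concreteness, a minimal sketch of this setup is given below (Python). The specific von Mises parameterization, the assignment of the two random distributions to amplitude versus width, the population size, and the spike-count window length are assumptions, since the corresponding expressions above are not reproduced; the authoritative values are those of [8] and Eq. (57).

    import numpy as np

    rng = np.random.default_rng(1)

    N = 50                                                # population size (illustrative)
    phi = np.linspace(0, 2 * np.pi, N, endpoint=False)    # preferred directions, equally spaced
    # Assumed assignment: amplitudes ~ chi-square(3) scaled to mean 19,
    # widths ~ log-normal with underlying normal mean 2 and sd 2.
    alpha = rng.chisquare(df=3, size=N)
    alpha *= 19.0 / alpha.mean()
    kappa = np.exp(rng.normal(loc=2.0, scale=2.0, size=N))

    def rates(theta):
        # Assumed von Mises tuning-curve form (one common parameterization).
        return alpha * np.exp(kappa * (np.cos(theta - phi) - 1.0))

    T = 0.1                                               # spike-count window in seconds (illustrative)
    mean_counts = rates(np.pi / 3) * T
    var_counts = mean_counts                              # Poisson variability: variance equals mean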

Equivalence between penalty functions and constrained optimizations

In this section we note a standard fact about implementing constrained optimization with penalty functions — i.e., the method of Lagrange multipliers.

Consider an optimization problem: Inline graphic. Now add a penalty term Inline graphic with constant Inline graphic and consider the new optimization problem: Inline graphic. If Inline graphic is one of the solutions to this new optimization problem, then it is also an optimal solution to the constrained optimization problem Inline graphic.

To show this, let Inline graphic be any point that satisfies Inline graphic. Further, note that Inline graphic is also a solution to the problem Inline graphic, since we have simply added a constant Inline graphic. Therefore,

[Equation images e952–e953 not reproduced]

As Inline graphic also satisfies the constraint, we conclude that Inline graphic is an optimal solution to the constrained optimization problem.

We use this fact to find the information-maximizing noise correlations, with the restriction that the noise correlations be small in magnitude. For a given Inline graphic, we perform the optimization Inline graphic, where Inline graphic in this case is one of our information measures, Inline graphic refers to the off-diagonal elements of the covariance matrix, and Inline graphic is the measure of the correlation strength as in Eq. (58). Thanks to the above result, we can be assured that the resulting covariance matrix (described by Inline graphic) will be the one that maximizes the information for a particular strength of correlations. By varying Inline graphic (or Inline graphic in Eq. (58)), we can thus parametrically explore how the optimal correlation structures change as one allows either larger or smaller correlations in the system.
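As a sanity check of this equivalence, the toy example below (Python, with an arbitrary one-dimensional f and g rather than our information measures) solves a penalized problem on a grid and verifies that its solution also maximizes f over the set where g does not exceed its value at the penalized optimum.

    import numpy as np

    f = lambda x: -(x - 3.0) ** 2            # objective to maximize (toy example)
    g = lambda x: x ** 2                     # penalized quantity
    lam = 1.0                                # penalty constant, lambda > 0

    xs = np.linspace(-5, 5, 200001)
    x_star = xs[np.argmax(f(xs) - lam * g(xs))]     # solve the penalized problem

    # x_star should also maximize f over {x : g(x) <= g(x_star)}.
    feasible = xs[g(xs) <= g(x_star)]
    x_con = feasible[np.argmax(f(feasible))]
    print(x_star, x_con)                     # both approximately 1.5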

Penalty function

In Section “Heterogeneously tuned neural populations”, our aim is to plot optimized noise correlations at various levels of correlation strength, as quantified by the Euclidean norm. As shown in the previous section, this constrained optimization can be carried out by adding a term to the information that penalizes the Euclidean norm, that is, a constant times the sum of squares of the correlations. This is precisely the procedure that we follow, ranging over a number of different values of the constant to produce the plot in Fig. 4.

In more detail, we choose these different values of the constant as follows. To force the correlations towards a fixed strength of Inline graphic, we optimize a modified objective function with an additional term:

[Equation (58)]

As will become clear, the term before the sum is a constant with respect to the terms being optimized; from one optimization to the next, we adjust the value of Inline graphic in this term. Here the variance terms Inline graphic are constants that scale Inline graphic properly as correlation coefficients. Also, Inline graphic is the gradient vector of Inline graphic at Inline graphic (the diagonal matrix corresponding to independent noise) with respect to the off-diagonal entries of Inline graphic (see the remarks after the proof of Theorem 1). Inline graphic denotes the entry-wise product of the two vectors (of length Inline graphic, indexed by Inline graphic). Note that Inline graphic is the ordinary vector 2-norm.

To understand this choice of the constant in (58), note that the new optimal correlations under the penalty can be characterized by setting the gradient of the total objective function to zero. In a small neighborhood of Inline graphic, the gradient of Inline graphic is close to Inline graphic. With this substitution, setting the gradient of the total objective function to zero yields, approximately:

[Equation (59)]

where we took an entry-wise product with Inline graphic and rearranged terms to obtain the final equality. That equality implies that the (vector) 2-norm of the noise correlations Inline graphic (i.e., their Euclidean norm) is approximately Inline graphic, which is what we set out to achieve with the additional term in the objective function.
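The sketch below illustrates this scaling for a toy population (Python). It uses linear Fisher information with unit variances as a stand-in for the information measure, computes the gradient at zero correlations by finite differences, sets the penalty constant to the gradient norm divided by 2c, and checks that the optimized correlation vector has Euclidean norm close to c. These choices are a hedged reading of Eq. (58), which is not reproduced above, simplified to the unit-variance case.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    n = 4
    fp = rng.normal(size=n)                             # tuning-curve derivatives f'(s)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

    def cov(rho):
        # Covariance with unit variances; rho holds the off-diagonal correlations.
        C = np.eye(n)
        for (i, j), r in zip(pairs, rho):
            C[i, j] = C[j, i] = r
        return C

    def info(rho):
        # Linear Fisher information as a stand-in information measure.
        return fp @ np.linalg.solve(cov(rho), fp)

    # Gradient of the information at zero correlations (finite differences).
    eps = 1e-6
    grad0 = np.array([(info(eps * e) - info(-eps * e)) / (2 * eps)
                      for e in np.eye(len(pairs))])

    c = 0.1                                             # target correlation strength (2-norm)
    k = np.linalg.norm(grad0) / (2 * c)                 # penalty constant, as described above
    objective = lambda rho: -(info(rho) - k * np.sum(rho ** 2))

    res = minimize(objective, x0=np.zeros(len(pairs)), method="Nelder-Mead")
    print(np.linalg.norm(res.x))                        # should be close to c = 0.1

Repeating this for a range of c values traces out how the optimal correlation structure changes with the allowed correlation strength, which is how the curves in Fig. 4 are generated.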

Rescaling signal correlation

In Fig. 5D–F, we make scatter plots comparing noise correlations with the rescaled signal correlations. Here, we explain how and why this rescaling was done.

First, we note that the rescaling is done by multiplying each signal correlation by a positive weight. This does not change its sign, which is the property associated with the sign rule (Fig. 5A–C).

Next, recall that in deriving the sign rule (Eq. (20)), we calculated the gradient of the information with respect to the noise correlations. One should expect alignment between this gradient and the optimal correlations when their magnitudes are small. In other words, if we make a scatter plot with dots whose y and x coordinates are entries of the gradient and noise correlation vectors, respectively (so that the number of dots is the length of these vectors), we expect the dots to fall approximately along a straight line.

We next note that the entries of the gradient vector Inline graphic are not exactly the normalized signal correlations (see Eq. (59)). Instead, this vector has additional “weight factors” that differ for each entry (neuron pair), and hence for each dot in the scatter plot. Thus, to reveal a linear relationship between signal and noise correlations in a scatter plot, we must scale each signal correlation by a proper (positive) weight, determined below, so that Inline graphic. We then redo the scatter plots with these new values on the horizontal axis. As we will see, the weights Inline graphic do not depend on the noise correlations.

We now determine Inline graphic. Recall that our goal is to define Inline graphic such that, when it is used to rescale signal correlations as above, we will see a linear alignment between signal and noise correlations. In other words, if we choose Inline graphic correctly, we will have Inline graphic (for any Inline graphic). Comparing the formulae for Inline graphic (from the remarks after the proof of Theorem 1) and Inline graphic (Eq. (17)), we see that Inline graphic satisfies this (with constant Inline graphic). Here Inline graphic.

Acknowledgments

This work was inspired by an ongoing collaboration with Fred Rieke and his colleagues on retinal coding, which suggested the importance of “mapping” the full space of possible signal and noise correlations. We gratefully acknowledge the ideas and insights of these scientists. We further wish to thank Andrea Barreiro, Fred Rieke, Kresimir Josic and Xaq Pitkow for helpful comments on this manuscript.

Funding Statement

Support for this work came from NSF CRCNS grant DMS-1208027 and a Career Award at the Scientific Interface from the Burroughs-Wellcome Fund to ESB. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Mastronarde D (1983) Correlated firing of cat retinal ganglion cells. I. Spontaneously active inputs to X- and Y-cells. Journal of Neurophysiology 49: 303–324.
  • 2. Alonso J, Usrey W, Reid R (1996) Precisely correlated firing of cells in the lateral geniculate nucleus. Nature 383: 815–819.
  • 3. Cohen MR, Kohn A (2011) Measuring and interpreting neuronal correlations. Nature Neuroscience 14: 811–819.
  • 4. Gawne T, Richmond B (1993) How independent are the messages carried by adjacent inferior temporal cortical neurons? Journal of Neuroscience 13: 2758–2771.
  • 5. Averbeck BB, Latham PE, Pouget A (2006) Neural correlations, population coding and computation. Nature Reviews Neuroscience 7: 358–366.
  • 6. Zohary E, Shadlen MN, Newsome WT (1994) Correlated neuronal discharge rate and its implications for psychophysical performance. Nature 370: 140–143.
  • 7. Abbott LF, Dayan P (1999) The effect of correlated variability on the accuracy of a population code. Neural Computation 11: 91–101.
  • 8. Ecker AS, Berens P, Tolias AS, Bethge M (2011) The effect of noise correlations in populations of diversely tuned neurons. Journal of Neuroscience 31: 14272–14283.
  • 9. Shamir M, Sompolinsky H (2006) Implications of neuronal diversity on population coding. Neural Computation 18: 1951–1986.
  • 10. Sompolinsky H, Yoon H, Kang K, Shamir M (2001) Population coding in neuronal systems with correlated noise. Physical Review E 64: 051904.
  • 11. Averbeck BB, Lee D (2006) Effects of noise correlations on information encoding and decoding. Journal of Neurophysiology 95: 3633–3644.
  • 12. Latham P, Roudi Y (2011) Role of correlations in population coding. arXiv preprint arXiv:1109.6524.
  • 13. Romo R, Hernandez A, Zainos A, Salinas E (2003) Correlated neuronal discharges that increase coding efficiency during perceptual discrimination. Neuron 38: 649–657.
  • 14. da Silveira RA, Berry MJ II (2013) High-fidelity coding with correlated neurons. arXiv preprint.
  • 15. Wilke SD, Eurich CW (2002) Representational accuracy of stochastic neural populations. Neural Computation 14: 155–189.
  • 16. Josić K, Shea-Brown E, Doiron B, de la Rocha J (2009) Stimulus-dependent correlations and population codes. Neural Computation 21: 2774–2804.
  • 17. Tkacik G, Prentice J, Balasubramanian V, Schneidman E (2010) Optimal population coding by noisy spiking neurons. Proceedings of the National Academy of Sciences USA 107: 14419–14424.
  • 18. Salinas E, Abbott L (1994) Vector reconstruction from firing rates. Journal of Computational Neuroscience 1: 89–107.
  • 19. Kohn A, Smith M (2005) Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. Journal of Neuroscience 25: 3661–3673.
  • 20. Softky W, Koch C (1993) The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. Journal of Neuroscience 13: 334–350.
  • 21. Britten K, Shadlen M, Newsome W, Movshon J (1993) Responses of neurons in macaque MT to stochastic motion signals. Visual Neuroscience 10: 1157–1169.
  • 22. de la Rocha J, Doiron B, Shea-Brown E, Josić K, Reyes A (2007) Correlation between neural spike trains increases with firing rate. Nature 448: 802–806.
  • 23. Binder M, Powers R (2001) Relationship between simulated common synaptic input and discharge synchrony in cat spinal motoneurons. Journal of Neurophysiology 86: 2266–2275.
  • 24. Hansen B, Chelaru M, Dragoi V (2012) Correlated variability in laminar cortical circuits. Neuron 76: 590–602.
  • 25. Cover TM, Thomas JA (2006) Elements of Information Theory. John Wiley & Sons.
  • 26. Beck J, Kanitscheider J, Pitkow X, Latham P, Pouget A (2013) The perils of inferring information from correlations. Cosyne Abstracts 2013, Salt Lake City, USA.
  • 27. Beck JM, Ma WJ, Pitkow X, Latham PE, Pouget A (2012) Not noisy, just wrong: the role of suboptimal inference in behavioral variability. Neuron 74: 30–39.
  • 28. Shamir M, Sompolinsky H (2004) Nonlinear population codes. Neural Computation 16: 1105–1136.
  • 29. Koch C (1999) Biophysics of Computation. Oxford University Press.
  • 30. Ganmor E, Segev R, Schneidman E (2011) Sparse low-order interaction network underlies a highly correlated and learnable neural population code. Proceedings of the National Academy of Sciences USA 108: 9679.
  • 31. Ohiorhenuan I, Mechler F, Purpura K, Schmid A, Hu Q, et al. (2010) Sparse coding and high-order correlations in fine-scale cortical networks. Nature 466: 617–621.
  • 32. Zylberberg J, Shea-Brown E (2012) Input nonlinearities shape beyond-pairwise correlations and improve information transmission by neural populations. arXiv preprint arXiv:1212.3549.
  • 33. Prinz A, Bucher D, Marder E (2004) Similar network activity from disparate circuit parameters. Nature Neuroscience 7: 1345–1352.
  • 34. Marder E (2011) Variability, compensation, and modulation in neurons and circuits. Proceedings of the National Academy of Sciences USA 108: 15542–15548.
  • 35. Stevenson I, Kording K (2011) How advances in neural recording affect data analysis. Nature Neuroscience 14: 139–142.
