Abstract
Recent technological advances as well as progress in the theoretical understanding of neural systems have created a need for synthetic spike trains with controlled mean rate and pairwise cross-correlation. This report introduces and analyzes a novel algorithm for the generation of discretized spike trains with arbitrary mean rates and controlled cross-correlations. Pairs of spike trains with any pairwise correlation can be generated, and higher-order correlations are compatible with common synaptic input. Relations between allowable mean rates and correlations within a population are discussed. The algorithm is highly efficient: its complexity increases only linearly with the number of spike trains generated, even though the number of cross-correlated pairs grows quadratically, so the cost per correlated pair decreases with population size.
1 Introduction
Understanding any natural or artificial system is considerably furthered by the possibility of not only being able to observe but also to manipulate it, that is, by modifying the system’s state and observing the resulting changes in behavior. In neuroscience, recent technological advances have substantially increased the extent to which the microstate of nervous systems can be controlled. While electrical microstimulation has been one of the standard tools of neurophysiology for some time (for a recent review, see Cohen & Newsome, 2004), new technology allows the stimulation of substantially larger numbers of sites (see, e.g., Normann, Maynard, Rousche, & Warren, 1999; Sekirnjak et al., 2006). In addition, newly developed optical methods based on uncaging of neurotransmitters allow the activation of up to tens of thousands of locations per second with high spatial and temporal resolution (Callaway & Katz, 1993; Shepherd, Pologruto, & Svoboda, 2003; Boucsein, Nawrot, Rotter, Aertsen, & Heck, 2005; Shoham, O’Connor, Sarkisov, & Wang, 2005; Miesenbock & Kevrekidis, 2005; Thompson et al., 2005). Realizing the full potential of these techniques will require the injection of sets of artificial spike trains with controlled rates and correlations between them. There is thus a need for efficient and systematic methods that allow generating not just individual spike trains but large numbers of them and the possibility of varying their rates and correlation structure.
A similar need arises in theoretical neuroscience. While much of the earlier work in the field was based on the mean-rate coding assumption originating in the peripheral nervous system (Adrian & Zotterman, 1926), that is, the hypothesis that neural processing uses only the information contained in the mean number of spikes averaged over a few hundred milliseconds, this concept is being questioned in increasingly stronger terms (Abeles, 1982; Singer, 1999). A substantial number of current models explores the consequences of the temporal coding hypothesis, that is, that the nervous system does not discard all information in neural spike trains other than the mean activity level. It has been proposed that neural information may be represented by controlled changes in the autocorrelation function, as in the large number of models of oscillation coding (e.g., Gray & Singer, 1989; Niebur, Koch, & Rosin, 1993), or in cross-correlation functions, as in models studying synchrony (e.g., Niebur & Koch, 1994; Tiesinga & Sejnowski, 2004), or a combination of both.
Thus, both the development of quantitative neural models that rely on correlated activity and of experimental techniques that allow selectively exciting neural tissue in multiple sites require the availability of test data. The subject of this letter is the generation of sets of synthetic spike trains with controlled rates and cross-correlations.
A finite pairwise correlation within a population of N model neurons can be obtained by generating a set of N₂ random processes with a suitable distribution (e.g., Poisson) and assigning them randomly to the N spike trains. If N₂ < N, some spike trains are necessarily identical, which introduces correlations between spike trains (Destexhe & Pare, 1999). While this method is efficient, the mean pairwise correlation is not parametrically controlled (only indirectly, via the ratio N₂/N), and it is the same for all pairs of neurons. Other methods were based on manipulating stochastic processes by either subtracting events (Ogata, 1981) or adding events (Stroeve & Gielen, 2001). Different from these earlier methods, the algorithm developed here generates spike trains whose mean rates as well as cross-correlations between each pair of spike trains are free parameters that can be selected independently, subject to constraints discussed below.
The algorithm developed here is a generalization of our earlier work (Mikula & Niebur, 2003a, 2003b, 2004, 2005) in that it allows differing rates for the spike trains rather than requiring that all rates are identical. Furthermore, the cross-correlation between any two of these spike trains can be selected anywhere between the cases of independent spike trains (minimal correlation), on one hand, and having identical spike trains (maximal correlation), on the other, subject to limitations discussed below.
Rather than advancing to the general case immediately, we first introduce, in sections 2 and 3, the simple and intuitive case of considering two spike trains only. In sections 4 and 5, we generalize to an arbitrary number of spike trains. Different from the earlier case of identical firing rates in all spike trains, not all firing rates and cross-correlations can be combined; it is impossible to have strong correlations in a set of spike trains with vastly different firing rates. The resulting constraints are studied in section 6.
2 Generating Two Spike Trains
Our goal is to provide a constructive method for generating two stochastic processes (we use the terms spike train and stochastic point process interchangeably), x and y, each consisting of a series of binary events (0, or no spike in bin, and 1, or one spike in bin). We assume that spike trains are memoryless, that is, all their time bins are independent, and we can therefore analyze them separately (this assumption can be relaxed; see section 7). These are Bernoulli processes, with 1 occurring in process x with probability¹ rx = 〈x〉, with 0 ≤ rx ≤ 1 (the expectation value of obtaining a 1; see equation 3.2) and 0 occurring with probability 1 − 〈x〉. Likewise, in process y, 1 occurs with probability ry = 〈y〉, with 0 ≤ ry ≤ 1, and 0 with probability 1 − 〈y〉. We note that for most practical purposes, the firing probability in any given bin is low, rx ≪ 1, in which case the Bernoulli process can be approximated by a Poisson process. The pairwise correlation of the two processes is Cxy, the expectation value of the probability of obtaining coincident spikes with rate correction (see equation 3.4).
In order to construct the two processes x and y, we follow the appendix in Mikula and Niebur (2003b). We start out with three independent Bernoulli processes that have either a 1 or a 0 in each bin and are binomially distributed. For one of the spike trains, referred to as reference spike train r in the following (because it is left unchanged), the probability of having a 1 in any given bin is p; thus, a 0 is found with probability (1 − p).
The other two spike trains, x and y, are constructed as follows. For spike train x, we start with a memoryless two-state stochastic process in each bin of which a 1 is found with probability px and a 0 with probability (1 − px). We then switch, with a probability q, the state in x to that in r. Note that in general (in all cases other than p = px or q = 0), the mean rate of the resulting spike train will be neither p nor px but an intermediate value to be computed later, in equation 3.2. The same procedure is applied to spike train y, with px replaced everywhere by py.
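The switching construction described above can be sketched in a few lines of Python (a minimal sketch; the function name and interface are ours, not from the text):

```python
import numpy as np

def correlated_pair(p, px, py, q, n_bins, rng):
    """Generate a reference train r with rate p and two independent trains
    with rates px and py; in each bin, switch the state of x (resp. y)
    to that of r with probability q."""
    r = (rng.random(n_bins) < p).astype(int)
    x = (rng.random(n_bins) < px).astype(int)
    y = (rng.random(n_bins) < py).astype(int)
    switch_x = rng.random(n_bins) < q  # bins in which x copies r
    switch_y = rng.random(n_bins) < q  # bins in which y copies r
    x[switch_x] = r[switch_x]
    y[switch_y] = r[switch_y]
    return x, y
```

The resulting mean rates and covariance follow the expressions derived in section 3.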
Table 1 shows the complete probability table for all combinations of a reference spike train (r, first column) and two spike trains x and y (second and third columns). Determining the probabilities Prxy (fourth column) is straightforward. For the purpose of illustration, we compute the first probability, P000. It is obtained by observing that the probability of having no spike in the reference spike train is (1 − p), since the probability of obtaining a spike is p. This gives rise to the first factor, (1 − p). Given the absence of a spike in the reference spike train, there are two possibilities for spike train x also containing no spike in this bin.
Table 1.
Probability Table for All States of the Reference Spike Train (sr) and Two Spike Trains sx and sy.
| sr | sx | sy | Prxy |
|---|---|---|---|
| 0 | 0 | 0 | (1 − p) [(1 − px) + q px] [(1 − py) + q py] |
| 0 | 1 | 0 | (1 − p) [px (1 − q)] [(1 − py) + q py] |
| 0 | 0 | 1 | (1 − p) [(1 − px) + q px] [py (1 − q)] |
| 0 | 1 | 1 | (1 − p) [px (1 − q)] [py (1 − q)] |
| 1 | 0 | 0 | p [(1 − q)(1 − px)] [(1 − q)(1 − py)] |
| 1 | 1 | 0 | p [px + q (1 − px)] [(1 − q)(1 − py)] |
| 1 | 0 | 1 | p [(1 − q)(1 − px)] [py + q (1 − py)] |
| 1 | 1 | 1 | p [px + q (1 − px)] [py + q (1 − py)] |
Notes: The reference spike train has firing probability p, and the original spike trains x and y have firing probabilities px and py, respectively. In columns 1–3, 1 indicates a spike in the respective spike train, and 0 stands for no spike.
The first is that it starts out with no spike. This happens with the probability (1 − px), the first term in the square bracket. This is independent of the state of the reference spike train since whether a switch occurs or not, this spike train will always have no spike (switching would not change the state since, by assumption, the reference spike train does not contain a spike).
The second possibility occurs if the state in spike train x is switched to that in the reference spike train (0) even though originally (before switching) this state was 1. Since the latter occurs with the probability px and the switching with the probability q, this additional probability is q px, which completes the derivation of the first square bracket in P000 (note that these two events, having a spike occur in spike train x and switching the state to that of the reference spike train, are independent; therefore, their probabilities multiply to obtain the probability of the compound event). Finally, the same considerations apply to the second spike train (y), which therefore contributes a factor equal to that obtained for x but with all px replaced by py. The total probability is the product of those for the states of r, x, and y. The other cases in Table 1 are derived analogously.
3 Properties of Two Spike Trains
Having obtained Table 1, it is straightforward (although somewhat tedious) to verify the normalization condition:
$$\sum_{s_r, s_x, s_y \in \{0,1\}} P_{s_r s_x s_y} = 1 \qquad (3.1)$$
We now proceed to compute the mean firing rate of the spike trains. Direct substitution yields
$$r_x = \langle x \rangle = \sum_{s_r, s_x, s_y} s_x \, P_{s_r s_x s_y} = (1-q)\,p_x + q\,p \qquad (3.2)$$
In this equation, the first and second equalities are the definition of the mean rate, and the last one is the result of direct substitution of terms from Table 1 and straightforward simplification. As anticipated, the mean rate 〈x〉 is influenced by p and px. Of course, for q = 0, we obtain 〈x〉= px, and for q = 1, we have 〈x〉= p. The analogous result is obtained for y:
$$r_y = \langle y \rangle = (1-q)\,p_y + q\,p \qquad (3.3)$$
The rate-corrected cross-correlation (or covariance) between spike trains x and y is
$$C_{xy} = \langle xy \rangle - \langle x \rangle \langle y \rangle = \sum_{s_r, s_x, s_y} s_x s_y \, P_{s_r s_x s_y} - \langle x \rangle \langle y \rangle = q^2\,(p - p^2) \qquad (3.4)$$
where the first and second equalities are definitions, and the third one is obtained after substitution from Table 1 and simplification. We observe that Cxy is independent of px and py. Furthermore, Cxy attains a maximum value of q²/4 at p = 1/2 and decreases monotonically for larger and smaller p, to reach zero for both p = 0 and p = 1. Of course, Cxy is identically zero for all p if q = 0. For q = 1, all three spike trains are identical and have spike rate p and correlation Cxy = p − p², which is the autocorrelation of a Bernoulli process with probability p. The latter can also be computed directly from Table 1 as the variance of the reference spike train,
$$C_{rr} = \langle r^2 \rangle - \langle r \rangle^2 = p - p^2, \qquad (3.5)$$
where the same notation as that introduced in equation 3.4 was used.
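The entries of Table 1 and equations 3.1 to 3.4 can be cross-checked numerically. The following sketch (our transcription of the table, not code from the letter) sums the eight state probabilities:

```python
from itertools import product

def table1_prob(sr, sx, sy, p, px, py, q):
    """One entry of Table 1: probability of the joint state (sr, sx, sy)."""
    def factor(s, ps):
        # Probability that a train with original rate ps ends up in state s,
        # given the reference state sr and switching probability q.
        if sr == 0:
            return (1 - ps) + q * ps if s == 0 else ps * (1 - q)
        return (1 - q) * (1 - ps) if s == 0 else ps + q * (1 - ps)
    return ((1 - p) if sr == 0 else p) * factor(sx, px) * factor(sy, py)

p, px, py, q = 0.3, 0.2, 0.6, 0.4
states = list(product((0, 1), repeat=3))
total = sum(table1_prob(*s, p, px, py, q) for s in states)           # eq. 3.1
mean_x = sum(table1_prob(*s, p, px, py, q) for s in states if s[1])  # eq. 3.2
mean_y = sum(table1_prob(*s, p, px, py, q) for s in states if s[2])  # eq. 3.3
cov = (sum(table1_prob(*s, p, px, py, q) for s in states if s[1] and s[2])
       - mean_x * mean_y)                                            # eq. 3.4
```

For any admissible choice of p, px, py, and q, `total` is 1, the marginals reproduce equations 3.2 and 3.3, and `cov` equals q²(p − p²).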
In practice, rather than specifying the rates px and py of the uncorrelated stochastic processes and the correlation with the reference spike train, we are interested in generating spike trains with specified mean rates and cross-correlation. That is, 〈x〉, 〈y〉, and Cxy are given quantities, and we have to determine parameters px, py, and q given these values. By solving equations 3.2 to 3.4 for px, py, and q, we find that a pair of spike trains with means 〈x〉, 〈y〉 and cross-correlation Cxy is obtained by choosing px, py, and q as
$$p_x = \frac{\langle x \rangle - q\,p}{1-q} \qquad (3.6)$$
$$p_y = \frac{\langle y \rangle - q\,p}{1-q} \qquad (3.7)$$
$$q = \sqrt{\frac{C_{xy}}{p - p^2}}, \qquad (3.8)$$
where p, the rate of the reference spike train, is formally a free parameter. There are limitations on the choice of this parameter (e.g., p − p2 ≥ Cxy to ensure q ≤ 1 in equation 3.8), which will be discussed in the more general case of arbitrary numbers of spike trains in section 6. Note that these limitations do not apply if all firing rates are identical (〈x〉= 〈y〉 = p) since px and py do not depend on q in this case (Mikula & Niebur, 2003a).
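In code, the inversion of equations 3.6 to 3.8 is direct (a sketch; the function name and the error-handling convention are ours):

```python
import numpy as np

def pair_parameters(rx, ry, Cxy, p):
    """Given target rates <x>, <y>, covariance Cxy, and a chosen reference
    rate p, return the generator parameters (px, py, q) of eqs. 3.6-3.8."""
    q = np.sqrt(Cxy / (p - p * p))   # eq. 3.8; requires p - p^2 >= Cxy
    px = (rx - q * p) / (1.0 - q)    # eq. 3.6
    py = (ry - q * p) / (1.0 - q)    # eq. 3.7
    for name, val in (("px", px), ("py", py), ("q", q)):
        if not 0.0 <= val <= 1.0:
            raise ValueError(f"{name} = {val:.3f} lies outside [0, 1]; "
                             "choose a different reference rate p")
    return px, py, q
```

The `ValueError` branch corresponds to the limitations on p discussed in section 6.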
4 Arbitrary Number of Spike Trains
For N spike trains, there are 2N possible states in each bin. For nonzero q, all these states are entangled because each randomly chosen pair has to have the same mean covariance. Despite the apparent complexity of the problem, we can compute the probability of each of the states as the sum of two terms, as follows. The trick is that we specify only the correlation with the reference spike train r and that the pairwise correlation is then obtained from expressions like equation 3.4. For instance, Table 1 shows that the state without a spike in either x or y has a total probability of

$$(1-p)\,\big[(1-p_x)+q\,p_x\big]\big[(1-p_y)+q\,p_y\big] \;+\; p\,(1-q)^2\,(1-p_x)(1-p_y),$$

where the first and second terms correspond to the cases without and with a spike in r, respectively.
The algorithm to generate N correlated spike trains is thus as follows. We start by generating N uncorrelated, memoryless stochastic point processes (i.e., they have independent time bins) with Bernoulli statistics in each bin. Process i, with i ∈ {1, …, N}, has a mean firing rate (probability of having state 1) of pi, 0 ≤ pi ≤ 1. One additional process with the same statistics and with mean rate p is generated and referred to, as before, as the reference spike train r. To generate the set of N correlated spike trains, we switch, with a probability q, the state of uncorrelated spike train i to that of spike train r. Using the notation from the previous paragraph, let (s1, s2, …, sN) be the state of the N spike trains in a given time bin. The probability of finding the system in this state is then
$$P(s_1, s_2, \ldots, s_N) = (1-p)\,\prod_{i=1}^{N} P^{(i)}_{0\,s_i} + p\,\prod_{i=1}^{N} P^{(i)}_{1\,s_i}, \qquad (4.1)$$
where $P^{(i)}_{0\,s_i}$ and $P^{(i)}_{1\,s_i}$ are the terms from the third column of Table 2.
Table 2.
Probability of Obtaining State si in Process i Depends on the State of the Reference Process.
| sr | si | P(i)sr si | P(i)(sr, si) |
|---|---|---|---|
| 0 | 0 | (1 − pi) + q pi | (1 − p) [(1 − pi) + q pi] |
| 1 | 0 | (1 − q)(1 − pi) | p (1 − q)(1 − pi) |
| 0 | 1 | pi (1 − q) | (1 − p) pi (1 − q) |
| 1 | 1 | pi + q (1 − pi) | p [pi + q (1 − pi)] |
Notes: Column 1: state of reference process. Column 2: state of process i. The last column shows the probability of obtaining the combination of state sr in the reference process and state si in process i, and the third column shows only that component of this probability that is independent of p. The first factor in P(i)(sr, si) stems from the reference spike train r, the second factor comes from spike train i and depends on both its own state and that of r. Note that in order to avoid proliferation of symbols, we distinguish the symbols for the probabilities in column 4 from the expressions in column 3 (which are not probabilities but terms used in equation 4.1 and beyond) by using a subscript notation in column 3 and parentheses in column 4 (top row).
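Equation 4.1 and the entries of Table 2 can be checked numerically. A short sketch (our transcription; `P0` and `P1` below are the p-independent terms of column 3):

```python
from itertools import product

def state_probability(s, p, pi, q):
    """Eq. 4.1: P(s) = (1-p) * prod_i P0[s_i] + p * prod_i P1[s_i]."""
    prob0 = prob1 = 1.0
    for si, pi_ in zip(s, pi):
        if si == 0:
            prob0 *= (1 - pi_) + q * pi_      # Table 2, row (sr=0, si=0)
            prob1 *= (1 - q) * (1 - pi_)      # Table 2, row (sr=1, si=0)
        else:
            prob0 *= pi_ * (1 - q)            # Table 2, row (sr=0, si=1)
            prob1 *= pi_ + q * (1 - pi_)      # Table 2, row (sr=1, si=1)
    return (1 - p) * prob0 + p * prob1
```

Summing over all 2^N states returns 1, and the marginal of any single process reproduces equation 4.2.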
We will show now that equations 3.6 to 3.8 generalize immediately to the case of arbitrary N. The mean rate of process i is defined as the expectation value of this process having state 1, that is,

$$\langle x_i \rangle = \sum_{\{s\}:\; s_i = 1} P(s_1, \ldots, s_N) = (1-p)\, P^{(i)}_{01} \prod_{k \neq i} \Big( \sum_{s_k} P^{(k)}_{0\,s_k} \Big) + p\, P^{(i)}_{11} \prod_{k \neq i} \Big( \sum_{s_k} P^{(k)}_{1\,s_k} \Big).$$
The sums within each parenthesis in this equation are again unity, and we obtain
$$r_i \equiv \langle x_i \rangle = (1-p)\, P^{(i)}_{01} + p\, P^{(i)}_{11} = (1-q)\,p_i + q\,p, \qquad (4.2)$$
where the second equation is obtained using column 3 in Table 2. Equation 4.2 is the generalization of equation 3.2 for spike train i.
The correlation function is computed analogously. By definition, the correlation function of processes i and k is the expectation value of both processes having state 1 corrected by the product of their rates:
$$C_{ik} = \langle x_i x_k \rangle - \langle x_i \rangle \langle x_k \rangle = \sum_{\{s\}:\; s_i = s_k = 1} P(s_1, \ldots, s_N) - r_i\, r_k \qquad (4.3)$$
Sums in the parentheses being unity, we are left with
$$C_{ik} = (1-p)\, P^{(i)}_{01} P^{(k)}_{01} + p\, P^{(i)}_{11} P^{(k)}_{11} - r_i\, r_k \qquad (4.4)$$
Using again $P^{(i)}_{01}$ and $P^{(i)}_{11}$ from Table 2 and equation 4.2, we obtain, after simplification,
$$C_{ik} = q^2\,(p - p^2), \qquad (4.5)$$
which is the generalization of equation 3.4.
In practice, the mean rates ri for i ∈ {1, …, N} of the correlated processes and their pairwise correlation C are given, and we need to determine the parameters pi and q of the uncorrelated stochastic processes. We therefore solve equations 4.2 and 4.5 for these parameters to obtain
$$p_i = \frac{r_i - q\,p}{1-q} \qquad (4.6)$$
$$q = \sqrt{\frac{C}{p - p^2}}, \qquad (4.7)$$
which are the generalizations of equations 3.6 to 3.8. Section 6 discusses constraints on the choice of these parameters.
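Putting equations 4.6 and 4.7 together with the switching procedure gives a complete population generator. The following is a sketch (names ours; feasibility of the chosen p, discussed in section 6, is simply checked and reported):

```python
import numpy as np

def correlated_population(rates, C, p, n_bins, rng):
    """Generate N trains with target rates r_i and common pairwise
    covariance C, via a shared reference train with rate p."""
    rates = np.asarray(rates, dtype=float)
    q = np.sqrt(C / (p - p * p))          # eq. 4.7
    pi = (rates - q * p) / (1.0 - q)      # eq. 4.6
    if not 0 <= q <= 1 or np.any(pi < 0) or np.any(pi > 1):
        raise ValueError("infeasible (rates, C, p) combination; see section 6")
    r = rng.random(n_bins) < p                          # reference train
    trains = rng.random((len(rates), n_bins)) < pi[:, None]
    switch = rng.random((len(rates), n_bins)) < q       # copy r with prob. q
    trains[switch] = np.broadcast_to(r, trains.shape)[switch]
    return trains.astype(int)
```

Each bin costs a constant number of operations per train, in line with the O(N × B) complexity discussed in section 7.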
5 Arbitrary Cross-Correlations
So far we have assumed that all cross-correlations are identical. To generalize to arbitrary covariances, rather than changing the state in all stochastic processes with the same probability q, the state in process i is changed to that of the reference process with probability qi, which can be different for each i. The table of probabilities is then just as in Table 2 but with q replaced by qi everywhere. Straightforward algebra along the lines of equations 4.1 to 4.5 shows that the mean firing rate of process i is unchanged and given as in equation 4.2 (with q replaced by qi), and that the mean cross-correlation function between processes i and k, as defined in equation 4.3, is
$$C_{ik} = q_i\, q_k\,(p - p^2). \qquad (5.1)$$
Of course, in the special case of qi = qk, this reduces to equation 4.5.
The mean rates are obtained as in equation 4.6, with q replaced by qi in the numerator and denominator. Likewise, given a set of cross-correlations Cij = Cji (i, j = 1, …, N), the corresponding parameters qi are computed from equation 5.1. In the most general case, when all Cij are different, only N of the N(N − 1)/2 cross-correlations can be chosen freely (subject to constraints similar to those discussed in section 6), since only the N parameters qi are available; the remaining cross-correlations are then determined by equation 5.1. We note in passing that another way to add structure to the correlation at the population level is to add additional reference spike trains.
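Equation 5.1 gives the correlation matrix implied by a chosen vector of switching probabilities; it has the outer-product structure q_i q_k (p − p²). A small sketch (our helper, not from the letter):

```python
import numpy as np

def implied_covariances(qs, p):
    """Eq. 5.1: C_ik = q_i * q_k * (p - p^2) for all pairs i != k."""
    qs = np.asarray(qs, dtype=float)
    C = np.outer(qs, qs) * (p - p * p)
    np.fill_diagonal(C, np.nan)  # diagonal (autocovariance) not given by eq. 5.1
    return C
```

Inverting this relation for a given target matrix only succeeds when the off-diagonal targets share this outer-product structure, which is the constraint noted above.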
6 Constraints on Rates and Correlations
While arbitrary values for q ∈ [0, 1] and p ∈ [0, 1] can be chosen if the mean rates of all spike trains are identical (Mikula & Niebur, 2003a), a set of conditions between p and q needs to be satisfied when mean rates differ (for simplicity, we discuss the case of equal pairwise cross-correlation only; the generalization discussed in section 5 does not pose any particular difficulties but complicates the notation). This is intuitively understood by the fact that too large a spread of firing rates does not allow enforcing pairwise strong correlation on all pairs. Formally, the constraint manifests itself by the requirement that q in equation 4.7 and pi in equation 4.6 need to be well-defined probabilities with 0 ≤ q, pi ≤ 1. Equation 4.7 requires²
$$q^2 = \frac{C}{p - p^2} \le 1, \quad \text{that is,} \quad C \le p - p^2. \qquad (6.1)$$
We solve for p to obtain
$$\frac{1}{2} - \sqrt{\frac{1}{4} - C} \;\le\; p \;\le\; \frac{1}{2} + \sqrt{\frac{1}{4} - C}. \qquad (6.2)$$
Equation 4.6 yields
$$r_i \ge q\,p, \qquad (6.3)$$
$$r_i \le 1 - q\,(1 - p). \qquad (6.4)$$
Inequalities 6.3 and 6.4 can be combined as
$$q\,p \;\le\; r_i \;\le\; 1 - q\,(1 - p). \qquad (6.5)$$
For a given p and q, the left inequality 6.5 bounds the rate of the spike train with the smallest ri,

$$r_{i,\min} \ge q\,p, \qquad (6.6)$$

and the right inequality bounds the rate of the spike train with the largest ri,

$$r_{i,\max} \le 1 - q\,(1 - p). \qquad (6.7)$$
In the following, we consider the practically more important reverse question: For a given set of ri, can we find (at least) one p such that the algorithm can be applied, and if so, what are the allowable values for p?
For any given pi and q < 1, a solution of inequality 6.5 exists: substituting ri = (1 − q) pi + q p, the left inequality reduces to 0 ≤ (1 − q) pi and the right one to pi ≤ 1, both of which hold by assumption (the case q = 1 is trivial, leading to N identical spike trains with rate p). The condition does, however, establish a constraint on a set of mean rates ri that can be generated with this procedure, because if their spread becomes too large, it is not possible to find a p that satisfies both inequalities 6.5 for all i.
From the left of the inequalities 6.5 we have q p ≤ ri and thus q² p² ≤ ri², bearing in mind that the positive root is the correct one since p is a probability. Replacing q² in this inequality by C/(p − p²), using equation 3.8, we obtain C p² ≤ ri² (p − p²) and, after division by p (≠ 0), C p ≤ ri² (1 − p), that is,
$$p \;\le\; \frac{r_i^2}{r_i^2 + C}. \qquad (6.8)$$
Thus, p must be smaller than the expression on the right for all ri. For C = 0, the condition is fulfilled for all probabilities p, but for increasing C, it becomes more and more stringent.
Using again equation 3.8, we rewrite the right of the inequalities 6.5 as

$$\frac{C\,(1-p)}{p} \;\le\; (1 - r_i)^2 \qquad (6.9)$$

or

$$p \;\ge\; \frac{1-p}{A}, \qquad (6.10)$$

where A = (1 − ri)²/C is a positive constant.

At equality, this is an equation of the type

$$p = \frac{1-p}{A}, \qquad (6.11)$$

whose solution depends, through A, on ri and C. In the relevant interval p ∈ ]0, 1[, the right-hand side of this equation is a monotonically decreasing function of p with exactly one crossing with the identity. This fixed point is given by p = C[(1 − ri)² + C]⁻¹.
Although a suitable p can always be found for any single pair of values (ri, C), it is possible that the intervals given by conditions 6.8 and 6.10 are nonoverlapping for a given C and a range of values [ri,min, ri,max]. No solution with the specified set of mean rates and pairwise correlation C between all pairs exists in this case.
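The two conditions combine into a quick feasibility check. The sketch below (our helper) uses the upper bound 6.8, which is tightest for the smallest target rate, and the lower bound following from 6.10, which is tightest for the largest:

```python
def allowed_p_interval(rates, C):
    """Return the interval (p_lo, p_hi) of reference rates p compatible
    with the target rates r_i and common covariance C, or None if empty."""
    r_min, r_max = min(rates), max(rates)
    p_hi = r_min ** 2 / (r_min ** 2 + C)        # ineq. 6.8 for the smallest r_i
    p_lo = C / ((1 - r_max) ** 2 + C)           # fixed point of eq. 6.11, largest r_i
    return (p_lo, p_hi) if p_lo <= p_hi else None
```

For the ranges discussed below, [0.3, 0.4] with C = 0.02 yields a wide interval of admissible p, whereas [0.4, 0.6] with C = 0.2 yields none.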
Only if at least one p can be found that simultaneously fulfills equations 6.8 and 6.9 can the algorithm be applied. Figure 1 shows the solutions of inequalities 6.5 as a function of ri. Only those values of p that are between the two lines corresponding to the left inequality (solid line) and the right inequality (dashed line) fulfill both inequalities and are therefore acceptable solutions. If the pairwise correlation is relatively small, as in Figure 1a with C = 0.02, a suitable p value can be chosen for all but the most extreme values of ri, and in this case there is usually a large range of suitable p values. For large correlation, for example, for C = 0.2 as shown in Figure 1b (note that the maximum correlation of C = 0.25 is attained for completely correlated, i.e., identical, spike trains), the constraints become more stringent, and the range of allowable values for p shrinks to zero for a substantial subset of ri (those below and above the crossing points). For such rates, it is impossible to generate sets of correlated spike trains with unequal mean rates; only sets with identical rates, ri = r for all i, are allowed.
Figure 1.
Solutions of the inequalities 6.5 as a function of ri for C = 0.02 (a) and C = 0.2 (b). The solid lines show the limits of the solution (i.e., at equality) for the left inequality; the dashed lines are the equivalent for the right inequality. Acceptable p values are all those between the two curves, that is, above the full lines and below the dashed lines. For a chosen range of mean rates (shaded on the abscissa), the range of acceptable values (shaded on the ordinate) is significantly smaller for C = 0.2 than for C = 0.02. The spread of mean rates is zero for rates ri that are outside the range where the curves cross.
The shaded part of the abscissa (in Figures 1a and 1b) is an example for a range of firing rates [ri,min, ri,max]. The projection onto the ordinate shows that for small correlation (see Figure 1a, C = 0.02), a large range of p values can be chosen (shaded part of the ordinate), while for strong correlation (C = 0.2, in Figure 1b), the range of possible p values is much smaller. For an even larger range, for example, if [ri,min, ri,max] = [0.4, 0.6], it is impossible to generate spike trains with cross-correlation C = 0.2 between each pair of the set.
Figure 2 shows that spike trains computed with the described algorithm have the desired properties. The upper panels of Figure 2 show examples of 300 bins each of 100 spike trains, with, respectively, low correlation (C = 0.02) in the upper left panel and high correlation (C = 0.2) in the upper right. Even at C = 0.02, spike trains are clearly correlated and, as expected, correlation is very strong at C = 0.2. The lower left panel of Figure 2 demonstrates that the actual firing rates, computed as the expectation value of the number of events in equation 3.2 and plotted on the ordinate, scatter closely around the selected values (on the abscissa). For illustration, we generated 20 spike trains with 1000 bins each with firing rates varying in equidistant steps, with ri ∈ [0.3, 0.4] for C = 0.02 and ri ∈ [0.5, 0.6] for C = 0.2 (these intervals were chosen simply so we could show both data sets in one plot). In both cases, p = 0.58 was chosen, which is from the allowable range in Figure 1. The bottom right panel of Figure 2 is a plot of the cross-correlation values between all these 20 spike trains (that is, the expectation value computed in equation 3.4) plotted as a function of the parameter C. Again, it is seen that the cross-correlation of the simulated spike trains scatters around the chosen value.
Figure 2.
Properties of simulated spike trains. (Upper left) Raster plot of 100 spike trains with p = 0.2 and C = 0.02. (Upper right) Same, but C = 0.2. (Bottom left) Measured rates of 40 simulated spike trains plotted against nominal rates. For illustration, rates are shown for 20 spike trains with equidistant rates in the range [0.3, 0.4] for C = 0.02 (+ signs) and from the range [0.5, 0.6] for C = 0.2 (x signs). (Lower right) Average cross-correlation between spike trains plotted as a function of C. It can be seen that the observed cross-correlation, computed from equation 3.4, scatters around the nominal value.
7 Discussion
A thorough understanding of biological neural systems requires the development of quantitative models, both experimental and theoretical. This is of particular importance for understanding neural microcircuitry, one of the most complex and at the same time practically and philosophically important structures known. Both theoretical (computational) and experimental models of neural tissue need a principled way of stimulating the model substrate, in vitro or in silico. Recent reports demonstrate substantial progress in overcoming the technical difficulties of delivering stimuli with high spatial and temporal resolution into the tissue. While the simplest stimulation patterns might be independently distributed random processes, the next stages of investigation will surely demand more structured patterns.
This letter introduces an efficient and straightforward algorithm for constructing excitation patterns consisting of stochastic processes with controlled mean activity and cross-correlations between all pairs of processes. The algorithm is highly efficient and requires on the order of N × B operations, where N is the number of spike trains and B the number of time bins per spike train. Remarkably, its complexity is thus linear in N, even though the number of cross-correlations generated increases as O(N²). Furthermore, the algorithm lends itself to parallelization up to the finest grain (individual spike trains and individual bins).
An attractive alternative approach for generating correlated sets of model spike trains is based on maximum entropy principles. The basic idea (perhaps most clearly described by Jaynes, 1957, but going back to Gibbs and Laplace) is that the most rational course of action is to avoid making assumptions in the absence of information. Applied to the question of spike train statistics, it has been argued that spike trains with controlled second-order correlations should be generated with maximum entropy in all higher orders. This means that all but second-order correlations must be minimal since higher-order correlations would introduce structure and therefore lower-than-maximum entropy.
Algorithms for spike train generation and analysis based on maximum entropy principles (e.g., Bohte, Spekreijse, & Roelfsema, 2000; Nakahara & Amari, 2002) are powerful methods for characterizing ensembles of spike trains if higher-order correlations are either known to vanish, or if the structure of the higher-order correlations is known (and can thus be taken into account), or if absolutely no information about them is available. However, none of these three cases is the situation considered here, namely, when we want to generate spike trains with correlational structure similar to that found in biological neural systems. Although we assume that only second-order correlations are explicitly known, as is usually the case (some exceptions in which higher-order correlations are observed and analyzed can be found in Gerstein & Clark, 1964; Abeles & Goldstein, 1977; Abeles & Gerstein, 1988; Abeles, 1991; Martignon, von Hasseln, Grün, Aertsen, & Palm, 1995; Riehle, Grün, Diesmann, & Aertsen, 1997), and that we have no detailed information about higher-order correlations, we do know that the spike trains are generated in a highly connected neural network. For this reason, it is unlikely that no correlations exist between them at any higher than second order, which is the assumption made by maximum entropy methods. Applying maximum entropy methods with vanishing higher-order terms in this situation would then violate the basic rationale for using these methods. Paraphrasing from Jaynes (1957), the goal is to generate a probability distribution that correctly represents our state of knowledge about the system, or finding a probability assignment that avoids bias, while agreeing with whatever information is given.
It is always possible to use a maximum entropy approach, enforcing vanishing higher-order correlations, in a case in which insufficient information about higher-order correlations is available. In this study, we adopt a different strategy, with the explicit goal of taking into account all available information, including the likely presence of higher-order correlations. Although we have no detailed knowledge about those correlations in the absence of sufficient experimental data, we at least know that such correlations are likely to exist in a hierarchically connected complex network.
The solution adopted in this study is to generate spike trains whose higher-order correlations correspond to those found in neural systems whose correlations are due to common input. Although this is by no means the only possible source of correlations (examples for others being direct and indirect interactions between neurons, as was observed more than a generation ago; Moore, Segundo, Perkel, & Gerstein, 1970), common input is widely recognized to be an important source of correlation in spike trains. The spike trains generated by the methods developed here are thus particularly suited as models of neural activity where neurons are influenced by spiking input from a common source.
While we have focused on pairwise cross-correlations in this work, it is possible to make the mean rates of all processes time variant (by adjusting the parameters pi) and thus also to vary their autocorrelations. This allows, for instance, modeling absolute and relative refractory periods by temporarily lowering the spike rate of a process after a spike has occurred in this process. Figure 3 shows an example of correlation functions for a process with an absolute refractory period of two bins. Relative refractory periods are implemented in the same way but with a finite lowered probability of firing after each spike. The formulas for the computation of rates and correlations, equations 4.6 and 4.7, remain unchanged, with N reduced by the number of neurons that are in the absolute refractory period, and with ri in equation 4.6 replaced by a lowered rate (subject to the constraints in section 6) for neurons in the relative refractory period. Figure 3 shows, as expected, vanishing entries in the autocorrelation function (squares) during the absolute refractory period, which is reflected in smaller cross-correlation (crosses) because of the finite correlation between the spike trains. The figure also shows that refractoriness decreases the mean firing rate (from 0.2 to about 0.14 in this case) and leads to an increase of the autocorrelation function immediately following the refractory period, with a corresponding peak in the power spectrum (Bair, Koch, Newsome, & Britten, 1994; Franklin & Bair, 1995). This peak, more noticeable for higher firing rates where higher-order peaks also can be observed (data not shown), is again reflected in attenuated form in the cross-correlation function. We note that the occurrence of these structures is a matter of principle (Bair et al., 1994; Franklin & Bair, 1995) and not due to the specifics of our implementation.
Figure 3.
Correlation functions with refractory period. Shown are the average autocorrelation functions (squares) of 20 spike trains and the average cross-correlation functions (crosses) between them, with an absolute refractory period of 2 bins. All processes have mean rates of p = 0.2, and the correlation between spike trains is C = 0.1. The refractory period introduces a depression not only in the autocorrelation but, because of the correlation between spike trains, also in the cross-correlation. Also note the increase in correlation immediately following the refractory period in both functions.
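As an illustration of the mechanism described above, the following sketch generates a single binary spike train with absolute and relative refractory periods implemented by temporarily lowering the firing probability after each spike. It is a minimal stand-alone sketch, not the paper's correlated-population algorithm; the function and parameter names (`refractory_train`, `abs_ref`, `rel_ref`, `rel_factor`) are illustrative.

```python
import numpy as np

def refractory_train(n_bins, p, abs_ref=2, rel_ref=0, rel_factor=0.3, seed=None):
    """Binary spike train (one 0/1 entry per bin) with refractoriness:
    the firing probability is 0 for `abs_ref` bins after each spike
    (absolute refractory period), `rel_factor * p` for the following
    `rel_ref` bins (relative refractory period), and p otherwise."""
    rng = np.random.default_rng(seed)
    train = np.zeros(n_bins, dtype=int)
    since = abs_ref + rel_ref  # bins elapsed since the last spike; start "recovered"
    for t in range(n_bins):
        if since < abs_ref:
            prob = 0.0                 # absolute refractory period
        elif since < abs_ref + rel_ref:
            prob = rel_factor * p      # relative refractory period
        else:
            prob = p
        if rng.random() < prob:
            train[t] = 1
            since = 0
        else:
            since += 1
    return train
```

For an absolute refractory period only, the mean interspike interval of this sketch is abs_ref + 1/p bins, so the rate drops from p to p/(1 + abs_ref · p); for p = 0.2 and abs_ref = 2 this gives about 0.14, consistent with the reduction reported above.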
The possibility of varying the firing rates independently (rather than for all spike trains in unison) and the resulting requirement that bins be independent alleviate limitations of our older algorithm (Mikula & Niebur, 2003a). Experimentally observed cross-correlograms frequently have broad maxima rather than a sharp peak confined to the center bin (as in, for example, Mikula & Niebur, 2003a; Destexhe & Pare, 1999). Spike jitter can be introduced by allowing transitions into a given bin of a spike train not only from the corresponding bin in the reference spike train but also (with smaller probability) from the neighboring bins. The probabilities for the states are computed as in equation 4.1, and, for a stationary situation (i.e., pi and jitter distribution independent of time), the rates (see equation 4.2) are unchanged. The width of the jitter distribution determines the width of the resulting correlation function. Some integral over the peak width is usually employed as a measure of synchrony, and the one-bin definition of synchrony (see equation 4.3) is then replaced by a more complex expression involving this measure.
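The jitter scheme just described can be sketched as follows. This is an illustrative stand-in, not the paper's transition probabilities of equation 4.1: a symmetric convolution of the reference train with a jitter distribution plays the role of the transitions from neighboring bins, and `jittered_copy`, `p_follow`, and `jitter_probs` are hypothetical names.

```python
import numpy as np

def jittered_copy(reference, p_follow, jitter_probs, seed=None):
    """Generate a spike train correlated with `reference`, where a target
    bin can be driven not only by the corresponding reference bin but
    also, with smaller probability, by its neighbors. `jitter_probs` is a
    symmetric, odd-length distribution over offsets (center = same bin);
    its width sets the width of the cross-correlation peak."""
    rng = np.random.default_rng(seed)
    # Probability that some (possibly shifted) reference spike drives each bin.
    drive = np.convolve(reference, jitter_probs, mode="same")
    return (rng.random(len(reference)) < p_follow * drive).astype(int)
```

With, say, jitter_probs = (0.25, 0.5, 0.25), the cross-correlogram between `reference` and the copy acquires a three-bin-wide peak instead of a single sharp center bin, while the copy's mean rate stays near p_follow times the reference rate (the jitter weights sum to one).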
As a general observation, we established in section 6 that there are combinations of rates and pairwise cross-correlations that cannot be achieved, namely, high cross-correlations combined with a large spread in the rates of the individual spike trains. Within the limits described in sections 5 and 6, pairwise cross-correlations and mean firing rates can be chosen freely. A free parameter in the procedure is the bin width, which determines the relation between the probability of the 1 event in the stochastic process and the mean firing rate of the generated spike train: probability = mean firing rate × bin width (see also note 1). The only formal requirement in the choice of the bin width is that there cannot be more than one event in any given bin; the actual choice will depend on constraints provided by the system under study. For computational models, a suitable bin size could be the inverse of the maximal rate at which the neurons are known to fire; for the generation of artificial spike trains to be used in microstimulation, it could be the inverse of the maximal stimulation frequency.
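The bin-width bookkeeping above amounts to a one-line conversion, sketched here with an illustrative function name; the check enforces the requirement that at most one event falls in any bin.

```python
def rate_to_bin_probability(rate_hz, bin_width_s):
    """Per-bin probability of the 1 event for a desired mean firing rate:
    p = rate * bin_width. The bin width must be small enough that p <= 1,
    i.e., that no more than one spike can fall in any given bin."""
    p = rate_hz * bin_width_s
    if not 0.0 <= p <= 1.0:
        raise ValueError(f"bin width too large for rate {rate_hz} Hz: p = {p}")
    return p
```

For example, a 40 Hz train discretized into 5 ms bins corresponds to p = 0.2, whereas a 500 Hz rate at the same bin width would demand p = 2.5 and is rejected.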
The spike trains introduced here can be used in a variety of situations that we have previously explored with spike trains of identical rates, for example, to study the effect of spike-timing-dependent plasticity (Mikula & Niebur, 2003b), the interplay of excitation and inhibition (Mikula & Niebur, 2004), and feedforward networks of coincidence detectors (Mikula & Niebur, 2005). The ease of generating large numbers of synthetic spike trains with a highly efficient algorithm should prove useful in a wide range of applications. On request, the author will make available sets of spike trains with the described statistics.
Acknowledgments
I thank Emery Brown for discussions. The work was supported by NIH grants NS43188-01A1, 5R01EY016281-02, and R01-NS40596.
Footnotes
We use the terms firing probability and mean firing rate interchangeably since they are proportional, the factor of proportionality being the bin width, a constant. See also section 7.
As noted at the end of section 3, these constraints do not apply if all mean rates are identical, pi = p for all i = 1, …, N, since in that case pi = ri = p in equation 4.6, and the equation becomes an identity. The same is true for the trivial cases of q = 0 or q = 1.
References
- Abeles M. Local cortical circuits. New York: Springer-Verlag; 1982.
- Abeles M. Corticonics: Neural circuits of the cerebral cortex. Cambridge: Cambridge University Press; 1991.
- Abeles M, Gerstein G. Detecting spatiotemporal firing patterns among simultaneously recorded single neurons. J Neurophysiol. 1988;60:909–924. doi: 10.1152/jn.1988.60.3.909.
- Abeles M, Goldstein M. Multispike train analysis. Proc IEEE. 1977;65:762–773.
- Adrian ED, Zotterman Y. The impulses produced by sensory nerve endings. Part 2: The response of a single end organ. J Physiol. 1926;61:151–171. doi: 10.1113/jphysiol.1926.sp002281.
- Bair W, Koch C, Newsome W, Britten K. Power spectrum analysis of bursting cells in area MT in the behaving monkey. J Neurosci. 1994;14(5):2870–2892. doi: 10.1523/JNEUROSCI.14-05-02870.1994.
- Bohte SM, Spekreijse H, Roelfsema PR. The effects of pairwise and higher-order correlations on the firing rate of a postsynaptic neuron. Neural Computation. 2000;12:153–179. doi: 10.1162/089976600300015934.
- Boucsein C, Nawrot M, Rotter S, Aertsen A, Heck D. Controlling synaptic input patterns in vitro by dynamic photo stimulation. J Neurophysiol. 2005;94(4):2948–2958. doi: 10.1152/jn.00245.2005.
- Callaway EM, Katz LC. Photostimulation using caged glutamate reveals functional circuitry in living brain slice. Proc Nat Acad Sci USA. 1993;90:7661–7665. doi: 10.1073/pnas.90.16.7661.
- Cohen MR, Newsome WT. What electrical microstimulation has revealed about the neural basis of cognition. Curr Opin Neurobiol. 2004;14(2):169–177. doi: 10.1016/j.conb.2004.03.016.
- Destexhe A, Pare D. Impact of network activity on the integrative properties of neocortical pyramidal neurons in vivo. J Neurophysiol. 1999;81:1531–1547. doi: 10.1152/jn.1999.81.4.1531.
- Franklin J, Bair W. The effect of a refractory period on the power spectrum of neuronal discharge. SIAM J Appl Math. 1995;55(4):1074–1093.
- Gerstein G, Clark W. Simultaneous studies of firing patterns in several neurons. Science. 1964;143:1325–1327.
- Gray C, Singer W. Stimulus-specific neuronal oscillations in orientation columns of cat visual cortex. Proc Nat Acad Sci USA. 1989;86:1698–1702. doi: 10.1073/pnas.86.5.1698.
- Jaynes ET. Information theory and statistical mechanics. Physical Review. 1957;106(4):620–630.
- Martignon L, von Hasseln H, Grun S, Aertsen A, Palm G. Detecting higher-order interactions among the spiking events in a group of neurons. Biological Cybernetics. 1995;73(1):69–81. doi: 10.1007/BF00199057.
- Miesenbock G, Kevrekidis IG. Optical imaging and control of genetically designated neurons in functioning circuits. Annu Rev Neurosci. 2005;28:533–563. doi: 10.1146/annurev.neuro.28.051804.101610.
- Mikula S, Niebur E. The effects of input rate and synchrony on a coincidence detector: Analytical solution. Neural Computation. 2003a;15:539–547. doi: 10.1162/089976603321192068.
- Mikula S, Niebur E. Synaptic depression leads to nonmonotonic frequency dependence in the coincidence detector. Neural Computation. 2003b;15(10):2339–2358. doi: 10.1162/089976603322362383.
- Mikula S, Niebur E. Correlated inhibitory and excitatory inputs to the coincidence detector: Analytical solution. IEEE Transactions on Neural Networks. 2004;15(5):957–962. doi: 10.1109/TNN.2004.832708.
- Mikula S, Niebur E. Rate and synchrony in feedforward networks of coincidence detectors: Analytical solution. Neural Computation. 2005;17(4):881–902. doi: 10.1162/0899766053429408.
- Moore G, Segundo J, Perkel D, Gerstein G. Statistical signs of synaptic interactions in neurons. Biophysical Journal. 1970;10:876–900. doi: 10.1016/S0006-3495(70)86341-X.
- Nakahara H, Amari S. Information-geometric measure for neural spikes. Neural Computation. 2002;14(10):2269–2316. doi: 10.1162/08997660260293238.
- Niebur E, Koch C. A model for the neuronal implementation of selective visual attention based on temporal correlation among neurons. Journal of Computational Neuroscience. 1994;1(1):141–158. doi: 10.1007/BF00962722.
- Niebur E, Koch C, Rosin C. An oscillation-based model for the neural basis of attention. Vision Research. 1993;33:2789–2802. doi: 10.1016/0042-6989(93)90236-p.
- Normann RA, Maynard EM, Rousche PJ, Warren DJ. A neural interface for a cortical vision prosthesis. Vision Research. 1999;39(15):2577–2587. doi: 10.1016/s0042-6989(99)00040-1.
- Ogata Y. On Lewis simulation method for point processes. IEEE Transactions on Information Theory. 1981;27:23–31.
- Riehle A, Grün S, Diesmann M, Aertsen A. Spike synchronization and rate modulation differentially involved in motor cortical function. Science. 1997;278:1950–1953. doi: 10.1126/science.278.5345.1950.
- Sekirnjak C, Hottowy P, Sher A, Dabrowski W, Litke AM, Chichilnisky EJ. Electrical stimulation of mammalian retinal cells with multielectrode arrays. J Neurophysiol. 2006;95(6):3311–3327. doi: 10.1152/jn.01168.2005.
- Shepherd GM, Pologruto TA, Svoboda K. Circuit analysis of experience-dependent plasticity in the developing rat barrel cortex. Neuron. 2003;38(2):277–290. doi: 10.1016/s0896-6273(03)00152-1.
- Shoham S, O’Connor DH, Sarkisov DV, Wang SS. Rapid neurotransmitter uncaging in spatially defined patterns. Nature Methods. 2005;2(11):837–843. doi: 10.1038/nmeth793.
- Singer W. Neuronal synchrony: A versatile code for the definition of relations? Neuron. 1999;24:49–65. doi: 10.1016/s0896-6273(00)80821-1.
- Stroeve S, Gielen S. Correlation between uncoupled conductance-based integrate-and-fire neurons due to common and synchronous presynaptic firing. Neural Computation. 2001;13:2005–2029. doi: 10.1162/089976601750399281.
- Thompson SM, Kao JP, Kramer RH, Poskanzer KE, Silver RA, Digregorio D, Wang SS. Flashy science: Controlling neural function with light. J Neurosci. 2005;25(45):10358–10365. doi: 10.1523/JNEUROSCI.3515-05.2005.
- Tiesinga PHE, Sejnowski TJ. Rapid temporal modulation of synchrony by competition in cortical interneuron networks. Neural Computation. 2004;16:251–275. doi: 10.1162/089976604322742029.



