. Author manuscript; available in PMC: 2009 Nov 25.
Published in final edited form as: Neural Comput. 2008 Jun;20(6):1495–1511. doi: 10.1162/neco.2007.10-06-350

A new multi-neuron spike-train metric

Conor Houghton 1,2,* and Kamal Sen 2
PMCID: PMC2782407  NIHMSID: NIHMS156538  PMID: 18194108

Abstract

The Victor-Purpura spike-train metric has recently been extended to a family of multi-neuron metrics and used to analyze spike trains recorded simultaneously from pairs of proximate neurons. The Victor-Purpura metric is one of the two metrics commonly used for quantifying the distance between two spike trains; the other is the van Rossum metric. Here, we suggest an extension of the van Rossum metric to a multi-neuron metric. We believe this gives a metric which is both natural and easy to calculate. Both types of multi-neuron metric are applied to simulated data and are compared.

1 Introduction

A key current challenge is understanding how neurons in the primary sensory regions encode their input stimuli. One technique is to use a spike-train metric to cluster the responses to a set of stimuli. By assessing how well the clusters match the stimuli it is possible to tell how successfully a given metric utilizes the most salient features in a neuronal response.

This technique is used, for example, in (Narayan et al., 2006) as a tool to investigate the response of songbird neurons to songs. In the experiment, recordings are made of the neuronal responses of field L neurons of the anesthetized zebra finch to 10 repetitions each of 20 conspecific songs. The van Rossum metric (van Rossum, 2001) is then used to work out the distance between pairs of spike trains and a simplified k-means-type clustering algorithm is used to divide the 200 spike trains into 20 clusters.

To calculate the van Rossum metric the spike train is first filtered to form a function: the spike train, considered as a list of spike times t = (t1, t2, · · · , tn) is mapped to a real function using a kernel h(t):

F : \text{spike trains} \rightarrow \text{real functions}, \qquad \mathbf{t} = (t_1, t_2, \ldots, t_n) \mapsto F(\mathbf{t}) = \sum_{i=1}^{n} h(t - t_i).  (1)

The distance between two spike trains, t1 and t2, is taken to be the distance between the two corresponding functions, using, for example, the standard L2 metric on the space of real functions:

d(\mathbf{t}_1, \mathbf{t}_2; h) = \sqrt{\int dt\, (f_1 - f_2)^2}  (2)

where f_1(t) = F(\mathbf{t}_1) and f_2(t) = F(\mathbf{t}_2).

One common choice of kernel is the decaying exponential

h(t) = \begin{cases} 0 & t < 0 \\ \frac{1}{\tau} e^{-t/\tau} & t \ge 0 \end{cases}  (3)

In (Narayan et al., 2006), this kernel is used with different values of the time constant τ. Different τ values give different metrics and different clusters. By calculating which clustering most accurately groups the spike trains by song stimulus, it is found that the best metric corresponds to τ ≈ 12 ms. This, it is argued, gives an indication of the timescales involved in neuronal responses in this part of the Zebra finch brain.
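As an illustration, here is a minimal Python sketch of the van Rossum construction in Eqs. (1)–(3) on a discretized time grid; the function names and grid parameters are our own choices, not from the paper.

```python
import numpy as np

def filter_spike_train(spikes, tau, dt, duration):
    """Map a spike train to a function via the causal exponential
    kernel h(t) = (1/tau) exp(-t/tau) for t >= 0 (Eq. 3)."""
    t = np.arange(0.0, duration, dt)
    f = np.zeros_like(t)
    for ti in spikes:
        m = t >= ti
        f[m] += np.exp(-(t[m] - ti) / tau) / tau
    return f

def van_rossum_distance(spikes1, spikes2, tau, dt=1e-4, duration=2.0):
    """L2 distance between the filtered spike trains (Eq. 2)."""
    f1 = filter_spike_train(spikes1, tau, dt, duration)
    f2 = filter_spike_train(spikes2, tau, dt, duration)
    return np.sqrt(np.sum((f1 - f2) ** 2) * dt)

# Identical trains are at zero distance; the distance grows as spikes move apart.
d_same = van_rossum_distance([0.1, 0.5], [0.1, 0.5], tau=0.012)
d_near = van_rossum_distance([0.1, 0.5], [0.1, 0.52], tau=0.012)
d_far = van_rossum_distance([0.1, 0.5], [0.1, 0.9], tau=0.012)
```

The time constant 12 ms matches the optimum reported above; the distance saturates once spikes are separated by much more than τ.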

Another useful metric was introduced in (Victor and Purpura, 1996; Victor and Purpura, 1997). The Victor-Purpura distance between two spike trains is calculated by considering how one spike train could be transformed into the other using a sequence of small moves: deleting or adding a spike or moving a spike to another location. A cost of one is associated with adding or deleting a spike and a cost of q|δt| is associated with moving a spike by a time interval δt. The Victor-Purpura distance is then the cost of transforming one spike train into the other in the cheapest possible way.

The q parameter determines the cost of moving a spike. For q = 0 there is no cost and the Victor-Purpura distance between two spike trains is just the difference in the number of spikes. For q very large, in contrast, it is cheaper to delete and create spikes than to move a spike from one location to another and so the distance between two spike trains is the total number of non-coincident spikes. Roughly speaking, 2/q gives the time-difference interval for which two spikes can be considered to be related by jitter.
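The cheapest sequence of moves can be found with a standard edit-distance dynamic program; the following Python sketch is our own illustration of the costs just described, not code from the paper.

```python
def victor_purpura_distance(t1, t2, q):
    """Victor-Purpura distance between sorted spike-time lists t1, t2:
    cost 1 to add or delete a spike, q*|dt| to move one."""
    n, m = len(t1), len(t2)
    # G[i][j] = distance between the first i spikes of t1 and first j of t2.
    G = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        G[i][0] = float(i)        # delete all i spikes
    for j in range(1, m + 1):
        G[0][j] = float(j)        # add all j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i][j] = min(G[i - 1][j] + 1.0,                  # delete a spike
                          G[i][j - 1] + 1.0,                  # add a spike
                          G[i - 1][j - 1]
                          + q * abs(t1[i - 1] - t2[j - 1]))   # move a spike
    return G[n][m]
```

For q = 0 this reduces to the difference in spike counts, and for very large q to the number of non-coincident spikes, as described above.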

It should be determined experimentally whether the Victor-Purpura or the van Rossum is the more suitable spike-train metric. One nice feature of the Victor-Purpura metric is that it allows unreliability and jitter to be distinguished in the spiking response to repeated presentations of the same stimuli. On the other hand, the metric explicitly pairs spikes in the two spike trains; it is not clear whether this is a good property for a spike-train metric to have.

Because it involves a kernel function and a choice of metric on the function space, the van Rossum metric is very flexible. In fact, numerically, the Victor-Purpura distance is very similar to an L1 van Rossum distance:

d(\mathbf{t}_1, \mathbf{t}_2; h = \delta_{2/q}) = \int dt\, |f_1 - f_2|  (4)

where the kernel δ2/q(t) is the block function

\delta_{2/q}(t) = \begin{cases} q/2 & 0 \le t < 2/q \\ 0 & \text{otherwise} \end{cases}  (5)

and q is the time-scale parameter in the Victor-Purpura metric. By considering simple cases, with small numbers of spikes, it is easy to see why this might be the case. We have checked that the two metrics are numerically similar for real data by computing the distance between 400 pairs of spike trains chosen at random from the collection of field L spike trains described in (Narayan et al., 2006). We have found that the average difference between the Victor-Purpura distance and the van Rossum distance, with 2/q = 20 ms, is 2.54% with a standard deviation of 0.08%.
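To illustrate the comparison, here is a minimal Python sketch (our own, not from the paper) of the L1 van Rossum distance with the block kernel of Eq. (5); for spikes further apart than 2/q it recovers the Victor-Purpura cost of one deletion plus one insertion.

```python
import numpy as np

def block_filter(spikes, q, dt, duration):
    """Filter a spike train with the block kernel of Eq. (5):
    height q/2 on [0, 2/q), zero elsewhere."""
    t = np.arange(0.0, duration, dt)
    f = np.zeros_like(t)
    for ti in spikes:
        f[(t >= ti) & (t < ti + 2.0 / q)] += q / 2.0
    return f

def l1_block_distance(t1, t2, q, dt=5e-4, duration=2.0):
    """L1 van Rossum distance with the block kernel (Eq. 4)."""
    f1 = block_filter(t1, q, dt, duration)
    f2 = block_filter(t2, q, dt, duration)
    return np.sum(np.abs(f1 - f2)) * dt

# Two spikes separated by more than 2/q each contribute unit area,
# giving a distance close to 2: a deletion plus an insertion.
d = l1_block_distance([0.1], [0.5], q=100.0)
```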

It might be also interesting to consider a generalized Victor-Purpura metric which gains the flexibility of the van Rossum metric by changing the cost of moving a spike from q|δt| to f(|δt|) for some function f.

2 Multi-neuron metrics

It is pointed out in (Aronov et al., 2003) that it is useful to apply the clustering technique to simultaneous recordings from proximate neurons. This could help to uncover how populations of neurons cooperate to encode a sensory input. In order to cluster the response of, for example, a pair of neurons, a two-neuron metric must be defined. In (Aronov et al., 2003; Aronov, 2003) a family of multi-neuron metrics is defined; we will call these the ARMV metrics. In the ARMV metrics, spikes are labelled by the neuron that fired them, but this label can be changed at a cost of k, where k is a parameter. Hence, the ARMV distance between two sets of labelled spike trains is the cheapest set of small moves transforming one set into the other, where, now, the small moves are adding or deleting a spike at a cost of one, moving a spike by an interval δt at a cost of q|δt| and relabelling a spike at a cost of k.

In (Aronov et al., 2003) two different coding strategies for neuron populations are distinguished: a ‘summed population code’ (SP) metric, where the two spike trains from the two neurons are super-imposed before the distance is calculated, and a ‘labeled line code’ (LL) metric, where the distance is measured for each neuron separately and then added. These two possibilities correspond to the metrics at either end of the one-parameter family of ARMV metrics obtained by varying k. If k = 0 there is no cost to relabelling a spike and so this is an SP metric. On the other hand, if k ≥ 2 it costs more to relabel a spike than to delete a spike from one spike train and to add a spike in the other. Hence, for k ≥ 2 the ARMV distance between the two sets of spike trains is the same as the sum of the individual Victor-Purpura distances. Therefore, this is an LL metric.

To illustrate, the SP and LL two-neuron metrics constructed from the van Rossum spike-train metric will be considered. Let t1 and u1 be the responses of the two neurons to the first stimulus presentation and t2 and u2 the responses to the second, and define the corresponding functions

f_i(t) = F(\mathbf{t}_i), \qquad g_i(t) = F(\mathbf{u}_i)  (6)

with i = 1, 2. Since an SP metric first superimposes the spike trains, the two-neuron SP distance between the responses to the two stimuli is given by

d[(\mathbf{t}_1, \mathbf{u}_1), (\mathbf{t}_2, \mathbf{u}_2); h, \mathrm{SP}] = \sqrt{\int dt\, (f_1 + g_1 - f_2 - g_2)^2}  (7)

whereas the LL distance is the sum of the distances between stimuli for each neuron:

d[(\mathbf{t}_1, \mathbf{u}_1), (\mathbf{t}_2, \mathbf{u}_2); h, \mathrm{LL}] = \sqrt{\int dt\, (f_1 - f_2)^2 + \int dt\, (g_1 - g_2)^2}  (8)

where, because this is an L2 metric, the sum has been taken using the Theorem of Pythagoras. The purpose of this paper is to define a natural one-parameter family of metrics interpolating between these two possibilities. For simplicity, we will construct the two-neuron metrics in the next section; the multi-neuron metric is constructed in a similar way and is described in appendix 1.
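As a concrete illustration of Eqs. (7) and (8), the following Python sketch (function names and parameter choices are ours, not the paper's) computes both two-neuron distances on a discretized time grid.

```python
import numpy as np

def exp_filter(spikes, tau=0.012, dt=1e-3, duration=2.0):
    """Filter a spike train with the exponential kernel of Eq. (3)."""
    t = np.arange(0.0, duration, dt)
    f = np.zeros_like(t)
    for ti in spikes:
        m = t >= ti
        f[m] += np.exp(-(t[m] - ti) / tau) / tau
    return f

def sp_distance(t1, u1, t2, u2, dt=1e-3):
    """Summed-population distance (Eq. 7): superimpose, then compare."""
    diff = exp_filter(t1) + exp_filter(u1) - exp_filter(t2) - exp_filter(u2)
    return np.sqrt(np.sum(diff ** 2) * dt)

def ll_distance(t1, u1, t2, u2, dt=1e-3):
    """Labelled-line distance (Eq. 8): per-neuron distances in quadrature."""
    df = exp_filter(t1) - exp_filter(t2)
    dg = exp_filter(u1) - exp_filter(u2)
    return np.sqrt(np.sum(df ** 2) * dt + np.sum(dg ** 2) * dt)

# Swapping which neuron fired is invisible to SP but not to LL.
d_sp = sp_distance([0.5], [], [], [0.5])
d_ll = ll_distance([0.5], [], [], [0.5])
```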

3 The metrics

The idea behind the new metrics is to map the filtered spike trains fa and ga into the space of two-dimensional vector fields. The space of two-dimensional vector fields on the line

\{ (f, g) : f(t) \text{ and } g(t) \text{ are real functions} \}  (9)

can be thought of as the space of ordered pairs of functions. The space has an obvious norm, in which the length is given by taking the usual Euclidean norm on the two-dimensional vector space and the L2 norm on the function space:

\| (f, g) \|_2 = \sqrt{\int dt\, (f^2 + g^2)}.  (10)

Now, if we associate each of the neurons with a two-dimensional unit vector, e, we can map its spike trains into the space of two-dimensional vector fields

F_{\mathbf{e}} : \text{spike trains} \rightarrow \text{two-dimensional vector fields}, \qquad \mathbf{t} = (t_1, t_2, \ldots, t_n) \mapsto F_{\mathbf{e}}(\mathbf{t}) = F(\mathbf{t})\, \mathbf{e} = \left( \sum_{i=1}^{n} h(t - t_i) \right) \mathbf{e}.  (11)

So, for example, if we assign the unit vector

\mathbf{e}_0 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}  (12)

to the first neuron then a spike train t = (t1, t2, · · · , tn) from that neuron is mapped to the vector field

F_{\mathbf{e}_0}(\mathbf{t}) = \begin{pmatrix} \sum_{i=1}^{n} h(t - t_i) \\ 0 \end{pmatrix}.  (13)

The two-neuron metric being proposed here is calculated by assigning different unit vectors to each of the two neurons. The metrics are parameterized by the angle between these vectors. It will be seen that varying this angle from zero to π/2 interpolates between the SP and LL metrics described above. For simplicity, the unit vector e0 is assigned to the first neuron, and the vector

\mathbf{e}_\theta = \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix}  (14)

to the second. The total response vector field r is then formed by adding together the two vector fields associated with the two neurons. Hence, using the same notation as above, we have two response vector fields, one for each presentation

\mathbf{r}_1 = f_1 \mathbf{e}_0 + g_1 \mathbf{e}_\theta, \qquad \mathbf{r}_2 = f_2 \mathbf{e}_0 + g_2 \mathbf{e}_\theta.  (15)

The distance between the two pairs of spike trains, (t1, u1) and (t2, u2), is defined as the distance between the two corresponding vector fields, given by the norm above:

d[(\mathbf{t}_1, \mathbf{u}_1), (\mathbf{t}_2, \mathbf{u}_2); h, \theta] = \| \mathbf{r}_1 - \mathbf{r}_2 \|_2.  (16)

Expanding in terms of the f_a and g_a,

d[(\mathbf{t}_1, \mathbf{u}_1), (\mathbf{t}_2, \mathbf{u}_2); h, \theta]^2 = \int dt\, \bigl(\delta f^2 + \delta g^2 + 2c\, \delta f\, \delta g\bigr) = d[(\mathbf{t}_1, \mathbf{u}_1), (\mathbf{t}_2, \mathbf{u}_2); h, \mathrm{LL}]^2 + 2c \int dt\, \delta f\, \delta g  (17)

where δf = f1 − f2, δg = g1 − g2 and c = cos θ. It is clear from this formula that

d[(\mathbf{t}_1, \mathbf{u}_1), (\mathbf{t}_2, \mathbf{u}_2); h, \theta = 0] = d[(\mathbf{t}_1, \mathbf{u}_1), (\mathbf{t}_2, \mathbf{u}_2); h, \mathrm{SP}]
d[(\mathbf{t}_1, \mathbf{u}_1), (\mathbf{t}_2, \mathbf{u}_2); h, \theta = \pi/2] = d[(\mathbf{t}_1, \mathbf{u}_1), (\mathbf{t}_2, \mathbf{u}_2); h, \mathrm{LL}]  (18)

and that the parameter θ interpolates between these by varying the contribution the correlation ∫ dt δf δg makes to the metric.
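The interpolation can be checked numerically; this Python sketch (ours, with illustrative parameter choices) evaluates the distance through the expansion in Eq. (17).

```python
import numpy as np

def theta_distance(t1, u1, t2, u2, theta, tau=0.012, dt=1e-3, duration=2.0):
    """Two-neuron distance of Eq. (17); theta = 0 gives the SP metric
    and theta = pi/2 the LL metric."""
    def f(spikes):
        t = np.arange(0.0, duration, dt)
        out = np.zeros_like(t)
        for ti in spikes:
            m = t >= ti
            out[m] += np.exp(-(t[m] - ti) / tau) / tau
        return out
    df = f(t1) - f(t2)
    dg = f(u1) - f(u2)
    c = np.cos(theta)
    return np.sqrt(np.sum(df ** 2 + dg ** 2 + 2.0 * c * df * dg) * dt)

# A swapped spike is free at theta = 0 and maximally costly at theta = pi/2,
# with intermediate angles in between.
d0 = theta_distance([0.5], [], [], [0.5], theta=0.0)
dm = theta_distance([0.5], [], [], [0.5], theta=np.pi / 4)
d90 = theta_distance([0.5], [], [], [0.5], theta=np.pi / 2)
```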

The two-neuron metrics can be generalized to multi-neuron metrics by using higher dimensional vectors: for an n-neuron metrics the two-dimensional vectors are replaced by n-dimensional vectors. This is discussed in appendix 1.

It is possible that θ > π/2 metrics will also prove useful. For example, in the θ = π case, common spikes do not contribute to the distance; this may be of interest when the neurons receive a common noisy drive, or are subject to common excitability fluctuations, while the signal is coded in the difference in their behavior.

It is easy to use this prescription to form other, similar, metrics. For example, there is an L1 version which uses an L1 norm on functions and the l1 norm on the two-dimensional vector space:

\| (f, g) \|_1 = \int dt\, (|f| + |g|).  (19)

Now, using this norm,

\mathbf{f}_\alpha = \begin{pmatrix} 1 - \alpha \\ \alpha \end{pmatrix}  (20)

with 0 ≤ α ≤ 1, is a unit-length vector. If

\alpha = \frac{\sin\theta}{\cos\theta + \sin\theta}  (21)

there is an angle of θ between f_α and the unit vector e_0. Using the same notation as above, the two responses are

\mathbf{r}_1 = f_1 \mathbf{e}_0 + g_1 \mathbf{f}_\alpha, \qquad \mathbf{r}_2 = f_2 \mathbf{e}_0 + g_2 \mathbf{f}_\alpha  (22)

and the metric is

d[(\mathbf{t}_1, \mathbf{u}_1), (\mathbf{t}_2, \mathbf{u}_2); h, \alpha] = \| \mathbf{r}_1 - \mathbf{r}_2 \|_1.  (23)
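A sketch of the L1 α-metric written out in components (our own illustration, not the paper's code): since r = f e0 + g fα has components (f + (1 − α)g, αg), the l1 norm of r1 − r2 integrates |δf + (1 − α)δg| + α|δg|.

```python
import numpy as np

def l1_alpha_distance(t1, u1, t2, u2, alpha, q=100.0, dt=5e-4, duration=2.0):
    """L1 two-neuron metric of Eq. (23) with the block kernel;
    alpha = 0 gives the SP metric and alpha = 1 the LL metric."""
    def block(spikes):
        t = np.arange(0.0, duration, dt)
        f = np.zeros_like(t)
        for ti in spikes:
            f[(t >= ti) & (t < ti + 2.0 / q)] += q / 2.0
        return f
    df = block(t1) - block(t2)
    dg = block(u1) - block(u2)
    # r = f*e0 + g*f_alpha has components (f + (1 - alpha)*g, alpha*g),
    # so ||r1 - r2||_1 integrates |df + (1 - alpha)*dg| + alpha*|dg|.
    return np.sum(np.abs(df + (1.0 - alpha) * dg) + alpha * np.abs(dg)) * dt

# A swapped spike is free for alpha = 0 (SP) but costly for alpha = 1 (LL).
d_sp = l1_alpha_distance([0.5], [], [], [0.5], alpha=0.0)
d_ll = l1_alpha_distance([0.5], [], [], [0.5], alpha=1.0)
```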

The same kernel need not be used for the different neurons: for example, if an exponential kernel is being used, a different time constant could be used for each neuron.

In addition to the Victor-Purpura and van Rossum metrics, correlation-based metrics (Schreiber et al., 2003) have been considered, as has a parameter-free metric based on the ratios of interspike intervals (Kreuz et al., 2007). It would be interesting to examine how these could be naturally extended to multi-neuron recordings.

4 Testing the metrics using simulated data

To illustrate the properties of the new metrics and to compare them to the ARMV metrics, we apply the metrics to some simple simulated spike-train data. As discussed above, a common application of spike-train metrics uses them to cluster responses to different stimuli. The simulation attempts to replicate this situation.

4.1 The simulation

Four neurons are used in the simulation: two receptive neurons feeding forward to two leaky integrate-and-fire (LIF) neurons. The receptive neurons produce Poisson spike trains with time-varying firing rates; the functions determining these firing rates are the analog of stimuli in the simulations, and we will refer to a pair of firing rate functions as a stimulus. As illustrated in Fig. 1, the feed-forward connectivity depends on a mixing parameter a. Each receptive neuron is connected to one LIF neuron with relative strength a and to the other with relative strength 1 − a. For a = 0 each receptive neuron is connected to a different LIF neuron and for a = 0.5 each LIF neuron receives input equally from each of the receptive neurons. The LIF neurons also receive a background Poisson input. The LIF neurons, in turn, produce spike trains. We will refer to these pairs of spike trains, produced by the LIF neurons, as responses, and it is to these that the metrics will be applied. The LIF neurons follow a standard implementation and there is a detailed description of the entire simulation in appendix 2.

Figure 1. The network used for the simulations. The two receptive neurons produce Poisson spikes according to time-varying firing rate functions. These spike trains feed forward to two leaky integrate-and-fire neurons; the relative synaptic strengths of the feed-forward connections are parameterized by a.

During each run, five stimuli are chosen and all five are presented 20 times, giving 100 responses in all. Throughout the simulation and when calculating the new metrics, time is discretized into 0.25 ms time steps and the stimuli are two seconds long.

4.2 Clustering

To examine the performance of the metrics we will follow the bootstrapped clustering procedure used in (Victor and Purpura, 1996). In this procedure a confusion matrix, N, is calculated; this is a square matrix whose size is given by the number of stimuli and whose ijth entry, N_ij, is the number of responses from stimulus i which are closest, on average, to the responses from stimulus j. Thus, the diagonal entries represent correctly clustered responses and the off-diagonal entries responses which are closest to the wrong cluster.

More precisely, if there are n responses and c stimuli, the responses are grouped into c clusters according to their stimulus. Starting with all the entries in the confusion matrix set to zero, each response is considered in turn: it is temporarily removed and the average distance between it and the members of each of the c clusters is calculated, giving a set of average distances {d1, d2, · · · , dc}. If the smallest distance in this set is dj and the response being considered was taken from the ith cluster, one is added to N_ij. When all the responses have been considered, the elements of N will add to n:

n = \sum_{i=1}^{c} \sum_{j=1}^{c} N_{ij}.  (24)

In our case, there are five stimuli, so c = 5, and for each stimulus there are 20 responses, so n = 100. As in (Victor and Purpura, 1996), the averaging over cluster elements is performed with a bias exponent, so, for a response r, the biased average distance to the kth cluster, C_k, is

d_k = \left[ \sum_{s \in C_k} d(r, s)^z \right]^{1/z}  (25)

where the bias is taken to be z = −2, under-weighting outliers. Of course, here, the responses r and s will be pairs of spike trains and d(r, s) will be one of the multi-unit metrics described above.
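The leave-one-out procedure with the biased average of Eq. (25) can be sketched as follows; this is a minimal Python illustration of ours, and the distance matrix in the usage example is hypothetical.

```python
import numpy as np

def confusion_matrix(D, labels, c, z=-2.0):
    """Leave-one-out confusion matrix: D is an n x n matrix of pairwise
    distances, labels[i] in {0, ..., c-1} is the stimulus of response i."""
    n = len(labels)
    N = np.zeros((c, c), dtype=int)
    for r in range(n):
        avg = np.empty(c)
        for k in range(c):
            # distances from response r to the other members of cluster k
            d = np.array([D[r][s] for s in range(n)
                          if s != r and labels[s] == k])
            avg[k] = np.mean(d ** z) ** (1.0 / z)   # biased average, Eq. (25)
        N[labels[r], np.argmin(avg)] += 1
    return N

# Two well-separated clusters of two responses each cluster perfectly.
D = [[0, 1, 10, 10],
     [1, 0, 10, 10],
     [10, 10, 0, 1],
     [10, 10, 1, 0]]
N = confusion_matrix(D, labels=[0, 0, 1, 1], c=2)
```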

If the metric is good at clustering the responses by stimulus, the confusion matrix will be largely diagonal; if it is poor, the entries off the diagonal will be approximately equal to the diagonal elements. As pointed out in (Victor and Purpura, 1996), the transmitted information, h, is a good quantitative measure of how well the responses are clustered. It is given by

h = \frac{1}{n} \sum_{ij} N_{ij} \left( \ln N_{ij} - \ln \sum_i N_{ij} - \ln \sum_j N_{ij} + \ln n \right).  (26)

For perfect clustering, for c equally likely stimuli, h will be ln c. So, in the case being considered here, the maximum value of h will be ln 5 ≈ 1.61. If there is no clustering by stimulus h = 0, though, in practice, a finite sample will over-estimate this.
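A sketch of the transmitted-information calculation of Eq. (26); the confusion matrices in the usage lines are illustrative, not data from the paper.

```python
import numpy as np

def transmitted_information(N):
    """Transmitted information (Eq. 26) of a confusion matrix N;
    zero-count cells contribute nothing to the sum."""
    N = np.asarray(N, dtype=float)
    n = N.sum()
    row = N.sum(axis=1)   # sum over j for fixed i
    col = N.sum(axis=0)   # sum over i for fixed j
    h = 0.0
    for i in range(N.shape[0]):
        for j in range(N.shape[1]):
            if N[i, j] > 0:
                h += N[i, j] * (np.log(N[i, j]) - np.log(col[j])
                                - np.log(row[i]) + np.log(n))
    return h / n

h_perfect = transmitted_information(20 * np.eye(5))      # perfect clustering
h_random = transmitted_information(4 * np.ones((5, 5)))  # no clustering
```

With 20 responses per stimulus, perfect clustering gives h = ln 5 and a uniform confusion matrix gives h = 0, matching the limits quoted above.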

4.3 Results

We will use the L1 version of the new metrics with the block kernel δ_{2/q} given in Eq. (5). This makes the comparison between the new metric and the ARMV metric clearer: as discussed above, the Victor-Purpura metric with cost parameter q is very similar to the L1 van Rossum metric with the δ_{2/q} kernel. A time scale of 2/q = 20 ms is used for the new metrics and, with the corresponding q used in the ARMV metrics, this matches the membrane time constant in the LIF neurons. The L1 version of the new metrics has a parameter α, with α = 0 corresponding to the SP metric and α = 1 to LL. The simulations also have a parameter, a, which controls how much the inputs from the two receptive neurons are mixed in the LIF neurons.

First, we examine whether the metric parameter α could be used to deduce the connectivity of neurons from spiking responses. In Fig. 2, the optimal metric parameter, as determined by calculating the transmitted information h (26), is plotted for different values of a. It is seen that when there is no mixing, a = 0, the best clustering results from α nearer one and, when the inputs are equally mixed, a = 0.5, the best clustering results from values of α nearer zero. Furthermore, the transition between these two is gradual.

Figure 2. The optimal α plotted against a. The value of the metric parameter α giving the maximum h is plotted for different values of the mixing parameter a. For each value of a the best value of α has been found for 20 different runs; the average is marked and the error bars give one standard deviation.

In Fig. 3 the performances of both the new metrics and the ARMV metrics are plotted against a. The new metrics consistently have a higher h value than the ARMV metrics. This implies that, at least on simulated data, the new metrics are better at capturing the structures in the data which encode the stimuli. This should not be taken to imply that the new metrics will prove better than the ARMV metrics when dealing with real data; the simulated data probably has a different statistical structure than real data. It is noteworthy, however, that both metrics have a similar performance profile, changing in a similar way as a changes; this indicates that they have similar clustering properties. The distinction between them is therefore likely to be subtle and informative when applied to real data and, for large numbers of neurons, where the ARMV metrics become computationally expensive, the new metrics are likely to produce similar results.

Figure 3. The relative performances of the different metrics. The best and worst values of h over the parameter range are plotted against a, the mixing parameter. For each value of a, h was calculated for different values of the metric parameter, α ∊ [0, 1] for the new metric and k ∊ [0, 2] for the ARMV metric. The best and worst values are marked max and min in the graph; the average h value calculated using the responses of each of the two neurons taken on its own is also shown and is marked one. For the single-unit examples, the new metric reduces to the van Rossum metric and the ARMV to the Victor-Purpura metric. All values have been averaged over 20 runs.

Again, when the performances of the metrics are plotted as the spike trains are degraded, the new metrics outperform the ARMV metrics, and, more importantly, the performance profiles are very similar. In Fig. 4, the h value for metric clustering is plotted as the relative amount of background input is increased. When there is no background input, both the new metric and ARMV metrics have similar performances, near the maximum possible value. As the background input becomes more and more significant, the h value falls. When the amount of background input is five times the feed-forward input from the receptive neurons, h has fallen to 0.75 for the new metric, and 0.62 for the ARMV metric.

Figure 4. Noise and the performances of the metrics. Here, the amount of input from the Poisson background is varied: h is plotted against the relative value of the synaptic strengths g and g_P for the new and ARMV metrics. For each value of g_P/g, the synaptic strengths were tuned to give the LIF neurons a firing rate of 20 Hz. Because the background spiking has a higher average rate, g_P/g = 1 corresponds to an input where the background component is, on average, 2.5 times larger than the component coming from the receptive neurons.

In Fig. 5, the spike trains are degraded by adding jitter; again, as the jitter is increased, the h value falls in a similar way for both the new and ARMV metrics. In Fig. 6 the spike trains are degraded by the random relabelling of individual spikes; this is intended to reproduce the situation where a spike-sorting algorithm is used to separate individual spike trains from a single recording. Again, the performances of both metrics decline gracefully. As before, the new metric performs better and, in this case, it is more robust.

Figure 5. Jitter and the performance of the metrics. Jitter has been added to all the spikes in the simulated responses; the jitter has been drawn uniformly from the interval [−j, j]. In each case, the mixing parameter is a = 0.5; α = 0.5 for the new metrics and k = 1 for the ARMV metrics. For each value of j, h is averaged over 20 runs.

Figure 6. Spike misassignment and the performances of the metrics. The spike trains have been corrupted by a random relabelling of the spikes: each spike has a probability λ of being swapped over to the other spike train in the response. For both the new and ARMV metrics, h is plotted against λ for the SP and LL metrics, as well as the metric lying half way between these: α = 0.5 for the new metrics and k = 1 for the ARMV metrics. For each value of λ, the value of h is averaged over 20 runs. The runs are all for zero mixing parameter, a = 0, where the LL metrics are at their greatest advantage over the SP metric. The SP distances are invariant under the relabelling of spikes.

5 Discussion

We have presented new multi-neuron metrics: they are constructed by mapping the multi-neuron spike trains into a space of vector fields. The metrics are straightforward to calculate. It does not seem possible to say in advance whether the metrics presented here or the ARMV metrics presented in (Aronov et al., 2003) will prove more useful for analyzing multi-unit recordings in the primary sensory regions. However, in the case of the simulated data, the two types of metrics perform in a very similar way when the simulation parameters are changed and when the simulated spike trains are degraded. As with the ARMV metrics, it is expected that the new metrics will be a useful tool in analysing the population coding of information in the sensory pathways. It is also hoped that the subtle differences between the new metrics and the ARMV metrics will, when applied to real data, reveal subtle differences in neuronal coding.

The new metrics are faster to compute than the ARMV metrics. In a situation like the one described in section 4, where a large matrix of distances is computed, the slowest step is calculating the distance between the two response vector fields. This is an O(n^2 d) calculation, where n is the number of neurons and d is the number of time steps used in the integration. The ARMV metric, in comparison, has computation time O(2n^2 m^{n+1}) (Aronov, 2003), where m is the typical number of spikes in a spike train. This illustrates one elegant aspect of the ARMV metrics: they can be calculated without discretizing time; however, they are slower. In the simulations we have performed, m = 40, n = 2 and d = 4000, predicting that for two neurons the new metrics are an order of magnitude quicker. This is what we observed when analyzing the simulated data. Even for small numbers of neurons, the new metric is considerably faster and for large numbers of neurons the difference will be huge. Thus the new metrics may provide a useful tool to do template-based spike-train classification for brain-computer interfaces with large numbers of units.

Acknowledgments

C.J.H. thanks the International Human Frontiers Science Program Organization for a short-term fellowship. K.S. was supported by a grant from NIDCD: 1 RO1 DC-007610-01A1. We thank Xiao-Ping Liu, Rajiv Narayan and Le Wang for helpful discussions. We would like to thank one of the anonymous referees for suggesting our comment on the θ > π/2 metrics.

Appendix 1: More than two spike trains

It is easy to generalize the two-neuron metrics to metrics for more than two neurons by using higher-dimensional unit vectors. The multi-neuron metrics have many parameters, corresponding to the different angles between the unit vectors. At first, this may seem to distinguish this family of metrics from the ARMV metrics, which are described as having only one such parameter. However, the ARMV metrics can be generalized so that there are different costs for relabelling spikes between different pairs of neurons. Although it is likely that multi-neuron metrics will be most useful when each of these inter-neuronal parameters is separately optimized, it is possible to consider a family of multi-neuron metrics which has only one parameter by requiring, for example, that the unit vectors are all at the same mutual angle. A set of this sort can be calculated by doing a succession of rotations in different planes. The first unit vector, e1, is chosen to lie in the x1-direction: (1, 0, · · · , 0); the second, e2, is then generated from the first by rotating it by an angle θ in the x1x2-plane. e3, in turn, is generated from the second by rotating it by an angle θ′ in the x2x3-plane. Since the x1-axis is perpendicular to the x2x3-plane, e1 · e3 = e1 · e2 = cos θ. However, θ′ must be chosen to make the angle between e2 and e3 equal to θ. For three neurons, for example, the three unit vectors would be given by

\mathbf{e}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad \mathbf{e}_2 = \begin{pmatrix} \cos\theta \\ \sin\theta \\ 0 \end{pmatrix}, \quad \mathbf{e}_3 = \begin{pmatrix} \cos\theta \\ \sin\theta \cos\theta' \\ \sin\theta \sin\theta' \end{pmatrix}  (27)

and taking the scalar product of e2 and e3 shows that cos θ’ is given by

\cos\theta' = \frac{\cos\theta - \cos^2\theta}{\sin^2\theta}  (28)

and, of course, sinθ=1cos2θ,θ’ itself is not needed. For four neurons, a further rotation in the x3x4-plane gives unit vectors

\begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} \cos\theta \\ \sin\theta \\ 0 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} \cos\theta \\ \sin\theta \cos\theta' \\ \sin\theta \sin\theta' \\ 0 \end{pmatrix}, \quad \begin{pmatrix} \cos\theta \\ \sin\theta \cos\theta' \\ \sin\theta \sin\theta' \cos\theta'' \\ \sin\theta \sin\theta' \sin\theta'' \end{pmatrix}  (29)

where

\cos\theta'' = \frac{\cos\theta' - \cos^2\theta'}{\sin^2\theta'}  (30)

and, in fact, this can be iterated: in going from the n − 1 neuron case to the n neuron case, the extra unit vector can be obtained by a rotation into the new dimension by an angle that satisfies an equation of this form.
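The iteration can be written as a short Python sketch (our own illustration); the recurrence for the successive rotation cosines follows the form of Eqs. (28) and (30).

```python
import numpy as np

def equal_angle_vectors(n, theta):
    """Return n unit vectors in R^n whose pairwise angles all equal theta,
    built by successive rotations into a new dimension (appendix 1)."""
    e = np.zeros((n, n))
    e[0, 0] = 1.0
    prefix = 1.0              # running product of the sines applied so far
    cos_cur = np.cos(theta)   # cosine of the current rotation angle
    for k in range(1, n):
        e[k, :k - 1] = e[k - 1, :k - 1]   # shared leading components
        sin_cur = np.sqrt(1.0 - cos_cur ** 2)
        e[k, k - 1] = prefix * cos_cur
        e[k, k] = prefix * sin_cur
        prefix *= sin_cur
        # next rotation cosine, the iterated form of Eqs. (28) and (30)
        cos_cur = (cos_cur - cos_cur ** 2) / (1.0 - cos_cur ** 2)
    return e

# Four vectors in R^4 with all pairwise angles pi/3 (cosine 0.5).
E = equal_angle_vectors(4, np.pi / 3)
```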

Appendix 2: The simulation

The receptive neurons fire stochastically, with a time dependent firing rate given by a rate function, s(t). These rate functions are chosen randomly as rectified Fourier series:

s(t) = A \left[ \sum_{n=0}^{20} a_n \cos\frac{2 n \pi t}{T} \right]_+  (31)

where T = 2 s is the total duration of the stimulus and the a_n are sampled uniformly from the interval [−1, 1]. The square bracket denotes half-wave rectification

[x]_+ = \begin{cases} x & x > 0 \\ 0 & \text{otherwise} \end{cases}  (32)

and A is chosen so that

\int_0^T s(t)\, dt = 20\, T  (33)

giving an average firing rate of 20 Hz.
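A minimal Python sketch (ours, not the paper's code) of the stimulus construction in Eqs. (31)–(33); the seed and grid spacing are illustrative choices.

```python
import numpy as np

def make_stimulus(T=2.0, dt=2.5e-4, n_terms=21, rate=20.0, seed=0):
    """Random rectified-Fourier rate function (Eq. 31), scaled so that
    the mean rate over [0, T] is `rate` (Eq. 33)."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, T, dt)
    a = rng.uniform(-1.0, 1.0, n_terms)
    s = sum(a[n] * np.cos(2.0 * n * np.pi * t / T) for n in range(n_terms))
    s = np.maximum(s, 0.0)              # half-wave rectification, Eq. (32)
    A = rate * T / (np.sum(s) * dt)     # choose A to enforce Eq. (33)
    return A * s

s = make_stimulus()
```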

For the LIF neurons, a standard model is used, following an example described in (Dayan and Abbott, 2001). Each neuron has a voltage satisfying

\tau_m \frac{dV}{dt} = E_l - V + I  (34)

where E_l = −54 mV is the leak potential, τ_m = 20 ms is the membrane time constant and I is the synaptic input current. The neuron fires when V reaches the threshold, −50 mV, and is then reset to −65 mV. The synaptic current is

I = g (E_e - V)  (35)

where E_e = 0 mV is the excitatory synapse reversal potential; all the synapses are excitatory. The synaptic conductance is the sum of the individual synaptic conductances. For each LIF neuron there are three synapses, two feeding forward from the receptive neurons and one from the background. In each case the synaptic conductance depends on a gating probability P and is given by g_max P, where g_max = ga and g(1 − a) for the feed-forward synapses, with g = 1.25, and g_max = g_P for the background synapse.

These values have been chosen to give the LIF neurons spiking rates of roughly 20 Hz, though the precise value will vary from stimulus to stimulus. Each synapse has its own gating probability P satisfying

\tau_s \frac{dP}{dt} = -P  (36)

where τs = 4 ms is the synaptic time constant. Each time a spike arrives at a synapse, that synapse’s gating probability is increased according to

P \to P + (1 - P) P_{\max}  (37)

where Pmax = 0.3.
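The LIF and synapse dynamics of Eqs. (34)–(37) can be sketched as follows. This is our own single-synapse, forward-Euler illustration, not the paper's code, and the steady 500 Hz drive in the usage example is a hypothetical input.

```python
def run_lif(input_spikes, duration=2.0, dt=2.5e-4, tau_m=0.020,
            E_l=-0.054, E_e=0.0, V_th=-0.050, V_reset=-0.065,
            tau_s=0.004, P_max=0.3, g=1.25):
    """Single LIF neuron driven through one conductance synapse,
    following the appendix 2 dynamics (Eqs. 34-37). Voltages in volts,
    times in seconds; returns the output spike times."""
    spike_bins = {int(round(ti / dt)) for ti in input_spikes}
    V, P = V_reset, 0.0
    out = []
    for k in range(int(round(duration / dt))):
        if k in spike_bins:
            P += (1.0 - P) * P_max          # Eq. (37): gating jump on a spike
        I = g * P * (E_e - V)               # Eq. (35) with conductance g*P
        V += dt / tau_m * (E_l - V + I)     # Eq. (34), forward Euler
        P -= dt / tau_s * P                 # Eq. (36): gating decay
        if V >= V_th:
            out.append(k * dt)
            V = V_reset
    return out

# Without input the voltage relaxes to E_l, below threshold, so the neuron
# is silent; a steady 500 Hz drive makes it fire repeatedly.
quiet = run_lif([])
driven = run_lif([0.002 * k for k in range(1000)])
```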

For most simulations g_P = g; the only time g and g_P are varied is in the simulations used to produce Fig. 4, where the effect of changing the ratio g_P/g is examined. The background firing rate is 50 Hz. So, for g_P = g, the average input current from the background is 2.5 times the average current from the receptive neurons; however, the variance of the current from the receptive neurons will be much higher.

References

  1. Aronov D. Fast algorithm for the metric-space analysis of simultaneous responses of multiple single neurons. Journal of Neuroscience Methods. 2003;124:175–179. doi:10.1016/s0165-0270(03)00006-2.
  2. Aronov D, Reich DS, Mechler F, Victor JD. Neural coding of spatial phase in V1 of the macaque monkey. Journal of Neurophysiology. 2003;89:3304–3327. doi:10.1152/jn.00826.2002.
  3. Dayan P, Abbott LF. Theoretical Neuroscience. MIT Press; 2001.
  4. Kreuz T, Haas JS, Morelli A, Abarbanel HDI, Politi A. Measuring spike train synchrony. Journal of Neuroscience Methods. 2007;165:151–161. doi:10.1016/j.jneumeth.2007.05.031.
  5. Narayan R, Graña G, Sen K. Distinct time scales in cortical discrimination of natural sounds in songbirds. Journal of Neurophysiology. 2006;96:252–258. doi:10.1152/jn.01257.2005.
  6. Schreiber S, Fellous JM, Whitmer D, Tiesinga P, Sejnowski TJ. A new correlation-based measure of spike timing reliability. Neurocomputing. 2003;52–54:925–931. doi:10.1016/S0925-2312(02)00838-X.
  7. van Rossum M. A novel spike distance. Neural Computation. 2001;13:751–763. doi:10.1162/089976601300014321.
  8. Victor JD, Purpura KP. Nature and precision of temporal coding in visual cortex: a metric-space analysis. Journal of Neurophysiology. 1996;76:1310–1326. doi:10.1152/jn.1996.76.2.1310.
  9. Victor JD, Purpura KP. Metric-space analysis of spike trains: theory, algorithms and application. Network: Computation in Neural Systems. 1997;8:127–164.
