Proceedings of the National Academy of Sciences of the United States of America. 1999 Aug 31;96(18):10450–10455. doi: 10.1073/pnas.96.18.10450

Noise shaping in populations of coupled model neurons

D J Mar, C C Chow, W Gerstner, R W Adams, J J Collins
PMCID: PMC17909  PMID: 10468629

Abstract

Biological information-processing systems, such as populations of sensory and motor neurons, may use correlations between the firings of individual elements to obtain lower noise levels and a systemwide performance improvement in the dynamic range or the signal-to-noise ratio. Here, we implement such correlations in networks of coupled integrate-and-fire neurons using inhibitory coupling and demonstrate that this can improve the system dynamic range and the signal-to-noise ratio in a population rate code. The improvement can surpass that expected for simple averaging of uncorrelated elements. A theory that predicts the resulting power spectrum is developed in terms of a stochastic point-process model in which the instantaneous population firing rate is modulated by the coupling between elements.


An important issue in neuroscience is how neurons encode information (1–8). Here, we consider the problem of encoding an analog signal in the firing times of a population of neurons. Several factors must be accounted for in addressing this issue. First, the firing records of neurons are often noisy and irregular (8, 9). Second, cortical neurons sometimes fire relatively slowly compared with many of the signals they may need to encode (10, 11).

We explore a method for population rate coding by which a network of coupled noisy neurons can encode relatively high-frequency signals. We consider a system of N neurons that receive the same analog input. The relevant output is the population firing rate FN(t), the number of neuronal firings per unit time summed across the population. This quantity does not require averaging over a significant time window, and it can respond quickly to rapidly changing inputs (12–15). For neurons firing asynchronously, FN is approximately N times the single-neuron rate. The input signal is encoded in the modulation of the firing times of the neurons in the network. When the analog input is converted to a train of discrete spike events, “quantization” noise from the errors made in digitization is unavoidable and limits the fidelity with which the output signal can be decoded. For N uncoupled, independent neurons, the quantization noise power grows as N, and the coherent signal power grows as N². The system signal-to-noise ratio (SNR), defined as the ratio of the output signal power to the noise power, thus grows as N. The SNR is improved if either the output signal is increased or the noise is reduced. Some systems are limited in the maximal signal power that they can process. In such cases, another important figure of merit is the system dynamic range (DR), which we take to be the ratio between the maximum signal power the system can tolerate and the noise power. For a fixed maximum output signal power, improving the DR is equivalent to reducing the noise. An example of a system requiring high DR is the human auditory system, which processes signals ranging from a soft whisper to a loud jet engine.
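The scaling for uncoupled elements follows from coherent versus incoherent summation; spelled out (a standard argument restated here, not quoted from the paper):

$$P_{\mathrm{signal}} \propto \Big(\sum_{i=1}^{N} s_i\Big)^{2} \sim N^{2}, \qquad P_{\mathrm{noise}} = \sum_{i=1}^{N} \sigma_i^{2} \sim N, \qquad \mathrm{SNR} = \frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}} \propto N,$$

where the identical signal components si add in amplitude, whereas the independent quantization errors, with variances σi², add in power.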

We propose a method to improve the DR and SNR for a population rate code beyond simple averaging over N independent elements. Inspiration for our method comes from the concept of noise shaping, used in certain electronic analog-to-digital converters (16). It has been proposed that noise shaping could be used in a single neuron (17–19); in this paper, we pursue this question for a network of coupled neurons. Our approach uses inhibitory coupling between neurons to generate temporal anticorrelations. In the frequency domain, these correlations shift the quantization noise power from one part of the spectrum to another, thereby “shaping” the spectrum and, more importantly, lowering the noise at the frequencies of interest. We are able to suppress the quantization noise power within the signal bandwidth at a rate of N⁻¹ and, therefore, for a fixed operating range, increase the usable DR. We also find that, for sufficiently large coupled networks, the SNR improves as N².

Integrate-and-Fire (IF) Network

We model the neuronal network as a population of N IF oscillators. Each neuron is characterized by a voltage Vi(t) and fires whenever Vi exceeds a threshold Vth. Each neuron is coupled to other neurons via a sum over postsynaptic currents γ. Between firings, the dynamics for Vi are given by

$$\frac{dV_i}{dt} = -\frac{V_i}{\tau_m} + \alpha_i I(t) - \sum_{j,m} K_{ij}\,\gamma(t - t_j^m), \tag{1}$$

where

$$\gamma(t') = \begin{cases} e^{-t'/\tau_s}, & t' > 0,\\ 0, & t' \le 0. \end{cases} \tag{2}$$

In Eq. 1, i and j index elements of the network (i, j ∈ {1, … , N}), and tjm, m = 1, 2, 3, … , is the set of firing times of the jth neuron. After firing, Vi is reset to a random value between 0 and Vthδ, where δ = 0.75 for all results shown, with the exception of Fig. 5. The postsynaptic current γ(t′) in Eq. 2 decays exponentially with time constant τs for t′ > 0 and vanishes for t′ ≤ 0. Other possible postsynaptic current waveforms include α-functions, which have a finite rise time before decaying (20–22).
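To make the model concrete, here is a minimal simulation sketch of Eqs. 1 and 2 (ours, not the authors' code), assuming Vth = 1, unit-amplitude synaptic kernels, and a plain fixed-step Euler integrator in place of the adaptive fourth-order Runge–Kutta routine described in the footnotes; the default parameters are those quoted below for the coupled network of Fig. 1 c and d.

```python
import numpy as np

def simulate_if_network(N=50, t_total=10.0, dt=1e-5, tau_m=1.0, tau_s=1e-3,
                        K=50.0, I0=47.3, A=0.0, f0=100.0,
                        V_th=1.0, delta=0.75, seed=0):
    """Fixed-step Euler sketch of Eqs. 1 and 2: leaky integrate-and-fire neurons with
    all-to-all coupling (K_ij = K > 0 is inhibitory) through unit-amplitude exponential
    postsynaptic currents. All times are in seconds."""
    rng = np.random.default_rng(seed)
    alpha = rng.uniform(1.27, 1.50, N)        # heterogeneity coefficients alpha_i
    V = rng.uniform(0.0, V_th * delta, N)     # random initial voltages
    syn = 0.0                                 # summed synaptic current sum_{j,m} gamma(t - t_j^m)
    spikes = []                               # (firing time, neuron index) pairs
    for k in range(int(t_total / dt)):
        t = k * dt
        I = I0 + A * np.sin(2.0 * np.pi * f0 * t)       # common applied current I(t)
        V += (-V / tau_m + alpha * I - K * syn) * dt    # Eq. 1
        fired = np.flatnonzero(V >= V_th)
        for i in fired:
            spikes.append((t, i))
            V[i] = rng.uniform(0.0, V_th * delta)       # random reset to [0, V_th * delta]
        # Eq. 2: each firing injects a kernel exp(-t'/tau_s); syn relaxes with time constant tau_s
        syn += (-syn / tau_s) * dt + fired.size
    return spikes, alpha
```

Setting K = 0 and I0 = 9.48 gives the uncoupled case of Fig. 1 a and b.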

Figure 5.

Figure 5

Power spectrum for a network of N = 40 neurons, coupled with the synaptic kernel γ(t′) = cos(2πt′/T)exp(−t′/τs), with T = 1.0 ms and τs = 5.0 ms. Other parameters are I0 = 15.4, K = −15, and δ = 0.5, and αi is uniformly distributed between 1.0 and 1.5.

Eq. 1 describes the ith neuron as a leaky integrator of the total source current; this current consists of a driving term I(t) common to all of the neurons and an interaction term Σj,m Kij γ(t − tjm) due to contributions whenever any neuron in the population has fired. In our sign convention, Kij > 0 corresponds to inhibitory coupling. For simplicity, for all of the results shown here, the coupling is all-to-all: Kij = K, a constant. Heterogeneity among the individual neuron rates is provided by a distribution in the values of the coefficients αi. The time scales in this coupled-oscillator system are the membrane time τm = 1 s and the synaptic time τs = 10⁻³ s (units have been assigned for convenience). The random reset introduces phase noise into each neuron whenever it fires. We remark that our model summarized in Eqs. 1 and 2 could possibly synchronize (23–28), particularly with inhibitory coupling (29–32). However, these synchronization effects are deliberately suppressed by the strong phase randomization from the random reset and the distribution in natural oscillator rates (33, 34).

Numerical Simulation Results

We numerically integrate (35) the above model and record the firing times of each neuron. Fig. 1 shows raster plots of the firings in a network of N = 50 neurons. In Fig. 1a and b, the network is uncoupled (K = 0), whereas in Fig. 1c and d, the coupling between elements is inhibitory (K = 50.0 > 0). In both cases, the input I(t) is a constant I0, and the mean network firing rate is FN = 1,000 ± 1 Hz. In Fig. 1b and d, we collapse the raster plots into network firing records to show the difference in the summed outputs. In particular, occasional clumps and gaps are observed in the output of the uncoupled network (see Fig. 1b), as expected for a set of uncorrelated elements. However, the record in Fig. 1d for the coupled network appears to be distributed relatively smoothly in time.
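To turn such a spike list into records like those in Fig. 1, the (time, neuron) pairs can be split per neuron for the raster and pooled for the collective record; a trivial helper (ours), continuing the sketch above:

```python
import numpy as np

def raster_and_record(spikes, N):
    """Per-neuron spike lists (raster, as in Fig. 1 a and c) and the pooled, sorted
    list of all firing times (collapsed network record, as in Fig. 1 b and d)."""
    per_neuron = [[] for _ in range(N)]
    for t, i in spikes:
        per_neuron[i].append(t)
    record = np.sort([t for t, _ in spikes])
    return per_neuron, record
```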

Figure 1.

Figure 1

Raster plots of the firing events for uncoupled (a and b) and coupled (c and d) networks of N = 50 elements (Eqs. 1 and 2). The network heterogeneity αi is uniformly distributed in the interval (1.27, 1.50), corresponding to the bottom and top of a and c, and the overall network firing rate in both cases is FN = 1.000 × 10³ ± 1 Hz. In b and d, the raster plots are collapsed into a collective record for the entire network. The membrane and synapse times are τm = 1 s and τs = 10⁻³ s, respectively. The coupling strength and applied current, respectively, are K = 0 and I0 = 9.48 (a and b) and K = 50.0 and I0 = 47.3 (c and d).

To quantify this, we show in Fig. 2a histograms of the interspike intervals (ISIs) corresponding to the network outputs shown in Fig. 1b and d. Data are shown for trial durations of 200 s and using a bin width Δ = 0.1 ms. The histogram decays exponentially in the uncoupled case with a time constant of approximately 1 ms, consistent with an uncorrelated Poisson process. In contrast, the histogram of the coupled network is narrowly distributed about its maximum at 1 ms, which is consistent with the “smooth” firing record in Fig. 1d.

Figure 2.

Figure 2

(a) Histogram of ISIs for the collective firing output of the network (see Fig. 1b and d) taken for trial durations of 200 s and a bin width of 0.1 ms. The inverse population rate FN⁻¹ = 1.0 ms is indicated by the arrow. (b) Autocorrelation of the collective firing record (see Fig. 1b and d) taken for 200 s and using a sampling interval of 0.1 ms. For both a and b, dots (●) and crosses (+) represent data for the uncoupled and coupled networks, respectively. Lines between points are drawn to guide the eye.

The difference between coupled and uncoupled networks is further illustrated by the autocorrelation function a(τ) of the network firing sequences XN(t).** In Fig. 2b, a(τ) is plotted for time shifts τ < 5 ms, using a sampling interval of 0.1 ms. For the uncoupled network, a(0) = 1 and a(τ) is essentially 0 thereafter, indicative of uncorrelated firing. With inhibitory coupling, a(τ) is negative immediately after τ = 0, crosses 0 at τ ≈ 0.8 ms, and then displays decaying oscillations with a period of approximately 1 ms. This describes a system with anticorrelated firing. After a neuron fires, the inhibitory coupling forces the network to wait approximately 0.8 ms before the next neuron can fire.
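The quantities plotted in Fig. 2 can be computed from the pooled record as follows. This is a sketch under our own conventions: bins of Δ = 0.1 ms as in the footnote defining XN, with the mean of XN subtracted before correlating so that uncorrelated firing gives a(τ) ≈ 0.

```python
import numpy as np

def isi_histogram(record, bin_width=1e-4, isi_max=5e-3):
    """Histogram of interspike intervals of the pooled network firing record (Fig. 2a)."""
    isis = np.diff(np.sort(np.asarray(record)))
    edges = np.arange(0.0, isi_max + bin_width, bin_width)
    counts, _ = np.histogram(isis, bins=edges)
    return edges[:-1], counts

def network_autocorrelation(record, duration, delta=1e-4, lag_max=5e-3):
    """a(tau) of the binned record X_N(t) (Fig. 2b): X_N = 1 if a firing event falls
    in a bin of width delta, and 0 otherwise."""
    n = int(duration / delta)
    X = np.zeros(n)
    idx = (np.asarray(record) / delta).astype(int)
    X[idx[idx < n]] = 1.0
    X -= X.mean()                            # remove the dc part before correlating
    lags = np.arange(int(lag_max / delta))
    a = np.array([np.dot(X[:n - L], X[L:]) / (n - L) for L in lags])
    return lags * delta, a / a[0]            # normalized so that a(0) = 1
```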

Fig. 3 shows power spectra on logarithmic axes for the uncoupled and coupled networks.‡‡ For these data, we have included in the applied current I(t) a sinusoidal input signal term S(t) = A sin(2πf0t) with frequency f0 = 100 Hz and amplitude A = 2.365. The sinusoidal signal is clearly visible above the background for both cases and does not display much broadening or other nonlinear distortion. The inhibition lowers the overall firing activity of every neuron. To make a meaningful comparison between the two cases, we compensate for this effect by adjusting the dc component I0 of the applied current to maintain the population rate FN at 1,000 ± 1 Hz.

Figure 3.

Figure 3

Power spectra for a network of N = 50 neurons whose dynamics are given by Eqs. 1 and 2. The top trace is for the uncoupled network (K = 0, I0 = 9.48) and the middle trace is for a coupled network (K = 50.0, I0 = 47.3). In both cases, the input signal current is I(t) = I0 + A sin(2πf0t), where f0 = 100 Hz and A = 2.365. Other system parameters are as specified in the caption of Fig. 1. In the coupled case, the lower quantization noise below fc = 800 Hz corresponds to larger DR. The measured SNR at f0 is 8.1 dB in the uncoupled network and 10.6 dB in the coupled network. The scale bar between 12 and 28 Hz indicates the range of individual neuron firing rates in the coupled network. The dotted line is the theoretical prediction from Eq. 7. The bottom trace is the spectrum for a single representative neuron selected from the coupled network. The mean firing rate of this neuron is 19 Hz (see arrow).

For the uncoupled network, the spectrum is flat over most of the frequency range shown, consistent with the vanishing autocorrelation in Fig. 2b. This behavior is a direct result of asynchronous firing because of the random reset, the heterogeneity in αi, and, most importantly, the lack of any interneuron interactions. The decrease in the spectrum below about 15 Hz is caused by the refractory time of the individual neurons (36, 37).

When the neurons are coupled by inhibition, both signal and noise power are reduced from their values in the uncoupled network. As shown in Fig. 3, the noise power is significantly suppressed over a wide frequency range, up to approximately fc = 800 Hz. Immediately below this “corner” frequency, the noise power varies as f² and decreases to a maximum suppression of >13 dB at frequencies <80 Hz. The power at low frequencies is transferred partially to frequencies around fc. This power is visible in Fig. 3 as a small bump near fc and FN.

In Fig. 3, the bandwidth over which the noise is suppressed is free from peaks (except the input signal at f0) and other structure. This reflects the absence of synchronization between the network elements (also visible in Fig. 1c) and underscores the asynchronous firing nature of the network. We also show in Fig. 3 a spectrum obtained from a single representative neuron in the coupled network. This neuron possesses an intrinsic rate of 19 Hz (see arrow), near the network average. The single-neuron spectrum is flat over a wide bandwidth, except for suppression below 20 Hz because of refractoriness. The absence of a peak or other feature at f0 implies that this single neuron by itself does not carry any information at the signal frequency. The noise-shaping and signal-transmission characteristics are network properties.

Noise shaping is a dynamical effect between neurons firing near their intrinsic rates. Each neuron’s rate is suppressed slightly by the mean network activity. The coupling disfavors short ISIs in the network record and spaces out the firing events, as shown in Figs. 1 and 2. We emphasize that this shaping takes place at frequencies both below and above the firing rates of the individual neurons. The fastest neuron in the coupled network fires at 28 Hz (see bar in Fig. 3), less than twice the nominal single-element rate FN/N = 20 Hz and well below the corner frequency fc.

To illustrate how the noise-shaping effect varies with population size, we first show in Fig. 4a the dependence of the population rate FN on N. Both axes are scaled by τs. For these data, I0 = 47.3, K = 50.0, and all other network parameters are the same as in Figs. 1–3. The data for τs = 0.3, 1, 3, and 10 ms all fall near the same curve, indicating that the maximum output rate, and hence fc, is directly dependent on the product Nτs. In addition, the inverse synapse time τs⁻¹ specifies an upper limit on fc. As seen in Fig. 4a, for small N, FNτs increases with N. In this regime, the ISIs are much larger than τs, and FN scales linearly with N. As the size of the network grows, the ISIs decrease toward τs. The individual firing rates become increasingly slowed by the synaptic inhibition, and the population rate increases only slightly thereafter. For our choices of parameters, FN and fc saturate for Nτs ≳ 20 ms.

Figure 4.

Figure 4

Dependence of network firing behavior on the population size N. (a) Population firing rate FN vs. N, where both axes have been scaled by τs. The network parameters are identical to those in Figs. 1–3, with the exception of N and τs. ×, ●, +, and □ represent data for τs = 0.3, 1, 3, and 10 ms, respectively. The dotted line is the theoretical rate obtained from Eq. 6. (b) Log–log plot of measured quantization noise power P at 100 Hz (●) and 30 Hz (open symbols) vs. N, taken with τs = 1 ms, I0 = 47.3, A = 0, and other parameters as in Figs. 1–3. Squares and circles represent data for uncoupled (K = 0) and coupled (K = 50) networks, respectively. (c) Log–log plot of SNR vs. N measured at f = 30 Hz for a network with K = 50 and A = 2.365. In b and c, theoretical predictions obtained from Eqs. 6 and 7 for the coupled networks are shown by a dotted line (f = 100 Hz) and dashed lines (f = 30 Hz), respectively.

We next show how adding more neurons to the network confers beneficial effects for the DR and the SNR. In Fig. 4b, we show a log–log plot of the quantization noise power P measured at 100 Hz (●) and 30 Hz (open symbols), as N varies. For these data, τs = 1 ms, I0 = 47.3, and A = 0. Squares represent P at 30 Hz for K = 0; as expected, P increases linearly with N for the uncoupled network. Circles in Fig. 4b show P for K = 50. For the coupled network, P decreases approximately as N⁻¹ for large N; in this regime, FNτs ≈ 1, and the coupling effectively narrows the histogram in Fig. 2a and shapes the spectrum. The decrease in noise power results in a significant increase in system DR. However, this improvement does not extend to arbitrarily large N. For very large values of Nτs, neurons with small αi receive so much inhibition that they are prevented from reaching threshold and firing.

The network SNR is also enhanced by the coupling. In Fig. 3, the measured SNR is improved by 2.5 dB. In Fig. 4c, we show the SNR measured for an input signal at f0 = 30 Hz, just above the fastest individual neuron rate. For N ≳ 15, the SNR improves approximately as N², faster than that for simple averaging. Clearly, there is a large region in parameter space in which the coupled network significantly outperforms simple averaging in its DR and SNR.

Theory

The spectrum shown in Fig. 3 can be estimated analytically by treating the population activity as a modulated stochastic point process. The spectrum can then be computed rigorously for certain classes of noise intensity (38, 39; C.C.C. and W.G., unpublished data). Here we give a simple heuristic derivation. For an uncoupled network whose elements fire randomly, the average of the population firing rate FN depends on the sum of all of the applied currents. With coupling, FN is modulated and, for finite N, fluctuates around the mean. We define the population activity (33) as F(t) = Σj Σm δ(t − tjm), where the sums run over all neurons j and all firing times tjm. The time average of F(t) is the mean population firing rate, because 〈F(t)〉 = (1/T) ∫ F(t) dt = n/T, where n is the number of firings in the time interval T. For nearly asynchronous firing, as observed in our simulations, we can assume that F(t) represents the population firing rate, including fluctuations (33).

To calculate the spectrum, we first consider the effects on a single neuron. In our simulations, τm ≫ τs, so we consider the limit of no leakage. Because the coupling Kij = K is uniform and all-to-all, we can simplify the coupling term in Eq. 1. The dynamics of a single neuron obeys

$$\frac{dV_i}{dt} = \alpha_i\,[\,I_0 + S(t)\,] - K \int_0^{\infty} \gamma(t')\,F(t - t')\,dt' + \tilde{\xi}(t), \tag{3}$$

where γ(t′) is given in Eq. 2, I0 and S(t) are the mean (dc) and time-varying components of I(t), and ξ̃(t) is a stochastic forcing term that mimics the random effects. For nearly asynchronous firing, the voltage in Eq. 3 grows approximately linearly with t. The firing rate of neuron i is then proportional to dVi/dt. We obtain the population rate by normalizing by the effective threshold voltage Veff = Vth(1 − δ/2)†† and summing over elements: F(t) ≃ Veff⁻¹ Σi dVi/dt, with the sum running over all N neurons. Using Eq. 3 in this expression yields a stochastic equation for the population rate:

$$F(t) = \frac{N}{V_{\rm eff}}\left[\,\bar{I} + \bar{S}(t) - K \int_0^{\infty} \gamma(t')\,F(t - t')\,dt'\,\right] + \xi(t), \tag{4}$$

where Ī and S̄(t) are the network-averaged dc and time-dependent inputs, ξ(t) is uncorrelated white noise with zero mean and variance 〈ξ(t)ξ(t′)〉 = σ²δ(t − t′), and σ² ≃ FNΔ is obtained under the assumption that the firings can be modeled as a Poisson process.§§ The coupling and random reset cause fluctuations around the mean rate. Let F(t) = FN + δF(t), where FN is the mean firing rate and δF(t) is a fluctuation around FN. Eq. 4 then becomes

$$F_N + \delta F(t) = \frac{N}{V_{\rm eff}}\left[\,\bar{I} + \bar{S}(t) - K\tau_s F_N - K \int_0^{\infty} \gamma(t')\,\delta F(t - t')\,dt'\,\right] + \xi(t), \tag{5}$$

where we have used $\int_0^{\infty} \gamma(t')\,dt' = \tau_s$.

From the time-independent terms in Eq. 5, the mean firing rate is then obtained as

$$F_N = \frac{N\,\bar{I}}{V_{\rm eff} + N K \tau_s}. \tag{6}$$

We can define a critical network size Nc = Veff/(Kτs), above which FN begins to saturate. Using the values Ī ≈ 65.5, N = 50, Veff = Vth(1 − δ/2) = 0.625, K = 50, and τs = 10⁻³ s, we obtain FN = 1.05 kHz, in good agreement with the simulations. We also plot the relation in Eq. 6 in Fig. 4a and see that it agrees well with the numerical results.
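The quoted numbers follow directly from Eq. 6. The arithmetic, written out (assuming Vth = 1 so that Veff = 1 − δ/2, and taking Ī at its quoted value of 65.5, consistent with ⟨αi⟩I0 ≈ 1.385 × 47.3):

```python
N, K, tau_s = 50, 50.0, 1e-3
V_eff = 1.0 * (1.0 - 0.75 / 2.0)             # V_th (1 - delta/2) = 0.625
I_bar = 65.5                                 # network-averaged dc input (quoted value)
F_N = N * I_bar / (V_eff + N * K * tau_s)    # Eq. 6  ->  1048 Hz, i.e. ~1.05 kHz
N_c = V_eff / (K * tau_s)                    # critical network size  ->  12.5
print(F_N, N_c)
```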

The spectrum of the noise fluctuations is obtained from the time-varying terms in Eq. 5 as the absolute square of the Fourier transform of δF(t):

$$|\widehat{\delta F}(f)|^{2} = \frac{N^{2}\,|\hat{\bar{S}}(f)|^{2}/V_{\rm eff}^{2} + \sigma^{2}}{\left|\,1 + N K V_{\rm eff}^{-1}\,\hat{\gamma}(f)\,\right|^{2}}, \tag{7}$$

where γ̂(f) = (τs⁻¹ − 2πif)⁻¹ is the Fourier transform of the postsynaptic current γ in Eq. 2 and f is the frequency. The coupling kernel γ̂(f) has a cutoff at 2πf = τs⁻¹. For sufficiently large N and K, and at sufficiently low frequency, NKVeff⁻¹γ̂(f) ≫ 1, and the coupling “shapes” the spectrum of both signal and noise. The theoretically predicted spectrum from Eq. 7, using the same parameter values as above with Δ = 0.1 ms, is compared with the data for the coupled network in Fig. 3. The dotted line has no free parameters. As shown, the theory agrees well with the data in matching the corner frequency fc and also provides a good estimate of the noise-shaping level below fc. In Fig. 4b, we have plotted the calculated noise power at f = 100 Hz (dotted line) and f = 30 Hz (dashed line) from Eqs. 6 and 7. As shown, the theory matches the numerical data well for N ≳ 30.
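For concreteness, the noise term of Eq. 7 (input signal omitted, A = 0) can be evaluated at the Fig. 3 parameters; a sketch with our own variable names:

```python
import numpy as np

def theory_noise_spectrum(f, N=50, K=50.0, tau_s=1e-3, V_eff=0.625,
                          F_N=1.05e3, Delta=1e-4):
    """Noise term of Eq. 7: sigma^2 / |1 + N K gamma_hat(f) / V_eff|^2, with
    gamma_hat(f) = 1 / (1/tau_s - 2*pi*i*f) and sigma^2 ~ F_N * Delta."""
    gamma_hat = 1.0 / (1.0 / tau_s - 2j * np.pi * np.asarray(f, dtype=float))
    sigma2 = F_N * Delta
    return sigma2 / np.abs(1.0 + N * K * gamma_hat / V_eff) ** 2

# As f -> 0 the shaping factor approaches |1 + N K tau_s / V_eff|^2 = 25 (about 14 dB of
# suppression below the flat level sigma^2), consistent with the footnote estimate.
print(theory_noise_spectrum([10.0, 100.0, 800.0, 3000.0]))
```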

From Eq. 7, we see that SNR = N²Ŝ²(f)/(Veff²σ²). There are two regimes for the SNR, depending on whether N exceeds Nc. For small networks (N ≲ Nc), σ² scales as N and SNR ∝ N, similar to simple averaging. As N is increased beyond Nc, the population rate FN, and hence σ², saturates; then SNR ∝ N². This regime is seen in Fig. 4c, in which the dotted line shows the calculated SNR from Eq. 7. Again, the agreement is excellent for N > Nc = 12.5.
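The crossover between the two regimes can be made explicit by combining Eqs. 6 and 7 (the shaping factor cancels in the ratio); a minimal numerical illustration with an arbitrary, fixed signal amplitude:

```python
import numpy as np

N = np.array([5.0, 10.0, 12.5, 25.0, 50.0, 100.0])
K, tau_s, V_eff, Delta = 50.0, 1e-3, 0.625, 1e-4
I_bar, S_hat = 65.5, 1.0                      # S_hat: arbitrary fixed signal amplitude
F_N = N * I_bar / (V_eff + N * K * tau_s)     # Eq. 6
sigma2 = F_N * Delta                          # Poisson noise variance, sigma^2 = F_N * Delta
snr = N**2 * S_hat**2 / (V_eff**2 * sigma2)   # SNR from Eq. 7
print(snr)   # grows roughly as N below N_c = 12.5 and as N^2 well above it
```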

Deviations from the theory likely occur because the firing rates of the individual neurons possess nonlinearities and correlations that deviate from a pure Poisson process. For example, the theory does not reproduce the oscillations in the autocorrelation function (see Fig. 2); in the noise spectrum, these are manifest as a bump in Fig. 3 near fc. The theoretical value from Eq. 7 for P(f = 100 Hz) underestimates the numerical data for small N (see dots and dotted line in Fig. 4). In this regime, f is comparable to fc, and the data reflect the noise power in the nearby bump. We have also ignored the intrinsic refractory time of the neurons caused by the IF dynamics. As seen in Fig. 3, and also shown previously (37), refractoriness alone leads to noise-shaping. We therefore expect Eq. 7 to overestimate the spectrum at very low frequencies.¶¶ This is also seen in Fig. 4b by comparing theory (dashed line) and data (circles) for P(f = 30 Hz) for N ≤ 10.

Summary and Conclusions

Our results demonstrate improved signal encoding through noise shaping in a network of coupled model neurons. Noise shaping allows the population to encode signals over a wide bandwidth with extended DR and improved SNR. By firing nearly asynchronously, the network can encode signals with frequencies well above those of the individual elements. Because coupling lowers the quantization noise power, for a given SNR, analog signals may be encoded with fewer neurons. This remains true even when the population firing rate is unchanged. In our model, the elements interact via a coupling rule that is local in time and hence is easy to implement. Noise and heterogeneity in the network help to break up clustering and stabilize the asynchronous firing state. They may also be used to boost weak signals above threshold (40–44).

The DR and SNR both improve with increasing N at rates faster than those of an uncoupled network. The inhibitory coupling shapes the spectrum and reduces the noise power at low frequencies; this reduction results directly in an improved DR. However, the shaping also reduces the signal power accordingly. The SNR is improved by a different effect: the inhibition sets a maximum population firing rate that is determined by the synaptic time scale, the applied current, and the coupling constant. As the size of the coupled network is increased beyond a critical value, the background noise power, which is proportional to FN, saturates while the signal power continues to grow as N². This increase in SNR could be observed in any network in which the inhibitory coupling reduces the population rate. These dependences of DR and SNR surpass those of an uncoupled network, for which the noise power increases linearly with N.

In our simulations, we have found that the noise-shaping effect shown in Fig. 3 is robust against element heterogeneity among the input coefficients αi and against variations in the coupling coefficients Kij. We have also been able to generate more complicated noise-shaped spectra, such as a notch at a given frequency as shown in Fig. 5, by suitable choices for the postsynaptic current waveform γ(t). We also note that shaping need not be limited to anticorrelations between individual spikes. For instance, it could take place between neuron bursts. With different network architectures, it may be possible to create noise-shaping networks in which the SNR, DR, or other performance criteria are significantly enhanced beyond what is shown here.

In biological experiments, noise shaping may be difficult to detect in the firing records of individual neurons. These records may look Poisson-like and, except for refractoriness, display few correlations. In particular, cross-correlations between pairs of neurons in simulations of the coupled network do not show significant anticorrelations (data not shown). Noise shaping arises from the correlations in the aggregate firings, a collective property of the population. A demonstration of noise shaping in biological systems would require the simultaneous recording of the firings from many coupled neurons (45, 46). Such demonstrations are of interest because noise shaping or a variant thereof may be at work in biological systems that operate at frequencies higher than those of the network elements (11, 47).

Acknowledgments

We thank N. Kopell for a critical reading of the manuscript. This work was supported at Boston University by a contribution from Ray Stata of Analog Devices, Inc., and by National Institute of Mental Health Grant K01 MH01508 (C.C.C.).

ABBREVIATIONS

SNR

signal-to-noise ratio

DR

dynamic range

IF

integrate-and-fire

ISI

interspike interval

Footnotes

This paper was submitted directly (Track II) to the Proceedings Office.

The system is integrated using a fourth-order Runge–Kutta routine (35). Accuracy in the firing sequence is ensured by using an adaptive step size to handle nearly simultaneous firings. This scheme is not critical for obtaining our results. The system is integrated for 200 s after a settling period of 30 s.

**

We compute a(τ) = 〈XN(t′)XN(t′ − τ)〉, where XN(t) is defined on a discrete set of times kΔ, where k indexes the bin intervals. We set XN(t) = 1 if there is a firing event between t and t + Δ, and XN(t) = 0 otherwise.

‡‡

For the (one-sided) spectra shown in Fig. 3, the data are partitioned into 255 overlapping sequences and windowed using a Bartlett window. The results presented herein are not sensitive to the details of the Fourier transform segmentation or the choice of window.
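A sketch of this kind of averaged-periodogram estimate (the segment length, overlap, and normalization below are our own choices, with the 255 overlapping Bartlett-windowed sequences kept as defaults):

```python
import numpy as np

def averaged_spectrum(X, delta, n_segments=255, overlap=0.5):
    """Bartlett-windowed, overlapping-segment estimate of the one-sided power spectrum
    of a binned firing record X sampled every delta seconds."""
    seg_len = int(len(X) / (1.0 + (n_segments - 1) * (1.0 - overlap)))
    step = max(1, int(seg_len * (1.0 - overlap)))
    window = np.bartlett(seg_len)
    norm = np.sum(window ** 2)
    psd = np.zeros(seg_len // 2 + 1)
    count = 0
    for start in range(0, len(X) - seg_len + 1, step):
        seg = (X[start:start + seg_len] - np.mean(X[start:start + seg_len])) * window
        psd += np.abs(np.fft.rfft(seg)) ** 2 / norm
        count += 1
    freqs = np.fft.rfftfreq(seg_len, d=delta)
    return freqs, psd / count
```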

††

At each firing event, the neuron is reset to a random value between 0 and Vthδ. On average, this reduces the potential difference from the threshold by Vthδ/2.

§§

The probability of a firing event during an interval of duration Δ is FNΔ. In this case, the noise power is the variance in the bin occupation, or σ² = FNΔ(1 − FNΔ). For small Δ, this approaches the Poisson result σ² = FNΔ. For the data in Fig. 3, σ² agrees with the Poisson prediction to within 1%.

¶¶

As f → 0, the ratio of P(f) for the coupled network to that of the uncoupled network approaches 0.04, i.e., a suppression of 14 dB, for the parameters used in Fig. 3. This agrees well with the noise suppression measured at the left edge of the figure.

References

1. Shadlen M N, Newsome W T. Curr Opin Neurobiol. 1994;4:569–579. doi: 10.1016/0959-4388(94)90059-0.
2. Sejnowski T J. Nature (London). 1995;376:21–22. doi: 10.1038/376021a0.
3. Softky W R. Curr Opin Neurobiol. 1995;5:239–247. doi: 10.1016/0959-4388(95)80032-8.
4. Singer W, Gray C M. Annu Rev Neurosci. 1995;18:555–586. doi: 10.1146/annurev.ne.18.030195.003011.
5. Ferster D, Spruston N. Science. 1995;270:756–757. doi: 10.1126/science.270.5237.756.
6. Stevens C F, Zador A. Curr Biol. 1995;5:1370–1371. doi: 10.1016/s0960-9822(95)00273-9.
7. Meister M. Proc Natl Acad Sci USA. 1996;93:609–614. doi: 10.1073/pnas.93.2.609.
8. Shadlen M N, Newsome W T. J Neurosci. 1998;18:3870–3896. doi: 10.1523/JNEUROSCI.18-10-03870.1998.
9. Softky W R, Koch C. J Neurosci. 1993;13:334–350. doi: 10.1523/JNEUROSCI.13-01-00334.1993.
10. Thorpe S, Fize D, Marlot C. Nature (London). 1996;381:520–522. doi: 10.1038/381520a0.
11. Carr C E. Annu Rev Neurosci. 1993;16:223–243. doi: 10.1146/annurev.ne.16.030193.001255.
12. Tsodyks M V, Sejnowski T. Network. 1995;6:111–124.
13. van Vreeswijk C, Sompolinsky H. Science. 1996;274:1724–1726. doi: 10.1126/science.274.5293.1724.
14. Berry M J, II, Meister M. J Neurosci. 1998;18:2200–2211. doi: 10.1523/JNEUROSCI.18-06-02200.1998.
15. Gerstner W. In: Pulsed Neural Networks. Maass W, Bishop C M, editors. Cambridge, MA: MIT Press; 1998. pp. 261–295.
16. Norsworthy S R, Schreier R, Temes G C, editors. Delta-Sigma Data Converters. Piscataway, NJ: IEEE Press; 1997.
17. Shin J H, Lee K R, Park S B. Int J Electronics. 1993;74:359–368.
18. Cheung K F, Tang P Y H. Proc. 1993 IEEE Int. Conf. Neural Networks. 1993. pp. 489–493.
19. Adams R W. Proc. 1997 IEEE Int. Conf. Neural Networks. 1997. pp. 953–958.
20. Rall W. J Neurophysiol. 1967;30:1138. doi: 10.1152/jn.1967.30.5.1138.
21. Traub R D, Miles R. Neuronal Networks of the Hippocampus. Cambridge, U.K.: Cambridge Univ. Press; 1991.
22. Tsodyks M, Mitkov I, Sompolinsky H. Phys Rev Lett. 1993;71:1280–1283. doi: 10.1103/PhysRevLett.71.1280.
23. Mirollo R E, Strogatz S H. SIAM J Appl Math. 1990;50:1645–1662.
24. Kuramoto Y. Physica D. 1991;50:15–30.
25. Hansel D, Mato G, Meunier C. Neural Comput. 1995;7:307–337. doi: 10.1162/neco.1995.7.2.307.
26. Gerstner W, van Hemmen J L, Cowan J D. Neural Comput. 1996;8:1653–1676. doi: 10.1162/neco.1996.8.8.1653.
27. Chow C C. Physica D. 1998;118:343–370.
28. Bressloff P C, Coombes S. Phys Rev Lett. 1998;81:2168–2171.
29. Wang X J, Rinzel J. Neural Comput. 1992;4:84–97.
30. van Vreeswijk C, Abbott L, Ermentrout G B. J Comp Neurosci. 1994;1:313–321. doi: 10.1007/BF00961879.
31. Terman D, Kopell N, Bose A. Physica D. 1998;117:241–275.
32. White J A, Chow C C, Ritt J, Soto-Treviño C, Kopell N. J Comp Neurosci. 1998;5:5–16. doi: 10.1023/a:1008841325921.
33. Gerstner W. Phys Rev E. 1995;51:738–758. doi: 10.1103/physreve.51.738.
34. Abbott L F, van Vreeswijk C. Phys Rev E. 1993;48:1483–1490. doi: 10.1103/physreve.48.1483.
35. Press W H, Teukolsky S A, Vetterling W T, Flannery B P. Numerical Recipes in C: The Art of Scientific Computing. Cambridge, U.K.: Cambridge Univ. Press; 1992.
36. Edwards B W, Wakefield G H. J Acoust Soc Am. 1993;93:3353–3364. doi: 10.1121/1.405718.
37. Spiridon M, Chow C C, Gerstner W. In: Proceedings of the International Conference on Artificial Neural Networks (ICANN’98). Niklasson L, Boden M, Ziemke T, editors. Berlin: Springer; 1998. pp. 337–342.
38. Hawkes A G. Biometrika. 1971;58:83–90.
39. Daley D, Vere-Jones D. An Introduction to the Theory of Point Processes. New York: Springer; 1988.
40. Moss F, Pierson D, O’Gorman D. Int J Bifurc Chaos. 1994;4:1383–1397.
41. Wiesenfeld K, Moss F. Nature (London). 1995;373:33–36. doi: 10.1038/373033a0.
42. Collins J J, Chow C C, Imhoff T T. Nature (London). 1995;376:236–238. doi: 10.1038/376236a0.
43. Pei X, Wilkens L, Moss F. Phys Rev Lett. 1996;77:4679–4682. doi: 10.1103/PhysRevLett.77.4679.
44. Gammaitoni L, Hänggi P, Jung P, Marchesoni F. Rev Mod Phys. 1998;70:223–287.
45. Wilson M A, McNaughton B L. Science. 1993;261:1055–1058. doi: 10.1126/science.8351520.
46. Meister M, Pine J, Baylor D A. J Neurosci Methods. 1994;51:95–106. doi: 10.1016/0165-0270(94)90030-2.
47. Gerstner W, Kempter R, van Hemmen J L, Wagner H. Nature (London). 1996;383:76–78. doi: 10.1038/383076a0.
