Proceedings of the National Academy of Sciences of the United States of America
2013 Nov 25;110(50):E4931–E4936. doi: 10.1073/pnas.1304680110

Long-period rhythmic synchronous firing in a scale-free network

Yuanyuan Mi a,b, Xuhong Liao b, Xuhui Huang b, Lisheng Zhang b, Weifeng Gu b, Gang Hu b,1, Si Wu a,c,d,1
PMCID: PMC3864271  PMID: 24277831

Significance

Understanding how neural systems process temporal information is central to elucidating brain functions such as speech recognition and music appreciation. The present study investigates a simple yet effective mechanism by which a neural system can extract the rhythmic information of external inputs on the order of seconds. We propose that a large neural network with scale-free topology acts as a repertoire consisting of a large number of loops and chains of various sizes, and that these loops and chains serve as substrates for learning the rhythms of external inputs.

Keywords: reservoir network, temporal information processing, scaled chemical synapse

Abstract

Stimulus information is encoded in the spatiotemporal structure of the external inputs to the neural system, and the ability to extract the temporal information of inputs is fundamental to brain function. It has been found that the neural system can memorize temporal intervals of visual inputs on the order of seconds. Here we investigate whether the intrinsic dynamics of a large neural circuit alone can achieve this goal. The network models we consider have scale-free topology and the property that hub neurons are difficult to activate. The latter is implemented either by including abundant electrical synapses between neurons or by considering chemical synapses whose efficacy decreases with the connectivity of the postsynaptic neuron. We find that hub neurons trigger synchronous firing across the network, that loops formed by low-degree neurons determine the rhythm of synchronous firing, and that the difficulty of exciting hub neurons prevents epileptic firing of the network. Our model successfully reproduces the experimentally observed rhythmic synchronous firing with long periods and supports the notion that the neural system can process temporal information through the dynamics of local circuits in a distributed way.


Over the past decades, our knowledge of how the neural system extracts the spatial structure of visual stimuli has advanced considerably, as is well documented by the receptive field properties of visual neurons (1). The equally important issue of how the neural system processes temporal information remains much less understood (2–5). A central, widely debated question is whether timing in the brain relies on a centralized clock, such as a dedicated pacemaker that counts the lapse of time, or whether timing is achieved distributively in local circuits (2, 5, 6). Previous studies have shown that a recurrent network with random connections, diversified single-neuron dynamics, and synaptic short-term plasticity (STP) can use time-varying states to retain the memory traces of external inputs (7, 8). However, limited by the network structure, this type of model can represent temporal information only up to hundreds of milliseconds (8).

Notably, in their experimental study, Sumbre et al. (9) found that the neural circuit in the optic tectum of zebrafish can memorize temporal intervals of visual inputs on the order of seconds. In this experiment, a visual stimulation was first presented to a zebrafish periodically 20 times. After this conditioning stimulation (CS), the neural circuit of the optic tectum displayed self-sustained synchronous firing with the same rhythm as the CS pattern. This sustained rhythmic activity induced regular tail flipping in the zebrafish, suggesting that it may serve as a substrate for perceptual memory of rhythmic sensory experience. The longest period the neural circuit was able to memorize was up to 20 s.

How does a neural system acquire and memorize this long-period rhythm? An easy solution would be a clock in the brain that counts the time and guides the rhythmic response of the neural circuit. However, the evidence accumulated thus far tends to suggest that timing in the brain is not controlled by a centralized clock (4, 5). This prompts an important question: is the intrinsic dynamics of a neural circuit alone sufficient to generate such rhythmic synchronous firing? Computationally, the difficulty for a homogeneous neural network lies in the fact that the time constants of single neurons and synapses are too short (on the order of 10–1,000 ms) to hold the stimulation information long enough to establish associative learning.

To tackle this challenging issue, we propose a mechanism relying on a special network topology and an appropriate interaction between neurons. In particular, we consider a network model with scale-free structure (10, 11); i.e., the fraction of neurons in the network with k connections satisfies p(k) ∝ k^(−γ), with a constant γ. In practice, we do not require the network topology to be perfectly scale free, but only that the network consist of a few neurons having many connections and a large number of neurons with few connections. Neurons with large k are called hubs, and the others are of low degree (the dividing degree depends on the parameter setting; see Fig. 1).
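The degree structure assumed here can be sketched with a minimal preferential-attachment growth process in pure Python. The network size, the two-links-per-new-neuron choice, and the hub cutoff of k ≥ 6 below are illustrative assumptions, not the paper's exact settings:

```python
import random

def scale_free_degrees(n, seed=0):
    """Grow a network by preferential attachment (ref. 11): each new neuron
    links to 2 existing neurons chosen with probability proportional to
    their current degree, which yields a power-law degree tail."""
    rng = random.Random(seed)
    degree = {0: 2, 1: 2, 2: 2}          # start from a triangle
    stubs = [0, 0, 1, 1, 2, 2]           # node i appears degree[i] times
    for new in range(3, n):
        targets = set()
        while len(targets) < 2:
            targets.add(rng.choice(stubs))   # degree-proportional pick
        degree[new] = 2
        for t in targets:
            degree[t] += 1
            stubs += [new, t]
    return degree

deg = scale_free_degrees(300)
mean_k = sum(deg.values()) / len(deg)
hubs = [i for i, k in deg.items() if k >= 6]   # assumed hub cutoff
```

With a few hundred neurons this already produces the qualitative picture used below: a mean connectivity near 4, a small minority of hubs, and a maximum degree far above the mean.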

The dynamics of a single neuron are given by (12)

\tau \frac{du_i}{dt} = -\frac{1}{\epsilon}\, u_i (u_i - 1)\left(u_i - \frac{v_i + b}{a}\right) + I_i, \qquad [1]

\tau \frac{dv_i}{dt} = f(u_i) - v_i, \qquad i = 1, \ldots, N, \qquad [2]

where u_i and v_i are the variables describing the state of neuron i: u_i is analogous to the membrane potential, and v_i to the recovery current. τ is the time constant, and N is the number of neurons. The term I_i denotes the neuronal interaction. Fig. 1A shows the evolution of u_i and v_i during an excitation process, mimicking the generation of an action potential.
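As a sanity check, the excitable dynamics of Eqs. 1 and 2 can be integrated directly. The sketch below assumes the canonical parameter values of the excitable model of ref. 12 (ε = 0.04, a = 0.84, b = 0.07, and the piecewise f(u) given in the caption of Fig. 1A), because the exact values are not recoverable from this copy; time is in units of τ and the neuron is isolated (I_i = 0):

```python
# Assumed parameters of the excitable single-neuron model (ref. 12)
EPS, A, B = 0.04, 0.84, 0.07

def f(u):
    # Piecewise recovery nonlinearity of Eq. 2 (as quoted in Fig. 1A's caption)
    if u < 1.0 / 3.0:
        return 0.0
    if u <= 1.0:
        return 1.0 - 6.75 * u * (u - 1.0) ** 2
    return 1.0

def simulate(u0, v0=0.0, dt=0.002, t_end=10.0):
    """Euler-integrate Eqs. 1 and 2 for one isolated neuron (I_i = 0)."""
    u, v = u0, v0
    trace = [u]
    for _ in range(int(t_end / dt)):
        du = -u * (u - 1.0) * (u - (v + B) / A) / EPS
        dv = f(u) - v
        u, v = u + dt * du, v + dt * dv
        trace.append(u)
    return trace

spike = simulate(u0=0.3)   # above threshold b/a ~ 0.083: one spike, then rest
quiet = simulate(u0=0.05)  # below threshold: decays straight back to rest
```

A suprathreshold perturbation drives u close to 1, the recovery variable v then raises the effective threshold (v + b)/a past u, and the neuron repolarizes and returns to rest, reproducing the excitability and refractoriness shown in Fig. 1A.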

Fig. 1.

Fig. 1.

(A) Dynamics of the single-neuron model described in Eqs. 1 and 2. The function f(u) in Eq. 2 is chosen to be 0, 1 − 6.75u(u − 1)², and 1 for u < 1/3, 1/3 ≤ u ≤ 1, and u > 1, respectively. We take ε = 0.04, a = 0.84, and b = 0.07, so that the single-neuron dynamics are excitable when the input is sufficiently large. After firing, the neuron undergoes a refractory period and then returns to the steady resting state. (B) An example of a low-degree neuron connected by electrical synapses to three neighbors. With the coupling strength w used here, a single spike is sufficient to activate this neuron; in a network, a spike can typically activate a neuron with fewer than six connected neighbors. (C) An example of a hub neuron connected by electrical synapses to eight neighbors. A single spike is not adequate to activate it. (D) The same hub neuron as in C can be successfully activated by two or more simultaneously arriving spikes. (E) An example of a low-degree neuron connected by scaled chemical synapses to three neighbors. The synaptic strength, illustrated by the line width, is w_ij = w/√k_i, with k_i the connectivity of the postsynaptic neuron and θ the firing threshold. A single spike is sufficient to activate this neuron. (F) Two or more simultaneously arriving spikes are required to activate a hub neuron with eight neighbors connected by scaled chemical synapses.

Apart from scale-free topology, the other key feature of our model is that a hub neuron is harder to activate than a low-degree neuron. This property is crucial for avoiding epileptic firing, and it can be realized in two ways. One way is to consider electrical coupling between neurons (13), implemented by setting I_i = Σ_j w_ij (u_j − u_i), which is also called diffusive coupling. The weights are w_ij = w > 0 if neurons i and j are connected, and w_ij = 0 otherwise. Diffusive coupling, which can be regarded as a constant electrical resistor linking two neurons, tends to equalize the potentials of connected neurons. Electrical synapses are known to enhance synchronous firing in a neural network (13). Here, they also play the important role of making a hub neuron harder to activate: because a hub has many connections, the excitation current it receives leaks easily to its connected neighbors. We choose the coupling strength w so that a single spike is adequate to activate a low-degree neuron (Fig. 1B), whereas two or more simultaneously arriving spikes are needed to excite a hub neuron (Fig. 1 C and D).

Alternatively, the difficulty of activating a hub neuron can be implemented by letting the efficacy of a chemical synapse decrease with the connectivity of the postsynaptic neuron. This is realized by setting I_i = Σ_j w_ij H(u_j − θ), where H(u_j − θ) = u_j for u_j > θ and H(u_j − θ) = 0 otherwise, mimicking that neuron i receives input from neuron j only when j fires (i.e., when u_j exceeds the firing threshold θ). The weight w_ij is nonzero only if there is a chemical synapse from neuron j to neuron i, in which case w_ij = w/√k_i, with k_i the connectivity of neuron i; thus, the more connections a neuron has, the weaker its incoming synapses are. We choose the value of w so that a single spike is adequate to activate a low-degree neuron (Fig. 1E), whereas two or more simultaneously arriving spikes are needed to excite a hub neuron (Fig. 1F).
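The effect of the 1/√k scaling can be checked with one line of arithmetic. In this sketch, the base strength w = 0.2 and threshold θ = 0.1 are made-up illustrative numbers; the point is only that the number of coincident spikes needed grows with the postsynaptic degree k:

```python
import math

W, THETA = 0.2, 0.1   # assumed base synaptic strength and firing threshold

def spikes_needed(k):
    """Minimum number of simultaneously arriving spikes that push a
    degree-k neuron past threshold when each synapse has strength W/sqrt(k)."""
    return math.ceil(THETA * math.sqrt(k) / W)

low_degree = spikes_needed(3)   # the Fig. 1E case: three neighbors
hub = spikes_needed(8)          # the Fig. 1F case: eight neighbors
```

With these made-up numbers, any neuron with k ≤ 4 fires on a single spike while denser neurons require coincident input, mirroring the division between low-degree neurons and hubs.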

Results

Rhythmic Synchronous Firing in a Scale-Free Network.

We first demonstrate that our network model has the capacity to retain rhythmic synchronous firing. Because the network behavior is qualitatively the same for electrical and chemical synapses, we present the results for electrical synapses here; unless stated otherwise, the corresponding results for chemical synapses are given in SI Text.

For illustration, an arbitrary scale-free network is generated, as shown in Fig. 2A, with the neurons indexed according to the order of their generation by the preferential-attachment rule (11). Beginning with a random initial condition (u_i and v_i, for i = 1, …, N, set to uniformly distributed random numbers between 0 and 1), we evolve the network state according to Eqs. 1 and 2 and find that, with a probability of around 10%, the network evolves into a self-sustained, periodically oscillatory stationary state (i.e., an attractor), whereas in the other cases the initial activity fades away rapidly. Measured by the mean activity of all neurons, U(t) = (1/N) Σ_i u_i(t), the oscillatory attractor is characterized by bursts of synchronous firing across the network interrupted by rather long interbursting intervals during which only a small number of neurons fire (Fig. 2B and Movie S1). Thus, the network can maintain rhythmic synchronous firing.

Fig. 2.

Fig. 2.

(A) A network with scale-free topology; the diameter of a neuron is proportional to its connectivity. (B) The population activity of the network, which displays rhythmic synchronous firing. (C and D) Activities of individual neurons fall into two types: those that fire only once in a single period (type I; C) and those that fire twice or more (type II; D). (E) Same as A with the positions of neurons reorganized: all type II neurons (green and orange) are placed in the center and all type I neurons (blue) outside. All type II neurons have low-degree connections and form a loop. Two red edges show the pathways along which excitation of the loop propagates to a hub neuron.

Mechanism for Long-Period Rhythmic Synchronous Firing.

To unveil the mechanism underlying the network behavior, we investigate the activities of individual neurons during rhythmic firing and find that they can be roughly divided into two groups: those that fire only once in a single period, referred to hereafter as type I neurons (Fig. 2C), and those that fire twice in a single period, referred to hereafter as type II neurons (Fig. 2D) (very few neurons fire more than twice in a single period; for simplicity, they are counted as type II). The different behaviors of type I and type II neurons disclose the secret of the network dynamics. We redraw the network diagram of Fig. 2A by moving all type II neurons to the center of the figure, as shown in Fig. 2E. Interestingly, all type II neurons have low-degree connectivity and form a loop. We call such a loop formed by low-degree neurons a low-degree loop.

The picture of the network dynamics now becomes clear. For illustration, suppose that neuron 91 on the loop (marked red in Fig. 2E) is excited first and that the activity propagates along the loop in the anticlockwise direction (Movie S2). Initially, the firing of low-degree neurons on the loop is not sufficient to generate synchronous firing, and the rest of the network remains silent. The situation changes when the excitation propagates to neuron 36, from which hub neuron 4 becomes activated. This hub fires because it receives two spikes at around the same time through two transmission routes (red edges in Fig. 2E). Many other hubs are also connected to the loop; they do not fire, however, because they do not receive two or more synchronized spikes from the loop. The firing of hub neuron 4 subsequently induces synchronous firing across the network through its massive connections. After the synchronous burst, the network returns to the interbursting period because the majority of neurons are in the refractory state.

Thus, in generating rhythmic synchronous firing, the role of a hub neuron is to rapidly spread excitation across the network, and the role of a low-degree loop is to retain the activity during the interbursting interval, with the length of the loop determining the rhythm. Neurons on the loop fire twice in a single period: once during the interbursting interval and once during the hub-induced synchronous burst; all other neurons fire only once, during the burst. Electrical synapses (or scaled chemical synapses) play a critical role by ensuring that only one or very few hub neurons are activated while excitation propagates along the low-degree loop, which protects the network against epileptic firing.
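The division of labor described above — a low-degree loop setting the period and a hub broadcasting the burst — can be reproduced in a toy discrete-time automaton. All sizes and the refractory length are illustrative, and the hub here fires on a single input (the paper's hubs need two coincident spikes; that refinement is omitted for brevity):

```python
# Ring of 12 low-degree cells carries the excitation wave one cell per step;
# a hub hangs off cell 5 and fans out to 8 "outer" cells, so the whole
# population bursts once per trip of the wave around the loop.
L, R, N_OUT = 12, 3, 8                 # loop length, refractory steps, outer cells
HUB = "hub"
adj = {i: {(i - 1) % L, (i + 1) % L} for i in range(L)}
adj[HUB] = {5} | {("out", i) for i in range(N_OUT)}
for i in range(N_OUT):
    adj[("out", i)] = {HUB}
adj[5].add(HUB)

last = {node: -100 for node in adj}    # time of most recent firing
last[0] = 0                            # seed the wave at cell 0 ...
last[L - 1] = -1                       # ... and keep it unidirectional

hub_times = []
for t in range(1, 40):
    fired_prev = {n for n, tf in last.items() if tf == t - 1}
    for n in adj:
        if t - last[n] > R and adj[n] & fired_prev:
            last[n] = t
            if n == HUB:
                hub_times.append(t)
```

The hub bursts exactly once per circuit of the wave, so consecutive burst times are spaced by the loop length L, while refractoriness keeps the hub's fan-out from re-igniting the loop out of turn.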

After synchronous firing, the majority of neurons are in the refractory period and cannot immediately be activated again. There are, however, exceptions. In the example network of Fig. 2E, the five loop neurons marked green (indexes 91, 153, 59, 202, and 201) are activated by the excitation propagating along the loop at around the same time as hub neuron 4 is excited. They are therefore still refractory during the hub-induced synchronous burst and do not take part in it, which means that they recover from the refractory period earlier than the other neurons. As a result, neuron 91 can be activated by the network activity at the end of the synchronous phase, and the propagation of the excitation wave along the loop begins again. Thus, the low-degree loop plays another important role: it retains the excitation seed through a hub-induced synchronous burst so that rhythmic network activity can be maintained. This property is clearly displayed in Movie S2.

As a comparison, we also studied network models with non-scale-free topology, including regular, random, and small-world networks, and found that they all fail to retain long-period rhythmic synchronous firing, either for lack of hub neurons to trigger synchronous firing or for lack of low-degree neurons to retain long-period, low-level activity (Figs. S1–S3).

Reservoir Nature of a Scale-Free Network.

In the above, we demonstrated that a scale-free network can retain periodic synchronous firing, with the length of a low-degree loop determining the period of the rhythm. We further check whether a scale-free network has the resources to maintain a broad range of rhythmic activities. We measured the statistical distribution of the lengths of low-degree loops in a scale-free network of 300 neurons. As Fig. 3A shows, the loop lengths are broadly distributed, indicating that the network acts like a reservoir with the potential to maintain rhythms over a wide range of periods. The longest period is determined by the length of the longest low-degree loop, referred to as L_max; we measured how L_max scales with the network size. Fig. 3B shows that it increases linearly with the number of neurons (see theoretical analysis for larger networks in Fig. S4). Extrapolating the linear fit, for a network size N in the range of 10^4–10^5 (a very rough estimate of the number of neurons in the zebrafish tectum), L_max reaches the order of 10^3–10^4. Considering that the membrane time constant of individual neurons is on the order of 10 ms and the transmission delay between neurons is on the order of 1 ms, the time consumed by excitation propagating along the longest low-degree loop is in the range of 13–130 s, which fully covers the period of 20 s observed in the experiment (9).
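The arithmetic behind the 13–130 s figure can be written out explicitly. The per-hop time is the sum of the membrane time constant and the transmission delay quoted above; the loop lengths shown are the ones these numbers imply, since the fitted slope is not recoverable from this copy:

```latex
T_{\text{rhythm}} \approx L_{\max}\,(\tau_m + \tau_d),
\qquad \tau_m \approx 10\ \text{ms}, \quad \tau_d \approx 1\ \text{ms},
```

```latex
L_{\max} \approx 1.2 \times 10^{3} \;\Rightarrow\; T_{\text{rhythm}} \approx 13\ \text{s},
\qquad
L_{\max} \approx 1.2 \times 10^{4} \;\Rightarrow\; T_{\text{rhythm}} \approx 130\ \text{s}.
```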

Fig. 3.

Fig. 3.

(A) Distribution of the lengths of low-degree loops in a scale-free network of 300 neurons. The result is obtained by averaging over 100 randomly generated scale-free networks of the same size. (B) Length of the longest low-degree loop vs. the number of neurons in the network. Actual values are obtained by exhaustively searching all loops in each network; the upper bounds are calculated theoretically (Fig. S4). Each data point is obtained by averaging over 100 randomly generated scale-free networks of the same size.

Matching the Rhythm of External Input.

The above analysis suggests that the key to retaining long-period rhythmic synchronous firing is a low-degree loop of the proper size together with a hub neuron "hooked" onto that loop that can be activated by the loop's excitation. But how does the neural system acquire this structure from a given external rhythmic input? Here we argue that it can be acquired through a learning process, and we demonstrate the idea with the following simulation.

A conventional scale-free network specifies only topology, without geometric structure. To incorporate the fact that a neural circuit is embedded in a 2D cortical sheet whose connection density falls off with distance, we build a scale-free network in 2D space that accommodates the biological feature that neurons tend to connect more locally (Fig. 4 A and B; for details, see Materials and Methods).

Fig. 4.

Fig. 4.

(A) A scale-free network of 1,000 neurons in 2D space, in which neurons tend to have higher connectivity locally. (B) The connectivity distribution of the network, which satisfies a power law. (C) The network retains residual activity elicited by a transient strong stimulation for many seconds. (Inset) A close-up of the network activity near the end of this interval. This residual activity serves as a memory trace of the stimulation. Electrical synapses are used; parameters are the same as in Fig. 1 A and B. (D) The network learns to generate rhythmic synchronous firing matching the period of an external stimulation after the stimulation is presented repeatedly 20 times.

To extract the rhythm of a periodic stimulation, the key is to establish an association between consecutive stimulations, which requires the network to hold the stimulation information (a memory trace) for longer than the interstimulation interval. Suppose that at t = 0 a stimulation is first presented to a portion of the neurons in the network (4% in our simulation, to ensure initial synchronous firing), referred to as neural cluster A, whose activation triggers a strong transient response in the network. Subsequently, the residue of the stimulus-induced activity propagates through the network along various paths, effectively holding a memory trace of the preceding stimulation (Fig. 4C) and providing a cue for associative learning. Around time T, the period of the rhythmic input, the residual activity reaches a neural cluster B, and meanwhile a new round of stimulation is applied to cluster A. Following the idea of Hebbian learning (14–16), the synapses from cluster B to cluster A are potentiated. This learning process repeats at each period of the stimulation. Eventually, the connections from cluster B to cluster A are strengthened to the point that the excitation of B can activate A, and a closed loop of size T is formed. Note that because the stimulation induces the initial synchronous firing, cluster A contains hub neurons, and these hub neurons are hooked on the loop after learning. The network can now hold synchronous firing with the same rhythm as the input. Fig. 4D displays the simulation result for electrical synapses; the result for chemical synapses is shown in Fig. S5.

The range of rhythmic inputs a network can learn is determined by the lifetime of the memory trace elicited by the stimulation. We measured how this lifetime scales with the network size (Materials and Methods). Fig. 5 A and B shows that the mean lifetime of a memory trace increases linearly with the number of neurons for both electrical synapses (Fig. 5A) and chemical synapses (Fig. 5B), with proportionality set by τ, the time constant of the single-neuron dynamics. Fig. 5 C and D demonstrates that a scale-free network with electrical synapses is able to learn a broad range of rhythmic inputs, provided the interstimulation interval is smaller than the lifetime of the memory trace the network can hold. The result for chemical synapses is similar (Fig. S5).

Fig. 5.

Fig. 5.

(A and B) The lifetime of a memory trace vs. the number of neurons in a scale-free network: (A) networks with electrical synapses; (B) networks with chemical synapses. The parameters for the single-neuron dynamics and synaptic strengths are the same as in Fig. 1. Each data point is obtained by averaging over 100 networks of the same size. (C and D) A given scale-free network with electrical synapses learns to generate a broad range of rhythmic synchronous firing from different external inputs. An external stimulation is applied periodically 20 times, with two different interstimulation intervals shown in C and D.

Discussion

In summary, we have proposed a simple yet effective mechanism to generate and maintain long-period rhythmic synchronous firing in the neural system. The network has scale-free topology and contains low-degree loops and chains of various sizes, which endow the neural system with the capacity to process a broad range of rhythmic inputs. In the presence of a rhythmic external input, the neural system selects from its reservoir a low-degree loop whose size matches the input rhythm, and this matching can be achieved through a learning process.

Scale-free topology and the difficulty of activating a hub neuron are the two fundamental elements of our model. Strictly speaking, the network structure need not be perfectly scale free; it need only contain a few hubs and a large number of low-degree neurons. This type of connection pattern has been suggested to achieve a good balance between communication efficiency and wiring economy in neural networks (17). A number of experimental studies also support the existence of scale-free networks in the neural system: for instance, the topology of developing hippocampal networks in rats and mice (18) and that of functional networks in the human brain inferred from functional MRI data (19, 20) have been reported to be scale free. Experimental data also show that stimulating a single or a few cortical neurons can significantly modulate perception and motor output (21, 22) and the global state of the brain (23, 24), indicating the potential presence of hub neurons.

To implement the relative difficulty of activating a hub neuron compared with a low-degree one, we proposed two potential mechanisms. The first is that neurons are connected by electrical synapses. Electrical coupling is known to exist abundantly between ganglion cells in the retina and between interneurons in the cortex; whether electrical synapses also exist in abundance between excitatory neurons in some cortical regions remains unclear (25).

The other mechanism is that neurons are connected by chemical synapses whose strength decreases with the connectivity of the postsynaptic neuron. In contrast, if the strengths of chemical synapses are homogeneous (w_ij = w, independent of k_i), a scale-free network is unable to retain long-period rhythmic synchronous firing, because hub neurons induce high-frequency oscillating responses (Fig. S6). We found that if the synaptic strength scales inversely with the square root of the postsynaptic neuron's connectivity, i.e., w_ij = w/√k_i, long-period rhythms are well retained. Notably, this relationship is also the condition for a balanced network to produce irregular neural responses (26). Experimental data also show that synaptic efficacy can decrease with the connectivity of the postsynaptic neuron (27).

We also tested the robustness of our model to noise, exploring two noise sources: heterogeneity in the neuronal connections and the injection of background noisy input. We measured how these two forms of noise affect the network's response to a strong transient external stimulation, including (i) the reliability with which the network generates synchronous firing followed by long-lasting residual activity and (ii) the lifetime of the memory trace elicited by the stimulation. Over a wide range of noise amplitudes, the network performance is robust (Fig. S7).

In general, for any network with scale-free-like topology — a few hub nodes and a large number of low-degree nodes — in which hub nodes are more difficult to activate than low-degree ones (irrespective of how this is achieved), we expect the main findings of this work to remain applicable. It will be interesting to explore whether other dynamical systems hold these properties.

Synchronous oscillation has been suggested to play important roles in brain function. Different mechanisms have been proposed for generating rhythms in neural responses, ranging from single-cell properties (e.g., pacemaking or resonance) to neural population dynamics (e.g., a network of interneurons or a loop of reciprocally coupled excitatory and inhibitory neurons) (28). A challenge for these proposed mechanisms is to reconcile the notion of regular oscillation with the seemingly contradictory irregular responses of individual neurons. The mechanism we propose for generating slow oscillation depends on the propagation of neural activity along a loop of low-degree neurons, and thus does not contradict the observation that individual neurons can respond irregularly to a constant stimulus (29).

Finally, our study has far-reaching implications for neural information processing. It suggests that a neural system can use loops and chains of connected neurons to hold the memory trace of input information, and that these may serve as substrates for processing temporal events. The model agrees with the idea of a reservoir network (8, 30), in which the neural system is a large repertoire with complete, even redundant, resources for implementing a brain function, and simplicity rather than efficient use of resources is what matters. Our study also supports the notion that the neural system can process temporal information through the dynamics of local circuits in a distributed way.

Materials and Methods

Generation of a Scale-Free Network in 2D Space.

A neural circuit is essentially embedded in 2D space, and neurons tend to have more connections locally. To accommodate this feature, we generate a scale-free network in 2D space in two stages. First, we use the preferential-attachment rule (11) to generate a preliminary scale-free network and assign the neurons' positions as follows: starting from the center and moving toward the peripheral regions, neurons are added one by one clockwise on a 2D lattice (Fig. 4A). Denote by n_i the neuron added at the ith step. A newly added neuron makes either one or two connections (each case with a fixed probability) to the existing neurons, and the chance of an existing neuron n_j being linked to the new neuron is k_j / Σ_l k_l, where k_j is the current connectivity of n_j and the summation runs over all existing neurons. Under this preferential-attachment rule, hub neurons tend to be located in the central area, because earlier-added neurons have more chances to be connected to others. This structure agrees with the property of the visual system (e.g., the retina) that neurons have higher density, and hence higher connectivity, in the fovea than in the periphery. Second, for each neuron n_i with large enough connectivity in the preliminary network, we remove its connection to a neuron n_j with a probability that increases with d_ij, the Euclidean distance between the two neurons on the 2D lattice; the connections of low-connectivity neurons are unchanged. A large d_ij implies that the two neurons are well separated in space and hence are likely to be disconnected. This pruning rule achieves the goal that neurons tend to have more connections locally. Using the above method, we generate the network shown in Fig. 4A and confirm that it has scale-free topology, in that the distribution of the neurons' connectivity satisfies a power law (Fig. 4B).
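The two-stage construction can be sketched in code. Several details are not recoverable from this copy and are therefore assumptions: a sunflower (center-out) layout stands in for the clockwise lattice spiral, a new neuron makes one or two links with equal probability, the earliest tenth of the neurons are treated as the high-connectivity ones, and the pruning law is taken as 1 − exp(−d/λ):

```python
import math
import random

def grow_2d_scale_free(n, lam=3.0, seed=0):
    """Sketch of the two-stage construction: preferential attachment on
    center-out 2D positions, then distance-dependent pruning of edges
    incident to the earliest (highest-connectivity) neurons."""
    rng = random.Random(seed)
    golden = math.pi * (3.0 - math.sqrt(5.0))
    # center-out placement: early neurons (the future hubs) sit near the center
    pos = [(math.sqrt(i) * math.cos(i * golden),
            math.sqrt(i) * math.sin(i * golden)) for i in range(n)]
    edges, stubs = [(1, 0)], [0, 1]
    for new in range(2, n):
        m = rng.choice((1, 2))                  # one or two links (assumed 50/50)
        targets = set()
        while len(targets) < min(m, new):
            targets.add(rng.choice(stubs))      # preferential attachment
        for t in targets:
            edges.append((new, t))
            stubs += [new, t]
    degree = {i: 0 for i in range(n)}
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    # distance-dependent pruning, applied only to hub-incident edges
    kept_d, removed_d = [], []
    for i, j in edges:
        if min(i, j) < n // 10:                 # edge touches an early (hub) neuron
            d = math.dist(pos[i], pos[j])
            if rng.random() < 1.0 - math.exp(-d / lam):
                removed_d.append(d)             # long edges are pruned more often
            else:
                kept_d.append(d)
    return degree, kept_d, removed_d

degree, kept_d, removed_d = grow_2d_scale_free(400)
```

Because the removal probability grows with distance, the pruned hub edges are on average longer than the surviving ones, which is exactly the "more connections locally" property the construction is after.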

Learning Algorithm.

We consider a simple learning rule (14–16): if a neuron i fires shortly before a neuron j, within a short time window, then with some probability p a synapse from neuron i to neuron j is established. In analogy to the experimental setting, an external stimulation is applied to the network repeatedly with interval T. This stimulation instantly activates 4% of the neurons in the network, referred to as neural cluster A (these neurons are chosen randomly and uniformly from the network). Suppose that the first round of stimulation is applied at t = 0, triggering a strong transient response of the network, which implies that at least one hub neuron is activated. Subsequently, the residual activity of the transient response propagates through the network along various paths. Denote by neural cluster B those neurons that happen to be active in a short time window just before the next stimulation; cluster B holds the memory trace of the stimulation. At time T, the second round of stimulation is applied, which activates cluster A again, and by the above learning rule the couplings from cluster B to cluster A are strengthened. This learning process repeats at each round of the same stimulation. Eventually, the connections from cluster B to cluster A become strong enough that the excitation of B can activate A. Thus, a closed loop of size T is formed, and the network is able to generate rhythmic activity with the same period as the input.
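The learning algorithm can be caricatured in a few lines: cluster A shrinks to a single neuron (index 0), the residual activity travels down a feed-forward chain one hop per step, and a feedback synapse forms whenever a neuron fires one step before the cluster is restimulated. The period T = 7, chain length 20, deterministic learning (p = 1), and one-step Hebbian window are all illustrative choices:

```python
T, N, R = 7, 20, 2                       # input period, chain length, refractory steps
fwd = {j: j + 1 for j in range(N - 1)}   # chain 0 -> 1 -> ... -> 19
feedback = set()                         # learned synapses j -> 0 (the "B to A" loop)

def run(steps, stim_period, learn=False):
    """Simulate the chain; return the firing times of neuron 0."""
    last = {i: -100 for i in range(N)}
    fires0 = []
    for t in range(steps):
        stim = t % stim_period == 0
        prev = {i for i, tf in last.items() if tf == t - 1}
        if learn and stim:
            feedback.update(prev)        # Hebbian: pre fired just before post
        for i in range(N):
            drive = (stim and i == 0) \
                or any(fwd.get(j) == i for j in prev) \
                or (i == 0 and bool(feedback & prev))
            if drive and t - last[i] > R:
                last[i] = t
                if i == 0:
                    fires0.append(t)
    return fires0

run(3 * T + 1, stim_period=T, learn=True)    # conditioning at t = 0, 7, 14, 21
recall = run(3 * T + 1, stim_period=10**9)   # free run: seeded only at t = 0
```

After conditioning, the neuron that fires one step before each restimulation (position T − 1 = 6) has acquired a feedback synapse onto the cluster, closing a loop of size T; in the free run, neuron 0 then refires with exactly the conditioned period.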

Lifetime of Memory Trace.

We calculate how the lifetime of the residual activity elicited by synchronous firing scales with the network size. For a given size $N$, we randomly generate 100 networks with scale-free topology and long low-degree chains. For each network, we stimulate hub neurons, whose excitation triggers synchronous firing of the network followed by long-lasting residual activity. The lifetime of the memory trace is measured from the onset of synchronous firing to the moment when all neurons become inactive. The statistics of the memory-trace lifetime for a given network size are obtained by averaging over the 100 random networks. We carry out simulations for network sizes of 500, 1,000, and 1,500 and fit the relationship between the lifetime of the memory trace and the network size with a linear function (Fig. 5 A and B).
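The averaging-and-fitting protocol of this paragraph can be sketched as follows. `mean_lifetime` is a stand-in for the actual spiking-network simulation (here a noisy linear toy, purely so the code runs), while `linear_fit` is plain ordinary least squares.

```python
import random

def mean_lifetime(n, trials=100, rng=None):
    """Stand-in for the real measurement: average memory-trace lifetime
    over `trials` random networks of size n.  In the paper this value comes
    from simulating the spiking network; here it is a noisy linear toy."""
    rng = rng or random.Random(0)
    return sum(0.05 * n + rng.gauss(0.0, 2.0) for _ in range(trials)) / trials

def linear_fit(xs, ys):
    """Ordinary least squares fit y ~ a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

sizes = [500, 1000, 1500]
lifetimes = [mean_lifetime(n) for n in sizes]
slope, intercept = linear_fit(sizes, lifetimes)
```

Averaging over many random networks before fitting keeps the per-size estimate stable, so three sizes suffice to read off the linear trend.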

Supplementary Material

Supporting Information

Acknowledgments

We acknowledge the very valuable comments of two anonymous reviewers. We thank M. M. Poo, T. Sejnowski, M. Tsodyks, M. Rasch, W. H. Zhang, X.-J. Wang, and K. Y. Michael Wong for valuable discussions. This work is supported by National Natural Science Foundation of China Grants 11305112 (to Y.M.), 11135001 and 11174034 (to G.H.), 91132702 and 31261160495 (to S.W.), and 11205041 (to X.L.); the Open Research Fund of the State Key Laboratory of Cognitive Neuroscience and Learning (CNLYB1211); and Natural Science Foundation of Jiangsu Province Grant BK20130282.

Footnotes

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1304680110/-/DCSupplemental.

References

1. Kandel E, Schwartz J, Jessell T, Siegelbaum S, Hudspeth A. Principles of Neural Science. 5th Ed. New York: McGraw-Hill; 2013.
2. Carr CE. Processing of temporal information in the brain. Annu Rev Neurosci. 1993;16:223–243. doi: 10.1146/annurev.ne.16.030193.001255.
3. Mauk MD, Buonomano DV. The neural basis of temporal processing. Annu Rev Neurosci. 2004;27:307–340. doi: 10.1146/annurev.neuro.27.070203.144247.
4. Buhusi CV, Meck WH. What makes us tick? Functional and neural mechanisms of interval timing. Nat Rev Neurosci. 2005;6(10):755–765. doi: 10.1038/nrn1764.
5. Ivry RB, Schlerf JE. Dedicated and intrinsic models of time perception. Trends Cogn Sci. 2008;12(7):273–280. doi: 10.1016/j.tics.2008.04.002.
6. Gibbon J. Scalar expectancy theory and Weber’s law in animal timing. Psychol Rev. 1977;84(3):279–335.
7. Karmarkar UR, Buonomano DV. Timing in the absence of clocks: Encoding time in neural network states. Neuron. 2007;53(3):427–438. doi: 10.1016/j.neuron.2007.01.006.
8. Buonomano DV, Maass W. State-dependent computations: Spatiotemporal processing in cortical networks. Nat Rev Neurosci. 2009;10(2):113–125. doi: 10.1038/nrn2558.
9. Sumbre G, Muto A, Baier H, Poo M-M. Entrained rhythmic activities of neuronal ensembles as perceptual memory of time interval. Nature. 2008;456(7218):102–106. doi: 10.1038/nature07351.
10. Barabási AL, Albert R. Emergence of scaling in random networks. Science. 1999;286(5439):509–512. doi: 10.1126/science.286.5439.509.
11. Albert R, Barabási AL. Statistical mechanics of complex networks. Rev Mod Phys. 2002;74(1):47–97.
12. Bär M, Eiswirth M. Turbulence due to spiral breakup in a continuous excitable medium. Phys Rev E Stat Phys Plasmas Fluids Relat Interdiscip Topics. 1993;48(3):R1635–R1637. doi: 10.1103/physreve.48.r1635.
13. Sherman A, Rinzel J. Rhythmogenic effects of weak electrotonic coupling in neuronal models. Proc Natl Acad Sci USA. 1992;89(6):2471–2474. doi: 10.1073/pnas.89.6.2471.
14. Bi GQ, Poo M-M. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci. 1998;18(24):10464–10472. doi: 10.1523/JNEUROSCI.18-24-10464.1998.
15. Bloomfield SA, Völgyi B. The diverse functional roles and regulation of neuronal gap junctions in the retina. Nat Rev Neurosci. 2009;10(7):495–506. doi: 10.1038/nrn2636.
16. Yang XD, Korn H, Faber DS. Long-term potentiation of electrotonic coupling at mixed synapses. Nature. 1990;348(6301):542–545. doi: 10.1038/348542a0.
17. Buzsáki G, Geisler C, Henze DA, Wang X-J. Interneuron Diversity series: Circuit complexity and axon wiring economy of cortical interneurons. Trends Neurosci. 2004;27(4):186–193. doi: 10.1016/j.tins.2004.02.007.
18. Bonifazi P, et al. GABAergic hub neurons orchestrate synchrony in developing hippocampal networks. Science. 2009;326(5958):1419–1424. doi: 10.1126/science.1175509.
19. Eguíluz VM, Chialvo DR, Cecchi GA, Baliki M, Apkarian AV. Scale-free brain functional networks. Phys Rev Lett. 2005;94(1):018102. doi: 10.1103/PhysRevLett.94.018102.
20. van den Heuvel MP, Stam CJ, Boersma M, Hulshoff Pol HE. Small-world and scale-free organization of voxel-based resting-state functional connectivity in the human brain. Neuroimage. 2008;43(3):528–539. doi: 10.1016/j.neuroimage.2008.08.010.
21. Brecht M, Schneider M, Sakmann B, Margrie TW. Whisker movements evoked by stimulation of single pyramidal cells in rat motor cortex. Nature. 2004;427(6976):704–710. doi: 10.1038/nature02266.
22. Houweling AR, Brecht M. Behavioural report of single neuron stimulation in somatosensory cortex. Nature. 2008;451(7174):65–68. doi: 10.1038/nature06447.
23. Morgan RJ, Soltesz I. Nonrandom connectivity of the epileptic dentate gyrus predicts a major role for neuronal hubs in seizures. Proc Natl Acad Sci USA. 2008;105(16):6179–6184. doi: 10.1073/pnas.0801372105.
24. Li CY, Poo M-M, Dan Y. Burst spiking of a single cortical neuron modifies global brain state. Science. 2009;324(5927):643–646. doi: 10.1126/science.1169957.
25. Connors BW, Long MA. Electrical synapses in the mammalian brain. Annu Rev Neurosci. 2004;27:393–418. doi: 10.1146/annurev.neuro.26.041002.131128.
26. van Vreeswijk C, Sompolinsky H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science. 1996;274(5293):1724–1726. doi: 10.1126/science.274.5293.1724.
27. Li H, Li Y, Lei Z, Wang K, Guo A. Transformation of odor selectivity from projection neurons to single mushroom body neurons mapped with dual-color calcium imaging. Proc Natl Acad Sci USA. 2013;110(29):12084–12089. doi: 10.1073/pnas.1305857110.
28. Wang X-J. Neurophysiological and computational principles of cortical rhythms in cognition. Physiol Rev. 2010;90(3):1195–1268. doi: 10.1152/physrev.00035.2008.
29. Shadlen MN, Newsome WT. The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. J Neurosci. 1998;18(10):3870–3896. doi: 10.1523/JNEUROSCI.18-10-03870.1998.
30. Jaeger H, et al. Special issue on echo state networks and liquid state machines. Neural Networks. 2007;20(3):287–289.


