Neural Plasticity. 2021 Jan 20;2021:6668175. doi: 10.1155/2021/6668175

Gamma Oscillations Facilitate Effective Learning in Excitatory-Inhibitory Balanced Neural Circuits

Kwan Tung Li 1, Junhao Liang 1, Changsong Zhou 1,
PMCID: PMC7840255  PMID: 33542728

Abstract

Gamma oscillation in neural circuits is believed to be associated with effective learning in the brain, but the underlying mechanism is unclear. This paper studies how spike-timing-dependent plasticity (STDP), a typical mechanism of learning, interacts with gamma oscillation in neural circuits to shape the network dynamical properties and the formation of network structure. We study an excitatory-inhibitory (E-I) integrate-and-fire neuronal network with triplet STDP, heterosynaptic plasticity, and transmitter-induced plasticity. Our results show that the performance of plasticity differs across synchronization levels. We find that gamma oscillation is beneficial to synaptic potentiation among stimulated neurons by forming a special network structure in which the sum of excitatory input synaptic strength is correlated with the sum of inhibitory input synaptic strength. The circuit can maintain E-I balanced input on average, whereas the balance is temporarily broken during the learning-induced oscillations. Our study reveals a potential mechanism underlying the benefits of gamma oscillation for learning in biological neural circuits.

1. Introduction

The emergence of oscillations in neurophysiological signals within different frequency bands is commonly observed in the mammalian brain [1–4]. This cross-species phenomenon is behaviorally relevant. When animals or humans perform different tasks or are at rest, neural oscillations within specific frequency bands are detected in specific brain regions. For example, alpha oscillations can be detected in the occipital electroencephalography (EEG) signal when humans are in an eye-closed state, but the signal transitions to the beta band when the eyes are open [1].

Learning is the ability to acquire new knowledge, behaviors, skills, attitudes, preferences, etc., possessed by humans and animals [5]. Physiologically, learning is believed to be accompanied by structural changes of neural circuits in the brain and is frequently observed to be associated with the emergence of gamma band oscillations [6–8]. Through learning processes, structural clusters are formed in neural networks in both the prefrontal cortex and hippocampus, which are behaviorally relevant for the recall of short-term memory and the later consolidation of long-term memory [9, 10].

Although it is commonly accepted that gamma oscillations emerge during learning, their relation is not yet clear. First, the mechanism underlying gamma oscillation in neural circuits is still under active debate [8, 11–15]. Second, the learning process at the neural circuit level is often phenomenologically explained through plasticity mechanisms [16–18], while plasticity mechanisms in neural circuits are diverse and their effects are elusive. In particular, how plasticity interacts with neural dynamics is not well understood.

In this paper, we study learning in neural networks through spike-timing-dependent plasticity (STDP), a widely observed phenomenon in experiments. It describes how synaptic strength changes according to the spike-timing difference between neurons. STDP has become one of the learning rules widely used for studying cluster formation when simulating learning processes in neural networks [19, 20]. The triplet STDP rule, heterosynaptic plasticity, and other plasticity rules were used together in Zenke et al. [19] to study cell assemblies and memory recall. Zenke et al. [19] and previous studies showed that the triplet STDP rule can account for the firing-frequency dependence of plasticity [19, 21], which is not captured by the traditional pair-based STDP rule [22, 23], and that heterosynaptic plasticity can prevent an explosive increase of synaptic weights [24, 25]. Excitation-inhibition neural circuits with synaptic kinetics can support fast network oscillations over a wide range of frequencies [6, 7, 11, 26–28]. Under suitable network parameters, such networks can generate gamma oscillations. In recent studies, gamma oscillation has been shown to be beneficial for coding input signals through plasticity [29, 30] and for inducing postsynaptic neurons to phase lock with the input signal [31], and it is stable for transferring multiplexed signals [32]. However, these studies focus on plasticity between external inputs and receiving neurons. How gamma oscillation interacts with plasticity to form a stable cluster within a recurrent circuit is not well understood. Here, we used an excitation-inhibition circuit of integrate-and-fire neurons with synaptic kinetics to study the effects of plasticity, including STDP, heterosynaptic plasticity, and transmitter-induced plasticity. We aimed to study how gamma oscillation is generated and how it interplays with plasticity during the learning process.

In neural networks with plasticity, the circuit dynamical properties and the network structure (synaptic strengths) interact with each other (Figure 1(a)). Elucidating the mechanism underlying this interaction is highly challenging due to their coevolving nature. In our study, we first split this problem into two separate branches (Figure 1(a)). Branch I (Figure 1(a), left) studies how structure influences dynamics. Specifically, we aimed to understand how synaptic weights affect circuit properties such as firing rate, synchrony, and oscillations, among others, under different synaptic time constants and strengths of background inputs. Branch II (Figure 1(a), right) is the converse, where we aimed to elucidate how diverse circuit properties would affect synaptic weights through plasticity, while the actual network weights are not updated. Finally, the two branches were considered together in a self-organized circuit to study the simultaneous coevolution of network dynamics and structure in the presence of plasticity.

Figure 1. Schematic diagram of the study. (a) A paradigm of the interaction between network structure and dynamics. Branch I (left): synaptic weights influence the circuit dynamical properties, and this influence depends on other dynamical parameters such as the synaptic time constants (τdE, τdI) and the background input fbackground. Branch II (right): the circuit dynamics influence the synaptic weights through the plasticity rule. (b) Illustration of the synaptic time course Sk(t), k = E, I, in Eq. (3) with different decay times τdk.

The dynamics of the neural circuits studied here can differ in firing rate and degree of synchrony, consistent with previous studies [26, 33–37]. The key parameters governing these properties are the synaptic coupling strength and the receptor time constants associated with different neurotransmitters. By studying the effect of different artificial spike trains in shaping the structure of virtual networks (Branch II), we found that the firing rate and synchrony properties of the spike trains play a major role. We found that network synchrony is beneficial, while bursting spiking of neurons is detrimental, to synaptic potentiation. Moreover, gamma oscillation always accompanies synaptic potentiation during learning in circuits with plasticity, where the dynamics and structure coevolve. Furthermore, we showed that synchronous dynamics forms a special network structure after learning. The E-I balanced inputs received by neurons are temporarily broken during learning. Overall, this study shows that gamma oscillation in E-I balanced neural circuits plays a beneficial role for effective learning through STDP.

2. Material and Methods

2.1. Neural Circuit Model

2.1.1. Network and Circuit Dynamics

Our model circuit is composed of 2400 neurons, with nE = 2000 excitatory (E) and nI = 400 inhibitory (I) neurons. Neurons are connected randomly with directed synapses with probability p = 0.2. Each neuron receives background input from 400 independent Poisson trains of rate fbackground = 2.5 Hz. Neural dynamics in the network are modeled by conductance-based integrate-and-fire neurons [34] as

\tau_k \frac{dV_i^k}{dt} = (V_L - V_i^k) + G_i^{kE}(t)(E_E - V_i^k) + G_i^{kI}(t)(E_I - V_i^k),
G_i^{kI}(t) = \tau_k \sum_{j \in i^I} \sum_n w_{ij} S^I(t - t_j^n),
G_i^{kE}(t) = \tau_k \left[ \sum_{j \in i^O} \sum_n g_{kO} S^E(t - t_j^n) + \sum_{j \in i^E} \sum_n w_{ij} u_j(t) x_j(t) S^E(t - t_j^n) \right].  (1)

Here, k = E or I denotes the neuron type, ik denotes the set of presynaptic neighbors of type k of neuron i (with iO denoting its background input sources), Vik(t) is the membrane potential of neuron i, and tin is the nth spike time of neuron i. wij is the synaptic strength from presynaptic neuron j to postsynaptic neuron i, and the initial values are set as

w_{ij} = \begin{cases} g_{EE}, & i \in E,\ j \in E \\ g_{EI}, & i \in E,\ j \in I \\ g_{IE}, & i \in I,\ j \in E \\ g_{II}, & i \in I,\ j \in I. \end{cases}  (2)

During the simulation, the E-to-E synaptic weights (initialized to gEE) change according to the plasticity rules described below. The synaptic time course is described by a biexponential function with two characteristic time constants, the rise time constant τrk and the decay time constant τdk, both depending on the type of the presynaptic neuron. It takes the form

S^k(t) = \frac{\Theta(t - \tau_l)}{\tau_d^k - \tau_r^k} \left( e^{-(t - \tau_l)/\tau_d^k} - e^{-(t - \tau_l)/\tau_r^k} \right),  (3)

where Θ(t) is the Heaviside function and τl is the axonal delay. We used excitatory synaptic decay time constants ranging over τdE = 3–90 ms. This range is biologically realistic because the AMPA receptor has a small synaptic time constant of about 5 ms [38] and the NMDA receptor has a large synaptic time constant of around 100 ms [39]. Thus, an effective combination of AMPA and NMDA receptors gives an excitatory synaptic time constant lying in the range studied in this work, and the value of τdE is biologically determined by the ratio between these two receptor types. Furthermore, we used τdI = 10 ms, approximately the time constant of a major inhibitory receptor, GABAA [40]. Different shapes of the synaptic current (Eq. (3)) are shown in Figure 1(b).
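To make the synaptic kinetics concrete, below is a minimal NumPy sketch of the biexponential kernel in Eq. (3). It is an illustrative reimplementation, not the authors' code; the function name and the example parameter values (taken from Table 1) are our own.

```python
import numpy as np

def synaptic_kernel(t, tau_d, tau_r, tau_l=1.0):
    """Biexponential synaptic time course S^k(t) of Eq. (3).

    t     : array of times in ms
    tau_d : decay time constant in ms (e.g., 3-90 ms for excitation)
    tau_r : rise time constant in ms (e.g., 0.5 ms)
    tau_l : axonal delay in ms
    """
    ts = t - tau_l
    gate = (ts >= 0).astype(float)   # Heaviside step Θ(t - τ_l)
    return gate / (tau_d - tau_r) * (np.exp(-ts / tau_d) - np.exp(-ts / tau_r))

# Example: a fast (AMPA-like) versus a slow (NMDA-like) excitatory kernel.
t = np.arange(0.0, 100.0, 0.1)       # ms
s_fast = synaptic_kernel(t, tau_d=3.0, tau_r=0.5)
s_slow = synaptic_kernel(t, tau_d=90.0, tau_r=0.5)
```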

2.1.2. Short-Term Synaptic Plasticity

We adopt presynaptic short-term plasticity (STP) in the model, which represents the dynamics of neurotransmitter release [41]. It is described by two variables, the amount of available neurotransmitter xi and its release probability per spike ui, which are associated with the depression time constant τD and the facilitation time constant τF, respectively. In general, τF > τD holds. These variables are governed by the following equations:

\frac{du_i(t)}{dt} = \frac{U - u_i(t)}{\tau_F} + U\big(1 - u_i(t)\big) S_i(t),
\frac{dx_i(t)}{dt} = \frac{1 - x_i(t)}{\tau_D} - u_i(t) x_i(t) S_i(t),  (4)

where δ(t) is the Dirac delta function and Si(t) = ∑nδ(t − tin) is the spike train of neuron i. Depending on the firing history of the neurons, short-term potentiation or depression [41–44] can be induced by the change (accumulation and release) of neurotransmitter xi and the change of release probability ui, which in turn transiently modify the synaptic efficacy.
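A minimal Euler-discretization sketch of the STP dynamics in Eq. (4) is given below. It is an illustration under our own discretization convention (relaxation first, then the spike-triggered jumps), with parameter values from Table 1; it is not the authors' implementation.

```python
import numpy as np

def stp_step(u, x, spiked, dt, U=0.2, tau_F=1500.0, tau_D=200.0):
    """One Euler step (dt in ms) of the short-term plasticity variables in Eq. (4).

    u, x   : arrays of release probability and available neurotransmitter
    spiked : boolean array, True for neurons spiking in this time step
    """
    # continuous relaxation of u toward U and of x toward 1
    u = u + dt * (U - u) / tau_F
    x = x + dt * (1.0 - x) / tau_D
    # spike-triggered facilitation of u, then depletion of x by the released fraction
    u = np.where(spiked, u + U * (1.0 - u), u)
    x = np.where(spiked, x - u * x, x)
    return u, x

# Usage: start from the resting values and let one of three neurons spike.
u, x = np.full(3, 0.2), np.ones(3)
u, x = stp_step(u, x, spiked=np.array([True, False, False]), dt=0.1)
```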

2.1.3. Plasticity Rule

The synaptic weights change according to an integrative plasticity mechanism, modeled by triplet spike-timing-dependent plasticity (STDP) combined with heterosynaptic plasticity and transmitter-induced plasticity [19]. In our model, the plasticity rule is applied to all E-to-E synapses in the recurrent network. That is, for two excitatory neurons i, j, their synapse evolves according to

\frac{dw_{ij}}{dt} = \underbrace{S_i(t)\, A\, z_j z_i^{\mathrm{slow}} - S_j(t)\, B\, z_i}_{\text{Triplet STDP}} - \underbrace{S_i(t)\, \beta z_i^3 (w_{ij} - \tilde{w})}_{\text{Heterosynaptic plasticity}} + \underbrace{\delta_1 S_j(t)}_{\text{Transmitter-induced plasticity}},  (5)
\frac{dz_i}{dt} = -\frac{z_i}{\tau_{\mathrm{STDP}}} + S_i(t),  (6)
\frac{dz_i^{\mathrm{slow}}}{dt} = -\frac{z_i^{\mathrm{slow}}}{\tau_{\mathrm{STDP\_slow}}} + S_i(t).  (7)

In the simulation, we set a lower bound of 0.001 on each synaptic weight to prevent negative and zero values.

The plasticity rule modifies synaptic weights according to presynaptic and postsynaptic traces. Each neuron i is accompanied by two synaptic traces, the fast synaptic trace zi and the slow synaptic trace zislow. They increase by one unit when the neuron spikes and decay to zero with fast and slow time constants τSTDP and τSTDP_slow, respectively. Apart from the commonly used fast synaptic trace, we also adopt a slow synaptic trace, which was first introduced in [21]. The slow synaptic trace gives rise to the spiking-frequency dependence of plasticity and better matches experimental results [19, 21]. The first term on the right-hand side (RHS) of Eq. (5) is the triplet STDP, which potentiates the synapse by an amount A zj zislow every time the postsynaptic neuron fires and depresses it by an amount B zi every time the presynaptic neuron fires, where A and B are the long-term potentiation and depression rates, respectively. In general, considering a spike of the presynaptic neuron at time tpresp and a spike of the postsynaptic neuron at time tpostsp, the synapse linking these two neurons is potentiated when tpostsp − tpresp > 0 and depressed when this difference is negative. Imposing triplet STDP alone generally leads to unstable synaptic evolution. This is because, for STDP between excitatory neurons, potentiated (depressed) synapses tend to induce more (less) spiking of the postsynaptic neuron, which in turn potentiates (depresses) these synapses. Such a positive feedback effect makes the synaptic weights diverge. A solution to stabilize triplet STDP is to let it work together with other plasticity rules. A general rule that can stabilize STDP was introduced in [45, 46]. Zenke et al. [19] used heterosynaptic plasticity, described by the second term on the RHS of Eq. (5), to stabilize triplet STDP. Every time a postsynaptic neuron spikes, the heterosynaptic plasticity reduces the synaptic strength by an amount βzi3(wij − w̃), with β being a learning rate and w̃ being the target synaptic weight. The heterosynaptic plasticity rule was found in experiments [24, 25], which suggest that it operates in the high-spiking-frequency domain; therefore, a cubic exponent of the postsynaptic trace, zi3, is used. Transmitter-induced plasticity, the third term on the RHS of Eq. (5), prevents neurons from falling silent. It may represent spine growth [47] or a kind of long-term potentiation that counteracts Hebbian long-term depression [48]. Its effect is not crucial for our study, since we consider relatively short simulation times of ~100 s.
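For concreteness, below is a vectorized sketch of one Euler update of Eqs. (5)–(7) for an E-to-E weight matrix. The function name, the matrix convention (w[i, j] is the synapse from presynaptic j to postsynaptic i), and the choice to update the traces after applying the weight change are our own assumptions; parameter values follow Table 1. It is a sketch of the rule rather than the authors' code.

```python
import numpy as np

def plasticity_step(w, z, z_slow, spiked, dt,
                    A=1e-3, B=1e-3, beta=0.01, w_target=0.1, delta1=1e-5,
                    tau_stdp=20.0, tau_stdp_slow=100.0, w_min=0.001):
    """One Euler step (dt in ms) of Eqs. (5)-(7) for the plastic E-to-E weights.

    w          : (N, N) weight matrix, w[i, j] from presynaptic j to postsynaptic i
    z, z_slow  : fast and slow synaptic traces, one value per neuron
    spiked     : boolean array of spikes in this time step (S_i(t) as 0/1)
    """
    s = spiked.astype(float)
    # triplet STDP: potentiate on postsynaptic spikes, depress on presynaptic spikes
    dw = np.outer(A * s * z_slow, z) - np.outer(B * z, s)
    # heterosynaptic plasticity: on postsynaptic spikes, pull weights toward w_target
    dw -= (beta * s * z**3)[:, None] * (w - w_target)
    # transmitter-induced plasticity: small potentiation on every presynaptic spike
    dw += delta1 * s[None, :]
    w = np.maximum(w + dw, w_min)          # lower bound of 0.001 on the weights
    # update the traces, Eqs. (6) and (7)
    z = z - dt * z / tau_stdp + s
    z_slow = z_slow - dt * z_slow / tau_stdp_slow + s
    return w, z, z_slow
```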

2.1.4. Learning and Memory Encoding

We encoded a memory into the network during learning by adding a stimulus to a subset of neurons in addition to the background input, which facilitates the structural formation of a cluster among these neurons. Unless otherwise specified, at t = 30 s, an extra stimulation input representing a learning signal is applied to 200 chosen excitatory neurons to model the encoding of a single memory. For those chosen neurons, the background rate is increased from fbackground to fstim = As × fbackground, where As is an augment factor. This extra stimulation input lasts until t = 100 s; then the background rate of the chosen neurons is reduced to 0.5 fbackground for 2 s to remove persistent activity [41, 49]. After that, their background input is tuned back to the default rate fbackground and the simulation ends at t = 110 s.
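The stimulation protocol can be summarized as a simple rate schedule; a minimal sketch, assuming the default parameter values from Table 1 (the function and argument names are illustrative):

```python
def background_rate(t, f_background=2.5, A_s=3.5, in_coding_group=True):
    """Per-connection background Poisson rate (Hz) at time t (in s) under the
    learning protocol described above."""
    if not in_coding_group:
        return f_background           # non-coding neurons keep the default rate
    if 30.0 <= t < 100.0:
        return A_s * f_background     # extra learning stimulus
    if 100.0 <= t < 102.0:
        return 0.5 * f_background     # brief reduction to remove persistent activity
    return f_background               # default rate; the simulation ends at t = 110 s
```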

The biological meaning and values of the parameters in our model are summarized in Table 1.

Table 1.

Parameters used in the neural circuit.

Description  Symbol  Value
Numbers of E, I neurons  nE, nI  2000, 400
Network connection probability  p  0.2 [26]
Background input connection number  NO  400
Default firing rate of background input (per connection)  fbackground  2.5 Hz
Augment factor  As  3.5
Axonal delay  τl  1 ms [26]
Leakage potential  VL  -70 mV [26]
Threshold potential  Vth  -50 mV [26]
Rest potential  Vrest  -60 mV [26]
Membrane time constant for E, I neurons  τE, τI  20 ms, 10 ms [26]
Refractory period for E, I neurons  trefractoryE, trefractoryI  2 ms, 1 ms [26]
Reversal potential of E, I neurons  EE, EI  0 mV, -70 mV [26]
Input conductance from background to E, I neurons (normalized by leakage conductance)  gEO, gIO  0.05, 0.05
Input conductance from E to E, I to E, E to I, I to I (normalized by leakage conductance)  gEE, gEI, gIE, gII  0.1, 0.6, 0.84, 0.48
Decay time constant of excitatory, inhibitory current  τdE, τdI  3 ms, 8 ms
Rising time constant of excitatory, inhibitory current  τrE, τrI  0.5 ms, 0.5 ms [26]
Facilitation time constant in STP  τF  1500 ms [41]
Depression time constant in STP  τD  200 ms [41]
Initial neurotransmitter release probability in STP  U  0.2 [41]
Learning rate for LTP  A  0.001 [21]
Learning rate for LTD  B  0.001 [21]
Transmitter plasticity strength  δ1  1 × 10−5 [19]
Learning rate for heterosynaptic plasticity  β  0.01 [19]
Heterosynaptic plasticity parameter  w̃  0.1
Characteristic time constant for synaptic trace  τSTDP  20 ms [21]
Characteristic time constant for slow synaptic trace  τSTDP_slow  100 ms [21]

2.2. Statistical Index and Analysis

2.2.1. Spike Count Series

For analyzing the neural dynamics, we first constructed the spike count series of each neuron as follows. The time axis is divided into consecutive windows of size Δt. The number of spikes of neuron i is then counted in each window to obtain a discrete sequence Ni(t), which is designated as the spike count series of neuron i with time window size Δt.

Furthermore, we defined a binary spiking series for each neuron: Bi(t) = 0 if Ni(t) = 0, and Bi(t) = 1 if Ni(t) > 0.

The number of spikes of the whole neuronal population can likewise be counted in each window. This constructs the population spike count series Nα(t) for the E and I populations (α = E, I). Furthermore, qα(t) = Nα(t)/(nαΔt) is the population-averaged firing rate series.
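A short sketch of these series, written in NumPy for illustration (the function names are ours):

```python
import numpy as np

def spike_count_series(spike_times, t_max, dt):
    """Spike count series N_i(t): spikes of one neuron counted in consecutive bins of size dt (ms)."""
    edges = np.arange(0.0, t_max + dt, dt)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts

def binary_series(counts):
    """Binary spiking series B_i(t): 1 if the neuron spiked in a bin, 0 otherwise."""
    return (counts > 0).astype(int)

def population_rate(counts_matrix, dt):
    """Population-averaged firing rate series q(t) = N(t)/(n Δt) from an (n_neurons, n_bins) count matrix."""
    return counts_matrix.sum(axis=0) / (counts_matrix.shape[0] * dt)
```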

2.2.2. Synchrony Index

We used a synchrony index to quantify the synchronized spiking of neurons in the circuit [34] as

SI_{ij} = \frac{\sum_t B_i(t) B_j(t)}{\sqrt{\sum_t B_i(t) \sum_t B_j(t)}},  (8)

where Bi(t) is the binary spike series binned with Δt = 3ms. If the two neurons i, j are completely synchronous, then SIij = 1, and if they are completely asynchronous, SIij = 0.
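A direct implementation sketch of Eq. (8) (our own function; the square-rooted normalization follows the coherence-measure convention of [34]):

```python
import numpy as np

def synchrony_index(b_i, b_j):
    """Pairwise synchrony index of Eq. (8) from two binary spike series (bins of 3 ms).

    Returns 1 for identical series and values near 0 for independent sparse series."""
    num = np.sum(b_i * b_j)
    den = np.sqrt(np.sum(b_i) * np.sum(b_j))
    return num / den if den > 0 else 0.0
```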

2.2.3. Gamma Power

The power spectral density of the network oscillation is calculated by Fourier transform of the mean-detrended population firing rate series qα(t) over windows of length 600 ms. We define the gamma power as the integral of the power spectral density from 28 Hz to 40 Hz [27, 28].
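As an illustration, the following sketch computes a gamma-band power from a rate series sampled at interval dt_ms; the periodogram normalization and band integration are our own choices, not taken from the paper:

```python
import numpy as np

def gamma_power(rate_series, dt_ms, f_lo=28.0, f_hi=40.0):
    """Integrate the power spectral density of a mean-detrended rate series
    between f_lo and f_hi (Hz); dt_ms is the sampling interval in ms."""
    x = np.asarray(rate_series, dtype=float)
    x = x - x.mean()                                    # mean-detrend
    fs = 1000.0 / dt_ms                                 # sampling rate in Hz
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)   # simple periodogram
    band = (freqs >= f_lo) & (freqs <= f_hi)
    df = freqs[1] - freqs[0]                            # frequency resolution
    return float(np.sum(psd[band]) * df)
```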

In circuits with plasticity, high-firing neurons among the 200 coding neurons, defined here as neurons that spike in at least 95% of the bins of the firing rate series with bin width Δt = 50 ms, are selected for calculating frequency, firing rate, synchrony, and oscillation power. This is because we are interested in the properties related to plasticity, and synaptic potentiation/depression is concentrated at the synapses of high-firing neurons (synapses of low-firing neurons change only slightly). No significant conclusion can be drawn if all neurons are taken into account.

3. Results

3.1. Circuit Dynamics without Plasticity

We considered a randomly connected conductance-based excitatory-inhibitory neuronal circuit [34] together with triplet STDP, heterosynaptic plasticity [19], and transmitter-induced plasticity, and we studied the process of memory encoding through learning. Details of the model are presented in Materials and Methods. In this section, we investigated the circuit dynamical properties without the plasticity mechanism. The analysis in this subsection is carried out for the 20 neurons with the highest firing rates.

First, we studied the effect of different excitatory synaptic time constants (τdE) and excitatory synaptic weights gEE on the network dynamics. Before the extra learning stimulation (at t = 30 s), the circuits with different τdE show low firing rates, low synchrony, and no apparent network oscillations (Figures 2(a) and 2(b)). After applying the extra stimulation, the dynamical features of the circuits become apparent, and circuits with smaller τdE tend to support synchronous spiking (Figures 2(a) and 2(b)).

Figure 2. Dynamical properties of the circuits without plasticity. (a, b) Raster plots of the spike times of the 200 stimulated neurons (extra stimulation onset at t = 30 s) for gEE = 0.1. (a) Highly synchronous state with τdE = 6 ms. (b) Asynchronous state, τdE = 90 ms. (c–e) Circuit properties with respect to synaptic strength during the extra stimulus. Different τdE values are plotted in different colors, indicated by the color bar. (c) Network firing rate. (d) Synchrony index. (e) Gamma power. Other parameters are set as τdI = 8 ms, As = 2.5.

Second, we explored how the synaptic weight affects the dynamics in terms of synchrony, oscillation, and firing rate. Generally, a smaller coupling strength gEE or a smaller excitatory synaptic time constant τdE induces lower firing rates (Figure 2(c)). The synchrony index (Figure 2(d)) and gamma power (Figure 2(e)) (details in Materials and Methods) show a similar dependence on synaptic strength: they increase as gEE increases or τdE decreases. Interestingly, there is a sharp change in these dependences when τdE is close to τdI, which inspires us to roughly distinguish two network states as follows. When τdE is smaller than τdI, the synchrony and gamma power are relatively strong; in this case, we term the dynamics under stimulation input the synchronous dynamic state. In contrast, when τdE is larger than τdI, synchrony and gamma power are weak, and we term the dynamics under stimulation input the asynchronous dynamic state. Finally, we also explore the dynamic states with moderate synchrony when τdE is close to τdI. These definitions apply to the following studies of networks with or without plasticity.

3.2. The Effect of Spiking Dynamics on Plasticity

In neural circuits with a plasticity mechanism, the synaptic weights are shaped by the network dynamical properties, especially the degree of synchrony and the firing rates. To understand the effect of network spiking dynamics on plasticity, we first simulated the neural circuit without plasticity and with extra stimulation starting at t = 30 s. Next, artificial spike trains were generated by manipulating the spike trains of the 200 coding excitatory neurons obtained from this simulation. Then, we imposed the plasticity rule with these artificial spike trains on a virtual network with the same connection topology as the original circuit, to see how the synaptic weights would change. Note that here we did not consider how this change of synaptic weights in turn shapes the network dynamics. In this way, we separated the coevolution of dynamics and synapses and only considered the impact of the dynamical properties on the evolution of synapses under plasticity (Figure 1(a), right).

We aimed to explore how the change of (virtual) synaptic strength depends on the firing rate and synchrony index of the spiking series, using the following approaches (sketched in code below). (1) To examine the effect of different degrees of synchronization, we randomized a portion of the spike times within every 100 ms interval. This generates spike trains with different synchrony indices while almost preserving the average firing rate of the neurons (on a short time scale of 100 ms). (2) To investigate the effect of different average firing rates, we inserted empty bins of a certain length Tem into the spike trains every 100 ms. This generates spike trains with different firing rates (on a short time scale) while preserving the synchrony index. Note that this manipulation changes the time range of the spike trains. (3) We also tried the combination of the above two schemes (i.e., inserting empty bins together with randomizing some spikes), which simultaneously changes the firing rate and the synchrony index.
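The two manipulations can be sketched as follows; the function names and the convention of shifting each 100 ms window by a multiple of Tem are our own illustrative choices, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_spikes(spike_times, fraction, window=100.0):
    """Randomize a fraction of spike times uniformly within their 100 ms window,
    lowering synchrony while roughly preserving the short-time firing rate."""
    spikes = np.asarray(spike_times, dtype=float)
    move = rng.random(spikes.size) < fraction
    win_start = np.floor(spikes / window) * window
    new_times = spikes.copy()
    new_times[move] = win_start[move] + rng.random(int(move.sum())) * window
    return np.sort(new_times)

def insert_empty_bins(spike_times, t_em, window=100.0):
    """Insert an empty segment of length t_em (ms) after every 100 ms of the
    original train, lowering the short-time firing rate while keeping synchrony."""
    spikes = np.asarray(spike_times, dtype=float)
    window_index = np.floor(spikes / window)
    return spikes + window_index * t_em
```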

When applying plasticity to spike trains from highly synchronous dynamics in the presence of extra stimulation (small τdE), the synaptic weight quickly increases and stabilizes at a large value (Figure 3(e), dashed line). When spikes are randomized to decrease the synchrony, the final stabilized average synaptic weight, the gamma power, and the time required to stabilize the synaptic weight all become smaller (Figure 3(a)). Next, by inserting empty bins, we found that a lower firing rate produces a lower final synaptic weight (Figure 3(f)), lower gamma oscillation power (Figure 3(b)), and a longer stabilizing time. Moreover, if the firing rate is low enough, the effect of synaptic plasticity changes from potentiation to depression (Figure 3(f)).

Figure 3. Effect of firing rate and synchrony on virtual synaptic weight evolution under plasticity. (a, e, c, g) The effect of spike time randomization. (b, f, d, h) The effect of empty bin insertion. (a–d) The gamma power and firing rate of the network as functions of the proportion of randomized spikes/size of the inserted empty bins. (e–h) The change of synaptic weight with time. The time axis in (f, h) is adjusted to account for the inserted empty bins. (a, b, e, f) Synchronous state, τdE = 6 ms. (c, d, g, h) Asynchronous state, τdE = 90 ms. The other parameters are set as τdI = 8 ms, As = 3.5.

In the asynchronous dynamics, neurons often fire in bursts (Figure 2(b)), that is, a neuron can emit several spikes within a very short period (see the distribution of instantaneous rates in Figure S1 in the Supplementary Material, which in the asynchronous dynamics shows a peak at high firing rates of up to 200–300 Hz). In the asynchronous state, spike time randomization does not change the synchrony index (Figure 3(c)), since the correlation is already very low, but it does change the synaptic weight evolution. In contrast to the synchronous dynamics, here the synaptic strength increases when a larger portion of spikes is randomized (Figure 3(g)). This is because the high instantaneous firing rate due to neuronal bursts strengthens the heterosynaptic plasticity (the zi3 term in Eq. (5) becomes very large), which suppresses synaptic potentiation, and the bursts are destroyed by spike time randomization. Furthermore, the final stable synaptic strength increases only slightly as the firing rate is reduced by inserting empty bins (Figure 3(h)). This may be because inserting empty bins is not as effective at destroying bursts as randomization.

By combining spike time randomization and empty bin insertion, spike trains with different combinations of firing rate and synchrony index can be generated. We found that for synchronous dynamics, the final stable synaptic weight depends mainly on the firing rate and only slightly on the synchrony index (see Figure S2C in the Supplementary Material), since a too high instantaneous firing rate activates heterosynaptic plasticity while a too low firing rate favors triplet STDP depression. Thus, synaptic potentiation requires a suitable range of firing rates. Moreover, a high synchrony index and gamma power accompany synaptic potentiation (Figures 3(a) and 3(b)). For asynchronous dynamics, however, the synaptic weight can only potentiate slightly (see Figure S2A in the Supplementary Material) due to the bursting nature of the neurons' spiking.

3.3. Circuit with Plasticity

In the following, we study the interplay between structure and dynamical properties as a whole in the E-I circuit with plasticity, where the synaptic weights and dynamical patterns self-organize through coevolution. We refer to the results in Section 3.2, where the spiking dynamics are not influenced by the updated network structure, as the manipulation results.

First, we examined the synaptic weight evolution. In the case of synchronous dynamics, the coevolution dynamics induce a slightly lower final stable average synaptic weight (Figure 4(a)) compared with the manipulation results. This is because, under coevolution dynamics, neurons can be more excited after the potentiation of the synapses, producing a higher firing rate that supports stronger heterosynaptic plasticity to reduce the potentiation. However, the synaptic weight evolution under asynchronous dynamics does not show much difference (Figure 4(b)) compared with manipulation results.

Figure 4. Properties of circuits with plasticity. (a, b) Synaptic weight evolution. Blue curves are results from networks with plasticity, whereas orange curves are from the virtual networks in Figure 3. (c, d) The evolution of gamma oscillation and synaptic weight in different dynamical states. (e, f) The ability of the network to maintain working memory after recall following learning. Extra stimuli (As = 5) are applied to the neuron group with the encoded memory for 2 s (from 1 s to 3 s). The persistent firing ability of the group after removal of the extra stimuli is checked. The insets compare the firing rate of the memory-coding group and that of the other neurons during the persistent period. (a, c, e) Synchronous state, τdE = 6 ms. (b, d, f) Asynchronous state, τdE = 90 ms. The other parameters are set as τdI = 8 ms, As = 3.5.

Below, we investigated how the gamma power evolves in the self-organized plastic circuit. We calculated the gamma power (see Materials and Methods) from the population spike train of the high-firing neurons for every 600 ms period of the simulation. The initial gamma power is very weak (around 10−3 to 10−2) with weak background inputs, and it undergoes a jump (Figures 4(c) and 4(d)) when the extra stimulus starts. In circuits with synchronous dynamics, the synaptic weight potentiates and the gamma oscillation increases significantly together (Figure 4(c)). In circuits with asynchronous dynamics, the increases of both gamma oscillation and synaptic weight are tiny (Figure 4(d)).

We further tested the ability of the network to support working memory after learning under the different dynamical modes. Under synchronous dynamics, the significant potentiation of synaptic weights facilitates the maintenance of working memory after initial recall (Figure 4(e)), whereas for asynchronous dynamics, the network after learning is not potentiated enough for robust maintenance of working memory (Figure 4(f)).

Furthermore, we examined how fast the synaptic weights can be stabilized, which reflects the learning speed. We explored the stabilizing time (defined as the time required to reach half of the maximum synaptic weight) in circuits with plasticity under different initial dynamical states (at the time right after extra stimulation onset), such as different firing rates and synchrony, using different τdE and As. Under synchronous dynamics, the stabilizing time in circuits with plasticity is inversely proportional to the initial firing rate (Figure 5(a)), as well as to the initial synchrony index (Figure 5(b)). There is a linear relationship between the initial firing rate and the synchrony index (Figure 5(c)). Thus, the increase of firing rate is important for stabilizing the synapses quickly. As the stabilized synaptic weights differ between the synchronous and asynchronous states, we further studied the speed of synaptic potentiation with respect to firing rate and synchrony, recording the synaptic weight change within different synaptic weight intervals (shown above Figures 5(d)–5(i)). A higher initial firing rate produces a larger initial synaptic weight change (Figure 5(d)). Potentiated synapses generate higher firing rates, which in turn trigger stronger heterosynaptic plasticity and thus more depression (Figures 5(e) and 5(f)). Consequently, the stabilized synaptic weight is smaller (Figure 4(a)) but also takes less time to stabilize. In contrast, the asynchronous state stabilizes quickly (<600 ms), and the synapses are only potentiated slightly (Figure 4(b)). The high initial firing rate due to bursting in the asynchronous state (Figure 5(g)) causes strong heterosynaptic plasticity that prevents the synapses from further potentiation (Figures 5(h) and 5(i)). However, the speed of synaptic potentiation, that is, the amount of potentiation per unit time, is higher under synchronous dynamics (Figures 5(d) and 5(g)).

Figure 5. The process of synaptic stabilization in circuits with plasticity. (a) The relation between stabilizing time and firing rate. (b) The relation between stabilizing time and synchrony index. (c) The relation between firing rate and synchrony index. In (a–c), black/red dots are results under synchronous/asynchronous dynamics. (d–i) The relationship between the change of synaptic weight and the instantaneous firing rate and synchrony index during the simulation. The studied ranges of synaptic weight are shown as titles above the subfigures. (d–f) Under synchronous dynamics with τdE = 3, 4, 5, ⋯, 9 ms. (g–i) Under asynchronous dynamics with τdE = 20, 30, 40, ⋯, 90 ms. Other parameters are As = 2, 2.5, 3, 3.5 and τdI = 8 ms.

In circuits without plasticity, an increase of synaptic weight results in an increased firing rate (Figure 2(c)). This relation does not hold in the presence of plasticity, as a too high firing rate induces synaptic weight depression through heterosynaptic plasticity. The initial firing rate and gamma power of the network (at the beginning of the extra stimulus, Figures 6(a) and 6(b)) are positively correlated with those after learning (Figures 6(e) and 6(f)). In general, we found that a high synchrony index and gamma power facilitate the increase of synaptic weight (Figures 6(a)–6(d)), whereas the effect of firing rate is relatively less pronounced. This suggests that synchrony and gamma oscillation are beneficial for synaptic potentiation in plastic E-I networks.

Figure 6. The relationship between synaptic potentiation and network dynamics in circuits with plasticity. (a, b) The relationship between the final stable synaptic weight (represented by color) and the initial firing rate and gamma power. (c, d) The relationship between the final stable synaptic weight and the final firing rate and gamma power. In general, there is a positive correlation between the gamma power/synchrony index and the stable synaptic weight. Stars represent results in asynchronous states, whereas dots represent results in synchronous and moderately synchronous states. (e, f) The relationship between the initial and final firing rate and synchrony index, respectively. In (a–f), the parameters used are τdE = 3, 4, 5, ⋯, 10, 20, 30, ⋯, 90 ms, τdI = 8 ms, As = 2, 2.5, 3, 3.5.

The above studies suggest that there should be essential differences in the network structures formed after learning under different dynamical states. To investigate the essential features of these structural changes, we performed various shufflings of the connectivity matrix of the memory-encoding subgroup of excitatory neurons, taken from the plastic circuits after they had evolved into the stable state. Then, we simulated the whole network dynamics without plasticity after shuffling this subconnectivity matrix, with the stimulation input applied at t = 30 s. In particular, we explored whether the structure can still support synchronous dynamics after shuffling.

The major finding is that the network preserves synchronous dynamics when the incoming connections of the memory-coding neurons are shuffled (total input is preserved) (Figure 7(c)). On the contrary, the network cannot preserve synchronous dynamics when the outgoing connections of the neurons are shuffled (total input is changed) (Figure 7(d)). If the rows of the submatrix are further randomized (rows are moved as a whole) after shuffling the incoming connections, the synchronous dynamics are destroyed (Figure 7(e)). However, synchronous dynamics can be preserved if the row randomization is performed on the whole connectivity matrix (Figure 7(f)), that is, the inhibitory inputs are preserved in this randomization. Taken together, these results imply that the ability of the network to support synchronization is related to the sum of the incoming synaptic weights.
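The shuffling schemes can be sketched as simple matrix operations; a minimal illustration assuming the weight matrix convention w[i, j] (from presynaptic j to postsynaptic i), with function names of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffle_incoming(sub_w):
    """Scheme (c): shuffle the elements within each row of the E-E submatrix,
    preserving each neuron's total incoming excitatory weight."""
    out = sub_w.copy()
    for i in range(out.shape[0]):
        rng.shuffle(out[i, :])
    return out

def shuffle_outgoing(sub_w):
    """Scheme (d): shuffle the elements within each column, preserving each
    neuron's total outgoing weight but changing its total input."""
    out = sub_w.copy()
    for j in range(out.shape[1]):
        out[:, j] = rng.permutation(out[:, j])
    return out

def permute_rows(w):
    """Row permutation used in schemes (e) and (f): rows are moved as a whole."""
    return w[rng.permutation(w.shape[0]), :]
```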

Figure 7. Special circuit structures formed by plasticity. (a, b) The relation between the sums of incoming excitatory and inhibitory synaptic weights of different neurons in the plastic circuit after learning, sorted in ascending order of the inhibitory weight sum. Parameters are set as τdI = 8 ms, As = 3.5. (a) Synchronous, τdE = 6 ms. (b) Asynchronous, τdE = 90 ms. (c–f) Illustration of the shuffling schemes. Each panel represents a connectivity matrix. In (c–e), we shuffled the elements of the connectivity submatrix of the chosen (memory-encoding) excitatory neurons. (c) The input elements of each neuron (each row) are shuffled (total input is preserved). (d) The output elements of each neuron (each column) are shuffled (total input is changed). (e) The elements of each row are first shuffled, and then the rows are randomly permuted (total excitatory input is preserved, but total inhibitory input is changed). (f) The elements of each row of the excitatory submatrix are first shuffled, and then the rows of the entire matrix are randomly permuted (total excitatory and inhibitory inputs are preserved). If the shuffled matrix results in synchronous/asynchronous dynamics, it is marked with a red/black box.

Interestingly, we found a high correlation between the sum of excitatory and the sum of inhibitory input synaptic weights in the learning-stabilized matrix under synchronous dynamics (Figure 7(a)), suggesting that neurons receiving strong inhibitory synapses (which are not plastic) also receive stronger excitatory synapses (which are plastic). This special network structure is not present under asynchronous dynamics (Figure 7(b)). To further confirm that networks with such structural features indeed favor synchronous spiking dynamics, we generated a random circuit with a similar distribution of synaptic weight sums and found that it indeed supports synchronous dynamics (see Figure S3C in the Supplementary Material).

Synaptic potentiation leads to burst firing when excitation strongly exceeds inhibition (see Figure S3A in the Supplementary Material). When a neuron bursts, its synapses are suppressed by heterosynaptic plasticity. Therefore, neurons stay synchronized with only a few bursting events. Since neurons receiving stronger inhibitory currents need more excitation for burst firing to occur, their synapses can potentiate more before heterosynaptic plasticity sets in. Hence, the correlation between excitatory and inhibitory inputs develops. We note that this effect is related not to the number of incoming excitatory connections but to the sum of the incoming excitatory weights.

3.4. The Moderate Synchronous Dynamics

From the comparison between the synchronous and asynchronous states, we have seen that the effect of learning depends significantly on the background dynamics. This raises the question of whether there are special properties at the boundary between these two dynamical regimes, that is, in the moderately synchronous state. We found that under moderately synchronous dynamics, the transient network activity can switch between asynchronous bursting and weak synchronization. Such a switching state should therefore share features of both the synchronous and asynchronous states. In static networks without plasticity, this switching dynamic is a long-lasting property (data not shown). In networks with plasticity, this switching state is only transient and the network finally evolves into synchronous dynamics (Figure 8(c)).

Figure 8. The dynamical properties of learning in the moderately synchronous state. (a, b) Effect of firing rate and synchrony on synaptic weight evolution under plasticity with spike manipulation. (a) The effect of spike time randomization. (b) The effect of empty bin insertion. (c) Raster plot. Neurons are sorted in increasing order of inhibitory input connections. The red curve is the mean excitatory synaptic strength of the high-firing neurons, which increases during synchronous windows and decreases during asynchronous windows. The green curve is the mean excitatory synaptic strength of the low-firing neurons. (d) The synaptic weight evolution. (e) Sums of the incoming excitatory and inhibitory synaptic weights of neurons in the plastic circuit after learning, sorted in ascending order of the inhibitory weight sum. (f) The evolution of gamma oscillation and synaptic weight in the moderately synchronous state.

We first tested the effect of manipulated spike trains from the moderately synchronous circuit on shaping a virtual network structure, using methods similar to those in Figure 3. For spike time randomization, the stable synaptic weights are almost unchanged with the synchrony index (Figure 8(a)). When inserting empty bins to reduce the firing rate, the stable synaptic weight is first unchanged and then starts to decrease, and the synapses even become depressed (Figure 8(b)) when the firing rate falls below a certain level. Next, we studied the network with plasticity under moderately synchronous dynamics. Interestingly, the transient synaptic weight evolution in the plastic network differs strongly from the manipulation results (Figure 8(d)). The synaptic weight increases during temporarily synchronous periods and recovers during asynchronous periods, but this switching period is transient. The circuit finally evolves into a fully synchronous state (e.g., from t = 80 s in Figure 8(c)), after which the synaptic weight increases until reaching its maximum (Figure 8(d)). The gamma oscillation increases gradually during the period of switching between synchronous and asynchronous spiking (Figure 8(f)), where the gamma oscillation power and synaptic weight show large fluctuations. Once the circuit reaches the synchronous state, both the synaptic weights and the gamma oscillation power quickly increase to their stable values. Neurons in the moderately synchronous circuit behave differently depending on the amount of inhibitory input they receive, as shown in Figure 8(c) (neurons sorted according to their total inhibitory synaptic weights). The evolution of the mean excitatory synaptic weight is also plotted in Figure 8(c). Neurons receiving too many inhibitory inputs do not have enough instantaneous excitatory current to overcome the inhibitory suppression and fire sparsely. In contrast, neurons receiving relatively few inhibitory inputs can overcome the inhibitory suppression and have the potential to behave like the synchronous state, with a mechanism similar to that described above for the synchronous state.

A special relationship between the excitatory and inhibitory currents causes the moderately synchronous state to switch between synchrony and asynchrony: its excitatory current is both high in amplitude (though not as high as in the synchronous state) and long lasting (though not as long as in the asynchronous state), since τdE lies between the values of the synchronous and asynchronous states. Once neurons are synchronized, inhibitory neurons are activated by the synchronized excitatory current, inducing strong inhibition that terminates the synchrony event. The circuit therefore becomes asynchronous again, and neurons receiving few inhibitory inputs may start to burst; this process underlies the temporal switching between synchronous and asynchronous states. When plasticity is applied, synapses are potentiated during synchrony and depressed during asynchrony (Figure 8(c)). However, when the synapses are potentiated to a large enough value, the excitatory current becomes so strong that the inhibitory current can no longer terminate the synchrony, and the circuit starts to develop strong synchrony.

3.5. Other Dynamic Properties during Learning

We simulated circuits with plasticity under different background dynamics to study the transient evolution of the dynamical properties. At each time, we checked the corresponding dynamical properties of a static circuit sharing the same average synaptic weight but without plasticity, to examine their differences. Here, the synaptic weight, average firing rate, and synchrony index among high-firing neurons in every 600 ms period during the coevolution process are considered and compared with the firing rate and synchrony index of the same number of highest-firing neurons in a static circuit with different gEE in the range [0.1, 1.2]. We found that the coevolution dynamics of the synchronous and asynchronous plastic circuits are very close to those of static circuits at different coupling strengths gEE (Figures 9(a) and 9(c)), suggesting that although plasticity changes the network architecture, it does not significantly alter the plastic network dynamics in these two states. However, the moderately synchronous circuit differs strongly from the corresponding static circuit (Figure 9(b)). The static circuit switches between asynchronous and synchronous states at all synaptic strengths tested (0.1–1.2), and its synchrony index is relatively small. In contrast, the self-organized plastic circuit departs from the dynamics of the static circuit and finally becomes fully synchronous (Figure 8(c)). This suggests that plasticity has significantly changed the dynamics.

Figure 9. Other dynamical properties. (a–c) Blue curves are the evolution dynamics in circuits with plasticity starting from strength 0.1. The red diamond represents the start of the simulation, and the green diamond represents the start of the extra stimulus. Orange curves are the results of static circuits with different synaptic strengths. (a) Synchronous dynamics with τdE = 6 ms. (b) Moderately synchronous dynamics with τdE = 10 ms. (c) Asynchronous dynamics with τdE = 90 ms. The other parameters: τdI = 8 ms, As = 3.5. (d, e) Transient excitatory-to-inhibitory current ratio in the early stage of the stimulus (d) (<5 s after applying the extra stimulus input at t = 30 s) and in the late stage of the stimulus (e) (after the synaptic weights have stabilized). (f) Average E/I ratio before the extra stimulus and after its removal. (g) Firing rate before and after the extra stimulus. (h) L2 norm of the difference between the connectivity matrices (reshaped to 1D and normalized) at the removal of the extra stimulus and 10 s later under normal background input.

Biological neural circuits are reported to work in an excitation-inhibition (E-I) balanced condition [50, 51]. Our model maintains an overall E-I balanced current input (the time-averaged E/I ratio in our model is close to 1), and this property is maintained after applying the strong extra stimulus. Asynchronous dynamics best supports E-I balanced input (Figures 9(d) and 9(e)). Considering the transient balance, for moderately synchronous and synchronous dynamics there is a strong temporary imbalance corresponding to the gamma oscillations during learning (Figures 9(d) and 9(e)). This suggests that the temporary imbalance is beneficial to effective learning.

Next, we examined whether the network is stable and whether the balance can be restored when the extra stimulus stops after learning. To check whether the network connectivity still changes significantly under plasticity with normal background input, we measured the L2 norm of the difference between the connectivity matrices (reshaped to 1D and then normalized, with values in [0, 2]) among the chosen memory-encoding neurons right after the removal of the extra stimulus input and 10 s later. We found that the connectivity changes only slightly (Figure 9(h)). The firing rate (Figure 9(g)) and average E/I ratio (Figure 9(f)) also return to levels similar to those of the circuit before learning with the stimulation.
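The stability measure used here reduces to a simple vector distance; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def connectivity_change(w_before, w_after):
    """L2 norm between two connectivity matrices, each reshaped to 1D and
    normalized to unit length, so the distance lies in [0, 2]."""
    a = w_before.ravel().astype(float)
    b = w_after.ravel().astype(float)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.linalg.norm(a - b))
```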

4. Discussion

In this work, we studied the dynamics of a plastic neural circuit during the learning process with a combination of triplet STDP, heterosynaptic plasticity, and transmitter-induced plasticity, with particular focus on the role of gamma oscillation. To tackle the challenging coevolution of network structure and dynamics, we first separately explored the effect of structure on dynamics and vice versa. We then unified these understandings to elucidate the principles of dynamical-state-dependent learning in E-I neural circuits with plasticity.

4.1. Summary of the Finding

We first studied the static circuit without plasticity and found that networks with small τdE show higher synchrony. Furthermore, synchrony in network spiking and gamma oscillation power increase together with coupling strength gEE throughout these cases.

Through studying the effect of spiking dynamics on shaping a virtual network structure through plasticity, we found that the firing rate plays a crucial role. The firing rate needs to be in a suitable range to induce potentiation: synaptic potentiation is restricted by heterosynaptic plasticity when the firing rate is too high and by the triplet plasticity rule when it is too low. The synchrony index and gamma power show greater values when potentiation happens.

These understandings from the separate studies shed light on the properties of the coevolution of dynamics and synaptic weights in the plastic circuit. Increasing synchrony (higher gamma power and synchrony index) is always beneficial to synaptic potentiation, but there is no such relation for the firing rate. This suggests that gamma oscillation may be especially important for synaptic potentiation during learning in biological E-I circuits. Furthermore, the circuit structure after learning depends on the original basic dynamical state (different degrees of synchrony). The effective learning, in which the synaptic weights are potentiated, is accompanied by a temporary deviation from E-I balance toward slight dominance of excitation. The E-I balance is restored after learning, when the extra stimulus input is removed.

4.2. Gamma Oscillation

The origin of gamma oscillation is still under debate in the literature. It has been suggested that gamma oscillation arises from inhibitory loops [11]. Another study found that, by controlling the long-range synaptic weight, synchronization between two columns of E-I neural circuits [36] can generate gamma oscillation in the cross-correlogram between the two circuits' population firing rates. It is known that, mediated by synaptic kinetics, E-I balanced neural circuits can support such fast oscillations emerging from low firing rates [26, 34], which is how gamma oscillation is produced in our study.

Furthermore, it was discovered that long-range axonal delays can modulate oscillations between the gamma and beta bands [52]. In the moderately synchronous state, we observed switching dynamics between asynchronous and synchronous spiking. If the switching frequency is in the theta band, this is highly similar to the spindle [6, 53, 54] observed in the hippocampus during sleep, which has been assumed to relate to memory consolidation. An experimental study also found that U1 snRNA-overexpressing mice, which have weaker gamma oscillations than normal mice due to a reduction in inhibition-related proteins, show deficits in learning [55]. It may be possible to further explore phenomena related to sharp-wave ripples and their functional benefits in our model.

4.3. The Role of Plasticity in Learning

It has been shown that heterosynaptic plasticity can prevent explosive synaptic weight potentiation [24, 25]. However, it is unclear what kind of circuit structure can be formed by heterosynaptic plasticity. Our model uses a combination of three plasticity rules. The triplet STDP is mainly responsible for synaptic potentiation, bounded by the heterosynaptic plasticity rule, while the transmitter-induced plasticity does not play a significant role since our simulation time is not long enough. Thus, the final stable network structure is mainly shaped by heterosynaptic plasticity. We found that heterosynaptic plasticity shapes the circuit into a structure where the sum of a neuron's excitatory input weights is correlated/uncorrelated with the sum of its inhibitory input weights under synchronous/asynchronous background dynamics, respectively.

4.4. Learning with Gamma Oscillation in Biological Neural Circuits

Our work, using a biologically plausible E-I circuit, confirmed several previous studies and provided new understanding. Cell assembly formation through spike-timing-dependent plasticity has been studied previously [19, 56–59]. It has been shown that during learning, plasticity shapes the circuit synaptic weights and spiking correlations [45, 60–62]. Research on learning in biological neural networks has mainly focused on the recall reliability of working memories and the use of networks for classification tasks, but has seldom considered the effect of gamma oscillation in the learning process, although gamma oscillations have been widely observed to accompany learning in experiments [6, 7, 11, 27, 28]. Here, we found that increases of gamma power and synchrony always facilitate potentiation in plastic circuits, providing an understanding behind these experimental observations. The synchrony during gamma oscillations, within a suitable firing rate range, is beneficial to synaptic potentiation, which in turn forms a network structure that better supports gamma oscillations. Thus, gamma oscillation is important for forming cell assemblies, leading to efficient learning.

We have considered the effect of varying the excitatory synaptic time constant (τdE) on the dynamical modes of the circuits. The AMPA receptor has a small synaptic time constant of ~5 ms [38], and the NMDA receptor has a large synaptic time constant of ~100 ms [39]. Thus, the value of τdE used in the network is biologically determined by the ratio between these two receptors. Experiments show that AMPA and NMDA receptors exist extensively in synapses and that their ratio varies across brain regions [63–65]. For example, the number of NMDA receptors is larger in the prefrontal cortex [63] than in the visual cortex. Thus, the fact that neural circuits in different brain systems favor different τdE sheds light on the differential functional roles of different brain systems in learning.

4.5. Outlook

In this study, we did not consider inhibitory plasticity, which is mainly thought of as a kind of homeostatic plasticity operating on a longer timescale [19, 35]. It has been shown that this kind of plasticity helps to maintain tight E-I balance automatically after learning over a wide range of initial parameters. However, how inhibitory plasticity changes the circuit structure is not yet clear. This deserves further study in the future.

So far, we have considered the encoding and retrieval of a single memory cluster. We have preliminarily tested encoding multiple memory clusters successively, whose connections are nonoverlapping in the network, and found that the learning processes of different clusters proceed almost independently of each other (data not shown). This is consistent with our result that the learned circuit is structurally stable under plasticity with weak background input, but the issue needs further exploration in future work. An interesting extension is to consider overlapping memories, which may interfere with each other and generate more complicated interactions between the gamma oscillations within each cluster and their learning processes.

How to analytically predict the stable synaptic weights after learning through STDP is a challenging issue. Existing theories [66, 67] have tended to use mean-field approximations to analyze the mean firing rate, the mean synaptic trace, and their covariance. However, the heterosynaptic plasticity rule used in our model involves a cubic power of the synaptic trace and thus requires higher-order covariances, which induces large errors in the corresponding approximation. An effective theory for analyzing the effect of heterosynaptic plasticity is still lacking and deserves further study.

Acknowledgments

This work was supported by the Hong Kong Research Grant Council (GRF12200620), the HKBU Research Committee and Interdisciplinary Research Clusters Matching Scheme 2018/19 (RC-IRCMs/18-19/SCI01), and the National Natural Science Foundation of China (Grants No. 11975194). This research was conducted using the resources of the High-Performance Computing Cluster Centre at HKBU, which receives funding from the RGC and the HKBU.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no competing interests.

Supplementary Materials


The following supplementary materials are available for this paper. Figure S1: instantaneous firing rate distribution of neurons in different states. Figure S2: final stable synaptic strengths under manipulations combining spike time randomization and empty bin insertion to modify spike synchrony and rate. Figure S3: results of an artificially generated network structure.

References

1. Buzsáki G. Rhythms of the Brain. New York: Oxford University Press; 2009.
2. McDermott B., Porter E., Hughes D., et al. Gamma band neural stimulation in humans and the promise of a new modality to prevent and treat Alzheimer's disease. Journal of Alzheimer's Disease. 2018;65(2):363–392. doi: 10.3233/JAD-180391.
3. Masquelier T., Hugues E., Deco G., Thorpe S. J. Oscillations, phase-of-firing coding, and spike timing-dependent plasticity: an efficient learning scheme. Journal of Neuroscience. 2009;29(43):13484–13493. doi: 10.1523/JNEUROSCI.2207-09.2009.
4. Erdogdu E., Kurt E., Duru A. D., Uslu A., Başar-Eroğlu C., Demiralp T. Measurement of cognitive dynamics during video watching through event-related potentials (ERPs) and oscillations (EROs). Cognitive Neurodynamics. 2019;13(6):503–512. doi: 10.1007/s11571-019-09544-x.
5. Yamada M., Sakurai Y. An observational learning task using Barnes maze in rats. Cognitive Neurodynamics. 2018;12(5):519–523. doi: 10.1007/s11571-018-9493-1.
6. Lisman J. E., Jensen O. The theta-gamma neural code. Neuron. 2013;77(6):1002–1016. doi: 10.1016/j.neuron.2013.03.007.
7. Osipova D., Takashima A., Oostenveld R., Fernández G., Maris E., Jensen O. Theta and gamma oscillations predict encoding and retrieval of declarative memory. Journal of Neuroscience. 2006;26(28):7523–7531. doi: 10.1523/JNEUROSCI.1948-06.2006.
8. Rao A. R. An oscillatory neural network model that demonstrates the benefits of multisensory learning. Cognitive Neurodynamics. 2018;12(5):481–499. doi: 10.1007/s11571-018-9489-x.
9. Frankland P. W., Bontempi B. The organization of recent and remote memories. Nature Reviews Neuroscience. 2005;6(2):119–130. doi: 10.1038/nrn1607.
10. Diekelmann S., Born J. The memory function of sleep. Nature Reviews Neuroscience. 2010;11(2):114–126. doi: 10.1038/nrn2762.
11. Buzsáki G., Wang X. J. Mechanisms of gamma oscillations. Annual Review of Neuroscience. 2012;35(1):203–225. doi: 10.1146/annurev-neuro-062111-150444.
12. Bosman C. A., Lansink C. S., Pennartz C. M. A. Functions of gamma-band synchronization in cognition: from single circuits to functional diversity across cortical and subcortical systems. European Journal of Neuroscience. 2014;39(11):1982–1999. doi: 10.1111/ejn.12606.
13. Oliva A., Fernández-Ruiz A., de Oliveira E. F., Buzsáki G. Origin of gamma frequency power during hippocampal sharp-wave ripples. Cell Reports. 2018;25(7):1693–1700.e4. doi: 10.1016/j.celrep.2018.10.066.
14. Wang Z., Fan H., Han F. A new regime for highly robust gamma oscillation with co-exist of accurate and weak synchronization in excitatory–inhibitory networks. Cognitive Neurodynamics. 2014;8(4):335–344. doi: 10.1007/s11571-014-9290-4.
15. Gu X., Han F., Wang Z. Dependency analysis of frequency and strength of gamma oscillations on input difference between excitatory and inhibitory neurons. Cognitive Neurodynamics. 2020. doi: 10.1007/s11571-020-09622-5.
16. Gilson M., Burkitt A., van Hemmen J. L. STDP in recurrent neuronal networks. Frontiers in Computational Neuroscience. 2010;4:1–15. doi: 10.3389/fncom.2010.00023.
17. Bi G. Q., Poo M. M. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of Neuroscience. 1998;18(24):10464–10472. doi: 10.1523/JNEUROSCI.18-24-10464.1998.
18. Caporale N., Dan Y. Spike timing-dependent plasticity: a Hebbian learning rule. Annual Review of Neuroscience. 2008;31(1):25–46. doi: 10.1146/annurev.neuro.31.060407.125639.
19. Zenke F., Agnes E. J., Gerstner W. Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nature Communications. 2015;6:1–13. doi: 10.1038/ncomms7922.
20. Câteau H., Kitano K., Fukai T. Interplay between a phase response curve and spike-timing-dependent plasticity leading to wireless clustering. Physical Review E: Statistical, Nonlinear, and Soft Matter Physics. 2008;77(5):1–6. doi: 10.1103/PhysRevE.77.051909.
21. Pfister J.-P. Triplets of spikes in a model of spike timing-dependent plasticity. Journal of Neuroscience. 2006;26(38):9673–9682. doi: 10.1523/JNEUROSCI.1425-06.2006.
22. Morrison A., Diesmann M., Gerstner W. Phenomenological models of synaptic plasticity based on spike timing. Biological Cybernetics. 2008;98(6):459–478. doi: 10.1007/s00422-008-0233-1.
23. Song S., Miller K. D., Abbott L. F. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience. 2000;3(9):919–926. doi: 10.1038/78829.
24. Chen J.-Y., Lonjers P., Lee C., Chistiakova M., Volgushev M., Bazhenov M. Heterosynaptic plasticity prevents runaway synaptic dynamics. Journal of Neuroscience. 2013;33(40):15915–15929. doi: 10.1523/JNEUROSCI.5088-12.2013.
25. Chistiakova M., Bannon N. M., Bazhenov M., Volgushev M. Heterosynaptic plasticity: multiple mechanisms and multiple roles. The Neuroscientist. 2014;20(5):483–498. doi: 10.1177/1073858414529829.
26. Brunel N. What determines the frequency of fast network oscillations with irregular neural discharges? I. Synaptic dynamics and excitation-inhibition balance. Journal of Neurophysiology. 2003;90(1):415–430. doi: 10.1152/jn.01095.2002.
27. Ognjanovski N., Schaeffer S., Wu J., et al. Parvalbumin-expressing interneurons coordinate hippocampal network dynamics required for memory consolidation. Nature Communications. 2017;8:1–13. doi: 10.1038/ncomms15039.
28. Ognjanovski N., Maruyama D., Lashner N., Zochowski M., Aton S. J. CA1 hippocampal network activity changes during sleep-dependent memory consolidation. Frontiers in Systems Neuroscience. 2014;8(1):1–11. doi: 10.3389/fnsys.2014.00061.
29. Luz Y., Shamir M. Oscillations via spike-timing dependent plasticity in a feed-forward model. PLoS Computational Biology. 2016;12(4):1–20. doi: 10.1371/journal.pcbi.1004878.
30. Lee S., Sen K., Kopell N. Cortical gamma rhythms modulate NMDAR-mediated spike timing dependent plasticity in a biophysical model. PLoS Computational Biology. 2009;5(12). doi: 10.1371/journal.pcbi.1000602.
31. Muller L., Brette R., Gutkin B. Spike-timing dependent plasticity and feed-forward input oscillations produce precise and invariant spike phase-locking. Frontiers in Computational Neuroscience. 2011;5:8. doi: 10.3389/fncom.2011.00045.
32. Sherf N., Shamir M. Multiplexing rhythmic information by spike timing dependent plasticity. PLoS Computational Biology. 2020;16(6):1–25. doi: 10.1371/journal.pcbi.1008000.
33. Amit D., Brunel N. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex. 1997;7(3):237–252. doi: 10.1093/cercor/7.3.237.
34. Yang D. P., Zhou H. J., Zhou C. Co-emergence of multi-scale cortical activities of irregular firing, oscillations and avalanches achieves cost-efficient information capacity. PLoS Computational Biology. 2017;13(2):1–28. doi: 10.1371/journal.pcbi.1005384.
35. Vogels T. P., Sprekeler H., Zenke F., Clopath C., Gerstner W. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science. 2011;334(6062):1569–1573. doi: 10.1126/science.1211095.
36. Heinzle J., König P., Salazar R. F. Modulation of synchrony without changes in firing rates. Cognitive Neurodynamics. 2007;1(3):225–235. doi: 10.1007/s11571-007-9017-x.
37. Liang J., Zhou T., Zhou C. Hopf bifurcation in mean field explains critical avalanches in excitation-inhibition balanced neuronal networks: a mechanism for multiscale variability. Frontiers in Systems Neuroscience. 2020;14. doi: 10.3389/fnsys.2020.580011.
38. Zhou F.-M., Hablitz J. J. AMPA receptor-mediated EPSCs in rat neocortical layer II/III interneurons have rapid kinetics. Brain Research. 1998;780(1):166–169. doi: 10.1016/S0006-8993(97)01311-5.
39. Wang X.-J. Synaptic basis of cortical persistent activity: the importance of NMDA receptors to working memory. The Journal of Neuroscience. 1999;19(21):9587–9603. doi: 10.1523/jneurosci.19-21-09587.1999.
40. Angulo M. C., Rossier J., Audinat E. Postsynaptic glutamate receptors and integrative properties of fast-spiking interneurons in the rat neocortex. Journal of Neurophysiology. 1999;82(3):1295–1302. doi: 10.1152/jn.1999.82.3.1295.
41. Mongillo G., Barak O., Tsodyks M. Synaptic theory of working memory. Science. 2008;319(5869):1543–1546. doi: 10.1126/science.1150769.
42. Zucker R. S., Regehr W. G. Short-term synaptic plasticity. Annual Review of Physiology. 2002;64:355–405. doi: 10.1146/annurev.physiol.64.092501.114547.
43. Hennig M. H. Theoretical models of synaptic short term plasticity. Frontiers in Computational Neuroscience. 2013;7. doi: 10.3389/fncom.2013.00154.
44. Wu S., Zhou K., Ai Y., Zhou G., Yao D., Guo D. Induction and propagation of transient synchronous activity in neural networks endowed with short-term plasticity. Cognitive Neurodynamics. 2020. doi: 10.1007/s11571-020-09578-6.
45. Tetzlaff C. Synaptic scaling in combination with many generic plasticity mechanisms stabilizes circuit connectivity. Frontiers in Computational Neuroscience. 2011;5. doi: 10.3389/fncom.2011.00047.
46. Yger P., Gilson M. Addendum: models of metaplasticity: a review of concepts. Frontiers in Computational Neuroscience. 2016;10. doi: 10.3389/fncom.2016.00004.
47. Kwon H. B., Sabatini B. L. Glutamate induces de novo growth of functional spines in developing cortex. Nature. 2011;474(7349):100–104. doi: 10.1038/nature09986.
48. Lev-Ram V., Wong S. T., Storm D. R., Tsien R. Y. A new form of cerebellar long-term potentiation is postsynaptic and depends on nitric oxide but not cAMP. Proceedings of the National Academy of Sciences of the United States of America. 2002;99(12):8389–8393. doi: 10.1073/pnas.122206399.
49. Wang X.-J. Synaptic reverberation underlying mnemonic persistent activity. Trends in Neurosciences. 2001;24(8):455–463. doi: 10.1016/S0166-2236(00)01868-3.
50. Denève S., Machens C. K. Efficient codes and balanced networks. Nature Neuroscience. 2016;19(3):375–382. doi: 10.1038/nn.4243.
51. Goaillard J. M., Taylor A. L., Pulver S. R., Marder E. Slow and persistent postinhibitory rebound acts as an intrinsic short-term memory mechanism. Journal of Neuroscience. 2010;30(13):4687–4692. doi: 10.1523/jneurosci.2998-09.2010.
52. Bibbig A., Traub R. D., Whittington M. A. Long-range synchronization of γ and β oscillations and the plasticity of excitatory and inhibitory synapses: a network model. Journal of Neurophysiology. 2002;88(4):1634–1654. doi: 10.1152/jn.2002.88.4.1634.
53. Norman Y., Yeagle E. M., Khuvis S., Harel M., Mehta A. D., Malach R. Hippocampal sharp-wave ripples linked to visual episodic recollection in humans. Science. 2019;365(6454):eaax1030. doi: 10.1126/science.aax1030.
54. Nishida H., Takahashi M., Lauwereyns J. Within-session dynamics of theta–gamma coupling and high-frequency oscillations during spatial alternation in rat hippocampal area CA1. Cognitive Neurodynamics. 2014;8(5):363–372. doi: 10.1007/s11571-014-9289-x.
55. Kumari E., Shang Y., Cheng Z., Zhang T. U1 snRNA over-expression affects neural oscillations and short-term memory deficits in mice. Cognitive Neurodynamics. 2019;13(4):313–323. doi: 10.1007/s11571-019-09528-x.
56. Srinivasa N., Cho Y. Unsupervised discrimination of patterns in spiking neural networks with excitatory and inhibitory synaptic plasticity. Frontiers in Computational Neuroscience. 2014;8:1–23. doi: 10.3389/fncom.2014.00159.
57. Stepp N., Plenz D., Srinivasa N. Synaptic plasticity enables adaptive self-tuning critical networks. PLoS Computational Biology. 2015;11(1):e1004043. doi: 10.1371/journal.pcbi.1004043.
58. Litwin-Kumar A., Doiron B. Formation and maintenance of neuronal assemblies through synaptic plasticity. Nature Communications. 2014;5(1). doi: 10.1038/ncomms6319.
59. Panda P., Roy K. Learning to generate sequences with combination of Hebbian and non-Hebbian plasticity in recurrent spiking neural networks. Frontiers in Neuroscience. 2017;11. doi: 10.3389/fnins.2017.00693.
60. Izhikevich E. M., Desai N. S. Relating STDP to BCM. Neural Computation. 2003;15(7):1511–1523. doi: 10.1162/089976603321891783.
61. Bi Z., Zhou C. Spike pattern structure influences synaptic efficacy variability under STDP and synaptic homeostasis. I: spike generating models on converging motifs. Frontiers in Computational Neuroscience. 2016;10. doi: 10.3389/fncom.2016.00014.
62. Bi Z., Zhou C. Spike pattern structure influences synaptic efficacy variability under STDP and synaptic homeostasis. II: spike shuffling methods on LIF networks. Frontiers in Computational Neuroscience. 2016;10. doi: 10.3389/fncom.2016.00083.
63. Myme C. I. O., Sugino K., Turrigiano G. G., Nelson S. B. The NMDA-to-AMPA ratio at synapses onto layer 2/3 pyramidal neurons is conserved across prefrontal and visual cortices. Journal of Neurophysiology. 2003;90(2):771–779. doi: 10.1152/jn.00070.2003.
64. Tozzi A., Sclip A., Tantucci M., et al. Region- and age-dependent reductions of hippocampal long-term potentiation and NMDA to AMPA ratio in a genetic model of Alzheimer's disease. Neurobiology of Aging. 2015;36(1):123–133. doi: 10.1016/j.neurobiolaging.2014.07.002.
65. Ye G. L., Yi S., Gamkrelidze G., Pasternak J. F., Trommer B. L. AMPA and NMDA receptor-mediated currents in developing dentate gyrus granule cells. Developmental Brain Research. 2005;155(1):26–32. doi: 10.1016/j.devbrainres.2004.12.002.
66. Akil A. E., Rosenbaum R., Josić K. Synaptic plasticity in correlated balanced networks. arXiv preprint. 2020. http://arxiv.org/abs/2004.12453.
67. Ocker G. K., Litwin-Kumar A., Doiron B. Self-organization of microcircuits in networks of spiking neurons with plastic synapses. PLoS Computational Biology. 2015;11(8):e1004458. doi: 10.1371/journal.pcbi.1004458.
