Abstract
The Bienenstock–Cooper–Munro (BCM) learning rule provides a simple setup for synaptic modification that combines a Hebbian product rule with a homeostatic mechanism that keeps the weights bounded. The homeostatic part of the learning rule depends on the time average of the post-synaptic activity and provides a sliding threshold that distinguishes between increasing or decreasing weights. There are, thus, two essential time scales in the BCM rule: a homeostatic time scale, and a synaptic modification time scale. When the dynamics of the stimulus is rapid enough, it is possible to reduce the BCM rule to a simple averaged set of differential equations. In previous analyses of this model, the time scale of the sliding threshold is usually faster than that of the synaptic modification. In this paper, we study the dynamical properties of these averaged equations when the homeostatic time scale is close to the synaptic modification time scale. We show that instabilities arise leading to oscillations and in some cases chaos and other complex dynamics. We consider three cases: one neuron with two weights and two stimuli, one neuron with two weights and three stimuli, and finally a weakly interacting network of neurons.
Keywords: BCM, Learning rule, Oscillation, Chaos
Introduction
For several decades now, the topic of synaptic plasticity has remained relevant. A pioneering theory on this topic is the Hebbian theory of synaptic modification [1, 2], in which Donald Hebb proposed that when neuron A repeatedly participates in firing neuron B, the strength of the action of A onto B increases. This implies that changes in the synaptic strengths of a neural network are a function of the pre- and post-synaptic neural activities. A few decades later, Nass and Cooper [3] developed a Hebbian synaptic modification theory for the synapses of the visual cortex, which was later extended to a threshold-dependent setup by Cooper et al. [4]. In this setup, the sign of a weight modification depends on whether the post-synaptic response is below or above a static threshold. A response above the threshold strengthens the active synapse, and a response below the threshold weakens the active synapse.
One of the widely used models of synaptic plasticity is the Bienenstock–Cooper–Munro (BCM) learning rule, with which Bienenstock et al. [5], by incorporating a dynamic threshold that is a function of the average post-synaptic activity over time, captured the development of stimulus selectivity in the primary visual cortex of higher vertebrates. In corroborating the BCM theory, it has been shown that a BCM network develops orientation selectivity and ocular dominance in natural scene environments [6, 7]. Although the BCM rule was developed to model selectivity of visual cortical neurons, it has been successfully applied to other types of neurons. For instance, it has been used to explain experience-dependent plasticity in the mature somatosensory cortex [8]. Furthermore, the BCM rule has been reformulated and adapted to suit various interaction environments of neural networks, including laterally interacting neurons [9, 10] and stimuli-generalizing neurons [11]. The BCM rule has also been at the center of the discussion regarding the relationship between rate-based plasticity and spike-timing-dependent plasticity (STDP); it has been shown that the applicability of the BCM formulation is not limited to rate-based neurons but, under certain conditions, extends to STDP-based neurons [12–14].
Based on the BCM learning rule, a few data mining applications of neuronal selectivity have emerged. It has been shown that a BCM neural network can perform projection pursuit [7, 15, 16], i.e. it can find projections in which a data set departs from statistical normality. This is an important finding that highlights the feature-detecting property of a BCM neural model. As a result, the BCM neural network has been successfully applied to specific pattern recognition tasks. For example, Bachman et al. [17] incorporated the BCM learning rule in their algorithm for classifying radar data. Intrator et al. developed an algorithm for recognizing 3D objects from 2D views by combining existing statistical feature extraction models with the BCM model [18, 19]. There has also been a preliminary simulation suggesting that the BCM learning rule has the potential to identify alphanumeric characters [20].
Mathematically speaking, the BCM learning rule is a system of differential equations involving the synaptic weights, the stimulus coming into the neuron, the activity response of the neuron to the stimulus, and the threshold for the activity. Unlike its predecessors, which use static thresholds to modulate neuronal activity, the BCM learning rule allows the threshold to be dynamic. This dynamic threshold provides stability to the learning rule and, from a biological perspective, provides homeostasis to the system. Treating the BCM learning rule as a dynamical system, this paper explores its stability properties and shows that the dynamic nature of the threshold guarantees stability only in a certain regime of the homeostatic time scale; specifically, we study stability as a function of the ratio of the homeostatic time scale to the synaptic modification time scale. Indeed, there is no biological reason why the homeostatic time scale should be dramatically shorter than the synaptic modification time scale [21], so in this paper, we relax those restrictions. In Sect. 3, we illustrate a stochastic simulation in the simplest case of a single neuron with two weights and two different competing stimuli. We derive the averaged mean field equations and show that there are changes in the stability as the homeostatic time constant changes. In Sect. 4, we continue the study of a single neuron, but now assume that there are more inputs than weights. Here, we find rich dynamics including multiple period-doubling cascades and chaotic dynamics. Finally, in Sect. 5, we study small linearly coupled networks and prove stability results while uncovering more rich dynamics.
Methods
The underlying BCM theory expresses the changes in synaptic weights as a product of the input stimulus pattern vector, x, and a function, ϕ. Here, ϕ is a nonlinear function of the post-synaptic neuronal activity, v, and a dynamic threshold, θ, of the activity (see Fig. 1A).
If at any time the neuron receives a stimulus x from a stimulus set, say $S = \{\mathbf{x}_1, \dots, \mathbf{x}_m\}$, the weight vector, w, evolves according to the BCM rule as
$$\frac{d\mathbf{w}}{dt} = \phi\bigl(v(t), \theta(t)\bigr)\,\mathbf{x}(t), \qquad v(t) = \mathbf{w}(t)\cdot\mathbf{x}(t). \tag{1}$$
θ is sometimes referred to as the "sliding threshold" because, as can be seen from Eq. (1), it changes with time, and this change depends on the output v, the weighted sum of the inputs to the neuron, $v = \mathbf{w}\cdot\mathbf{x}$. ϕ has the following property: for low values of the post-synaptic activity, $0 < v < \theta$, ϕ is negative; for $v > \theta$, ϕ is positive. In the results presented by Bienenstock et al. [5], $\theta = (\overline{v})^p$ is used, where $\overline{v}$ is a running temporal average of v, and the learning rule is stable for $p > 1$. Later formulations of the learning rule (for instance by [7]) have shown that a spatial average can be used in lieu of a temporal average, and that with $p = 2$, $E[v^2]$ is an excellent approximation of $(E[v])^2$. We can also replace the moving temporal average of v with a first-order low-pass filter. Thus a differential form of the learning rule is
$$\tau_w \frac{d\mathbf{w}}{dt} = \mathbf{x}\,v\,(v - \theta), \qquad \tau_\theta \frac{d\theta}{dt} = v^2 - \theta, \tag{2}$$
where $\tau_w$ and $\tau_\theta$ are time-scale factors, which, in simulated environments, can be used to adjust how fast the system changes with respect to time. This is the version of the model that is found in Dayan and Abbott [22]. We point out that the vector input x changes rapidly compared to θ and w, so that Eq. (2) is actually a stochastic equation. The stimuli x are generally taken from a finite set of patterns and are randomly selected and presented to the model.
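Before turning to analysis, it is useful to see how Eq. (2) is typically simulated. The following is a minimal sketch (not the code used for the figures in this paper): it integrates Eq. (2) with the forward Euler method while the stimulus switches between two patterns as a two-state Markov process; the patterns, the switching rate, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the paper's exact values)
tau_w, tau_theta = 1.0, 0.5      # time-scale factors; tau = tau_theta/tau_w
lam = 20.0                       # stimulus switching rate (per second)
dt, T = 1e-3, 400.0              # Euler step and total simulation time
X = np.array([[1.0, 0.0],        # stimulus pattern x1
              [0.0, 1.0]])       # stimulus pattern x2

w = rng.uniform(0.0, 1.0, size=2)    # synaptic weights
theta = rng.uniform(0.0, 1.0)        # sliding threshold
k = 0                                # index of the current stimulus

for _ in range(int(T / dt)):
    if rng.random() < lam * dt:      # two-state Markov switching
        k = 1 - k
    x = X[k]
    v = w @ x                        # post-synaptic response
    w += (dt / tau_w) * x * v * (v - theta)
    theta += (dt / tau_theta) * (v * v - theta)

print(w @ X[0], w @ X[1], theta)     # responses v1, v2 and final threshold
```

With these assumed parameters, a run settles onto a selective state; the next section examines what happens as the ratio $\tau_\theta/\tau_w$ grows.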
Results I: One Neuron, Two Weights, Two Stimuli
For a single linear neuron that receives a stimulus pattern $\mathbf{x} = (x_1, \dots, x_n)$ through synaptic weights $\mathbf{w} = (w_1, \dots, w_n)$, the neuronal response is $v = \mathbf{w}\cdot\mathbf{x}$. The results we present in this section are specific to the case $n = 2$ with two patterns. In this case, the neuronal response is $v = w_1 x_1 + w_2 x_2$. In the next section, we explore a more general setting.
Stochastic Experiment
A good starting point in studying the dynamical properties of the BCM neuron is to explore the steady states of v for different time-scale factors of θ. This is equivalent to varying the ratio $\tau_\theta/\tau_w$ in Eq. (2). We start with a BCM neuron that receives a stimulus input x stochastically from a set $\{\mathbf{x}_1, \mathbf{x}_2\}$ with equal probabilities, that is, $\Pr(\mathbf{x} = \mathbf{x}_1) = \Pr(\mathbf{x} = \mathbf{x}_2) = \frac12$. We create a simple hybrid stochastic system in which the value of x switches between the pair at a rate λ as a two-state Markov process. At steady state, the neuron is said to be selective if it yields a high response to one stimulus and a low (≈0) response to the other.
Figures 1B–D plot the neuronal response v as a function of time. In each case, the initial conditions of $w_1$, $w_2$, and θ lie in the interval $(0, 1)$. The stimuli are two fixed, non-collinear vectors $\mathbf{x}_1$ and $\mathbf{x}_2$; $v_1$ is the response of the neuron to the stimulus $\mathbf{x}_1$ and $v_2$ is the response of the neuron to the stimulus $\mathbf{x}_2$. In each simulation, the presentation of stimuli is a Markov process with rate λ presentations per second. When $\tau_\theta < \tau_w$, Fig. 1B shows a stable selective steady state of the neuron. At this state, $v_1$ is large while $v_2 \approx 0$, implying that the neuron selects $\mathbf{x}_1$. This scenario is equivalent to one of the selective steady states demonstrated by Bienenstock et al. [5].
When the threshold θ changes more slowly than the weights w, the dynamics of the BCM neuron takes on a different kind of behavior. In Fig. 1C, the ratio $\tau_\theta/\tau_w$ has been increased. As can be seen, there is a difference between this figure and Fig. 1B. Here, the steady state of the system loses stability and a noisy oscillation appears to emerge. The neuron is still selective, since the ranges over which $v_1$ and $v_2$ oscillate do not overlap.
Setting the time-scale factor of θ to be a little more than twice that of w reveals a different kind of oscillation from the one seen in Fig. 1C. In Fig. 1D, where $\tau_\theta > 2\tau_w$, the oscillation has very sharp maxima and flat minima and can be described as an alternating combination of spikes and rest states. As can be seen, the neuron is not selective.
Mean Field Model
The dynamics of the BCM neuron (Eq. (2)) is stochastic in nature, since at each time step, the neuron randomly receives one out of a set of stimuli. One way to gain more insight into the nature of these dynamics is to study a mean field deterministic approximation of the learning rule. If the rate of change of the stimuli is rapid compared to that of the weights and threshold, then we can average over the fast time scale to get a mean field or averaged model and then study this through the usual methods of dynamical systems. Consider a BCM neuron that receives a stimulus input x stochastically from the set $\{\mathbf{x}_1, \mathbf{x}_2\}$ such that $\Pr(\mathbf{x} = \mathbf{x}_1) = \rho$ and $\Pr(\mathbf{x} = \mathbf{x}_2) = 1 - \rho$. A mean field equation for the synaptic weights is
$$\tau_w \frac{d\mathbf{w}}{dt} = \rho\,\mathbf{x}_1\,\phi(\mathbf{w}\cdot\mathbf{x}_1, \theta) + (1 - \rho)\,\mathbf{x}_2\,\phi(\mathbf{w}\cdot\mathbf{x}_2, \theta), \qquad \phi(v, \theta) = v(v - \theta).$$
Now let the responses to the two stimuli be $v_1 = \mathbf{w}\cdot\mathbf{x}_1$ and $v_2 = \mathbf{w}\cdot\mathbf{x}_2$. With this, changes in the responses can be written as
$$\frac{dv_i}{dt} = \frac{d\mathbf{w}}{dt}\cdot\mathbf{x}_i, \qquad i = 1, 2. \tag{3}$$
So a mean field equation in terms of the responses is
$$\begin{aligned}
\tau_w \frac{dv_1}{dt} &= \rho\,v_1(v_1 - \theta)\,\|\mathbf{x}_1\|^2 + (1 - \rho)\,v_2(v_2 - \theta)\,(\mathbf{x}_1\cdot\mathbf{x}_2),\\
\tau_w \frac{dv_2}{dt} &= \rho\,v_1(v_1 - \theta)\,(\mathbf{x}_1\cdot\mathbf{x}_2) + (1 - \rho)\,v_2(v_2 - \theta)\,\|\mathbf{x}_2\|^2,\\
\tau_\theta \frac{d\theta}{dt} &= \rho v_1^2 + (1 - \rho) v_2^2 - \theta.
\end{aligned} \tag{4}$$
This equation is our starting point for the analysis of the effects of changing the time-scale factor of θ, $\tau_\theta$. Thus all that matters with regard to the time scales is the ratio $\tau = \tau_\theta/\tau_w$. We note that we could also write down the averaged equations in terms of the weights, but the form of the equations is much more cumbersome.
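A minimal sketch of how Eq. (4) can be integrated numerically is given below; the stimulus vectors and the value of τ are illustrative assumptions (for orthogonal unit stimuli with ρ = 1/2, the analysis below gives a critical ratio of 1, so τ = 1.2 lies in the oscillatory regime).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative choices: orthogonal unit stimuli, equal probabilities
x1, x2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
rho, tau = 0.5, 1.2                  # tau = tau_theta / tau_w (tau_w = 1)
a, b, c = x1 @ x1, x2 @ x2, x1 @ x2  # |x1|^2, |x2|^2, x1 . x2

def rhs(t, y):
    v1, v2, th = y
    p1, p2 = v1 * (v1 - th), v2 * (v2 - th)   # phi(v_i, theta)
    return [rho * p1 * a + (1 - rho) * p2 * c,
            rho * p1 * c + (1 - rho) * p2 * b,
            (rho * v1**2 + (1 - rho) * v2**2 - th) / tau]

# Start near the selective equilibrium (1/rho, 0, 1/rho) of Eq. (6)
sol = solve_ivp(rhs, (0, 400), [1 / rho + 0.01, 0.01, 1 / rho],
                max_step=0.01)
print(sol.y[:, -1])                  # on the limit cycle when tau > tau*
```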
We now look for equilibria and the stability of these fixed points. We note that if the two stimuli are not collinear and $0 < \rho < 1$, then $d\mathbf{w}/dt = 0$ if and only if $v_1(v_1 - \theta) = v_2(v_2 - \theta) = 0$. Using the fact that at equilibrium $\theta = \rho v_1^2 + (1 - \rho)v_2^2$, we find
$$v_1\bigl(v_1 - \rho v_1^2 - (1 - \rho)v_2^2\bigr) = 0, \qquad v_2\bigl(v_2 - \rho v_1^2 - (1 - \rho)v_2^2\bigr) = 0, \tag{5}$$
which gives the fixed points
$$(v_1, v_2) \in \Bigl\{(0, 0),\ (1, 1),\ \bigl(\tfrac{1}{\rho}, 0\bigr),\ \bigl(0, \tfrac{1}{1-\rho}\bigr)\Bigr\}, \qquad \theta = \rho v_1^2 + (1 - \rho)v_2^2. \tag{6}$$
The fixed points $(1/\rho, 0)$ and $(0, 1/(1-\rho))$ are stable (as we will see) for small enough τ and are selective, while $(0, 0)$ and $(1, 1)$ are neither stable nor selective. Bienenstock et al. [5] discussed the stability of these fixed points as they pertain to the original formulation. Castellani et al. [9] and Intrator and Cooper [7] gave a similar treatment to the objective formulation. In Sect. 3.4, it will be shown that the stability of $(1/\rho, 0)$ and $(0, 1/(1-\rho))$ depends on the angle between the stimuli, the amplitudes of the stimuli, the probability ρ, and the ratio of $\tau_\theta$ to $\tau_w$.
Oscillatory Properties: Simulations
As seen in the preceding section, the fixed points of the mean field BCM equation are independent of the particular stimuli and synaptic weights and depend only on the probabilities with which the stimuli are presented. The stability of the selective fixed points, however, depends on the time-scale parameters, the angular relationship between the stimuli, and the amplitudes of the stimuli. To get a preliminary understanding of this property of the system, consider the following simulations of Eq. (4), each with different stimulus set characteristics. We remark that because Eq. (4) depends only on the inner products of the stimuli, an equal rotation of both has no effect on the equations. What matter are the magnitudes, the angle between the stimuli, and the frequencies of presentation.
Simulation A: orthogonal, equal magnitudes, equal probabilities
Let $\mathbf{x}_1$ and $\mathbf{x}_2$ be orthogonal unit vectors with $\rho = \frac12$. In this case, the two stimuli have equal magnitudes, are perpendicular to each other, and are presented with equal probabilities. Figure 2(A) shows the evolution of $v_1$ and θ in the last 100 time steps of a 400 time-step simulation. The dashed line shows the unstable non-zero equilibrium point. For τ above the critical value, there is a stable limit cycle oscillation of $v_1$. Since the stimuli are orthogonal, $\{v_2 = 0\}$ is an invariant set.
Simulation B: non-orthogonal, equal magnitudes, equal probabilities
Let $\mathbf{x}_1$ and $\mathbf{x}_2$ be non-orthogonal unit vectors with $\rho = \frac12$. In this case, the two stimuli have equal magnitudes, are not perpendicular to each other, and are presented with equal probabilities. Figure 2(B) shows an oscillation, but now $v_2$ oscillates as well since the stimuli are not orthogonal.
Simulation C: non-orthogonal, equal magnitudes, unequal probabilities
Let the stimuli be as in simulation B, but with $\rho \neq \frac12$. The only difference between this case and simulation B is that the stimuli are now presented with unequal probabilities. For τ above the critical value, there is a stable oscillation of both $v_1$ and $v_2$, centered around their unstable equilibrium values.
Simulation D: orthogonal, unequal magnitude, equal probabilities
Let $\mathbf{x}_1$ and $\mathbf{x}_2$ be orthogonal, with $\|\mathbf{x}_2\| = A > 1 = \|\mathbf{x}_1\|$ and $\rho = \frac12$. The only difference between this case and simulation A is that stimulus 2 has a larger magnitude. We remark that in this case, even when $\tau_\theta = \tau_w$, the equilibrium point has become unstable.
These four examples demonstrate that oscillations of various shapes and frequencies arise quite generically, whatever the specifics of the mean field model: they can occur in symmetric cases (e.g. simulation A) or with more general parameters as in B–D. We also note that to get oscillatory behavior in the BCM rule, we do not even need $\tau_\theta > \tau_w$, as seen in simulation D. We will see shortly that the oscillations arise from a Hopf bifurcation as the parameter τ increases beyond a critical value. To find this value, we perform a stability analysis of the equilibria of Eq. (4).
Stability Analysis
We begin with a very general stability theorem that will allow us to compute stability for an arbitrary pair of vectors and arbitrary probabilities of presentation. Looking at Eq. (4), we see that by rescaling time, we can assume that $\tau_w = 1$ without loss of generality. To simplify the calculations, we let $a = \|\mathbf{x}_1\|^2$, $b = \|\mathbf{x}_2\|^2$, $c = \mathbf{x}_1\cdot\mathbf{x}_2$, and $q = 1 - \rho$. Note that $c^2 < ab$ by the Schwarz inequality (strictly, since the stimuli are not collinear) and that $0 < \rho < 1$, with $\rho = \frac12$ being the case of equal probability.
For completeness, we first dispatch with the two non-selective equilibria, $(1, 1)$ and $(0, 0)$. At $(1, 1)$, it is easy to see that the characteristic polynomial has a constant coefficient equal to $-\rho q(ab - c^2)/\tau$, which means that it is negative since $c^2 < ab$. Thus, $(1, 1)$ is linearly unstable.
Linearization about $(0, 0)$ yields a matrix that has a double zero eigenvalue and a negative eigenvalue, $-1/\tau$. Since the only linear term in Eq. (4) is $-\theta/\tau$, the center manifold is parameterized by $(v_1, v_2)$, and the first terms in a center manifold calculation for θ are $\theta = \rho v_1^2 + q v_2^2 + \cdots$. This term only contributes cubic terms to the right-hand sides, so that to quadratic order:
$$\frac{dv_1}{dt} = \rho a v_1^2 + q c v_2^2, \qquad \frac{dv_2}{dt} = \rho c v_1^2 + q b v_2^2.$$
Hence,
$$\frac{dv_2}{dv_1} = \frac{\rho c v_1^2 + q b v_2^2}{\rho a v_1^2 + q c v_2^2}.$$
We claim that there exists a solution to this equation of the form $v_2 = K v_1$ for a constant K. Plugging in this assumption, we see that K satisfies
$$\rho c + q b K^2 = K\bigl(\rho a + q c K^2\bigr).$$
For $c \ge 0$, there is a unique $K > 0$ satisfying this equation. (Note that $c > 0$ means the vectors form an acute angle with each other; if $c < 0$, there is also a unique solution.) Plugging $v_2 = K v_1$ into the equation for $v_1$ yields
$$\frac{dv_1}{dt} = \bigl(\rho a + q c K^2\bigr)\,v_1^2.$$
If $c \ge 0$, the coefficient is positive and $v_1$ clearly goes away from the origin along $v_1 > 0$, which implies that $(0, 0)$ is unstable. If $c < 0$, then, again using the fact that $c^2 < ab$, one checks that the coefficient remains positive at the root K, so the solution leaves the origin. Thus, we have proven that $(0, 0)$ is unstable.
We now have to look at the stability of the selective equilibria $(1/\rho, 0)$ and $(0, 1/(1-\rho))$; the latter has different stability properties if $\rho \neq \frac12$ or the amplitudes differ. The Jacobian matrix for the right-hand sides of Eq. (4) around $(v_1, v_2, \theta) = (1/\rho, 0, 1/\rho)$ is
$$J = \begin{pmatrix} a & -\dfrac{qc}{\rho} & -a \\[4pt] c & -\dfrac{qb}{\rho} & -c \\[4pt] \dfrac{2}{\tau} & 0 & -\dfrac{1}{\tau} \end{pmatrix}.$$
From this we get the characteristic polynomial:
$$\lambda^3 + c_2\lambda^2 + c_1\lambda + c_0 = 0,$$
where
$$c_2 = \frac{qb}{\rho} - a + \frac{1}{\tau}, \qquad c_1 = \frac{1}{\tau}\Bigl(a + \frac{qb}{\rho}\Bigr) - \frac{q}{\rho}(ab - c^2), \qquad c_0 = \frac{q(ab - c^2)}{\rho\,\tau}.$$
Equilibria are stable if these three coefficients are positive and, from the Routh–Hurwitz criterion, $c_2 c_1 > c_0$. We note that $c_0 > 0$ since $ab > c^2$ (unless the stimuli are collinear) and $0 < \rho < 1$. This means that no branches of fixed points can bifurcate from the equilibrium point; that is, there are no zero eigenvalues. For τ small, $c_2$ and the other coefficients are positive, so the rest state is asymptotically stable. A Hopf bifurcation will occur if $c_2 c_1 = c_0$ with $c_1 > 0$ and $c_2 > 0$. Setting $c_2 c_1 = c_0$ and multiplying through by $\tau^2$ yields the quadratic equation:
$$P R\,\tau^2 + (2R - PQ)\,\tau - Q = 0, \qquad P = \frac{qb}{\rho} - a, \quad Q = \frac{qb}{\rho} + a, \quad R = \frac{q(ab - c^2)}{\rho}. \tag{7}$$
In the “standard” case (e.g. as in Fig. 2B), we have $\rho = q = \frac12$ and $a = b = 1$, so that $P = 0$, $Q = 2$, and $R = 1 - c^2$, and Eq. (7) reduces to a linear equation with root
$$\tau^* = \frac{Q}{2R} = \frac{1}{1 - c^2} = \frac{1}{\sin^2\alpha}, \tag{8}$$
where $c = \cos\alpha$ and α is the angle between the stimuli.
A similar calculation can be done for the fixed point $(0, 1/(1-\rho))$. In this case, the coefficients of the characteristic polynomial are
$$c_2' = \frac{\rho a}{q} - b + \frac{1}{\tau}, \qquad c_1' = \frac{1}{\tau}\Bigl(b + \frac{\rho a}{q}\Bigr) - \frac{\rho}{q}(ab - c^2), \qquad c_0' = \frac{\rho(ab - c^2)}{q\,\tau}.$$
As with the equilibrium $(1/\rho, 0)$, there can be no zero eigenvalue, and $c_0'$ is positive except in the extreme cases where the stimuli are collinear or $\rho \in \{0, 1\}$. The Routh–Hurwitz quantity $c_2' c_1' - c_0'$ vanishes at roots of
$$P' R'\,\tau^2 + (2R' - P'Q')\,\tau - Q' = 0, \qquad P' = \frac{\rho a}{q} - b, \quad Q' = \frac{\rho a}{q} + b, \quad R' = \frac{\rho(ab - c^2)}{q}. \tag{9}$$
We note that when $\rho = \frac12$ and $a = b = 1$, we recover Eq. (8). For τ sufficiently small, $(0, 1/(1-\rho))$ is asymptotically stable.
We now use Eqs. (7) and (9) to explore the stability of the two solutions as a function of τ. We have already eliminated the possibility of losing stability through a zero eigenvalue since both $c_0$ and $c_0'$ are positive. Thus, the only way to lose stability is through a Hopf bifurcation, which occurs when either of the Routh–Hurwitz quantities vanishes. We can use the quadratic formula to solve for τ in each of these two cases, but one has to be careful since the coefficient of $\tau^2$ vanishes when $P = 0$ or $P' = 0$.
Figure 3 shows stability curves as different parameters vary. In panel A, we use the standard setup (Fig. 2B) where $\rho = \frac12$, the stimuli are unit vectors ($a = b = 1$), and α denotes the angle between the vectors. The curve is explicitly obtained from Eq. (8), with $\tau^* = 1/\sin^2\alpha$. For any τ above $\tau^*$, each of the two selective equilibria is unstable. In Fig. 3B, we show the dependence of $\tau^*$ on ρ, the frequency of a given stimulus. All values of $\tau^*$ are greater than or equal to 1, so that in order to get instability the time-scale factor of homeostasis, $\tau_\theta$, must be greater than or equal to that of the weights, $\tau_w$. In panel C, we show the dependence on the amplitude A, where $\|\mathbf{x}_2\| = A\,\|\mathbf{x}_1\|$. This figure shows two curves: the red curve gives $\tau^*$ for the equilibrium $(1/\rho, 0)$, while the black curve is for $(0, 1/(1-\rho))$. The latter equilibrium can lose stability at arbitrarily low values of τ if the amplitude is large enough. Indeed, $\tau^* \to 0$ as $A \to \infty$.
We summarize the results in this section with the following theorem.
Theorem 3.1
Assume that the two stimuli are not collinear and that $0 < \rho < 1$. Then there are exactly four equilibria of Eq. (4): $(0, 0)$, $(1, 1)$, $(1/\rho, 0)$, and $(0, 1/(1-\rho))$. The first two are always unstable. Let $a = \|\mathbf{x}_1\|^2$, $b = \|\mathbf{x}_2\|^2$, $c = \mathbf{x}_1\cdot\mathbf{x}_2$, and $q = 1 - \rho$. Then
- $(1/\rho, 0)$ is linearly asymptotically stable if and only if τ is less than the positive root $\tau_1^*$ of Eq. (7).
- $(0, 1/(1-\rho))$ is linearly asymptotically stable if and only if τ is less than the positive root $\tau_2^*$ of Eq. (9).
- If $a = b$ (that is, the stimuli have equal amplitude), then $(1/\rho, 0)$ and $(0, 1/(1-\rho))$ are linearly asymptotically stable if and only if $\tau < \tau_1^*$ and $\tau < \tau_2^*$, respectively; for $\rho = \frac12$ these two critical values coincide.
- In the simplest case where $\rho = \frac12$ and $a = b = 1$, both selective equilibria are stable if and only if
$$\tau < \frac{1}{\sin^2\alpha},$$
where α is the angle between the stimuli.
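The last statement of the theorem is easy to check numerically. The sketch below, under the same assumptions (unit stimuli, ρ = 1/2, τ_w = 1), evaluates the eigenvalues of the Jacobian of Eq. (4) at the selective equilibrium $(1/\rho, 0, 1/\rho)$ on either side of the predicted critical value $\tau^* = 1/\sin^2\alpha$; the angle α = π/3 is an illustrative choice.

```python
import numpy as np

rho, alpha = 0.5, np.pi / 3
a = b = 1.0                      # unit stimuli
c = np.cos(alpha)
q = 1 - rho

def max_real_eig(tau):
    # Jacobian of Eq. (4) at (v1, v2, theta) = (1/rho, 0, 1/rho), tau_w = 1
    J = np.array([[a,       -q * c / rho, -a],
                  [c,       -q * b / rho, -c],
                  [2 / tau,  0.0,         -1 / tau]])
    return np.linalg.eigvals(J).real.max()

tau_star = 1 / np.sin(alpha) ** 2    # Theorem 3.1 predicts 4/3 for alpha = pi/3
for tau in (0.9 * tau_star, tau_star, 1.1 * tau_star):
    print(f"tau = {tau:.3f}  max Re(lambda) = {max_real_eig(tau):+.4f}")
```

At the critical value, a complex-conjugate pair sits on the imaginary axis (maximal real part near zero), consistent with a Hopf bifurcation.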
Bifurcation Analysis
The previous section shows that as the ratio τ increases, the two selective equilibria lose stability via a Hopf bifurcation. We now use numerical methods to study the behavior as τ increases. As the stability theorem shows, if the amplitudes of the two stimuli are the same, then the stability is exactly the same for both, no matter what the other parameters are. We will fix ρ, α, and the amplitude of $\mathbf{x}_1$, and let τ vary. In Fig. 4A, we show the case $a = b$, so that both stimuli have the same magnitudes. As τ increases, each of the selective equilibria loses stability at the same value of τ, here given by Eq. (8). At this point a stable limit cycle bifurcates and exists up to a larger value of τ, where the orbit appears to be homoclinic to the nonlinear saddle at the origin. (Note that near the homoclinic there are some numerical issues with the stability; we believe that the branch is stable all the way up to the homoclinic.) We remark that the dynamics for τ slightly larger than the homoclinic value is difficult to analyze; while the origin is unstable, it has stable directions, and it appears that all initial data eventually converge to it. For τ large enough, we have found that solutions blow up in finite time.
If the amplitude of $\mathbf{x}_2$ is different from that of $\mathbf{x}_1$, then the theorem shows that the two selective equilibria have different stability properties. Figure 4C shows the bifurcation diagram for this case. When we follow the stability of the first selective equilibrium (shown as the lower curve labeled 1), there is a Hopf bifurcation and a stable branch of periodic orbits bifurcates from it; this branch persists up to a limit point (LP) where it bends around, becomes unstable, and terminates on the other, unstable selective equilibrium at a Hopf bifurcation for that equilibrium, labeled Hs. Figure 4D shows the small-amplitude periodic orbit projected in the $(v_1, v_2)$ plane, where it is centered around the first equilibrium. The upper curve in panel C (labeled 2) shows the stability of the second selective equilibrium as τ varies. Here, there is a Hopf bifurcation and a stable branch of periodic orbits bifurcates from the equilibrium. The branch terminates at a homoclinic orbit. Figure 4D also shows a larger orbit that surrounds the second equilibrium.
In sum, in this section we have analyzed a very simple BCM model with two stimuli, two weights, and one neuron. We have shown that if the time-scale factor ($\tau_\theta$) of the homeostatic threshold θ is too slow relative to the time-scale factor ($\tau_w$) of the weights, then the selective equilibria lose stability via a Hopf bifurcation and limit cycles emerge. For very large ratios $\tau = \tau_\theta/\tau_w$, solutions become unbounded, and at intermediate values of τ, the origin becomes an attractor even though it is unstable. In the next section, we consider the case when there are more stimuli than there are weights and, in the subsequent section, we consider small coupled networks.
Results II: One Neuron, n Weights, m Stimuli
We next consider the general scenario where a single neuron receives an n-dimensional input selected from m different possibilities with probability $\rho_j$, $j = 1, \dots, m$. We will label the stimuli $\mathbf{x}_j$, with j running from 1 to m. The weights are $\mathbf{w} = (w_1, \dots, w_n)$ and the response of the neuron to stimulus k is
$$v_k = \mathbf{w}\cdot\mathbf{x}_k = \sum_{i=1}^{n} w_i x_{ki}. \tag{10}$$
If the weights and the threshold change slowly compared to the change in the stimulus presentation, then the differential equations for the BCM rule can be averaged over the inputs:
$$\tau_w \frac{d\mathbf{w}}{dt} = \sum_{k=1}^{m} \rho_k\,\mathbf{x}_k\,\phi(v_k, \theta), \qquad \tau_\theta \frac{d\theta}{dt} = \sum_{k=1}^{m} \rho_k v_k^2 - \theta,$$
where $v_k$ is given in Eq. (10). We note that, classically, what is of interest is not the evolution of the weights but rather the evolution of the responses. Using Eq. (10), we see that
$$\tau_w \frac{d\mathbf{v}}{dt} = X X^T\,\Phi(\mathbf{v}, \theta), \qquad \tau_\theta \frac{d\theta}{dt} = \sum_{k=1}^{m} \rho_k v_k^2 - \theta, \tag{11}$$
where $\Phi(\mathbf{v}, \theta)$ is the vector whose entries are $\rho_k\,\phi(v_k, \theta)$. Using this formulation, the equations become very simple. Here X denotes the matrix whose rows are the stimuli $\mathbf{x}_k$; it is an $m \times n$ matrix, and $\mathbf{v} = X\mathbf{w}$. If $m = n$, then X is square, and if it is invertible, then the two formulations with respect to the weights and the responses are equivalent. However, if $m \neq n$, then there will be some degeneracy with respect to the two formulations. Most typically, the dimension of the stimulus space will be larger than the dimension of the weight space ($m > n$), and in this case there will be degeneracy with respect to the responses. As should be clear from the two formulations, the equations are much simpler in the response space, so that this is the preferred set of ODEs, and thus there will be redundancy in the equations. That is, there will be $m - n$ linearly independent vectors $\boldsymbol{\eta}_l$ such that $X^T\boldsymbol{\eta}_l = 0$. This implies that there will be constants of motion in the response space:
$$\boldsymbol{\eta}_l \cdot \mathbf{v}(t) = C_l, \qquad l = 1, \dots, m - n. \tag{12}$$
Thus, in the case when $m > n$, we still need only study the $(n+1)$-dimensional dynamical system consisting of n of the $v_k$ together with θ, along with the linear constraints (12).
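The constants of motion are straightforward to construct numerically. The sketch below, with three illustrative stimuli in two dimensions, computes a left null vector η of X and verifies that $\boldsymbol{\eta}\cdot d\mathbf{v}/dt = 0$ along the flow of Eq. (11).

```python
import numpy as np
from scipy.linalg import null_space

# Three illustrative stimuli in the plane (rows of the 3 x 2 matrix X)
X = np.array([[1.0, 0.0],
              [0.3, 0.9],
              [-0.7, 0.6]])
eta = null_space(X.T)[:, 0]        # X^T eta = 0, i.e., eta^T X = 0

rho = np.full(3, 1 / 3)            # equal presentation probabilities

def vdot(v, theta):
    Phi = rho * v * (v - theta)    # entries rho_k * phi(v_k, theta)
    return X @ (X.T @ Phi)         # Eq. (11) with tau_w = 1

v, theta = np.array([0.5, -0.2, 0.8]), 0.3
print(eta @ vdot(v, theta))        # ~ 0: eta . v is conserved (Eq. (12))
```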
Example: Two Weights, Three Stimuli
As an example of the kinds of dynamics that are possible, we will consider $n = 2$ and $m = 3$, where the three stimuli are $\mathbf{x}_1$, $\mathbf{x}_2$, and $\mathbf{x}_3$, and these are presented with equal probability. In this case, the equations for the responses and the threshold are
$$\tau_w \frac{dv_j}{dt} = \frac13 \sum_{k=1}^{3} \phi(v_k, \theta)\,(\mathbf{x}_j\cdot\mathbf{x}_k), \qquad \tau_\theta \frac{d\theta}{dt} = \frac13 \sum_{k=1}^{3} v_k^2 - \theta, \tag{13}$$
with $\phi(v, \theta) = v(v - \theta)$ and the inner products $\mathbf{x}_j\cdot\mathbf{x}_k$ fixed by the choice of stimuli. Since there are two weights and three stimuli, we can reduce the dimension by 1 with the constraint:
$$\eta_1 v_1 + \eta_2 v_2 + \eta_3 v_3 = C,$$
where $(\eta_1, \eta_2, \eta_3)$ spans the one-dimensional left null space of X. As long as one of the $\eta_k$ is non-zero (which will happen if the vectors are not all collinear), we can solve for one of the $v_k$ and reduce the dimension by 1. In the example that we analyze here, we solve the constraint for $v_3$ and eliminate it. This leaves two parameters: τ and C, the constant of integration. Equilibria are independent of τ, but the existence of limit cycles and other complex dynamics obviously depends on τ.
Figure 5 shows the dynamics as C is varied for different values of the ratio τ. Panel E shows the full range of equilibria as the constant C varies. For large negative values of C, there is a unique equilibrium point, and over an intermediate range of C there are two additional equilibria formed by an isola (isolated circle) of equilibria. The stability of all of these equilibria depends on the values of τ and C. The change in stability occurs when there is a Hopf bifurcation. Panel A shows a two-parameter summary of the curves of Hopf bifurcation points. The green curve corresponds to the stability of the upper branch of equilibria in panels B–E. For τ small enough, there are no Hopf bifurcations on either branch and there appear to be no periodic orbits. For all larger τ, the upper branch has two Hopf bifurcations (labeled a, b), so that we can expect the possibility of periodic behavior. The curve of Hopf bifurcations is more complicated for the isola. We first note that the upper part of the isola always has one real positive eigenvalue, so that it is unstable for all τ. The lower part of the isola has a negative real eigenvalue, and its stability depends on τ. Returning to the Hopf bifurcations on the isola of equilibria (shown in red in panel A), we see that there can be 1, 2, or 3 Hopf bifurcations as C changes. We label these c, d, e. Since there are generally two Hopf bifurcations on the main branch of equilibria, there can be up to five Hopf bifurcations for a given value of τ as C increases. We start with the value of τ in panel B. This value lies below the minimum for which there are Hopf bifurcations on the isola, so all the bifurcations appear on the main branch. Both bifurcations are supercritical and lead to small-amplitude stable oscillations that grow in amplitude. The branches of periodic orbits arising from the two Hopf bifurcations are joined and thus represent a single continuous branch. However, the branch starting at a loses stability via a period-doubling bifurcation (PD in panel B). At this value of τ, there does not appear to be any chaotic behavior that we have been able to find. For the larger value of τ shown in panel C, the branch of periodic orbits that bifurcated from the main branch (at points a, b) has split into two separate branches that terminate on Hopf bifurcations of the upper branch of the isola (points c, d). The left branch that joins a and c also undergoes a period-doubling bifurcation (PD), and for a limited range of C there appears to be chaos in the dynamics; two arrows delimit the range of parameters that are shown in Fig. 6. For still larger τ, there are five Hopf bifurcations: the periodic orbits arising from point a join with those on point c, and those arising from b join with the branch arising from d. The branch of stable periodic orbits arising from the Hopf bifurcation at e is lost at a homoclinic, labeled Hom in panel E. There is a small regime of chaotic behavior in the case shown in panel D, but we find no chaos at certain nearby values of τ. For the largest values of τ, there are three Hopf bifurcations (a, b, d): the bifurcations c, e merge and disappear, so that all the equilibria on the isola are unstable. The branch of periodic orbits arising from d becomes disconnected from the branch arising from b, while the branch of orbits arising from b joins the branch arising from a. Other than the unique stable equilibrium when C is large or small, there is only a principal branch of stable periodic orbits between the Hopf bifurcations a and b. There are other complex structures, but none of them are stable.
Figure 6 shows some probable chaos for one such choice of τ and C. Panel A shows a trajectory projected in the phase plane. Panel B shows the evolution of the attracting dynamics as C varies. We take a Poincaré section and plot the successive values of θ on the section after removing transients, letting C vary between 0 and 0.25. As C increases, there is a periodic orbit that undergoes multiple period-doubling bifurcations before becoming chaotic. There are several windows showing period-three orbits as well as many regions with complex behavior. The chaotic and periodic dynamics terminates near the value of C at which the lower stable branch of equilibria in the isola begins. Chaos and similar complex dynamics occur for other values of τ as well.
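A sketch of the Poincaré-section construction behind Fig. 6B is given below. It integrates the full system (13) for three illustrative stimuli (the constant C is then set implicitly by the initial condition through Eq. (12)) and records θ whenever $v_2$ crosses a fixed level from below; the section level, the stimuli, and τ are all assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

X = np.array([[1.0, 0.0], [0.3, 0.9], [-0.7, 0.6]])   # illustrative stimuli
G = X @ X.T                                           # Gram matrix
rho, tau = 1 / 3, 2.0

def rhs(t, y):
    v, th = y[:3], y[3]
    dv = G @ (rho * v * (v - th))                     # Eq. (13), tau_w = 1
    return np.append(dv, (rho * (v @ v) - th) / tau)

sol = solve_ivp(rhs, (0, 2000), [2.0, 0.1, 0.1, 1.0], max_step=0.05)
v2, th, t = sol.y[1], sol.y[3], sol.t

keep = t[:-1] > 500                                   # discard transients
up = (v2[:-1] < 0.5) & (v2[1:] >= 0.5) & keep         # upward crossings
print(np.round(th[1:][up][:12], 4))                   # section values of theta
```

Sweeping the initial condition (and hence C) and stacking the recorded θ values reproduces a bifurcation diagram of the type shown in Fig. 6B.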
In this section, we have shown that the degeneracy that occurs when there are more stimulus patterns than weights can be resolved by finding some simple constants of motion. For two weights, the resulting reduced system will always be three-dimensional. In the simplest case of three patterns and two weights, we have found rich dynamics when $\tau = \tau_\theta/\tau_w$ is larger than 1.
Results III: Small Coupled Network
To implement a network of coupled BCM neurons receiving stimulus patterns from a common set, it is important to incorporate a mechanism for competitive selectivity within the network. A mechanism of this sort, found in visual processes [23] (and also in tactile [24], auditory [25], and olfactory processing [26]) is called lateral inhibition, during which an excited neuron reduces the activity of its neighbors by disabling the spreading of action potentials to neighboring neurons in the lateral direction. This creates a contrast in stimulation that allows increased sensory perception.
Consider a network of N mutually inhibiting neurons. At any time, let $V_i$ be the net activities of the neurons. Let $\hat{v}_i$ be the partial activities induced by a stimulus x, i.e. $\hat{v}_i = \mathbf{w}_i\cdot\mathbf{x}$, where $\mathbf{w}_i$ is the synaptic weight vector for neuron i. At any given time, the activity $V_i$ of the linear neuron i (where $i = 1, \dots, N$) follows the differential equation:
$$\frac{dV_i}{dt} = -V_i + I_i,$$
where $I_i$ is the external input to the neuron [27]. Since neuron i is inhibited by its neighbors,
$$I_i = \hat{v}_i - \gamma \sum_{j \neq i} V_j,$$
where γ is the inhibition parameter, measuring the amount of inhibition that i gets. Therefore
$$\frac{dV_i}{dt} = -V_i + \hat{v}_i - \gamma \sum_{j \neq i} V_j. \tag{14}$$
At a steady state, this equation becomes
$$V_i = \hat{v}_i - \gamma \sum_{j \neq i} V_j. \tag{15}$$
Thus, the overall activity of the network can be expressed as
$$G\,\mathbf{V} = \hat{\mathbf{v}},$$
where
$$G = (1 - \gamma)\,\mathbb{I} + \gamma\,\mathbb{J},$$
with $\mathbb{J}$ the $N \times N$ matrix of all 1's and $\mathbb{I}$ the identity matrix. Now let $S = \sum_{j=1}^{N} V_j$. Then we can write Eq. (15) as
$$(1 - \gamma)\,V_i = \hat{v}_i - \gamma S, \tag{16}$$
thus
$$(1 - \gamma)\,S = \sum_{j} \hat{v}_j - \gamma N S,$$
implying
$$S\,\bigl(1 + \gamma(N - 1)\bigr) = \sum_{j} \hat{v}_j,$$
or
$$S = \frac{\sum_{j} \hat{v}_j}{1 + \gamma(N - 1)}.$$
Substituting S into Eq. (16) we get
$$V_i = \frac{1}{1 - \gamma}\Bigl(\hat{v}_i - \frac{\gamma \sum_{j} \hat{v}_j}{1 + \gamma(N - 1)}\Bigr).$$
This expression is undefined when $\gamma = 1$ or $\gamma = -\frac{1}{N-1}$. Thus G is invertible when $\gamma \neq 1$ and $\gamma \neq -\frac{1}{N-1}$.
Linearizing around the steady state solution of Eq. (14), we obtain the Jacobian
$$M = -\bigl[(1 - \gamma)\,\mathbb{I} + \gamma\,\mathbb{J}\bigr] = -G.$$
Notice that $\mathbf{e} = (1, 1, \dots, 1)^T$ is an eigenvector of M with corresponding eigenvalue $-(1 + \gamma(N - 1))$. This eigenvalue is negative when $\gamma > -\frac{1}{N-1}$. Now take any vector u with $\sum_i u_i = 0$, i.e. orthogonal to e, and note that $\mathbb{J}u = 0$. Such a u is in the eigenspace of M because
$$M u = -(1 - \gamma)\,u - \gamma\,\mathbb{J}u = -(1 - \gamma)\,u.$$
Thus u is an eigenvector of M corresponding to the eigenvalue $-(1 - \gamma)$. This eigenvalue is negative when $\gamma < 1$. Thus, for $-\frac{1}{N-1} < \gamma < 1$, whenever G is invertible, the system is also stable.
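The steady-state computation and the eigenvalue structure above are easy to verify directly; the sketch below builds G for an illustrative network, solves $G\mathbf{V} = \hat{\mathbf{v}}$, and confirms the two eigenvalues of $M = -G$.

```python
import numpy as np

N, gamma = 4, 0.3
v_hat = np.array([1.0, 0.2, 0.1, 0.05])   # illustrative feedforward drives

# G = (1 - gamma) I + gamma J; invertible for gamma != 1, -1/(N-1)
G = (1 - gamma) * np.eye(N) + gamma * np.ones((N, N))
V = np.linalg.solve(G, v_hat)             # net responses after inhibition
print(V)

# Jacobian M = -G: eigenvalue -(1 + gamma (N-1)) on e = (1, ..., 1),
# and -(1 - gamma) with multiplicity N-1 on zero-sum vectors
print(np.round(np.linalg.eigvals(-G), 6))
```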
Now consider two neurons a and b that mutually inhibit each other and, at any instant, receive the same stimulus pattern x, with synaptic weight vectors $\mathbf{w}_a$ (for neuron a) and $\mathbf{w}_b$ (for neuron b). Let their responses to x be $v_a = \mathbf{w}_a\cdot\mathbf{x}$ and $v_b = \mathbf{w}_b\cdot\mathbf{x}$, and their net responses (after accounting for inhibition) be $V_a$ and $V_b$. Finally, let the dynamic thresholds for $V_a$ and $V_b$ be $\theta_a$ and $\theta_b$, respectively. The BCM learning rule for these two neurons is given by
$$\tau_w \frac{d\mathbf{w}_a}{dt} = \mathbf{x}\,\phi(V_a, \theta_a), \qquad \tau_\theta \frac{d\theta_a}{dt} = V_a^2 - \theta_a,$$
$$\tau_w \frac{d\mathbf{w}_b}{dt} = \mathbf{x}\,\phi(V_b, \theta_b), \qquad \tau_\theta \frac{d\theta_b}{dt} = V_b^2 - \theta_b, \tag{17}$$
where $V_a = v_a - \gamma V_b$ and $V_b = v_b - \gamma V_a$, and thus
$$(1 - \gamma^2)\,V_a = v_a - \gamma v_b, \qquad (1 - \gamma^2)\,V_b = v_b - \gamma v_a, \tag{18}$$
or
$$V_a = \frac{v_a - \gamma v_b}{1 - \gamma^2}, \qquad V_b = \frac{v_b - \gamma v_a}{1 - \gamma^2}. \tag{19}$$
Mean Field Model
Consider the general two-dimensional stimulus pattern $\mathbf{x} = (x_1, x_2)$. Let the two neurons, a and b, receive this stimulus with the synaptic weight vectors $\mathbf{w}_a$ and $\mathbf{w}_b$. If $v_a = \mathbf{w}_a\cdot\mathbf{x}$ and $v_b = \mathbf{w}_b\cdot\mathbf{x}$, then according to Eq. (19),
$$V_a = \mu\,(v_a - \gamma v_b), \qquad V_b = \mu\,(v_b - \gamma v_a), \tag{20}$$
where $\mu = 1/(1 - \gamma^2)$, γ is the inhibition parameter, and $0 \le \gamma < 1$.
The rate of change of $v_a$ is given by
$$\tau_w \frac{dv_a}{dt} = \tau_w \frac{d\mathbf{w}_a}{dt}\cdot\mathbf{x} = \|\mathbf{x}\|^2\,\phi(V_a, \theta_a). \tag{21}$$
A similar expression exists for $v_b$. Assume that x is drawn from the set $\{\mathbf{x}_1, \mathbf{x}_2\}$ such that $\Pr(\mathbf{x} = \mathbf{x}_1) = \rho$ and $\Pr(\mathbf{x} = \mathbf{x}_2) = 1 - \rho$, and write $v_{k1}$, $v_{k2}$ for the responses of neuron $k \in \{a, b\}$ to the two stimuli, with net responses $V_{k1}$, $V_{k2}$ given by Eq. (20). Then, in terms of the responses, the mean field version of the BCM rule for two mutually inhibiting neurons a and b is derived as in Sect. 3:
$$\begin{aligned}
\tau_w \frac{dv_{k1}}{dt} &= \rho\,\phi(V_{k1}, \theta_k)\,\|\mathbf{x}_1\|^2 + (1 - \rho)\,\phi(V_{k2}, \theta_k)\,(\mathbf{x}_1\cdot\mathbf{x}_2),\\
\tau_w \frac{dv_{k2}}{dt} &= \rho\,\phi(V_{k1}, \theta_k)\,(\mathbf{x}_1\cdot\mathbf{x}_2) + (1 - \rho)\,\phi(V_{k2}, \theta_k)\,\|\mathbf{x}_2\|^2,\\
\tau_\theta \frac{d\theta_k}{dt} &= \rho V_{k1}^2 + (1 - \rho) V_{k2}^2 - \theta_k, \qquad k \in \{a, b\}.
\end{aligned} \tag{22}$$
Observing that each of ρ, $1 - \rho$, and $ab - c^2$ is non-zero, and setting the right-hand side of Eq. (22) to zero yields
$$\phi(V_{k1}, \theta_k) = \phi(V_{k2}, \theta_k) = 0, \qquad \theta_k = \rho V_{k1}^2 + (1 - \rho) V_{k2}^2, \qquad k \in \{a, b\}.$$
Solving this system of equations gives a set of fixed points in which each neuron's pair of net responses $(V_{k1}, V_{k2})$ is one of $(0, 0)$, $(1, 1)$, $(1/\rho, 0)$, or $(0, 1/(1-\rho))$. The remaining fixed points are the symmetric variants of these, obtained, for example, by swapping the labels a and b or by swapping the pair $(1/\rho, 0)$ for $(0, 1/(1-\rho))$.
Castellani et al. [9] and Intrator and Cooper [7] give a detailed analysis of the stability of most of these fixed points in the limit of fast homeostasis. They showed that the non-selective fixed points built from $(0, 0)$ and $(1, 1)$ are unstable and the fully selective fixed points are stable. This leaves the partially selective fixed points, in which one neuron is selective and the other is not. We address these below for our particular choice of stimuli.
We will explore the dynamics of Eq. (22) as τ changes in a very simple scenario in which $\rho = \frac12$ and $\mathbf{x}_1$, $\mathbf{x}_2$ are unit vectors. In this case, there are only two equilibria that need to be studied: the symmetric case and the antisymmetric case. The other selective equilibria are symmetric to these two. In the symmetric case, neurons a and b are both selective to stimulus 1, and in the antisymmetric case, neuron a selects stimulus 1 and neuron b selects stimulus 2. Fixing α, the angle between the stimuli, leaves two free parameters: τ and γ, the inhibitory coupling.
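To make the setting concrete, here is a minimal sketch of how one might integrate our reading of Eq. (22); the choices of α, γ, τ, and the initial condition (near the antisymmetric equilibrium) are illustrative assumptions, not the values used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, gamma, tau, rho = np.pi / 3, 0.2, 1.0, 0.5   # illustrative values
c = np.cos(alpha)
mu = 1.0 / (1.0 - gamma ** 2)
K = np.array([[1.0, c], [c, 1.0]])                  # Gram matrix of stimuli
p = np.array([rho, 1 - rho])

def rhs(t, y):
    va, vb, tha, thb = y[0:2], y[2:4], y[4], y[5]
    Va = mu * (va - gamma * vb)                     # net responses, Eq. (19)
    Vb = mu * (vb - gamma * va)
    dva = K @ (p * Va * (Va - tha))                 # Eq. (22), neuron a
    dvb = K @ (p * Vb * (Vb - thb))                 # Eq. (22), neuron b
    return [*dva, *dvb, (p @ Va**2 - tha) / tau, (p @ Vb**2 - thb) / tau]

# Near the antisymmetric state: V_a = (2, 0), V_b = (0, 2), theta = 2,
# which corresponds to v_a = (2, 2*gamma) and v_b = (2*gamma, 2)
y0 = [2.05, 0.42, 0.38, 1.95, 2.0, 2.0]
sol = solve_ivp(rhs, (0, 300), y0, max_step=0.01)
print(np.round(sol.y[:, -1], 3))   # settles back for small enough tau
```

Raising τ (or γ) past the critical values derived next destabilizes the equilibrium and produces the oscillatory behavior explored in the numerical results.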
Stability of the Selective Equilibria
We now consider the stability of these equilibria in the simplified case of the previous paragraph ($\mathbf{x}_1$, $\mathbf{x}_2$ are unit vectors with $\mathbf{x}_1\cdot\mathbf{x}_2 = \cos\alpha$, where α is the angle between them). We exploit the symmetry of the resulting equations to factor the characteristic polynomial into the product of two cubic polynomials and then use the Routh–Hurwitz criteria. We have made use of the symbolic capabilities of Maple. Again, let $c = \cos\alpha$, $\mu = 1/(1 - \gamma^2)$, and $\tau = \tau_\theta/\tau_w$. The linearization about the symmetric equilibrium has the block-symmetric form
$$\begin{pmatrix} A & B \\ B & A \end{pmatrix},$$
so the stability is found by studying $A + B$ and $A - B$. Let $m_+ = 1/(1 + \gamma)$ and $m_- = 1/(1 - \gamma)$. Then the blocks $A \pm B$ have the form
$$\begin{pmatrix} m & -cm & -1 \\ cm & -m & -c \\ 2m/\tau & 0 & -1/\tau \end{pmatrix}, \qquad m = m_\pm.$$
The characteristic polynomial of each block is
$$\lambda^3 + \frac{1}{\tau}\lambda^2 + \Bigl(\frac{2m}{\tau} - m^2(1 - c^2)\Bigr)\lambda + \frac{m^2(1 - c^2)}{\tau} = 0.$$
Clearly the $\lambda^2$ coefficient and the constant coefficient are positive. The λ-coefficient could become negative if $\tau > 2/(m(1 - c^2))$. The Routh–Hurwitz criterion also requires
$$\frac{1}{\tau}\Bigl(\frac{2m}{\tau} - m^2(1 - c^2)\Bigr) > \frac{m^2(1 - c^2)}{\tau}.$$
This quantity becomes negative for $\tau > 1/(m(1 - c^2))$, which happens before the λ-coefficient can change sign. Clearly $m_- > m_+$, so as τ increases there will be a Hopf bifurcation, first in the block $A - B$. We thus see that the symmetric equilibrium will be stable if and only if
$$\tau < \frac{1 - \gamma}{1 - c^2} = \frac{1 - \gamma}{\sin^2\alpha},$$
where we have used the definitions of $m_-$ and c. The critical value of τ is a linear function of γ, and this critical value can be arbitrarily small as $\gamma \to 1$. We also remark that the critical instability is due to the block $A - B$, which is associated with antisymmetric perturbations of the two neurons. Thus, we expect a symmetry-breaking Hopf bifurcation to out-of-phase oscillations. We will numerically confirm this result in the subsequent numerical analysis.
We can do a similar calculation for the antisymmetric equilibrium. Here, we just state the final result; the approach and calculations are similar. The characteristic polynomial again factors into two cubics, each of the form
$$\lambda^3 + \frac{1}{\tau}\lambda^2 + \Bigl(\frac{2\mu(1 \mp c\gamma)}{\tau} - \mu(1 - c^2)\Bigr)\lambda + \frac{\mu(1 - c^2)}{\tau} = 0.$$
The constant coefficient and the quadratic coefficient are always positive. The linear coefficient is positive as long as
$$\tau < \frac{2(1 \mp c\gamma)}{1 - c^2}.$$
The additional Routh–Hurwitz condition is positive if and only if
$$\tau < \frac{1 \mp c\gamma}{1 - c^2},$$
which is clearly less than the preceding bound. Applying the definitions of c and μ yields a critical value $\tau^*_{\mathrm{anti}}(\gamma, \alpha)$ for the antisymmetric equilibrium, the smaller of the two roots above. Clearly $\tau^*_{\mathrm{anti}} > 0$, so that the antisymmetric solution is stable as long as $\tau < \tau^*_{\mathrm{anti}}$.
We summarize the stability results in the following theorem.
Theorem 5.1
Assume that the two stimuli are unit vectors with an angle α between them and are presented with equal probability. Then
- The symmetric equilibrium is linearly asymptotically stable if and only if
$$\tau < \frac{1 - \gamma}{\sin^2\alpha}.$$
Furthermore, the unstable direction is antisymmetric.
- The antisymmetric equilibrium is linearly asymptotically stable if and only if $\tau < \tau^*_{\mathrm{anti}}(\gamma, \alpha)$, the critical value obtained from the Routh–Hurwitz condition on the factored characteristic polynomial above. Furthermore, the unstable direction is symmetric.
We remark that, for acute angles, the symmetric equilibrium loses stability at lower values of τ than does the antisymmetric equilibrium, and for obtuse angles it is vice versa.
Partially selective equilibria. Using the same notation as for the selective equilibria, we consider the partially selective fixed points, in which one neuron is selective and the other is not. An elementary evaluation of the constant coefficient of the characteristic polynomial yields a negative value. Since the product of the eigenvalues is given by this constant coefficient, the eigenvalues have mixed signs and these equilibria are saddle points.
Numerical Results
In this section, we study the numerical behavior of Eq. (17) for $\rho = \frac12$ and unit stimulus vectors with angle α between them, as τ and γ vary. We generally fix α; the choice of α is somewhat arbitrary but was found to yield rich dynamics.
We first study the behavior of the symmetric equilibrium as τ increases. In Fig. 7A, we set γ to a small fixed value. For τ small enough, the selective symmetric equilibrium is stable; with increasing τ, it loses stability and we have a Hopf bifurcation (HB). A branch of periodic solutions emerges in which the two neurons oscillate out of phase. At a critical value of τ there is a branch point (or pitchfork) bifurcation (BP) where this selective periodic solution intersects a non-selective periodic solution. The selective periodic solutions come in two variants (top and lower branches), according to which stimulus is selected. The non-selective branch (with a ♯ on it) loses stability at a torus bifurcation (TR). Beyond the torus, there are, at first, stable non-selective quasi-periodic solutions, and then some possible chaos. We will look at chaotic solutions when we describe the antisymmetric behavior. Figures 7B1, 2 show the V's for the selective and non-selective stable oscillations at values of τ denoted by the ⋆ and the ♯, respectively. In Fig. 7C, we set γ to a larger value and see a behavior similar to panel A, but the selective branches lose stability at a torus bifurcation at values of τ less than the branch point; this gives rise to attracting quasi-periodic selective behavior, and then, for τ a bit larger, selective chaos. For all γ, when τ is larger than about 3, the solutions produce a “spike” and then return to 0. We know that the origin is unstable, but there are some stable directions, and all solutions appear¹ to approach this stable direction when τ is large enough.
Figure 8 summarizes the behavior of the symmetric branch of solutions as τ and γ vary. Specific bifurcation points from Fig. 7 are labeled a, b, c, d in this figure. There are seven regions in the diagram. In R1, the equilibrium is stable; this region is delineated by the Hopf bifurcation (HB) line $\tau = (1 - \gamma)/\sin^2\alpha$ that we computed in Theorem 5.1. Region R2 corresponds to a non-symmetric periodic orbit such as shown in Fig. 7B1. If γ is small (roughly, less than 0.26), then, as τ increases, there is a reverse pitchfork bifurcation (BP) to a non-selective periodic orbit (R3) such as shown in Fig. 7B2. This orbit loses stability at the non-selective torus bifurcation (NSTR) as we enter R4. In R4, there is quasi-periodic and chaotic behavior, but the dynamics lies on a four-dimensional invariant subspace. Passing from R2 to R5 also appears to lead to quasi-periodic and chaotic behaviors. Region R6, delineated below by the curve of limit points (LP) and above by an apparent homoclinic orbit (HC), consists of large-amplitude stable periodic orbits. This branch of solutions (seen in the one-parameter diagram, Fig. 7A, at the top right) does not connect to the other branches until γ is close to zero (not shown). As τ increases, the period of these orbits appears to go to infinity and they spend more and more time near the origin. We find that in R7, the origin is a global attractor, even though it is unstable. Figures 9A, B show simulations when τ is in R6 (panel A) and in R7 (panel B). Initial conditions were chosen with no special symmetry. In Fig. 9A, we see a brief transient, followed by a gap and then, eventually, long-period activity. In Fig. 9B, we show only the first five time units, but even over much longer integration times we saw no return to activity.
We next turn to the behavior of the antisymmetric equilibrium as τ increases. Figure 10A shows the fate of this branch of solutions as τ increases for fixed γ and α. As described in Theorem 5.1, there is a Hopf bifurcation, and this gives rise to a branch of periodic orbits (labeled i). Figure 10B(i) shows a time series of the V's, which maintains the symmetry of the underlying equilibrium. At a critical value of τ there is a symmetry-breaking bifurcation and a new branch of solutions emerges where all the V's are different. This is shown in Fig. 10B(ii). Further increases in τ lead to a pair of period-doubling bifurcations, PD1, PD2. The branch emerging from PD1 leads to a stable periodic branch, an example of which is in Fig. 10B(iii). The second branch, PD2, leads to an unstable branch of solutions and re-stabilizes the branch labeled ii. This branch and the branch labeled iii then lose stability at torus bifurcations, labeled TR1 and TR2, respectively. Below, we will explore what happens after the bifurcation at TR2. Once τ gets large enough, the dynamics appears to become symmetric, and it is as in Fig. 9.
Figure 11 shows the behavior of the antisymmetric branch as τ and γ change. For a fixed value of γ, as τ increases, the selective state loses stability at the Hopf bifurcation (HB) described in Theorem 5.1. The branch of periodic orbits such as seen in Fig. 10B(i) is found in the adjacent region and loses stability at a pitchfork bifurcation (BP). The HB and BP curves appear sequentially for all γ shown, in contrast to Fig. 8. In the region beyond BP, solutions have lost the symmetry of the equilibrium and resemble the solutions shown in Fig. 10B(ii). Further increases in τ lead to a period doubling, and solutions in the next region look like Fig. 10B(iii). This region is bounded by PD1 and the torus bifurcation TR2 over most of the range of γ; for the remaining values of γ, instead of a torus bifurcation, there is a period-doubling cascade to chaos (not shown). Beyond the torus bifurcation, there seems to be quasi-periodic motion that persists until PD2, where the branch labeled ii stabilizes again to form a new region. This branch loses stability at a torus bifurcation TR1. For τ beyond TR1, there seems to be chaos, quasi-periodic behavior, and complex periodic orbits. Eventually, for τ large enough, the dynamics of Fig. 9 is all that remains.
Discussion
We have explored the BCM rule as a dynamical system. Although the literature does not suggest a homeostatic time-scale range that ensures stability of a biological system, we have shown that the selective fixed points of the BCM rule are generally stable when the homeostatic time scale is faster than the synaptic modification time scale, and that complex dynamics emerges as the homeostatic time scale varies. The nature of this complex dynamics also depends on the angular and amplitudinal relationships between stimuli in the stimulus set. In our analysis, the neuron is presented with stimuli that switch rapidly, so it was possible to reduce the learning rule to a simple averaged set of differential equations. We studied the dynamics and bifurcation structures of these averaged equations when the homeostatic time scale is close to the synaptic modification time scale, and found that instabilities arise, leading to oscillations and in some cases chaos and other complex dynamics. Similar results would hold if the quadratic term $v^2$ in the second line of Eq. (2) were replaced with $v^p$, since the original formulation by Bienenstock et al. [5] suggests that the fixed point structures are preserved for any positive value of p. Since the onset of the bifurcations (such as the Hopf bifurcation) depends mainly on the symmetry of these fixed points, we expect that the main results will be the same and only the particular values of parameters would change. While this paper has focused on how small changes in the time scale of a homeostatic threshold can lead to complex dynamics, there are many other kinds of homeostases [28], which present many time scales and similar opportunities for analysis.
The model neuron we used in this paper has been assumed to have linear response properties, which may be seen as oversimplified, and hence a potential problem in translating our conclusions to actual biological systems. It is well known that plasticity goes beyond synapses, and it is sometimes even a neuron-wide phenomenon [29], and that there is no unique route to regulating the sliding threshold of the BCM rule [10, 30]. Thus, in addition to synaptic activities, intrinsic neuronal properties may also play a role in the evolution of responses, and linearity may not be able to capture this scenario. The introduction of a nonlinear transfer function to the BCM learning rule has been addressed by Intrator and Cooper [7]. In their formulation, the learning rule is derived as a gradient descent rule on an objective function that is cubic in the nonlinear response. Our decision to use linear units is motivated by their accessibility to formal analysis. Biologically, linearity can be justified if we assume that the underlying biochemical mechanisms are governed by membrane voltage rather than firing rate; see, for example, Clopath and Gerstner [31].
The theoretical contributions of this paper are based on an analysis using a mean field model of the BCM learning rule. Similar mean field models have been constructed, but in terms of synaptic weights; see, for example, Yger and Gilson [32]. With that approach to the mean field, it is difficult to arrive at the fact that the fixed points (though not their stability) of the learning rule depend only on the probabilities with which each stimulus is presented. In this paper, we have given a derivation of the mean field model of the BCM learning rule as a rate of change of the activity response v with time. The derived model accounts for the amplitudes of the stimuli presented, the pairwise angular relationships between the stimuli, and the probabilities with which the stimuli are presented. The appeal of this derivation is that it easily highlights the fact that the fixed points depend only on these probabilities. Additionally, the derivation is important because the dynamics of the BCM learning rule is driven by the activity response (not the synaptic weights), and many analyses in the literature rely on this fact; see, for example, Castellani et al. [9]. Our analyses considered three cases: one neuron with two weights and two stimuli, one neuron with two weights and three stimuli, and lastly a weakly interacting small network of neurons.
In exploring the dynamics of a single neuron, we used Fig. 3 to show the dependence of the critical value of τ (at which a Hopf bifurcation occurs) on the angle α between the stimuli, the amplitude factor A between the stimuli, and the probability distribution ρ of the stimuli. The role of τ as a bifurcation parameter has been seen in several recent works, including Zenke et al. [33], Toyoizumi et al. [34], and Yger and Gilson [32]. A possible future work, which is beyond the scope of this paper, is to investigate the dependence of the selectivity of the neuron on τ. For a single neuron presented with a set of stimuli S, Bienenstock et al. [5] defined the selectivity of the neuron as a function of the area under the tuning curve of the neuronal responses to S. This definition, however, assumes that the learning rule converges to a stable steady state. To analyze the selectivity of a neuron as τ varies, one would need a measure of selectivity that accommodates an oscillatory steady state. Thus, it might be more appropriate to talk about relative selectivity (RS) in this case. If the neuron receives stimulus inputs from the set S with synaptic weights w, then for a given τ we can let $t^*$ be the point in time at which the dynamics of the neuron achieves a stable steady state or an oscillatory steady state, and define RS as the selectivity evaluated from $t^*$ onward, normalized by the maximum selectivity.
Note that $0 \le \mathrm{RS} \le 1$, since it is defined as a fraction of the maximum selectivity. For this reason it tends to have the same maximum value and shape for all values of the remaining parameters. Preliminary analysis of this formulation allows us to conjecture that RS stays essentially at its maximum for τ below the critical value and decays to 0 as τ increases beyond it.
In our analysis of a small network (see Sect. 5), we have made the simplifying assumption that the lateral inhibitory weight is constant in time. The incorporation of an inhibitory plasticity rule (as in Moldakarimov et al. [35]) would necessitate a third time-scale parameter, and possibly a fourth if the inhibitory rule were to include a dynamic modification threshold. This is beyond the scope of the paper and reserved for future work. Another related possible future direction is to perform an analysis of a large network of BCM neurons by observing what happens to the network dynamics in different time-scale parameter regimes. A good starting point is to explore the dynamics of a fully connected network with equal inhibition, that is, each neuron is coupled with every other neuron in the network and inhibits each of them equally. The next step would be to let the amount of inhibition vary according to how far away the inhibiting neuron is. It may also be interesting to examine how the architecture of the network is affected. We know, for instance, that spike-timing-dependent plasticity (STDP) has the ability to yield a feedforward network out of a fully connected network. The analysis that Kozloski and Cecchi [36] used to demonstrate this finding centers on the synaptic weights. Thus it will be useful to pay closer attention to the synaptic weights in future work. Moreover, the oscillatory and chaotic properties we observed in the small coupled network would also be observed had our mean field model been derived in terms of the weights and the analyses been done with the synaptic weights.
The debate about synaptic homeostatic time scales in neurobiology remains vibrant. A review of the literature reveals a varied, and somewhat paradoxical, set of findings among experimentalists and theoreticians. While the homeostasis of synapses found in experiments is slow [12, 37], homeostasis of synapses in most theoretical models needs to be rapid, and sometimes even instantaneous, to achieve stability [33, 38, 39]. There are, however, ongoing efforts to shed more light on the debate. It has been suggested that both fast and slow homeostatic mechanisms exist. Zenke and Gerstner [39] suggest that learning and memory use an interplay of both forms of homeostasis; while fast homeostatic control mechanisms help maintain the stability of synaptic plasticity, slower ones are important for fine-tuning neural circuits. The present work contributes to this debate by demonstrating the relevance of fast homeostasis to synaptic stability; it also furthers the discussion regarding the link between STDP and the BCM rule: Zenke et al. [33] found that homeostasis needs to have a faster rate of change for spike-timing-dependent plasticity to achieve stability. Furthermore, it is well known that, under certain conditions, the BCM learning rule follows directly from STDP [13, 14].
Footnotes
1. We have no proof of this, but we have observed it in every choice of parameters.
Competing Interests
There are no competing interests.
Authors’ Contributions
LCU, GBE, PWM performed the research and all three wrote the paper. All authors read and approved the final manuscript.
Funding
GBE was partially supported by the NSF. PWM was supported by NSF grant #SBE 0542013 to the Temporal Dynamics of Learning Center, an NSF Science of Learning Center.
Contributor Information
Lawrence C. Udeigwe, Email: lawrence.udeigwe@manhattan.edu
Paul W. Munro, Email: pmunro@sis.pitt.edu
G. Bard Ermentrout, Email: bard@pitt.edu.
References
- 1. Hebb D. The organization of behavior. New York: Wiley; 1949.
- 2. Hertz J, Krogh A, Palmer R. Introduction to the theory of neural computation. Reading: Addison-Wesley; 1991.
- 3. Nass MN, Cooper L. A theory for the development of feature detecting cells in the visual cortex. Biol Cybern. 1975;19:1–18. doi:10.1007/BF00319777.
- 4. Cooper LN, Liberman F, Oja E. A theory for the acquisition and loss of neuron specificity in the visual cortex. Biol Cybern. 1979;33:9–28. doi:10.1007/BF00337414.
- 5. Bienenstock EL, Cooper L, Munro P. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J Neurosci. 1982;2:32–48. doi:10.1523/JNEUROSCI.02-01-00032.1982.
- 6. Shouval H, Intrator N, Cooper L. BCM network develops orientation selectivity and ocular dominance in natural scene environment. Vis Res. 1997;37(23):3339–3342. doi:10.1016/S0042-6989(97)00087-4.
- 7. Intrator N, Cooper L. Objective function formulation of the BCM theory of visual cortical plasticity: statistical connections, stability conditions. Neural Netw. 1992;5:3–17. doi:10.1016/S0893-6080(05)80003-6.
- 8. Bliem B, Mueller-Dahlhaus JFM, Dinse HR, Ziemann U. Homeostatic metaplasticity in human somatosensory cortex. J Cogn Neurosci. 2008;20:1517–1528. doi:10.1162/jocn.2008.20106.
- 9. Castellani GC, Intrator N, Shouval H, Cooper L. Solutions of the BCM learning rule in a network of lateral interacting nonlinear neurons. Netw Comput Neural Syst. 1999;10:111–121. doi:10.1088/0954-898X_10_2_001.
- 10. Cooper LN, Bear MF. The BCM theory of synapse modification at 30: interaction of theory with experiment. Nat Rev Neurosci. 2012;13:798–810. doi:10.1038/nrn3353.
- 11. Munro PW. A model for generalization and specification by a single neuron. Biol Cybern. 1984;51:169–179. doi:10.1007/BF00346138.
- 12. Yeung LC, Shouval HC, Blais BS, Cooper LN. Synaptic homeostasis and input selectivity follow from a calcium-dependent plasticity model. Proc Natl Acad Sci USA. 2004;101(41):14943–14948. doi:10.1073/pnas.0405555101.
- 13. Izhikevich EM, Desai N. Relating STDP to BCM. Neural Comput. 2003;15:1511–1523. doi:10.1162/089976603321891783.
- 14. Gjorgjieva J, Clopath C, Audet J, Pfister J-P. A triplet spike-timing-dependent plasticity model generalizes the Bienenstock–Cooper–Munro rule to higher-order spatiotemporal correlations. Proc Natl Acad Sci USA. 2011;108(48):19383–19388. doi:10.1073/pnas.1105933108.
- 15. Dotan Y, Intrator N. Multimodality exploration by an unsupervised projection pursuit neural network. IEEE Trans Neural Netw. 1998;9:464–472. doi:10.1109/72.668888.
- 16. Intrator N, Gold JI. Three-dimensional object recognition of gray level images: the usefulness of distinguishing features. Neural Comput. 1993;5:61–74. doi:10.1162/neco.1993.5.1.61.
- 17. Bachman CM, Musman S, Luong D, Schultz A. Unsupervised BCM projection pursuit algorithms for classification of simulated radar presentations. Neural Netw. 1994;7:709–728. doi:10.1016/0893-6080(94)90047-7.
- 18. Intrator N. Feature extraction using an unsupervised neural network. Neural Comput. 1992;4:98–107. doi:10.1162/neco.1992.4.1.98.
- 19. Intrator N, Gold JI, Bülthoff HH, Edelman S. Three-dimensional object recognition using an unsupervised neural network: understanding the distinguishing features. In: Proceedings of the 8th Israeli Conference on AICV; 1991.
- 20. Poljovka A, Benuskova L. Pattern classification with the BCM neural network. In: Stopjakova V, editor. Proc. 2nd Electronic Circuits and Systems Conference—ECS'99; 1999. pp. 207–210.
- 21. Turrigiano GG, Nelson SB. Homeostatic plasticity in the developing nervous system. Nat Rev Neurosci. 2004;5(2):97–107. doi:10.1038/nrn1327.
- 22. Dayan P, Abbott L. Theoretical neuroscience: computational and mathematical modeling of neural systems. Cambridge: MIT Press; 2001.
- 23. Field G, Chichilnisky E. Information processing in the primate retina: circuitry and coding. Annu Rev Neurosci. 2007;30:1–30. doi:10.1146/annurev.neuro.30.051606.094252.
- 24. Johansson R, Vallbo AB. Tactile sensory coding in the glabrous skin of the human hand. Trends Neurosci. 1983;6:27–32.
- 25. Pantev C, Okamoto H, Ross B, Stoll W, Ciurlia-Guy E, Kakigi R, Kubo T. Lateral inhibition and habituation of the human auditory cortex. Eur J Neurosci. 2004;19(8):2337–2344. doi:10.1111/j.0953-816X.2004.03296.x.
- 26. Yantis S. Sensation and perception. New York: Worth Publishers; 2013.
- 27. Ermentrout GB, Terman DH. Mathematical foundations of neuroscience. Berlin: Springer; 2010.
- 28. Gjorgjieva J, Drion G, Marder E. Computational implications of biophysical diversity and multiple timescales in neurons and synapses for circuit performance. Curr Opin Neurobiol. 2016;37:44–52. doi:10.1016/j.conb.2015.12.008.
- 29. Zhang W, Linden DJ. The other side of the engram: experience-driven changes in neuronal intrinsic excitability. Nat Rev Neurosci. 2003;4:885–900. doi:10.1038/nrn1248.
- 30. Anirudhan A, Narayanan R. Analogous synaptic plasticity profiles emerge from disparate channel combinations. J Neurosci. 2015;35(11):4691–4705. doi:10.1523/JNEUROSCI.4223-14.2015.
- 31. Clopath C, Gerstner W. Voltage and spike timing interact in STDP—a unified model. Front Synaptic Neurosci. 2010;2. doi:10.3389/fnsyn.2010.00025.
- 32. Yger P, Gilson M. Models of metaplasticity: a review of concepts. Front Comput Neurosci. 2015;9. doi:10.3389/fncom.2015.00138.
- 33. Zenke F, Hennequin G, Gerstner W. Synaptic plasticity in neural networks needs homeostasis with a fast rate detector. PLoS Comput Biol. 2013;9(11). doi:10.1371/journal.pcbi.1003330.
- 34. Toyoizumi T, Kaneko M, Stryker MP, Miller KD. Modeling the dynamic interaction of Hebbian and homeostatic plasticity. Neuron. 2014;84(2):497–510. doi:10.1016/j.neuron.2014.09.036.
- 35. Moldakarimov SB, McClelland JL, Ermentrout GB. A homeostatic rule for inhibitory synapses promotes temporal sharpening and cortical reorganization. Proc Natl Acad Sci USA. 2006;103(44):16526–16531. doi:10.1073/pnas.0607589103.
- 36. Kozloski J, Cecchi G. A theory of loop formation and elimination by spike timing-dependent plasticity. Front Neural Circuits. 2010;4. doi:10.3389/fncir.2010.00007.
- 37. Turrigiano GG, Nelson SB. Hebb and homeostasis in neuronal plasticity. Curr Opin Neurobiol. 2000;10(3):358–364. doi:10.1016/S0959-4388(00)00091-X.
- 38. Miller KD, MacKay DJC. The role of constraints in Hebbian learning. Neural Comput. 1994;6:100–126. doi:10.1162/neco.1994.6.1.100.
- 39. Zenke F, Gerstner W. Hebbian plasticity requires compensatory processes on multiple timescales. Philos Trans R Soc Lond B. 2016;372. doi:10.1098/rstb.2016.0259.