PLOS Computational Biology. 2020 May 8;16(5):e1007835. doi: 10.1371/journal.pcbi.1007835

Autonomous emergence of connectivity assemblies via spike triplet interactions

Lisandro Montangie 1, Christoph Miehl 1,2, Julijana Gjorgjieva 1,2,*
Editor: Brent Doiron
PMCID: PMC7239496  PMID: 32384081

Abstract

Non-random connectivity can emerge without structured external input, driven by activity-dependent mechanisms of synaptic plasticity based on precise spiking patterns. Here we analyze the emergence of global structures in recurrent networks based on a triplet model of spike timing dependent plasticity (STDP), which depends on the interactions of three precisely-timed spikes and can describe plasticity experiments with varying spike frequency better than the classical pair-based STDP rule. We derive synaptic changes arising from correlations up to third order and describe them as the sum of structural motifs, which determine how any spike in the network influences a given synaptic connection through possible connectivity paths. This motif expansion framework reveals novel structural motifs under the triplet STDP rule, which support the formation of bidirectional connections and ultimately the spontaneous emergence of global network structure in the form of self-connected groups of neurons, or assemblies. We propose that under triplet STDP, assembly structure can emerge without the need for externally patterned inputs or the symmetric pair-based STDP rule assumed in previous studies. The emergence of non-random network structure under triplet STDP occurs through internally-generated higher-order correlations, which are ubiquitous in natural stimuli and neuronal spiking activity, and important for coding. We further demonstrate how neuromodulatory mechanisms that modulate the shape of the triplet STDP rule or the synaptic transmission function differentially promote structural motifs underlying the emergence of assemblies, and quantify the differences using graph theoretic measures.

Author summary

Emergent non-random connectivity structures in different brain regions are tightly related to specific patterns of neural activity and support diverse brain functions. For instance, self-connected groups of neurons, known as assemblies, have been proposed to represent functional units in brain circuits and can emerge even without patterned external instruction. Here we investigate the emergence of non-random connectivity in recurrent networks using a particular plasticity rule, triplet STDP, which relies on the interaction of spike triplets and can capture higher-order statistical dependencies in neural activity. We derive the evolution of the synaptic strengths in the network and explore the conditions for the self-organization of connectivity into assemblies. We demonstrate key differences of the triplet STDP rule compared to the classical pair-based rule in terms of how assemblies are formed, including its realistic asymmetric shape and the influence of novel connectivity motifs on network plasticity driven by higher-order correlations. Assembly formation depends on the specific shape of the STDP window and the synaptic transmission function, pointing towards an important role of neuromodulatory signals in the formation of intrinsically generated assemblies.

Introduction

The synaptic wiring between neurons, originally proposed as a mechanism for learning and memory, is sculpted by experience and provides a key link between circuit structure and function [1]. The original formulation of Hebbian plasticity, whereby “cells that fire together, wire together” [2, 3], fostered the concept of ‘cell assemblies’ [4], defined as groups of neurons that are repeatedly co-activated, leading to the strengthening of synaptic connectivity between individual neurons. This suggests that activity-dependent synaptic plasticity, including both long-term potentiation and long-term depression, is a key mechanism for the emergence of assemblies in the organization of neural circuits [57]. These interconnected groups of neurons have become an important target for many theories of neural computation and associative memory [8–11]. Recent technological developments that enable many neurons to be recorded simultaneously have provided much-needed physiological evidence of assembly organization [12–15]. For instance, synaptically connected neurons tend to receive more common input than would be expected by chance [12, 16–18], and cortical pyramidal neurons tend to be more strongly connected to neurons that share their stimulus preference [13, 19, 20], providing evidence for a clustered architecture. It has been proposed that this organization enables the cortex to intrinsically generate reverberating patterns of neural activity when representing different stimulus features [1, 21]. Thus, neuronal assemblies can be interpreted as the building blocks of cortical microcircuits, which are differentially recruited during distinct functions, such as the binding of different features of a sensory stimulus [7, 17, 22]. In addition to cortical circuits, neuronal assemblies have also been observed in the optic tectum (a structure homologous to the superior colliculus in mammals [23]) of the developing zebrafish larva [24–27]. Experiments in sensory-deprived larvae have demonstrated that the basic structure of spontaneous activity and functional connectivity emerges without intact retinal inputs, suggesting that neuronal assemblies are intrinsically generated in the tectum and are not just the product of correlated external inputs [25–27]. This raises the important question of what drives the emergence of these clustered structures, and whether patterned external input is necessary.

To understand the emergence of such non-random connectivity, a growing body of theoretical and computational work has been developed to link connectivity architecture to the coordinated spiking activity of neurons, especially in recurrent networks [28–41]. These studies can be divided into two classes: those that examine the influence of externally structured input on activity-dependent refinement [42–47], and those that investigate the autonomous emergence of non-random connectivity in the absence of patterned external input, purely driven by emergent network interactions [5, 6, 48]. Specifically, assemblies in recurrent networks can be imprinted based on internally-generated network interactions [6] or through rate-based plasticity where inputs with higher firing rates to subsets of neurons strengthen recurrent connections [49, 50]; assemblies can also be initially determined by externally patterned input but maintained by internal correlations [51].

Despite this success, all of these studies have assumed pair-based models of STDP, which induce plasticity based on the precise timing and order of a pair of pre- and postsynaptic spikes [52, 53]. Here, we consider a spike-based learning rule, “the triplet STDP model” [54], which uses sets of three spikes (triplets) to induce plasticity. Specifically, we focus on the ‘minimal’ triplet STDP model, where only potentiation depends on the interval between the pre- and postsynaptic spikes and on the timing of the previous postsynaptic spike. This triplet learning rule has been shown to explain a variety of synaptic plasticity data [55, 56] significantly better than pair-based STDP [54]. We have previously shown a tight correspondence between the triplet STDP rule and the well-known Bienenstock-Cooper-Munro (BCM) synaptic learning rule, which maximizes the selectivity of the postsynaptic neuron and thereby offers a possible explanation for experience-dependent cortical plasticity such as orientation and direction selectivity [57]. In addition, triplet STDP can also induce selectivity for input patterns consisting of up to third-order correlations, here referred to as higher-order correlations (HOCs). HOCs have been experimentally measured in several brain areas [58], and shown to account for a substantial amount of information transfer in sensory cortex [58–61]. HOCs are also important for characterizing the firing of a postsynaptic neuron [62, 63], circuit function and coding [64, 65], and the synchronous firing and the distribution of activity in a neuronal pool [66–69]. Here we investigate the functional significance of such HOCs for shaping recurrent network structure through synaptic plasticity.

First, we investigate how HOCs up to third order affect the development of connectivity in recurrent networks of Poisson spiking neurons in the absence of structured external stimuli, where the stochastic activity of each neuron is described by a mutually exciting Hawkes process [70]. Assuming a slow change of synaptic efficacies and fast spiking dynamics, we develop a formal analytical framework for the evolution of synaptic connections in the network based on the second- and third-order cumulants of spike timing interactions, which arise from assuming an STDP rule governed by pairs and triplets of spikes [54, 55]. The simplified neuronal model allows us to write exact and self-consistent equations for the synaptic change depending on the full network connectivity, by taking into account non-local interactions between different neurons in the network and writing them as a sum of structural motifs of varying orders. We demonstrate differences to the classical pair-based STDP rule [52, 71], which ignores those HOCs, and compare the relative strength of the emergent structural motifs up to third order induced by triplet STDP. Second, we examine the biological conditions which promote the formation of assembly structures of self-connected neurons without externally structured inputs under the triplet STDP rule. We find that this is achieved either by modulating the shape of the STDP function through neuromodulators or by modulating the shape of the excitatory postsynaptic current (EPSC), and we characterize changes in functional connectivity in terms of graph theoretic measures [25–27]. Third, we show that the novel structural motifs, and specifically ‘loop’ motifs, which follow from the triplet STDP rule, are crucial for the spontaneous emergence of assemblies. Finally, we compare them to assemblies generated via correlated external input.

Results

We present two main results: first, we derive a formal analytical framework for the evolution of synaptic weights depending on the second- and third-order cumulants of spike time interactions under the triplet STDP rule by expressing them as a sum of structural motifs; second, we discuss the functional implications of this framework and present the biological conditions which promote the formation of assemblies without external instruction.

Average synaptic modification due to the interaction of pairs and triplets of spikes in recurrent networks

To study the autonomous emergence of assemblies in a recurrent network from a general form of STDP that includes the contribution of pairs and triplets of spikes to synaptic plasticity, we require a minimal theoretical representation of the network with plastic synapses driven by internal correlations in the spike timing statistics. In our model, structure is given by the connectivity matrix W between all excitatory neurons in the network (“all-to-all connectivity”), where the synaptic efficacy element Wij denotes the connection strength between postsynaptic neuron i and presynaptic neuron j. The analytical description of the dynamics in recurrent networks can be dauntingly complex. On the one hand, to rigorously analyze the impact of STDP on the formation of functional structures it is indispensable to take into account the precise timing of action potentials or spikes. Therefore, models of neural activity that are based on rates cannot fulfill this criterion [72]. More elaborate models such as Hodgkin-Huxley models with multiple ion channels [73], and even the simpler spiking leaky integrate-and-fire (LIF) models, are much more accurate in reproducing the spiking dynamics of a population of neurons [74–76]. Although these models are computationally tractable, extracting extensive and exact mathematical results from them remains an elusive task. Under certain conditions of approximately asynchronous firing, the spiking statistics in networks of LIF neurons can be described by a linear theory [29]. Using this approach, here we make approximations for the spiking dynamics of each individual excitatory cell and treat each pre- and postsynaptic spike train as if it follows inhomogeneous Poisson statistics [6, 44, 52, 57].

In this model we assume that the probability of each neuron emitting an action potential at a certain time (the ‘intensity’ or mean activity) is proportional to the weighted sum of the preceding activity of all the other cells in the network and a constant, unstructured external input (Fig 1A). The activity of each neuron in this network is a stochastic process, also referred to as a ‘mutually exciting point process’ or a Hawkes process [70]. The availability of an exact expression for spike correlations in this model allows us to develop a precise theory for the dynamics of the synaptic efficacies governed by different forms of STDP. To prevent runaway excitation, we also consider that the firing of excitatory neurons is modulated by the activity of a population of inhibitory neurons (Fig 1A). We assume that the total inhibitory input to each excitatory neuron is tuned to balance the sum of inhibitory efficacies with the sum of the excitatory ones (Methods) [6, 77–79].
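To make the network model concrete, the following minimal sketch simulates a discrete-time approximation of such a mutually exciting (Hawkes) network with an exponential EPSC kernel and a constant external drive. All parameter values here are illustrative rather than the fitted values used in the paper, and the inhibitory balancing term is omitted for brevity.

```python
# Minimal discrete-time sketch of a mutually exciting (Hawkes) network.
# Assumes an exponential EPSC kernel and a constant external drive mu;
# parameter values are illustrative, not those fitted in the paper.
import numpy as np

rng = np.random.default_rng(0)
N, dt, T = 48, 1e-3, 5.0               # neurons, bin width (s), duration (s)
mu = 10.0                              # constant external drive (Hz), hypothetical
tau_e = 5e-3                           # EPSC time constant (s)
W = rng.uniform(0.0, 0.05, (N, N))     # random initial excitatory weights
np.fill_diagonal(W, 0.0)               # no self-connections

steps = int(T / dt)
x = np.zeros(N)                        # filtered presynaptic activity (EPSC trace)
spikes = np.zeros((steps, N), dtype=bool)

for t in range(steps):
    rate = np.clip(mu + W @ x, 0.0, None)        # conditional intensity (Hz)
    spikes[t] = rng.random(N) < rate * dt        # Poisson spiking in this bin
    x += dt * (-x / tau_e) + spikes[t] / tau_e   # exponential EPSC trace (unit area)

print("mean firing rate (Hz):", spikes.mean() / dt)
```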

Fig 1. Framework set-up.


A. Excitatory neurons (light blue triangles) in the network fire stochastically, while their activity is driven by unstructured external input (red arrows) and modulated by a population of inhibitory neurons (yellow circles). Excitatory connections among the neurons can be weak (gray dashed arrows) or strong (black solid arrows), unidirectional or bidirectional. B. Cumulants of the spike trains (see Eq 1). The second-order cumulants Cij, Cii and Cjj are calculated based on the time difference between a pair of spikes (cross-covariance in green; auto-covariances in orange/red). The third-order cumulant Kij is calculated based on the time differences between three spikes (purple). The spike triplets can be two post- and one presynaptic spikes, or one post- and two presynaptic spikes. The time differences are: τ1 between a presynaptic spike and a postsynaptic spike, τ2 between different postsynaptic spikes and τ3 between different presynaptic spikes. C. STDP-induced plasticity by pairs and triplets of spikes. Left: An example of a classical pair-based STDP rule, with a learning window denoted by L2. Potentiation is triggered by a postsynaptic spike following a presynaptic spike (τ1 = tpost − tpre > 0), whereas if a presynaptic spike follows a postsynaptic spike (τ1 = tpost − tpre < 0), depression is induced. The total potentiation (depression) is given by the red (blue) area under the curve. Right: Examples of triplet STDP rules denoted by L3,y and L3,x. Potentiation (red) and depression (blue) are given by triplets of spikes: post-pre-post with a time difference τ2 = t′post − tpost, and pre-post-pre with a time difference τ3 = t′pre − tpre, respectively. D. Minimal triplet STDP rule where potentiation depends on triplets of spikes (L3) and depression on pairs of spikes (L2).

Given the connectivity matrix W and assuming a slow learning rate (much slower than the dynamics of neural activity), the rate of change of the synaptic efficacy Ẇij between postsynaptic neuron i and presynaptic neuron j can be expressed in terms of products of the time-dependent cumulants of different orders and the corresponding STDP functions (Methods). Specifically, we consider STDP learning rules where plasticity depends on the timing and order of pairs and triplets of spikes, referred to as pair-based and triplet STDP. Initially, we make no assumptions about the shape of these learning rules, keeping the framework general. The sign and magnitude of the net weight modification depend on the time interval between the firing of the pre- and postsynaptic neurons, and also on the relative spike times of individual pre- and postsynaptic neurons (Fig 1B). The exact expression for the evolution of the average (denoted by 〈⋅〉) synaptic efficacy in the recurrent network due to STDP is

$$\begin{aligned}
\langle \dot{W}_{ij}^{\mathrm{STDP}} \rangle ={}& \int \left( C_{ij}(\tau_1) + r_i r_j \right) L_2(\tau_1)\, d\tau_1 \\
&+ \iint \left( K_{ij}(\tau_1,\tau_2) + r_i \left( C_{ij}(\tau_1) + C_{ij}(\tau_2 - \tau_1) \right) + r_j C_{ii}(\tau_2) + r_i^2 r_j \right) L_{3,y}(\tau_1,\tau_2)\, d\tau_1\, d\tau_2 \\
&+ \iint \left( K_{ij}(\tau_1,\tau_3) + r_j \left( C_{ij}(\tau_1) + C_{ij}(\tau_3 - \tau_1) \right) + r_i C_{jj}(\tau_3) + r_i r_j^2 \right) L_{3,x}(\tau_1,\tau_3)\, d\tau_1\, d\tau_3. \quad (1)
\end{aligned}$$

Here ri and rj denote the mean firing rates of neurons i and j, respectively; Cij is the cross-covariance between neuron i and neuron j, with Cii and Cjj being the auto-covariances (note that all of these covariance terms, Cij, Cii and Cjj, make up the second-order cumulant); and Kij is the third-order cumulant between neuron i and neuron j. These quantities represent internal (i.e. not driven by external input) correlations in the network and are calculated as functions of the excitatory postsynaptic current (EPSC), assumed to be identical for every pair of neurons. Both the second-order cumulants C and the third-order cumulants K are probability densities of pairs and triplets of spikes separated by the corresponding time lags τ (Fig 1B). τ1 is the time difference between a spike emitted by the presynaptic neuron and one from the postsynaptic neuron, whereas τ2 and τ3 are the time intervals between different spikes of the postsynaptic neuron and of the presynaptic neuron, respectively. The cumulant Kij is calculated for both ‘post-pre-post’ and ‘pre-post-pre’ spike triplets and therefore depends on combinations of τ1 and τ2 or τ3, according to each case.
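For intuition about the quantities entering Eq 1, the sketch below estimates the cross-covariance Cij(τ1) empirically from binned spike trains. This is a generic estimator for illustration only; the paper computes these cumulants analytically from the Hawkes model (Methods).

```python
# Sketch: empirical estimate of the cross-covariance C_ij(tau1) of Eq 1 from
# binned spike trains (boolean arrays with bin width dt). Illustrative only;
# the analytical expressions in the paper follow from the Hawkes model.
import numpy as np

def cross_covariance(pre, post, dt, max_lag_bins):
    """Return lags tau1 = t_post - t_pre and the estimated C_ij(tau1)."""
    r_pre, r_post = pre.mean() / dt, post.mean() / dt     # mean rates (Hz)
    lags = np.arange(-max_lag_bins, max_lag_bins + 1)
    C = np.empty(lags.size)
    for n, lag in enumerate(lags):
        shifted = np.roll(post, -lag)                      # post activity at t + lag*dt
        C[n] = (pre * shifted).mean() / dt**2 - r_pre * r_post
    return lags * dt, C
```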

The STDP functions that describe how potentiation or depression depend on the spike timing intervals are given by L2 for pairs of spikes, and L3,x and L3,y for triplets of spikes. The sub-indices x and y correspond to the triplet sets ‘pre-post-pre’ and ‘post-pre-post,’ respectively. While Eq 1 can be calculated for any shape of the STDP function that depends on pairs and triplets of spikes, an illustrative example for these learning rules, commonly used in other studies based on fits to experimental data [54, 55, 71], is given in Fig 1C.

The average synaptic efficacy change (Eq 1) is sufficient to describe the plasticity dynamics when the learning rate is small relative to the spiking dynamics, and noise in the STDP dynamics, arising from random fluctuations, is averaged out. Furthermore, Eq 1 is combined with heterosynaptic competition [80] to restrict the number of connections a neuron can make and to prevent the dominance of a few (Methods). For the sake of simplicity, in the next steps we consider that triplets of spikes contribute only to potentiation, and thus L3,y(τ1, τ2) = L3(τ1, τ2) and L3,x(τ1, τ3) = 0 for all τ1 and τ3, in agreement with the so-called ‘minimal’ triplet STDP rule [54] (Fig 1D). Nevertheless, if spike triplets were also taken into account for depression, the derivation would be identical, with corresponding modifications to the variables involved. We can rewrite Eq (1) in the Fourier domain as

$$\begin{aligned}
\langle \dot{W}_{ij}^{\mathrm{STDP}} \rangle ={}& \underbrace{r_i r_j \left( \tilde{L}_2(0) + r_i \tilde{L}_3(0,0) \right)}_{\text{Independent spikes}} \\
&+ \underbrace{\iint \left[ \tilde{C}_{ij}(\omega_1) \tilde{L}_2(\omega_1) \delta(\omega_2) + r_i \left( \tilde{C}_{ij}(\omega_1)\delta(\omega_2) + \tilde{C}_{ij}(\omega_2)\delta(\omega_1+\omega_2) \right) \tilde{L}_3(\omega_1,\omega_2) \right] d\omega_1\, d\omega_2}_{\text{Pre-post pairwise correlations } (*)} \\
&+ \underbrace{\iint r_j\, \tilde{C}_{ii}(\omega_2)\, \tilde{L}_3(\omega_1,\omega_2)\, \delta(\omega_1)\, d\omega_1\, d\omega_2}_{\text{Post-post pairwise correlations } (**)} \\
&+ \underbrace{\iint \tilde{K}_{ij}(\omega_1,\omega_2)\, \tilde{L}_3(\omega_1,\omega_2)\, d\omega_1\, d\omega_2}_{\text{Post-pre-post triplet correlations } (***)} \quad (2)
\end{aligned}$$

where we use the notation f̃ for the Fourier transform of a function f and δ is the Dirac delta function. It should be noted that Eq 2 is not the Fourier transform of Eq 1 but rather an equivalent expression of the latter. This comes about because we can express the integral of the product of two functions as the convolution of the Fourier transforms of those functions, evaluated at zero.

This formulation of the previous equation allows us to clearly break down the contribution of spike interactions of different orders to the average synaptic efficacy in the recurrent network. The first term of Eq 2 considers the change in synaptic efficacy that is obtained from independent spiking and thus depends on the first-order cumulant (the mean firing rates) of the activity of both the pre- and postsynaptic neurons, rj and ri, respectively. As firing rates increase, ‘chance’ contributions to plasticity can occur. The second and third terms account for the probability of observing changes to the mean synaptic efficacy due to pairwise correlations in the pre- and postsynaptic neurons. Cij refers to the family of probabilities that generate pairwise cross-correlations (second-order cumulant) between neurons i and j, depending on spikes of other neurons in the network (Fig 1B, green). Accordingly, Cii includes the family of probabilities that generate pairwise auto-correlations in the same neuron i due to the spiking activity of all other neurons in the network (Fig 1B, orange). Therefore, the second (*) and third (**) terms describe the total contribution of correlated spike pairs to plasticity through the pair-based STDP rule L2 (Fig 1C, left) and the triplet STDP rule L3 (Fig 1C, right). In the case of the latter, the first-order cumulant of the uncorrelated single postsynaptic neuron’s spikes, ri, is also included in the second term (*), and the first-order cumulant of the uncorrelated single presynaptic neuron’s spikes, rj, in the third term (**). The fourth term (***) describes the total contribution of correlated spike triplets (third-order cumulant) to plasticity. Thus, Kij includes the family of probabilities for third-order correlations, where the relative spike timing interacts with the triplet STDP learning window L3 to induce plasticity (Fig 1B, purple and Fig 1C, right).

In conclusion, we have derived an exact analytical expression for the average change in synaptic efficacy due to firing rates, pairwise and triplet correlations under a general STDP rule that includes pairwise and triplet spike interactions. The resulting cumulants of up to third order can depend in non-trivial ways on the full recurrent connectivity in the network.

Novel structural motifs emerge under triplet STDP compared to pair-based STDP

The calculation of the cumulants involved in the equation for the average weight dynamics (Eq 2) depends on the full network connectivity. Therefore, the second- and third-order cumulants in Eq 2 can be written as a sum over contributions from different structural motifs, following the convention of [6]. These structural motifs enumerate all possible connectivity paths along which a spike from a source neuron k travels to the postsynaptic neuron i or the presynaptic neuron j, and as a consequence affects the synaptic weight Wij. Thus, to calculate each term in Eq 2 we break down the second- and third-order cumulants Cij, Cii and Kij into expressions that include the contribution of every spike propagated in the network through existing synaptic connections, taking into account the full recurrence in the network (Methods). These expressions consist of products of the corresponding synaptic efficacies along the two paths to the presynaptic and postsynaptic neuron, the firing rate of the source neuron, and the motif coefficient functions M, which depend on the number of synapses along the two paths, the EPSC function E, and the STDP learning rules L2 and L3. The probability that neurons i and j jointly fire a spike is transiently modulated whenever a neuron anywhere in the network produces a spike. We can write the pairwise cross-covariance from Eq 2 as

$$(*) = \sum_{k=1}^{N} r_k \sum_{\alpha,\beta} (\mathbf{W}^{\alpha})_{ik} (\mathbf{W}^{\beta})_{jk} \left( M^{\mathrm{pair}}_{\alpha,\beta} + r_i\, M^{\mathrm{trip}}_{\alpha,\beta} \right) \quad (3)$$

which combines the contribution of structural motifs from the pair-based and triplet STDP rules to a change in the connectivity matrix W. The expression consists of two sums that provide an intuitive description of the contribution of the pairwise cross-covariance Cij between neurons i and j to plasticity of the connection Wij. The first sum takes into account all spiking neurons in the network, while the second sum takes into account all possible ‘paths’ by which spikes originating from a ‘source’ neuron k affect the cross-covariance Cij. Specifically, α and β constitute the ‘path lengths’ of synapses from source neuron k to the postsynaptic neuron i and the presynaptic neuron j, respectively (Fig 2A; see also [6]). We refer to the total path length of a motif, α + β, as the ‘order’ of the motif.
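As a computational illustration, the sketch below evaluates a truncated version of the motif expansion in Eq 3 using matrix powers of the connectivity matrix. The motif coefficients are placeholders here: in the paper they are computed from the EPSC and STDP windows (Methods).

```python
# Sketch of a truncated evaluation of Eq 3. M_pair[a][b] and M_trip[a][b] are
# assumed to be precomputed motif coefficients (placeholders); W is the N x N
# connectivity matrix and rates the vector of mean firing rates.
import numpy as np

def cross_cov_contribution(W, rates, M_pair, M_trip, max_order=3):
    """Return the N x N matrix of Eq 3 contributions to the weight change."""
    N = W.shape[0]
    # precompute matrix powers W^a for a = 0 .. max_order
    powers = [np.eye(N)]
    for _ in range(max_order):
        powers.append(powers[-1] @ W)
    dW = np.zeros((N, N))
    for a in range(max_order + 1):
        for b in range(max_order + 1 - a):   # keep total motif order a + b <= max_order
            # sum_k r_k (W^a)_{ik} (W^b)_{jk} = (W^a diag(r) (W^b)^T)_{ij}
            S = powers[a] @ np.diag(rates) @ powers[b].T
            dW += S * (M_pair[a][b] + np.outer(rates, np.ones(N)) * M_trip[a][b])
    return dW
```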

Fig 2. Second-order cumulant contributions to plasticity.


A. The cross-covariance Cij between the presynaptic neuron j and the postsynaptic neuron i is obtained by summing over all the possible α- and β-paths from every possible source neuron k in the network. Each path is calculated via the corresponding weights in the connectivity matrix and the EPSC function (see Eq 3). B. Same as A but for the auto-covariance Cii of the postsynaptic neuron i (see Eq 4). In this case, γ is the second index to sum over the path from the source neuron k to the postsynaptic neuron i. It should be noted that the main difference between the α- and γ-path is given by the time dependence of the EPSC function, here represented in the Fourier domain for convenience.

The contribution of the pair-based STDP rule includes the motif coefficient functions, Mα,βpair, which are calculated in the Fourier domain (Eq 29 in Methods). The pairwise correlations between i and j also contribute to plasticity of Wij based on the triplet STDP rule through the motif coefficient functions Mα,βtrip (Eq 30 in Methods). Examples of some motifs common for both the pair-based and the triplet STDP rule are provided in Fig 3A. Their contribution to plasticity through the EPSC function E and the STDP rules L2 and L3 is illustrated in Fig 3B.

Fig 3. Structural motifs in the network under pair-based and triplet STDP.


A. Examples of structural motifs common for both the pair-based and triplet STDP framework. Here α and β constitute the path lengths of synapses from the source neuron to the postsynaptic neuron i and the presynaptic neuron j. α = 1, β = 0: Presynaptic neuron j projects to the postsynaptic neuron i. α = 0, β = 1: Postsynaptic neuron i projects to the presynaptic neuron j. α = 1, β = 1: Common input from source neuron k to presynaptic neuron j and postsynaptic neuron i. α = 2, β = 0: Presynaptic neuron j projects to the postsynaptic neuron i through another neuron k in the network. B. Illustration of the calculation of the common input motif with α = 1 and β = 1 framed in purple in A (there are also additional terms which are not illustrated). The motif coefficients Mα=1,β=1 (right) are calculated as the total area under the curve resulting from the product of the convolution of the EPSC function E (left) and the STDP functions (pair-based L2 and triplet L3, middle). C. Examples of structural motifs found only in the triplet STDP framework, where γ denotes the time-delayed path length from the source neuron to the postsynaptic neuron i. α = 1, γ = 1: Source neuron k projects twice to postsynaptic neuron i with a different time delay. α = 2, γ = 0: Feedback loop through another neuron k in the network (source and projecting neuron are the postsynaptic neuron i). α = 1, β = 1, γ = 1: Source neuron k projects to the presynaptic neuron j and postsynaptic neuron i via all the three possible paths. D. Illustration of the calculation of the motif with α = 1 and γ = 1 for the triplet STDP rule framed in purple in C, compare to B.

In addition to the α and β path lengths, to derive the contribution of the triplet STDP rule to the average change in synaptic efficacy, we also introduced the γ-path so that now motifs have order α + β + γ. γ is the synapse path length from the source neuron k to the postsynaptic neuron i, including a time delay relative to the α path from k to i, to account for the second postsynaptic spike of the triplet (Fig 3C and 3D). Thus, for the auto-covariance term in Eq 2, we obtain (Fig 2B)

$$(**) = \sum_{k=1}^{N} r_k \sum_{\alpha,\gamma} (\mathbf{W}^{\alpha})_{ik} (\mathbf{W}^{\gamma})_{ik}\, r_j\, M^{\mathrm{trip}}_{\alpha,\gamma} \quad (4)$$

where the motif coefficient function involving the triplet STDP rule is given in the Methods (Eq 31).

For third-order interactions, however, it is possible that the paths by which spikes are propagated branch out from a neuron other than the source neuron. Therefore, the third-order cumulant Kij (Eq 2) is broken down into four sums:

$$\begin{aligned}
(***) ={}& \underbrace{\sum_{k=1}^{N} r_k \sum_{\alpha,\beta,\gamma} (\mathbf{W}^{\alpha})_{ik} (\mathbf{W}^{\gamma})_{ik} (\mathbf{W}^{\beta})_{jk}\, M^{\mathrm{trip}}_{\alpha,\beta,\gamma}}_{\text{No branching (straight paths)}} \\
&+ \underbrace{\sum_{k,l=1}^{N} r_k \sum_{\alpha,\beta,\gamma,\zeta} (\mathbf{W}^{\zeta})_{lk} (\mathbf{W}^{\alpha})_{ik} (\mathbf{W}^{\beta})_{jl} (\mathbf{W}^{\gamma})_{il}\, M^{\mathrm{trip}}_{(\alpha,\zeta),\beta,\gamma} + \sum_{k,l=1}^{N} r_k \sum_{\alpha,\beta,\gamma,\zeta} (\mathbf{W}^{\zeta})_{lk} (\mathbf{W}^{\alpha})_{il} (\mathbf{W}^{\beta})_{jk} (\mathbf{W}^{\gamma})_{il}\, M^{\mathrm{trip}}_{\alpha,(\beta,\zeta),\gamma} + \sum_{k,l=1}^{N} r_k \sum_{\alpha,\beta,\gamma,\zeta} (\mathbf{W}^{\zeta})_{lk} (\mathbf{W}^{\alpha})_{il} (\mathbf{W}^{\beta})_{jl} (\mathbf{W}^{\gamma})_{ik}\, M^{\mathrm{trip}}_{\alpha,\beta,(\gamma,\zeta)}.}_{\text{Branched paths}} \quad (5)
\end{aligned}$$

The first term in Eq 5 sums over the paths to the presynaptic neuron j and postsynaptic neuron i from a source neuron k in the network that do not branch out. In other words, it considers that the ‘distance’ to each respective spike of the triplet is given by α, β and γ (Fig 4A). The remaining terms include the sum over possible branches in the network ‘tree’: ζ ≥ 1 is the synapse path length from the source neuron k to the neuron l that is the branching point (Fig 4B–4D). It should be noted that the branched paths all have a total path length of at least four (i.e. α + β + γ + ζ ≥ 4) so that the motif order is at minimum four, since at least one synapse must be taken into account before the splitting of the path. The corresponding motif coefficients for the ‘straight’ triplet motif (Fig 4A, see Eq 32), and for the ‘branching’ motifs (Fig 4B–4D, see Eqs 33, 34 and 35) are provided in the Methods.

Fig 4. Third-order cumulant contributions to plasticity can be broken down into four terms.


A. The first term contains all the α-, β- and γ-paths originating from the source neuron k to the spiking neurons i and j. B-D. The other terms take into account the possibility of an intermediate neuron l that acts as a new source neuron for two of the paths. These are referred to as ‘branched paths’, and the path length from the source neuron k to the intermediate neuron l is denoted with ζ. The branching describes the individual terms in Eq 5.

This analysis reveals novel motifs in the triplet STDP rule which have the potential to promote particular connectivity structures that are not possible with pair-based STDP [6] (Fig 3C). These include motifs which directly exclude the presynaptic neuron j but can still impact the synaptic weight Wij (Fig 3C, left and middle). This can be achieved, for example, through an additional neuron k that does not directly affect the weight Wij but projects to the postsynaptic neuron i through the synaptic weight Wik (Fig 3C, left and middle). Because these motifs exclude the presynaptic neuron j, they do not impact the pairwise cross-covariance term Cij and have no influence on the weight Wij through pair-based STDP. For example, in the case when α = 2 and γ = 0 (Fig 3C, middle), the postsynaptic neuron i is both the source neuron and the neuron involved in the path with the additional neuron k. We call this path involving the synaptic efficacies Wik and Wki a ‘loop’. These loops involve a different neuron in addition to the pre- and postsynaptic neurons of the weight Wij, and are a unique feature of incorporating spike triplets in the STDP rule. Loops include a neuron as both the source and the target for the spike in the corresponding path, so that a ‘loop’ closes on itself. The direction of the edges is relevant for this definition. We propose that motifs with these ‘loop’ characteristics promote the formation of connections between clusters of neurons, and therefore assemblies.
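For a concrete handle on the ‘loop’ motif, the short sketch below measures its strength for each postsynaptic neuron directly from a connectivity matrix; the threshold used to call a connection ‘strong’ is an illustrative choice, not a value from the paper.

```python
# Sketch: quantifying the 'loop' motif (alpha = 2, gamma = 0), i.e. paths
# i -> k -> i through any other neuron k. The threshold below is illustrative.
import numpy as np

def loop_motif_strength(W):
    """Sum over k of W_ik * W_ki for every neuron i (the diagonal of W @ W)."""
    return np.diag(W @ W)

def count_strong_reciprocal_pairs(W, threshold=0.08):
    """Count i <-> k pairs whose connections both exceed the threshold."""
    strong = W > threshold
    np.fill_diagonal(strong, False)
    return int(np.sum(strong & strong.T) // 2)
```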

To illustrate motifs of different orders and their relationship to cumulants of different orders, we depict all motifs up to third order arising from the expansion of the second- and the third-order cumulants (Fig 5). While it is clear that the full network connectivity through motifs of different orders from the cross-covariance Cij influences plasticity under pair-based, as well as triplet STDP (Fig 5, first row), we also reveal novel motifs from the auto-covariance Cii and the third-order cumulant Kij that influence plasticity uniquely under triplet STDP (Fig 5, second and third row).

Fig 5. Second- and third-order cumulants can be described in terms of structural motifs that contribute to weight change.


All motifs up to third order as they arise from the cross-covariance Cij (top row and Eq 3), the auto-covariance Cii (middle row and Eq 4) (both Cij and Cii together represent the second-order cumulant) or the third-order cumulant Kij (bottom row and Eq 5). The gray boxes indicate the ‘loop’ motifs. The novel motifs which follow from the triplet STDP rule are those that include the path γ (second and third row).

Taken together, our motif expansion framework reveals novel structural motifs under the triplet STDP rule that have the potential to form assemblies without structured external input. We next investigated the role of the different structural motifs (specifically the ‘loop’ motifs) on the emergence of assemblies under triplet STDP.

Modulation of the triplet STDP rule promotes the autonomous emergence of assemblies

So far, we considered general STDP rules that depend on the precise timing between pairs and triplets of spikes, without taking into account the exact dependence of potentiation or depression on these spikes. To further study the complex relationship between plasticity and network correlations, we considered a particular biologically identified STDP rule that relies on third-order interactions (Methods; Fig 1C). This rule has an asymmetric shape around the time lag of 0 (where pre- and postsynaptic spikes are coincident), similar to the classical pair-based STDP rule [71]. However, while synaptic depression is induced by the relative timing of pairs of presynaptic and postsynaptic spikes, the minimal triplet STDP model uses sets of three spikes to induce potentiation: the amount of potentiation depends on the timing between pre- and postsynaptic spike pairs and, in addition, on the timing between the current and the previous postsynaptic spike (Fig 1D). This minimal model captures experimental data in which the pairing frequency of pre- and postsynaptic spikes was varied equally well as a full model that also uses triplets of spikes for depression [54].
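For intuition about how this rule operates spike by spike, the sketch below implements a standard online, trace-based version of the minimal triplet rule: depression from pre-post pairs and potentiation from post-pre-post triplets. The time constants follow Table 1, but the potentiation amplitude is a hypothetical value, and this trace-based formulation is an assumption made here for illustration rather than the exact implementation used in the paper.

```python
# Trace-based sketch of the 'minimal' triplet STDP rule. Time constants follow
# Table 1; A3_plus is an illustrative value, not a fitted parameter.
import numpy as np

tau_plus, tau_minus, tau_y = 16.8e-3, 33.7e-3, 114e-3   # s
A_minus, A3_plus = 0.01, 6.5e-3                         # A3_plus is hypothetical
dt = 1e-3                                               # bin width (s)

def run_minimal_triplet_stdp(pre_spikes, post_spikes, w=0.1):
    """pre_spikes, post_spikes: boolean arrays of equal length (one bin = dt)."""
    r1 = o1 = o2 = 0.0            # pre trace, fast post trace, slow post trace
    for pre, post in zip(pre_spikes, post_spikes):
        if pre:                   # depression: pre spike x recent postsynaptic trace
            w -= A_minus * o1
        if post:                  # potentiation: post spike x pre trace x slow post trace
            w += A3_plus * r1 * o2   # o2 still holds the value from the previous post spike
        # decay the traces and add the current spikes
        r1 += -dt * r1 / tau_plus + (1.0 if pre else 0.0)
        o1 += -dt * o1 / tau_minus + (1.0 if post else 0.0)
        o2 += -dt * o2 / tau_y + (1.0 if post else 0.0)
    return w
```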

Implementations of classical Hebbian learning, such as STDP, use joint pre- and postsynaptic activity to induce potentiation and depression, while neglecting other potential factors such as heterosynaptic plasticity [81] or the location of synaptic inputs on the dendritic tree [82]. However, recent experimental studies have highlighted an important role of neuromodulators in regulating plasticity across the brain [83–86], as they convey information about novelty or reward. Indeed, neuromodulators such as dopamine, acetylcholine and noradrenaline, but also brain-derived neurotrophic factor (BDNF) and gamma-aminobutyric acid (GABA), can predominantly act via two mechanisms: by reshaping the learning window for STDP or by regulating neuronal activity at the level of synaptic transmission [84, 86]. Therefore, we next investigated how neuromodulation of synaptic plasticity affects recurrently connected networks, considering that pairwise and triplet spike interactions determine plasticity. We assume that the shape of the STDP function can be modulated via the modulation parameter η, which preserves the overall level of depression by trading off the depression learning rate A and the depression time constant τ (Methods; Fig 6A). Such a modification of the learning rule has been observed in the lateral amygdala due to the action of dopamine via D2 receptors [85, 87], or in rat visual cortex slices with the activation of both the noradrenaline pathway through β-adrenergic receptors and the acetylcholine pathway through M1-muscarinic receptors [84, 86, 88]. An analogous modulation parameter could be included for potentiation.
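The sketch below illustrates one way such an η-modulation can preserve the overall level of depression. Here we assume the simplest trade-off, amplitude scaled by 1/η and time constant scaled by η, so that the area under the depression window stays fixed; the exact parameterization used in the paper is specified in the Methods.

```python
# Sketch of an eta-modulated depression window with conserved total depression.
# The 1/eta vs. eta trade-off is an assumption for illustration; baseline
# values are taken from Table 1.
import numpy as np

A_minus, tau_minus = 0.01, 33.7e-3   # baseline depression amplitude and time constant

def depression_window(tau1, eta=1.0):
    """L2 for tau1 = t_post - t_pre < 0 (pre after post), eta-modulated."""
    A = A_minus / eta
    tau = eta * tau_minus
    return np.where(tau1 < 0, -A * np.exp(tau1 / tau), 0.0)

# the area under the window is independent of eta (~ -A_minus * tau_minus)
t = np.linspace(-5.0, 0.0, 500001)
bin_width = t[1] - t[0]
for eta in (1.0, 5.0, 13.0):
    print(eta, depression_window(t, eta).sum() * bin_width)
```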

Fig 6. Spontaneous emergence of assemblies via modulation of the triplet STDP rule.


A. The shape of the STDP function changes as a function of the modulation parameter η, which preserves the overall level of depression by trading off the depression learning rate and the depression time constant. B. Motif coefficients as the modulation parameter η increases. Points of interest given by the crossovers of the strength of particular motifs are indicated by a small arrow. Inset: Amplified scale around zero. Motif coefficients including γ paths are not illustrated, since they are always constant and positive in η and do not provide a meaningful comparison to the other motifs. C. Top: Examples of connectivity matrices obtained with different values of η at steady state. Unidirectional connections are shown in black, bidirectional connections in orange. Matrices are reordered using the k-means clustering algorithm (see Methods). Bottom: Mean fraction of unidirectional and bidirectional connections for 100 trials with different initial synaptic efficacies as a function of η. Error bars represent the standard error of the mean. D. Graphs of the connectivity matrices in C. E. Averaged connectivity matrices over 100 trials at steady state. Note that the tighter clusters emerging near the edges of the matrices are the result of the clustering algorithm but do not affect the quantification of connectivity.

To determine contributions to plasticity arising due to internal network correlations and not just differences in neuronal firing rates [5], we consider the case in which the plasticity rule is balanced, such that L̃2(0) + ri L̃3(0,0) = 0. We use this condition to calculate all motif coefficients, Mα,β, that arise from the cross-covariance Cij (Eqs 47–56 in Methods). We consider only motifs up to third order in the evolution of the weights (Eq 2), since higher-than-third-order motif contributions are negligible (S1 Fig). Thus, we no longer include the branched path motifs of Eq 5, as they are higher-than-third-order motifs (Fig 4B–4D). This leaves us with a handful of motifs which arise from the second-order cumulant (consisting of the cross-covariance Cij and the auto-covariance Cii) and the third-order cumulant Kij (Methods; Fig 5). This simplification allows us to study the spontaneous emergence of assemblies under the triplet STDP rule based on both the triplet rule contributions to the cross-covariance Cij (Eq 3, Fig 5, top row) and the influence of the novel branching structures that follow from the auto-covariance Cii (Eq 4, Fig 5, second row, including the loop motifs in the gray boxes) and the third-order cumulant Kij (Eq 5, Fig 5, third row).

To systematically study how the dependence of these up-to-third-order motif coefficients on the shape of the STDP rule affects connectivity structure in the network, we visualized the connectivity matrices obtained by numerically integrating the motif expansion up to third order (Eqs 42–46), using experimentally-fitted parameters for the triplet STDP rule and the EPSC function (Table 1). Specifically, we investigated the emergence of global network structures, or assemblies, as a function of the modulation parameter η. This parameter has a direct influence on the motifs which follow from the cross-covariance Cij (Fig 5) and on the LTD window of the minimal triplet STDP rule (see Eqs 49–58 and S2 Text). A key requirement for the emergence of assemblies is the formation of bidirectional or reciprocal connections among groups of neurons. We compare the reciprocal connections of the first-order motif contributions to gain intuition:

$$\begin{aligned}
\langle \dot{W}_{ij}^{\mathrm{STDP}} \rangle^{(1)} &= r_j W_{ij} M_{1,0} + r_i W_{ji} M_{0,1}, \\
\langle \dot{W}_{ji}^{\mathrm{STDP}} \rangle^{(1)} &= r_i W_{ji} M_{1,0} + r_j W_{ij} M_{0,1}. \quad (6)
\end{aligned}$$

Since in the triplet STDP rule M1,0 > 0 (Fig 6B, red), the two reciprocal connections compete if M0,1 < 0 and jointly potentiate if M0,1 > 0. Therefore, the sign of the motif coefficient M0,1, which depends on the contribution from the triplet STDP rule, determines the formation of bidirectional connections. Indeed, increasing η supports the formation of bidirectional connections (Fig 6C) as the motif coefficient M0,1 changes sign (Fig 6B, blue, see inset). In contrast, as previously shown, the classical pair-based STDP rule is unable to support the formation of assemblies and bidirectional connections due to its asymmetric shape [89, 90], although under certain conditions (dominant potentiation) it can promote bidirectional connections [51, 91]. Under the asymmetric pair-based STDP rule, M1,0 > 0 and M0,1 < 0 result in competition between the two reciprocal connections. To autonomously generate self-connected assemblies without structured network input requires a symmetric pair-based STDP rule (which is not biologically motivated) and a sufficiently large synaptic latency [6]. In this case, the prominence of the common input motif driven by the M1,1 motif coefficient over all other motif coefficients in the network supports assembly formation [6].
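The competitive versus cooperative behavior described by Eq 6 can be reproduced with a few lines of code. In the sketch below, the values of M1,0, M0,1 and the firing rates are illustrative; in particular, the competitive case uses |M0,1| > M1,0 so that the effect is visible without the heterosynaptic competition present in the full model.

```python
# Sketch integrating the first-order motif dynamics of Eq 6 for one pair of
# reciprocal weights. Coefficient and rate values are illustrative only.
import numpy as np

def simulate_pair(M10, M01, r_i=5.0, r_j=5.0, w_ij=0.05, w_ji=0.06,
                  dt=0.01, steps=2000, w_max=0.17):
    for _ in range(steps):
        dwij = r_j * w_ij * M10 + r_i * w_ji * M01
        dwji = r_i * w_ji * M10 + r_j * w_ij * M01
        w_ij = np.clip(w_ij + dt * dwij, 0.0, w_max)
        w_ji = np.clip(w_ji + dt * dwji, 0.0, w_max)
    return w_ij, w_ji

print(simulate_pair(M10=0.02, M01=-0.03))   # competition: the weaker weight decays to zero
print(simulate_pair(M10=0.02, M01=+0.01))   # cooperation: both weights potentiate
```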

Table 1. Parameter values for figures.

⋆ denotes that values are provided in the figures.

Symbol Description Fig 6 Fig 7 Fig 8 Fig 9 Fig 10 Fig 11
N Number of neurons 48
μ External input firing rate 150 Hz
wmax Upper bound for each individual weight 0.17
Wmax Upper bound for total row/ column weight 0.85
A Depression learning rate 0.01
τ Depression time constant 33.7 ms [54]
τ+ Potentiation time constant 16.8 ms [54]
τy Second potentiation time constant 114 ms [54]
η Depression modulation parameter 1
τε First membrane time constant 5 ms
τι Second membrane time constant 5 ms 5 ms
ν Scaling parameter of learning rate 3.5 ×10−4
ψ Heterosynaptic competition scaling parameter 0.7

Under the triplet STDP rule, small increases in η increase the motif coefficient M1,1, resulting in the formation of bidirectional connections and assemblies, similarly to the symmetric pair-based STDP rule. However, despite its asymmetric shape, the triplet STDP rule can robustly generate bidirectional connections and assemblies upon further increases in η, even when the M1,1 motif coefficient has already saturated and other motif coefficients dominate (Fig 6C–6E). This is because higher-order structural motifs also contribute to the formation of bidirectional connections and assemblies. To understand this, we consider the contributions of the feedforward motif coefficients (the motifs for which the α-path is longer than the β-path: M1,0, M2,1, M2,0 and M3,0) and the reciprocal motif coefficients (where the β-path is longer than the α-path: M0,1, M1,2, M0,2 and M0,3). Given the asymmetry of triplet STDP, the feedforward motif coefficients are stronger. The reciprocal motifs M0,1, M1,2, M0,2 and M0,3 play an important role in the formation of bidirectional connections, as they change sign from negative to positive with increasing η (Fig 6B). A positive contribution from all motifs supports the robust formation of bidirectional connections in the network, as the competition between reciprocal connections decreases. Together with the strong common input motif driven by the M1,1 motif coefficient, this leads to the robust emergence of assemblies. In this scenario, η controls the competition between feedforward (Wji) and reciprocal connections (Wij), with large η enabling the potentiation of both. This is not possible under the classical asymmetric pair-based STDP rule, as previously discussed.

In summary, we find that the spontaneous formation of self-connected assemblies depends on the modulation parameter η, which influences most of the motifs arising from the cross-covariance Cij. Furthermore, self-connected assemblies can form under triplet STDP even when motifs other than the common input motif M1,1 dominate. This occurs despite the asymmetric shape of the triplet STDP rule, in contrast to pair-based STDP, which requires a symmetric shape to promote M1,1. Importantly, the dependence of assembly formation on the specific form of the STDP window points towards an important role of neuromodulatory signals in the formation of intrinsically generated assemblies.

Characterizing emergent assembly structures

To determine the conditions on the learning rule and EPSC properties for the emergence of self-connected assemblies, it is convenient to represent the functional organization of the network given a connectivity matrix as a mathematical graph. In our context, graphs are composed of a set of nodes or neurons with pairs of them joined by edges or synaptic efficacies. The resulting graphs can be described by standard metrics, whose dependence on the shape of the learning rule and the EPSC function might yield insight into the emergent structures during circuit organization driven by spontaneous activity. We focused on common quantities for describing graph structure, including the clustering coefficient, the global efficiency and the modularity [92, 93], used previously in experimental systems like the zebrafish tectum and the mammalian cortex [25, 94].

The clustering coefficient quantifies the existence of densely interconnected groups of nodes in the graph [95]. It represents a measure of segregation, based on counting the number of connection triangles around a node (Methods). In this manner, it reflects the prevalence of clustered connectivity around individual nodes by calculating the fraction of neighbors of a particular node that are also neighbors of each other. As a result, the mean clustering coefficient of a network determines the prevalence of three-neuron clusters in the network architecture. We find that as the modulation parameter η increases, the mean clustering coefficient also increases until it reaches a plateau (Fig 7A). Ensuring that the motif coefficients M0,1 and M1,2 are positive is sufficient for the formation of clusters beyond the critical value of η ≈ 5 (Fig 6B and 6C), where the clustering coefficient begins to increase (Fig 7A). The value of η at which the clustering coefficient saturates corresponds to the emergence of more robust assemblies, where all the motif coefficients are positive (Fig 6B and 6C). Although strong bidirectional connections are localized within clusters, connections from one cluster to some others still exist globally. This differs from the clustering enabled by strong symmetric interactions in which the motif M1,1 dominates, as considered previously for a symmetric pair-based STDP rule [6], where the clusters would be unconnected (i.e. isolated from each other) and the clustering coefficient would be much higher.

Fig 7. Graph measures of the stable network configuration.


A. Mean clustering coefficient versus the modulation parameter η. B. Mean global efficiency versus the modulation parameter η. C. Mean modularity versus the modulation parameter η. All results are calculated from 100 trials at steady state connectivity. Error bars represent the standard error of the mean.

Complementary to the clustering coefficient, the global efficiency is a measure of functional integration, which determines how easily nodes can communicate with each other through sequences of edges [96]. Consequently, the lengths of the paths estimate the potential for the flow of information between nodes, with shorter paths denoting a stronger capacity for integration. Global efficiency is then defined as the average inverse shortest path length of the network (Methods). In comparison to the clustering coefficient, this quantity initially remains approximately constant and then decreases until strong assemblies emerge autonomously, beyond which the network structure no longer varies with the parameter η (Fig 7B). We find that, as for the clustering coefficient, the value of η for which the motif coefficients M0,1 and M1,2 become positive (η ≈ 5) constitutes a landmark for the formation of assemblies, after which global efficiency significantly decreases.

Finally, modularity is a graph theoretic measure that describes how strongly a network can be divided into modules, by comparing the relative strengths of connections within and outside modules to the case when the network has randomly chosen weights [93, 97, 98]. Recently, it was shown that even in models with rate-based dynamics, increasing modularity amplifies the recurrent excitation within assemblies, evoking spontaneous activation [48]. With increasing η, modularity increases until strong assemblies are formed, in a similar fashion to the clustering coefficient (Fig 7C). Interestingly, the critical value of η ≈ 5 where assemblies begin to form robustly is consistent with experimental evidence on the shape of STDP, where the time constant for depression has been found to be approximately 5 times longer than that for potentiation [55, 99].
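For readers who want to reproduce such measures, the sketch below computes all three from a steady-state weight matrix using networkx. Binarizing the weights at a fixed threshold and symmetrizing the graph are simplifying assumptions made here; the exact definitions used in the paper are given in the Methods.

```python
# Sketch computing clustering coefficient, global efficiency and modularity
# from a weight matrix W. Threshold and symmetrization are illustrative choices.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def graph_measures(W, threshold=0.08):
    A = (W > threshold).astype(int)
    np.fill_diagonal(A, 0)
    G = nx.from_numpy_array(np.maximum(A, A.T))        # undirected, binarized graph
    clustering = nx.average_clustering(G)
    efficiency = nx.global_efficiency(G)
    communities = greedy_modularity_communities(G)
    modularity = nx.algorithms.community.modularity(G, communities)
    return clustering, efficiency, modularity
```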

Contribution of the novel structural motifs under triplet STDP to assembly formation

So far we demonstrated that the spontaneous emergence of assemblies via modulation of triplet STDP depends on the interaction of different motifs that primarily arise from the second-order cross-covariance Cij (Figs 5 and 6), which is also present under pair-based STDP. However, whether the novel structural motifs that are unique to triplet STDP (Figs 2B and 3–5) play a role remains unclear. We hypothesize that the ‘loop’ motifs, which do not appear for the pair-based STDP rule (Fig 5, gray box), are important for assembly formation.

To investigate the implications of these novel ‘loop’ motifs, we compare the three graph measures in four different scenarios: Using the motifs (1) only from the cross-covariance Cij (Fig 5, top row); (2) from all cumulants (Cij, Cii and Kij) without the ‘loop’ terms (Fig 5 all except the gray boxes in the second row); (3) from the cross-covariance plus the two additional ‘loop’ terms (Fig 5, top row plus the gray boxes in the second row); and (4) from all cumulants (Fig 5, all). We find that cases (1) and (2) have worse performance in all three graph measures compared to cases (3) and (4) (Fig 8). Adding the third-order cumulant Kij and the ‘non-loop’ terms from the second-order auto-covariance Cii (case 4) even worsens the graph measures. We find that the third-order cumulant Kij alone has almost no influence on the spontaneous emergence of assemblies (S2 Fig), since its contribution to the weight change is small, as shown before [100]. We conclude that the additional ‘loop’ terms, which arise as novel structural motifs from the triplet STDP rule (Fig 5), have a significant contribution to spontaneous assembly formation.

Fig 8. Spontaneous emergence of assemblies for four different motif combinations.


Considering only motifs related to the cross-covariance Cij (blue), from all cumulants (Cij, Cii and Kij) without the ‘loop’ terms (red), from the cross-covariance Cij plus the ‘loop’ terms (yellow) and from all cumulants (purple). A. Averaged connectivity matrices over 100 trials at steady state for four different motif combinations and modulation parameter η = 13. Matrices are reordered using the k-means clustering algorithm (see Methods). B. Mean clustering coefficient versus the modulation parameter η. C. Mean global efficiency versus the modulation parameter η. D. Mean modularity versus the modulation parameter η. All results are calculated from 100 trials at steady state connectivity. Error bars represent the standard error of the mean.

The triplet STDP rule and the EPSC together modulate network structure

The spontaneous emergence of assemblies discussed so far requires a relatively high value of the STDP modulation parameter η, raising the issue of biological plausibility. Although several experimental studies on the induction of STDP indeed find longer depression than potentiation time constants [55, 99], we demonstrate an alternative mechanism for assembly formation: regulating the synaptic transmission of action potentials between neurons through the shape of the EPSC function. In this case, the strength of internally generated correlations can be changed independently of the STDP functions L2 and L3. We investigated how the rise of the EPSC function, modulated by the delay of spike transmission in the synapse, τι (Fig 9A), shapes the motif coefficients (Methods; Fig 9B).
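The sketch below illustrates how a second time constant shifts the EPSC peak by a few milliseconds. We assume a normalized difference-of-exponentials kernel; the exact EPSC function used in the paper is defined in the Methods.

```python
# Sketch of an EPSC kernel with two time constants; tau_iota shifts the peak.
# The difference-of-exponentials form is an assumption for illustration.
import numpy as np

def epsc(t, tau_eps=5e-3, tau_iota=5e-3):
    if np.isclose(tau_eps, tau_iota):                   # limit case: alpha function
        y = (t / tau_eps) * np.exp(-t / tau_eps)
    else:
        y = (np.exp(-t / tau_eps) - np.exp(-t / tau_iota)) / (tau_eps - tau_iota)
    y[t < 0] = 0.0
    return y / (y.sum() * (t[1] - t[0]))                # normalize to unit area

t = np.linspace(0.0, 0.3, 3001)
for tau_iota in (5e-3, 20e-3, 35e-3):
    peak_ms = 1e3 * t[np.argmax(epsc(t, tau_iota=tau_iota))]
    print(f"tau_iota = {1e3 * tau_iota:.0f} ms -> EPSC peak at {peak_ms:.1f} ms")
```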

Fig 9. Spontaneous emergence of assemblies due to the modulation of synaptic transmission.


A. Varying the time constant τι changes the shape of the EPSC function, shifting its peak by a few milliseconds. B. Relative value of the motif coefficients as a function of τι. While the common input motif M1,1 rapidly assumes dominance, the motif coefficient M1,2 crosses over in strength with the feedback motifs M0,1, M0,2 and M0,3. C. Averaged connectivity matrices over 100 trials at steady state and different values of the time constant τι. Matrices are reordered using the k-means clustering algorithm (see Methods). D. Mean clustering coefficient versus the time constant τι. E. Mean global efficiency versus the time constant τι. F. Mean modularity versus the time constant τι. All results are calculated from 100 trials at steady state connectivity. Error bars represent the standard error of the mean.

The parameter τι has a prominent impact on plasticity in the network. Even small shifts in the peak of the EPSC function by a few milliseconds have a strong impact on the cumulants of different orders, as reflected in the values of the motif coefficients (Fig 9A and 9B). In contrast to modulation with the parameter η, the parameter τι affects all motif coefficients. However, the influence of τι on the auto-covariance Cii and the third-order cumulant Kij is negligible. Therefore, although the main cumulant driving plasticity is the second-order cross-covariance Cij, which exists even under pair-based STDP (Fig 5), assemblies easily form under the triplet STDP rule (Fig 9C). The common input motif M1,1 abruptly assumes dominance over all others as τι increases (Fig 9B). However, we observed that the reciprocal motif coefficients M0,1, M1,2, M0,2 and M0,3 remain negative for all values of τι, in contrast to when we modulated the STDP learning rule (Fig 6B). This tells us that assemblies in the network are spontaneously formed in a different fashion (by promoting the potentiation of reciprocal connections in each cluster through the common input motif M1,1) than when modulating the STDP rule through η. In fact, assemblies emerge for minor modulations in τι (Fig 9C).

These differences in assembly formation become apparent when we consider the mean clustering coefficient, the global efficiency and the modularity as functions of τι (Fig 9D–9F): the three measures reflect the connectivity matrices as M1,2 crosses the motif coefficients M0,1, M0,2 and M0,3, in the case when M1,1 is already large. When the motif coefficient M1,2 becomes more negative than M0,3 (τι ≈ 20 ms), bidirectional connections are strongly promoted and assemblies robustly form. Even for τι > 20 ms, where the EPSC function does not change significantly (Fig 9A), one sees noticeable changes in the ‘tightness’ of the assemblies as observed in the averaged connectivity matrices (Fig 9C). Interestingly, as M1,2 decreases below M0,2 (τι ≈ 25 ms), the value of the clustering coefficient (≈ 0.1) and the modularity (≈ 0.7) correspond to the values where the clustering coefficient, the modularity, and the global efficiency saturate when modulating the STDP function (compare Figs 7 and 9D–9F). This means that the network structure is very similar (compare Fig 6E, right, with Fig 9C, second from left). Nevertheless, further increasing τι leads to more refined assemblies (Fig 9C, third from left) when M1,2 < M0,2. However, for τι ≳ 35 ms where M1,2 < M0,1, the clustering coefficient slightly decreases (Fig 9D) suggesting the existence of optimal regions in the parameter space of τι to obtain the ‘tightest’ assemblies.

Taken together, our analytical framework enables us to interpret changes in the motif coefficients as changes in the connectivity structure, in terms of the formation of self-organized assemblies. Modifying either the shape of the learning rule or the shape of the EPSC function can achieve this, however with different consequences for the nature of the formed structures, as demonstrated by the graph theoretic measures.

Comparison with assemblies generated via external correlated input

Until now, we sought to understand the mechanisms that contribute to the autonomous emergence of assemblies in neural circuits without any structured external input. Yet, the training of assemblies and the plasticity of recurrent connections have been more frequently studied when these networks are driven by structured external input, both in simulations [49, 89] and analytically [42–45, 51]. Significant experimental evidence also exists for the emergence of functional connectivity underlying feature selectivity in the visual cortex around the time of eye opening, which is presumably influenced by structured visual input through the open eyes [14]. Therefore, we wanted to compare the formation of assemblies without structured external input under the triplet STDP rule to that with structured external input. To investigate spatiotemporal input patterns in our framework, we studied the overall mean impact of an external pairwise correlated input. This was implemented by assuming that the driving signal, which could for instance represent retinal input to the optic tectum or visual cortex, is correlated for a pair of neurons in the network, so that the structure of the input is represented as common input to that particular pair of neurons.

We write the covariance as a sum of the internal correlation and a novel term that conveys the external structured activity as common input [40]:

\tilde{C}(\omega) = \tilde{C}^{\mathrm{int}}(\omega) + \left(I - \tilde{E}(\omega)W\right)^{-1}\tilde{E}(\omega)\,\tilde{C}^{\mathrm{ext}}(\omega)\,\tilde{E}(\omega)\left(I - \tilde{E}(\omega)W^{T}\right)^{-1}. \qquad (7)

Here, Cint denotes the internally generated covariance matrix (see Eq 22) and Cext is the covariance matrix of the external input. We model the input signal as a correlated pattern that promotes the joint activity of pairs of neurons that belong to a certain assembly.

Using the standard parameters of the minimal triplet model (Table 1; Fig 9C, η = 1) assembly formation is difficult when the feedforward motif coefficients dominate (the motifs for which the α-path is longer than the β-path). However, a significantly stronger external correlation relative to internally generated network correlations can promote the common input motif, M1,1, and support assembly formation. As a function of the external correlation matrix, we quantify the structure of the resulting self-connected assemblies of neurons via the same graph measures used previously (Fig 10). The tight assemblies observed for the modulation of the STDP and the EPSC functions (Figs 6E and 9C) can now be formed for values of correlation strength one order of magnitude smaller than the synaptic upper bound.
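As an illustration of how such a structured external input could be specified in simulations, the following minimal Python sketch builds a block-structured external covariance matrix Cext in which pairs of neurons assigned to the same putative assembly share common input; the network size, group assignment and correlation strength c_ext are placeholder choices, not the values used for Fig 10.

```python
import numpy as np

# Illustrative construction of an external covariance matrix with pairwise
# correlations restricted to neurons of the same (hypothetical) assembly.
N = 48                    # network size (assumed)
n_groups = 4              # number of externally imposed groups (assumed)
c_ext = 0.017             # pairwise correlation strength, e.g. a fraction of wmax (assumed)

labels = np.repeat(np.arange(n_groups), N // n_groups)
C_ext = np.zeros((N, N))
for g in range(n_groups):
    members = np.where(labels == g)[0]
    C_ext[np.ix_(members, members)] = c_ext   # common input within the group
np.fill_diagonal(C_ext, 0.0)                  # keep only cross-covariances
```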

Fig 10. Emergence of assemblies in the presence of structured external input.

Fig 10

A. Mean clustering coefficient versus the pairwise correlation coefficient of the input pattern. The strength of the correlation was provided as ratios (0.01, 0.05, 0.125, 0.25, 0.375 and 0.5) of the possible maximum weight of each individual synaptic connection wmax. B. Mean global efficiency versus the pairwise correlation coefficient of the input pattern. C. Mean modularity versus the pairwise correlation coefficient of the input pattern. The rapid increase of the clustering coefficient and the modularity combined with a decrease of the global efficiency is a feature of robust assembly formation. Sufficiently strong correlations in the external signal generate tight assemblies. All results are calculated from 100 trials at steady state connectivity. Error bars represent the standard error of the mean.

Disrupting the balance between potentiation and depression affects assembly formation

We considered an STDP rule that is balanced in the total potentiation and depression, because disrupting this balance by increasing some firing rates over others favors the particular circuit motifs affected by those rates, as shown before [5, 45, 101]. When the balance is disrupted, the firing rate contribution to plasticity from chance spike coincidences dominates over internal correlations. When the zero-order term of the motif expansion (Eq 42) is non-zero, the mean change in the synaptic efficacies has a term that only depends on the firing rates. In this case, the firing rates of the pre- and postsynaptic neurons are the main drivers of network structure. This means that the overall impact of motifs in the network is diminished [5]. We explored possible departures from balance through the inclusion of a perturbation parameter δ that can be either positive or negative, and we scaled this parameter in proportion to the learning rate (Methods).

Therefore, to study the sensitivity of the emergence of network structure to perturbations of the depression vs. potentiation balance, we consider that the zero-order ‘rate’ motif is different from zero. We find that departures from the balanced regime impact plasticity significantly. In the case of a depression-dominated imbalance, δ < 0, all connections depress no matter the strength of the modulation through η, even for an absolute value of δ as small as 0.0001. In the case of potentiation, δ > 0, one might expect that all synaptic efficacies will simply saturate; however, due to heterosynaptic competition, some network structure still forms when δ is small (Fig 11). If the perturbation is sufficiently strong, the autonomous emergence of assemblies by increasing the parameter η is disrupted. This is also evident when computing the graph measures for the resulting network structures (Fig 11B–11D). In summary, we find that an unbalanced STDP rule where either depression or potentiation dominates prevents the autonomous emergence of assemblies.

Fig 11. Impact of perturbations in the balance of potentiation and depression of the triplet STDP rule.

Fig 11

A. Averaged connectivity matrices over 100 trials at steady state for the four different cases of the perturbation parameter δ and modulation parameter η = 13. B. Mean clustering coefficient versus the modulation parameter η. C. Mean global efficiency versus the modulation parameter η. We removed the cases δ = [0.01, 0.1] here since the global efficiency cannot be computed for weight matrices where all entries are identical. D. Mean modularity versus the modulation parameter η. All results are calculated from 100 trials at steady state connectivity. Error bars represent the standard error of the mean.

Discussion

We developed a self-consistent theoretical framework to study the impact of HOCs, specifically up to third order, on the plasticity of recurrent networks by using the triplet STDP rule. We derived the dependence of the drift in synaptic efficacy on network structure, taking into account contributions from structural motifs of different orders, and demonstrated the emergence of global network structures i.e. assemblies, from these local motifs. Based on recent work on the calculation of beyond second-order cumulants of mutually exciting Hawkes processes [37, 102], we broke down the spike interactions (including pairs and triplets of spikes) to include the influence of spikes from any source neuron in the network on the firing of the pre- and postsynaptic neurons via paths of different length thus taking into account the full network recurrence (Figs 2 and 4). We characterized structural motifs that arise from these spike interactions, including novel motifs arising due to triplet STDP, and analyzed their impact on the internal up to third-order correlation structure and plasticity in the network through the motif coefficients (Figs 3 and 5). While linearization of neuronal dynamics was required for this approach, it is a common technique used to approximate the dynamics of more realistic biophysical neurons [5, 34]. We found that motif contributions to plasticity from the second-order cross-covariance Cij support assembly formation under triplet STDP. However, since these same motifs exist also under pair-based STDP, we wondered if the novel motifs unique to triplet STDP are important for assembly formation. Indeed, we showed that several novel motifs and specifically the ‘loop’ motifs, which emerge under the triplet STDP, have an important contribution to the formation of assemblies (Fig 8).

We investigated the contribution of up to third-order structural motifs to assembly formation using an asymmetric minimal triplet STDP rule, in which depression is induced by pairs of spikes and, conversely, potentiation is induced by triplets of spikes (Fig 1D). This rule has been shown to describe plasticity experiments that the classical STDP rule, based on pairs of spikes, has failed to capture; for instance, plasticity experiments in which the pairing frequency during plasticity induction was varied [54, 55]. As such, the triplet STDP rule is sensitive to third-order correlations, here referred to as HOCs. HOCs have not only been measured in the brain, but also shown to play an important role in visual coding and representing experimental data [58, 59, 103, 104]. HOCs are ubiquitous in sensory stimuli, such as natural stimuli and speech signals [105, 106]. These correlations have been previously utilized in learning rules, such as the BCM rule, to extract the independent components or features in natural images resulting in simple cell receptive fields as seen in V1 [105, 107–109]. Because of its mapping to the BCM rule [57], we can interpret the triplet STDP rule as a method for performing similar computations.

Modulating either the STDP rule (Fig 6) or the EPSC function (Fig 9) enabled the spontaneous formation of self-connected assemblies without the need for externally patterned inputs [49–51] or assuming a symmetric pair-based STDP rule [6]. We quantified the nature of the emergent assemblies using three graph theoretic measures used to characterize spontaneous assemblies in the tectum of zebrafish larvae [25]. Directly comparing the values of these measures between the experimental data and our model results is difficult given inhomogeneities in the size of biological network assemblies and a multitude of mechanisms that shape their formation during development. Yet, comparing how these measures change as a function of the STDP rule or the EPSC kernel in our model could offer insights into how modulating plasticity and synaptic transmission affect network structure through spontaneous activity under minimal assumptions (Fig 7). Interestingly, the final assemblies formed by modulating the EPSC function were more consistent across networks with different initial connectivity than the assemblies generated through the modification of the STDP function. This could be seen by the ‘tighter’ structures in the average connectivity matrices (Figs 6E and 9C), and the higher values of graph measures (Figs 6 and 9D–9F). The ultimate connectivity structure was determined by the relative strength of motifs which were regulated differently by each modulatory mechanism. In particular, modifying the EPSC function reinforced the influence of the common input motif (driven by the motif coefficient M1,1) over all others (Fig 9B). In comparison, the modulation of the STDP rule by extending the time constant for depression over potentiation reduced the competition between reciprocal connections by maintaining a strong feedforward drive (driven by the feedforward motif coefficients M1,0, M2,1, M2,0 and M3,0) and making the corresponding reciprocal motif coefficients (M0,1, M1,2, M0,2 and M0,3) positive. Therefore, assembly formation was driven by the strengthening of reciprocal connections, even though the M1,1 coefficient was still strong (Fig 6B). Although experimental evidence exists for a longer time constant for depression over potentiation in STDP [55, 99], the much longer values of the STDP modulation parameter η needed for our results raise the question of whether this mechanism is biologically plausible. This might make the modulation of the EPSC function under triplet STDP more suitable for explaining the autonomous emergence of self-connected assemblies. It is probably the case that both mechanisms are used in biological circuits. Studying the effects of neuromodulation, which can alter the shape of STDP or the synaptic transmission function, on the plasticity of connections in many brain regions is possible with recent advances in experimental techniques [84–86]. Understanding the consequences of changing the properties of the underlying plasticity mechanisms on network dynamics can further elucidate the process of learning and memory storage in recurrent networks found everywhere in the brain [84–86].

Applying external correlated input led to the emergence of self-organized assemblies (Fig 10) that were similar to the assemblies from changing the EPSC function. Consequently, we propose that the mechanisms that promote the formation of assemblies can be diverse in different circuits depending on the nature of the plasticity rules, synaptic transmission (EPSC function) or the structure of external input that dominate in these circuits.

Our framework enabled us to derive global connectivity structures that emerge in recurrent networks such as assemblies, which have been abundantly observed in experimental data. Connectivity matrices of large recurrent networks are generally difficult to assay experimentally, requiring multiple cells to be patched simultaneously [110], although recent developments in the field of connectomics offer potential for these matrices to be obtained in the future [111, 112]. However, a good experimental determinant of assemblies may be derived from functional interactions among neurons, inferred from physiological experiments that simultaneously record the activity of a large number of neurons. While it is clear that neuronal activity exhibits structure in response to sensory input, assemblies are present even during spontaneous activity and have similar spatial organization [21, 25, 26]. This has suggested that these self-organized assemblies are biologically relevant for the processing of information in these networks and for the representation of sensory stimulus attributes [21]. In the rodent visual cortex, a given stimulus, in the form of a natural scene or an oriented grating, consistently activates a specific assembly [21]. On the behavioral scale, recent experiments suggest that functional circuit connectivity may be intrinsically adapted to respond preferentially to stimuli of biological relevance for the survival of the animal, such as catching prey or avoiding predators [24, 27].

Our analytical approach offers a precise description of how synaptic plasticity shapes connectivity in recurrent networks driven by spontaneous activity (though we also considered the role of structured external input). Such spontaneous activity is especially common during early postnatal development, where it activates neural networks before the onset of sensory experience and the maturation of sensory organs. In the rodent visual system, for instance, eye opening only occurs during the second postnatal week of development [113]. Prior to this, spontaneous patterns of activity propagate throughout the entire visual system, including the retina, thalamus and cortex [114], which are known to instruct different aspects of circuit organization [115]. Interestingly, during very early postnatal development of somatosensory cortex in rodents (postnatal day 4), spontaneous activity exhibits a highly correlated state consisting of cell assemblies where multiple neurons show correlated activity [116]. By the second postnatal week this spontaneous activity transitions to a much more decorrelated state that lacks a clear spatial structure. A similar sparsification of spontaneous activity during development is also observed in the visual cortex, though lacking the spatial structure observed in the somatosensory cortex [117]. Since these two studies argue that over development functional connectivity becomes more desynchronized, this framework is more consistent with our analysis of the depression window of the STDP rule becoming smaller over development (Fig 6). This broadening of the depression window in early development is consistent with a previously described burst-timing-dependent plasticity where the temporal integration of activity occurs over much longer timescales on the order of several hundred milliseconds than in adulthood [115, 118, 119].

Assembly formation has been the goal of many other previous works, typically instructed by externally structured input in recurrent network models with balanced excitation and inhibition [42–47]. These assemblies exhibit attractor dynamics which have been argued to serve as the substrate of different computations, such as predictive coding through the spontaneous retrieval of evoked response patterns [49, 50, 120]. We investigated the generation of assemblies through triplet STDP driven by higher-order correlations generated internally in the network. Other works have also studied the emergence of non-random structure in the absence of structured external input [5, 6, 48]; our work, like [6], incorporates the full network recurrence through the expansion into structural motifs. As becomes evident from these studies, the investigation of STDP in recurrent networks for unsupervised learning involves many parameters and additional mechanisms (including short-term plasticity, heterosynaptic plasticity and inhibitory plasticity) which make the identification of general rules difficult. Nevertheless, the precise theoretical description of triplet STDP in recurrent networks provided by our framework highlights a set of novel motifs absent in the case of pair-based STDP that promote assembly formation, in the process highlighting an important role of HOCs in the generation of global network structure from local motifs.

Methods

Network dynamics

The time dependent activity of a neuron i is given by a stochastic realization of an inhomogeneous Poisson process [70], with expectation value

\lambda_i(t) = \mu_i + \sum_{k=1}^{N} W_{ik}\,[E * S_k](t), \qquad (8)

where μi is the external input firing rate, W is the synaptic weight matrix, S(t) is the spike train and E(t) is the EPSC function, which we assume to be identical for all N neurons. The product WE(t) is then referred to as the interaction kernel. The operator ‘*’ corresponds to the convolution operation. In all plasticity simulations, the connectivity weight matrix is divided into an excitatory and an inhibitory component, such that the effective connectivity matrix is calculated as Weff = W − Winh. The inhibitory weight matrix Winh is updated to balance the excitatory weights (see section ‘Additional plasticity mechanisms besides STDP’). For simplicity of notation, we refer to Weff as W in the manuscript.
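For illustration, a minimal Python sketch of these dynamics is given below, sampling spikes from the rate of Eq 8 with a simple exponential EPSC kernel; all parameter values (network size, time step, kernel time constant, weight scale) are placeholders rather than the simulation parameters of Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, T = 48, 1e-3, 2.0            # network size, time step (s), duration (s) -- assumed values
mu = 150.0 * np.ones(N)             # external input rates (Hz)
W = rng.uniform(0, 0.01, (N, N))    # weak random excitatory weights (assumed)
np.fill_diagonal(W, 0.0)

tau = 5e-3                          # EPSC decay constant (s), placeholder
steps = int(T / dt)
epsc = np.zeros(N)                  # filtered spike trains E*S_k, one per neuron
spikes = np.zeros((steps, N), dtype=bool)

for t in range(steps):
    rate = mu + W @ epsc                            # Eq 8: lambda_i = mu_i + sum_k W_ik (E*S_k)
    spikes[t] = rng.random(N) < rate * dt           # inhomogeneous Poisson sampling
    epsc += dt * (-epsc / tau) + spikes[t] / tau    # exponential EPSC kernel with unit area
```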

Averaged synaptic efficacy dynamics for pair-based and triplet STDP rules

Plasticity of the connectivity matrix W is determined by pair-based and triplet STDP rules. We assume ‘all-to-all’ interactions between spikes, where each postsynaptic spike interacts with every previous pre- and postsynaptic spike and vice-versa [52, 121123].

Plasticity due to the pair-based STDP can be expressed as:

\dot{W}_{ij}^{\mathrm{pairSTDP}}(t) = \int S_i(t)\, S_j(t-\tau_1)\, L_2(\tau_1)\, d\tau_1 \qquad (9)

and plasticity due to the triplet STDP rule as:

\dot{W}_{ij}^{\mathrm{tripletSTDP}}(t) = \iint S_i(t)\, S_j(t-\tau_1)\, S_i(t-\tau_2)\, L_{3,y}(\tau_1,\tau_2)\, d\tau_1\, d\tau_2 + \iint S_j(t)\, S_i(t-\tau_1)\, S_j(t-\tau_3)\, L_{3,x}(\tau_1,\tau_3)\, d\tau_1\, d\tau_3 \qquad (10)

L2 corresponds to the pair-based STDP rule and L3 to the triplet STDP rule. The additional subscripts x and y denote that the triplets which contribute to plasticity are two pre- and one postsynaptic spikes and one pre- and two postsynaptic spikes, respectively. τ1 is the time difference between the spikes of the pre- and the postsynaptic neuron. τ2 is the time difference between two postsynaptic spikes and τ3 is the time difference between two presynaptic spikes (Fig 1B). It should be highlighted that this derivation is independent of the specific shape of the STDP functions.
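The minimal triplet rule used below (see Eqs 38 and 39) can also be evaluated directly from spike times by brute force; the following sketch sums the depression window over all post–pre pairs and the potentiation window over all (pre, post, post) triplets for a single synapse. The spike times and the value of A+ are arbitrary examples, not fitted parameters.

```python
import numpy as np

# Brute-force evaluation of the minimal triplet rule for one synapse j -> i,
# assuming pair-based depression (L2) and triplet potentiation (L3,y) only.
A_minus, A_plus = 0.01, 6.5e-3          # learning rates (A_plus is a placeholder)
tau_minus, tau_plus, tau_y = 33.7e-3, 16.8e-3, 114e-3
eta = 1.0

pre = np.array([0.010, 0.050, 0.120])   # presynaptic spike times (s), example
post = np.array([0.015, 0.055, 0.060])  # postsynaptic spike times (s), example

dW = 0.0
# depression: every post-pre pair in which the post spike leads (tau_1 < 0)
for t_post in post:
    for t_pre in pre:
        tau1 = t_post - t_pre
        if tau1 < 0:
            dW += -(A_minus / eta) * np.exp(tau1 / (eta * tau_minus))
# potentiation: every (pre, post, post') triplet with tau_1 >= 0 and tau_2 > 0,
# evaluated at the time of the later postsynaptic spike
for t_pre in pre:
    for t_post_early in post:
        for t_post_late in post:
            tau1 = t_post_late - t_pre
            tau2 = t_post_late - t_post_early
            if tau1 >= 0 and tau2 > 0:
                dW += A_plus * np.exp(-tau1 / tau_plus) * np.exp(-tau2 / tau_y)
print(dW)
```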

Assuming slow learning in comparison to neuronal dynamics and that pairs and triplets of spikes between the pre- and postsynaptic neurons are relevant to plasticity [54, 57], the mean evolution of the synaptic efficacies due to STDP is given by

\langle \dot{W}_{ij}^{\mathrm{STDP}}(t)\rangle = \int \langle S_i(t)\, S_j(t-\tau_1)\rangle\, L_2(\tau_1)\, d\tau_1 + \iint \langle S_i(t)\, S_j(t-\tau_1)\, S_i(t-\tau_2)\rangle\, L_{3,y}(\tau_1,\tau_2)\, d\tau_1\, d\tau_2 + \iint \langle S_j(t)\, S_i(t-\tau_1)\, S_j(t-\tau_3)\rangle\, L_{3,x}(\tau_1,\tau_3)\, d\tau_1\, d\tau_3 \qquad (11)

where 〈⋅〉 denotes averaging over different realizations of the Poisson neuronal dynamics for different connectivity.

We define the mean rates of the pre- (j) and postsynaptic neuron (i) as rj and ri. We consider both to be stationary at equilibrium. The second-order correlation between the pre- and postsynaptic neurons with time delay τ1 is 〈Si(t)Sj(t − τ1)〉 and we define the covariance matrix (second-order cumulant) C (Fig 1B) as

C_{ij}(\tau_1) = \langle S_i(t)\, S_j(t-\tau_1)\rangle - r_i r_j. \qquad (12)

Note that [6, 37] use a different convention for signs.

The third-order correlation between the triplet of spikes ‘post-pre-post’ with time delays between the pre and one post τ1 and between the two post spikes τ2 is 〈Si(t)Sj(t − τ1)Si(t − τ2)〉 and we define the third-order cumulant as [57]

K_{ij}(\tau_1,\tau_2) = \langle S_i(t)\, S_j(t-\tau_1)\, S_i(t-\tau_2)\rangle - r_i\left(C_{ij}(\tau_1) + C_{ij}(\tau_2-\tau_1)\right) - r_j C_{ii}(\tau_2) - r_i^2 r_j. \qquad (13)

Analogously, for the ‘pre-post-pre’ third-order correlation 〈Sj(t)Si(t − τ1)Sj(t − τ3)〉, we can define the third-order cumulant

K_{ij}(\tau_1,\tau_3) = \langle S_j(t)\, S_i(t-\tau_1)\, S_j(t-\tau_3)\rangle - r_j\left(C_{ij}(\tau_1) + C_{ij}(\tau_3-\tau_1)\right) - r_i C_{jj}(\tau_3) - r_i r_j^2. \qquad (14)

With these definitions, Eq 11 becomes Eq 1 of the main text.

Calculation of cumulants

The definition of cumulants in the Fourier space is imperative for the derivation of our results. Assuming stationarity, the expected firing rate (i.e. the first order cumulant) vector r = 〈λ(t)〉 no longer depends on time and can be written as

r = \left(I - \tilde{E}(0)W\right)^{-1}\mu, \qquad (15)

where Ẽ(0) = F[E(t)]|ω=0 denotes the Fourier transform of the EPSC function evaluated at zero frequency. For all the calculations, we define the Fourier transform as

\mathcal{F}[f(\tau)] = \tilde{f}(\omega) = \int f(\tau)\, e^{-j\omega\tau}\, d\tau. \qquad (16)

The second-order cumulant, consisting of the cross- and auto-covariance, can be calculated in the time domain as [37, 102]

C_{ij}(\tau_1) = \sum_{k=1}^{N} r_k \int R_{ik}(u)\, R_{jk}(u-\tau_1)\, du, \qquad (17)

where R(t) = Σ_{n≥0} G^{*n}(t) is defined as a ‘convolution power series’ [37, 102] of the interaction kernel G(t) = WE(t), with

G^{*n}(t) = \begin{cases} I\,\delta(t), & \text{if } n = 0\\[4pt] \displaystyle\int^{t} G^{*(n-1)}(t-s)\, G(s)\, ds = \int^{t} \left(W E(t-s)\right)^{*(n-1)} W E(s)\, ds, & \text{if } n \geq 1. \end{cases} \qquad (18)

Since W does not depend on the integration domain, the convolution in the operation *n is calculated on E, while for W it becomes a power operation. Formally, the computation of each element Rmn(t) consists of calculating the probability of a spike from neuron m at time t given that neuron n fired at time 0. Therefore, in Eq 17 we write the covariance for the spike trains of neurons i and j as the probability of a pair of spikes in neurons i and j at a time lag τ1 given that neuron k fired, where k can be any neuron in the network. This representation provides a convenient formalism for representing causality of spiking events in our model. Then, considering the definition of ‘path lengths’ α and β from the source neuron k to the postsynaptic neuron i and the presynaptic neuron j (Fig 2A), we can rewrite Eq 17 as

C_{ij}(\tau_1) = \sum_{\alpha,\beta} \int E^{*\alpha}(u)\, E^{*\beta}(u-\tau_1)\, du \sum_{k=1}^{N} r_k \left(W^{\alpha}\right)_{ik}\left(W^{\beta}\right)_{jk}. \qquad (19)

Here, E*κ denotes a series of convolutions of the EPSC function

E^{*\kappa}(t) = \underbrace{E(t) * E(t) * \cdots * E(t)}_{\kappa\ \text{terms}}. \qquad (20)

For the auto-covariance Cii for path lengths α and γ from the source neuron k to the postsynaptic neuron i (Fig 2B), we analogously obtain

C_{ii}(\tau_2) = \sum_{\alpha,\gamma} \int E^{*\alpha}(u)\, E^{*\gamma}(u-\tau_2)\, du \sum_{k=1}^{N} r_k \left(W^{\alpha}\right)_{ik}\left(W^{\gamma}\right)_{ik}. \qquad (21)

Since each R function consists of convolutions of the EPSC function, its Fourier transform is the product of the Fourier transforms of each of those functions, which simplifies calculations. Therefore, the cross-covariance Cij in the frequency domain (i.e. the Fourier transform of Eq 17) is given by (detailed derivation in S1 Text)

\tilde{C}_{ij}(\omega) = \sum_{k=1}^{N} r_k\, \tilde{R}_{ik}(\omega)\, \tilde{R}_{jk}(\omega), \qquad (22)

and, finally we obtain the expression

\tilde{C}_{ij}(\omega) = \sum_{\alpha,\beta} \tilde{E}^{\alpha}(\omega)\, \tilde{E}^{\beta}(\omega) \sum_{k=1}^{N} r_k \left(W^{\alpha}\right)_{ik}\left(W^{\beta}\right)_{jk}. \qquad (23)

It should be noted that Eq 23 was also derived in previous works using a different approach [6, 70]. However, for the third-order cumulant Kij (Fig 4) that same approach is not possible. Therefore, it is convenient to write Kij in the time domain in terms of the previously defined R [37, 102] as

K_{ij}(\tau_1,\tau_2) = \sum_{k=1}^{N} r_k \int R_{ik}(u)\, R_{jk}(u-\tau_1)\, R_{ik}(u-\tau_2)\, du + \sum_{k,l=1}^{N} r_k \iint R_{ik}(u)\, R_{jl}(v-\tau_1)\, R_{il}(v-\tau_2)\, \Psi_{lk}(v-u)\, dv\, du + \sum_{k,l=1}^{N} r_k \iint R_{jk}(u-\tau_1)\, R_{il}(v)\, R_{il}(v-\tau_2)\, \Psi_{lk}(v-u)\, dv\, du + \sum_{k,l=1}^{N} r_k \iint R_{ik}(u-\tau_2)\, R_{il}(v)\, R_{jl}(v-\tau_1)\, \Psi_{lk}(v-u)\, dv\, du, \qquad (24)

where additionally

\Psi(t) = R(t) - I\,\delta(t) = \sum_{n\geq 1} G^{*n}(t). \qquad (25)

In Eq 24, Ψlk(v − u) is the probability density of the event that a spike from neuron k causes a neuron l (different from neuron k) to emit a spike at a lag v − u ≠ 0, after at least one synaptic connection. The function Ψ is necessary in Eq 24 to take into account the branching structures in the calculation of Kij (Fig 4B–4D). In addition to α, β and γ, ζ is the path length from the source neuron k to the neuron l where the synaptic connection path branches out and is equal to or larger than one. Then, replacing both the R and Ψ functions by their corresponding definitions in terms of the connectivity matrix W and EPSC function E(t) yields

K_{ij}(\tau_1,\tau_2) = \sum_{\alpha,\beta,\gamma} \int E^{*\alpha}(u)\, E^{*\beta}(u-\tau_1)\, E^{*\gamma}(u-\tau_2)\, du \sum_{k=1}^{N} r_k (W^{\alpha})_{ik}(W^{\gamma})_{ik}(W^{\beta})_{jk} + \sum_{\alpha,\beta,\gamma}\sum_{\zeta\geq 1} \iint E^{*\alpha}(u)\, E^{*\beta}(v-\tau_1)\, E^{*\gamma}(v-\tau_2)\, E^{*\zeta}(v-u)\, dv\, du \sum_{k,l=1}^{N} r_k (W^{\alpha})_{ik}(W^{\beta})_{jl}(W^{\gamma})_{il}(W^{\zeta})_{lk} + \sum_{\alpha,\beta,\gamma}\sum_{\zeta\geq 1} \iint E^{*\beta}(u-\tau_1)\, E^{*\alpha}(v)\, E^{*\gamma}(v-\tau_2)\, E^{*\zeta}(v-u)\, dv\, du \sum_{k,l=1}^{N} r_k (W^{\beta})_{jk}(W^{\alpha})_{il}(W^{\gamma})_{il}(W^{\zeta})_{lk} + \sum_{\alpha,\beta,\gamma}\sum_{\zeta\geq 1} \iint E^{*\gamma}(u-\tau_2)\, E^{*\alpha}(v)\, E^{*\beta}(v-\tau_1)\, E^{*\zeta}(v-u)\, dv\, du \sum_{k,l=1}^{N} r_k (W^{\gamma})_{ik}(W^{\alpha})_{il}(W^{\beta})_{jl}(W^{\zeta})_{lk}. \qquad (26)

As with the second-order cumulant, we can calculate the Fourier transform of the third-order cumulant Kij from Eq 24 as (detailed derivation in S1 Text)

\tilde{K}_{ij}(\omega_1,\omega_2) = \sum_{k=1}^{N} r_k\, \tilde{R}_{ik}(\omega_1+\omega_2)\, \tilde{R}_{jk}(\omega_1)\, \tilde{R}_{ik}(\omega_2) + \sum_{k,l=1}^{N} r_k\, \tilde{R}_{ik}(\omega_1+\omega_2)\, \tilde{R}_{jl}(\omega_1)\, \tilde{R}_{il}(\omega_2)\, \tilde{\Psi}_{lk}(\omega_1+\omega_2) + \sum_{k,l=1}^{N} r_k\, \tilde{R}_{il}(\omega_1+\omega_2)\, \tilde{R}_{jk}(\omega_1)\, \tilde{R}_{il}(\omega_2)\, \tilde{\Psi}_{lk}(\omega_1) + \sum_{k,l=1}^{N} r_k\, \tilde{R}_{il}(\omega_1+\omega_2)\, \tilde{R}_{jl}(\omega_1)\, \tilde{R}_{ik}(\omega_2)\, \tilde{\Psi}_{lk}(\omega_2). \qquad (27)

Finally, we obtain the third-order cumulant Kij in the Fourier domain in terms of the connectivity matrix W, the EPSC function E(t), and the path lengths α, β, γ and ζ as

\tilde{K}_{ij}(\omega_1,\omega_2) = \sum_{\alpha,\beta,\gamma} \tilde{E}^{\alpha}(\omega_1+\omega_2)\, \tilde{E}^{\beta}(\omega_1)\, \tilde{E}^{\gamma}(\omega_2) \sum_{k=1}^{N} r_k (W^{\alpha})_{ik}(W^{\gamma})_{ik}(W^{\beta})_{jk} + \sum_{\alpha,\beta,\gamma}\sum_{\zeta\geq 1} \tilde{E}^{\alpha+\zeta}(\omega_1+\omega_2)\, \tilde{E}^{\beta}(\omega_1)\, \tilde{E}^{\gamma}(\omega_2) \sum_{k,l=1}^{N} r_k (W^{\zeta})_{lk}(W^{\alpha})_{ik}(W^{\beta})_{jl}(W^{\gamma})_{il} + \sum_{\alpha,\beta,\gamma}\sum_{\zeta\geq 1} \tilde{E}^{\alpha}(\omega_1+\omega_2)\, \tilde{E}^{\beta+\zeta}(\omega_1)\, \tilde{E}^{\gamma}(\omega_2) \sum_{k,l=1}^{N} r_k (W^{\zeta})_{lk}(W^{\alpha})_{il}(W^{\beta})_{jk}(W^{\gamma})_{il} + \sum_{\alpha,\beta,\gamma}\sum_{\zeta\geq 1} \tilde{E}^{\alpha}(\omega_1+\omega_2)\, \tilde{E}^{\beta}(\omega_1)\, \tilde{E}^{\gamma+\zeta}(\omega_2) \sum_{k,l=1}^{N} r_k (W^{\zeta})_{lk}(W^{\alpha})_{il}(W^{\beta})_{jl}(W^{\gamma})_{ik}. \qquad (28)
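In practice these path sums need not be enumerated explicitly: summing the geometric series of Eq 18 in the frequency domain gives R̃(ω) = (I − Ẽ(ω)W)^{-1}, so the second-order cumulant of Eq 22 can be evaluated by matrix inversion. A minimal Python sketch, assuming the spectral radius of Ẽ(ω)W is below one; the use of the complex conjugate for the second factor is an assumption of this illustration (it corresponds to evaluating that factor at −ω for a real-valued kernel).

```python
import numpy as np

def E_tilde(omega, tau_e=5e-3, tau_i=5e-3):
    """Fourier transform of the EPSC kernel (Eq 37), e^{-j*omega*t} convention."""
    tau_c = tau_e * tau_i / (tau_e + tau_i)
    return ((1 + tau_i / tau_e) * (1 - 1j * tau_e * omega) / (1 + tau_e**2 * omega**2)
            - (tau_i / tau_e) * (1 - 1j * tau_c * omega) / (1 + tau_c**2 * omega**2))

def covariance_spectrum(W, mu, omega):
    """C_tilde(omega) via Eqs 15, 18 and 22, with R_tilde(omega) = (I - E_tilde(omega) W)^(-1)."""
    N = W.shape[0]
    r = np.linalg.solve(np.eye(N) - E_tilde(0.0) * W, mu)      # stationary rates, Eq 15
    R = np.linalg.inv(np.eye(N) - E_tilde(omega) * W)          # geometric series of Eq 18
    return R @ np.diag(r) @ R.conj().T                         # Eq 22 as a matrix product

# Example: small random network, zero-frequency (long-timescale) covariances
rng = np.random.default_rng(0)
W = rng.uniform(0, 0.01, (48, 48)); np.fill_diagonal(W, 0.0)
C0 = covariance_spectrum(W, 150.0 * np.ones(48), omega=0.0)
```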

Calculation of motif coefficients

Extending the work of [6], who artificially tuned the values of motif coefficients to investigate the consequences on the network structures, we derive them analytically as a function of the STDP rule and the EPSC function. To obtain the expression for the motif coefficients necessary for Eqs 3, 4 and 5, we first need to insert Eqs 23 and 28, i.e. the definitions of the second- and third-order cumulants in the frequency domain, in Eq 2. Then, it can easily be seen that it is possible to separate the part that depends on the products of the connectivity matrix W from the rest. This way we define the motif coefficients as the integral of the products of the Fourier transforms of the STDP functions and the EPSC functions, considering the appropriate path lengths α, β, γ and ζ (Fig 3B and 3D). In particular, for the motif coefficients in Eq 3 we derive

M^{\mathrm{pair}}_{\alpha,\beta} = \int \tilde{E}^{\alpha}(\omega_1)\, \tilde{E}^{\beta}(\omega_1)\, \tilde{L}_2(\omega_1)\, d\omega_1. \qquad (29)

and

M^{\mathrm{trip}}_{\alpha,\beta} = \iint \left(\tilde{E}^{\alpha}(\omega_1)\tilde{E}^{\beta}(\omega_1)\,\delta(\omega_2) + \tilde{E}^{\alpha}(\omega_2)\tilde{E}^{\beta}(\omega_2)\,\delta(\omega_1+\omega_2)\right) \tilde{L}_3(\omega_1,\omega_2)\, d\omega_1\, d\omega_2. \qquad (30)

We note that this definition combines motif coefficients where α is the index corresponding to paths to the postsynaptic neuron, regardless of which of the two postsynaptic spikes of the spike triplet it refers to (Fig 1C). For the motif coefficient in Eq 4 we derive

M^{\mathrm{trip}}_{\alpha,\gamma} = \iint \tilde{E}^{\alpha}(\omega_2)\, \tilde{E}^{\gamma}(\omega_2)\, \tilde{L}_3(\omega_1,\omega_2)\, \delta(\omega_1)\, d\omega_1\, d\omega_2. \qquad (31)

Lastly, for the ‘straight’ triplet motif (Fig 4A) in Eq 5 we get:

M^{\mathrm{trip}}_{\alpha,\beta,\gamma} = \iint \tilde{E}^{\alpha}(\omega_1+\omega_2)\, \tilde{E}^{\beta}(\omega_1)\, \tilde{E}^{\gamma}(\omega_2)\, \tilde{L}_3(\omega_1,\omega_2)\, d\omega_1\, d\omega_2 \qquad (32)

while for the ‘branching’ motifs (Fig 4B–4D) in Eq 5:

M^{\mathrm{trip}}_{(\alpha,\zeta),\beta,\gamma} = \iint \tilde{E}^{\alpha+\zeta}(\omega_1+\omega_2)\, \tilde{E}^{\beta}(\omega_1)\, \tilde{E}^{\gamma}(\omega_2)\, \tilde{L}_3(\omega_1,\omega_2)\, d\omega_1\, d\omega_2, \qquad (33)
M^{\mathrm{trip}}_{\alpha,(\beta,\zeta),\gamma} = \iint \tilde{E}^{\alpha}(\omega_1+\omega_2)\, \tilde{E}^{\beta+\zeta}(\omega_1)\, \tilde{E}^{\gamma}(\omega_2)\, \tilde{L}_3(\omega_1,\omega_2)\, d\omega_1\, d\omega_2, \qquad (34)

and

M^{\mathrm{trip}}_{\alpha,\beta,(\gamma,\zeta)} = \iint \tilde{E}^{\alpha}(\omega_1+\omega_2)\, \tilde{E}^{\beta}(\omega_1)\, \tilde{E}^{\gamma+\zeta}(\omega_2)\, \tilde{L}_3(\omega_1,\omega_2)\, d\omega_1\, d\omega_2. \qquad (35)

These expressions give us a concise representation of how the spiking activity interacts with network structure to impact plasticity.

Synaptic dynamics

To calculate the values for the motif coefficients in Eqs 29–35, we define the EPSC function E(t) as

E(t) = \begin{cases} \dfrac{\tau_\varepsilon + \tau_\iota}{\tau_\varepsilon^2}\, e^{-t/\tau_\varepsilon}\left(1 - e^{-t/\tau_\iota}\right) & \text{if } t \geq 0\\[4pt] 0 & \text{if } t < 0. \end{cases} \qquad (36)

This function depends on two time constants τε and τι that define the onset and decay of the increase in the membrane potential with each spike. In particular, when τι → 0 the onset is instantaneous and the current decays exponentially. The function is normalized to have an integral equal to 1, so that on average the number of postsynaptic spikes elicited by the arrival of a presynaptic spike scales with the same order of magnitude as the synaptic efficacy. Its Fourier transform is

\tilde{E}(\omega) = \left(1 + \frac{\tau_\iota}{\tau_\varepsilon}\right)\frac{1 - j\tau_\varepsilon\omega}{1 + \tau_\varepsilon^2\omega^2} - \frac{\tau_\iota}{\tau_\varepsilon}\,\frac{1 - j\frac{\tau_\varepsilon\tau_\iota}{\tau_\varepsilon+\tau_\iota}\omega}{1 + \left(\frac{\tau_\varepsilon\tau_\iota}{\tau_\varepsilon+\tau_\iota}\right)^2\omega^2}. \qquad (37)

With respect to the choice of STDP function, we consider the minimal triplet STDP rule [54, 57] that consists of the pair-based STDP function for depression and of a triplet STDP function for potentiation (Fig 1C). Furthermore, we introduce a ‘modulation parameter’ η to model the reshaping of the depression window of the STDP function via modulatory effects.

The depression window of the STDP function can be written as

L_2(\tau_1) = \begin{cases} -\dfrac{A_-}{\eta}\, e^{\tau_1/(\eta\tau_-)} & \text{if } \tau_1 < 0\\[4pt] 0 & \text{otherwise}, \end{cases} \qquad (38)

where τ1 = tpost − tpre denotes the time difference between a post- and a presynaptic spike, A− is the depression learning rate, τ− is the depression time constant and η is the depression modulation parameter. The potentiation window of the STDP function depends on the timing of spike triplets (tpre, tpost, t′post)

L_3(\tau_1,\tau_2) = \begin{cases} A_+\, e^{-\tau_1/\tau_+}\, e^{-\tau_2/\tau_y} & \text{if } \tau_1 \geq 0,\ \tau_2 \geq 0\\[4pt] 0 & \text{otherwise}, \end{cases} \qquad (39)

where again τ1 = tpost − tpre denotes the time difference between a post- and a presynaptic spike and τ2 = t′post − tpost is the time difference between the two postsynaptic spikes; A+ is the potentiation learning rate, τ+ is the potentiation time constant and τy is the second potentiation time constant.

While the ‘A’ parameters scale the amplitude of weight changes, the ‘τ’ coefficients determine how synchronous pre- and post-synaptic spikes must be to drive plasticity. The η parameter enables the modification of the shape of the STDP function. This additional parameter does not affect the total depression and potentiation in the rule and one can easily recover the ‘standard’ expressions for η → 1.

The Fourier transforms for these two functions are

\tilde{L}_2(\omega_1) = -A_-\tau_-\,\frac{1 + j\eta\tau_-\omega_1}{1 + \eta^2\tau_-^2\omega_1^2}, \qquad (40)

and

\tilde{L}_3(\omega_1,\omega_2) = A_+\tau_+\tau_y\,\frac{1 - j\tau_+\omega_1}{1 + \tau_+^2\omega_1^2}\,\frac{1 - j\tau_y\omega_2}{1 + \tau_y^2\omega_2^2}. \qquad (41)
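With these transforms in hand, individual motif coefficients can be approximated by direct numerical quadrature over frequency. The sketch below does this for the pair-based coefficient of Eq 29; the conjugation of the second EPSC factor and the omission of any overall normalization constant (e.g. 1/2π) are simplifying assumptions of this illustration and should be checked against the expressions in S2 Text.

```python
import numpy as np

def E_tilde(omega, tau_e=5e-3, tau_i=5e-3):
    """EPSC transform, Eq 37."""
    tau_c = tau_e * tau_i / (tau_e + tau_i)
    return ((1 + tau_i / tau_e) * (1 - 1j * tau_e * omega) / (1 + tau_e**2 * omega**2)
            - (tau_i / tau_e) * (1 - 1j * tau_c * omega) / (1 + tau_c**2 * omega**2))

def L2_tilde(omega, A_m=0.01, tau_m=33.7e-3, eta=1.0):
    """Depression window transform, Eq 40."""
    return -A_m * tau_m * (1 + 1j * eta * tau_m * omega) / (1 + eta**2 * tau_m**2 * omega**2)

def M_pair(alpha, beta, omega=np.linspace(-3000.0, 3000.0, 600001)):
    """Numerical quadrature of Eq 29 over a finite frequency window (an approximation).
    The second EPSC factor is conjugated here, an assumption of this sketch."""
    integrand = E_tilde(omega)**alpha * np.conj(E_tilde(omega))**beta * L2_tilde(omega)
    return np.trapz(integrand, omega).real

print(M_pair(0, 1), M_pair(1, 1))   # e.g. a reciprocal and the common-input pair contribution
```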

Motif expansion up to third-order

After including Eqs 3, 4 and 5 into Eq 2, we can rewrite it in terms of the order of interactions in which they contribute to the averaged synaptic modification:

\dot{W}_{ij} = \dot{W}_{ij}^{(0)} + \dot{W}_{ij}^{(1)} + \dot{W}_{ij}^{(2)} + \dot{W}_{ij}^{(3)} + \dots \qquad (42)

Assuming non-zero mean rates, no self-excitation (i.e. Wii = 0) and that terms of order higher than three can be disregarded in comparison to lower order ones, the terms of Eq 42 are

\dot{W}_{ij}^{(0)} = r_i r_j M_0 \qquad (43)

for the zeroth-order contributions,

\dot{W}_{ij}^{(1)} = r_j W_{ij} M_{1,0} + r_i W_{ji} M_{0,1} \qquad (44)

for the first-order contributions,

\dot{W}_{ij}^{(2)} = \sum_{k\neq i,j} r_k W_{ik}W_{jk}\, M_{1,1} + r_j (W^2)_{ij}\, M_{2,0} + r_i (W^2)_{ji}\, M_{0,2} + \sum_{k\neq i,j} r_i W_{ik}W_{ki}\, r_j\, M^{\mathrm{trip}}_{\alpha=2,\gamma=0} + \sum_{k\neq i,j} r_k W_{ik}^2\, r_j\, M^{\mathrm{trip}}_{\alpha=1,\gamma=1} + r_j W_{ij}^2\, M^{\mathrm{trip}}_{\alpha=1,\beta=0,\gamma=1}, \qquad (45)

for the second-order contributions, and finally

\dot{W}_{ij}^{(3)} = \sum_{k\neq i,j} r_k (W^2)_{ik}W_{jk}\, M_{2,1} + \sum_{k\neq i,j} r_k W_{ik}(W^2)_{jk}\, M_{1,2} + r_j (W^3)_{ij}\, M_{3,0} + r_i (W^3)_{ji}\, M_{0,3} + \sum_{k\neq i,j} r_k (W^2)_{ik}W_{ik}\, r_j \left(M^{\mathrm{trip}}_{\alpha=2,\gamma=1} + M^{\mathrm{trip}}_{\alpha=1,\gamma=2}\right) + r_j^2 (W^3)_{ii}\, M^{\mathrm{trip}}_{\alpha=3,\gamma=0} + \sum_{k\neq i,j} r_k W_{ik}^2 W_{jk}\, M^{\mathrm{trip}}_{\alpha=1,\beta=1,\gamma=1} \qquad (46)

for the third-order contributions. Examples and illustrations of these motifs are given in Figs 3 and 5. For conciseness, we grouped motif coefficients arising from the pair-based STDP rule and from the triplet STDP rule that shared values of α and β and relabeled them as

M_0 = \tilde{L}_2(0) + r_i \tilde{L}_3(0,0), \qquad (47)
M_{1,0} = r_i M^{\mathrm{trip}}_{\alpha=1,\beta=0}, \qquad (48)
M_{0,1} = M^{\mathrm{pair}}_{\alpha=0,\beta=1} + r_i M^{\mathrm{trip}}_{\alpha=0,\beta=1}, \qquad (49)
M_{2,0} = r_i M^{\mathrm{trip}}_{\alpha=2,\beta=0}, \qquad (50)
M_{0,2} = M^{\mathrm{pair}}_{\alpha=0,\beta=2} + r_i M^{\mathrm{trip}}_{\alpha=0,\beta=2}, \qquad (51)
M_{1,1} = M^{\mathrm{pair}}_{\alpha=1,\beta=1} + r_i M^{\mathrm{trip}}_{\alpha=1,\beta=1}, \qquad (52)
M_{3,0} = r_i M^{\mathrm{trip}}_{\alpha=3,\beta=0}, \qquad (53)
M_{0,3} = M^{\mathrm{pair}}_{\alpha=0,\beta=3} + r_i M^{\mathrm{trip}}_{\alpha=0,\beta=3}, \qquad (54)
M_{2,1} = M^{\mathrm{pair}}_{\alpha=2,\beta=1} + r_i M^{\mathrm{trip}}_{\alpha=2,\beta=1}, \qquad (55)
M_{1,2} = M^{\mathrm{pair}}_{\alpha=1,\beta=2} + r_i M^{\mathrm{trip}}_{\alpha=1,\beta=2}. \qquad (56)

Using the defined functions for the EPSC (Eq 37) and STDP functions (Eqs 40 and 41) we calculated the motif coefficients in Eqs 29–32, and consequently Eqs 47–56. Note, we excluded the higher-than-third-order motif coefficients in Eqs 33–35. The truncated approximation of motifs up to third-order is valid, since the difference of the weight change from the full contribution and the weight change from up to third-order motif truncation is very small (S1 Fig). These quantities represent the strength of contributions of each particular combination of paths from the source neuron to the pre- and postsynaptic neurons involved in the synaptic connection. In principle, also the motifs Mα=0,γ=2 and Mα=0,γ=3 from the auto-covariance and the motifs Mα=2,β=0,γ=1 and Mα=1,β=0,γ=2 from the third-order cumulant would need to be considered, however we find that the contribution of these motifs is zero and therefore we did not include them in our analysis. Since we assume a learning rule balanced in potentiation and depression,

M0=0, (57)

and thus

-A_-\tau_- + r_i A_+\tau_+\tau_y = 0, \qquad (58)

which is independent of the modulation parameter η. This allows us to rewrite these motifs so that they are independent of the mean firing rate of the postsynaptic neuron. Since the firing rate ri is not fixed, it should be noted that this assumption implies that the amplitude of the LTP window A+ adjusts to balance the learning rule, similar to metaplasticity [57]. However, we verify that the firing rates in the system are relatively stable over the time of the simulation (S5 Fig) and therefore A+ does not vary much. We analyze the evolution of these quantities in the main text, because they involve only α and β paths and remain constant throughout the numerical integration, in contrast to the motif coefficients in Eqs 45 and 46 which involve both α and γ paths, and which have an additional rate dependence. The expressions for the motif coefficients defined by Eqs 47–56 in terms of the EPSC and STDP functions’ parameters are given in S2 Text.
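To illustrate how these coefficients enter the drift, the sketch below assembles the first-order contribution (Eq 44) and the cross-covariance part of the second-order contribution (Eq 45) for a given weight matrix, treating the motif coefficients as precomputed numbers. The coefficient values shown are placeholders, not those derived in S2 Text, and the sums over k are taken over all k (with Wii = 0), a small simplification of the k ≠ i, j restriction.

```python
import numpy as np

def drift_low_order(W, r, M):
    """First-order (Eq 44) plus the cross-covariance terms of the second-order drift (Eq 45).
    M holds precomputed motif coefficients; r is the vector of stationary firing rates."""
    W2 = W @ W
    dW  = (W * r[None, :]) * M["1,0"]                 # r_j W_ij M_{1,0}
    dW += (r[:, None] * W.T) * M["0,1"]               # r_i W_ji M_{0,1}
    dW += (W @ np.diag(r) @ W.T) * M["1,1"]           # sum_k r_k W_ik W_jk M_{1,1}
    dW += (W2 * r[None, :]) * M["2,0"]                # r_j (W^2)_ij M_{2,0}
    dW += (r[:, None] * W2.T) * M["0,2"]              # r_i (W^2)_ji M_{0,2}
    np.fill_diagonal(dW, 0.0)
    return dW

# Example with placeholder coefficient values (not those of S2 Text)
M = {"1,0": 2e-4, "0,1": -1e-4, "1,1": 3e-4, "2,0": 1e-4, "0,2": -5e-5}
W = np.random.default_rng(1).uniform(0, 0.01, (48, 48)); np.fill_diagonal(W, 0.0)
r = 150.0 * np.ones(48)
dW = drift_low_order(W, r, M)
```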

Perturbation of the zero-order motif

We consider a small perturbation δ to the zero-order (or rate) motif

M0=±δ. (59)

A minus sign indicates that the balance is tilted towards depression and, conversely, a plus sign conveys a potentiation-dominated regime. Then, given the minimal triplet STDP rule, we obtain that

-A_-\tau_- + r_i A_+\tau_+\tau_y = \pm\delta. \qquad (60)

Since the firing rate contribution to plasticity is now different from zero, chance spike coincidences impact the averaged evolution of the synaptic efficacies as follows

\dot{W}_{ij} = r_i r_j (\pm\delta) + \dots \qquad (61)

In this new scenario, the motif coefficients calculated from the triplet rule (equations given in S2 Text) now scale with (A−τ− ± δ). All weights depress to zero for δ < 0 because we impose a lower bound on the weights at 0, and do not include any growth terms independent of synaptic potentiation.

Additional plasticity mechanisms besides STDP

Although we did not formally model inhibitory plasticity, we assume that the overall effect of the inhibitory population on the synaptic efficacies among excitatory neurons is to balance the network activity. Thus, the sum of inhibitory synapses into each neuron is dynamically adjusted to match the sum of the excitatory synaptic efficacies, such that each element of the inhibitory connectivity matrix is equal to the average of the excitatory input as

W^{\mathrm{inh}} = \begin{bmatrix} w_1^{\mathrm{inh}} \\ \vdots \\ w_N^{\mathrm{inh}} \end{bmatrix} \begin{bmatrix} 1 & \cdots & 1 \end{bmatrix} - D^{\mathrm{inh}}, \qquad (62)

where

w_l^{\mathrm{inh}} = \frac{1}{N-1}\sum_{k} W_{lk} \qquad (63)

is the value of each row element and

D^{\mathrm{inh}} = \begin{bmatrix} w_1^{\mathrm{inh}} & & 0\\ & \ddots & \\ 0 & & w_N^{\mathrm{inh}} \end{bmatrix} \qquad (64)

is a diagonal matrix that accounts for the absence of self-connectivity. Then, the effective connectivity weight matrix is calculated as Weff = W − Winh. The inhibitory connections are fast and updated in each integration step. As mentioned earlier, we refer to Weff as W in the manuscript. It should be noted that deviations from this perfect balance between excitation and inhibition, modeled with an inhibitory multiplicative factor δinh which scales the overall inhibitory input, do not affect the emergence of network structure (S4 Fig). Furthermore, we find that the formation of network structure depends somewhat on the input rate μi but not on heterogeneity in the input firing rates (S3 Fig). The input rates effectively determine the mean firing rates of the network throughout the whole simulation (S5 Fig).
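A minimal sketch of this balancing step (Eqs 62–64):

```python
import numpy as np

def effective_weights(W):
    """Compute Weff = W - Winh following Eqs 62-64: each off-diagonal entry of row l of
    Winh equals the mean excitatory input w_l^inh of neuron l, and the diagonal is zero."""
    N = W.shape[0]
    w_inh = W.sum(axis=1) / (N - 1)                          # Eq 63 (W has a zero diagonal)
    W_inh = np.outer(w_inh, np.ones(N)) - np.diag(w_inh)     # Eq 62 with the diagonal of Eq 64 removed
    return W - W_inh
```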

We also implement heterosynaptic competition based on previous work [6, 80] as an additional mechanism for the plasticity dynamics to restrict the maximum number of strong connections a neuron can make, and thus keep the spectral radius of the connectivity matrix lower than one. The total synaptic input and output of each neuron is limited: the sum of the inbound (afferent) connections to each postsynaptic neuron i and the sum of outbound (efferent) excitatory synaptic efficacies from each presynaptic neuron j have an upper bound Wmax. The plasticity due to heterosynaptic competition can be written as

\dot{W}_{ij}^{\mathrm{hc}} = \left(W_{\max} - \sum_{k} W_{ki}\right) H\!\left(\sum_{k} W_{ki} - W_{\max}\right) + \left(W_{\max} - \sum_{k} W_{jk}\right) H\!\left(\sum_{k} W_{jk} - W_{\max}\right), \qquad (65)

where H is the Heaviside function. Imposing an upper bound wmax for each synaptic efficacy restricts the possible number of connections a neuron can make to Wmax/wmax. Therefore, the average amount of plasticity is the sum of the change due to STDP based on Eq 2 and heterosynaptic competition based on Eq 65:

\dot{W}_{ij} = \nu\left(\dot{W}_{ij}^{\mathrm{STDP}} + \psi\, \dot{W}_{ij}^{\mathrm{hc}}\right). \qquad (66)

Here, the learning rate scale ν ensures that the synaptic efficacy increments in each integration step are small. The relative contribution of heterosynaptic competition to overall plasticity is determined by the heterosynaptic competition term ψ. The values for these parameters can be found in Table 1. The values for the parameters in the Supplementary Figures can be found in Table 2.
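A single integration step of Eq 66 might be sketched as follows, with a user-supplied function standing in for the motif-based STDP drift of Eq 2 (not reproduced here); the parameter defaults follow Table 1, but implementation details such as the simple clipping to [0, wmax] are assumptions of this illustration.

```python
import numpy as np

def euler_step(W, stdp_drift, nu=3.5e-4, psi=0.7, w_max=0.17, W_max=0.85):
    """One plasticity step of Eq 66, combining the STDP drift with the
    heterosynaptic competition term of Eq 65 and hard per-synapse bounds."""
    col = W.sum(axis=0)                                   # sum_k W_ki, indexed by i
    row = W.sum(axis=1)                                   # sum_k W_jk, indexed by j
    hc_i = np.where(col > W_max, W_max - col, 0.0)        # first term of Eq 65
    hc_j = np.where(row > W_max, W_max - row, 0.0)        # second term of Eq 65
    dW = stdp_drift(W) + psi * (hc_i[:, None] + hc_j[None, :])
    W_new = W + nu * dW
    np.fill_diagonal(W_new, 0.0)
    return np.clip(W_new, 0.0, w_max)

# Example usage with a dummy drift (the truncated motif expansion would go here)
W = np.random.default_rng(2).uniform(0, 0.001, (48, 48)); np.fill_diagonal(W, 0.0)
W = euler_step(W, lambda W: np.zeros_like(W))
```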

Table 2. Parameter values for supplementary figures.

⋆ denotes that values are provided in the figures.

Symbol Description S1 Fig S2 Fig S3 Fig S4 Fig S5 Fig
N Number of neurons 12 48
μ External input firing rate 150 Hz 150 Hz
wmax Upper bound for each individual weight 0.17
Wmax Upper bound for total row/ column weight 0.85
A− Depression learning rate 0.01
τ− Depression time constant 33.7 ms [54]
τ+ Potentiation time constant 16.8 ms [54]
τy Second potentiation time constant 114 ms [54]
η Depression modulation parameter 13 13
τε First membrane time constant 5 ms
τι Second membrane time constant 5 ms
ν Scaling parameter of learning rate 3.5 ×10−4
ψ Heterosynaptic competition scaling parameter 0.7

Numerical integration of connectivity matrices

To generate the different connectivity matrices in each Figure, we integrate Eq 2 numerically. The plasticity dynamics are implemented using the Euler method with an adaptive time step. The maximal amount that a weight can change in each integration step is 0.00035. Although the weight evolution is deterministic and determined by the plasticity parameters (Table 1), final connectivity matrices depend on initial connectivity matrices. The initial connection weights are chosen independently from a uniform distribution between 0 and Wmax/N × 0.001, and each one of these initial conditions corresponds to a different “trial”. The numerical integration for each initial condition is continued until the network connectivity achieves a steady state (no longer changes).

Averaged ordered connectivity matrices

The connectivity matrices resulting from integrating Eq 2 numerically are ordered to reflect the graph structure of the network [6] (Fig 6C). K-means classification groups neurons that share similar connectivity using a squared Euclidean distance. We then reorder the connectivity matrix based on the groups identified by the k-means clustering. Since the structures studied depend on initial conditions, despite the deterministic nature of our approach, we average the rearranged synaptic efficacy matrix over many trials with different (but random and weak) initial connectivity to obtain the most likely connectivity (Figs 6E and 9C). Assemblies located at the edges of the connectivity matrices appear sharper due to an artifact of the ordering algorithm, but this does not affect the results.
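A possible implementation of this reordering step, using k-means with a squared Euclidean distance as described above; the choice of clustering features (each neuron's concatenated output and input weights) and the number of clusters are assumptions of this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def reorder_by_assembly(W, n_clusters=4, seed=0):
    """Group neurons with similar connectivity and reorder the matrix accordingly."""
    features = np.hstack([W, W.T])                       # each neuron's outputs and inputs
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(features)
    order = np.argsort(labels)
    return W[np.ix_(order, order)], labels
```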

Network analysis

We calculated graph theoretic measures for directed networks using algorithms of the Brain Connectivity Toolbox [124] from http://www.brain-connectivity-toolbox.net. All graph measures were calculated at the steady state and increased during a simulation as network organization improved.

Clustering coefficient

For each connectivity matrix we computed the clustering coefficient [95]. For node i, this is

C_i = \frac{\text{number of complete triplets}}{\text{number of all possible triplets}}. \qquad (67)

The number of complete triplets is obtained from the product of the corresponding edges of the node (from the adjacency matrix), and the total number of triplets depends on network size. Then, the average of the clustering coefficients over all N vertices is given by [92]

\bar{C} = \frac{1}{N}\sum_{i=1}^{N} C_i. \qquad (68)

Global efficiency

The efficiency in the communication between nodes i and j can be defined to be inversely proportional to the shortest distance. The average efficiency of a network is calculated as [96]

E = \frac{1}{N(N-1)}\sum_{i\neq j}\frac{1}{d(i,j)} \qquad (69)

where N denotes the number of nodes in the network and d(i, j) is the length of the shortest path between a node i and a different node j. As an alternative to the average path length, the global efficiency of a network is defined as

E_{\mathrm{glob}} = \frac{E}{E(\text{full network})} \qquad (70)

where the efficiency is scaled by an ideal graph where all the possible edges exist (i.e. full network). The difference between these measures is that the first measure quantifies the efficiency in a network where only one packet of information is being moved through it and the global measure quantifies the efficiency where all the vertices are exchanging packets of information with each other [96].
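A small sketch of Eq 69, using inverse synaptic weights as edge lengths (one common convention, assumed here rather than taken from the toolbox); Eq 70 then divides the result by the same quantity evaluated on the fully connected graph.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def average_efficiency(W):
    """Average efficiency of Eq 69 for a weighted directed graph, with edge lengths taken
    as inverse weights (assumed convention) and infinite length for absent connections."""
    N = W.shape[0]
    lengths = np.where(W > 0, 1.0 / np.where(W > 0, W, 1.0), np.inf)
    d = shortest_path(lengths, directed=True)
    off_diag = ~np.eye(N, dtype=bool)
    return np.mean(1.0 / d[off_diag])

# Eq 70: E_glob is this quantity divided by its value for the fully connected ('ideal') graph.
```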

Modularity

The modularity Q of a connectivity matrix is a measure of the strength of its division into clusters or modules. Formally, modularity can be calculated as [93]

Q = \frac{1}{2m}\sum_{vw}\left(A_{vw} - \frac{k_v k_w}{2m}\right)\frac{s_v s_w + 1}{2} \qquad (71)

where A is the adjacency matrix of the graph, k is the node degree, v and w are node indices and s is a variable that determines whether a node belongs to a given community or not. Modularity quantifies the proportion of edges that fall within clusters relative to the proportion expected for a randomly wired graph. It is positive if the number of edges within groups exceeds the number expected at random and depends on the chosen method for community detection.
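Eq 71 is written for a split into two groups with s = ±1; the sketch below uses the standard generalization to an arbitrary community assignment (for example one produced by either detection method described next), where the factor (svsw + 1)/2 becomes an indicator that two nodes share a community. It assumes an undirected binary adjacency matrix.

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity (Eq 71) for a binary, undirected adjacency matrix A and a NumPy
    vector of community labels; the +-1 spin product is replaced by a membership indicator."""
    k = A.sum(axis=1)                          # node degrees
    m = A.sum() / 2.0                          # number of edges
    same = (labels[:, None] == labels[None, :]).astype(float)
    return float(((A - np.outer(k, k) / (2 * m)) * same).sum() / (2 * m))
```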

The first algorithm we used for community detection, referred to as the ‘spectral clustering’ algorithm, is based on the fact that the modularity of a network is closely related to the structure of the eigenvalue spectrum of its weight matrix [93, 125, 126]; high modularity means more strongly embedded communities. This is reflected in the spectra of the connectivity matrices as the separation of the eigenvalues into a bulk containing most eigenvalues and a group of outliers, the number of which is often used to estimate the number of communities present in the network [93, 125, 126].

The second algorithm, called the Louvain method [98, 124] is a greedy optimization method. First, smaller cliques are found by optimizing modularity locally on all nodes, then each small-sized community is grouped into one node and the first step is repeated. The complete modularity is then calculated by maximizing this value over all the divisions of the network into clusters [98, 124]. We did not find any relevant differences between the Louvain method [98] and the spectral clustering algorithm [93, 125, 126], which were used to define community structure (see Fig 7C).

Supporting information

S1 Fig. The truncated weight change including motifs up to third-order provides a good match for the full weight change including motifs of all orders.

The total weight change is calculated either by including motifs up to third-order, or using the full contribution of all motifs based on the integral of Eq 2 (main text) in Fourier space. Each dot represents the respective weight change calculated with the truncated (abscissa) and the full version (ordinate) starting with a random set of initial connectivity weights. The axes are normalized to the maximum weight change. Parameters are as in the main text, except for N = 12 (to speed up calculations) and η = 13.

(TIF)

S2 Fig. Spontaneous emergence of assemblies does not depend on the third-order cumulant.

Considering only (up to third order) motifs related to the second-order cross-covariance Cij (blue) leads to generally worse graph measures compared to the case when adding motifs from the second-order auto-covariance Cii (red) and the case where all motifs are considered (yellow). A. Mean clustering coefficient versus the modulation parameter η. B. Mean global efficiency versus the modulation parameter η. C. Mean modularity versus the modulation parameter η. All results are calculated from 100 trials at steady state connectivity. Error bars represent the standard error of the mean.

(TIF)

S3 Fig. Network structure at steady state is sensitive to the external input firing rate, but not to heterogeneities in the input firing rates.

A. Mean clustering coefficient, mean global efficiency and modularity versus the external input firing rate. Assembly formation breaks down for very large input firing rates. B. Mean clustering coefficient, mean global efficiency and modularity versus the standard deviation of firing rate distribution, σ, to introduce heterogeneity in the external input firing rates. Varying σ preserves assembly formation as can be seen from the different graph measures. The mean external input rate was chosen to be 150 Hz. The modulation parameter used is η = 13 and all other parameters are taken as in the main text. Note that the abscissa is logarithmic.

(TIF)

S4 Fig. Departures from the balance of excitation and inhibition do not affect assembly formation.

Mean clustering coefficient, mean global efficiency and modularity versus the inhibitory multiplicative factor δinh = {0.5, 0.8, 1.2, 1.5}, which scales the overall inhibitory matrix (see Methods). Increasing or decreasing δinh does not disrupt assembly formation as can be seen from the comparison of the different graph measures. The modulation parameter used is η = 13 and all other parameters are taken as in the main text.

(TIF)

S5 Fig. The network converges to steady firing rates during ongoing plasticity.

The different curves indicate different external input firing rates and in each case the network converges to the same rate as the external input firing rate. The modulation parameter used is η = 13 and all other parameters are taken as in the main text.

(TIF)

S1 Text. Fourier transform of the second- and third-order cumulants.

(PDF)

S2 Text. Calculation of motif coefficients up to third-order.

(PDF)

Acknowledgments

We thank members of the “Computation in Neural Circuits” group for useful discussions and Leonidas Richter for reading the manuscript. We thank Neta Ravid Tannenbaum and Yoram Burak for sharing the code used to obtain the connectivity matrices, and Lilach Avitan and Jan Mölter for sharing further information regarding the methods used for community detection in their publications.

Data Availability

All relevant data are within the manuscript and its Supporting Information files. All code is available at https://github.com/comp-neural-circuits/Autonomous-assembly-emergence-via-triplet-STDP.

Funding Statement

LM, CM and JG thank the Max Planck Society for funding. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 804824). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Hebb DO. The Organization of Behavior; a neuropsychological theory. New York: Wiley; 1949.
2. Löwel S, Singer W. Selection of intrinsic horizontal connections in the visual cortex by correlated neuronal activity. Science. 1992; 255(5041):209–212.
3. Shatz CJ. The Developing Brain. Sci Am. 1992; 267(3):60–67. doi: 10.1038/scientificamerican0992-60
4. Singer W. Neuronal representations, assemblies and temporal coherence. Prog Brain Res. 1993; 95:461–474. doi: 10.1016/s0079-6123(08)60388-x
5. Ocker GK, Litwin-Kumar A, Doiron B. Self-organization of microcircuits in networks of spiking neurons with plastic synapses. PLoS Comput Biol. 2015; 11(8):e1004458. doi: 10.1371/journal.pcbi.1004458
6. Ravid Tannenbaum N, Burak Y. Shaping neural circuits by high order synaptic interactions. PLoS Comput Biol. 2016; 12(8):e1005056.
7. Carrillo-Reid L, Yang W, Bando Y, Peterka DS, Yuste R. Imprinting and recalling cortical ensembles. Science. 2016; 353(6300):691–694. doi: 10.1126/science.aaf7560
8. Harris KD. Neural signatures of cell assembly organization. Nat Rev Neurosci. 2005; 6:399–407. doi: 10.1038/nrn1669
9. Neves G, Cooke SF, Bliss TVP. Synaptic plasticity, memory and the hippocampus: a neural network approach to causality. Nat Rev Neurosci. 2008; 9:65–75. doi: 10.1038/nrn2303
10. Lansner A. Associative memory models: from the cell-assembly theory to biophysically detailed cortex simulations. Trends Neurosci. 2009; 32(3):178–186. doi: 10.1016/j.tins.2008.12.002
11. Buzsáki G. Neural syntax: Cell assemblies, synapsembles, and readers. Neuron. 2010; 68(3):362–385. doi: 10.1016/j.neuron.2010.09.023
12. Perin R, Berger TK, Markram H. A synaptic organizing principle for cortical neuronal groups. Proc Natl Acad Sci USA. 2011; 108(13):5419–5424. doi: 10.1073/pnas.1016051108
13. Ko H, Hofer SB, Pichler B, Buchanan KA, Sjöström PJ, Mrsic-Flogel TD. Functional specificity of local synaptic connections in neocortical networks. Nature. 2011; 473(7345):87–91. doi: 10.1038/nature09880
14. Ko H, Cossell L, Baragli C, Antolik J, Clopath C, Hofer SB, Mrsic-Flogel TD. The emergence of functional microcircuits in visual cortex. Nature. 2013; 496(7443):96–100. doi: 10.1038/nature12015
15. Holtmaat A, Caroni P. Functional and structural underpinnings of neuronal assembly formation in learning. Nat Neurosci. 2016; 19(12):1553–1562. doi: 10.1038/nn.4418
16. Yoshimura Y, Dantzker JLM, Callaway EM. Excitatory cortical neurons form fine-scale functional networks. Nature. 2005; 433(7028):868–873. doi: 10.1038/nature03252
17. Kampa BM, Letzkus JJ, Stuart GJ. Cortical feed-forward networks for binding different streams of sensory information. Nat Neurosci. 2006; 9(12):1472–1473. doi: 10.1038/nn1798
18. Miner D, Triesch J. Plasticity-driven self-organization under topological constraints accounts for non-random features of cortical synaptic wiring. PLoS Comput Biol. 2016; 12(2):e1004759.
19. Cossell L, Iacaruso MF, Muir DR, Houlton R, Sader EN, Ko H, Hofer SB, Mrsic-Flogel TD. Functional organization of excitatory synaptic strength in primary visual cortex. Nature. 2015; 518(7539):399–403. doi: 10.1038/nature14182
20. Lee WC, Bonin V, Reed M, Graham BJ, Hood G, Glattfelder K, Reid RC. Anatomy and function of an excitatory network in the visual cortex. Nature. 2016; 532(7599):370–374. doi: 10.1038/nature17192
21. Miller JK, Ayzenshtat I, Carrillo-Reid L, Yuste R. Visual stimuli recruit intrinsically generated cortical ensembles. Proc Natl Acad Sci USA. 2014; 111(38):E4053–E4061. doi: 10.1073/pnas.1406077111
22. Kruskal PB, Li L, MacLean JN. Circuit reactivation dynamically regulates synaptic plasticity in neocortex. Nat Commun. 2013; 4:2474.
23. Gebhardt C, Baier H, Del Bene F. Direction selectivity in the visual system of the zebrafish larva. Front Neural Circuits. 2013; 7:111. doi: 10.3389/fncir.2013.00111
24. Romano SA, Pietri T, Pérez-Schuster V, Jouary A, Haudrechy M, Sumbre G. Spontaneous neuronal network dynamics reveal circuit’s functional adaptations for behavior. Neuron. 2015; 85(5):1070–1085. doi: 10.1016/j.neuron.2015.01.027
25. Avitan L, Pujic Z, Mölter J, Poll MVD, Sun B, Teng H, Amor R, Scott EK, Goodhill GJ. Spontaneous activity in the zebrafish tectum reorganizes over development and is influenced by visual experience. Curr Biol. 2017; 27(16):2407–2419.e4. doi: 10.1016/j.cub.2017.06.056
26. Pietri T, Romano SA, Pérez-Schuster V, Boulanger-Weill J, Candat V, Sumbre G. The emergence of the spatial structure of tectal spontaneous activity is independent of visual inputs. Cell Rep. 2017; 19(5):939–948. doi: 10.1016/j.celrep.2017.04.015
27. Marachlian E, Avitan L, Goodhill GJ, Sumbre G. Principles of functional circuit connectivity: Insights from spontaneous activity in the zebrafish optic tectum. Front Neural Circuits. 2018; 12:46.
28. Kriener B, Helias M, Aertsen A, Rotter S. Correlations in spiking neuronal networks with distance dependent connections. J Comput Neurosci. 2009; 27(2):177–200. doi: 10.1007/s10827-008-0135-1
29. Pernice V, Staude B, Cardanobile S, Rotter S. How structure determines correlations in neuronal networks. PLoS Comput Biol. 2011; 7(5):e1002059.
30. Zhao L, Beverlin B, Netoff T, Nykamp DQ. Synchronization from second order network connectivity statistics. Front Comput Neurosci. 2011; 5:28. doi: 10.3389/fncom.2011.00028
31. Roxin A. The role of degree distribution in shaping the dynamics in networks of sparsely connected spiking neurons. Front Comput Neurosci. 2011; 5:8. doi: 10.3389/fncom.2011.00008
32. Gaiteri C, Rubin JE. The Interaction of Intrinsic Dynamics and Network Topology in Determining Network Burst Synchrony. Front Comput Neurosci. 2011; 5:10. doi: 10.3389/fncom.2011.00010
33. Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat Neurosci. 2012; 15(11):1498–1505. doi: 10.1038/nn.3220
34. Trousdale J, Hu Y, Shea-Brown E, Josić K. Impact of network structure and cellular response on spike time correlations. PLoS Comput Biol. 2012; 8(3):e1002408.
35. Pernice V, Deger M, Cardanobile S, Rotter S. The relevance of network micro-structure for neural dynamics. Front Comput Neurosci. 2013; 7:72. doi: 10.3389/fncom.2013.00072
36. Helias M, Tetzlaff T, Diesmann M. The correlation structure of local neuronal networks intrinsically results from recurrent dynamics. PLoS Comput Biol. 2014; 10(1):e1003428.
37. Jovanović S, Rotter S. Interplay between graph topology and correlations of third order in spiking neuronal networks. PLoS Comput Biol. 2016; 12(6):e1004963. doi: 10.1371/journal.pcbi.1004963
38. Hu Y, Trousdale J, Josić K, Shea-Brown E. Motif statistics and spike correlations in neuronal networks. J Stat Mech. 2013; P03012.
39. Hu Y, Trousdale J, Josić K, Shea-Brown E. Local paths to global coherence: Cutting networks down to size. Phys Rev E Stat Nonlin Soft Matter Phys. 2014; 89(3):032802. doi: 10.1103/PhysRevE.89.032802
40. Ocker GK, Hu Y, Buice MA, Doiron B, Josić K, Rosenbaum R, Shea-Brown E. From the statistics of connectivity to the statistics of spike times in neuronal networks. Curr Opin Neurobiol. 2017; 46:109–119. doi: 10.1016/j.conb.2017.07.011
41. Dechery JB, MacLean JN. Functional triplet motifs underlie accurate predictions of single-trial responses in populations of tuned and untuned V1 neurons. PLoS Comput Biol. 2018; 14(5):e1006153. doi: 10.1371/journal.pcbi.1006153
42. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. I. Input selectivity–strengthening correlated input pathways. Biol Cybern. 2009; 101(2):81–102. doi: 10.1007/s00422-009-0319-4
43. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. II. Input selectivity—symmetry breaking. Biol Cybern. 2009; 101(2):103–114. doi: 10.1007/s00422-009-0320-y
44. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks IV: structuring synaptic pathways among recurrent connections. Biol Cybern. 2009; 101(5-6):427–444. doi: 10.1007/s00422-009-0346-1
45. Burkitt AN, Gilson M, van Hemmen JL. Spike-timing-dependent plasticity for neurons with recurrent connections. Biol Cybern. 2007; 96(5):533–546. doi: 10.1007/s00422-007-0148-2
46. Hiratani N, Fukai T. Interplay between Short- and Long-Term Plasticity in Cell-Assembly Formation. PLoS One. 2014; 9(7):e101535. doi: 10.1371/journal.pone.0101535
47. Vasilaki E, Giugliano M. Emergence of Connectivity Motifs in Networks of Model Neurons with Short- and Long-Term Plastic Synapses. PLoS One. 2014; 9(1):e84626. doi: 10.1371/journal.pone.0084626
48. Triplett MA, Avitan L, Goodhill GJ. Emergence of spontaneous assembly activity in developing neural networks without afferent input. PLoS Comput Biol. 2018; 14(9):e1006421. doi: 10.1371/journal.pcbi.1006421
49. Litwin-Kumar A, Doiron B. Formation and maintenance of neuronal assemblies through synaptic plasticity. Nat Commun. 2014; 5:5319. doi: 10.1038/ncomms6319
50. Zenke F, Agnes EJ, Gerstner W. Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nat Commun. 2015; 6:6922. doi: 10.1038/ncomms7922
51. Ocker GK, Doiron B. Training and spontaneous reinforcement of neuronal assemblies by spike timing plasticity. Cereb Cortex. 2019; 29(3):937–951. doi: 10.1093/cercor/bhy001
52. Kempter R, Gerstner W, van Hemmen JL. Hebbian learning and spiking neurons. Physical Review E. 1999; 59(4):4498–4514.
53. Markram H, Gerstner W, Sjöström PJ. A history of spike-timing-dependent plasticity. Front Synaptic Neurosci. 2011; 3:4. doi: 10.3389/fnsyn.2011.00004
54. Pfister JP, Gerstner W. Triplets of spikes in a model of spike timing-dependent plasticity. J Neurosci. 2006; 26(38):9673–9682. doi: 10.1523/JNEUROSCI.1425-06.2006
55. Sjöström PJ, Turrigiano GG, Nelson SB. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron. 2001; 32(6):1149–1164. doi: 10.1016/s0896-6273(01)00542-6
56. Wang HX, Gerkin RC, Nauen DW, Bi GQ. Coactivation and timing-dependent integration of synaptic potentiation and depression. Nat Neurosci. 2005; 8(2):187–193. doi: 10.1038/nn1387
57. Gjorgjieva J, Clopath C, Audet J, Pfister JP. A triplet spike-timing-dependent plasticity model generalizes the Bienenstock-Cooper-Munro rule to higher-order spatiotemporal correlations. Proc Natl Acad Sci USA. 2011; 108(48):19383–19388. doi: 10.1073/pnas.1105933108
58. Ohiorhenuan IE, Mechler F, Purpura KP, Schmid AM, Hu Q, Victor JD. Sparse coding and high-order correlations in fine-scale cortical networks. Nature. 2010; 466(7306):617–621. doi: 10.1038/nature09178
59. Montani F, Ince RAA, Senatore R, Arabzadeh E, Diamond ME, Panzeri S. The impact of high-order interactions on the rate of synchronous discharge and information transmission in somatosensory cortex. Philos Trans A Math Phys Eng Sci. 2009; 367(1901):3297–3310. doi: 10.1098/rsta.2009.0082
60. Ohiorhenuan IE, Victor JD. Information-geometric measure of 3-neuron firing patterns characterizes scale-dependence in cortical networks. J Comput Neurosci. 2010; 30(1):125–141. doi: 10.1007/s10827-010-0257-0
61. Yu S, Yang H, Nakahara H, Santos GS, Nikolic D, Plenz D. Higher-order interactions characterized in cortical activity. J Neurosci. 2011; 31(48):17514–17526.
62. Bohte SM, Spekreijse H, Roelfsema PR. The effects of pair-wise and higher order correlations on the firing rate of a post-synaptic neuron. Neural Comput. 2000; 12(1):153–179. doi: 10.1162/089976600300015934
  • 63. Zylberberg J, Shea-Brown E. Input nonlinearities can shape beyond-pairwise correlations and improve information transmission by neural populations. Phys Rev E Stat Nonlin Soft Matter Phys. 2015; 92(6):062707 10.1103/PhysRevE.92.062707 [DOI] [PubMed] [Google Scholar]
  • 64. Barreiro AK, Gjorgjieva J, Rieke F, Shea-Brown E. When do microcircuits produce beyond-pairwise correlations? Front Comput Neurosci. 2014;8:10 10.3389/fncom.2014.00010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65. Cayco-Gajic NA, Zylberberg J, Shea-Brown E. Triplet correlations among similarly tuned cells impact population coding. Front Comput Neurosci. 2015; 9:57 10.3389/fncom.2015.00057 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66. Amari S, Nakahara H, Wu S, Sakai Y. Synchronous firing and higher-order interactions in neuron pool. Neural Comput. 2003; 15(1):127–142. 10.1162/089976603321043720 [DOI] [PubMed] [Google Scholar]
  • 67. Macke JH, Opper M, Bethge M. Common input explains higher-order correlations and entropy in a simple model of neural population activity. Phys Rev Lett. 2011; 106(20):208102 10.1103/PhysRevLett.106.208102 [DOI] [PubMed] [Google Scholar]
  • 68. Montangie L, Montani F. Quantifying higher-order correlations in a neuronal pool. Physica A: Statistical Mechanics and its Applications. 2015; 421:388–400. [Google Scholar]
  • 69. Montangie L, Montani F. Common inputs in subthreshold membrane potential: The role of quiescent states in neuronal activity. Phys Rev E. 2018; 97(6-1):060302 10.1103/PhysRevE.97.060302 [DOI] [PubMed] [Google Scholar]
  • 70. Hawkes AG. Spectra of some self-exciting and mutually exciting point processes. Biometrika. 1971; 58(1):83–90. [Google Scholar]
  • 71. Bi GQ, Poo MM. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci. 1998; 18(24):10464–10472. 10.1523/JNEUROSCI.18-24-10464.1998 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72. Brette R. Philosophy of the spike: Rate-based vs. spike-based theories of the brain. Front Syst Neurosci. 2015; 9:151 10.3389/fnsys.2015.00151 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73. Gjorgjieva J, Drion G, Marder E. Computational implications of biophysical diversity and multiple timescales in neurons and synapses for circuit performance. Curr Opin Neurobiol. 2016; 37:44–52. 10.1016/j.conb.2015.12.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74. Hansel D, Mato G. Asynchronous states and the emergence of synchrony in large networks of interacting excitatory and inhibitory neurons. Neural Comput. 2003; 15(1):1–56. 10.1162/089976603321043685 [DOI] [PubMed] [Google Scholar]
  • 75. Cessac B, Viéville T. On dynamics of integrate-and-fire neural networks with conductance based synapses. Frontiers in Computational Neuroscience. 2008; 2:2 10.3389/neuro.10.002.2008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, Harris KD. The asynchronous state in cortical circuits. Science. 2010; 327(5965):587–590. 10.1126/science.1179850 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science. 2011; 334(6062):1569–1573. 10.1126/science.1211095 [DOI] [PubMed] [Google Scholar]
  • 78. Luz Y, Shamir M. Balancing feed-forward excitation and inhibition via Hebbian inhibitory synaptic plasticity. PLoS Comput Biol. 2012; 8(1):e1002334 10.1371/journal.pcbi.1002334 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79. Rubin R, Abbott LF, Sompolinsky H. Balanced excitation and inhibition are required for high-capacity, noise-robust neuronal selectivity. Proc Natl Acad Sci USA. 2017; 114(44):E9366–E9375. 10.1073/pnas.1705841114 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80. Fiete IR, Senn W, Wang CZH, Hahnloser RHR. Spike-time-dependent plasticity and heterosynaptic competition organize networks to produce long scale-free sequences of neural activity. Neuron. 2010; 65(4):563–576. 10.1016/j.neuron.2010.02.003 [DOI] [PubMed] [Google Scholar]
  • 81. Field R, D’amour J, Tremblay R, Miehl C, Rudy B, Gjorgjieva J, Froemke RC. Heterosynaptic plasticity determines the set-point for cortical excitatory-inhibitory balance. Neuron. 2020; 106:1–13. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 82. Froemke RC, Poo M, Dan Y. Spike-timing-dependent synaptic plasticity depends on dendritic location. Nature. 2005; 434(7030):221–225. 10.1038/nature03366 [DOI] [PubMed] [Google Scholar]
  • 83. Cassenaer S, Laurent G. Conditional modulation of spike-timing-dependent plasticity for olfactory learning. Nature. 2012; 482(7383):47–52. 10.1038/nature10776 [DOI] [PubMed] [Google Scholar]
  • 84. Frémaux N, Gerstner W. Neuromodulated spike-timing-dependent plasticity, and theory of three-factor learning rules. Front Neural Circuits. 2016; 9:85 10.3389/fncir.2015.00085 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 85. Pedrosa V, Clopath C. The role of neuromodulators in cortical plasticity. A computational perspective. Front Synaptic Neurosci. 2017; 8:38 10.3389/fnsyn.2016.00038 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86. Foncelle A, Mendes A, Jędrzejewska-Szmek J, Valtcheva S, Berry H, Blackwell KT, Venance L. Modulation of spike-timing dependent plasticity: Towards the inclusion of a third factor in computational models. Front Comput Neurosci. 2018; 12:49 10.3389/fncom.2018.00049 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 87. Bissière S, Humeau Y, Lüthi A. Dopamine gates LTP induction in lateral amygdala by suppressing feedforward inhibition. Nat Neurosci. 2003; 6(6):587–592. 10.1038/nn1058 [DOI] [PubMed] [Google Scholar]
  • 88. Seol GH, Ziburkus J, Huang S, Song L, Kim IT, Takamiya K, Huganir RL, Lee HK, Kirkwood A. Neuromodulators control the polarity of spike-timing-dependent synaptic plasticity. Neuron. 2007; 55(6):919–929. 10.1016/j.neuron.2007.08.013 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 89. Clopath C, Büsing L, Vasilaki E, Gerstner W. Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nat Neurosci. 2010; 13(3):344–352. 10.1038/nn.2479 [DOI] [PubMed] [Google Scholar]
  • 90. Gjorgjieva J, Evers JF, Eglen SJ. Homeostatic activity-dependent tuning of recurrent networks for robust propagation of activity. J Neurosci. 2016; 36(13):3722–3734. 10.1523/JNEUROSCI.2511-15.2016 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 91. Babadi B, Abbott LF. Pairwise analysis can account for network structures arising from Spike-Timing Dependent Plasticity. PLoS Comput Biol. 2013; 9(2):e1002906 10.1371/journal.pcbi.1002906 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 92. Watts DJ, Strogatz SH. Collective dynamics of ‘small-world’ networks. Nature. 1998; 393(6684):440–442. 10.1038/30918 [DOI] [PubMed] [Google Scholar]
  • 93. Newman MEJ. Modularity and community structure in networks. Proc Natl Acad Sci USA. 2006; 103(23):8577–8582. 10.1073/pnas.0601602103 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 94. Betzel R, Wood KC, Angeloni C, Geffen MN, Bassett DS. Stability of spontaneous, correlated activity in mouse auditory cortex. PLoS Comput Biol. 2019; 15(12):e1007360 10.1371/journal.pcbi.1007360 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 95. Fagiolo G. Clustering in complex directed networks. Phys Rev E Stat Nonlin Soft Matter Phys. 2007; 76(2):026107 10.1103/PhysRevE.76.026107 [DOI] [PubMed] [Google Scholar]
  • 96. Latora V, Marchiori M. Efficient behavior of small-world networks. Phys Rev Lett. 2001; 87(19):198701 10.1103/PhysRevLett.87.198701 [DOI] [PubMed] [Google Scholar]
  • 97. Reichardt J, Bornholdt S. Statistical mechanics of community detection. Phys Rev E Stat Nonlin Soft Matter Phys. 2006; 74(1):016110 10.1103/PhysRevE.74.016110 [DOI] [PubMed] [Google Scholar]
  • 98. Blondel VD, Guillaume JL, Lambiotte R, Lefebvre E. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment. 2008; 10:P10008. [Google Scholar]
  • 99. Feldmann DE. Timing-Based LTP and LTD at Vertical Inputs to Layer II/III Pyramidal Cells in Rat Barrel Cortex. Neuron. 2000; 27(1):45–56. [DOI] [PubMed] [Google Scholar]
  • 100. Graupner M, Wallisch P, Ostojic S. Natural Firing Patterns Imply Low Sensitivity of Synaptic Plasticity to Spike Timing Compared with Firing Rate. J Neurosci. 2016; 36(44):11238–11258. 10.1523/JNEUROSCI.0104-16.2016 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 101. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks III: Partially connected neurons driven by spontaneous activity. Biol Cybern. 2009; 101(5-6):411–426. 10.1007/s00422-009-0343-4 [DOI] [PubMed] [Google Scholar]
  • 102. Jovanović S, Hertz J, Rotter S. Cumulants of Hawkes point processes. Phys Rev E Stat Nonlin Soft Matter Phys. 2015; 91(4):042802 10.1103/PhysRevE.91.042802 [DOI] [PubMed] [Google Scholar]
  • 103. Luczak A, Barthó P, Marguet SL, Buzsaki G, Harris KD. Sequential structure of neocortical spontaneous activity in vivo. Proc Natl Acad Sci USA. 2007; 104(1):347–352. 10.1073/pnas.0605643104 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 104. Pillow JW, Shlens J, Paninski L, Sher A, Litke AM, Chichilnisky EJ, Simoncelli EP. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature. 2008; 454(7207):995–999. 10.1038/nature07140 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 105. Olshausen BA, Field DJ. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature. 1996; 381(6583):607–609. 10.1038/381607a0 [DOI] [PubMed] [Google Scholar]
  • 106. Simoncelli EP, Olhausen BA. Natural Image Statistics and Neural Representation. Annu Rev Neurosci. 2001; 24:1193–1216. 10.1146/annurev.neuro.24.1.1193 [DOI] [PubMed] [Google Scholar]
  • 107. Bell AJ, Sejnowski TJ. An information-maximization approach to blind separation and blind deconvolution. Neural Comp. 1995; 7(6):1129–1159. [DOI] [PubMed] [Google Scholar]
  • 108. Intrator N, Cooper LN. Objective function formulation of the BCM theory of visual cortical plasticity: Statistical connections, stability conditions. Neural Networks. 1992;5(1):3–17. [Google Scholar]
  • 109. Blais BS, Intrator N, Shouval H, Cooper L. Receptive field formation in natural scene environments. Comparison of single-cell learning rules. Neural Comp. 1998; 10(7):1797–1813. [DOI] [PubMed] [Google Scholar]
  • 110. Song S, Sjöström PJ, Reigl M, Nelson S, Chklovskii DB. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 2005; 3(10):e350. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 111. Lichtman J, Helmstaedter M. Connectomics at the cutting edge: Challenges and opportunities in high-resolution brain mapping. Science. 2014; 346(6209):651–651. [Google Scholar]
  • 112. Swanson LW, Lichtman JW. From Cajal to Connectome and Beyond. Ann Rev Neurosci. 2016; 39:197–216. 10.1146/annurev-neuro-071714-033954 [DOI] [PubMed] [Google Scholar]
  • 113. Blankenship AG, Feller MB. Mechanisms underlying spontaneous patterned activity in developing neural circuits. Nat Rev Neurosci. 2010; 11(1):18–29. 10.1038/nrn2759 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 114. Ackman JB, Burbridge TH, Crair MC. Retinal waves coordinate patterned activity throughout the developing visual system. Nature. 2012; 490(7419):219–225. 10.1038/nature11529 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 115. Richter LM, Gjorgjieva J. Understanding neural circuit development through theory and models. Curr Opin in Neurobiol. 2017; 46:39–47. [DOI] [PubMed] [Google Scholar]
  • 116. Golshani P, Gonçalves JT, Khoshkhoo S, Mostany R, Smirnakis S, Portera-Cailliau C. Internally Mediated Developmental Desynchronization of Neocortical Network Activity. J Neurosci. 2009; 29(35):10890–10899. 10.1523/JNEUROSCI.2012-09.2009 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 117. Rochefort NL, Garaschuk O, Milos RI, Narushima M, Marandi N, Pichler B, Kovalchuk Y, Konnerth A. Sparsification of neuronal activity in the visual cortex at eye-opening. Proc Natl Acad Sci USA. 2009; 106(35):15049–15054. 10.1073/pnas.0907660106 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 118. Butts DA, Kanold PO, Shatz CJ. A Burst-Based “Hebbian” Learning Rule at Retinogeniculate Synapses Links Retinal Waves to Activity-Dependent Refinement. PLoS Biol. 2007; 5(3):e61 10.1371/journal.pbio.0050061 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 119. Winnubst J, Cheyne JE, Niculescu D, Lohmann C. Spontaneous Activity Drives Local Synaptic Plasticity In Vivo. Neuron. 2015;87(2):399–410. 10.1016/j.neuron.2015.06.029 [DOI] [PubMed] [Google Scholar]
  • 120. Miconi T, McKinstry JL, Edelman GM. Spontaneous emergence of fast attractor dynamics in a model of developing primary visual cortex. Nat Commun. 2016; 7:13208 10.1038/ncomms13208 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 121. Gerstner W, Kempter R, van Hemmen JL, Wagner H. A neuronal learning rule for sub-millisecond temporal coding. Nature. 1996; 383(6595):76–78. 10.1038/383076a0 [DOI] [PubMed] [Google Scholar]
  • 122. Kistler WM, van Hemmen JL. Modeling synaptic plasticity in conjunction with the timing of pre- and postsynaptic action potentials. Neural Comput. 2000; 12(2):385–405. 10.1162/089976600300015844 [DOI] [PubMed] [Google Scholar]
  • 123. Song S, Miller KD, Abbott LF. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat Neurosci. 2000; 3(9):919–926. 10.1038/78829 [DOI] [PubMed] [Google Scholar]
  • 124. Rubinov M, Sporns O. Complex network measures of brain connectivity: Uses and interpretations. Neuroimage. 2010; 52(3):1059–1069. 10.1016/j.neuroimage.2009.10.003 [DOI] [PubMed] [Google Scholar]
  • 125. Leicht EA, Newman MEJ. Community structure in directed networks. Phys Rev Lett. 2008; 100(11):118703 10.1103/PhysRevLett.100.118703 [DOI] [PubMed] [Google Scholar]
  • 126. Nadakuditi RR, Newman MEJ. Spectra of random graphs with arbitrary expected degrees. Phys Rev E Stat Nonlin Soft Matter Phys. 2013; 87(1):012803 10.1103/PhysRevE.87.012803 [DOI] [PubMed] [Google Scholar]


Supplementary Materials

S1 Fig. The truncated weight change including motifs up to third-order provides a good match for the full weight change including motifs of all orders.

The total weight change is calculated either by including motifs up to third order, or by using the full contribution of all motifs based on the integral of Eq 2 (main text) in Fourier space. Each dot represents the weight change calculated with the truncated version (abscissa) and the full version (ordinate), starting from a random set of initial connectivity weights. The axes are normalized to the maximum weight change. Parameters are as in the main text, except N = 12 (to speed up calculations) and η = 13.

(TIF)
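A scatter comparison like the one in S1 Fig can be produced in a few lines once both estimates are available. The sketch below is illustrative only: it assumes two precomputed arrays, dW_truncated and dW_full (hypothetical names), holding the per-synapse weight changes from the third-order truncation and from the full motif expansion, and normalizes each by its maximum absolute value before plotting.

```python
import numpy as np
import matplotlib.pyplot as plt

def compare_weight_changes(dW_truncated, dW_full):
    """Scatter the truncated (abscissa) against the full (ordinate) weight
    change, each normalized to its maximum absolute value."""
    x = np.ravel(dW_truncated) / np.max(np.abs(dW_truncated))
    y = np.ravel(dW_full) / np.max(np.abs(dW_full))
    plt.scatter(x, y, s=10, alpha=0.6)
    plt.plot([-1, 1], [-1, 1], "k--", lw=1)  # identity line for reference
    plt.xlabel("truncated weight change (normalized)")
    plt.ylabel("full weight change (normalized)")
    plt.show()
```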

S2 Fig. Spontaneous emergence of assemblies does not depend on the third-order cumulant.

Considering only motifs (up to third order) related to the second-order cross-covariance Cij (blue) yields generally worse graph measures than additionally including motifs from the second-order auto-covariance Cii (red) or considering all motifs (yellow). A. Mean clustering coefficient versus the modulation parameter η. B. Mean global efficiency versus the modulation parameter η. C. Mean modularity versus the modulation parameter η. All results are calculated from 100 trials at steady-state connectivity. Error bars represent the standard error of the mean.

(TIF)
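The three graph measures reported here and throughout (mean clustering coefficient, global efficiency, and modularity) can be approximated from a steady-state weight matrix with standard tools. The following sketch uses networkx and is a simplified stand-in, not the paper's analysis code: the threshold value is an arbitrary placeholder, networkx's global efficiency is unweighted and defined only for undirected graphs (so an undirected copy is used), and the modularity is that of a Louvain partition rather than the exact partitioning procedure used in the paper.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community as nx_comm

def graph_measures(W, threshold=0.5):
    """Approximate mean clustering coefficient, global efficiency and
    modularity of a connectivity matrix W. Entries below `threshold`
    (a placeholder value, not the paper's) are discarded."""
    A = np.where(W >= threshold, W, 0.0)
    G = nx.from_numpy_array(A, create_using=nx.DiGraph)

    # Weighted clustering coefficient, generalized to directed graphs.
    mean_clustering = float(np.mean(list(nx.clustering(G, weight="weight").values())))

    # Binary approximation of global efficiency on an undirected copy.
    efficiency = nx.global_efficiency(G.to_undirected())

    # Modularity of a Louvain partition (community detection is stochastic).
    parts = nx_comm.louvain_communities(G, weight="weight", seed=0)
    modularity = nx_comm.modularity(G, parts, weight="weight")

    return mean_clustering, efficiency, modularity
```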

S3 Fig. Network structure at steady state is sensitive to the external input firing rate, but not to heterogeneities in the input firing rates.

A. Mean clustering coefficient, mean global efficiency and modularity versus the external input firing rate. Assembly formation breaks down for very large input firing rates. B. Mean clustering coefficient, mean global efficiency and modularity versus the standard deviation σ of the firing rate distribution used to introduce heterogeneity in the external input firing rates. Varying σ preserves assembly formation, as can be seen from the different graph measures. The mean external input rate was chosen to be 150 Hz. The modulation parameter used is η = 13 and all other parameters are taken as in the main text. Note that the abscissa is logarithmic.

(TIF)
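As a minimal sketch of the heterogeneity manipulation in panel B, per-neuron external input rates can be drawn around the 150 Hz mean with standard deviation σ. The truncated normal draw below is an assumption made for illustration; it is not necessarily the distribution used in the paper.

```python
import numpy as np

def heterogeneous_input_rates(n_neurons, mean_rate=150.0, sigma=10.0, seed=0):
    """Draw per-neuron external input rates (Hz) with mean `mean_rate` and
    standard deviation `sigma`; negative draws are clipped to zero.
    With sigma = 0 every neuron receives the same external rate."""
    rng = np.random.default_rng(seed)
    rates = rng.normal(mean_rate, sigma, size=n_neurons)
    return np.clip(rates, 0.0, None)
```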

S4 Fig. Departures from the balance of excitation and inhibition do not affect assembly formation.

Mean clustering coefficient, mean global efficiency and modularity versus the inhibitory multiplicative factor δinh ∈ {0.5, 0.8, 1.2, 1.5}, which scales the overall inhibitory matrix (see Methods). Neither increasing nor decreasing δinh disrupts assembly formation, as can be seen from the comparison of the different graph measures. The modulation parameter used is η = 13 and all other parameters are taken as in the main text.

(TIF)
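The manipulation in S4 Fig amounts to a uniform rescaling of the inhibitory connectivity; a minimal sketch (with hypothetical variable names) is:

```python
import numpy as np

def scale_inhibition(W_inh, delta_inh):
    """Uniformly scale the (fixed) inhibitory weight matrix by delta_inh,
    e.g. 0.5, 0.8, 1.2 or 1.5, to push the network away from E/I balance."""
    return delta_inh * np.asarray(W_inh)
```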

S5 Fig. The network converges to steady firing rates during ongoing plasticity.

Different curves correspond to different external input firing rates; in each case the network firing rate converges to the external input firing rate. The modulation parameter used is η = 13 and all other parameters are taken as in the main text.

(TIF)

S1 Text. Fourier transform of the second- and third-order cumulants.

(PDF)

S2 Text. Calculation of motif coefficients up to third-order.

(PDF)

Data Availability Statement

All relevant data are within the manuscript and its Supporting Information files. All code is available at https://github.com/comp-neural-circuits/Autonomous-assembly-emergence-via-triplet-STDP.

