Abstract
Neural activity sequences are ubiquitous in the brain and play pivotal roles in functions such as long-term memory formation and motor control. While conditions for storing and reactivating individual sequences have been thoroughly characterized, it remains unclear how multiple sequences may interact when activated simultaneously in recurrent neural networks. This question is especially relevant for weak sequences, composed of fewer neurons, competing against strong sequences. Using a non-linear rate-based model and a spiking model with discrete, pre-configured assemblies, we demonstrate that weak sequences can compensate for their competitive disadvantage either by increasing excitatory connections between subsequent assemblies or by cooperating with other co-active sequences. Further, our models suggest that such cooperation can negatively affect sequence speed unless subsequently active assemblies are paired. Our analysis characterizes the conditions for successful sequence progression in isolated, competing, and cooperating assembly sequences, and identifies the distinct contributions of recurrent and feed-forward projections. This proof-of-principle study shows how even disadvantaged sequences can be prioritized for reactivation, a process that has recently been implicated in hippocampal memory processing.
Author summary
While competition and cooperation are well-studied phenomena in various domains of life, the conditions under which they occur in neuronal circuits require further investigation. A central open question is what allows a neural activity sequence to reactivate when competing sequences are present. To address this knowledge gap, we built two types of simple network models: rate-based models and more biologically plausible spiking models. Each model contained up to three sequences, composed of discrete neuronal assemblies connected by feed-forward and recurrent projections. We found that a) both feed-forward and recurrent projections play crucial roles for a sequence to withstand competition, b) weak sequences can overcome their competitive disadvantage through cooperation, c) Hebbian cooperation can, however, negatively affect sequence speed, and d) this slowdown is avoided when cooperative projections are shifted forward in time. Despite their simplicity, our rate-based and spiking models allow us to make predictions about network properties and plasticity mechanisms in brain regions involved in sequence processing. Given the ubiquitous role of neural activity sequences in information processing, a deeper comprehension of their dynamics will help to unravel basic and pathological brain functions.
1. Introduction
Sequences of neural activity are a universal phenomenon in the brain, fundamentally underpinning a range of functions including olfactory processing [20], birdsong generation [23], motor control [15], and episodic memory encoding in the hippocampus [11,13,19,47]. These sequences unfold over various timescales and can be driven by either external stimuli or intrinsic mechanisms. Thus, to understand information processing in the brain, we need to comprehend the dynamics of neural activity sequences.
The emergence and reliable propagation of individual neural activity sequences have been extensively studied using computational models [1,2,4,5,8,12,18,26,28,30,32,35,38–40,43,46,53,57,58,69]. A number of studies characterized conditions for storing and reactivating multiple sequences in recurrent networks [1,4,5,32,35,51,57]. However, interactions between sequences within a network are less understood, and in particular the influence of competition and cooperation on sequence reactivation has not yet come into focus.
A popular experimental paradigm to expose the functional role of neural activity sequences is to record the activity of hippocampal neurons in a spatial navigation task, commonly performed in rats or mice. While traversing an environment, place cells are activated in a sequential manner [13,47]. Subsequently, when the animal is resting, planning or consuming, the same neural activity sequences may be reactivated (or replayed) at a faster time scale during sharp wave ripple (SWR) events [11,29,56,66]. Such offline reactivation can represent multiple distinct experiences [54]. However, it is assumed that normally only one sequence is reactivated per sharp-wave ripple [24].
Replay of sequences is crucial for memory consolidation [14,17,21,48]. Successful generation of long sequences during SWRs is associated with better memory [17]. Moreover, the probability that a particular sequence will be reactivated varies with experience, with novel and reward-related sequences being prioritized [3,27,42,55; but see 22]. Intriguingly, the fact that neither the generation of sharp-wave ripples [6,68] nor the reactivation of sequences [7] is abolished by lesions or inhibition of the medial entorhinal cortex, the primary input structure to the hippocampus, suggests the existence of inherent mechanisms for sequence prioritization within the hippocampus.
Hippocampal activity sequences differ in key properties depending on which information they represent. When encoding the location of objects or other animals, fewer hippocampal place cells are recruited and their firing rates are lower compared to place cells for the animal’s own location [9,49]. Thus, hippocampal sequences are likely composed of differently sized cell assemblies. In the following, we call sequences with large assemblies strong and those with small assemblies weak. To consolidate their corresponding experiences, it is conceivable that both weak and strong sequences compete for reactivation during SWRs.
A computational model suggests that successful reactivation becomes more difficult for weak sequences, unless recurrent connections within and/or feed-forward projections between cell assemblies are strengthened [8]. However, the required amount of potentiation increases non-linearly with decreasing assembly size, and synapses may quickly reach their physiological boundaries [8]. If multiple sequences are activated at the same time, mutual inhibition between them may create a winner-take-all type competition. In such a scenario, weak sequences essentially stand no chance of winning the competition.
Here, we explore how weak sequences may cooperate to win over stronger sequences during replay events. Inspired by recent findings about gated synaptic plasticity and mutual feed-forward inhibition between region CA3 and CA2 in the hippocampus, we proposed that co-occurring sequences in these regions may be selectively paired by the release of neuromodulatory substances [59]. In addition to linking distinct information [34,41,67] in each region, mutual excitatory support between CA3 and CA2 sequences may ensure their reactivation, while at the same time recruiting sufficient inhibition to suppress competing sequences [24,36].
To develop a theoretical understanding based on these hippocampal insights, we demonstrate that cooperation and competition of assembly sequences can be implemented both in rate-based and spiking models. Within a sequence, reliable and fast signal transmission is achieved by excitatory feed-forward projections between subsequent assemblies, employing balanced amplification [8,45]. Competition and cooperation are implemented by feed-forward inhibition and excitation across assemblies. Characterizing conditions for competition and cooperation, we show that a) feed-forward excitation is crucial, but must remain within a certain range to avoid excessive and persistent activation, b) recurrence within assemblies helps the surviving sequence to recover, c) feed-forward inhibition can mediate competition, d) excitatory coupling between co-active assemblies allows weak sequences to win, but slows sequence progression, and e) preferentially pairing subsequently instead of co-active assemblies maintains sequence speed. Taken together, these results demonstrate that reactivation dynamics of neural sequences are shaped both by modifying feed-forward properties as well as by interactions among multiple sequences.
2. Results
2.1. Conditions for progression of a single sequence
We used a rate-based model with a non-linear activation function to first study the progression of a single assembly sequence (Fig 1a, 1b). Each assembly is composed of discrete and recurrently interacting populations of excitatory and inhibitory neurons. Sequences are defined by connecting subsequent excitatory populations with feed-forward projections. In addition, all assemblies – independent of their position in the sequence – send feed-forward inhibition to each other, i.e., they send excitatory projections to each other’s inhibitory populations. Parameter values are chosen to be in approximate agreement with experimental findings and existing computational models (see Table 2). For example, the activation function reflects documented peak rates of pyramidal neurons in the hippocampus both in vivo [41] and in vitro [16], and the time constant τ has been adjusted such that individual assemblies in the single-sequence case are active for around 5 ms, which approximates activation times in a spiking model of assembly sequences [8]. Please note that the time constant can be flexibly adjusted to represent activation times or sequence speeds found in real brains.
Fig 1. Recurrent and feed-forward interactions influence the progression of a single sequence.
a) Non-linear activation function of excitatory and inhibitory populations. b) Connections within and between assemblies in a single sequence. Each assembly is formed by two recurrently interacting populations, representing excitatory (E) and inhibitory (I) neurons. A given sequence s0 is established by connecting subsequent excitatory populations via excitatory feed-forward projections. Assemblies generally suppress each other via feed-forward inhibition, i.e., excitatory projections from excitatory populations to all inhibitory populations of other assemblies. c) Successful sequence progression (black region) depends on both the recurrent, prc, and feed-forward, pff, connection probabilities. Red line, analytic solution of the linearized rate model for sustained activity propagation. For low values of prc, the non-linear rate model and the analytic solution diverge. d) Example sequences, corresponding to the red arrows in c). Only s2 and s3 successfully reactivate. Activity of every third excitatory population is shown. Different colors correspond to different excitatory populations. e) Mean activation time across all assemblies. Large values of pff lead to persistent activity and, thus, to large mean activation times. f) Parameter region (black) where all excitatory populations in the sequence become activated, fulfilling condition 1: All active. g) Parameter region fulfilling condition 2: All informative, i.e., all excitatory populations must exceed the activity of others at least once (black). h) Parameter region fulfilling condition 3: Sparse activity. Red, dashed contours in e), f), g) and h) correspond to the black region for successful sequence progression in c).
Table 2. Parameters of the rate-based and the spiking model.
Columns are organized by figure and sequence. Where necessary, rows are split into default parameters (upper part) and parameter ranges (lower part). If not otherwise stated, default parameters are used. Ranges are indicated as (start, stop, step size). Dashed lines indicate values that apply to multiple columns. Parameters irrelevant for a given experiment are marked with an × symbol.

To characterize successful sequence progression, we defined four conditions: 1) All active: Within each assembly, the excitatory population must be activated at least at one point in time. 2) All informative: In addition, each excitatory population must exceed the activity of all others at least at one point in time. 3) Sparse activity: Global activity of the whole network must be sparse, i.e., peak activity must not be reached by more than two assemblies at any point in time. 4) Order: Peak activations of the excitatory populations must maintain their predefined order.
Successful sequence progression depends on both recurrent and feed-forward projections. The strength of each connection is the product of the respective excitatory or inhibitory population size, ME or MI, the synaptic strength, gE or gI, and the recurrent or feed-forward connection probability, prc or pff. To investigate the dependence of sequence progression on connection strength, we systematically varied prc and pff, simultaneously for excitatory and inhibitory projections. We found that the range of pff allowing successful sequence progression is relatively narrow compared to that of prc (Fig 1c, black region). Closer investigation revealed that, without sufficient feed-forward projections, activity dies out (s1 in Fig 1d), preventing all assemblies from being activated (Fig 1f), violating condition 1 (Fig 1g). By contrast, strong feed-forward projections led to rapid and persistent activation (s4 in Fig 1d, 1e), violating condition 2 (Fig 1h). However, if excitatory and inhibitory populations recurrently interact with sufficient strength, assembly activation can become transient, allowing sequences to progress in a sparse fashion, condition 3, for an increasing range of feed-forward weights (s2 and s3 in Fig 1d).
The V-shape of the parameter region reflecting successful progression illustrates the dual role of recurrent interactions (see Fig 1c). On its left flank (for weak feed-forward connections), increasing recurrent interactions, prc > 0.025, decreases the required feed-forward weight pff by amplifying weak inputs. On the right flank (for strong feed-forward connections), stronger recurrent inhibition prevents persistent activity and, thus, increases the permissible feed-forward weights. For extremely low recurrence values, sequence progression is limited to a few specific values of pff (s0 in Fig 1d, 1e).
2.2 Competition between two sequences
Next, we studied competition between two sequences, s0 and s1. As before, each assembly sends feed-forward inhibition to all other assemblies, both within and between sequences (Fig 2a). If the first assemblies in both sequences are simultaneously activated, the interplay between excitation within the assemblies and inhibition between sequences can lead to one of four scenarios: a) Activity in both sequences ceases before the sequence is completed; referred to as no winner; b) s0 successfully progresses and s1 ceases; referred to as s0 wins; c) s1 successfully progresses and s0 ceases; referred to as s1 wins; d) both sequences successfully progress; referred to as both win.
Fig 2. Competition between two sequences.
a) Scheme of connections for the competition scenario. Two sequences, s0 and s1, compete via feed-forward inhibition between all assemblies. For visual clarity, only feed-forward inhibition from one assembly of s0 to two assemblies of s1 is shown. b) Example: The larger sequence s0 wins over s1. Only activities of excitatory populations are shown. Both sequences suppress each other’s activity until s1 ceases and s0 recovers. The last assembly of s1 maintains its activity for longer because there is no subsequently active assembly from which it may receive inhibition. Colors repeat after 10 assemblies. c) Competition scenarios for different values of feed-forward, recurrent, and feed-forward inhibition connection probability. Each column represents a separate parameter scan of the feed-forward (pff), recurrent (prc), or feed-forward inhibition (pffi) connection probability. For each value of the respective connection probability, we perform a full scan across the ME,0, ME,1 space. The upper row summarizes the fraction of the ME,0, ME,1 space covered by the four possible outcomes: both win, black; s1 wins, dark grey; s0 wins, light grey; no winner, white. Middle and bottom rows provide examples of the ME,0, ME,1 space for different values of pff (left), prc (center), or pffi (right), as indicated by the dashed and dotted lines. Note that the origin of the plots in b) and in the upper row of c) is at (0,0).
To exemplify competition dynamics, we let two sequences compete for a given set of parameters (Fig 2b). After an initial surge, activity diminished in both sequences. While s1 ceased, activity in s0, with slightly larger assemblies, recovered and successfully progressed.
The competition outcome depends on assembly sizes as well as on interactions within and between sequences. To systematically characterize the occurrence of the four competition scenarios, we varied assembly sizes ME,0 and ME,1 for different connection probabilities of either feed-forward excitation pff, recurrent excitation prc, or feed-forward inhibition pffi. Note that in the following, the sizes of the inhibitory populations, MI,0 and MI,1, are scaled accordingly to maintain a constant ratio of excitatory to inhibitory population sizes.
For example, for moderate levels of feed-forward excitation, pff = 0.014, relatively large assemblies, ME,0, ME,1 > 1400, were required for one sequence to win over the other (Fig 2c, central row in left column). Nevertheless, even for large assemblies, the difference between the sequences had to be prominent; otherwise both sequences ceased to exist. By contrast, if feed-forward excitation is increased, even moderately sized assembly sequences could win as long as they were larger than their competitor (Fig 2c, bottom row in left column). As a consequence, the fraction of the ME,0, ME,1 parameter space in which either s0 or s1 wins increased with the strength of feed-forward connections, pff, until it reached an upper bound (Fig 2c, upper row in left column).
Without recurrence, even sequences with large assemblies failed to successfully propagate when competing. As we showed in Fig 1c and as is known from the literature on synfire chains [12,26,33], individual sequences can progress without recurrent interactions. However, we hypothesized that in a competition scenario, recurrence is paramount for the surviving sequence to recover. To test this, we characterized the competition outcome for a range of assembly sizes given different values of prc. Consistent with our expectation, for relatively weak recurrence, prc = 0.01, larger assemblies were required to avoid both sequences ceasing their progression (Fig 2c, central row in central column). Surprisingly, we found that weak recurrence allows both sequences to win when both are strong (Fig 2c, black region, central row in central column). With increased recurrence, prc = 0.02, the fraction of the ME,0, ME,1 parameter space in which either s0 or s1 wins increased (Fig 2c, bottom row in central column). Thus, we conclude that recurrence is indeed crucial when sequences compete.
Feed-forward inhibition ensures that only one sequence wins. Here, sequences competed by inhibiting each other. Therefore, we expected that relatively weak feed-forward inhibition would allow both sequences to win. Again classifying competition outcomes, we could indeed show that for low values of pffi a considerable fraction of the ME,0, ME,1 parameter space was covered by the both win scenario (Fig 2c, black region, upper and central row, right column). Further, we found that weak feed-forward inhibition corresponded to a large fraction of failed progressions for both sequences, no winner. On the other hand, when pffi was increased, the both win case disappeared (Fig 2c, upper and bottom row, right column).
Competition makes it harder for sequences with small assemblies to ensure progression by strengthening feed-forward projections. As proposed in [59], a weak sequence, comprised of smaller assemblies, that competes with a strong sequence can ensure progression by further potentiating feed-forward projections [see also 61]. However, the required strengthening scales non-linearly with assembly size [8] and may therefore hit physiological boundaries for weak sequences. We hypothesized that this becomes even more severe during competition and tested this prediction by varying the respective parameters for sequence s1 while keeping the parameters of s0 fixed. As expected, we found a non-linear increase in the required feed-forward connection probability for decreasing assembly sizes ME,1 (Fig 3a). If both assembly sizes and feed-forward weights were large (no winner region in the upper right corner of Fig 3a), persistent activity violated both the activation and the sparsity condition (Fig 3c).
Fig 3. Competition makes it harder for sequences with small assemblies to ensure progression by strengthening feed-forward weights.
a) For fixed parameters of s0, ME,0 = 1000, the assembly size ME,1 and the feed-forward connection probability of s1 are varied. For s1 to win (dark area), smaller assemblies must be compensated by increasingly larger feed-forward weights. b) Same as a), but without activating s0.
To compare the presented results to a situation without competition, we repeated the simulation with silenced s0 (Fig 3b). As before, the required feed-forward connection probability increased non-linearly with decreasing assembly size. However, without a competing sequence, smaller feed-forward connection probabilities allowed successful propagation. In conclusion, these findings show that competition increases the required strength of feed-forward weights, making it even more difficult to reactivate sequences with small assemblies.
2.3 Cooperation and competition between three sequences
Given the physiological limits on the potentiation of feed-forward projections, an alternative or additional way for sequences to ensure progression despite competition is to mutually support each other. This may happen if simultaneously active assemblies in co-occurring sequences are paired by Hebbian plasticity [59]. To demonstrate both cooperation and competition between assembly sequences, we created a minimal scenario with one strong, and two weak sequences (Fig 4a). As before, all assemblies mutually inhibited each other and the excitatory populations at the start of each sequence were simultaneously activated. The strong sequence s0 has a competitive advantage due to its larger assemblies. As expected, without any cooperation between s1 and s2, sequence s0 won (case c0, Fig 4b, 4c).
Fig 4. Cooperation via mutual excitation between assembly sequences.
a) Network scheme for competition and cooperation between discrete assembly sequences. The sequence with larger assemblies, s0, competes with s1 and s2. Competition via feed-forward inhibition between all sequences is not shown. Cooperation between s1 and s2 occurs through reciprocal excitatory connections between co-active assemblies. Larger assemblies of s0 are indicated by larger circles. b) Connection probabilities between s1 and s2 are varied. Sufficiently strong mutual excitation is required for s1 and s2 to outcompete s0. c) Examples: c0, mutual excitatory interactions between s1 and s2 are not sufficient; c1, pairing between s1 and s2 is strong enough to win; c2, increased excitatory interactions lead to slower sequence progression, e.g. longer activation times; c3, if excitatory interactions are too strong, sequence progression fails because the first assemblies of s1 and s2 remain active. Only activity of every second excitatory population is shown. d) Mean activation times of excitatory populations in s1. Strong mutual excitation leads to longer activation times and slow sequence progression. e) Ratio of activated excitatory populations in s1. Only in the region of successful cooperation with s2 are all assemblies of s1 activated. f) Activation time of the first excitatory population in s1. Strong interactions halt propagation, because early assemblies maintain activity over the full simulation duration.
When weak sequences were able to cooperate, however, they could overcome a strong competitor. We introduced feed-forward excitatory projections between co-active excitatory populations in s1 and s2, with strengths determined by the corresponding connection probabilities. Given sufficient mutual support, s1 and s2 were able to out-compete s0 (c1, Fig 4b, 4c). However, the stronger the mutual excitatory connections, the longer were the activation times of the excitatory populations (c1 vs. c2, Fig 4d). Increasing the excitatory interactions further led to persistent activity in the first assemblies, halting successful sequence progression (c3, Fig 4c, 4e, 4f).
2.4 Potentiation of excitatory synapses to subsequently active assemblies in paired sequences increases propagation speed
In the previous section we observed that pairing sequences by potentiating connections between co-active assemblies can indeed facilitate their reactivation, but it slows sequence progression and, if too strong, leads to persistent activity. Thus, we hypothesized that sequence speed can be increased by introducing excitatory projections to subsequently active assemblies (see Fig 5a). Adding this type of projection to the three-sequence model and explicitly measuring sequence speed as the inverse of the median interpeak interval of excitatory populations, we observed a range of different speeds depending on the relative levels of potentiation between co-active and subsequent assemblies (Fig 5b, 5c). As expected, stronger potentiation between co-active assemblies led to slower progression (purple region, Fig 5b). Additionally increasing the synaptic strength between subsequent assemblies while maintaining strong synapses between co-active assemblies marginally increased speed at the expense of prolonged activation times of individual assemblies (c0 vs. d0 and d1). However, reducing synapse strength between co-active assemblies while maintaining relatively strong synapses to subsequent assemblies can increase sequence speed up to the level of the competing sequence s0 (yellow region, Fig 5b; example d2, Fig 5c). Thus, potentiating projections to subsequently active assemblies can indeed facilitate reactivation of paired sequences while preserving the timescale of sequence progression.
Fig 5. Shifting feedforward excitation from co-active to subsequent assemblies increases speed of cooperating sequences.
a) Beyond the prior simulations, which modified feedforward excitation solely between simultaneously active assemblies, we also introduce feedforward excitation to subsequent assemblies in the cooperating sequences s1 and s2. b) Sequence speed of s1, relative to s0 of scenario c0 from Fig 4b. Successful reactivations of s1 and s2 are shown for reduced speed (shades of purple) and speed similar to the competing sequence (yellow interval, from 0.9 to 1.0). Non-successful reactivations are indicated in grey. Adjustments in weights are achieved by altering the corresponding connection probabilities. c) Examples: c1, from Fig 4b, without pairing between subsequently active assemblies. d0, increasing feedforward excitation to subsequent assemblies can counterbalance the reduced speed brought on by the additional feedforward excitation between co-active assemblies. d1, further increasing feedforward excitation to subsequent assemblies extends the activation duration of individual assemblies. d2, to reach the speed of the competing sequence s0 from scenario c0, feedforward excitation to co-active assemblies must be reduced.
2.5 Sequence competition and cooperation can be implemented in spiking neural networks
Rate-based models do not fully capture the temporal dynamics and variability inherent to spiking neural networks. Thus, to validate our findings in a more biologically plausible setting, we replicated sequence cooperation and competition in networks of Leaky Integrate-and-Fire (LIF) neurons with current-based synapses. As before, we let a dominant sequence s0 compete with two weaker sequences, s1 and s2, characterized the parameter space for interactions between assemblies of the weaker sequences, and outlined different cases of successful or unsuccessful cooperation (Fig 6). To avoid sequence slowdown upon pairing, we jointly increased excitatory projections between co-active and subsequently active assemblies of the weak sequences.
Fig 6. Sequence competition and cooperation can be implemented in spiking neural networks.
a) Connection probabilities between s1 and s2 are varied. This includes both connections between co-active assemblies and connections to subsequently active assemblies. As before, for s1 and s2 to outcompete s0, sufficiently strong connections are needed (dark grey area). Raster plots exemplify spiking activity of competing sequences. b) Averaged population activity of 10 independent simulations, with parameters corresponding to the examples in a). c0, mutual excitatory interactions between s1 and s2 are not sufficient, s0 wins; c1, pairing between s1 and s2 is strong enough to overcome s0 and successfully progress; c2, increased excitatory interactions extend assembly activation times; c3, if excitatory interactions are too strong, sequence progression fails because assemblies of s1 and s2 remain active. c) Similar experiments as in b), with the only difference that excitatory projections to subsequently active assemblies are absent. Note the reduced propagation speed. In b) and c), activity of every second excitatory population is shown, according to Eq 6.
Similar to our previous results, the strong sequence s0 reliably outcompeted the weaker sequences s1 and s2 when the latter two did not sufficiently cooperate (case c0). However, simultaneously increasing excitatory interactions between co-active and subsequently active assemblies of s1 and s2 allowed them to jointly overcome the strong sequence (cases c1 and c2). As in the rate-based model, we found both a transition region without a clear winner (Fig 6a, middle white region), as well as a region of sustained activity, which prevented proper sequence progression (Fig 6a, upper right white region, and case c3). Thus, even in spiking neural networks, cooperative excitation must remain within certain boundaries to allow for successful sequence propagation. In agreement with the rate model, cooperation without forward-oriented connections to subsequently active assemblies drastically slowed sequence progression (Fig 6c).
To enforce sparse activity, we limit the number of neurons allowed to transmit action potentials within each assembly and timestep via a k-Winners-Take-All (kWTA) mechanism. Such a mechanism may be interpreted as a form of inhibition acting on a fast timescale. Alternatively, we demonstrate successful sequence competition and cooperation with LIF neurons with adaptive membrane dynamics, avoiding the kWTA mechanism (see Fig A in S1 Text).
Taken together, these results corroborate our findings from the rate-based model with more biologically plausible spiking dynamics.
3 Discussion
Using both non-linear rate-based and spiking models with discrete and pre-configured assemblies, we provided a proof-of-principle for competition and cooperation between neural activity sequences. While a simplification of biological complexity, these models allowed us to study the dynamics of isolated, competing and cooperating sequences. Characterizing conditions for successful sequence progression, we can attribute specific roles to the interactions within and between assemblies. Projections between subsequent excitatory populations ensure sequence progression. However, if too weak, activity does not propagate, and if too strong, activity saturates. Recurrent excitatory and inhibitory interactions implement balanced amplification which boosts weak excitatory inputs and prevents saturating activity [25,45]. Thus, with increasing recurrency, a larger range of excitatory inputs is permissible. Further, the boost of weak inputs is especially beneficial in the competition scenario and allows the surviving sequence to quickly recover. Excitatory interactions between co-active assemblies allow weak sequences to win against a stronger competitor, but such interactions slow the propagation of activity. Shifting feedforward excitation from co-active to subsequent assemblies of cooperating sequences increases sequence speed, enabling successful replay without slowing sequence propagation.
Similar dynamics should be present if more than three sequences interact. To remain maximally parsimonious with respect to competition and cooperation, this work considers only three cases: a single isolated sequence, two competing sequences, and two cooperating sequences competing with a third. However, if more than two competing sequences are simultaneously activated, we expect qualitatively similar dynamics. Mutual inhibition, which would presumably build up faster, may silence all but the strongest sequence, which would then recover and successfully reactivate. The model could in principle be extended to allow more than two sequences to cooperate, and here we would also expect dynamics similar to the two-sequence case: cooperation would increase their chance of successful reactivation.
Our results highlight a key constraint on which synapses may be potentiated to support successful pairing of activity sequences. We report that direct excitatory interactions between co-active assemblies lead to increased activation times and slower sequence propagation. Maintaining propagation speed for paired sequences was made possible by potentiating excitatory projections to subsequently active excitatory populations in the cooperating sequence. Such temporally skewed potentiation may naturally occur via asymmetric spike-timing-dependent plasticity during encoding [31,44]. We note that several other mechanisms may modulate speed: Dynamic firing rate adaptation to mimic refractory periods [65], inhibitory oscillations to rhythmically gate propagation [50], or inhibitory plasticity to maintain EI balance [64].
The presented results relate equally to the creation of new synapses and to the potentiation of existing synapses. The strength of an individual connection is defined by the product of population size, average connection probability and synaptic weight. Unlike population size, which also affects other projections of the same population, the specific connection probability and synaptic weight are interchangeable scaling factors.
Weak sequences may also compensate for small assembly sizes by potentiating recurrent interactions, weakening feed-forward inhibition, or recruiting more neurons [assembly outgrowth, see 37,60]. Here, the underlying learning scenario is highly simplified. We assume that during learning pre-configured, recurrently interacting assemblies are activated by external input. This is thought to induce the formation or potentiation of excitatory projections between subsequently activated excitatory populations. For this reason we only evaluated the possibility that weak sequences compensate for small assemblies by strengthening projections between subsequent excitatory populations.
Competition between neural activity sequences may be directly observed in hippocampal recordings. If reactivation of neural activity sequences in the hippocampus is indeed the outcome of a competition process, signatures of this process should be detectable. In the presented model, competition dynamics are characterized by an initial rise in the activity of assemblies of different sequences, followed by reduced activity due to mutual inhibition, until one sequence starts to out-compete the others. Such dynamics should be particularly strong if competing sequences are of equal strength. Studying the reactivation of place cell sequences after running on two or more distinct linear tracks may be an adequate experimental paradigm [24,54].
In summary, our work investigated the interaction of multiple sequences of different strengths within both rate-based and spiking recurrent neural networks. We considered scenarios of competition and cooperation between interacting sequences and characterized the effects on sequence reactivation and sequence dynamics. We showed that pairing weak sequences allows them to win over a stronger competitor. This has implications for hippocampal replay – the number of hippocampal neurons recruited to represent certain types of information strongly differs between sensory modalities [9,52], making it important to develop a theoretical understanding of how heterogeneity in assembly size influences replay statistics.
4 Methods
For a description of all parameters used in the rate-based and spiking models, see Table 1. For the parameter values used in each simulation, see Table 2.
Table 1. Description of parameters in the non-linear rate model and the spiking model.
| Parameter | Description |
|---|---|
| M E,i | number of excitatory neurons per assembly in si |
| M I,i | number of inhibitory neurons per assembly in si |
| | peak activation rate [Hz] |
| τ | population time constant [ms] |
| a | rightward shift of activation function |
| g E | strength of excitatory synapses [nS] |
| g I | strength of inhibitory synapses [nS] |
| p rc | recurrent connection probability |
| p ff | feed-forward exc. connection probability between subsequent assemblies in si |
| | feed-forward exc. connection probability between co-active assemblies in si and sj |
| | feed-forward exc. connection probability between subsequently active assemblies in si and sj |
| p ffi | feed-forward inhibition connection probability |
| n ass | number of assemblies per sequence |
| r 0 | initial activity of first assembly [Hz] |
| t | simulation time [ms] |
| r min | minimal activity for classification [Hz] |
| r tol | activity tolerance for classification [Hz] |
| τ m | membrane time constant of LIF neurons [ms] |
| u rest | resting membrane potential of LIF neurons |
| u reset | reset membrane potential of LIF neurons |
| R | resistance of LIF neuron [Ω] |
| θ | spiking threshold of LIF neurons [mV] |
| dt | time resolution of spiking simulation [ms] |
| r bg | rate of background activity of each population [Hz] |
4.1. Conditions for successful sequence reactivation
To be classified as successfully progressing, a sequence must satisfy the following four conditions: 1) All active: All assemblies must be activated, i.e., there must exist at least one point in time during which the activity of a given excitatory population exceeds a minimal threshold rmin. 2) All informative: Each excitatory population must exceed the activity of all others at least at one point in time. 3) Sparse activity: While the sequence is running, maximum firing rates at any given point in time must not be reached by more than two assemblies simultaneously. To exclude numerical edge cases, we consider assemblies to have similar firing rates whenever the absolute difference is less than rtol. Allowing two assemblies to simultaneously show peak activity is necessary for the time points at which the decreasing activity of the previous assembly equals the increasing activity of the subsequent one. Conditions 2 and 3 ensure that each assembly is prominently activated. Without condition 2, a strongly active assembly could overshadow the activation of subsequent assemblies, which would make the readout more challenging. Condition 3 prevents cases where multiple assemblies simply saturate at the maximum firing rate. 4) Order: Activation times must maintain the sequence order, i.e., the order of peak activities must agree with the predefined order of assemblies in the sequence. Given our predefined one-step feed-forward interactions, this is almost always the case, though we mention it for completeness.
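A minimal sketch of how these conditions could be checked numerically, assuming population rates are stored as a NumPy array of shape (assemblies, time steps); condition 3 is interpreted here as closeness to the momentary maximum, and all function and variable names beyond rmin and rtol are illustrative rather than the actual analysis code.

```python
import numpy as np

def classify_progression(rates, r_min, r_tol):
    """Check the four conditions for successful sequence progression.

    rates : array of shape (n_assemblies, n_timesteps) holding the rates of
            the excitatory populations, ordered by position in the sequence.
    """
    n_ass = rates.shape[0]

    # 1) All active: every excitatory population exceeds r_min at least once.
    all_active = bool(np.all(rates.max(axis=1) > r_min))

    # 2) All informative: every population is the most active one at least once
    #    (evaluated only at times with supra-threshold activity).
    above = rates.max(axis=0) > r_min
    winners = set(rates[:, above].argmax(axis=0)) if above.any() else set()
    all_informative = len(winners) == n_ass

    # 3) Sparse activity: at most two assemblies within r_tol of the momentary
    #    maximum at any point in time.
    n_at_peak = (rates >= rates.max(axis=0) - r_tol).sum(axis=0)
    sparse = bool(np.all(n_at_peak <= 2))

    # 4) Order: peak times follow the predefined assembly order.
    ordered = bool(np.all(np.diff(rates.argmax(axis=1)) >= 0))

    return all_active and all_informative and sparse and ordered
```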
4.2. Assembly sequences in a non-linear rate model
In the non-linear rate model each assembly is formed by one excitatory and one inhibitory population. The evolution of the rate r of a given population i of sequence sj is described by

τ dr/dt = −r + S(x),   (1)
with τ a fixed population time constant, equal across all populations.
The sigmoidal activation function S over the input x is defined by
| (2) |
with [x]+ = max(x, 0) the rectifier (compare Fig 1a) and a, a small constant rightward shift of the activation function, preventing numerical imprecision around x = 0 from inadvertently driving network activity.
Each population in sequence sj receives input via recurrent excitatory and inhibitory projections. Excitatory populations may receive additional excitatory input from the preceding assembly of the same sequence. In the case of cooperating sequences, each excitatory population additionally receives excitatory input from a co-active assembly of another sequence sm. All assemblies send feed-forward inhibition to each other, i.e., they excite each other’s inhibitory populations. Thus, in addition to the recurrent input from their associated excitatory population, inhibitory populations receive input from all remaining n excitatory populations of all sequences sm. The full input to an excitatory population and an inhibitory population of assembly i in sequence sj is thus described by:
| (3) |
| (4) |
Weight values are the product of the number of excitatory, ME,j, and inhibitory, MI,j, neurons in the sending population, equal for all assemblies in a given sequence sj, as well as recurrent, prc, feed-forward, pff, and feed-forward inhibitory, pffi, connection probabilities and excitatory, gE, or inhibitory, gI, synaptic strengths.
w = M{E,I},j · p{rc,ff,ffi} · g{E,I}   (5)
Sequences are composed of nass assemblies; all assemblies within a given sequence are equal in size, and the ratio of excitatory to inhibitory population sizes is fixed.
Sequence speed is determined as the inverse of the median interpeak interval of excitatory populations. Before determining the time points of peak activation, we rounded values to a precision of rtol and ignored values below rmin, to avoid numerical fluctuations being counted as peaks.
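A minimal sketch of this speed measure, assuming the rates of one sequence are stored as rates[assembly, time] sampled with step dt; the names and the exact rounding scheme are illustrative.

```python
import numpy as np

def sequence_speed(rates, dt, r_min, r_tol):
    """Sequence speed as the inverse of the median inter-peak interval."""
    # Round to a precision of r_tol and ignore sub-threshold values so that
    # numerical fluctuations are not mistaken for peaks.
    clean = np.round(rates / r_tol) * r_tol
    clean[clean < r_min] = 0.0

    peak_times = clean.argmax(axis=1) * dt    # peak time of each assembly [ms]
    intervals = np.diff(peak_times)           # inter-peak intervals [ms]
    return 1.0 / np.median(intervals)         # assemblies per ms
```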
Simulations were run for a fixed time interval and a fixed step size with the solve_ivp function from SciPy’s integrate package, using the integration method LSODA. As initial condition, the excitatory population in the first assembly of each activated sequence sj is set to r0, while all other rates are zero.
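For illustration, a minimal sketch of this integration pipeline for a generic network obeying Eq 1; the activation function, the weight matrix, and all parameter values are placeholders rather than the exact model of Eqs 2–5.

```python
import numpy as np
from scipy.integrate import solve_ivp

TAU = 5.0        # population time constant tau [ms] (placeholder value)
R_MAX = 100.0    # peak activation rate [Hz] (placeholder value)
A_SHIFT = 1.0    # rightward shift a of the activation function (placeholder)

def activation(x):
    # Placeholder saturating non-linearity with rectification and rightward
    # shift; it stands in for Eq 2 but is not necessarily the paper's exact form.
    return R_MAX * np.tanh(np.maximum(x - A_SHIFT, 0.0) / R_MAX)

def drdt(t, r, weights):
    # Eq 1 for all populations at once: tau * dr/dt = -r + S(x), where the
    # input x = weights @ r collects recurrent, feed-forward and inhibitory terms.
    return (-r + activation(weights @ r)) / TAU

n_pop = 20                             # excitatory and inhibitory populations
weights = np.zeros((n_pop, n_pop))     # signed weights, to be filled according to Eq 5
r_init = np.zeros(n_pop)
r_init[0] = 10.0                       # initial activity r0 of the first assembly [Hz]

sol = solve_ivp(drdt, t_span=(0.0, 100.0), y0=r_init, args=(weights,),
                method="LSODA", t_eval=np.arange(0.0, 100.0, 0.1))
rates = sol.y                          # shape (n_pop, number of time points)
```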
Simulations and analysis of rate models were performed with Jupyter notebooks 6.0.3 and Python 3.7.8 with standard libraries, such as NumPy 1.18.5, SciPy 1.18.5, Matplotlib 3.2.2 and SymPy 1.5.1.
4.3. Assembly sequences in spiking neural networks
In our experiments, we implemented a network of spiking neurons using PymoNNtorch 0.1.4 [62], a PyTorch-adapted version of PymoNNto [63]. All simulations were executed on CPU. We employed the Leaky Integrate-and-Fire (LIF) neuron model, which is described by the following differential equation:

τm du/dt = −(u − urest) + R I(t),

where τm is the neuronal membrane time constant, urest represents the resting membrane potential, R is the membrane resistance, and I(t) denotes the synaptic input current received by each neuron. The synaptic input current I(t) for each neuron is computed as:
Ii(t) = Σj Wij Σf δ(t − tj,f),

with the sum over the N pre-synaptic neurons j connected to post-synaptic neuron i, W the connectivity matrix, tj,f the spike times defining the spiking activity of neuron j, and δ the Dirac delta function, ensuring that input currents are updated at precise spike times.
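A minimal sketch of how these LIF dynamics can be advanced in discrete time; this is a generic Euler update with placeholder parameter values, not the PymoNNtorch implementation used in the experiments.

```python
import numpy as np

# Placeholder parameter values; see Table 2 for the values actually used.
TAU_M, U_REST, U_RESET, R_MEM, THETA, DT = 10.0, 0.0, 0.0, 1.0, 15.0, 1.0

def lif_step(u, input_current):
    """One Euler step of the leaky integrate-and-fire dynamics."""
    du = (-(u - U_REST) + R_MEM * input_current) * (DT / TAU_M)
    u = u + du
    spikes = u >= THETA      # neurons crossing the firing threshold theta
    u[spikes] = U_RESET      # reset the membrane potential of spiking neurons
    return u, spikes
```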
Each assembly consists of an excitatory and an inhibitory population. The ratio of excitatory to inhibitory neurons within each sequence is fixed. The simulation duration is set to 100 ms, with a time resolution of 1 ms.
A k-Winners-Take-All (kWTA) mechanism enforces sparse and competitive activation patterns. At any time step, only spikes from the k neurons with the highest membrane potential of a population are transmitted. However, membrane potentials of all neurons crossing the firing threshold are reset.
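A minimal sketch of such a kWTA gate, assuming per-population arrays of threshold crossings and membrane potentials; the function name and selection details are illustrative.

```python
import numpy as np

def kwta_transmit(spikes, u, k):
    """Transmit only the spikes of the k neurons with the highest membrane potential.

    spikes : boolean array, True for neurons that crossed the firing threshold
    u      : membrane potentials of the same population
    """
    transmitted = np.zeros_like(spikes)
    top_k = np.argsort(u)[-k:]             # indices of the k highest potentials
    transmitted[top_k] = spikes[top_k]     # gate spike transmission to those neurons
    return transmitted                     # all threshold-crossing neurons are still reset elsewhere
```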
To simulate background activity, at each time step t, a uniformly distributed random number is assigned to each neuron i, and neurons whose random number falls below the predefined activation rate parameter r receive an external input current Ibg. This approach mimics the random synaptic inputs observed in biological neural networks, where most neurons receive ongoing background activity even in the absence of specific stimuli [10].
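A sketch of this background drive, assuming a per-step uniform draw compared against the background rate; the per-step scaling of the rate and the amplitude Ibg are assumptions, and all names are illustrative.

```python
import numpy as np

R_BG = 0.01    # per-step background activation probability (assumed scaling of r)
I_BG = 1.0     # background input current amplitude Ibg (placeholder value)

def background_current(n_neurons, rng):
    """Drive neurons whose uniform random number falls below the background rate."""
    xi = rng.uniform(size=n_neurons)
    return np.where(xi < R_BG, I_BG, 0.0)

# Example usage within a simulation loop:
# rng = np.random.default_rng(0)
# I_ext = background_current(M_E, rng)
```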
To analyze network dynamics, the average neuronal activity of the excitatory population of each assembly is calculated as:
A(t) = (1/ME,i) Σk sk(t),   (6)

where sk(t) denotes the spiking activity of neuron k of the excitatory population and A(t) represents the total activity at time t, normalized by the number of neurons.
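A minimal sketch of this readout, assuming spikes are stored as a boolean array spikes[neuron, time]; the bin size and names are illustrative.

```python
import numpy as np

def population_activity(spikes, bin_size=1):
    """Average activity A(t): spikes per time bin, normalized by the neuron count."""
    n_neurons, n_steps = spikes.shape
    n_bins = n_steps // bin_size
    trimmed = spikes[:, :n_bins * bin_size]
    return trimmed.reshape(n_neurons, n_bins, bin_size).sum(axis=(0, 2)) / n_neurons
```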
The feedforward and recurrent connectivity schemes were adopted from the previously described rate-based models, ensuring a comparable network structure while incorporating more biologically plausible spiking dynamics. However, for simplicity, feedforward inhibition is limited to co-active assemblies of competing sequences.
To start sequences, we forced the excitatory population of the first assembly to spike for 10 ms. For each time step, we randomly selected 10% of the neurons and injected a current of 1000 mA. The kWTA mechanisms remained active.
Supporting information
Sequence competition and cooperation in spiking neural networks with adaptive membrane dynamics.
(PDF)
Acknowledgments
We thank Henning Sprekeler and Tom McHugh for valuable input to an earlier version of this work.
Data Availability
All code is available at https://github.com/tristanstoeber/sequence_competition_cooperation.
Funding Statement
This research was partly funded by Research Council of Norway Grant No. 248828 and the Simula-UCSD-University of Oslo Research and PhD training (SUURPh) program, an international collaboration in computational biology and medicine funded by the Norwegian Ministry of Education and Research. A.B.L. was supported by a Natural Sciences and Engineering Research Council of Canada PGSD-3 scholarship, A.K. by Research project grant 2018-03118 from Swedish Research Council (VR-vetenskap radet) - vr.se. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. T.M.S. and M.F. received salary from the University of Oslo and Simula Research Laboratories, T.M.S. from the Ruhr University Bochum, funded by the German Mercator Research Center Ruhr (Mercur), project number Ex-2021-0018 “NeuroMind. Memories from the Future”, and the University Hospital Frankfurt, A.B.L. from the University of Göttingen and A.K. from the KTH Royal Institute of Technology.
References
- 1. Abeles M, Hayon G, Lehmann D. Modeling compositionality by dynamic binding of synfire chains. J Comput Neurosci. 2004;17(2):179–201. doi: 10.1023/B:JCNS.0000037682.18051.5f
- 2. Amari S. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern. 1977;27(2):77–87. doi: 10.1007/BF00337259
- 3. Ambrose RE, Pfeiffer BE, Foster DJ. Reverse Replay of Hippocampal Place Cells Is Uniquely Modulated by Changing Reward. Neuron. 2016;91(5):1124–36. doi: 10.1016/j.neuron.2016.07.047
- 4. Arnoldi HM, Brauer W. Synchronization without oscillatory neurons. Biol Cybern. 1996;74(3):209–23. doi: 10.1007/BF00652222
- 5. Azizi AH, Wiskott L, Cheng S. A computational model for preplay in the hippocampus. Front Comput Neurosci. 2013;7:161. doi: 10.3389/fncom.2013.00161
- 6. Bragin A, Jandó G, Nádasdy Z, van Landeghem M, Buzsáki G. Dentate EEG spikes and associated interneuronal population bursts in the hippocampal hilar region of the rat. J Neurophysiol. 1995;73(4):1691–705. doi: 10.1152/jn.1995.73.4.1691
- 7. Chenani A, Sabariego M, Schlesiger MI, Leutgeb JK, Leutgeb S, Leibold C. Hippocampal CA1 replay becomes less prominent but more rigid without inputs from medial entorhinal cortex. Nat Commun. 2019;10(1):1341. doi: 10.1038/s41467-019-09280-0
- 8. Chenkov N, Sprekeler H, Kempter R. Memory replay in balanced recurrent networks. PLoS Comput Biol. 2017;13(1):e1005359. doi: 10.1371/journal.pcbi.1005359
- 9. Danjo T, Toyoizumi T, Fujisawa S. Spatial representations of self and other in the hippocampus. Science. 2018;359(6372):213–8. doi: 10.1126/science.aao3898
- 10. Destexhe A, Rudolph M, Fellous JM, Sejnowski TJ. Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons. Neuroscience. 2001;107(1):13–24. doi: 10.1016/s0306-4522(01)00344-x
- 11. Diba K, Buzsáki G. Forward and reverse hippocampal place-cell sequences during ripples. Nat Neurosci. 2007;10(10):1241–2. doi: 10.1038/nn1961
- 12. Diesmann M, Gewaltig MO, Aertsen A. Stable propagation of synchronous spiking in cortical neural networks. Nature. 1999;402(6761):529–33.
- 13. Dragoi G, Buzsáki G. Temporal encoding of place sequences by hippocampal cell assemblies. Neuron. 2006;50(1):145–57. doi: 10.1016/j.neuron.2006.02.023
- 14. Dupret D, O’Neill J, Pleydell-Bouverie B, Csicsvari J. The reorganization and reactivation of hippocampal maps predict spatial memory performance. Nat Neurosci. 2010;13(8):995–1002. doi: 10.1038/nn.2599
- 15. Eichenlaub J-B, Jarosiewicz B, Saab J, Franco B, Kelemen J, Halgren E, et al. Replay of Learned Neural Firing Sequences during Rest in Human Motor Cortex. Cell Rep. 2020;31(5):107581. doi: 10.1016/j.celrep.2020.107581
- 16. Fernandez FR, Broicher T, Truong A, White JA. Membrane voltage fluctuations reduce spike frequency adaptation and preserve output gain in CA1 pyramidal neurons in a high-conductance state. J Neurosci. 2011;31(10):3880–93. doi: 10.1523/JNEUROSCI.5076-10.2011
- 17. Fernández-Ruiz A, Oliva A, Fermino de Oliveira E, Rocha-Almeida F, Tingley D, Buzsáki G. Long-duration hippocampal sharp wave ripples improve memory. Science. 2019;364(6445):1082–6. doi: 10.1126/science.aax0758
- 18. Fiete IR, Senn W, Wang CZH, Hahnloser RHR. Spike-time-dependent plasticity and heterosynaptic competition organize networks to produce long scale-free sequences of neural activity. Neuron. 2010;65(4):563–76. doi: 10.1016/j.neuron.2010.02.003
- 19. Foster DJ, Wilson MA. Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature. 2006;440(7084):680–3. doi: 10.1038/nature04587
- 20. Friedrich RW, Laurent G. Dynamic optimization of odor representations by slow temporal patterning of mitral cell activity. Science. 2001;291(5505):889–94. doi: 10.1126/science.291.5505.889
- 21. Girardeau G, Benchenane K, Wiener SI, Buzsáki G, Zugaro MB. Selective suppression of hippocampal ripples impairs spatial memory. Nat Neurosci. 2009;12(10):1222–3. doi: 10.1038/nn.2384
- 22. Gupta AS, van der Meer MAA, Touretzky DS, Redish AD. Hippocampal replay is not a simple function of experience. Neuron. 2010;65(5):695–705. doi: 10.1016/j.neuron.2010.01.034
- 23. Hahnloser RHR, Kozhevnikov AA, Fee MS. An ultra-sparse code underlies the generation of neural sequences in a songbird. Nature. 2002;419(6902):65–70. doi: 10.1038/nature00974
- 24. He H, Boehringer R, Huang AJY, Overton ETN, Polygalov D, Okanoya K, et al. CA2 inhibition reduces the precision of hippocampal assembly reactivation. Neuron. 2021;109(22):3674–87.
- 25. Hennequin G, Vogels TP, Gerstner W. Non-normal amplification in random balanced neuronal networks. Phys Rev E Stat Nonlin Soft Matter Phys. 2012;86(1 Pt 1):011909. doi: 10.1103/PhysRevE.86.011909
- 26. Hertz J. Modelling synfire processing. 1997.
- 27. Igata H, Ikegaya Y, Sasaki T. Prioritized experience replays on a hippocampal predictive map for learning. Proc Natl Acad Sci U S A. 2021;118(1):e2011266118. doi: 10.1073/pnas.2011266118
- 28. Itskov V, Curto C, Pastalkova E, Buzsáki G. Cell assembly sequences arising from spike threshold adaptation keep track of time in the hippocampus. J Neurosci. 2011;31(8):2828–34. doi: 10.1523/JNEUROSCI.3773-10.2011
- 29. Ji D, Wilson MA. Coordinated memory replay in the visual cortex and hippocampus during sleep. Nat Neurosci. 2007;10(1):100–7.
- 30. Kappel D, Nessler B, Maass W. STDP installs in Winner-Take-All circuits an online approximation to hidden Markov model learning. PLoS Comput Biol. 2014;10(3):e1003511. doi: 10.1371/journal.pcbi.1003511
- 31. Klos C, Miner D, Triesch J. Bridging structure and function: A model of sequence learning and prediction in primary visual cortex. PLoS Comput Biol. 2018;14(6):e1006187. doi: 10.1371/journal.pcbi.1006187
- 32. Kumar A, Rotter S, Aertsen A. Conditions for propagating synchronous spiking and asynchronous firing rates in a cortical network model. J Neurosci. 2008;28(20):5268–80.
- 33. Kumar A, Rotter S, Aertsen A. Spiking activity propagation in neuronal networks: reconciling different perspectives on neural coding. Nat Rev Neurosci. 2010;11(9):615–27. doi: 10.1038/nrn2886
- 34. Lee H, Wang C, Deshmukh SS, Knierim JJ. Neural Population Evidence of Functional Heterogeneity along the CA3 Transverse Axis: Pattern Completion versus Pattern Separation. Neuron. 2015;87(5):1093–105. doi: 10.1016/j.neuron.2015.07.012
- 35. Lehr AB, Kumar A, Tetzlaff C. Sparse clustered inhibition projects sequential activity onto unique neural subspaces. bioRxiv. 2023, p. 2023–09.
- 36. Lehr AB, Kumar A, Tetzlaff C, Hafting T, Fyhn M, Stöber TM. CA2 beyond social memory: Evidence for a fundamental role in hippocampal information processing. Neurosci Biobehav Rev. 2021;126:398–412. doi: 10.1016/j.neubiorev.2021.03.020
- 37. Lehr AB, Luboeinski J, Tetzlaff C. Neuromodulator-dependent synaptic tagging and capture retroactively controls neural coding in spiking neural networks. Sci Rep. 2022;12(1):17772. doi: 10.1038/s41598-022-22430-7
- 38. Lu Y, Sato Y, Amari S-I. Traveling bumps and their collisions in a two-dimensional neural field. Neural Comput. 2011;23(5):1248–60. doi: 10.1162/NECO_a_00111
- 39. Maes A, Barahona M, Clopath C. Learning compositional sequences with multiple time scales through a hierarchical network of spiking neurons. bioRxiv. 2020.
- 40. Maes A, Barahona M, Clopath C. Learning spatiotemporal signals using a recurrent spiking network that discretizes time. PLoS Comput Biol. 2020;16(1):e1007606. doi: 10.1371/journal.pcbi.1007606
- 41. Mankin EA, Diehl GW, Sparks FT, Leutgeb S, Leutgeb JK. Hippocampal CA2 activity patterns change over time to a larger extent than between spatial contexts. Neuron. 2015;85(1):190–201. doi: 10.1016/j.neuron.2014.12.001
- 42. McNamara CG, Tejero-Cantero Á, Trouche S, Campo-Urriza N, Dupret D. Dopaminergic neurons promote hippocampal reactivation and spatial memory persistence. Nat Neurosci. 2014;17(12):1658–60.
- 43. Michaelis C, Lehr AB, Tetzlaff C. Robust Trajectory Generation for Robotic Control on the Neuromorphic Research Chip Loihi. Front Neurorobot. 2020;14:589532. doi: 10.3389/fnbot.2020.589532
- 44. Miner D, Tetzlaff C. Hey, look over there: Distraction effects on rapid sequence recall. PLoS One. 2020;15(4):e0223743. doi: 10.1371/journal.pone.0223743
- 45. Murphy BK, Miller KD. Balanced amplification: a new mechanism of selective amplification of neural activity patterns. Neuron. 2009;61(4):635–48. doi: 10.1016/j.neuron.2009.02.005
- 46. Murray JM, Escola GS. Learning multiple variable-speed sequences in striatum via cortical tutoring. Elife. 2017;6:e26084. doi: 10.7554/eLife.26084
- 47. O’Keefe J. Place units in the hippocampus of the freely moving rat. Exp Neurol. 1976;51(1):78–109. doi: 10.1016/0014-4886(76)90055-8
- 48. Oliva A, Fernández-Ruiz A, Leroy F, Siegelbaum SA. Hippocampal CA2 sharp-wave ripples reactivate and promote social memory. Nature. 2020;587(7833):264–9. doi: 10.1038/s41586-020-2758-y
- 49. Omer DB, Maimon SR, Las L, Ulanovsky N. Social place-cells in the bat hippocampus. Science. 2018;359(6372):218–24. doi: 10.1126/science.aao3474
- 50. Recanatesi S, Katkov M, Romani S, Tsodyks M. Neural Network Model of Memory Retrieval. Front Comput Neurosci. 2015;9:149. doi: 10.3389/fncom.2015.00149
- 51. Romani S, Tsodyks M. Short-term plasticity based network model of place cells dynamics. Hippocampus. 2015;25(1):94–105. doi: 10.1002/hipo.22355
- 52. Salz DM, Tiganj Z, Khasnabish S, Kohley A, Sheehan D, Howard MW, et al. Time Cells in Hippocampal Area CA3. J Neurosci. 2016;36(28):7476–84. doi: 10.1523/JNEUROSCI.0087-16.2016
- 53. Seeholzer A, Deger M, Gerstner W. Stability of working memory in continuous attractor networks under the control of short-term plasticity. PLoS Comput Biol. 2019;15(4):e1006928. doi: 10.1371/journal.pcbi.1006928
- 54. Silva D, Feng T, Foster DJ. Trajectory events across hippocampal place cells require previous experience. Nat Neurosci. 2015;18(12):1772–9. doi: 10.1038/nn.4151
- 55. Singer AC, Frank LM. Rewarded outcomes enhance reactivation of experience in the hippocampus. Neuron. 2009;64(6):910–21. doi: 10.1016/j.neuron.2009.11.016
- 56. Skaggs WE, McNaughton BL. Replay of neuronal firing sequences in rat hippocampus during sleep following spatial experience. Science. 1996;271(5257):1870–3. doi: 10.1126/science.271.5257.1870
- 57. Spalla D, Cornacchia IM, Treves A. Continuous attractors for dynamic memories. Elife. 2021;10:e69499. doi: 10.7554/eLife.69499
- 58. Spreizer S, Aertsen A, Kumar A. From space to time: Spatial inhomogeneities lead to the emergence of spatiotemporal sequences in spiking neuronal networks. PLoS Comput Biol. 2019;15(10):e1007432. doi: 10.1371/journal.pcbi.1007432
- 59. Stöber TM, Lehr AB, Hafting T, Kumar A, Fyhn M. Selective neuromodulation and mutual inhibition within the CA3-CA2 system can prioritize sequences for replay. Hippocampus. 2020;30(11):1228–38. doi: 10.1002/hipo.23256
- 60. Tetzlaff C, Dasgupta S, Kulvicius T, Wörgötter F. The Use of Hebbian Cell Assemblies for Nonlinear Computation. Sci Rep. 2015;5:12866. doi: 10.1038/srep12866
- 61. van Albada SJ, Helias M, Diesmann M. Scalability of Asynchronous Networks Is Limited by One-to-One Mapping between Effective Connectivity and Correlations. PLoS Comput Biol. 2015;11(9):e1004490. doi: 10.1371/journal.pcbi.1004490
- 62. Vieth M, Rahimi A, Gorgan Mohammadi A, Triesch J, Ganjtabesh M. Accelerating spiking neural network simulations with PymoNNto and PymoNNtorch. Front Neuroinform. 2024;18:1331220. doi: 10.3389/fninf.2024.1331220
- 63. Vieth MA, Stöber TM, Triesch J. PymoNNto: a flexible modular toolbox for designing brain-inspired neural networks. Front Neuroinform. 2021;50.
- 64. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science. 2011;334(6062):1569–73. doi: 10.1126/science.1211095
- 65. Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972;12(1):1–24. doi: 10.1016/S0006-3495(72)86068-5
- 66. Wilson MA, McNaughton BL. Reactivation of hippocampal ensemble memories during sleep. Science. 1994;265(5172):676–9. doi: 10.1126/science.8036517
- 67. Wintzer ME, Boehringer R, Polygalov D, McHugh TJ. The hippocampal CA2 ensemble is sensitive to contextual change. J Neurosci. 2014;34(8):3056–66. doi: 10.1523/JNEUROSCI.2563-13.2014
- 68. Yamamoto J, Tonegawa S. Direct Medial Entorhinal Cortex Input to Hippocampal CA1 Is Crucial for Extended Quiet Awake Replay. Neuron. 2017;96(1):217-227.e4. doi: 10.1016/j.neuron.2017.09.017
- 69. York LC, van Rossum MCW. Recurrent networks with short term synaptic depression. J Comput Neurosci. 2009;27(3):607–20. doi: 10.1007/s10827-009-0172-4