eLife. 2024 May 7;12:RP88053. doi: 10.7554/eLife.88053

Drift of neural ensembles driven by slow fluctuations of intrinsic excitability

Geoffroy Delamare 1, Yosif Zaki 2, Denise J Cai 2, Claudia Clopath 1
Editors: Lisa M Giocomo, Laura L Colgin

Abstract

Representational drift refers to the dynamic nature of neural representations in the brain despite seemingly stable behavior. Although drift has been observed in many different brain regions, the mechanisms underlying it are not known. Since intrinsic neural excitability is suggested to play a key role in regulating memory allocation, fluctuations of excitability could bias the reactivation of previously stored memory ensembles and therefore act as a driver of drift. Here, we propose a rate-based, plastic recurrent neural network with slow fluctuations of intrinsic excitability. We first show that subsequent reactivations of a neural ensemble can lead to drift of this ensemble. The model predicts that drift is induced by co-activation of previously active neurons along with neurons with high excitability, which leads to remodeling of the recurrent weights. Consistent with previous experimental work, the drifting ensemble is informative about its temporal history. Crucially, we show that the gradual nature of the drift is necessary for decoding temporal information from the activity of the ensemble. Finally, we show that the memory is preserved and can be decoded by an output neuron having plastic synapses with the main region.


Introduction

In various brain regions, the neural code tends to be dynamic although behavioral outputs remain stable. Representational drift refers to the dynamic nature of internal representations, as has been observed in sensory cortical areas (Driscoll et al., 2017; Sadeh and Clopath, 2022; Driscoll et al., 2022) and the hippocampus (Ziv et al., 2013; Hainmueller and Bartos, 2018), despite stable behavior. It has even been suggested that pyramidal neurons in the CA1 and CA3 regions form dynamic rather than static memory engrams (Hainmueller and Bartos, 2018; Spalla et al., 2021), namely that the set of neurons encoding specific memories varies across days. In the amygdala, retraining of a fear memory task induces a turnover of the memory engram (Cho et al., 2021). Additionally, plasticity mechanisms have been proposed to compensate for drift and to provide a stable read-out of the neural code (Rule and O’Leary, 2022), suggesting that information is maintained. Altogether, this line of evidence suggests that drift might be a general phenomenon, with dynamic representations observed across various brain regions.

However, the mechanisms underlying the emergence of drift and its relevance for neural computation are not known. Drift is often thought to arise from variability of internal states (Sadeh and Clopath, 2022), neurogenesis (Rechavi et al., 2022; Driscoll et al., 2017), or synaptic turnover (Attardo et al., 2015) combined with noise (Kossio et al., 2021; Manz and Memmesheimer, 2023). On the other hand, excitability might also play a role in memory allocation (Zhou et al., 2009; Mau et al., 2020; Rogerson et al., 2014; Silva et al., 2009), such that neurons with high excitability are preferentially allocated to memory ensembles (Cai et al., 2016; Rashid et al., 2016; Silva et al., 2009). Moreover, excitability is known to fluctuate over timescales from hours to days, in the amygdala (Rashid et al., 2016), the hippocampus (Cai et al., 2016; Grosmark and Buzsáki, 2016), and the cortex (Huber et al., 2013; Levenstein et al., 2019). Subsequent reactivations of a neural ensemble at different time points could therefore be biased by excitability (Mau et al., 2022), which fluctuates on timescales similar to those of drift (Mau et al., 2018). Altogether, this evidence suggests that fluctuations of excitability could act as a cellular mechanism for drift (Mau et al., 2020).

In this short communication, we propose a mechanistic account of how excitability could induce drift of neural ensembles. We simulated a recurrent neural network (Delamare et al., 2022) equipped with intrinsic neural excitability and Hebbian learning. As a proof of principle, we first show that slow fluctuations of excitability can induce neural ensembles to drift in the network. We then explore the functional implications of such drift. Consistent with previous works (Rubin et al., 2015; Clopath et al., 2017; Mau et al., 2018; Miller et al., 2018), we show that neural activity of the drifting ensemble is informative about the temporal structure of the memory. This suggests that fluctuations of excitability can be useful for time-stamping memories (i.e. for making the neural ensemble informative about the time at which it was formed). Finally, we confirmed that the content of the memory itself can be steadily maintained using a read-out neuron with a local plasticity rule, consistent with previous computational work (Rule and O’Leary, 2022). The goal of this study is to show one possible mechanistic implementation of how excitability can drive drift.

Results

Many studies have shown that memories are encoded in sparse neural ensembles that are activated during learning, with many of the same cells reactivated during recall, underlying a stable neural representation (Josselyn and Tonegawa, 2020; Poo et al., 2016; Mau et al., 2020). After learning, subsequent reactivations of the ensemble can happen spontaneously during replay, retraining, or during a memory recall task (e.g. following presentation of a cue; Josselyn and Tonegawa, 2020; Káli and Dayan, 2004). Here, we directly tested the hypothesis that slow fluctuations of excitability can change the structure of a newly formed neural ensemble through subsequent reactivations of this ensemble.

To that end, we designed a rate-based, recurrent neural network equipped with intrinsic neural excitability (Methods). We considered that the recurrent weights are all-to-all and plastic, following a Hebbian rule (Methods). The network was then stimulated following a four-day protocol: the first day corresponds to the initial encoding of a memory and the other days correspond to spontaneous or cue-induced reactivations of the neural ensemble (Methods). Finally, we considered that the excitability of each neuron can vary on a day-long timescale: each day, a different subset of neurons has increased excitability (Figure 1a, Methods).

Figure 1. Excitability-induced drift of memory ensembles.

(a) Distribution of excitability ϵi for each neuron i, fluctuating over time. During each stimulation, a different pool of neurons has high excitability (Methods). (b, c) Firing rates of the neurons across time. The red traces in panel (c) correspond to neurons belonging to the first assembly, namely those with a firing rate higher than the active threshold after the first stimulation. The black bars show the stimulation and the dashed line shows the active threshold. (d) Recurrent weight matrices after each of the four stimulations show the drifting assembly.

Figure 1—figure supplement 1. Comparison of drifting behavior for different values of excitability amplitude.

(a) E=0, no drift. A neural assembly is initially formed during the first stimulation and reactivated every subsequent day. (b) E=1.5, partial drift. The ensemble is gradually modified with each new stimulation. (c) E=3, full drift. Each new stimulation leads to formation of a new ensemble, containing the neurons that have high excitability at that time.

Figure 1—figure supplement 2. The rate of the drift does not depend on the size of the initial engram.

Drift rate against the size of the original engram. Bars show minimum, mean and maximum values. n=100 simulations.

Fluctuations of intrinsic excitability induce drifting of neural ensembles

While stimulating the naive network on the first day, we observed the formation of a neural ensemble: some neurons gradually increase their firing rate (Figure 1b and c, neurons 10–20, time steps 1000–3000) during the stimulation. We observed that these neurons are highly recurrently connected (Figure 1d, leftmost matrix), suggesting that they form an assembly. This assembly is composed of neurons that have high excitability at the time of the stimulation (Figure 1a, neurons 10–20 have increased excitability). We then show that further stimulations of the network induce a remodeling of the synaptic weights. During the second stimulation, for instance (Figure 1b and c, time steps 4000–6000), neurons from the previous assembly (10–20) are reactivated along with neurons having high excitability at the time of the second stimulation (Figure 1a, neurons 20–30). Moreover, across several days, recurrent weights from previous assemblies tend to decrease while others increase (Figure 1d). Indeed, neurons from the original assembly (Figure 1c, red traces) tend to be replaced by other neurons, either from the latest assembly or from the pool of neurons having high excitability. This is reflected at the synaptic level, where weights from previous assemblies tend to decay and be replaced by new ones. Overall, each new stimulation updates the ensemble according to the current distribution of excitability, inducing a drift towards neurons with high excitability. Finally, in our model, the drift rate does not depend on the size of the original ensemble (Figure 1—figure supplement 2, Methods).

Activity of the drifting ensemble is informative about the temporal structure of the past experience

After showing that fluctuations of excitability can induce a drift among neural ensembles, we tested whether the drifting ensemble could contain temporal information about its past experiences, as suggested in previous works (Rubin et al., 2015).

Inspired by these works, we asked whether it was possible to decode relevant temporal information from the patterns of activity of the neural ensemble. We first observed that the correlation between patterns of activity just after encoding decreases across days (Figure 2a, Methods), indicating that after each day, the newly formed ensemble resembles the original one less and less. Because the patterns of activity differ across days, they should be informative about the absolute day on which they were recorded. To test this hypothesis, we designed a day decoder (Figure 2b, Methods), following the work of Rubin et al., 2015. This decoder aims at inferring the reactivation day of a given activity pattern by comparing the pattern recorded during training with the patterns recorded just after memory encoding without any increase in excitability (Figure 2b, Methods). We found that the day decoder perfectly outputs the reactivation day, in contrast to shuffled data (Figure 2c).

Figure 2. Neuronal activity is informative about the temporal structure of the reactivations.

(a) Correlation of the patterns of activity between the first day and every other day, for n=10 simulations. Data are shown as mean ± s.e.m. (b) Schema of the day decoder. The day decoder maximises the correlation between the patterns of each day and the pattern from the simulation with no increase in excitability. (c) Results of the day decoder for the real data (red) and the shuffled data (orange). Shuffled data consist of the same activity patterns for which the label of each cell has been shuffled randomly for every seed. For each simulation, the error is measured for each day as the difference between the decoded and the real day. Data are shown for n=10 simulations and for each of the 4 days. (d) Schema of the ordinal time decoder. This decoder outputs the permutation 𝒑 that maximises the sum S(𝒑) of the correlations of the patterns for each pair of consecutive days. (e) Distribution of the values S(𝒑) for each permutation of days 𝒑. The value for the real permutation S(𝒑real) is shown in black. (f) Student’s t-test t-value for n=10 simulations, for the real (red) and shuffled (orange) data and for different amplitudes of excitability E. Data are shown as mean ± s.e.m. for n=10 simulations.

Figure 2—figure supplement 1. Sparse recurrent connectivity shows similar drifting behavior as all-to-all connectivity.

The same simulation protocol as Figure 1 was used while the recurrent weight matrix was made 50% sparse (Methods). (a) Firing rates of the neurons across time. The red traces correspond to neurons belonging to the first assembly, namely those with a firing rate higher than the active threshold after the first stimulation. The black bars show the stimulation and the dashed line shows the active threshold. (b) Recurrent weight matrices after each of the four stimulations show the drifting assembly. (c) Correlation of the patterns of activity between the first day and every other day. (d) Student’s t-test t-value of the ordinal time decoder, for the real (red) and shuffled (orange) data and for different amplitudes of excitability E. (e) Center of mass of the distribution of the output weights (Methods) across days. (c–e) Data are shown as mean ± s.e.m. for n=10 simulations.
Figure 2—figure supplement 2. Change of excitability modeled as a variable slope of the input-output function shows similar drifting behavior to a change in the threshold.

The same simulation protocol as Figure 1 was used while the excitability changes were modeled as a change in the slope of the activation function (Methods). (a) Schema showing two different ways of defining excitability, as a threshold (top) or slope (bottom) of the activation function. Each line shows one neuron and darker lines correspond to neurons with increased excitability. (b) Firing rates of the neurons across time. The red traces correspond to neurons belonging to the first assembly, namely those with a firing rate higher than the active threshold after the first stimulation. The black bars show the stimulation and the dashed line shows the active threshold. (c) Recurrent weight matrices after each of the four stimulations show the drifting assembly. (d) Correlation of the patterns of activity between the first day and every other day. (e) Student’s t-test t-value of the ordinal time decoder, for the real (red) and shuffled (orange) data and for different amplitudes of excitability E. (f) Center of mass of the distribution of the output weights (Methods) across days. (d–f) Data are shown as mean ± s.e.m. for n=10 simulations.
Figure 2—figure supplement 3. Two distinct ensembles can be encoded and drift independently.

(a, b) Firing rates of the neurons across time. The red traces in panel (b) correspond to neurons belonging to the first assembly and the green traces to the second assembly on the first day. They correspond to neurons having a firing rate higher than the active threshold after the first stimulation of each assembly. The black bars show the stimulation and the dashed line shows the active threshold. (c) Recurrent weight matrices after each of the eight stimulations showing the drifting of the first (top) and second (bottom) assembly.
Figure 2—figure supplement 4. The two ensembles are informative about their temporal history and can be decoded using two output neurons.

(a) Correlation of the patterns of activity between the first day and every other day, for the first assembly (red) and the second assembly (green). (b) Student’s t-test t-value of the ordinal time decoder, for the first (red, left) and second ensemble (green, right) for different amplitudes of excitability E. Shuffled data are shown in orange. (c) Center of mass of the distribution of the output weights (Methods) across days for the first (𝑾1out, red) and second (𝑾2out, green) ensemble. (a–c) Data are shown as mean ± s.e.m. for n=10 simulations. (d) Output neurons’ firing rates across time for the first ensemble (y1, top) and the second ensemble (y2, bottom). The red and green traces correspond to the real output. The dark blue, light blue, and yellow traces correspond to the cases where the output weights were randomly shuffled at every time point after presentation of the first, second, and third stimulus, respectively.

After showing that the patterns of activity are informative about the reactivation day, we took a step further by asking whether the activity of the neurons is also informative about the order in which the memory evolved. To that end, we used an ordinal time decoder (Methods, as in Rubin et al., 2015) that uses the correlations between activity patterns for pairs of successive days, for each possible permutation of days 𝒑 (Figure 2d, Methods). The sum of these correlations S(𝒑) differs for each permutation 𝒑, and we assumed that the neurons are informative about the order in which the reactivations of the ensemble happened if the permutation maximising S(𝒑) corresponds to the real permutation 𝒑real (Figure 2e, Methods). We found that S(𝒑real) was indeed statistically higher than S(𝒑) for the other permutations 𝒑 (Figure 2f, Student’s t-test, Methods). However, this was only true when the amplitude of the fluctuations of excitability E was in a certain range. Indeed, when the amplitude of the fluctuations is null, that is, when excitability is not increased (E=0), the ensemble does not drift (Figure 1—figure supplement 1a). In this case, the patterns of activity are not informative about the order of reactivations. On the other hand, if the excitability amplitude is too high (E=3), each new ensemble is fully determined by the distribution of excitability, regardless of any previously formed ensemble (Figure 1—figure supplement 1c). In this regime, the patterns of activity are not informative about the order of the reactivations either. In the intermediate regime (E=1.5), the decoder is able to correctly infer the order in which the reactivations happened, better than with shuffled data (Figure 2f, Figure 1—figure supplement 1b).

Finally, we sought to test whether the results are independent of the specific architecture of the model. To that end, we defined a change of excitability as a change in the slope of the activation function, rather than of its threshold (Figure 2—figure supplement 2, Methods). We also used sparse recurrent synaptic weights instead of the original all-to-all connectivity matrix (Figure 2—figure supplement 1, Methods). In both cases, we observed similar drifting behavior and were able to decode the temporal order in which the memory evolved.

A read-out neuron can track the drifting ensemble

So far, we have shown that the drifting ensemble contains information about its history, namely about the days and the order in which the subsequent reactivations of the memory happened.

However, we had not yet shown that the neural ensemble can be used to decode the memory itself, in addition to its temporal structure. To that end, we introduced a decoding output neuron connected to the recurrent neural network, with plastic weights following a Hebbian rule (Methods). As shown by Rule and O’Leary, 2022, the goal was to make sure that the output neuron can track the ensemble even as it drifts. This can be done by constantly decreasing weights from neurons that are no longer in the ensemble and increasing those associated with neurons joining the ensemble (Figure 3a). We found that the output neuron could steadily decode the memory (i.e. it has a higher firing rate than in the case where the output weights are randomly shuffled; Figure 3b). This is due to the fact that the weights are plastic under Hebbian learning, as shown by Rule and O’Leary, 2022. We confirmed that this was induced by a change in the output weights across time (Figure 3c). In particular, the weights from neurons that no longer belong to the ensemble are decreased while weights from newly recruited neurons are increased, so that the center of mass of the weight distribution drifts across time (Figure 3d). Finally, we found that the quality of the read-out decreases with the rate of the drift (Figure 3—figure supplement 1, Methods).

Figure 3. A single output neuron can track the memory ensemble through Hebbian plasticity.

(a) Conceptual architecture of the network: the read-out neuron y in red ‘tracks’ the ensemble by decreasing synapses linked to the previous ensemble and increasing those linked to the new assembly. (b) Output neuron’s firing rate across time. The red trace corresponds to the real output. The dark blue, light blue, and yellow traces correspond to the cases where the output weights were randomly shuffled at every time point after presentation of the first, second, and third stimulus, respectively. (c) Output weights for each neuron across time. (d) Center of mass of the distribution of the output weights (Methods) across days. The output weights are centered around the neurons that belong to the assembly on each day. Data are shown as mean ± s.e.m. for n=10 simulations.

Figure 3—figure supplement 1. The quality of the read-out decreases with the rate of the drift.

Read-out quality computed on the firing rate of the output neuron against the rate of the drift (Methods). Each dot shows one simulation. n=100 simulations.

Two memories drift independently

Finally, we tested whether the network is able to encode two different memories and whether excitability could make two ensembles drift. On each day, we stimulated a random half of the neurons (context A) and then the other half (context B) sequentially (Methods). We found that, day after day, the two ensembles show a drift similar to when only one ensemble was formed (Figure 2—figure supplement 3). In particular, the correlation between the patterns of activity on the first day and the other days decays in a similar way (Figure 2—figure supplement 4a). For both contexts, the temporal order of the reactivations can be decoded for a certain range of excitability amplitudes (Figure 2—figure supplement 4b). Finally, we found that using two output decoders allowed us to decode both memories independently. The output weights associated with each ensemble are remodeled to follow the drifting ensemble, but are not affected by the reactivation of the other ensemble (Figure 2—figure supplement 4c). Indeed, both neurons are able to ‘track’ the reactivation of their associated ensemble while not responding to the other ensemble (Figure 2—figure supplement 4d).

Discussion

Overall, our model suggests a potential cellular mechanism for the emergence of drift that can serve a computational purpose by ‘time-stamping’ memories while the memory itself remains decodable across time. Although the high performance of the day decoder was expected, the performance of the ordinal time decoder is not trivial. Indeed, the patterns of activity of each day are informative about the distribution of excitability and therefore about the day on which the reactivation happened. However, the ability of the neural ensemble to encode the order of past reactivations requires drift to be gradual (i.e. requires consecutive patterns of activity to remain correlated across days). Indeed, if the amplitude of excitability is too low (E=0) or too high (E=3), it is not possible to decode the order in which the successive reactivations happened. This result is consistent with previous work showing gradual changes in neural representations that allow for decoding temporal information from the ensemble (Rubin et al., 2015). Moreover, such gradual drift could support complex cognitive mechanisms like mental time-travel during memory recall (Rubin et al., 2015).

In our model, drift is induced by co-activation of the previously formed ensemble and neurons with high excitability at the time of the reactivation. The pool of neurons having high excitability can therefore ‘time-stamp’ memory ensembles by biasing the allocation of these ensembles (Clopath et al., 2017; Mau et al., 2018; Rubin et al., 2015). We suggest that such a time-stamping mechanism could also help link memories that are temporally close and dissociate those that are spaced by longer intervals (Driscoll et al., 2022; Mau et al., 2020; Aimone et al., 2006). Indeed, the pool of neurons with high excitability varies across time, so that any new memory ensemble is allocated to neurons that are shared with other ensembles formed around the same time. This mechanism could be complementary to the learning-induced increase in excitability observed in the amygdala (Rashid et al., 2016), hippocampal CA1 (Cai et al., 2016), and dentate gyrus (Pignatelli et al., 2019).

Finally, we intended to model drift in the firing rates, as opposed to a drift in the tuning curves, of the neurons. Recent studies suggest that drifts in the mean firing rate and in the tuning curve arise from two different mechanisms (Geva et al., 2023; Khatib et al., 2023): experience drives a drift in neurons’ tuning curves while the passage of time drives a drift in neurons’ firing rates. In this sense, our study is consistent with these findings by providing a possible mechanism for a drift in the mean firing rates of the neurons driven by dynamic excitability. Our work suggests that drift can depend on any experience having an impact on excitability dynamics, such as exercise, as previously shown experimentally (Rechavi et al., 2022; de Snoo et al., 2023), but also neurogenesis (Aimone et al., 2006; Tran et al., 2022; Rechavi et al., 2022), sleep (Levenstein et al., 2017), or increases in dopamine level (Chowdhury et al., 2022).

Overall, our work is a proof of principle which highlights the importance of considering excitability when studying drift, although further work would be needed to test this link experimentally.

Methods

Recurrent neural network with excitability

Our rate-based model consists of a single region of N neurons (with firing rates ri, 1 ≤ i ≤ N). The all-to-all recurrent connections W are plastic and follow a Hebbian rule given by:

$$\frac{dW_{ij}}{dt} = \frac{r_i r_j}{\tau_W} - \frac{W_{ij}}{\tau_{\mathrm{decay}}} \qquad (1)$$

where i and j correspond to the pre- and post-synaptic neuron respectively. τW and τdecay are the learning and the decay time constants of the weights, respectively.

A hard bound of [0,c] was applied to these weights. We also introduced a global inhibition term dependent on the activity of the neurons:

$$I = I_0 + I_1 \sum_{i=1}^{N} r_i + I_2 \sum_{i=1}^{N} r_i^2 \qquad (2)$$

where I0, I1, and I2 are positive constants. All neurons receive the same input, Δ(t), during stimulation of the network (Figure 1c, black bars). Finally, excitability is modeled as a time-varying threshold ϵi of the input-output function of each neuron i. The rate dynamics of neuron i are given by:

$$\tau_r \frac{dr_i}{dt} + r_i = \mathrm{ReLU}\left( \Delta(t) + \sum_{j=1}^{N} W_{ij} r_j - I + \epsilon_i(t) \right) \qquad (3)$$

where τr is the decay time constant of the rates and ReLU is the rectified linear activation function. We considered that a neuron is active when its firing rate reaches the active threshold θ.
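
As an illustration, here is a minimal NumPy sketch of one Euler-integration step of Equations 1–3 (the function and parameter-dictionary names are ours; the example values in the comment come from the table of parameters below):

```python
import numpy as np

# Example parameter values from the table of parameters (arbitrary units):
# p = dict(I0=12, I1=0.5, I2=0.05, tau_r=20, tau_W=800, tau_decay=1000, c=1)

def relu(x):
    return np.maximum(x, 0.0)

def network_step(r, W, delta_t, eps, p, dt=1.0):
    """One Euler step of the rate and weight dynamics (Equations 1-3).

    r: firing rates, shape (N,); W: recurrent weights, shape (N, N);
    delta_t: scalar stimulus Delta(t); eps: excitabilities eps_i(t), shape (N,).
    """
    # Global inhibition (Equation 2)
    inh = p["I0"] + p["I1"] * r.sum() + p["I2"] * (r ** 2).sum()
    # Rate dynamics (Equation 3): tau_r * dr/dt + r = ReLU(...)
    drive = relu(delta_t + W @ r - inh + eps)
    r_new = r + dt / p["tau_r"] * (drive - r)
    # Hebbian learning with decay (Equation 1), hard-bounded to [0, c]
    W_new = W + dt * (np.outer(r_new, r_new) / p["tau_W"] - W / p["tau_decay"])
    W_new = np.clip(W_new, 0.0, p["c"])
    return r_new, W_new
```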

In Figure 2—figure supplement 1, we applied a random binary mask to the recurrent weights in order to set 50% of the synapses to 0. A new mask was randomly sampled for each simulation.
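
In code, this amounts to sampling a binary mask once per simulation and re-applying it after every weight update; a sketch (the variable names are ours):

```python
import numpy as np

N = 50
rng = np.random.default_rng(0)       # a new mask is sampled per simulation
mask = rng.random((N, N)) < 0.5      # ~50% of synapses kept
W = rng.random((N, N)) * 0.1         # example initial weights
W = np.where(mask, W, 0.0)           # re-applied after every update of W
```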

In Figure 2—figure supplement 2, we modeled excitability as a change of the slope of the activation function (ReLU) instead of a change of the threshold as used previously (Figure 2—figure supplement 2a):

$$\tau_r \frac{dr_i}{dt} + r_i = \epsilon_i(t)\, \mathrm{ReLU}\left( \Delta(t) + \sum_{j=1}^{N} W_{ij} r_j - I \right) \qquad (4)$$
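
The slope-based variant only changes where ϵi enters the rate equation; a minimal sketch of the corresponding drive term (the function name is ours):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def drive_slope(delta_t, W, r, inh, eps):
    """Drive with slope-based excitability (Equation 4): eps_i scales
    the activation instead of shifting its threshold (Equation 3)."""
    return eps * relu(delta_t + W @ r - inh)
```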

Protocol

We designed a four-day protocol, corresponding to the initial encoding of a memory (first day) and subsequent random or cue-induced reactivations of the ensemble (Josselyn and Tonegawa, 2020; Káli and Dayan, 2004) (second, third, and fourth day). Each stimulation consists of Nrep repetitions of duration T spaced by an inter-repetition delay IR. Δ(t) takes the value δ during these repetitions and is set to 0 otherwise. The stimulation is repeated four times, modeling four days of reactivation, spaced by an inter-day delay ID. The excitability ϵi of each neuron i is sampled from the absolute value of a normal distribution with mean 0 and standard deviation 1. In Figure 2—figure supplement 2, ϵi is sampled from the absolute value of a normal distribution with mean 0.4 and standard deviation 0.2. Neurons 10–20, 20–30, 30–40, and 40–50 then receive an increase of excitability of amplitude E on days 1, 2, 3, and 4, respectively (Figure 1a). A different random seed is used for each repetition of the simulation. When two memories were modeled (Figure 2—figure supplements 3 and 4), we stimulated a random half of the neurons (context A) and then the other half (context B) successively (Figure 2—figure supplement 3a), every day.

Decoders

For each day d, we recorded the activity pattern Vd, a vector composed of the firing rates of the neurons at the end of the last repetition of stimulation. To test the decoders, we also stimulated the network while keeping excitability at baseline (E=0) and recorded the resulting activity pattern Vd0 for each day d. We then designed two types of decoders, inspired by previous works (Rubin et al., 2015): (1) a day decoder, which infers the day on which each stimulation happened, and (2) an ordinal time decoder, which infers the order in which the reactivations occurred. For both decoders, the shuffled data were obtained by randomly shuffling the day label of each neuron. When two memories were modeled (Figure 2—figure supplements 3 and 4), the patterns of activity were taken at the end of the stimulations by contexts A and B, and the decoders were used independently on each memory.

1. The day decoder aims at inferring the day on which a specific pattern of activity occurred. To that end, we computed the Pearson correlation between the baseline pattern Vd0 of day d (recorded with no increase in excitability) and the patterns Vd′ of all days d′ from the original simulation. The decoder then outputs the day dinf that maximises the correlation:

$$d^{\mathrm{inf}} = \underset{d'}{\mathrm{argmax}} \left\{ \mathrm{corr}\left(V_d^0, V_{d'}\right) \right\} \qquad (5)$$

The error was defined as the difference between the inferred and the real day, dinf − d.
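
A minimal sketch of the day decoder of Equation 5 (function and variable names are ours):

```python
import numpy as np

def day_decoder(V0, V):
    """Equation 5: for each day d, return the day d' whose pattern V[d']
    best correlates with the baseline-excitability pattern V0[d].

    V0, V: arrays of shape (n_days, N).
    """
    n_days = len(V0)
    inferred = np.empty(n_days, dtype=int)
    for d in range(n_days):
        corrs = [np.corrcoef(V0[d], V[dp])[0, 1] for dp in range(n_days)]
        inferred[d] = int(np.argmax(corrs))
    return inferred

# Decoding error per day (0-indexed): inferred day minus real day
# errors = day_decoder(V0, V) - np.arange(len(V0))
```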

2. The ordinal time decoder aims at inferring the order in which the reactivations happened from the patterns of activity Vd of every day d. To that end, we computed the pairwise correlations of each pair of consecutive days, for the 4! possible permutations of days 𝒑. The real permutation is called 𝒑real=(1,2,3,4) and corresponds to the real order of reactivations: day 1 → day 2 → day 3 → day 4. The sum of these correlations over the 3 pairs of consecutive days is expressed as:

$$S(\boldsymbol{p}) = \sum_{i=1}^{3} \mathrm{corr}\left(V_{p_i}, V_{p_{i+1}}\right) \qquad (6)$$

We then compared the distribution of these quantities over all permutations 𝒑 to the value for the real permutation, S(𝒑real) (Figure 2). The patterns of activity are informative about the order of reactivations if S(𝒑real) corresponds to the maximal value of S(𝒑). To compare S(𝒑real) with the distribution of S(𝒑), we performed a Student’s t-test, where the t-value is defined as:

$$t = \frac{S(\boldsymbol{p}_{\mathrm{real}}) - \mu}{\sigma / \sqrt{N}} \qquad (7)$$

where μ and σ correspond to the mean and standard deviation of the distribution S(𝒑), respectively.
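
A sketch of the ordinal time decoder (Equations 6–7); note that we take N in Equation 7 to be the number of permutations, which is our assumption:

```python
import numpy as np
from itertools import permutations

def ordinal_time_decoder(V):
    """Score every ordering of the days by the summed correlations of
    consecutive patterns (Equation 6) and compare the real order against
    the distribution over all permutations (Equation 7).

    V: array of shape (n_days, N).
    """
    n_days = len(V)
    scores = {p: sum(np.corrcoef(V[p[i]], V[p[i + 1]])[0, 1]
                     for i in range(n_days - 1))
              for p in permutations(range(n_days))}
    s = np.array(list(scores.values()))
    s_real = scores[tuple(range(n_days))]      # p_real: day 1 -> 2 -> 3 -> 4
    t_value = (s_real - s.mean()) / (s.std() / np.sqrt(len(s)))
    return scores, t_value
```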

The drift rate Δ (Figure 1—figure supplement 2 and Figure 3—figure supplement 1) was computed as:

$$\Delta = \sum_{i=2}^{4} \left( 1 - \mathrm{corr}\left(V_1, V_i\right) \right) \qquad (8)$$
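
A one-function sketch of Equation 8 (the function name is ours):

```python
import numpy as np

def drift_rate(V):
    """Equation 8: summed decorrelation of the day-2 to day-4 patterns
    from the day-1 pattern. V: array of shape (4, N)."""
    return sum(1.0 - np.corrcoef(V[0], V[i])[0, 1] for i in range(1, 4))
```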

Memory read-out

To test whether the network is able to decode the memory at any time point, we introduced a read-out neuron with plastic synapses to the neurons of the recurrent network, inspired by previous computational works (Rule and O’Leary, 2022). The weights of these synapses are named 𝑾out = (Wiout)1≤i≤N and follow the Hebbian rule defined as:

$$\frac{dW_i^{\mathrm{out}}}{dt} = h(\boldsymbol{W}^{\mathrm{out}})\, r_i\, y / \tau_+^{\mathrm{out}} - W_i^{\mathrm{out}} / \tau_-^{\mathrm{out}} \qquad (9)$$

where τout+ and τout− correspond to the learning and decay time constants, respectively. h(𝑾out) is a homeostatic term defined as h(𝑾out) = 1 − ∑j=1N Wjout, which decreases to 0 throughout learning: h takes the value 1 before learning and 0 when the sum of the weights reaches 1. y is the firing rate of the output neuron, defined as:

$$y = \sum_{i=1}^{N} W_i^{\mathrm{out}} r_i \qquad (10)$$

The read-out quality index Q (Figure 3—figure supplement 1) was defined as:

$$Q = \sum_{d=2}^{4} \left\langle y_d / y_d^{\mathrm{shuffle}} \right\rangle_{N_{\mathrm{shuffle}}} \qquad (11)$$

where yd corresponds to the value of y at the end of the last repetition of day d, and ydshuffle is the equivalent with shuffled output weights. ⟨·⟩Nshuffle indicates the average over Nshuffle = 10 simulations.
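
A minimal sketch of the read-out dynamics and quality index (Equations 9–11; function and parameter names are ours):

```python
import numpy as np

def readout_step(w_out, r, p, dt=1.0):
    """One Euler step of the output neuron (Equations 9-10).

    w_out: output weights, shape (N,); r: network rates, shape (N,);
    p: dict with tau_out_plus and tau_out_minus.
    """
    y = w_out @ r                        # output rate (Equation 10)
    h = 1.0 - w_out.sum()                # homeostatic term, goes 1 -> 0
    dw = h * r * y / p["tau_out_plus"] - w_out / p["tau_out_minus"]
    return w_out + dt * dw, y

def readout_quality(y_end, y_end_shuffled):
    """Equation 11: Q from the end-of-day outputs of days 2-4.

    y_end: shape (4,), real outputs; y_end_shuffled: shape (n_shuffle, 4).
    """
    ratios = y_end[None, 1:] / y_end_shuffled[:, 1:]   # days 2, 3, 4
    return ratios.mean(axis=0).sum()
```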

In Figure 2—figure supplements 3 and 4, two output decoders yk, k ∈ {1, 2}, with corresponding weights 𝑾𝒌out = (Wi,kout)1≤i≤N, are defined as:

$$y_k = \sum_{i=1}^{N} W_{i,k}^{\mathrm{out}} r_i + \beta_k \qquad (12)$$

and follow the Hebbian rule defined as:

$$\frac{dW_{i,k}^{\mathrm{out}}}{dt} = h(\boldsymbol{W}_k^{\mathrm{out}})\, r_i\, y_k / \tau_+^{\mathrm{out}} - W_{i,k}^{\mathrm{out}} / \tau_-^{\mathrm{out}} \qquad (13)$$

We then aimed at allocating y1 and y2 to the first and the second ensemble (contexts A and B), respectively. To that end, we used supervised learning on the first day by adding a current βk to each output neuron, which is positive when the corresponding context is presented:

$$\beta_1 = \begin{cases} 0.1 & \text{if } 1000 < t < 3000 \\ 0 & \text{otherwise} \end{cases} \qquad \beta_2 = \begin{cases} 0.1 & \text{if } 4000 < t < 6000 \\ 0 & \text{otherwise} \end{cases} \qquad (14)$$

The shuffled traces were obtained by randomly shuffling the output weights 𝑾out or 𝑾𝒌out for each ensemble k.
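
A sketch of the two-output variant (Equations 12–14), with the supervising currents applied during the first day only (function and parameter names are ours):

```python
import numpy as np

def two_readout_step(W_out, r, t, p, dt=1.0):
    """One Euler step for the two output neurons (Equations 12-14).

    W_out: output weights, shape (2, N); r: network rates, shape (N,);
    t: current time step; p: dict with tau_out_plus and tau_out_minus.
    """
    # Supervising currents beta_k (Equation 14), non-zero on day 1 only
    beta = np.array([0.1 if 1000 < t < 3000 else 0.0,
                     0.1 if 4000 < t < 6000 else 0.0])
    y = W_out @ r + beta                 # output rates (Equation 12)
    h = 1.0 - W_out.sum(axis=1)          # homeostatic term per output neuron
    dW = (h * y)[:, None] * r[None, :] / p["tau_out_plus"] \
         - W_out / p["tau_out_minus"]    # Hebbian rule (Equation 13)
    return W_out + dt * dW, y
```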

Table of parameters

The following parameters were used for the simulations. When unspecified, the default values were used. All except N are in arbitrary units. The Figure 2—figure supplement 2 column corresponds to the change from threshold-based to slope-based excitability, the Figure 2—figure supplements 3 and 4 column to the stimulation of two ensembles, and the Figure 2—figure supplement 1 column to the sparsity simulation.

| Param. | Description | Default | Figure 2—figure supplement 2 | Figure 2—figure supplements 3 and 4 | Figure 2—figure supplement 1 |
|---|---|---|---|---|---|
| N | Number of neurons | 50 | - | - | - |
| τW | Learning time constant of the recurrent weights | 800 | 700 | - | - |
| τdecay | Decay time constant of the recurrent weights | 1000 | 800 | 4000 | - |
| τr | Decay time constant of the firing rates | 20 | - | - | - |
| τout+ | Learning time constant of the output weights | 200 | - | - | - |
| τout− | Decay time constant of the output weights | 1000 | - | - | - |
| I0 | First inhibition parameter | 12 | 4 | 8 | 7 |
| I1 | Second inhibition parameter | 0.5 | 0.7 | 0.8 | 0.8 |
| I2 | Third inhibition parameter | 0.05 | - | - | - |
| δ | Input current during stimulation | 15 | - | 12 | 20 |
| E | Amplitude of the fluctuations of excitability | 1.5 | 0.5 | - | - |
| Nrep | Number of repetitions | 10 | - | - | - |
| T | Duration of each repetition | 100 | - | - | - |
| IR | Inter-repetition delay | 100 | - | - | - |
| ID | Inter-stimulation delay | 1000 | - | - | - |
| θ | Active threshold | 5 | 1 | - | - |
| c | Cap on the recurrent weights | 1 | 0.5 | - | - |

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication. For the purpose of Open Access, the authors have applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission.

Contributor Information

Geoffroy Delamare, Email: g.delamare21@imperial.ac.uk.

Claudia Clopath, Email: c.clopath@imperial.ac.uk.

Lisa M Giocomo, Stanford School of Medicine, United States.

Laura L Colgin, University of Texas at Austin, United States.

Funding Information

This paper was supported by the following grants:

  • Biotechnology and Biological Sciences Research Council BB/N013956/1 to Claudia Clopath.

  • Wellcome Trust 10.35802/200790 to Claudia Clopath.

  • Simons Foundation 564408 to Claudia Clopath.

  • Engineering and Physical Sciences Research Council EP/R035806/1 to Claudia Clopath.

Additional information

Competing interests

No competing interests declared.

Reviewing editor, eLife.

Author contributions

Conceptualization, Data curation, Software, Formal analysis, Visualization, Methodology, Writing - original draft, Writing - review and editing.

Conceptualization, Writing - review and editing.

Conceptualization, Project administration, Writing - review and editing.

Conceptualization, Resources, Supervision, Funding acquisition, Validation, Investigation, Methodology, Project administration, Writing - review and editing.

Additional files

MDAR checklist

Data availability

The code for simulations and figures is available at GitHub (copy archived at Delamare, 2024).

References

  1. Aimone JB, Wiles J, Gage FH. Potential role for adult neurogenesis in the encoding of time in new memories. Nature Neuroscience. 2006;9:723–727. doi: 10.1038/nn1707.
  2. Attardo A, Fitzgerald JE, Schnitzer MJ. Impermanence of dendritic spines in live adult CA1 hippocampus. Nature. 2015;523:592–596. doi: 10.1038/nature14467.
  3. Cai DJ, Aharoni D, Shuman T, Shobe J, Biane J, Song W, Wei B, Veshkini M, La-Vu M, Lou J, Flores SE, Kim I, Sano Y, Zhou M, Baumgaertel K, Lavi A, Kamata M, Tuszynski M, Mayford M, Golshani P, Silva AJ. A shared neural ensemble links distinct contextual memories encoded close in time. Nature. 2016;534:115–118. doi: 10.1038/nature17955.
  4. Cho HY, Shin W, Lee HS, Lee Y, Kim M, Oh JP, Han J, Jeong Y, Suh B, Kim E, Han JH. Turnover of fear engram cells by repeated experience. Current Biology. 2021;31:5450–5461. doi: 10.1016/j.cub.2021.10.004.
  5. Chowdhury A, Luchetti A, Fernandes G, Filho DA, Kastellakis G, Tzilivaki A, Ramirez EM, Tran MY, Poirazi P, Silva AJ. A locus coeruleus-dorsal CA1 dopaminergic circuit modulates memory linking. Neuron. 2022;110:3374–3388. doi: 10.1016/j.neuron.2022.08.001.
  6. Clopath C, Bonhoeffer T, Hübener M, Rose T. Variance and invariance of neuronal long-term representations. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences. 2017;372:20160161. doi: 10.1098/rstb.2016.0161.
  7. Delamare G, Feitosa Tomé D, Clopath C. Intrinsic Neural Excitability Induces Time-Dependent Overlap of Memory Engrams. bioRxiv. 2022. doi: 10.1101/2022.08.27.505441.
  8. Delamare G. Drift. swh:1:rev:c79e1baeff8e3c3294ef794c77287827e3af9cec. Software Heritage. 2024. https://archive.softwareheritage.org/swh:1:dir:50f44c9d73aa82fd3cd38347b9b6ed5c9af43a53;origin=https://github.com/gdelamar/drift;visit=swh:1:snp:d7c0ec941cf4282bc25b00df0862de834b7e1ed1;anchor=swh:1:rev:c79e1baeff8e3c3294ef794c77287827e3af9cec
  9. de Snoo ML, Miller AMP, Ramsaran AI, Josselyn SA, Frankland PW. Exercise accelerates place cell representational drift. Current Biology. 2023;33:R96–R97. doi: 10.1016/j.cub.2022.12.033.
  10. Driscoll LN, Pettit NL, Minderer M, Chettih SN, Harvey CD. Dynamic reorganization of neuronal activity patterns in parietal cortex. Cell. 2017;170:986–999. doi: 10.1016/j.cell.2017.07.021.
  11. Driscoll LN, Duncker L, Harvey CD. Representational drift: Emerging theories for continual learning and experimental future directions. Current Opinion in Neurobiology. 2022;76:102609. doi: 10.1016/j.conb.2022.102609.
  12. Geva N, Deitch D, Rubin A, Ziv Y. Time and experience differentially affect distinct aspects of hippocampal representational drift. Neuron. 2023;111:2357–2366. doi: 10.1016/j.neuron.2023.05.005.
  13. Grosmark AD, Buzsáki G. Diversity in neural firing dynamics supports both rigid and learned hippocampal sequences. Science. 2016;351:1440–1443. doi: 10.1126/science.aad1935.
  14. Hainmueller T, Bartos M. Parallel emergence of stable and dynamic memory engrams in the hippocampus. Nature. 2018;558:292–296. doi: 10.1038/s41586-018-0191-2.
  15. Huber R, Mäki H, Rosanova M, Casarotto S, Canali P, Casali AG, Tononi G, Massimini M. Human cortical excitability increases with time awake. Cerebral Cortex. 2013;23:332–338. doi: 10.1093/cercor/bhs014.
  16. Josselyn SA, Tonegawa S. Memory engrams: Recalling the past and imagining the future. Science. 2020;367:eaaw4325. doi: 10.1126/science.aaw4325.
  17. Káli S, Dayan P. Off-line replay maintains declarative memories in a model of hippocampal-neocortical interactions. Nature Neuroscience. 2004;7:286–294. doi: 10.1038/nn1202.
  18. Khatib D, Ratzon A, Sellevoll M, Barak O, Morris G, Derdikman D. Active experience, not time, determines within-day representational drift in dorsal CA1. Neuron. 2023;111:2348–2356. doi: 10.1016/j.neuron.2023.05.014.
  19. Kossio YFK, Goedeke S, Klos C, Memmesheimer R-M. Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation. PNAS. 2021;118:e2023832118. doi: 10.1073/pnas.2023832118.
  20. Levenstein D, Watson BO, Rinzel J, Buzsáki G. Sleep regulation of the distribution of cortical firing rates. Current Opinion in Neurobiology. 2017;44:34–42. doi: 10.1016/j.conb.2017.02.013.
  21. Levenstein D, Buzsáki G, Rinzel J. NREM sleep in the rodent neocortex and hippocampus reflects excitable dynamics. Nature Communications. 2019;10:2478. doi: 10.1038/s41467-019-10327-5.
  22. Manz P, Memmesheimer RM. Purely STDP-based assembly dynamics: Stability, learning, overlaps, drift and aging. PLOS Computational Biology. 2023;19:e1011006. doi: 10.1371/journal.pcbi.1011006.
  23. Mau W, Sullivan DW, Kinsky NR, Hasselmo ME, Howard MW, Eichenbaum H. The same hippocampal CA1 population simultaneously codes temporal information over multiple timescales. Current Biology. 2018;28:1499–1508. doi: 10.1016/j.cub.2018.03.051.
  24. Mau W, Hasselmo ME, Cai DJ. The brain in motion: How ensemble fluidity drives memory-updating and flexibility. eLife. 2020;9:e63550. doi: 10.7554/eLife.63550.
  25. Mau W, Morales-Rodriguez D, Dong Z, Pennington ZT, Francisco T, Baxter MG, Shuman T, Cai DJ. Ensemble Remodeling Supports Memory-Updating. bioRxiv. 2022. doi: 10.1101/2022.06.02.494530.
  26. Miller AMP, Frankland PW, Josselyn SA. Memory: Ironing out a wrinkle in time. Current Biology. 2018;28:R599–R601. doi: 10.1016/j.cub.2018.03.053.
  27. Pignatelli M, Ryan TJ, Roy DS, Lovett C, Smith LM, Muralidhar S, Tonegawa S. Engram cell excitability state determines the efficacy of memory retrieval. Neuron. 2019;101:274–284. doi: 10.1016/j.neuron.2018.11.029.
  28. Poo MM, Pignatelli M, Ryan TJ, Tonegawa S, Bonhoeffer T, Martin KC, Rudenko A, Tsai LH, Tsien RW, Fishell G, Mullins C, Gonçalves JT, Shtrahman M, Johnston ST, Gage FH, Dan Y, Long J, Buzsáki G, Stevens C. What is memory? The present state of the engram. BMC Biology. 2016;14:6. doi: 10.1186/s12915-016-0261-6.
  29. Rashid AJ, Yan C, Mercaldo V, Hsiang H-LL, Park S, Cole CJ, De Cristofaro A, Yu J, Ramakrishnan C, Lee SY, Deisseroth K, Frankland PW, Josselyn SA. Competition between engrams influences fear memory formation and recall. Science. 2016;353:383–387. doi: 10.1126/science.aaf0594.
  30. Rechavi Y, Rubin A, Yizhar O, Ziv Y. Exercise increases information content and affects long-term stability of hippocampal place codes. Cell Reports. 2022;41:111695. doi: 10.1016/j.celrep.2022.111695.
  31. Rogerson T, Cai DJ, Frank A, Sano Y, Shobe J, Lopez-Aranda MF, Silva AJ. Synaptic tagging during memory allocation. Nature Reviews Neuroscience. 2014;15:157–169. doi: 10.1038/nrn3667.
  32. Rubin A, Geva N, Sheintuch L, Ziv Y. Hippocampal ensemble dynamics timestamp events in long-term memory. eLife. 2015;4:e12247. doi: 10.7554/eLife.12247.
  33. Rule ME, O’Leary T. Self-healing codes: How stable neural populations can track continually reconfiguring neural representations. PNAS. 2022;119:e2106692119. doi: 10.1073/pnas.2106692119.
  34. Sadeh S, Clopath C. Contribution of behavioural variability to representational drift. eLife. 2022;11:e77907. doi: 10.7554/eLife.77907.
  35. Silva AJ, Zhou Y, Rogerson T, Shobe J, Balaji J. Molecular and cellular approaches to memory allocation in neural circuits. Science. 2009;326:391–395. doi: 10.1126/science.1174519.
  36. Spalla D, Cornacchia IM, Treves A. Continuous attractors for dynamic memories. eLife. 2021;10:e69499. doi: 10.7554/eLife.69499.
  37. Tran LM, Santoro A, Liu L, Josselyn SA, Richards BA, Frankland PW. Adult neurogenesis acts as a neural regularizer. PNAS. 2022;119:e2206704119. doi: 10.1073/pnas.2206704119.
  38. Zhou Y, Won J, Karlsson MG, Zhou M, Rogerson T, Balaji J, Neve R, Poirazi P, Silva AJ. CREB regulates excitability and the allocation of memory to subsets of neurons in the amygdala. Nature Neuroscience. 2009;12:1438–1443. doi: 10.1038/nn.2405.
  39. Ziv Y, Burns LD, Cocker ED, Hamel EO, Ghosh KK, Kitch LJ, El Gamal A, Schnitzer MJ. Long-term dynamics of CA1 hippocampal place codes. Nature Neuroscience. 2013;16:264–266. doi: 10.1038/nn.3329.

eLife assessment

Lisa M Giocomo 1

This is an important theoretical study providing insight into how fluctuations in excitability can contribute to gradual changes in the mapping between population activity and stimulus, commonly referred to as representational drift. The authors provide convincing evidence that fluctuations can contribute to drift. Overall, this is a well-presented study that explores the question of how changes in intrinsic excitability can influence distinct memory representations.

Reviewer #3 (Public Review):

Anonymous

Summary of the findings:

The authors explore an important question concerning the underlying mechanism of representational drift, which despite intense recent interest remains obscure. The paper explores the intriguing hypothesis that drift may reflect changes in the intrinsic excitability of neurons. The authors set out to provide theoretical insight into this potential mechanism.

They construct a rate model with all-to-all recurrent connectivity, in which recurrent synapses are governed by a standard Hebbian plasticity rule. This network receives a global input, constant across all neurons, which can be varied with time. Each neuron also is driven by an "intrinsic excitability" bias term, which does vary across cells. The authors study how activity in the network evolves as this intrinsic excitability term is changed.

They find that after initial stimulation of the network, those neurons where the excitability term is set high become more strongly connected and are in turn more responsive to the input. Each day the subset of neurons with high intrinsic excitability is changed, and the network's recurrent synaptic connectivity and responsiveness gradually shift, such that the new high intrinsic excitability subset becomes both more strongly activated by the global input and also more strongly recurrently connected. These changes result in drift, reflected by a gradual decrease across time in the correlation of the neuronal population vector response to the stimulus.

The authors are able to build a classifier that decodes the "day" (i.e. which subset of neurons had high intrinsic excitability) with perfect accuracy. This is despite the fact that the excitability bias during decoding is set to 0 for all neurons, and so the decoder is really detecting those neurons with strong recurrent connectivity, and in turn strong responses to the input. The authors show that it is also possible to decode the order in which different subsets of neurons were given high intrinsic excitability on previous "days". This second result depends on the extent by which intrinsic excitability was increased: if the increase in intrinsic excitability was either too high or too low, it was not possible to read out any information about the past ordering of excitability changes.

Finally, using another Hebbian learning rule, the authors show that an output neuron, whose activity is a weighted sum of the activity of all neurons in the network, is able to read out the activity of the network. What this means specifically, is that although the set of neurons most active in the network changes, the output neuron always maintains a higher firing rate than a neuron with randomly shuffled synaptic weights, because the output neuron continuously updates its weights to sample from the highly active population at any given moment. Thus, the output neuron can read out a stable memory despite drift.

Strengths:

The authors are clear in their description of the network they construct and in their results. They convincingly show that when they change their "intrinsic excitability term", upon stimulation, the Hebbian synapses in their network gradually evolve, and the combined synaptic connectivity and altered excitability result in drifting patterns of activity in response to an unchanging input (Fig. 1, Fig. 2a). Furthermore, their classification analyses (Fig. 2) show that information is preserved in the network, and their readout neuron successfully tracks the active cells (Fig. 3). Finally, the observation that only a specific range of excitability bias values permits decoding of the temporal structure of the history of intrinsic excitability (Fig. 2f and Figure S1) is interesting, and as the authors point out, not trivial.

Weaknesses:

1. The way the network is constructed, there is no formal difference between what the authors call "input", Δ(t), and what they call "intrinsic excitability" Ɛ_i(t) (see Equation 3). These are two separate terms that are summed (Eq. 3) to define the rate dynamics of the network. The authors could have switched the names of these terms: Δ(t) could have been considered a global "intrinsic excitability term" that varied with time and Ɛ_i(t) could have been the external input received by each neuron in the network. In that case, the paper would have considered the consequence of "slow fluctuations of external input" rather than "slow fluctuations of intrinsic excitability", but the results would have been the same. The difference is therefore semantic. The consequence is that this paper is not necessarily about "intrinsic excitability", rather it considers how a Hebbian network responds to changes in excitatory drive, regardless of whether those drives are labeled "input" or "intrinsic excitability".

A revised version of the manuscript models "slope-based" excitability changes in addition to "threshold-based" changes. This serves to address the above concern that as constructed here changes in excitability threshold are not distinguishable from changes in input. However, it remains unclear what the model would do should only a subset of neurons receive a given, fixed input. In that case, are excitability changes sufficient to induce drift? This remains an important question that is not addressed by the paper in its current form.

2. Given how the learning rule that defines the input to the readout neuron is constructed, it is trivial that this unit responds to the most active neurons in the network, more so than a neuron assigned random weights. What would happen if the network included more than one "memory"? Would it be possible to construct a readout neuron that could classify two distinct patterns? Along these lines, what if there were multiple, distinct stimuli used to drive this network, rather than the global input the authors employ here? Does the system, as constructed, have the capacity to provide two distinct patterns of activity in response to two distinct inputs?

A revised version of the manuscript addresses this question, demonstrating that the network is capable of maintaining two distinct memories.

Impact:

Defining the potential role of changes in intrinsic excitability in drift is fundamental. Thus, this paper represents an important contribution. What we see here is that changes in intrinsic excitability are sufficient to induce drift. This raises the question for future work of the specific contributions of changing excitability from changing input to representational drift.

eLife. 2024 May 7;12:RP88053. doi: 10.7554/eLife.88053.3.sa2

Author response

Geoffroy Delamare 1, Yosif Zaki 2, Denise J Cai 3, Claudia Clopath 4

The following is the authors’ response to the latest reviews.

A revised version of the manuscript models "slope-based" excitability changes in addition to "threshold-based" changes. This serves to address the above concern that as constructed here changes in excitability threshold are not distinguishable from changes in input. However, it remains unclear what the model would do should only a subset of neurons receive a given, fixed input. In that case, are excitability changes sufficient to induce drift? This remains an important question that is not addressed by the paper in its current form.

Thank you for this important point. In the simulation of two memories (Fig. S6), we stimulated half of the neural population for each of the two memories. We therefore also showed that drift happens when only a subset of neurons is stimulated.

The following is the authors’ response to the original reviews.

Reviewer #1 (Public Review):

Current experimental work reveals that brain areas implicated in episodic and spatial memory have a dynamic code, in which activity representing familiar events/locations changes over time. This paper shows that such reconfiguration is consistent with underlying changes in the excitability of cells in the population, which ties these observations to a physiological mechanism.

Delamare et al. use a recurrent network model to consider the hypothesis that slow fluctuations in intrinsic excitability, together with spontaneous reactivations of ensembles, may cause the structure of the ensemble to change, consistent with the phenomenon of representational drift. The paper focuses on three main findings from their model: (1) fluctuations in intrinsic excitability lead to drift, (2) this drift has a temporal structure, and (3) a readout neuron can track the drift and continue to decode the memory. This paper is relevant and timely, and the work addresses questions of both a potential mechanism (fluctuations in intrinsic excitability) and purpose (time-stamping memories) of drift.

The model used in this study consists of a pool of 50 all-to-all recurrently connected excitatory neurons with weights changing according to a Hebbian rule. All neurons receive the same input during stimulation, as well as global inhibition. The population has heterogeneous excitability, and each neuron's excitability is constant over time apart from a transient increase on a single day. The neurons are divided into ensembles of 10 neurons each, and on each day, a different ensemble receives a transient increase in the excitability of each of its neurons, with each neuron experiencing the same amplitude of increase. Each day for four days, repetitions of a binary stimulus pulse are applied to every neuron.

The modeling choices focus in on the parameter of interest-the excitability-and other details are generally kept as straightforward as possible. That said, I wonder if certain aspects may be overly simple. The extent of the work already performed, however, does serve the intended purpose, and so I think it would be sufficient for the authors to comment on these choices rather than to take more space in this paper to actually implement these choices. What might happen were more complex modeling choices made? What is the justification for the choices that are made in the present work?

The two specific modeling choices I question are (1) the excitability dynamics and (2) the input stimulus. The ensemble-wide synchronous and constant-amplitude excitability increase, followed by a return to baseline, seems to be a very simplified picture of the dynamics of intrinsic excitability. At the very least, justification for this simplified picture would benefit the reader, and I would be interested in the authors' speculation about how a more complex and biologically realistic dynamics model might impact the drift in their network model. Similarly, the input stimulus being binary means that, on the single-neuron level, the only type of drift that can occur is a sort of drop-in/drop-out drift; this choice excludes the possibility of a neuron maintaining significant tuning to a stimulus but changing its preferred value. How would the use of a continuous input variable influence the results?

(1) In our model, neurons tend to compete for allocation to the memory ensemble: neurons with higher excitability tend to be preferentially allocated, and neurons with lower excitability do not respond to the stimulus. Because relative, but not absolute, excitability biases this competition, we suggest that the exact distribution of excitability would not impact the results qualitatively. On the other hand, the results might vary if excitability were considered dependent on the activity of the neurons, as previously reported experimentally (Cai 2016, Rashid 2016, Pignatelli 2019). An increase in excitability following neural activity might induce higher correlation among ensembles on consecutive days, decreasing the drift.

(2) We thank the reviewer for this very good point. Indeed, two recent studies (Geva 2023, Khatib 2023) have highlighted distinct mechanisms for a drift of the mean firing rate and the tuning curve. We extended the last part of the discussion to include this point: “Finally, we intended to model drift in the firing rates, as opposed to a drift in the tuning curve of the neurons. Recent studies suggest that drifts in the mean firing rate and tuning curve arise from two different mechanisms [33, 34]. Experience drives a drift in neurons' tuning curves while the passage of time drives a drift in neurons' firing rates. In this sense, our study is consistent with these findings by providing a possible mechanism for a drift in the mean firing rates of the neurons driven by a dynamical excitability. Our work suggests that drift can depend on any experience having an impact on excitability dynamics, such as exercise, as previously shown experimentally [9, 35], but also neurogenesis [9, 31, 36], sleep [37] or increase in dopamine level [38].”

Result (1): Fluctuations in intrinsic excitability induce drift

The two choices highlighted above appear to lead to representations that never recruit the neurons in the population with the lowest baseline excitability (Figure 1b: it appears that only 10 neurons ever show high firing rates) and produce networks with very strong bidirectional coupling between this subset of neurons and weak coupling elsewhere (Figure 1d). This low recruitment rate may not necessarily be problematic, but it stands out as a point that should at least be commented on. The fact that only 10 neurons (20% of the population) are ever recruited in a representation also raises the question of what would happen if the model were scaled up to include more neurons.

This is a very good point. To test how the model depends on the network size, we plotted the drift index against the size of the ensemble. With the current implementation, we did not observe a significant correlation between the drift rate and the size of the initial ensemble (Figure S2).

Author response image 1. The rate of the drift does not depend on the size of the engram.


Drift rate against the size of the original engram. Each dot shows one simulation (Methods). n = 100 simulations.

Result (2): The observed drift has a temporal structure

The authors then demonstrate that the drift has a temporal structure (i.e., that activity is informative about the day on which it occurs), with methods inspired by Rubin et al. (2015). Rubin et al. (2015) compare single-trial activity patterns on a given session with full-session activity patterns from each session. In contrast, Delamare et al. here compare full-session patterns with baseline excitability (E = 0) patterns. This point of difference should be motivated. What does a comparison to this baseline excitability activity pattern tell us? The ordinal decoder, which decodes the session order, gives a very interesting result: an intermediate amplitude E of excitability increase maximizes this decoder's performance. This point is also discussed well by the authors. As a potential point of further exploration, the use of baseline excitability patterns in the day decoder had me wondering how the ordinal decoder would perform with these baseline patterns.

This is a good point. Here, we aimed to dissociate the role of excitability from that of the recurrent currents. We introduced a time decoder that compares the pattern with baseline excitability (E = 0), in order to test whether the temporal information was encoded in the ensemble, i.e., in the recurrent weights. By contrast, because the neural activity is by construction biased towards excitability, a time decoder performed on the full session would succeed trivially.
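A minimal sketch of this decoding logic (hypothetical variable names; the actual decoder is specified in the Methods): each pattern recorded at baseline excitability is assigned to the day whose reference pattern it correlates with best.

```python
import numpy as np

def day_decoder(baseline_patterns, reference_patterns):
    """baseline_patterns: (n_trials, N) activity recorded with E = 0;
    reference_patterns: (n_days, N) mean full-session activity per day."""
    decoded_days = []
    for p in baseline_patterns:
        corrs = [np.corrcoef(p, ref)[0, 1] for ref in reference_patterns]
        decoded_days.append(int(np.argmax(corrs)))   # best-matching day
    return np.array(decoded_days)
```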

Result (3): A readout neuron can track drift

The authors conclude their work by connecting a readout neuron to the population with plastic weights evolving via a Hebbian rule. They show that this neuron can track the drifting ensemble by adjusting its weights. These results are shown very neatly and effectively and corroborate existing work that they cite very clearly.

Overall, this paper is well-organized, offers a straightforward model of dynamic intrinsic excitability, and provides relevant results with appropriate interpretations. The methods could benefit from more justification of certain modeling choices, and/or an exploration (either speculative or via implementation) of what would happen with more complex choices. This modeling work paves the way for further explorations of how intrinsic excitability fluctuations influence drifting representations.

Reviewer #2 (Public Review):

In this computational study, Delamare et al. identify slow fluctuations of neuronal excitability as one mechanism underlying representational drift in recurrent neuronal networks and show that the drift is informative about the temporal structure of the memory and when it was formed. The manuscript is very well written and addresses a timely as well as important topic in current neuroscience, namely the mechanisms that may underlie representational drift.

The study is based on an all-to-all recurrent neuronal network with synapses following Hebbian plasticity rules. On the first day, a cue-related representation is formed in that network, and on the next 3 days it is recalled spontaneously or due to a memory-related cue. One major observation is that representational drift emerges day by day based on intrinsic excitability, with the most excitable cells showing the highest probability of replacing previously active members of the assembly. By using a day decoder, the authors state that they can infer the order in which the reactivation of cell assemblies happened, but only if the excitability state was not too high. By applying a read-out neuron, the authors observed that this cell can track the drifting ensemble, which is based on changes of the synaptic weights across time. The few questions that emerged and could be addressed either theoretically or in the discussion are as follows:

1. Would similar results be obtained if, instead of all-to-all recurrent connections, more realistic connectivity profiles, such as those estimated for CA1 and CA3, had been modeled?

This is a very interesting point. We performed further simulations to show that the results are not dependent on the exact structure of the network. In particular, we show that all-to-all connectivity is not required to observe a drift of the ensemble. We found similar results when the recurrent weights matrix was made sparse (Fig. S4a-c, Methods). Similarly to all-to-all connectivity, we found that the ensemble is informative about its temporal history (Fig. S4d) and that an output neuron can decode the ensemble continuously (Fig. S4e).

Author response image 2. Sparse recurrent connectivity shows similar drifting behavior as all-to-all connectivity.


The same simulation protocol as Fig. 1 was used while the recurrent weights matrix was made 50% sparse (Methods). (a) Firing rates of the neurons across time. The red traces correspond to neurons belonging to the first assembly, namely those that have a firing rate higher than the active threshold after the first stimulation. The black bars show the stimulation and the dashed line shows the active threshold. (b) Recurrent weights matrices after each of the four stimuli show the drifting assembly. (c) Correlation of the patterns of activity between the first day and every other day. (d) Student's t-test t-value of the ordinal time decoder, for the real (red) and shuffled (orange) data and for different amplitudes of excitability E. (e) Center of mass of the distribution of the output weights (Methods) across days. (c-e) Data are shown as mean ± s.e.m. for n = 10 simulations.
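One simple way to implement this 50% sparsity (a sketch under the assumption that sparseness is enforced with a fixed binary mask; the Methods give the exact procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
mask = (rng.random((N, N)) < 0.5).astype(float)   # keep roughly half of the synapses
np.fill_diagonal(mask, 0.0)                       # no self-connections

def masked_hebbian_update(W, r, eta=0.01):
    """Plasticity acts only on the synapses that exist under the mask."""
    W = W + eta * np.outer(r, r) * mask
    return np.clip(W, 0.0, 1.0)
```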

2. How does the number of excited cells that could potentially contribute to an engram influence the representational drift and the decoding quality?

This is indeed a very good question. We did not observe a significant correlation between the drift rate and size of the initial ensemble (Fig. S2).

Author response image 3. The rate of the drift does not depend on the size of the engram.


Drift rate against the size of the original engram. Each dot shows one simulation (Methods). n = 100 simulations.

3. How does the rate of the drift influence the quality of readout from the read-out neuron?

We thank the reviewer for this interesting question. We introduced a measure of the “read-out quality” and plotted this value against the rate of the drift. We found a weak negative correlation between the two quantities: the read-out quality decreases as the rate of the drift increases.

Author response image 4. The quality of the read-out decreases with the rate of the drift.


Read-out quality computed on the firing rate of the output neuron against the rate of the drift (Methods). Each dot shows one simulation. n = 100 simulations.
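For concreteness, the two quantities being correlated could be defined along the following lines (hypothetical definitions for illustration; the actual measures are specified in the Methods):

```python
import numpy as np

def drift_rate(day_patterns):
    """Average day-to-day drop in pattern correlation; day_patterns: (n_days, N)."""
    corrs = [np.corrcoef(day_patterns[d], day_patterns[d + 1])[0, 1]
             for d in range(len(day_patterns) - 1)]
    return 1.0 - float(np.mean(corrs))

def readout_quality(y_real, y_shuffled):
    """Mean output rate relative to a shuffled-weight control."""
    return float(y_real.mean()) / max(float(y_shuffled.mean()), 1e-12)
```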

Reviewer #3 (Public Review):

The authors explore an important question concerning the underlying mechanism of representational drift, which despite intense recent interest remains obscure. The paper explores the intriguing hypothesis that drift may reflect changes in the intrinsic excitability of neurons. The authors set out to provide theoretical insight into this potential mechanism.

They construct a rate model with all-to-all recurrent connectivity, in which recurrent synapses are governed by a standard Hebbian plasticity rule. This network receives a global input, constant across all neurons, which can be varied with time. Each neuron also is driven by an "intrinsic excitability" bias term, which does vary across cells. The authors study how activity in the network evolves as this intrinsic excitability term is changed.

They find that after initial stimulation of the network, those neurons where the excitability term is set high become more strongly connected and are in turn more responsive to the input. Each day the subset of neurons with high intrinsic excitability is changed, and the network's recurrent synaptic connectivity and responsiveness gradually shift, such that the new high intrinsic excitability subset becomes both more strongly activated by the global input and also more strongly recurrently connected. These changes result in drift, reflected by a gradual decrease across time in the correlation of the neuronal population vector response to the stimulus.

The authors are able to build a classifier that decodes the "day" (i.e. which subset of neurons had high intrinsic excitability) with perfect accuracy. This is despite the fact that the excitability bias during decoding is set to 0 for all neurons, and so the decoder is really detecting those neurons with strong recurrent connectivity, and in turn strong responses to the input. The authors show that it is also possible to decode the order in which different subsets of neurons were given high intrinsic excitability on previous "days". This second result depends on the extent to which intrinsic excitability was increased: if the increase in intrinsic excitability was either too high or too low, it was not possible to read out any information about past ordering of excitability changes.

Finally, using another Hebbian learning rule, the authors show that an output neuron, whose activity is a weighted sum of the activity of all neurons in the network, is able to read out the activity of the network. What this means specifically, is that although the set of neurons most active in the network changes, the output neuron always maintains a higher firing rate than a neuron with randomly shuffled synaptic weights, because the output neuron continuously updates its weights to sample from the highly active population at any given moment. Thus, the output neuron can readout a stable memory despite drift.
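Schematically, the readout mechanism amounts to the following update (a sketch; the paper's exact output plasticity rule, including any normalization or decay, is in the Methods, and eta and decay here are assumed values):

```python
import numpy as np

def readout_step(w_out, r, eta=0.05, decay=0.01):
    """One plasticity step of the output neuron reading the population rates r."""
    y = float(w_out @ r)                           # output: weighted sum of rates
    w_out = w_out + eta * y * r - decay * w_out    # Hebbian growth with slow forgetting
    return y, np.clip(w_out, 0.0, None)            # keep weights non-negative
```

Because the weights keep growing onto whichever neurons are currently most active while unused weights decay, the output neuron continuously re-samples the drifting ensemble, which is the tracking behavior described above.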

Strengths:

The authors are clear in their description of the network they construct and in their results. They convincingly show that when they change their "intrinsic excitability term", upon stimulation, the Hebbian synapses in their network gradually evolve, and the combined synaptic connectivity and altered excitability result in drifting patterns of activity in response to an unchanging input (Fig. 1, Fig. 2a). Furthermore, their classification analyses (Fig. 2) show that information is preserved in the network, and their readout neuron successfully tracks the active cells (Fig. 3). Finally, the observation that only a specific range of excitability bias values permits decoding of the temporal structure of the history of intrinsic excitability (Fig. 2f and Figure S1) is interesting, and as the authors point out, not trivial.

Weaknesses:

1. The way the network is constructed, there is no formal difference between what the authors call "input", Δ(t), and what they call "intrinsic excitability", Ɛ_i(t) (see Equation 3). These are two separate terms that are summed (Eq. 3) to define the rate dynamics of the network. The authors could have switched the names of these terms: Δ(t) could have been considered a global "intrinsic excitability term" that varied with time, and Ɛ_i(t) could have been the external input received by each neuron i in the network. In that case, the paper would have considered the consequence of "slow fluctuations of external input" rather than "slow fluctuations of intrinsic excitability", but the results would have been the same. The difference is therefore semantic. The consequence is that this paper is not necessarily about "intrinsic excitability"; rather, it considers how a Hebbian network responds to changes in excitatory drive, regardless of whether those drives are labeled "input" or "intrinsic excitability".
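For concreteness, the additive structure at issue can be written schematically as (an assumed general form for illustration, not necessarily the paper's exact Eq. 3):

```latex
\tau \frac{dr_i}{dt} = -r_i
  + \phi\!\left( \sum_j W_{ij}\, r_j \;+\; \Delta(t) \;+\; \varepsilon_i(t) \right)
```

Because the two terms enter only through their sum inside the nonlinearity, exchanging their labels leaves the dynamics unchanged.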

This is a very good point. We performed further simulations to model “slope-based”, instead of “threshold-based”, changes in excitability (Fig. S5a, Methods). In this new definition of excitability, we changed the slope of the activation function, which is initially sampled from a random distribution. With this varying excitability, we found results very similar to those obtained when excitability was varied as the threshold of the activation function (Fig. S5b-d). We also found that the ensemble is informative about its temporal history (Fig. S5e) and that an output neuron can decode the ensemble continuously (Fig. S5f).

Author response image 5. Change of excitability as a variable slope of the input-output function shows similar drifting behavior as considering a change in the threshold.


The same simulation protocol as Fig. 1 was used while the excitability changes were modeled as a change in the activation function slope (Methods). (a) Schema showing two different ways of defining excitability, as a threshold (top) or slope (bottom) of the activation function. Each line shows one neuron and darker lines correspond to neurons with increased excitability. (b) Firing rates of the neurons across time. The red traces correspond to neurons belonging to the first assembly, namely those that have a firing rate higher than the active threshold after the first stimulation. The black bars show the stimulation and the dashed line shows the active threshold. (c) Recurrent weights matrices after each of the four stimuli show the drifting assembly. (d) Correlation of the patterns of activity between the first day and every other day. (e) Student's t-test t-value of the ordinal time decoder, for the real (red) and shuffled (orange) data and for different amplitudes of excitability E. (f) Center of mass of the distribution of the output weights (Methods) across days. (d-f) Data are shown as mean ± s.e.m. for n = 10 simulations.
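Written out, the two parameterizations in panel (a) differ as follows (a rectified-linear form is used here purely for illustration; the paper's actual activation function may be different):

```python
import numpy as np

def phi_threshold(x, eps, theta=1.0):
    """Threshold-based excitability: a more excitable neuron fires at lower input."""
    return np.maximum(x - (theta - eps), 0.0)

def phi_slope(x, eps, beta=1.0):
    """Slope-based excitability: a more excitable neuron has a steeper gain."""
    return np.maximum((beta + eps) * x, 0.0)
```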

2. Given how the learning rule that defines input to the readout neuron is constructed, it is trivial that this unit responds to the most active neurons in the network, more so than a neuron assigned random weights. What would happen if the network included more than one "memory"? Would it be possible to construct a readout neuron that could classify two distinct patterns? Along these lines, what if there were multiple, distinct stimuli used to drive this network, rather than the global input the authors employ here? Does the system, as constructed, have the capacity to provide two distinct patterns of activity in response to two distinct inputs?

This is an interesting point. In order to model multiple memories, we introduced non-uniform feedforward inputs, defining different “contexts” (Methods). We adapted our model so that two contexts target two random sub-populations in the network. We also introduced a second output neuron to decode the second memory. The simulation protocol was adapted so that each of the two contexts is stimulated every day (Fig. S6a). We found that the network is able to store two ensembles that drift independently (Fig. S6 and S7a). We were also able to decode temporal information from the patterns of activity of both ensembles (Fig. S7b). Finally, both memories could be decoded independently using two output neurons (Fig. S7c and d).

Author response image 6. Two distinct ensembles can be encoded and drift independently.


a) and b) Firing rates of the neurons across time. The red traces in panel b) correspond to neurons belonging to the first assembly and the green traces to the second assembly on the first day. They correspond to neurons having a firing rate higher than the active threshold after the first stimulation of each assembly. The black bars show the stimulation and the dashed line shows the active threshold. c) Recurrent weights matrices after each of the eight stimuli showing the drifting of the first (top) and second (bottom) assembly.

Author response image 7. The two ensembles are informative about their temporal history and can be decoded using two output neurons.


a) Correlation of the patterns of activity between the first day and every other day, for the first assembly (red) and the second assembly (green). b) Student's t-test t-value of the ordinal time decoder, for the first (red, left) and second ensemble (green, right) for different amplitudes of excitability E. Shuffled data are shown in orange. c) Center of mass of the distribution of the output weights (Methods) across days for the first (w_1^out, red) and second (w_2^out, green) ensemble. a-c) Data are shown as mean ± s.e.m. for n = 10 simulations. d) Output neurons' firing rates across time for the first ensemble (y_1, top) and the second ensemble (y_2, bottom). The red and green traces correspond to the real output. The dark blue, light blue and yellow traces correspond to the cases where the output weights were randomly shuffled for every time point after presentation of the first, second and third stimulus, respectively.
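The two-context protocol described above can be sketched as follows (illustrative only; the Methods define the exact feedforward structure), with each context stimulating a random half of the population:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50
idx = rng.permutation(N)
context_A, context_B = idx[:N // 2], idx[N // 2:]   # two random sub-populations

def context_input(context, amplitude=1.0):
    """Non-uniform feedforward input targeting one context's sub-population."""
    stim = np.zeros(N)
    stim[context] = amplitude
    return stim
```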

Impact:

Defining the potential role of changes in intrinsic excitability in drift is fundamental. Thus, this paper represents a potentially important contribution. Unfortunately, given the way the network employed here is constructed, it is difficult to tease apart the specific contribution of changing excitability from changing input. This limits the interpretability and applicability of the results.

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Supplementary Materials

MDAR checklist

Data Availability Statement

The code for simulations and figures is available at GitHub (copy archived at Delamare, 2024).

