PLOS Computational Biology. 2021 Jun 1;17(6):e1008927. doi: 10.1371/journal.pcbi.1008927

Embedding optimization reveals long-lasting history dependence in neural spiking activity

Lucas Rudelt 1,*, Daniel González Marx 1, Michael Wibral 2, Viola Priesemann 1,3,*
Editor: Daniele Marinazzo4
PMCID: PMC8205186  PMID: 34061837

Abstract

Information processing can leave distinct footprints on the statistics of neural spiking. For example, efficient coding minimizes the statistical dependencies on the spiking history, while temporal integration of information may require the maintenance of information over different timescales. To investigate these footprints, we developed a novel approach to quantify history dependence within the spiking of a single neuron, using the mutual information between the entire past and current spiking. This measure captures how much past information is necessary to predict current spiking. In contrast, classical time-lagged measures of temporal dependence like the autocorrelation capture how long—potentially redundant—past information can still be read out. Strikingly, we find for model neurons that our method disentangles the strength and timescale of history dependence, whereas the two are mixed in classical approaches. When applying the method to experimental data, which are necessarily of limited size, a reliable estimation of mutual information is only possible for a coarse temporal binning of past spiking, a so-called past embedding. To still account for the vastly different spiking statistics and potentially long history dependence of living neurons, we developed an embedding-optimization approach that varies not only the number and size of past bins, but also an exponential stretching of bin sizes. For extra-cellular spike recordings, we found that the strength and timescale of history dependence indeed can vary independently across experimental preparations. While hippocampus indicated strong and long history dependence, in visual cortex it was weak and short, and in vitro the history dependence was strong but short. This work enables an information-theoretic characterization of history dependence in recorded spike trains, which captures a footprint of information processing that is beyond time-lagged measures of temporal dependence. To facilitate the application of the method, we provide practical guidelines and a toolbox.

Author summary

Even with exciting advances in recording techniques of neural spiking activity, experiments provide only a comparatively short glimpse into the activity of a tiny subset of all neurons. How can we learn from these experiments about the organization of information processing in the brain? To that end, we exploit that different properties of information processing leave distinct footprints on the firing statistics of individual spiking neurons. In our work, we focus on a particular statistical footprint: We quantify how much a single neuron’s spiking depends on its own preceding activity, which we call history dependence. By quantifying history dependence in neural spike recordings, one can, in turn, infer some of the properties of information processing. Because recording lengths are limited in practice, a direct estimation of history dependence from experiments is challenging. The embedding optimization approach that we present in this paper aims at extracting a maximum of history dependence within the limits set by a reliable estimation. The approach is highly adaptive and thereby enables a meaningful comparison of history dependence between neurons with vastly different spiking statistics, which we exemplify on a diversity of spike recordings. In conjunction with recent, highly parallel spike recording techniques, the approach could yield valuable insights into how hierarchical processing is organized in the brain.

Introduction

How is information processing organized in the brain, and what are the principles that govern neural coding? Fortunately, footprints of different information processing and neural coding strategies can be found in the firing statistics of individual neurons, and in particular in the history dependence, the statistical dependence of a single neuron’s spiking on its preceding activity.

In classical, noise-less efficient coding, history dependence should be low to minimize redundancy and optimize efficiency of neural information transmission [1–3]. In contrast, in the presence of noise, history dependence and thus redundancy could be higher to increase the signal-to-noise ratio for a robust code [4]. Moreover, history dependence can be harnessed for active information storage, i.e. maintaining past input information to combine it with present input for temporal processing [5–7] and associative learning [8]. In addition to its magnitude, the timescale of history dependence provides an important footprint of processing at different processing stages in the brain [9–11]. This is because higher-level processing requires integrating information on longer timescales than lower-level processing [12]. Therefore, history dependence in neural spiking should reach further into the past for neurons involved in higher-level processing [9, 13]. Quantifying history dependence and its timescale could probe these different footprints and thus yield valuable insights on how neural coding and information processing are organized in the brain.

Often, history dependence is characterized by how much spiking is correlated with past spiking at a certain time lag [14, 15]. From the decay time of this lagged correlation, one obtains an intrinsic timescale of how long past information can still be read out [9–11, 16]. However, to quantify not only a timescale of statistical dependence, but also its strength, one has to quantify how much of a neuron’s spiking depends on its entire past. Here, this is done with the mutual information between the spiking of a neuron and its own past [17], also called active information storage [5–7], or predictive information [18, 19].

Estimating this mutual information directly from spike recordings, however, is notoriously difficult. The reason is that statistical dependencies may reside in precise spike times, extend far into the past and contain higher-order dependencies. This makes it hard to find a parametric model, e.g. from the family of generalized linear models [20, 21], that is flexible enough to account for the variety of spiking statistics encountered in experiments. Therefore, one typically infers mutual information directly from observed spike trains [22–26]. The downside is that this requires a lot of data, otherwise estimates can be severely biased [27, 28]. A lot of work has been devoted to finding less biased estimates, either by correcting bias [28–31], or by using Bayesian inference [32–34]. Although these estimators alleviate to some extent the problem of bias, a reliable estimation is only possible for a much reduced representation of past spiking, also called past embedding [35]. For example, many studies infer history dependence and transfer entropy by embedding the past spiking using a single bin [26, 36].

While previously most attention was devoted to a robust estimation given a (potentially limited) embedding, we argue that a careful embedding of past activity is crucial. In particular, a past embedding should be well adapted to the spiking statistics of a neuron, but also be low-dimensional enough to enable a reliable estimation. To that end, we here devise an embedding optimization scheme that selects the embedding that maximizes the estimated history dependence, while reliable estimation is ensured by two independent regularization methods.

In this paper, we first provide a methods summary where we introduce the measure of history dependence and the information timescale, as well as the embedding optimization method employed to estimate history dependence in neural spike trains. A glossary of all the abbreviations and symbols used in this paper can be found at the beginning of Materials and methods. In Results, we first compare the measure of history dependence with classical time-lagged measures of temporal dependence on different models of neural spiking activity. Second, we test the embedding optimization approach on a tractable benchmark model, and also compare it to existing estimation methods on a variety of experimental spike recordings. Finally, we demonstrate that the approach reveals interesting differences between neural systems, both in terms of the total history dependence, as well as the information timescale. For the reader interested in applying the method, we provide practical guidelines in the discussion and at the end of Materials and methods. The method is readily applicable to highly parallel spike recordings, and a toolbox for Python3 is available online [37].

Methods summary

Definition of history dependence

First, we define history dependence R(T) in the spiking of a single neuron. We quantify history dependence based on the mutual information

I(spiking; past(T)) = H(spiking) − H(spiking|past(T)) (1)

between current spiking in a time bin [t, t + Δt) and its own past in a past range [t − T, t) (Fig 1B). Here, we assume stationarity and ergodicity, hence the measure is an average over all times t. This mutual information is also called active information storage [5], and is related to the predictive information [18, 19]. It quantifies how much of the current spiking information H(spiking) can be predicted from past spiking. The spiking information is given by the Shannon entropy [38]

H(spiking) = −p(spike) log2 p(spike) − (1 − p(spike)) log2(1 − p(spike)), (2)

where p(spike) = rΔt is the probability to spike within a small time bin Δt for a neuron with average firing rate r. The Shannon entropy H(spiking) quantifies the average information that a spiking neuron could transmit within one bin, assuming no statistical dependencies on its own past. In contrast, the conditional entropy H(spiking|past(T)) (see Materials and methods) quantifies the average spiking information (in the sense of entropy) that remains when dependencies on past spiking are taken into account. Note that past dependencies can only reduce the average spiking information, i.e. H(spiking|past(T)) ≤ H(spiking). The difference between the two then gives the amount of spiking information that is redundant or entirely predictable from the past. To transform this measure of information into a measure of statistical dependence, we normalize the mutual information by the entropy H(spiking) and define history dependence R(T) as

R(T) ≡ I(spiking; past(T)) / H(spiking) = 1 − H(spiking|past(T)) / H(spiking) ∈ [0, 1]. (3)

While the mutual information quantifies the amount of predictable information, R(T) gives the proportion of spiking information that is predictable or redundant with past spiking. As such, it interpolates between the following intuitive extreme cases: R(T) = 0 corresponds to independent and R(T) = 1 to entirely predictable spiking. Moreover, while the entropy and thus the mutual information I(spiking;past(T)) increases with the firing rate (see S13 Fig for an example on real data), the normalized R(T) is comparable across recordings of neurons with very different firing rates. Finally, all the above measures can depend on the size of the time bin Δt, which discretizes the current spiking activity in time. Too small a time bin holds the risk that noise in the spike emission reduces the overall predictability or history dependence, whereas an overly large time bin holds the risk of destroying coding relevant time information in the neuron’s spike train. Thus, we chose the smallest time bin Δt = 5ms that does not yet show a decrease in history dependence (S16 Fig).
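As a minimal illustration of Eqs (2) and (3), the following Python sketch computes the spiking entropy for a given firing rate and bin size, and converts a mutual-information value into history dependence. The function names are ours for illustration and are not part of the published toolbox.

```python
import numpy as np

def spiking_entropy(rate_hz, dt=0.005):
    """Shannon entropy (bits per bin) of spiking in a bin of width dt, Eq (2)."""
    p = rate_hz * dt                               # p(spike) = r * dt
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def history_dependence(mutual_info, rate_hz, dt=0.005):
    """History dependence R = I(spiking; past) / H(spiking), Eq (3)."""
    return mutual_info / spiking_entropy(rate_hz, dt)

# Example: a neuron firing at 4 Hz, analyzed with 5 ms bins
print(spiking_entropy(4.0))                        # ~0.14 bit per bin
```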

Fig 1. Illustration of history dependence and related measures in a neural spike train.


(A) For the analysis, spiking is represented by 0 or 1 in a small time bin Δt (grey box). Autocorrelation C(Ti) or the lagged mutual information L(Ti) quantify the statistical dependence of spiking on past spiking in a single past bin with time lag Ti (green box). (B) In contrast, history dependence R(Ti) quantifies the dependence of spiking on the entire spiking history in a past range Ti. The gain in history dependence ΔR(Ti) = R(Ti) − R(Ti−1) quantifies the increase in history dependence by increasing the past range from Ti−1 to Ti, and is defined in analogy to the lagged measures. (C) Autocorrelation C(T) and lagged mutual information L(T) for a typical example neuron (mouse, primary visual cortex). Both measures decay with increasing T, where L(T) decays slightly faster due to the non-linearity of the mutual information. Timescales τC and τL (vertical dashed lines) can be computed either by fitting an exponential decay (autocorrelation) or by using the generalized timescale (lagged mutual information). (D) In contrast, history dependence R(T) increases monotonically for systematically increasing past range T, until it saturates at the total history dependence Rtot. From R(T), the gain ΔR(Ti) can be computed between increasing past ranges Ti−1 and Ti (grey dashed lines). The gain ΔR(T) decays to zero like the time-lagged measures, with information timescale τR (dashed line).

Total history dependence and the information timescale

Here, we introduce measures to quantify the strength and the timescale of history dependence independently. First, note that the history dependence R(T) monotonically increases with the past range T (Fig 1D), until it converges to the total history dependence

Rtot ≡ limT→∞ R(T). (4)

The total history dependence Rtot quantifies the proportion of predictable spiking information once the entire past is taken into account.

While the history dependence R(T) is monotonically increasing, the gain in history dependence ΔR(Ti) ≡ R(Ti) − R(Ti−1) between two past ranges Ti > Ti−1 tends to decrease, and eventually decreases to zero for Ti, Ti−1 → ∞ (Fig 1D). This is in analogy to time-lagged measures of temporal dependence such as the autocorrelation C(T) or lagged mutual information L(T) (Fig 1A and 1C). Moreover, because R(T) is monotonically increasing, the gain cannot be negative, i.e. ΔR(Ti) ≥ 0. From ΔR(Ti), we quantify a characteristic timescale τR of history dependence similar to an autocorrelation time. In analogy to the integrated autocorrelation time [39], we define the generalized timescale

τR ≡ [∑i=1…n T̄i ΔR(Ti)] / [∑j=1…n ΔR(Tj)] − T0 (5)

as the average of past ranges T̄i = (Ti + Ti−1)/2, weighted with their gain ΔR(Ti) = R(Ti) − R(Ti−1). Here, steps between two past ranges Ti−1 and Ti should be chosen small enough, and summing the midpoints T̄i of the steps further reduces the discretization error. T0 is the starting point, i.e. the first past range for which R(T) is computed, and was set to T0 = 10 ms to exclude short-term past dependencies like refractoriness (see Materials and methods for details). Moreover, the last past range Tn has to be high enough such that R(Tn) has converged, i.e. R(Tn) = Rtot. Here, we set Tn = 5 s unless stated otherwise.

To illustrate the analogy to the autocorrelation time, we note that if the gain decays exponentially, i.e. ΔR(Ti) ∝ exp(−Ti/τauto) with decay constant τauto, then τR = τauto for n → ∞ and sufficiently small steps Ti − Ti−1. The advantage of τR is that it also generalizes to cases where the decay is not exponential. Furthermore, it can be applied to any other measure of temporal dependence (e.g. the lagged mutual information) as long as the sum in Eq (5) remains finite, and the coefficients are non-negative. Note that estimates of ΔR(Ti) can also be negative, so we included corrections to allow a sensible estimation of τR (Materials and methods). Finally, since τR quantifies the timescale over which unique predictive information is accumulated, we refer to it as the information timescale.
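For concreteness, a minimal Python sketch of Eq (5), omitting the corrections for negative estimated gains mentioned above, could look as follows:

```python
import numpy as np

def information_timescale(past_ranges, R):
    """Generalized timescale tau_R of Eq (5).

    past_ranges : increasing past ranges T_0, T_1, ..., T_n (in seconds),
                  with T_0 the starting point (10 ms in this paper).
    R           : history dependence R(T_i) evaluated at the same past ranges.
    """
    T = np.asarray(past_ranges, dtype=float)
    R = np.asarray(R, dtype=float)
    dR = np.diff(R)                       # gains Delta R(T_i), i = 1, ..., n
    T_mid = 0.5 * (T[1:] + T[:-1])        # midpoints (T_i + T_{i-1}) / 2
    return np.sum(T_mid * dR) / np.sum(dR) - T[0]
```

For an exponentially saturating R(T) = Rtot(1 − exp(−(T − T0)/τ)) and sufficiently small steps, this returns τ, in line with the analogy to the autocorrelation time.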

Binary past embedding of spiking activity

In practice, estimating history dependence R from spike recordings is extremely challenging. In fact, if data is limited, a reliable estimation of history dependence is only possible for a reduced representation of past spiking, also called past embedding [35]. Here, we outline how we embed past spiking activity to estimate history dependence from neural spike recordings.

First, we choose a past range T, which defines the time span of the past embedding. For each point in time t, we partition the immediate past window [t − T, t) into d bins and count the number of spikes in each bin. The number of bins d sets the temporal resolution of the embedding. In addition, we let bin sizes scale exponentially with the bin index j = 1, …, d as τj = τ1 10^((j−1)κ) (Fig 2A). A scaling exponent of κ = 0 translates into equal bin sizes, whereas for κ > 0 bin sizes increase. For fixed d, this makes it possible to obtain a higher temporal resolution on recent past spikes by decreasing the resolution on distant past spikes.
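One possible implementation of this binning scheme is sketched below; here we assume that τ1 is fixed by requiring the d bins to exactly tile the past range T.

```python
import numpy as np

def exponential_bin_sizes(T, d, kappa):
    """Bin sizes tau_j = tau_1 * 10**((j - 1) * kappa) for j = 1, ..., d,
    with tau_1 chosen such that the d bins exactly tile the past range T."""
    rel = 10.0 ** (np.arange(d) * kappa)   # relative bin sizes, first bin = 1
    return T * rel / rel.sum()

print(exponential_bin_sizes(T=1.0, d=5, kappa=0.0))  # five equal 200 ms bins
print(exponential_bin_sizes(T=1.0, d=5, kappa=0.4))  # bins grow towards the distant past
```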

Fig 2. Illustration of embedding optimization to estimate history dependence and the information timescale.


(A) History dependence R is estimated from the observed joint statistics of current spiking in a small time bin [t, t + Δt) (dark grey) and the embedded past, i.e. a binary sequence representing past spiking in a past window [t − T, t). We systematically vary the number of bins d and bin sizes for fixed past range T. Bin sizes scale exponentially with bin index and a scaling exponent κ to reduce resolution for spikes farther into the past. (B) The joint statistics of current and past spiking are obtained by shifting the past range in steps of Δt and counting the resulting binary sequences. (C) Finding a good choice of embedding parameters (e.g. embedding dimension d) is challenging: When d is chosen too small, the true history dependence R(T) (dashed line) is not captured appropriately (insufficient embedding) and underestimated by estimates R^(T,d) (blue solid line). When d is chosen too high, estimates R^(T,d) are severely biased and R(T, d), as well as R(T), are overestimated (biased regime). Past-embedding optimization finds the optimal embedding parameter d* that maximizes the estimated history dependence R^(T,d) subject to regularization. This yields a best estimate R^(T) of R(T) (blue diamond). (D) Estimation of history dependence R(T) as a function of past range T. For each past range T, embedding parameters d and κ are optimized to yield an embedding-optimized estimate R^(T). From estimates R^(T), we obtain estimates τ^R and R^tot of the information timescale τR and total history dependence Rtot (vertical and horizontal dashed lines). To compute R^tot we average estimates R^(T) in an interval [TD, Tmax], for which estimates R^(T) reach a plateau (vertical blue bars, see Materials and methods). For high past ranges T, estimates R^(T) may decrease because a reliable estimation requires past embeddings with reduced temporal resolution.

The past window [t − T, t) of the embedding is slid forward in steps of Δt through the whole recording of length Trec, starting at t = T. This gives rise to N = (Trec − T)/Δt measurements of current spiking in [t, t + Δt), and of the number of spikes in each of the d past bins (Fig 2B). We chose to use only binary sequences of spike counts to estimate history dependence. To that end, a count of 1 was chosen for a spike count larger than the median spike count over the N measurements in the respective past bin. A binary representation drastically reduces the number of possible past sequences for a given number of bins d, thus enabling an estimation of history dependence even from short recordings.
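The sketch below illustrates how such binary past sequences could be extracted from a list of spike times. It is a simplified illustration, not the toolbox implementation; for clarity it loops over time steps, which is slow for long recordings.

```python
import numpy as np

def embed_past(spike_times, t_rec, T, d, kappa, dt=0.005):
    """Binary past embedding: for each time step, count spikes in d
    exponentially stretched past bins and binarize against the median count."""
    spike_times = np.asarray(spike_times)
    rel = 10.0 ** (np.arange(d) * kappa)                     # relative bin sizes
    edges = np.concatenate(([0.0], np.cumsum(T * rel / rel.sum())))
    t_eval = np.arange(T, t_rec - dt, dt)                    # window end points t = T, T+dt, ...
    counts = np.zeros((len(t_eval), d), dtype=int)
    current = np.zeros(len(t_eval), dtype=int)
    for k, t in enumerate(t_eval):
        for j in range(d):                                    # j = 0 is the most recent past bin
            lo, hi = t - edges[j + 1], t - edges[j]
            counts[k, j] = np.count_nonzero((spike_times >= lo) & (spike_times < hi))
        current[k] = np.any((spike_times >= t) & (spike_times < t + dt))
    past = (counts > np.median(counts, axis=0)).astype(int)   # binarize per past bin
    return past, current
```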

Estimation of history dependence with binary past embeddings

To estimate history dependence R, one has to estimate the probability of a spike occurring together with different past sequences. The probabilities πi of these different joint events i can be directly inferred from the frequencies ni with which the events occurred during the recording. Without any additional assumptions, the simplest way to estimate the probabilities is to compute the relative frequencies π^i=ni/N, where N is the total number of observed joint events. This estimate is the maximum likelihood (ML) estimate of joint probabilities πi for a multinomial likelihood, and the corresponding estimate of history dependence will also be denoted by ML. This direct estimate of history dependence is known to be strongly biased when data is too limited [28, 30]. The bias is typically positive, because, under limited data, probabilities of observed joint events are given too much weight. Therefore, statistical dependencies are overestimated. Even worse, the overestimation becomes more severe the higher the number of possible past sequences K. Since K increases exponentially with the dimension of the past embedding d, i.e. K = 2d for binary spike sequences, history dependence is severely overestimated for high d (Fig 2C). The potential overestimation makes it hard to choose embeddings that represent past spiking sufficiently well. In the following, we outline how one can optimally choose embeddings if appropriate regularization is applied.
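A plug-in (maximum likelihood) estimate of R from such binary sequences could look as follows; as discussed above, this estimate is positively biased when the number of past bins d is large relative to the amount of data.

```python
import numpy as np
from collections import Counter

def plugin_R(past, current):
    """Plug-in (ML) estimate of history dependence from binary past sequences
    (rows of `past`) and the binary current spiking `current`."""
    N = len(current)

    def plugin_entropy(counts):
        p = np.array(list(counts.values()), dtype=float) / N
        return -np.sum(p * np.log2(p))

    keys = [tuple(row) for row in past]
    H_joint = plugin_entropy(Counter(zip(keys, current)))
    H_past = plugin_entropy(Counter(keys))
    H_spiking = plugin_entropy(Counter(current))
    I = H_spiking + H_past - H_joint          # plug-in mutual information, Eq (1)
    return I / H_spiking                       # history dependence, Eq (3)
```

Applied to the output of the embedding sketch above, `plugin_R(past, current)` yields the ML estimate of R(T, d, κ) that is regularized and optimized in the following.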

Estimating history dependence with past-embedding optimization

Due to systematic overestimation, high-dimensional past embeddings are prohibitive for a reliable estimation of history dependence from limited data. Yet, high-dimensional past embeddings might be required to capture all history dependence. The reason is that history dependence may reside in precise spike times, but also may extend far into the past.

To illustrate this trade-off, we consider a discrete past embedding of spiking activity in a past range T, where the past spikes are assigned to d equally large bins (κ = 0). We would like to obtain an estimate R^(T) of the maximum possible history dependence R(T) for the given past range T, with R(T) ≡ R(T, d → ∞) (Fig 2C). The number of bins d can go to infinity only in theory, though. In practice, we have estimates R^(T,d) of the history dependence R(T, d) for finite d. On the one hand, one would like to choose a high number of bins d, such that R(T, d) approximates R(T) well for the given past range T. Too few bins d otherwise reduce the temporal resolution, such that R(T, d) is substantially less than R(T) (Fig 2C). On the other hand, one would like to choose d not too large in order to enable a reliable estimation from limited data. If d is too high, estimates R^(T,d) strongly overestimate the true history dependence R(T, d) (Fig 2C).

Therefore, if the past embedding is not chosen carefully, history dependence is either overestimated due to strong estimation bias, or underestimated because the chosen past embedding was too simple.

Here, we thus propose the following past-embedding optimization approach: For a given past range T, select embedding parameters d*, κ* that maximize the estimated history dependence R^(T,d,κ), while overestimation is avoided by an appropriate regularization. This yields an embedding-optimized estimate R^(T)=R^(T,d*,κ*) of the true history dependence R(T). In terms of the above example, past-embedding optimization selects the optimal embedding dimension d*, which provides the best lower bound R^(T)=R^(T,d*) to R(T) (Fig 2C).
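Schematically, the optimization for a single past range T is a straightforward maximization over candidate embeddings. In the sketch below, `regularized_estimate` is a hypothetical placeholder for a regularized estimator of R(T, d, κ); the two regularization methods we use are described next.

```python
def optimize_embedding(spike_times, T, d_candidates, kappa_candidates,
                       regularized_estimate):
    """Past-embedding optimization: return the embedding-optimized estimate
    R_hat(T) together with the selected embedding parameters d*, kappa*."""
    best_R, best_d, best_kappa = 0.0, None, None
    for d in d_candidates:
        for kappa in kappa_candidates:
            R_hat = regularized_estimate(spike_times, T, d, kappa)
            if R_hat > best_R:
                best_R, best_d, best_kappa = R_hat, d, kappa
    return best_R, best_d, best_kappa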

Since we can anyway provide only a lower bound, regularization only has to ensure that estimates R^(T,d,κ) are either unbiased, or a lower bound to the observable history dependence R(T, d, κ). For that purpose, in this paper we introduce a Bayesian bias criterion (BBC) that selects only unbiased estimates. In addition, we use an established bias correction, the so-called Shuffling estimator [31], which, within leading order of the sample size, is guaranteed to provide a lower bound to the observable history dependence (see Materials and methods for details).

Together with these regularization methods, the embedding optimization approach enables complex embeddings of past activity while minimizing the risk of overestimation. See Materials and methods for details on how we used embedding-optimized estimates R^(T) to compute estimates R^tot and τ^R of the total history dependence and information timescale (Fig 2D, blue dashed lines).

Results

In the first part, we demonstrate the differences between history dependence and time-lagged measures of temporal dependence for several models of neural spiking activity. We then benchmark the estimation of history dependence using embedding optimization on a tractable neuron model with long-lasting spike adaptation. Moreover, we compare the embedding optimization approach to existing estimation methods on a variety of extra-cellular spike recordings. In the last part, we apply the approach to analyze history dependence for a variety of neural systems, and compare the results to the autocorrelation and other statistical measures on the data.

Differences between history dependence and time-lagged measures of temporal dependence

The history dependence R(T) quantifies how predictable neural spiking is, given activity in a certain past range T. In contrast, time-lagged measures of temporal dependence like the autocorrelation C(T) [40] or lagged mutual information L(T) [41, 42] quantify the dependence of spiking on activity in a single past bin with time lag T (Fig 1A and 1C; Materials and methods). In the following, we showcase the main differences between the two approaches.
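For reference, the autocorrelation of a binned spike train at a set of time lags can be computed with a standard plug-in sketch like the following (function name ours):

```python
import numpy as np

def autocorrelation(binned_spikes, max_lag_bins):
    """Pearson autocorrelation C(T) of a binned (0/1) spike train for
    time lags of 1, ..., max_lag_bins bins."""
    x = np.asarray(binned_spikes, dtype=float)
    x = x - x.mean()
    var = np.mean(x * x)
    return np.array([np.mean(x[:-k] * x[k:]) / var
                     for k in range(1, max_lag_bins + 1)])
```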

History dependence disentangles the effects of input activation, reactivation and temporal depth of a binary autoregressive process

To show the behavior of the measures in a well controlled setup, we analyzed a simple binary autoregressive process with varying temporal depth l (Fig 3A). The process evolves in discrete time steps, and has an active (1) or inactive (0) state. Active states are evoked either by external input with probability h, or by internal reactivations that are triggered by activity within the past l steps. Each past activation increases the reactivation probability by m, which regulates the strength of history dependence in the process. In the following, we describe how the measures behave as we vary each of the different model parameters, and then summarize the key difference between the measures.
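A minimal simulation of this process, under our reading of the model (the activation probability in each step is h plus m times the number of active states in the preceding l steps, capped at one), could look like this:

```python
import numpy as np

def simulate_binary_ar(n_steps, h, m, l, seed=0):
    """Binary autoregressive process with input probability h, reactivation
    probability m per past activation, and temporal depth l."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps, dtype=int)
    for t in range(n_steps):
        p = h + m * x[max(0, t - l):t].sum()   # external input + internal reactivation
        x[t] = rng.random() < min(p, 1.0)
    return x

x = simulate_binary_ar(100_000, h=0.01, m=0.8, l=1)
```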

Fig 3. History dependence disentangles the effects of input activation, reactivation and temporal depth of a binary autoregressive process.


(A) In the binary autoregressive process, the state of the next time step (grey box) is active (one) either because of an input activation with probability h, or because of an internal reactivation. The internal activation is triggered by activity in the past l time steps (green), where each active state increases the activation probability by m. (B) Increasing the input activation probability h increases the total mutual information Itot, although input activations are random and therefore not predictable. Normalizing the total mutual information by the entropy yields the total history dependence Rtot, which decreases mildly with h. (C) Autocorrelation C(T), lagged mutual information L(T) and gain in history dependence ΔR(T) decay differently with the time lag T. For l = 1 and m = 0.8 (top), autocorrelation C(T) decays exponentially with autocorrelation time τC, whereas L(T) decays faster due to the non-linearity of the mutual information. For l = 5 (bottom), C(T) and L(T) plateau over the temporal depth, and then decay much slower than for l = 1. In contrast, ΔR(T) is non-zero only for T shorter or equal to the temporal depth of the process, with much shorter timescale τR. Parameters m and h were adapted to match the firing rate and total history dependence between l = 1 and l = 5. (D) When increasing the reactivation probability m for l = 1, timescales of time-lagged measures τC and τL increase. For history dependence, the information timescale τR remains constant, but the total history dependence Rtot increases. (E) When varying the temporal depth l, all timescales increased. Parameters h and m were adapted to hold the firing rate and Rtot constant.

The input strength h increases the firing rate and thus the spiking entropy H(spiking). This leads to a strong increase in the total mutual information Itot ≡ limT→∞ I(spiking; past(T)), whereas the total history dependence Rtot is normalized by the entropy and slightly decreases (Fig 3B). This slight decrease is expected from a sensible measure of history dependence, because the input is random and has no temporal dependence. In addition, input activations may fall together with internal activations, which slightly reduces the total history dependence.

In contrast, the total history dependence Rtot increases with the reactivation probability m, as expected (Fig 3D). For the autocorrelation, the reactivation probability m not only influences the magnitude of the correlation coefficients, but also the decay of the coefficients. For autoregressive processes (and l = 1), autocorrelation coefficients C(T) decay exponentially [14] (Fig 3C), where the autocorrelation time τC = −Δt/log(m) increases with m and diverges as m → 1 (Fig 3D). The lagged mutual information L(T) is a non-linear measure of time-lagged dependence, and has a very similar behavior as the autocorrelation, with a slightly faster decay and thus smaller generalized timescale τL (Fig 3C and 3D). Note that we normalized L(T) by the spiking entropy H to make it directly comparable to ΔR(T). In contrast to the time-lagged measures, the gain in history dependence ΔR(T) is only non-zero for T smaller or equal to the true temporal depth l of the process (Fig 3C). As a consequence, the information timescale τR does not increase with m for fixed l (Fig 3D).

Finally, the temporal depth l controls how far into the past activations depend on their preceding activity. Indeed, we find that the information timescale τR increases with l as expected (Fig 3C and 3E). Similarly, the timescales of the time-lagged measures τC and τL increase with the temporal depth l. Note that parameters m and h were adapted for each l to keep the firing rate and total history dependence Rtot constant, hence differences in the timescale can be unambiguously attributed to the increase in l.

To conclude, history dependence disentangles the effects of input activation, reactivation and temporal depth, which provides a comprehensive characterization of past dependencies in the autoregressive model. This is different from the total mutual information, which lacks the entropy normalization and is sensitive to the firing rate. This is also different from time-lagged measures, whose timescales are sensitive to both the reactivation probability m and the temporal depth l. This mixing of effects in the timescales is rooted in the time-lagged nature of the measures—by quantifying past dependencies out of context, C(T) and L(T) also capture indirect, redundant dependencies on past events. Indirect, redundant dependencies arise from unique dependencies, because past states that are uniquely predictive of future activity were in turn uniquely dependent on their own past. The stronger the unique dependence, the longer the indirect dependencies reach into the past, which increases the timescale of time-lagged measures. In contrast, indirect dependencies do not contribute to the history dependence, because they add no predictive information once more-recent past is taken into account.

History dependence dismisses redundant past dependencies and captures synergistic effects

A key property of history dependence is that it evaluates past dependencies in the light of more-recent past. This allows the measure to dismiss indirect, redundant past dependencies and to capture synergistic effects. In three common models of neural spiking activity, we demonstrate how this leads to a substantially different characterization of past dependencies compared to time-lagged measures of temporal dependence.

First, we simulated a subsampled branching process [14], which is a minimal model for activity propagation in neural networks and captures key properties of spiking dynamics in cortex [15]. Similar to the binary autoregressive process, active neurons activate neurons in the next time step with probability m, the so-called branching parameter, and are activated externally with some probability h. The process was simulated in time steps of Δt = 4 ms with a population activity of 500 Hz, which was subsampled to obtain a single spike train with a firing rate of 5 Hz (Fig 4A). Similar to the binary autoregressive process, the autocorrelation decays exponentially with autocorrelation time τC = −Δt/log(m) = 198 ms, and the lagged mutual information decays slightly faster (Fig 4B). In comparison, the gain in history dependence ΔR decays much faster. When increasing the branching parameter m (for fixed firing rate), the total history dependence increased, as in the autoregressive process (S11 Fig). Strikingly, the timescale τR remained constant or even decreased for larger m > 0.967 and thus higher autocorrelation time τC > 120 ms (S11 Fig), which is different from the binary autoregressive process. The reason is that the branching process evolves at the population level, whereas history dependence is quantified at the single neuron level. Thereby, history dependence also captures indirect dependencies, because the neuron’s own spiking history reflects the population activity. The higher the branching parameter m, the more informative past spikes are about the population activity, and the shorter the timescale τR over which all the relevant information about the population activity can be collected. Thus, for the branching process, the total history dependence Rtot captures the influence of the branching parameter, whereas the information timescale τR behaves very differently from the timescales of time-lagged measures.
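A sketch of such a subsampled branching process is given below, under the assumptions of Poisson offspring and drive and binomial subsampling of the population activity to a single unit (the exact simulation procedure is given in Materials and methods):

```python
import numpy as np

def subsampled_branching(n_steps, m=0.98, pop_rate_hz=500.0,
                         unit_rate_hz=5.0, dt=0.004, seed=1):
    """Branching process of the population activity, thinned to one spike train."""
    rng = np.random.default_rng(seed)
    h = pop_rate_hz * dt * (1.0 - m)        # drive that keeps the population stationary
    A = np.zeros(n_steps, dtype=int)        # population activity per time step
    A[0] = rng.poisson(pop_rate_hz * dt)
    for t in range(1, n_steps):
        A[t] = rng.poisson(m * A[t - 1] + h)
    p_sub = unit_rate_hz / pop_rate_hz      # probability that a population event is recorded
    return (rng.binomial(A, p_sub) > 0).astype(int)

spikes = subsampled_branching(200_000)      # roughly 13 minutes at 4 ms resolution
```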

Fig 4. History dependence dismisses redundant past dependencies and captures synergistic effects.


(A,B) Analysis of a subsampled branching process. (A) The population activity was simulated as a branching process (m = 0.98) and subsampled to yield the spike train of a single neuron (Materials and methods). (B) Autocorrelation C(T) and lagged mutual information L(T) include redundant dependencies and decay much slower than the gain ΔR(T), with much longer timescales (vertical dashed lines). (C,D) Analysis of an Izhikevich neuron in chattering mode with constant input and small voltage fluctuations. The neuron fires in regular bursts of activity. (D) Time-lagged measures C(T) and L(T) measure both intra- (T < 10 ms) and inter-burst (T > 10 ms) dependencies, which decay very slowly due to the regularity of the firing. The gain ΔR(T) reflects that most spiking can already be predicted from intra-burst dependencies, whereas inter-burst dependencies are highly redundant. In this case, only ΔR(T) yields a sensible timescale (blue dashed line). (E,F) Analysis of a generalized leaky integrate-and-fire neuron with long-lasting adaptation filter ξ [3, 43] and constant input. (F) Here, ΔR(T) decays to zero more slowly than the autocorrelation C(T), and is higher than L(T) for long T. Therefore, the dependence on past spikes is stronger when taking more-recent past spikes into account (ΔR(T)) than when considering them independently (L(T)). Due to these synergistic past dependencies, ΔR(T) is the only measure that captures the long-range nature of the spike adaptation.

Second, we demonstrate the difference of history dependence to time-lagged measures on an Izhikevich neuron, which is a flexible model that can produce different neural firing patterns similar to those observed for real neurons [44]. Here, parameters were chosen according to the “chattering mode” [44], with constant input and small voltage fluctuations (Materials and methods). The neuron fires in regular bursts of activity, with consistent timing between spikes within and between bursts (Fig 4C). While time-lagged measures capture all the regularities in spiking and oscillate with the bursts of activity, history dependence correctly captures that spiking can be almost entirely predicted from intra-burst dependencies alone (Fig 4D). History dependence dismisses the redundant inter-burst dependencies and thereby yields a sensible measure of a timescale (blue dashed line).

Finally, we analyzed a generalized leaky integrate-and-fire neuron with long-range spike adaptation (22 seconds) (Fig 4E), which reproduces spike-frequency adaptation as observed for somatosensory pyramidal neurons [3, 43]. For this model, time-lagged measures C(T) and L(T) actually decay to zero much faster than the gain in history dependence ΔR(T), which is the only measure that captures the long-range adaptation effects of the model (Fig 4F). This shows that past dependencies in this model include synergistic effects, where the dependence is stronger in the context of more-recent spikes. This is most likely due to the non-linearity of the model, where past spikes cause a different adaptation when taken together than when considered as the sum of their individual contributions.

Thus, due to its ability to dismiss redundant past dependencies and to capture synergistic effects, history dependence really provides a complementary characterization of past dependencies compared to time-lagged measures. Importantly, because the approach better disentangles the effects of timescale and total history dependence, the results remain interpretable for very different models, and provide a more comprehensive view on past dependencies.

Embedding optimization captures history dependence for a neuron model with long-lasting spike adaptation

On a benchmark spiking neuron model, we first demonstrate that without optimization and proper regularization, past embeddings are likely to capture much less history dependence, or lead to estimates that severely overestimate the true history dependence. Readers who are familiar with the bias problem of mutual information estimation might want to jump to the next part, where we validate that embedding-optimized estimates indeed capture the model’s true history dependence, while being robust to systematic overestimation. As a model we chose a generalized leaky integrate-and-fire (GLIF) model with spike frequency adaptation, whose parameters were fitted to experimental data [3, 43]. The model was chosen because it is equipped with a long-lasting spike adaptation mechanism, and its total history dependence Rtot can be directly computed from sufficiently long simulations (Materials and methods). For demonstration, we show results on a variant of the model where adaptation reaches one second into the past, and report results for the original model with a 22-second kernel in S1, S2 and S5 Figs. For simulation, the neuron was driven with a constant input current to achieve an average firing rate of 4 Hz. In the following, estimates R^(T) are shown for a simulated recording of 90 minutes, whereas the true values R(T) were computed on a 900 minute recording (Materials and methods).

Without regularization, history dependence is severely overestimated for high-dimensional embeddings

For demonstration, we estimated the history dependence R(τ, d) for varying numbers of bins d and a constant bin size τ = 20 ms (i.e. κ = 0 and T = dτ). We compared estimates R^(τ,d) obtained by maximum likelihood (ML) estimation [28], or Bayesian estimation using the NSB estimator [33], with the model’s true R(τ, d) (Fig 5A). Both estimators accurately estimate R(τ, d) for up to d ≈ 20 past bins. As expected, the NSB estimator starts to be biased at higher d than the ML estimator. For embedding dimensions d > 30, both estimators severely overestimate R(τ, d). Note that ± two standard deviations are plotted as shaded areas, but are too small to be visible. Therefore, any deviation of estimates from the model’s true history dependence R(τ, d) can be attributed to positive estimation bias, i.e. a systematic overestimation of the true history dependence due to limited data.

Fig 5. Embedding optimization captures history dependence for a neuron model with long-lasting spike adaptation.


Results are shown for a generalized leaky integrate-and-fire (GLIF) model with long-lasting spike frequency adaptation [3, 43] with a temporal depth of one second (Methods and material). (A) For illustration, history dependence R(τ, d) was estimated on a simulated 90 minute recording for different embedding dimensions d and a fixed bin width τ = 20 ms. Maximum likelihood (ML) [28] and Bayesian (NSB) [33] estimators display the insufficient embedding versus estimation bias trade-off: For small embedding dimensions d, the estimated history dependence is much smaller, but agrees well with the true history dependence R(τ, d) for the given embedding. For larger d, the estimated history dependence R^(τ,d) increases, but when d is too high (d > 20), it severely overestimates the true R(τ, d). The Bayesian bias criterion (BBC) selects NSB estimates R^(τ,d) for which the difference between ML and NSB estimate is small (red solid line). All selected estimates are unbiased and agree well with the true R(τ, d) (grey line). Embedding optimization selects the highest, yet unbiased estimate (red diamond). (B) The Shuffling estimator (blue solid line) subtracts estimation bias on surrogate data (yellow dashed line) from the ML estimator (blue dashed line). Since the surrogate bias is higher than the systematic overestimation in the ML estimator (difference between grey and blue dashed lines), the Shuffling estimator is a lower bound to R(τ, d). Embedding optimization selects the highest estimate, which is still a lower bound (blue diamond). For A and B, shaded areas indicate ± two standard deviations obtained from 50 repeated simulations, which are very small and thus hardly visible. (C) Embedding-optimized BBC estimates R^(T) (red line) yield accurate estimates of the model neuron’s true history dependence R(T), total history dependence Rtot and information timescale τR (horizontal and vertical dashed lines). The zoom-in (right panel) shows robustness of both regularization methods: For all T the model neuron’s R(T, d*, κ*) lies within errorbars (BBC), or consistently above the Shuffling estimator that provides a lower bound. Here, the model’s R(T, d*, κ*) was computed for the optimized embedding parameters d*, κ* that were selected via BBC or Shuffling, respectively (dashed lines). Shaded areas indicate ± two standard deviations obtained by bootstrapping, and colored vertical bars indicate past ranges over which estimates R^(T) were averaged to compute R^tot (Materials and methods).

The aim is now to identify the largest embedding dimension d* for which the estimate of R(τ, d) is not yet biased. A biased estimate is expected as soon as the two estimates ML and NSB start to differ significantly from each other (Fig 5A, red diamond), which is formalized by the Bayesian bias criterion (BBC) (Materials and methods). According to the BBC, all NSB estimates R^(τ,d) with d lower or equal to d* are unbiased (solid red line). We find that indeed all BBC estimates agree well with the true R(τ, d) (grey line), but d* yields the largest unbiased estimate.
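Schematically, the criterion compares the two estimates as in the sketch below; this is a simplified reading, and the precise form of the criterion and the tolerance are defined in Materials and methods.

```python
def bbc_passed(R_nsb, R_ml, tolerance=0.05):
    """Bayesian bias criterion (schematic): accept the NSB estimate only if
    the ML and NSB estimates of R differ by a small relative amount."""
    return abs(R_ml - R_nsb) < tolerance * R_nsb
```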

The problem of estimation bias has also been addressed previously by the so-called Shuffling estimator [31]. The Shuffling estimator is based on the ML estimator and applies a bias correction term (Fig 5B). In detail, one approximates the estimation bias using surrogate data, which are obtained by shuffling the embedded spiking history. The surrogate estimation bias (yellow dashed line) is proven to be larger than the actual estimation bias (difference between grey solid and blue dashed line). Therefore, Shuffling estimates R^(τ,d) provide lower bounds to the true history dependence R(τ, d). As with the BBC, one can safely maximize Shuffling estimates R^(τ,d) over d to find the embedding dimension d* that provides the largest lower bound to the model’s total history dependence Rtot (Fig 5B, blue diamond).

Thus, using a model neuron, we illustrated that history dependence can be severely overestimated if the embedding is chosen too complex. Only when overestimation is tamed by one of the two regularization methods, BBC or Shuffling, can embedding parameters be safely optimized to yield better estimates of history dependence.

Optimized embeddings capture the model’s true history dependence

In the previous part, we demonstrated how embedding parameters are optimized for the example of fixed κ and τ. Now, we optimize all embedding parameters for fixed past range T to obtain embedding-optimized estimates R^(T) of R(T). We find that embedding-optimized BBC estimates R^(T) agree well with the true R(T), hence the model’s total history dependence Rtot and information timescale τR are well estimated (Fig 5C, vertical and horizontal dashed lines). In contrast, the Shuffling estimator underestimates the true R(T) for past ranges T > 200 ms, hence the model’s Rtot and τR are underestimated (blue dashed lines). For large past ranges T > 1000 ms, estimates R^(T) of both estimators decrease again, because no additional history dependence is uncovered, whereas the constraint of an unbiased estimation decreases the temporal resolution of the embedding.

Embedding-optimized estimates are robust to overestimation despite maximization over complex embeddings

In the previous part, we investigated how much of the true history dependence for different past ranges T (grey solid line) we miss by embedding the spiking history. An additional source of error is the estimation of history dependence from limited data. In particular, estimates are prone to overestimate history dependence systematically (Fig 5A and 5B).

To test explicitly for overestimation, we computed the true history dependence R(T, d*, κ*) for exactly the same sets of embedding parameters T, d*, κ* that were found during embedding optimization with BBC (grey dash-dotted line), and the Shuffling estimator (grey dotted line, Fig 5C, zoom-in). We expect that BBC estimates are unbiased, i.e. the true history dependence should lie within errorbars of the BBC estimates (red shaded area) for a given T. In contrast, Shuffling estimates are a lower bound, i.e. estimates should lie below the true history dependence (given the same T, d*, κ*). We find that this is indeed the case for all T. Note that this is a strong result, because it requires that the regularization methods work reliably for every single set of embedding parameters used for optimization—otherwise, parameters that cause overestimation would be selected.

Thus, we can confirm that the embedding-optimized estimates do not systematically overestimate the model neuron’s history dependence, and are on average lower bounds to the true history dependence. This is important for the interpretation of the results.

Mild overfitting can occur during embedding optimization on short recordings, but can be overcome with cross-validation

We also tested whether the recording length affects the reliability of embedding-optimized estimates, and found very mild overestimation (1–3%) of history dependence for BBC for recordings as short as 3 minutes (S1 and S4 Figs). The overestimation is a consequence of overfitting during embedding optimization: Variance in the estimates increases for shorter recordings, hence maximizing over estimates selects embedding parameters that have high history dependence by chance. Therefore, the overestimation can be overcome by cross-validation, e.g. by optimizing embedding parameters on the first half, and computing estimates on the second half of the data (S1 Fig). In contrast, we found that for the model neuron, Shuffling estimates do not overestimate the true history dependence even for recordings as short as 3 minutes (S1 Fig). This is because the effect of overfitting was small compared to the systematic underestimation of Shuffling estimates. Here, all experimental recordings where we apply BBC are long enough (≈ 90 minutes), thus no cross-validation was applied on the experimental data.

Estimates of the information timescale are sensitive to the recording length

Finally, we also tested the impact of the recording length on estimates R^tot of the total history dependence as well as estimates τ^R of the information timescale. While on recordings of 3 minutes embedding optimization still estimated ≈ 95% of the true Rtot, estimates τ^R were only ≈ 75% of the true τR (S2 Fig). Thus, estimates of the information timescale τR are more sensitive to the recording length, because they depend on the small additional contributions to R(T) for high past ranges T, which are hard to estimate for short recordings. Therefore, we advise analyzing recordings of similar length to make results on τR comparable across experiments. In the following, we explicitly shorten some recordings such that all recordings have approximately the same recording length.

In conclusion, embedding optimization accurately estimated the model neuron’s true history dependence. Moreover, for all past ranges, embedding-optimized estimates were robust to systematic overestimation. Embedding optimization is thus a promising approach to quantify history dependence and the information timescale in experimental spike recordings.

Embedding optimization is key to estimate long-lasting history dependence in extra-cellular spike recordings

Here, we apply embedding optimization to long spike recordings (≈ 90 minutes) from rat dorsal hippocampus layer CA1 [45, 46], salamander retina [47, 48] and in vitro recordings of rat cortical culture [49]. In particular, we compare embedding optimization to other popular estimation approaches, and demonstrate that an exponential past embedding is necessary to estimate history dependence for long past ranges.

Embedding optimization reveals history dependence that is not captured by a generalized linear model or a single past bin

We use embedding optimization to estimate history dependence R(T) as a function of the past range T (see Fig 6B for an example single unit from hippocampus layer CA1, and S6, S7 and S8 Figs for all analyzed sorted units). In this example, BBC and Shuffling with a maximum of dmax = 20 past bins led to very similar estimates for all T. Notably, embedding optimization with both regularization methods estimated high total history dependence of almost Rtot ≈ 40% with a temporal depth of almost a second, and an information timescale of τR ≈ 100 ms (Fig 6B). This indicates that embedding-optimized estimates capture a substantial part of history dependence also in experimental spike recordings.

Fig 6. Embedding optimization is key to estimate long-lasting history dependence in extra-cellular spike recordings.


(A) Example of recorded spiking activity in rat dorsal hippocampus layer CA1. (B) Estimates of history dependence R(T) for various estimators, as well as estimates of the total history dependence Rtot and information timescale τR (dashed lines) (example single unit from CA1). Embedding optimization with BBC (red) and Shuffling (blue) for dmax = 20 yields consistent estimates. Embedding-optimized Shuffling estimates with a maximum of dmax = 5 past bins (green) are very similar to estimates obtained with dmax = 20 (blue). In contrast, using a single past bin (dmax = 1, yellow), or fitting a GLM for the temporal depth found with BBC (violet dot), estimates much lower total history dependence. Shaded areas indicate ± two standard deviations obtained by bootstrapping, and small vertical bars indicate past ranges over which estimates of R(T) were averaged to estimate Rtot (Materials and methods). (C) An exponential past embedding is crucial to capture history dependence for high past ranges T. For T > 100 ms, uniform embeddings strongly underestimate history dependence. Shown is the median of embedding-optimized estimates of R(T) with uniform embeddings, relative to estimates obtained by optimizing exponential embeddings, for BBC with dmax = 20 (red) and Shuffling with dmax = 20 (blue) and dmax = 5 (green). Shaded areas show 95% percentiles. Median and percentiles were computed over analyzed sorted units in CA1 (n = 28). (D) Comparison of total history dependence Rtot for different estimation and embedding techniques for three different experimental recordings. For each sorted unit (grey dots), estimates are plotted relative to embedding-optimized estimates for BBC and dmax = 20. Embedding optimization with Shuffling and dmax = 20 yields consistent but slightly higher estimates than BBC. Strikingly, Shuffling estimates for as little as dmax = 5 past bins (green) capture more than 95% of the estimates for dmax = 20 (BBC). In contrast, estimates obtained by optimizing a single past bin, or fitting a GLM, are considerably lower. Bars indicate the median and lines indicate 95% bootstrap confidence intervals on the median over analyzed sorted units (CA1: n = 28; retina: n = 111; culture: n = 48).

Importantly, other common estimation approaches fail to capture the same amount of history dependence (Fig 6B and 6D). To compare how well the different estimation approaches could capture the total history dependence, we plotted for each sorted unit the different estimates of Rtot relative to the corresponding BBC estimate (Fig 6D). Embedding optimization with Shuffling yields estimates that agree well with BBC estimates. The Shuffling estimator even yields slightly higher values on the experimental data. Interestingly, embedding optimization with the Shuffling estimator and as little as dmax = 5 past bins captures almost the same history dependence as BBC with dmax = 20, with a median above 95% for all neural systems. In contrast, we find that a single past bin only accounts for 70% to 80% of the total history dependence. A GLM bears little additional advantage with a slightly higher median of ≈ 85%. To save computation time, GLM estimates were only computed for the temporal depth that was estimated using BBC (Fig 6B, violet square). The remaining embedding parameters d and κ of the GLM’s history kernel were separately optimized using the Bayesian information criterion (Materials and methods). Since parameters were optimized, we argue that the GLM underestimates history dependence because of its specific model assumptions, i.e. no interactions between past spikes. Moreover, we found that the GLM performs worse than embedding optimization with only five past bins. Therefore, we conclude that for typical experimental spike trains, interactions between past spikes are important, but do not require very high temporal resolution. In the remainder of this paper we use the reduced approach (Shuffling dmax = 5) to compare history dependence among different neural systems.
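For orientation, one simple way to obtain a GLM-based estimate of R is to fit a Bernoulli GLM (logistic regression) of current spiking on the past-bin spike counts and compare its conditional entropy to the spiking entropy. The sketch below uses scikit-learn for this purpose; it is not the exact GLM fitting and evaluation procedure of this paper, which uses the exponentially stretched history kernel and the Bayesian information criterion described in Materials and methods.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def glm_history_dependence(past_counts, current):
    """GLM-based estimate of R: fit a Bernoulli GLM of current spiking on the
    spike counts in the past bins (past_counts, shape (N, d)) and compare the
    resulting conditional entropy to H(spiking)."""
    y = np.asarray(current, dtype=int)
    glm = LogisticRegression(C=1e6, max_iter=1000).fit(past_counts, y)
    p = np.clip(glm.predict_proba(past_counts)[:, 1], 1e-12, 1 - 1e-12)
    H_cond = -np.mean(y * np.log2(p) + (1 - y) * np.log2(1 - p))
    q = y.mean()
    H_spiking = -q * np.log2(q) - (1 - q) * np.log2(1 - q)
    return 1.0 - H_cond / H_spiking           # Eq (3)
```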

Increasing bin sizes exponentially is crucial to estimate long-lasting history dependence

To demonstrate this, we plotted embedding-optimized estimates of R(T) obtained with a uniform embedding, i.e. equal bin sizes, relative to estimates obtained with an exponential embedding (Fig 6C), both for BBC with dmax = 20 (red) and Shuffling with dmax = 20 (blue) or dmax = 5 (green). For past ranges T > 100 ms, estimates using a uniform embedding miss considerable history dependence, which becomes more severe the longer the past range. In the case of dmax = 5, a uniform embedding captures around 80% for T = 1 s, and only around 60% for T = 5 s (median over analyzed sorted units in CA1). Therefore, we argue that an exponential embedding is crucial for estimating long-lasting history dependence.

Together, total history dependence and the information timescale show clear differences between neural systems

In addition to recordings from rat dorsal hippocampus layer CA1, salamander retina and rat cortical culture, we analyzed sorted units in a recording of mouse primary visual cortex using the novel Neuropixels probe [50]. Recordings from primary visual cortex were approximately 40 minutes long. Thus, to make results comparable, we analyzed only the first 40 minutes of all recordings.

We find clear differences between the neural systems, both in terms of the total history dependence, as well as the information timescale (Fig 7A). Sorted units in cortical culture and hippocampus layer CA1 have high total history dependence Rtot, with a median over sorted units of ≈ 24% and ≈ 25%, whereas sorted units in retina and primary visual cortex have typically low Rtot of ≈ 11% and ≈ 8%. In terms of the information timescale τR, sorted units in hippocampus layer CA1 display much higher τR, with a median of ≈ 96 ms, than units in cortical culture, with median τR of ≈ 12 ms. Similarly, sorted units in primary visual cortex have higher τR, with median of ≈ 37 ms, than units in retina, with median of ≈ 23 ms. These differences could reflect differences between early visual processing (retina, primary visual cortex) and high level processing and memory formation in hippocampus, or likewise, between neural networks that are mainly input driven (retina) or exclusively driven by recurrent input (culture). Notably, total history dependence and the information timescale varied independently among neural systems; studying them in isolation would miss differences, whereas considering them jointly makes it possible to distinguish the different systems. Moreover, no clear differentiation between cortical culture, retina and primary visual cortex is possible using the autocorrelation time τC (Fig 7B), with medians τC ≈ 68 ms (culture), τC ≈ 60 ms (retina) and τC ≈ 80 ms (primary visual cortex), respectively.

Fig 7. Together, total history dependence and the information timescale show clear differences between neural systems.

Fig 7

(A) Embedding-optimized Shuffling estimates (dmax = 5) of the total history dependence Rtot are plotted against the information timescale τR for individual sorted units (dots) from four different neural systems (raster plots show spike trains of different sorted units). No clear relationship between the two quantities is visible. The analysis shows systematic differences between the systems: sorted units in rat cortical culture (n = 48) and rat dorsal hippocampus layer CA1 (n = 28) have higher median total history dependence than units in salamander retina (n = 111) and mouse primary visual cortex (n = 142). At the same time, sorted units in cortical culture and retina show smaller timescales than units in primary visual cortex, and much smaller timescales than units in hippocampus layer CA1. Overall, neural systems are clearly distinguishable when jointly considering the total history dependence and information timescale. (B) Total history dependence Rtot versus the autocorrelation time τC shows no clear relation between the two quantities, similar to the information timescale τR. Also, the autocorrelation time gives the same ordering of timescales between retina, primary visual cortex and CA1, whereas the cortical culture has a higher timescale (different order of medians on the x-axis). In general, neural systems are harder to differentiate in terms of the autocorrelation time τC compared to τR. Error bars indicate the median over sorted units and 95% bootstrap confidence intervals on the median.

To better understand how other well-established statistical measures relate to the total history dependence Rtot and the information timescale τR, we show Rtot and τR versus the median interspike interval (ISI), the coefficient of variation CV = σISI/μISI of the ISI distribution, and the autocorrelation time τC in S14 Fig. Estimates of the total history dependence Rtot tend to decrease with the median ISI, and to increase with the coefficient of variation CV. This result is expected for a measure of history dependence, because a shorter median ISI indicates that spikes tend to occur together, and a higher CV indicates a deviation from independent Poisson spiking. In contrast, the information timescale τR tends to increase with the autocorrelation time, as expected, with no clear relation to the median ISI or the coefficient of variation CV. However, the correlation between the measures depends on the neural system. For example in retina (n = 111), Rtot is significantly anti-correlated with the median ISI (Pearson correlation coefficient: r = −0.69, p < 10−5) and strongly correlated with the coefficient of variation CV (r = 0.90, p < 10−5), and τR is significantly correlated with the autocorrelation time τC (r = 0.75, p < 10−5). In contrast, for mouse primary visual cortex (n = 142), we found no significant correlations between any of these measures. Thus, the relation between Rtot or τR and these established measures is not systematic, and none of them can serve as a substitute for the history dependence or its timescale.

In addition to differences between neural systems, we also find strong heterogeneity of history dependence within a single system. Here, we demonstrate this for three different sorted units in primary visual cortex (Fig 8, see S9 Fig for all analyzed sorted units in primary visual cortex). In particular, sorted units display different signatures of history dependence R(T) as a function of the past range T. For some units, history dependence builds up on short past ranges T (e.g. Fig 8A), for some it only shows for higher T (e.g. Fig 8B), and for some it already saturates for very short T (e.g. Fig 8C). A similar behavior is captured by the autocorrelation C(T) (Fig 8, second row). The rapid saturation in Fig 8C indicates history dependence due to bursty firing, which can also be seen from the strong positive correlation with past spikes for short T (Fig 8C, bottom). To exclude the effects of different firing modes or refractoriness on the information timescale, we only considered past ranges T > T0 = 10 ms when estimating τR, or time lags T > T0 = 10 ms when fitting an exponential decay to C(T) to estimate τC. The reason is that differences in the integration of past information are expected to show for larger T. This agrees with the observation that timescales among neural systems were much more similar when we instead set T0 = 0 ms, whereas they showed clear differences for T0 = 10 ms or T0 = 20 ms (S15 Fig).

Fig 8. Distinct signatures of history dependence for different sorted units within mouse primary visual cortex.

Fig 8

(Top) Embedding-optimized estimates of R(T) reveal distinct signatures of history dependence for three different sorted units (A,B,C) within a single neural system (mouse primary visual cortex). In particular, sorted units have similar total history dependence Rtot, but differ vastly in the information timescale τR (horizontal and vertical dashed lines). Note that for unit C, τR is smaller than 5 ms and thus doesn’t appear in the plot. Shaded areas indicate ± two standard deviations obtained by bootstrapping, and vertical bars indicate the interval over which estimates of R(T) were averaged to estimate Rtot (Materials and methods). Estimates were computed with the Shuffling estimator and dmax = 5. (Bottom) Autocorrelograms for the same sorted units (A,B, and C, respectively) roughly show an exponential decay, which was fitted (solid grey line) to estimate the autocorrelation time τC (grey dashed line). Similar to the information timescale τR, only coefficients for T larger than T0 = 10 ms were considered during fitting.

We also observed that history dependence can build up on all timescales up to seconds, and that it shows characteristic increases at particular past ranges, e.g. T ≈ 100 ms and T ≈ 200 ms in CA1 (Fig 6B), possibly reflecting phase information in the theta cycles [45, 46]. Thus, the analysis does not only serve to investigate differences in history dependence between neural systems, but also resolves clear differences between sorted units. This could be used to investigate differences in information processing between different cortical layers, different neuron types or neurons with different receptive field properties. Importantly, because units are so different, ad hoc embedding schemes with a fixed number of bins or fixed bin width will miss considerable history dependence.

Discussion

To estimate history dependence in experimental data, we developed a method where the embedding of past spiking is optimized for each individual spike train. Thereby, it can estimate a maximum of history dependence, given what is possible for the limited amount of data. We found that embedding optimization is a robust and flexible estimation tool for neural spike trains with vastly different spiking statistics, where ad hoc embedding strategies would estimate substantially less history dependence. Based on our results, we arrived at practical guidelines that are summarized in Fig 9. In the following, we contrast history dependence R(T) with time-lagged measures such as the autocorrelation in more detail, discussing both the advantages and the limitations of the approach. We then discuss how one can relate estimated history dependence to neural coding and information processing based on the example data sets analyzed in this paper.

Fig 9. Practical guidelines for the estimation of history dependence in single neuron spiking activity.

Fig 9

More details regarding the individual points can be found at the end of Materials and methods.

Advantages and limitations of history dependence in comparison to the autocorrelation and lagged mutual information

A key difference between history dependence R(T) and the autocorrelation or lagged mutual information is that R(T) quantifies statistical dependencies between current spiking and the entire past spiking in a past range T (Fig 1B). This has the following advantages as a measure of statistical dependence, and as a footprint of information processing in single neuron spiking. First, R(T) allows one to compute the total history dependence, which, from a coding perspective, represents the redundancy of neural spiking with all past spikes; or how much of the past information is also represented when emitting a spike. Second, because past spikes are considered jointly, R(T) captures synergistic effects and dismisses redundant past information (Fig 4). Finally, we found that this enables R(T) to disentangle the strength and timescale of history dependence for the binary autoregressive process (Fig 3). In contrast, autocorrelation C(T) or lagged mutual information L(T) quantify the statistical dependence of neural spiking on a single past bin with time lag T, without considering any of the other bins (Fig 1A). Thereby, they miss synergistic effects, and they quantify redundant past dependencies that vanish once spiking activity in the more recent past is taken into account (Fig 4). As a consequence, the timescales of these measures reflect both the strength and the temporal depth of history dependence in the binary autoregressive process (Fig 3).

Moreover, technically, the autocorrelation time τC depends on fitting an exponential decay to the coefficients C(T). Computing the autocorrelation time with the generalized timescale is difficult, because coefficients C(T) can be negative, and are too noisy for large T. While model fitting is in general more data efficient than the model-free estimation presented here, it can also produce biased and unreliable estimates [16]. Furthermore, when the coefficients do not decay exponentially, a more complex model has to be fitted [52], or the analysis simply cannot be applied. In contrast, the generalized timescale can be directly applied to estimates of the history dependence R(T) to yield the information timescale τR without any further assumptions or model fitting. However, we found that estimates of τR can depend strongly on the estimation method and embedding dimension (S12 Fig) and the size of the data set (S2 and S3 Figs). The dependence on data size is less strong for the practical approach of optimizing up to dmax = 5 past bins, but we still recommend using data sets of similar length when aiming for comparability across experiments. Moreover, there might be cases where a model-free estimation of the true timescale is infeasible because of the complexity of past dependencies (S2 Fig, neuron with a 22 second past kernel). In this case, only ≈ 80% of the true timescale could be estimated on a 90 minute recording.

Another downside of quantifying the history dependence R(T) is that its estimation requires more data than fitting the autocorrelation time τC. To make best use of the limited data, we here devised the embedding optimization approach, which allows one to find the most efficient representation of past spiking for the estimation of history dependence. Even so, we found empirically that a minimum of 10 minutes of recorded spiking activity is advisable to achieve a meaningful quantification of history dependence and its timescale (S2 and S3 Figs). In addition, for shorter recordings, the analysis can lead to mild overestimation due to optimizing and overfitting embedding parameters on noisy estimates (S2 Fig). This overestimation can, however, be avoided by cross-validation, which we find to be particularly relevant for the Bayesian bias criterion (BBC) estimator. Finally, our approach uses an embedding model that ranges from a uniform embedding to an embedding with exponentially stretching past bins—assuming that past information farther into the past requires less temporal resolution [53]. This embedding model might be inappropriate if, for example, spiking depends on the exact timing of distant past spikes, with gaps in time where past spikes are irrelevant. In such a case, embedding optimization could be used to optimize more complex embedding models that can also account for this kind of spiking statistics.

Differences in total history dependence and information timescale between data sets agree with ideas from neural coding and hierarchical information processing

First, we found that the total history dependence Rtot clearly differs among the experimental data sets. Notably, Rtot was low for recordings of early visual processing areas such as retina and primary visual cortex, which is in line with the theory of efficient coding [1, 54] and neural adaptation for temporal whitening as observed in experiments [3, 55]. In contrast, Rtot was high for neurons in dorsal hippocampus (layer CA1) and cortical culture. In CA1, the original study [46] found that the temporal structure of neural activity within the temporal windows set by the theta cycles was beyond what one would expect from integration of feed-forward excitatory inputs. The authors concluded that this could be due to local circuit computations. The high values of Rtot support this idea, and suggest that local circuit computations could serve the integration of past information, either for the formation of a path integration–based neural map [56], or to recognize statistical structure for associative learning [8]. In cortical culture, neurons are exclusively driven by recurrent input and exhibit strong bursts in the population activity [57]. This leads to strong history dependence also at the single-neuron level.

To summarize, history dependence was low for early sensory processing and high for high level processing or past dependencies that are induced by strong recurrent feedback in a neural network. We thus conclude that estimated total history dependence Rtot does indeed provide a footprint of neural coding and information processing.

Second, we observed that the information timescale τR increases from retina (≈ 23 ms) to primary visual cortex (≈ 37 ms) to CA1 (≈ 96 ms), in agreement with the idea of a temporal hierarchy in neural information processing [12]. These results qualitatively agree with similar results obtained for the autocorrelation time of spontaneous activity [9], although the information timescales are overall much smaller than the autocorrelation times. Our results suggest that the hierarchy of intrinsic timescales could also show in the history dependence of single neurons measured by the mutual information.

Conclusion

Embedding optimization enables the estimation of history dependence in a diversity of spiking neural systems, both in terms of its strength and its timescale. The approach could be used in future experimental studies to quantify history dependence across a diversity of brain areas, e.g. using the novel Neuropixels probe [58], or even across cortical layers within a single area. To this end we provide a toolbox for Python3 [37]. These analyses might yield a more complete picture of hierarchical processing in terms of the timescale, and a footprint of information processing and coding principles, i.e. information integration versus redundancy reduction.

Materials and methods

In this section, we provide all mathematical details required to reproduce the results of this paper. We first provide the basic definitions of history dependence, time-lagged measures and the past embedding. We then describe the embedding optimization approach that is used to estimate history dependence from neural spike recordings, and provide a description of the workflow. Next, we delineate the estimators of history dependence considered in this paper, and present the novel Bayesian bias criterion. Finally, we provide details on the benchmark model and how we approximated its history dependence for given past range and embedding parameters. All code for Python3 that was used to analyze the data and to generate the figures is available online at https://github.com/Priesemann-Group/historydependence.

Ethics statement

Data from salamander retina were recorded in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health, and the protocol was approved by the Institutional Animal Care and Use Committee (IACUC) of Princeton University (Protocol Number: 1828). The rat dorsal hippocampus experimental protocols were approved by the Institutional Animal Care and Use Committee of Rutgers University. Data from mouse primary visual cortex were recorded according to the UK Animals Scientific Procedures Act (1986).

Glossary

Terms

  • Past embedding: discrete, reduced representation of past spiking through temporal binning

  • Past-embedding optimization: Optimization of temporal binning for better estimation of history dependence

  • Embedding-optimized estimate: Estimate of history dependence for optimized embedding

Abbreviations

  • GLM: generalized linear model

  • ML: Maximum likelihood

  • BBC: Bayesian bias criterion

  • Shuffling: Shuffling estimator based on a bias correction for the ML estimator

Symbols

  • Δt: bin size of the time bin for current spiking

  • T: past range of the past embedding

  • [t − T, t): embedded past window

  • d: embedding dimension or number of bins

  • κ: scaling exponent for exponential embedding

  • Trec: recording length

  • N = (Trec − T)/Δt: number of measurements, i.e. number of observed joint events of current and past spiking

  • X: random variable with binary outcomes x ∈ {0, 1}, which indicate the presence of a spike in a time bin Δt

  • XT: random variable whose outcomes are binary sequences xT ∈ {0, 1}^d, which represent past spiking activity in a past range T

Information-theoretic quantities

  • H(spiking) ≡ H(X): average spiking information

  • H(spiking|past(T)) ≡ H(X|XT): average spiking information for given past spiking in a past range T

  • I(spiking;past(T)) ≡ I(X; XT): mutual information between current spiking and past spiking in a past range T

  • R(T) ≡ I(X;XT)/H(X): history dependence for given past range T

  • R(T, d, κ) ≡ I(X; X^T_{d,κ})/H(X): history dependence for given past range T and past embedding d, κ

  • Rtot ≡ lim_{T→∞} R(T): total history dependence

  • ΔR(T_i) ≡ R(T_i) − R(T_{i−1}): gain in history dependence

  • τR: information timescale or generalized timescale of history dependence R(T)

  • L(T) ≡ I(X; XT): lagged mutual information with time lag T

  • τL: generalized timescale of lagged mutual information L(T)

Estimated quantities

  • R^(T,d,κ): estimated history dependence for given past range T and past embedding d, κ

  • R^(T): embedding-optimized estimate of R(T) for optimal embedding parameters d*, κ*

  • R^tot: estimated total history dependence, i.e. average R^(T) for T ∈ [TD, Tmax], with interval of saturated estimates [TD, Tmax]

  • τ^R: estimated information timescale

Basic definitions

Definition of history dependence

We quantify history dependence R(T) as the mutual information I(X, XT) between present and past spiking X and XT, normalized by the binary Shannon information of spiking H(X), i.e.

R(T) \equiv \frac{I(X; X^T)}{H(X)} = 1 - \frac{H(X|X^T)}{H(X)}. (6)

Under the assumption of stationarity and ergodicity the mutual information can be computed either as the average over the stationary distribution p(x, x^T), or as a time average [21, 58], i.e.

I(X; X^T) = H(X) - H(X|X^T) (7)
= \sum_{x \in \{0,1\}} p(x) \log_2 \frac{1}{p(x)} - \sum_{x \in \{0,1\}} \sum_{x^T \in \{0,1\}^d} p(x, x^T) \log_2 \frac{1}{p(x|x^T)} (8)
= \sum_{x \in \{0,1\}} \sum_{x^T \in \{0,1\}^d} p(x, x^T) \log_2 \frac{p(x|x^T)}{p(x)} (9)
= \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \log_2 \frac{p(x_{t_n}|x^T_{t_n})}{p(x_{t_n})}. (10)

Here, x_{t_n} ∈ {0, 1} indicates the presence of a spike in a small interval [t_n, t_n + Δt) with Δt = 5 ms throughout the paper, and x^T_{t_n} encodes the spiking history in a time window [t_n − T, t_n) at times t_n = nΔt that are shifted by Δt.

Definition of lagged mutual information

The lagged mutual information L(T) [41] for a stationary neural spike train is defined as the mutual information between present spiking X and past spiking X_T in a single past bin with time lag T, i.e.

L(T) \equiv I(X; X_T) (11)
= \sum_{x \in \{0,1\}} \sum_{x_T \in \{0,1\}} p(x, x_T) \log_2 \frac{p(x|x_T)}{p(x)} (12)
= \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \log_2 \frac{p(x_{t_n}|x_{t_n - T})}{p(x_{t_n})}. (13)

Here, x_{t_n} ∈ {0, 1} indicates the presence of a spike in a time bin [t_n, t_n + Δt) and x_{t_n − T} ∈ {0, 1} the presence of a spike in a single past bin [t_n − T, t_n − T + Δt) at times t_n = nΔt that are shifted by Δt. In analogy to R(T), one can apply the generalized timescale to the lagged mutual information to obtain a timescale τL with

\tau_L \equiv \frac{\sum_{i=1}^{n} \bar{T}_i\, L(T_i)}{\sum_{j=1}^{n} L(T_j)} - T_0. (14)

Definition of autocorrelation

The autocorrelation C(T) for a stationary neural spike train is defined as

C(T) = \frac{\mathrm{Cov}[x_{t_n}, x_{t_n - T}]}{\mathrm{Var}[x_{t_n}]} = \frac{\langle x_{t_n} x_{t_n - T} \rangle - \langle x_{t_n} \rangle^2}{\langle x_{t_n}^2 \rangle - \langle x_{t_n} \rangle^2} (15)

with time lag T and x_{t_n} and x_{t_n − T} as above. For an exponentially decaying autocorrelation C(T) ∝ exp(−T/τC), τC is called the autocorrelation time.
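As an illustration of this procedure, the sketch below bins a spike train, computes the coefficients C(T), and fits the exponential decay for lags T > T0 to obtain τC. It is a minimal example assuming numpy and scipy; the function and parameter names are illustrative and do not refer to the published toolbox.

```python
import numpy as np
from scipy.optimize import curve_fit

def autocorrelation_time(spike_times, delta_t=0.005, max_lag=2.0, T0=0.010):
    """Estimate tau_C by fitting an exponential decay to the autocorrelation
    coefficients C(T) of a binned spike train (cf. Eq (15))."""
    # discretize the spike train into binary bins of width delta_t (seconds)
    n_bins = int(spike_times.max() / delta_t) + 1
    x = np.zeros(n_bins)
    x[(spike_times / delta_t).astype(int)] = 1.0

    mean, var = x.mean(), x.var()
    lags = np.arange(1, int(max_lag / delta_t))
    C = np.array([np.mean(x[l:] * x[:-l]) for l in lags]) - mean ** 2
    C /= var

    # fit C(T) ~ A * exp(-T / tau_C), using only lags T > T0 (here 10 ms)
    T = lags * delta_t
    fit_mask = T > T0
    exp_decay = lambda T, A, tau: A * np.exp(-T / tau)
    (A, tau_C), _ = curve_fit(exp_decay, T[fit_mask], C[fit_mask], p0=(0.1, 0.1))
    return tau_C
```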

Past embedding

Here, we encode the spiking history in a finite time window [t − T, t) as a binary sequence x^T_t = (x^T_{t,i})_{i=1}^{d} of binary spike counts x^T_{t,i} ∈ {0, 1} in d past bins (Fig 2). When more than one spike can occur in a single bin, x^T_{t,i} = 1 is chosen for spike counts larger than the median activity in the ith bin. This type of temporal binning is more generally referred to as past embedding. It is formally defined as a mapping

\Gamma_T(\theta): \mathcal{F}_T \to S_d (16)

from the set of all possible spiking histories \mathcal{F}_T = σ(X_τ : τ ∈ [t − T, t)), i.e. the sigma algebra generated by the point process X (neural spiking) in the time interval [t − T, t), to the set of d-dimensional binary sequences S_d. We can drop the dependence on the time t because we assume stationarity of the point process. Here, T is the embedded past range, d the embedding dimension, and θ denotes all the embedding parameters that govern the mapping, i.e. θ = (d, …). The resulting binary sequence at time t for given embedding θ and past range T will be denoted by x^T_{t,θ}. In this paper, we consider the following two embeddings for the estimation of history dependence.

Uniform embedding

If all bins have the same bin width τ = T/d, the embedding is called uniform. The main drawback of the uniform embedding is that higher past ranges T enforce a uniform decrease in resolution when d is fixed.

Exponential embedding

One can generalize the uniform embedding by letting bin widths increase exponentially with bin index j = 1, …, d according to τ_j = τ_1 · 10^{(j−1)κ}. Here, τ_1 gives the bin size of the first past bin, and is uniquely determined once T, d and κ are specified. Note that κ = 0 yields a uniform embedding, whereas κ > 0 decreases the resolution on distant past spikes. For fixed embedding dimension d and past range T, this allows one to retain a higher resolution on spikes in the more recent past.
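For concreteness, a minimal sketch of how the bin widths of such an embedding can be computed is given below (numpy assumed; function name and return convention are illustrative).

```python
import numpy as np

def embedding_bin_widths(T, d, kappa):
    """Bin widths tau_j = tau_1 * 10**((j - 1) * kappa), j = 1..d,
    with tau_1 fixed by the requirement that the widths sum to T."""
    scaling = 10.0 ** (kappa * np.arange(d))      # 1, 10^kappa, 10^(2*kappa), ...
    tau_1 = T / scaling.sum()                     # width of the first (most recent) bin
    widths = tau_1 * scaling
    # offsets measured backwards from t: bin j covers [t - offsets[j], t - offsets[j-1])
    offsets = np.concatenate(([0.0], np.cumsum(widths)))
    return widths, offsets

widths, offsets = embedding_bin_widths(T=1.0, d=5, kappa=0.4)   # kappa = 0 -> uniform bins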

Sufficient embedding

Ideally, the past embedding preserves all the information that the spiking history in the past range T has about the present spiking dynamics. In that case, no additional past information has an influence on the probability for x_t once the embedded spiking history x^T_{t,θ} is given, i.e.

p(x_t | x^T_{t,\theta}, x^T_{t,\nu}) = p(x_t | x^T_{t,\theta}) (17)

for any other past embedding x^T_{t,ν}. If Eq (17) holds for all times t, the embedding Γ_T(θ) is called a sufficient embedding. For the remainder of this paper, the sequences of sufficient embeddings are denoted by x^T_t.

Insufficient embeddings cause underestimation of history dependence

The past embedding is essential when inferring history dependence from recordings, because an insufficient embedding causes underestimation of history dependence. To show this, we note that for any embedding parameters θ and past range T the Kullback-Leibler divergence between the spiking probability for the sufficient embedding p(x_t|x^T_t) and p(x_t|x^T_{t,θ}) cannot be negative [60], i.e.

D_{KL}\left[p(x_t|x^T_t)\,\|\,p(x_t|x^T_{t,\theta})\right] = \sum_{x_t \in \{0,1\}} p(x_t|x^T_t) \log_2 \frac{p(x_t|x^T_t)}{p(x_t|x^T_{t,\theta})} \geq 0, (18)

with equality iff p(x_t|x^T_{t,θ}) = p(x_t|x^T_t). By taking the average over all times t_n, we arrive at

0 \leq \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \sum_{x_{t_n} \in \{0,1\}} p(x_{t_n}|x^T_{t_n}) \log_2 \frac{p(x_{t_n}|x^T_{t_n})}{p(x_{t_n}|x^T_{t_n,\theta})} (19)
= \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \sum_{x_{t_n} \in \{0,1\}} p(x_{t_n}|x^T_{t_n}, x^T_{t_n,\theta}) \log_2 \frac{1}{p(x_{t_n}|x^T_{t_n,\theta})} (20)
- \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \sum_{x_{t_n} \in \{0,1\}} p(x_{t_n}|x^T_{t_n}) \log_2 \frac{1}{p(x_{t_n}|x^T_{t_n})} (21)
= H(X|X^T_\theta) - H(X|X^T), (22)

where the last step follows from stationarity and ergodicity and marginalizing out x^T_{t_n} in the first term. From here, it follows that one always underestimates the history dependence in neural spiking, as long as the embedding is not sufficient, i.e.

R(T, \theta) \equiv 1 - \frac{H(X|X^T_\theta)}{H(X)} \leq 1 - \frac{H(X|X^T)}{H(X)} = R(T). (23)

Estimation of history dependence using past-embedding optimization

The past embedding is crucial in determining how much history dependence we can capture, since an insufficient embedding θ leads to an underestimation of the history dependence R(T) ≥ R(T, θ). In order to capture as much history dependence as possible, the embedding θ should be chosen to maximize the estimated history dependence R(T, θ). Since the history dependence has to be estimated from data, we formulate the following embedding optimization procedure in terms of the estimated history dependence R^(T,θ).

Embedding optimization

For given T, find the optimal embedding θ* that maximizes the estimated history dependence

\theta^* = \mathrm{argmax}_\theta\, \hat{R}(T, \theta). (24)

This yields an embedding-optimized estimate R^(T)=R^(T,θ*) of the true history dependence R(T).

Requirements

Embedding optimization can only give sensible results if the optimized estimates R^(T,θ) are guaranteed to be unbiased or a lower bound to the true R(T, θ). Otherwise, embeddings will be chosen that strongly overestimate history dependence. In this paper, we therefore use two estimators, BBC and Shuffling, the former of which is designed to be unbiased, and the latter a lower bound to the true R(T, θ) (see below). In addition, embedding optimization works only if the estimation variance is sufficiently small. Otherwise, maximizing over variable estimates can lead to a mild overestimation. We found for a benchmark model that this overestimation was negligibly small for a recording length of 90 minutes for a model neuron with a 4 Hz average firing rate (S1 Fig). For smaller recording lengths, potential overfitting can be avoided by cross-validation, i.e. optimizing embeddings on one half of the recording and computing embedding-optimized estimates on the other half.

Implementation

For the optimization, we compute estimates R^(T,d,κ) for a range of embedding dimensions d ∈ {1, 2, …, dmax} and scaling parameters κ ∈ [0, κmax]. For each T, we then choose the parameter combination d*, κ* that maximizes the estimated history dependence R^(T,d,κ), and use R^(T,d*,κ*) as the best estimate of R(T).
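The grid search itself is straightforward. The sketch below assumes a bias-regularized estimator with the hypothetical interface estimate_R(spikes, T, d, kappa) that returns R^(T, d, κ); it is not the published implementation.

```python
import numpy as np

def optimize_embedding(spikes, T, estimate_R, d_max=20, n_kappa=10, kappa_max=1.0):
    """Embedding optimization, Eq (24): maximize the estimated history
    dependence over embedding dimensions d and scaling exponents kappa.
    `estimate_R` must be a BBC- or Shuffling-regularized estimator,
    otherwise the maximization systematically overestimates R(T)."""
    R_opt, d_opt, kappa_opt = -np.inf, None, None
    for d in range(1, d_max + 1):
        for kappa in np.linspace(0.0, kappa_max, n_kappa):
            R_est = estimate_R(spikes, T, d, kappa)
            if R_est > R_opt:
                R_opt, d_opt, kappa_opt = R_est, d, kappa
    return R_opt, d_opt, kappa_opt
```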

Estimation of total history dependence and the information timescale

When estimating history dependence R(T) from data, there are some adjustments required to estimate the total history dependence Rtot and the information timescale τR.

First, estimates R^(T) are not guaranteed to converge for large past ranges T, but might decrease due to a reduced resolution of embeddings for higher T (Fig 2D). Thus, we estimated an interval [TD, Tmax] for which estimates have converged. Here, the temporal depth TD and the upper bound Tmax are the first and the last past ranges T for which estimates R^(T) are within one standard deviation of the highest estimate R^max, i.e. R^(T) ≥ R^max − σ_{R^max} (Fig 2D, vertical blue bars). The standard deviation was estimated by bootstrapping (see Bootstrap confidence intervals). From this interval, an estimate of the total history dependence R^tot is obtained by averaging R^(T) over past ranges T ∈ [TD, Tmax] (Fig 2D, horizontal dashed blue line).

Second, noisy estimates R^(T) are not guaranteed to be monotonically increasing, hence increments ΔR^(T) can be negative. Moreover, noisy estimates can lead to positive ΔR^(T) even though the true R(T) has already converged to Rtot. This can have a huge effect on the estimated information timescale τ^R if one simply uses these estimates in Eq (5). To avoid this, we use knowledge about the behavior of the true R(T) when estimating ΔR(T). In particular, we set estimates R^(T) equal to the largest previous estimate R^(T′) for T′ < T if they fall below it, and equal to R^tot if they are larger than R^tot. This enforces that the estimated gain ΔR^(T) ≥ 0 is non-negative, and excludes spurious gains for high T due to noisy estimates.

Finally, the information timescale τR can crucially depend on the choice of the minimum past range T0 in the sum in Eq (5). A T0 > 0 allows one to ignore short-term effects on the history dependence such as the refractory period or different firing modes, which we found beneficial for resolving differences in the timescale among different neural systems (S15 Fig). In contrast, if the decay is truly exponential, then τR is independent of T0. In this paper, we chose T0 = 10 ms to exclude short-term effects, while also not excluding too much past information.
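The two post-processing steps described above can be sketched as follows. Since Eq (5) is not reproduced here, the τR part is only an assumption-laden sketch: it takes the generalized timescale to weight the interval midpoints \bar{T}_i by the normalized gains ΔR(T_i), and all names are illustrative.

```python
import numpy as np

def estimate_R_tot(Ts, R_est, sigma_max):
    """Average R_est over the plateau [T_D, T_max] of past ranges whose
    estimates lie within one standard deviation of the maximum."""
    plateau = np.where(R_est >= R_est.max() - sigma_max)[0]
    i_D, i_max = plateau[0], plateau[-1]
    return R_est[i_D:i_max + 1].mean(), Ts[i_D]           # (R_tot, temporal depth T_D)

def estimate_tau_R(Ts, R_est, R_tot, T0=0.010):
    """Generalized timescale from monotonized estimates (sketch of Eq (5))."""
    R = np.minimum(np.maximum.accumulate(R_est), R_tot)   # enforce non-decreasing, cap at R_tot
    dR = np.diff(R, prepend=0.0)                          # gains Delta R(T_i) >= 0
    T_mid = Ts - 0.5 * np.diff(Ts, prepend=0.0)           # interval midpoints T_bar_i
    use = Ts > T0                                         # ignore short-term effects below T0
    return np.sum(T_mid[use] * dR[use]) / np.sum(dR[use]) - T0
```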

Workflow

The estimation workflow using embedding optimization is summarized in Fig 10.

Fig 10. Workflow of past-embedding optimization to estimate history dependence and the information timescale.

Fig 10

1) Define a set of embedding parameters d, κ for fixed past range T. 2) For each embedding d, κ, record sequences of current and past spiking (x_{t_n}, x^T_{t_n,θ}) for all time steps t_n in the recording. 3) Use the frequencies of the recorded sequences to estimate history dependence for each embedding, either using maximum likelihood (ML), or fully Bayesian estimation (NSB). 4) Apply regularization, i.e. the Bayesian bias criterion (BBC) or Shuffling bias correction, to ensure that all estimates are unbiased or lower bounds to the true history dependence. 5) Select the optimal embedding to obtain an embedding-optimized estimate of R(T). 6) Repeat the estimation for a set of past ranges T to compute estimates of the information timescale τR and the total history dependence Rtot.

Different estimators of history dependence

To estimate R(T, θ), one has to estimate the binary entropy of spiking H(X) in a small time bin Δt, and the conditional entropy H(X|X^T_θ) from data. The estimation of the binary entropy only requires the average firing probability p(x = 1) = rΔt, with

\hat{H}(X) = -r\Delta t \log_2 (r\Delta t) - (1 - r\Delta t) \log_2 (1 - r\Delta t), (25)

which can be estimated with high accuracy from the estimated average firing rate r even for short recordings. The conditional entropy H(X|XθT), on the other hand, is much more difficult to estimate. In this paper, we focus on a non-parametric approach that estimates

H(X|X^T_\theta) = H(X, X^T_\theta) - H(X^T_\theta) (26)

by a non-parametric estimation of the entropies H(XθT) and H(X,XθT).

The estimation of entropy from data is a well-established problem, and we can make use of previously developed entropy estimation techniques for the estimation of history dependence. We here write out the estimation of the entropy term for joint sequences of present and past spiking H(X,XθT), which is the highest-dimensional term and thus the hardest to estimate. Estimation for the marginal entropy H(XθT) is completely analogous.

Computing the entropy requires knowing the statistical uncertainty and thus the probabilities for all possible joint sequences. In the following we will write the probabilities as a vector π = (π_k)_{k=1}^{K}, where π_k ≡ p((x, x^T_θ) = a_k) are the probabilities for the K = 2^{d+1} possible joint spike patterns a_k ∈ {0, 1}^{d+1}. The entropy H(X, X^T_θ) then reads

H(X, X^T_\theta) = H(\pi) = -\sum_{k=1}^{K} \pi_k \log_2 \pi_k. (27)

Once we are able to estimate the probability distribution π, we are able to estimate the entropy. In a non-parametric approach, the probabilities π = (π_k)_{k=1}^{K} are directly inferred from counts n = (n_k)_{k=1}^{K} of the different spike sequences a_k within the spike recording. Each time step [t_n, t_n + Δt) provides a sample of present spiking x_{t_n} and its history x^T_{t_n,θ}, thus a recording of length Trec provides N = (Trec − T)/Δt data points.

Maximum likelihood estimation

Most commonly, probabilities of spike sequences ak are then estimated as the relative frequencies π^k=nk/N of their occurrence in the observed data. It is the maximum likelihood (ML) estimator of π for the multinomial likelihood

p(n|\pi) \propto \prod_{k=1}^{K} \pi_k^{n_k}. (28)

Plugging the estimates π^k into the definition of the entropy in Eq (27) results in the ML estimator of the entropy

\hat{H}_{ML}(X, X^T_\theta) = -\sum_{k=1}^{K} \frac{n_k}{N} \log_2 \frac{n_k}{N} (29)

or history dependence

\hat{R}_{ML}(T, \theta) = 1 - \frac{\hat{H}_{ML}(X, X^T_\theta) - \hat{H}_{ML}(X^T_\theta)}{\hat{H}(X)}. (30)

The ML estimator has the right asymptotic properties [28, 61], but is known to underestimate the entropy severely when data is limited [28, 62]. This is because all probability mass is assumed to be concentrated on the observed outcomes. A more concentrated probability distribution results in a smaller entropy, in particular if many outcomes have not been observed. This results in a systematic underestimation or negative bias

\mathrm{Bias}\left[\hat{H}_{ML}(X, X^T_\theta)\right] \leq 0. (31)

The negative bias in the entropy, which is largest for the highest-dimensional joint entropy H^ML(X,XθT), then typically leads to severe overestimation of the mutual information and history dependence [27, 63]. Because of this severe overestimation, we cannot use the ML estimator for embedding optimization.
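For reference, a minimal plug-in implementation of Eqs (29) and (30) is sketched below; it assumes the past embedding has already been applied to yield a binary matrix of past patterns, and it is exactly the estimator that the regularizations below are designed to correct.

```python
import numpy as np

def plugin_entropy(counts):
    """ML (plug-in) entropy in bits from observed pattern counts, Eq (29)."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def R_ML(past, x):
    """Plug-in estimate of history dependence, Eq (30).
    past: (N, d) binary array of embedded past spiking; x: (N,) binary array."""
    _, n_joint = np.unique(np.column_stack([x, past]), axis=0, return_counts=True)
    _, n_past = np.unique(past, axis=0, return_counts=True)
    p1 = x.mean()
    H_X = -p1 * np.log2(p1) - (1 - p1) * np.log2(1 - p1)
    return 1 - (plugin_entropy(n_joint) - plugin_entropy(n_past)) / H_X
```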

Bayesian Nemenman-Shafee-Bialek (NSB) estimator

In a Bayesian framework, the entropy is estimated as the posterior mean or minimum mean square error (MMSE)

\hat{H}_{MMSE}(n) = \int d\pi\, H(\pi)\, p(\pi|n) = \frac{\int d\pi\, H(\pi)\, p(n|\pi)\, p(\pi)}{\int d\pi\, p(n|\pi)\, p(\pi)}. (32)

The posterior mean is the mean of the entropy with respect to the posterior distribution on the probability vector π given the observed frequencies of spike sequences n

p(\pi|n) = \frac{p(n|\pi)\, p(\pi)}{\int d\pi\, p(n|\pi)\, p(\pi)}. (33)

The probability p(n|π) for i.i.d. observations n from an underlying distribution π is given by the multinomial distribution in Eq (28).

If the prior p(π) is a conjugate prior to the multinomial likelihood, then the high-dimensional integral of Eq (32) can be evaluated analytically [32]. This is true for a class of priors called Dirichlet priors, and in particular for symmetric Dirichlet priors

p(\pi|\beta) \propto \prod_{k=1}^{K} \pi_k^{\beta - 1}. (34)

The prior p(π|β) gives every outcome the same a priori weight, but controls the weight β > 0 of uniform prior pseudo-counts. A β = 1 corresponds to a flat prior on all probability distributions π, whereas β → 0 gives maximum likelihood estimation (no prior pseudo-count).

It has been shown that the choice of β is highly informative with respect to the entropy, in particular when the number of outcomes K becomes large [64]. This is because the a priori variance of the entropy vanishes for K → ∞, thus for any π ∼ p(π|β) the entropy H(π) is very close to the a priori expected entropy

\xi(\beta) = \int d\pi\, H(\pi)\, p(\pi|\beta) = \psi_0(K\beta + 1) - \psi_0(\beta + 1), (35)

where \psi_m(z) = \partial_z^{m+1} \log\Gamma(z) are the polygamma functions. In addition, a lot of data is required to counter-balance this a priori expectation. This is because the prior adds pseudo-counts on every outcome, i.e. it assumes that every outcome has been observed β times prior to inference. In order to influence a prior that constitutes K pseudo-counts, one needs at least N > K samples, with more data required the sparser the true underlying distribution. Therefore, an estimator of the entropy for little data and fixed concentration parameter β is highly biased towards the a priori expected entropy ξ(β).

Nemenman et al. [33] exploited the tight link between concentration parameter β and the a priori expected entropy to derive a mixture prior

p_{NSB}(\pi) \propto \int d\beta\, \left|\frac{\partial \xi(\beta)}{\partial \beta}\right| p(\pi|\beta), (36)
\frac{\partial \xi(\beta)}{\partial \beta} = K\psi_1(K\beta + 1) - \psi_1(\beta + 1), (37)

that weights Dirichlet priors to be flat with respect to the expected entropy ξ(β). Since the variance of this expectation vanishes for K ≫ 1 [64], for high K the prior is also approximately flat with respect to the entropy, i.e. H(π) ∼ U(0, log2 K) for π ∼ p_NSB(π). The resulting MMSE estimator for the entropy is referred to as the NSB estimator

\hat{H}_{NSB}(n) = \frac{\int d\pi\, H(\pi)\, p(n|\pi)\, p_{NSB}(\pi)}{\int d\pi\, p(n|\pi)\, p_{NSB}(\pi)} (38)
= \frac{\int d\beta\, \frac{d\xi}{d\beta}(\beta)\, \hat{H}(\beta)\, \rho(\beta, n)}{\int d\beta\, \frac{d\xi}{d\beta}(\beta)\, \rho(\beta, n)}. (39)

Here, ρ(β, n) is proportional to the evidence for given concentration parameter

\rho(\beta, n) \propto \frac{\Gamma(K\beta)}{\Gamma(N + K\beta)} \prod_{i=1}^{K} \frac{\Gamma(n_i + \beta)}{\Gamma(\beta)} (40)
\propto \int d\pi\, p(n|\pi)\, p(\pi|\beta) = p(n|\beta), (41)

where Γ(x) is the gamma function. The posterior mean of the entropy for given concentration parameter is

\hat{H}(\beta) = \sum_{i=1}^{K} \frac{n_i + \beta}{N + K\beta} \left[\psi_0(N + K\beta + 1) - \psi_0(n_i + \beta + 1)\right]. (42)

From the Bayesian entropy estimate, we obtain an NSB estimator for history dependence

\hat{R}_{NSB}(T, \theta) = 1 - \frac{\hat{H}_{NSB}(X, X^T_\theta) - \hat{H}_{NSB}(X^T_\theta)}{\hat{H}(X)}, (43)

where the marginal and joint entropies are estimated individually using the NSB method.

To compute the NSB entropy estimator, one has to perform a one-dimensional integral over all possible concentration parameters β. This is crucial to be unbiased with respect to the entropy. An implementation of the NSB estimator for Python3 is published alongside the paper with our toolbox [37]. To compute the integral, we use a Gaussian approximation around the maximum a posteriori β* to define sensible integration bounds when the likelihood is highly peaked, as proposed in [34].
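A condensed sketch of Eqs (38)-(42) is given below. It replaces the Gaussian-approximation integration bounds of [34] by a simple fixed grid over β, returns the entropy in nats (divide by ln 2 for bits), and is meant only to illustrate the structure of the estimator, not the published implementation.

```python
import numpy as np
from scipy.special import gammaln, digamma, polygamma

def nsb_entropy(counts, K):
    """NSB entropy estimate (in nats) for observed pattern counts n_k and
    alphabet size K = 2**(d+1), sketching Eqs (38)-(42) with a crude
    fixed-grid quadrature over the concentration parameter beta."""
    counts = np.asarray(counts, dtype=float)
    N, K_obs = counts.sum(), len(counts)              # K_obs observed, K - K_obs unseen patterns

    def H_beta(beta):                                 # posterior mean entropy, Eq (42)
        denom = N + K * beta
        seen = np.sum((counts + beta) / denom
                      * (digamma(denom + 1) - digamma(counts + beta + 1)))
        unseen = (K - K_obs) * beta / denom * (digamma(denom + 1) - digamma(beta + 1))
        return seen + unseen

    def log_rho(beta):                                # log evidence, Eq (40)
        return (gammaln(K * beta) - gammaln(N + K * beta)
                + np.sum(gammaln(counts + beta) - gammaln(beta)))

    def prior_weight(beta):                           # d xi / d beta, Eq (37)
        return K * polygamma(1, K * beta + 1) - polygamma(1, beta + 1)

    betas = np.logspace(-4, 2, 400)                   # quadrature grid over beta
    log_w = np.array([log_rho(b) + np.log(prior_weight(b)) for b in betas])
    w = np.exp(log_w - log_w.max()) * np.gradient(betas)
    H_vals = np.array([H_beta(b) for b in betas])
    return float(np.sum(w * H_vals) / w.sum())
```

The history dependence then follows from Eq (43) by applying this estimator to the joint and marginal pattern counts.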

Bayesian bias criterion

The goal of the Bayesian bias criterion (BBC) is to indicate when estimates of history dependence are potentially biased. It might indicate bias even when estimates are unbiased, but the opposite should never be true.

To indicate a potential estimation bias, the BBC compares ML and NSB estimates of the history dependence. ML estimates are biased when too few joint sequences have been observed, such that the probability for unobserved or undersampled joint outcomes is underestimated. To counterbalance this effect, the NSB estimate adds β pseudo-counts to every outcome, and then infers β with an uninformative prior. For the BBC, we turn the idea around: when the assumption of no pseudo-counts (ML) versus a posterior belief on non-zero pseudo-counts (NSB) yield different estimates of history dependence, then too few sequences have been observed and estimates are potentially biased. This motivates the following definition of the BBC.

The NSB estimator \hat{R}_{NSB}(T, θ) is biased with tolerance p > 0, if

\left|\hat{R}_{NSB}(T, \theta) - \hat{R}_{ML}(T, \theta)\right| > p \cdot \hat{R}_{NSB}(T, \theta). (44)

Similarly, we define the BBC estimator

\hat{R}_{BBC}(T, \theta) \equiv \begin{cases} \hat{R}_{NSB}(T, \theta) & \text{if } \left|\hat{R}_{NSB}(T, \theta) - \hat{R}_{ML}(T, \theta)\right| \leq p \cdot \hat{R}_{NSB}(T, \theta), \\ 0 & \text{otherwise.} \end{cases} (45)

This estimator is designed to be unbiased, and thus can be used for embedding optimization in Eq (24). We use the NSB estimator for R(T, θ) instead of the ML estimator, because it is generally less biased. A tolerance p > 0 accounts for this, and accepts NSB estimates when there is only a small difference between the estimates. The bound for the difference is multiplied by R^NSB(T,θ), because this provides the scale on which one should be sensitive to estimation bias. We found that a tolerance of p = 0.05 was small enough to avoid overestimation by BBC estimates on the benchmark model (Fig 5 and S1 Fig).
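In code, the criterion reduces to a single comparison between the two estimates (a minimal sketch; the estimates themselves come from the ML and NSB estimators above).

```python
def R_BBC(R_nsb, R_ml, p=0.05):
    """Bayesian bias criterion, Eqs (44)-(45): accept the NSB estimate only
    if ML and NSB estimates of history dependence agree within tolerance p."""
    if abs(R_nsb - R_ml) <= p * R_nsb:
        return R_nsb   # estimates consistent -> assumed unbiased
    return 0.0         # potentially biased -> excluded from embedding optimization
```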

Shuffling estimator

The Shuffling estimator was originally proposed in [31] to reduce the sampling bias of the ML mutual information estimator. It has the desirable property that it is negatively biased in leading order of the inverse number of samples. Because of this property, Shuffling estimates can safely be maximized during embedding optimization without the risk of overestimation. Here, we therefore propose to use the Shuffling estimator for embedding-optimized estimation of history dependence.

The idea behind the Shuffling estimator is to rewrite the ML estimator of history dependence as

\hat{R}_{ML}(T, \theta) = \frac{1}{\hat{H}(X)}\left(\hat{H}_{ML}(X^T_\theta) - \hat{H}_{ML}(X^T_\theta|X)\right) (46)

and to correct for bias in the entropy estimate H^ML(XθT|X). Since X is well sampled and thus H^(X) is unbiased, and the bias of the ML entropy estimator is always negative [28, 61], we know that

\mathrm{Bias}\left[\hat{R}_{ML}(T, \theta)\right] = \frac{\mathrm{Bias}\left[\hat{H}_{ML}(X^T_\theta)\right] - \mathrm{Bias}\left[\hat{H}_{ML}(X^T_\theta|X)\right]}{\hat{H}(X)} (47)
\leq -\frac{\mathrm{Bias}\left[\hat{H}_{ML}(X^T_\theta|X)\right]}{\hat{H}(X)}. (48)

Therefore, if we find a correction term of the magnitude of Bias[H^ML(XθT|X)], we can turn the bias in the estimate of the history dependence from positive to negative, thus obtaining an estimator that is a lower bound of the true history dependence. This can be achieved by subtracting a lower bound of the estimation bias Bias[H^ML(XθT|X)] from H^ML(XθT|X).

In the following, we describe how [31] obtain a lower bound of the bias in the conditional entropy H^ML(XθT|X) by computing the estimation bias for shuffled surrogate data.

Surrogate data are created by shuffling recorded spike sequences such that statistical dependencies between past bins are eliminated. This is achieved by taking all past sequences that were followed by a spike, and permuting past observations of the same bin index j. The same is repeated for all past sequences that were followed by no spike. The underlying probability distribution can then be computed as

p_{sh}(x^T_\theta|x) = \prod_{j=1}^{d} p(x^T_{\theta,j}|x), (49)

and the corresponding entropy is

H(X^T_{\theta,sh}|X) = \sum_{j=1}^{d} H(X^T_{\theta,j}|X). (50)

The pairwise probabilities p(xθ,jT|x) are well sampled, and thus each conditional entropy in the sum can be estimated with high precision. This way, the true conditional entropy H(Xθ,shT|X) for the shuffled surrogate data can be computed and compared to the ML estimate H^ML(Xθ,shT|X) on the shuffled data. The difference between the two

\Delta\hat{H}_{ML}(X^T_{\theta,sh}|X) \equiv \hat{H}_{ML}(X^T_{\theta,sh}|X) - H(X^T_{\theta,sh}|X) (51)

yields a correction term that is on average equal to the bias of the ML estimator on the shuffled data.

Importantly, the bias of the ML estimator on the shuffled data is in leading order more negative than on the original data. To see this, we consider an expansion of the bias on the conditional entropy in inverse powers of the sample size N [27, 63]

\mathrm{Bias}\left[\hat{H}_{ML}(X^T_\theta|X)\right] = -\frac{1}{2N\ln 2} \sum_{x \in \{0,1\}} \left(\tilde{K}(x) - 1\right) + O\left(\frac{1}{N^2}\right). (52)

Here, \tilde{K}(x) denotes the number of past sequences with nonzero probability p(x^T_\theta = a_k|x) > 0 of being observed when followed by a spike (x = 1) or no spike (x = 0), respectively. Notably, the bias is negative in leading order, and depends only on the number of possible sequences \tilde{K}(x). For the shuffled surrogate data, we know that p_{sh}(x^T_\theta = a_k|x) = 0 implies p(x^T_\theta = a_k|x) = 0, but Shuffling may lead to novel sequences that have zero probability otherwise. Hence the number of possible sequences under Shuffling can only increase, i.e. \tilde{K}_{sh}(x) \geq \tilde{K}(x), and thus the bias of the ML estimator under Shuffling is to first order always more negative than for the original data

\mathrm{Bias}\left[\hat{H}_{ML}(X^T_{\theta,sh}|X)\right] \leq \mathrm{Bias}\left[\hat{H}_{ML}(X^T_\theta|X)\right]. (53)

Terms that could render it higher are of order O(N^{−2}) and are assumed to have no practical relevance.

This motivates the following definition of the Shuffling estimator: Compute the difference between the ML estimator on the shuffled and original data to yield a bias-corrected Shuffling estimate

\hat{H}_{ML,sh}(X^T_\theta|X) \equiv \hat{H}_{ML}(X^T_\theta|X) - \Delta\hat{H}_{ML}(X^T_{\theta,sh}|X), (54)

and use this to estimate history dependence

\hat{R}_{Shuffling}(T, \theta) \equiv \frac{1}{\hat{H}(X)}\left(\hat{H}_{ML}(X^T_\theta) - \hat{H}_{ML,sh}(X^T_\theta|X)\right). (55)

Because of Eqs (48) and (53), we know that this estimator is negatively biased in leading order

\mathrm{Bias}\left[\hat{R}_{Shuffling}(T, \theta)\right] \leq 0 (56)

and can safely be used for embedding optimization.
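Putting these steps together, a compact sketch of the Shuffling estimator could look as follows. Numpy is assumed, the helper names are illustrative, and the published toolbox may organize the computation differently.

```python
import numpy as np

def _plugin_entropy_rows(rows):
    """Plug-in entropy (bits) of the empirical distribution over binary rows."""
    _, counts = np.unique(rows, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def _binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def R_shuffling(past, x, rng=None):
    """Shuffling estimator of history dependence, Eqs (49)-(55).
    past: (N, d) binary embedded past spiking; x: (N,) binary current spiking."""
    rng = np.random.default_rng(0) if rng is None else rng
    N, d = past.shape
    H_X = _binary_entropy(x.mean())
    H_past_ml = _plugin_entropy_rows(past)                  # ML estimate of H(X_theta^T)

    H_cond_ml, delta_H_sh = 0.0, 0.0
    for v in (0, 1):                                        # condition on spike / no spike
        sub = past[x == v]
        if len(sub) == 0:
            continue
        w = len(sub) / N                                    # p(x = v)
        H_cond_ml += w * _plugin_entropy_rows(sub)          # ML estimate of H(X_theta^T | X)

        # shuffle each past bin independently within this condition (surrogate data)
        shuffled = np.column_stack([rng.permutation(sub[:, j]) for j in range(d)])
        H_sh_ml = _plugin_entropy_rows(shuffled)            # ML estimate on surrogate data
        H_sh_true = np.sum(_binary_entropy(sub.mean(axis=0)))  # exact value, Eq (50)
        delta_H_sh += w * (H_sh_ml - H_sh_true)             # bias correction term, Eq (51)

    H_cond_corrected = H_cond_ml - delta_H_sh               # Eq (54)
    return (H_past_ml - H_cond_corrected) / H_X             # Eq (55)
```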

Estimation of history dependence by fitting a generalized linear model (GLM)

Another approach to the estimation of history dependence is to model the dependence of neural spiking onto past spikes explicitly, and to fit model parameters to maximize the likelihood of the observed spiking activity [21]. For a given probability distribution p(xt|xtT,ν) of the model with parameters ν, the conditional entropy can be estimated as

\hat{H}(X|X^T, \nu) = -\frac{1}{N}\sum_{n=1}^{N} \log_2 p(x_{t_n}|x^T_{t_n}, \nu), (57)

which one can plug into Eq (6) to obtain an estimate of the history dependence. The strong law of large numbers [59] ensures that if the model is correct, i.e. p(xt|xtT,ν)=p(xt|xtT) for all t, this estimator converges to the entropy H(X|XT) for N → ∞. However, any deviations from the true distribution due to an incorrect model will lead to an underestimation of history dependence, similar to choosing an insufficient embedding. Therefore, model parameters should be chosen to maximize the history dependence, or to maximize the likelihood

\nu^* = \mathrm{argmax}_\nu \sum_{n=1}^{N} \log_2 p(x_{t_n}|x^T_{t_n}, \nu). (58)

We here consider a generalized linear model (GLM) with exponential link function that has successfully been applied to make predictions in neural spiking data [20] and can be used for the estimation of directed, causal information [21]. In a GLM with past dependencies, the spiking probability at time t is described by the instantaneous rate or conditional intensity function

\lambda(t|x^T_t, \nu) = \lim_{\delta t \to 0} \frac{p(\hat{t} \in [t, t + \delta t]\,|\,x^T_t, \nu)}{\delta t}. (59)

Since we discretize spiking activity in time as spiking or non-spiking in a small time window Δt, the spiking probability is given by the binomial probability

p(x_t = 1|x^T_t, \nu) = \frac{\lambda(t|x^T_t, \nu)\,\Delta t}{1 + \lambda(t|x^T_t, \nu)\,\Delta t}. (60)

The idea of the GLM is that past events contribute independently to the probability of spiking, such that the conditional intensity function factorizes over their contributions. Hence, it can be written as

\lambda(t|x^T_t, \mu, h) = \exp\left(\mu + \sum_{j=1}^{d} h_j\, x^T_{t,j}\right), (61)

where hj gives the contribution of past activity xt,jT in past time bin j to the firing rate, and μ is an offset that is adapted to match the average firing rate.

Although fitting GLM parameters is more data-efficient than computing non-parametric estimates, overfitting may occur for limited data and high embedding dimensions d, hence d cannot be chosen arbitrarily high. In order to estimate a maximum of history dependence for limited d, we apply the same type of binary past embedding as we use for the other estimators, and optimize the embedding parameters by minimizing the Bayesian information criterion [65]. In particular, for given past range T, we choose embedding parameters d*, κ* that minimize

\mathrm{BIC}(d, \kappa) = (d + 1)\log_2 N - 2 L^*(d, \kappa), (62)

where N is the number of samples and

L^*(d, \kappa) = \sum_{n=1}^{N} \log_2 p(x_{t_n}|x^T_{t_n,d,\kappa}, \mu^*, h^*) (63)

is the maximized log-likelihood of the recorded spike sequences (x_{t_n}, x^T_{t_n,d,κ})_{n=1}^{N} for optimal model parameters μ*, h*. We then use the optimized embedding parameters to estimate the conditional entropy according to

\hat{H}_{GLM}(X|X^T_{d^*,\kappa^*}) = -\frac{1}{N} L^*(d^*, \kappa^*), (64)

which results in the GLM estimator of history dependence

\hat{R}_{GLM}(T) = 1 - \frac{\hat{H}_{GLM}(X|X^T_{d^*,\kappa^*})}{\hat{H}(X)}. (65)
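Because p(x = 1 | past) = λΔt/(1 + λΔt) with λ = exp(μ + h·x_past), the GLM is equivalent to logistic regression with the constant log Δt absorbed into the intercept. A self-contained maximum-likelihood sketch (scipy assumed; not the published implementation) is given below.

```python
import numpy as np
from scipy.optimize import minimize

def fit_glm(past, x):
    """ML fit of the GLM of Eqs (60)-(61); returns parameters, BIC (Eq (62))
    and the conditional entropy estimate of Eq (64)."""
    N, d = past.shape
    X = np.column_stack([np.ones(N), past])           # intercept + d past bins

    def neg_log_lik(params):                          # negative log-likelihood in nats
        z = X @ params
        return np.sum(x * np.logaddexp(0.0, -z) + (1 - x) * np.logaddexp(0.0, z))

    res = minimize(neg_log_lik, np.zeros(d + 1), method="L-BFGS-B")
    L_star = -res.fun / np.log(2)                     # maximized log-likelihood in bits, Eq (63)
    bic = (d + 1) * np.log2(N) - 2 * L_star           # Eq (62)
    H_cond = -L_star / N                              # Eq (64)
    return res.x, bic, H_cond

def R_GLM(past, x):
    """GLM estimate of history dependence, Eq (65)."""
    p1 = x.mean()
    H_X = -p1 * np.log2(p1) - (1 - p1) * np.log2(1 - p1)
    return 1 - fit_glm(past, x)[2] / H_X
```

In practice, this fit would be repeated over a grid of embeddings (d, κ) and the embedding with minimal BIC selected, as described above.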

Bootstrap confidence intervals

In order to estimate confidence intervals of estimates R^(T,θ) for given past embeddings, we apply the blocks of blocks bootstrap method [66]. To obtain bootstrap samples, we first compute all the binary sequences (x_{t_n}, x^T_{t_n,θ}) for n = 1, …, N that result from discretizing the spike recording in N time steps Δt and applying the past embedding. We then randomly draw N/l blocks of length l of the recorded binary sequences such that the total number of redrawn sequences is the same as in the original data. We choose l to be the average interspike interval (ISI) in units of time steps Δt, i.e. l = 1/(rΔt) with average firing rate r. Sampling successive sequences over the typical ISI ensures that bootstrap samples are representative of the original data, while also providing a high number of distinct blocks that can be drawn.

The different estimators (but not the bias criterion) are then applied to each bootstrap sample to obtain confidence intervals of the estimates. Instead of computing the 95% confidence interval via the 2.5 and 97.5 percentiles of the bootstrapped estimates, we assumed a Gaussian distribution and approximated the interval via [R^(T,θ) − 2σ^R(T,θ), R^(T,θ) + 2σ^R(T,θ)], where σ^R(T,θ) is the standard deviation over the bootstrapped estimates.

We found that the true standard deviation of estimates for the model neuron was well estimated by the bootstrapping procedure, irrespective of the recording length (S10 Fig). Furthermore, we simulated 100 recordings of the same recording length, and for each computed the confidence interval for the past range T with the highest estimated history dependence R^(T). By measuring how often the model's true value for the same embedding was included in these intervals, we found that the Gaussian confidence intervals are indeed close to the claimed confidence level (S10 Fig). This indicates that the bootstrap confidence intervals approximate well the uncertainty associated with estimates of history dependence.
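A minimal sketch of the block-resampling step is shown below; the estimator interface and the block handling are illustrative, not the published implementation.

```python
import numpy as np

def bootstrap_std(past, x, estimator, rate, delta_t=0.005, n_boot=250, seed=0):
    """Blocks-of-blocks bootstrap standard deviation of an estimator
    R_hat = estimator(past, x). Blocks of l = 1/(rate * delta_t) consecutive
    time steps (about one mean ISI) are redrawn with replacement."""
    rng = np.random.default_rng(seed)
    N = len(x)
    l = max(1, int(round(1.0 / (rate * delta_t))))    # block length in time steps
    n_blocks = N // l
    starts = np.arange(N - l + 1)                     # admissible block start indices

    estimates = []
    for _ in range(n_boot):
        chosen = rng.choice(starts, size=n_blocks, replace=True)
        idx = np.concatenate([np.arange(s, s + l) for s in chosen])
        estimates.append(estimator(past[idx], x[idx]))
    return np.std(estimates, ddof=1)

# Gaussian 95% confidence interval: R_hat +/- 2 * bootstrap_std(...)
```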

Cross-validation

For small recording lengths, embedding optimization may cause overfitting through the maximization of variable estimates (S1 Fig). To avoid this type of overestimation, we apply one round of cross-validation, i.e. we optimize embeddings over the first half of the recording, and evaluate estimates for the optimal past embedding on the second half. We chose this separation of training and evaluation data sets, because it allows the fastest computation of binary sequences (x_{t_n}, x^T_{t_n,θ}) for the different embeddings during optimization. We found that none of the cross-validated embedding-optimized estimates systematically overestimated the true history dependence of the benchmark model for recordings as short as three minutes (S1 Fig). Therefore, cross-validation makes it possible to apply embedding optimization and estimate history dependence even for short recordings.

Benchmark neuron model

Generalized leaky integrate-and-fire neuron with spike-frequency adaptation

As a benchmark model, we chose a generalized leaky integrate-and-fire model (GLIF) with an additional adaptation filter ξ (GLIF-ξ) that captures spike-frequency adaptation over 20 seconds [43].

For a standard leaky integrate-and-fire neuron, the neuron’s membrane is formalized as an RC circuit, where the cell’s lipid membrane is modeled as a capacitance C, and the ion channels as a resistance that admits a leak current with effective conductance gL. Hence, the temporal evolution of the membrane’s voltage V is governed by

C\dot{V} = -g_L(V - V_R) + I_{ext}(t). (66)

Here, VR denotes the resting potential and Iext(t) external currents that are induced by some external drive. The neuron emits an action potential (spike) once the neuron crosses a voltage threshold VT, where a spike is described as a delta pulse at the time of emission t^. After spike emission, the neuron returns to a reset potential V0. Here, we do not incorporate an explicit refractory period, because interspike intervals in the simulation were all larger than 10 ms. For constant input current Iext, integrating Eq (66) yields the membrane potential between two spiking events

V(t) = V_\infty + (V_0 - V_\infty)\, e^{-\gamma (t - \hat{t}_0)}, (67)

where \hat{t}_0 is the time of the most recent spike, γ = g_L/C the inverse membrane timescale and V_∞ = V_R + I_ext/γ the equilibrium potential.

In contrast to the LIF, the GLIF models the spike emission with a soft spiking threshold. To do that, spiking is described by an inhomogeneous Poisson process, where the spiking probability in a time window of width δt ≪ 1 is given by

p(\hat{t} \in [t, t + \delta t]) = 1 - \exp\left(-\int_t^{t+\delta t} \lambda(s)\, ds\right) \approx \lambda(t)\,\delta t. (68)

Here, the spiking probability is governed by the time dependent firing rate

\lambda(t) = \lambda_0 \exp\left(\frac{V(t) - V_T(t)}{\Delta V}\right). (69)

The idea is that once the membrane potential V(t) approaches the firing threshold VT(t), the firing probability increases exponentially, where the exponential increase is modulated by 1/ΔV. For ΔV → 0, we recover the deterministic LIF, while for larger ΔV the emission becomes increasingly random.

In the GLIF-ξ, the otherwise constant threshold VT* is modulated by the neuron’s own past activity according to

V_T(t) = V_T^* + \sum_{\hat{t}_j < t} \xi(t - \hat{t}_j). (70)

Thus, depending on their spike times t^j, emitted action potentials increase or decrease the threshold additively and independently according to an adaptation filter ξ(t). In the experiments conducted in [43], the following functional form for the adaptation filter was extracted:

\xi(s) = \begin{cases} a_\xi, & \text{if } 0 < s \leq T_\xi \\ a_\xi \left(\frac{s}{T_\xi}\right)^{-\beta_\xi}, & \text{if } T_\xi < s < 22\,\mathrm{s}. \end{cases} (71)

The filter is an effective model not only for the measured increase in firing threshold, but also for spike-triggered currents that reduce the membrane potential. When mapped to the effective adaptation filter ξ, it turned out that past spikes lead to a decrease in firing probability that is approximately constant over a period Tξ = 8.3 ms, after which it decays like a power-law with exponent βξ = 0.93, until the contributions are set to zero after 22 s.
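For reference, the filter of Eq (71) can be written as a small function (parameter values from Table 1; writing the power-law branch so that it is continuous at T_ξ is an assumption of this sketch).

```python
import numpy as np

def adaptation_filter(s, a_xi=19.3, T_xi=0.0083, beta_xi=0.93, cutoff=22.0):
    """Effective adaptation filter xi(s) of Eq (71), in mV; s in seconds."""
    s = np.asarray(s, dtype=float)
    decay = a_xi * (np.maximum(s, T_xi) / T_xi) ** (-beta_xi)   # power-law branch for s > T_xi
    xi = np.where(s <= T_xi, a_xi, decay)
    return np.where((s > 0) & (s < cutoff), xi, 0.0)            # zero outside (0, 22 s)
```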

Model variant with 1s past kernel

For demonstration, we also simulated a variant of the above model with a 1s past kernel

\xi_{1s}(s) = \begin{cases} a^{1s}_\xi, & \text{if } 0 < s \leq T_\xi \\ a^{1s}_\xi \left(\frac{s}{T_\xi}\right)^{-\beta_\xi}, & \text{if } T_\xi < s < 1\,\mathrm{s}. \end{cases} (72)

All parameters are identical apart from the strength of the kernel a^{1s}_\xi = 35.2 mV, which was adapted to maintain a firing rate of 4 Hz despite the shorter kernel.

Simulation details

In order to ensure stationarity, we simulated the model neuron exposed to a constant external current Iext = const. over a total duration of Trec = 900 min. Thereby, the current Iext was chosen such that the neuron fired with a realistic average firing rate of 4 Hz. During the simulation, Eq (66) was integrated using simple Runge-Kutta integration with an integration time step of δt = 0.5 ms. At every time step, random spiking was modeled as a binary variable with probability as in Eq (68). After a burn-in time of 100 s, spike times were recorded and used for the estimation of history dependence. The detailed simulation parameters can be found in Table 1.

Table 1. Simulation parameters of the GLIF-ξ model.
Term Description Value Units
λ0 Latency 2.0 ms−1
1/γ Membrane timescale 15.3 ms
V Equilibrium potential -45.9 mV
V0 Reset potential -38.8 mV
VT* Firing threshold baseline -51.9 mV
ΔV Firing threshold sharpness 0.75 mV
αξ Magnitude of the effective adaptation filter ξ 19.3 mV
βξ Scaling exponent of the effective adaptation filter ξ 0.93 -
Tξ Cutoff of the effective adaptation filter ξ 8.3 ms
δt Simulation step 0.5 ms

The parameters were originally extracted from experimental recordings of (n = 14) L5 pyramidal neurons [43].

Computation of the total history dependence

In order to determine the total history dependence in the simulated spiking activity, we computed the conditional entropy H(X|X−∞) from the conditional spiking probability in Eq (68) that was used for the simulation. Note that this is only possible because of the constant input current, otherwise the conditional spiking probability would also capture information about the external input.

Since the conditional probability of spiking used in the simulation computes the probability in a simulation step δt = 0.5 ms, we first have to transform this to a probability of spiking in the analysis time step Δt = 5 ms. To do so, we compute the probability of no spike in a time step [t, t + Δt) according to

p_{sim}(x_t = 0|x^{-\infty}_t) = \prod_{j=1}^{\Delta t/\delta t} \left[1 - \tilde{\lambda}\left(t + (j-1)\delta t\right)\delta t\right], (73)

and then compute the probability of at least one spike by p(x_t = 1|x^{−∞}_t) = 1 − p(x_t = 0|x^{−∞}_t). Here, the rate \tilde{\lambda}(t) is computed as λ(t) in Eq (69), but only with respect to past spikes that are emitted at times \hat{t} < t. This is because no spike that occurs within [t, t + Δt) must be considered when computing p_{sim}(x_t = 0|x^{−∞}_t).
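As a small illustration of Eq (73), the per-step rates within one analysis bin can be combined as follows (a sketch, not the simulation code used for the paper).

```python
import numpy as np

def p_spike_in_bin(rates, delta_t_sim=0.0005):
    """Probability of at least one spike in an analysis bin of width
    Delta t = len(rates) * delta_t_sim, given the per-step rates
    lambda_tilde (in 1/s) evaluated within that bin, cf. Eq (73)."""
    p_no_spike = np.prod(1.0 - np.asarray(rates) * delta_t_sim)
    return 1.0 - p_no_spike
```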

For sufficiently long simulations, one can make use of the SLLN to compute the conditional entropy

H_{sim}(X|X^{-\infty}) = -\frac{1}{N}\sum_{n=1}^{N} \log_2 p_{sim}(x_{t_n}|x^{-\infty}_{t_n}), (74)

and thus the total history dependence

R_{tot} = 1 - \frac{H_{sim}(X|X^{-\infty})}{\hat{H}(X)}, (75)

which gives an upper bound to the history dependence for any past embedding.

Computation of history dependence for given past embedding

To compute history dependence for a given past embedding, we exploit the fact that the model neuron can be well approximated by a generalized linear model (GLM) within the parameter regime of our simulation. We can thus fit a GLM to the simulated data for the given past embedding T, d, κ to obtain a good approximation of the corresponding true history dependence R(T, d, κ). Note that this is a specific property of this model and does not hold in general. For example in experiments, we found that the GLM accounted for less history dependence than model-free estimates (Fig 6).

To map the model neuron to a GLM, we plug the membrane and threshold dynamics of Eqs (67) and (70) into the equation for the firing rate Eq (69), i.e.

\lambda(t) = \exp\left(\log\lambda_0 + V_\infty - V_T^* - \sum_{\hat{t}_j < t}\xi(t - \hat{t}_j) + (V_0 - V_\infty)\, e^{-\gamma(t - \hat{t}_0)}\right). (76)

For the parameters used in the simulation, the decay time of the reset term V_0 − V_∞ is 1/γ = 15.3 ms. When compared to the minimum and mean inter-spike intervals of ISI_min = 25 ms and \bar{ISI} = 248 ms, it is apparent that the probability for two spikes to occur within the decay time window is negligibly small. Therefore, one can safely approximate

(V_0 - V_\infty)\, e^{-\gamma(t - \hat{t}_0)} \approx \sum_{\hat{t}_j < t} (V_0 - V_\infty)\, e^{-\gamma(t - \hat{t}_j)}, (77)

i.e. describing the potential reset after a spike as independent of other past spikes, because contributions beyond the last spike (j > 0) are effectively zero. Using the above approximation, one can formulate the rate as in a generalized linear model with

\lambda(t) = \exp\left(\mu + \sum_{j=1}^{d} h_j\, x_{t,j}\right), (78)

where

\mu = \log\lambda_0 + V_\infty - V_T^*, (79)
h_j = -\xi(j\delta t) + (V_0 - V_\infty)\, e^{-\gamma j \delta t}, (80)

and x_{t,j} ∈ {0, 1} indicates whether the neuron spiked in [t − jδt, t − (j − 1)δt). Therefore, the true spiking probability of the model is well described by a GLM.

We use this relation to approximate the history dependence R(T, d, κ) for any past embedding T, d, κ with a GLM with the same past embedding. Since in that case the parameters μ and h are not known, we fitted them to the simulated 900 minute recording via maximum likelihood (see above) and computed the history dependence according to

\hat{R}_{GLM}(T, d, \kappa) = 1 - \frac{\hat{H}_{GLM}(X|X^T_{d,\kappa})}{\hat{H}(X)}. (81)

Computation of history dependence as a function of the past range

To approximate the model’s true history dependence R(T), for each T we computed GLM estimates R^GLM(T,d,κ) (Eq 81) for a varying number of past bins d ∈ [25, 50, 75, 100, 125, 150]. For each d, the scaling κ was chosen such that the size of the first past bin was equal or less than 0.5 ms. To save computation time, and to reduce the effect of overfitting, the GLM parameters where fitted on 300 minutes of the simulation, whereas estimates R^GLM(T,d,κ) were computed on the full 900 minutes of the simulated recording. For each T, we then chose the highest estimate R^GLM(T,d,κ) among the estimates for different d as the best estimate of the true R(T).

Experimental recordings

We analyzed neural spike trains from in vitro recordings of rat cortical cultures and salamander retina, as well as in vivo recordings in rat dorsal hippocampus (layer CA1) and mouse primary visual cortex. For all recordings, we only analyzed sorted units with firing rates between 0.5 Hz and 10 Hz to exclude the extremes of either inactive units or units with very high firing rate.

Rat cortical culture

Neurons were extracted from rat cortex (postnatal day 1) and recorded in vitro on an electrode array 2–3 weeks after plating. We took data from five consecutive sessions (L_Prg035_txt_nounstim.txt, L_Prg036_txt_nounstim.txt, …, L_Prg039_txt_nounstim.txt) with a total duration of about Trec ≈ 203 min. However, we only analyzed the first 90 minutes to make the results comparable to the other neural systems. We analyzed in total n = 48 sorted units that satisfied our requirement on the firing rate. More details on the recording procedure can be found in [67], and details on the data set proper can be found in [49].

Salamander retina

Spikes from larval tiger salamander retinal ganglion cells were recorded in vitro by placing the entire explanted retina on an electrode array [68], while a non-repeated natural movie (leaves moving in the wind) was projected onto the retina. The recording had a total length of about Trec ≈ 82 min, and we analyzed in total n = 111 sorted units that satisfied our requirement on the firing rate. More details on the recording procedure and the data set can be found in [47, 48]. The spike recording was obtained from the Dryad database [47].

Rat dorsal hippocampus (layer CA1)

We evaluated spike trains from a multichannel simultaneous recording made from layer CA1 of the right dorsal hippocampus of a Long-Evans rat during an open field task (data set ec014.277). The data set provided sorted spikes from 8 shanks with 64 channels. The recording had a total length of about Trec ≈ 90 min. We analyzed in total n = 28 sorted units that were indicated as single units and satisfied our requirement on the firing rate. More details on the experimental procedure and the data set can be found in [45, 46]. The spike recording was obtained from the NSF-funded CRCNS data sharing website.

Mouse primary visual cortex

Neurons were recorded in vivo during spontaneous behavior, while face expressions were monitored. Recordings were obtained with 8 simultaneously implanted Neuropixels probes, and sorted units were assigned to brain areas using the locations of the electrode contacts provided in [50] and the Allen Mouse Brain Common Coordinate Framework [69]. We analyzed in total n = 142 sorted units from the mouse "Waksman" that belonged to primary visual cortex (irrespective of their layer) and satisfied our requirement on the firing rate. In addition, we only selected units that were recorded for more than Trec ≈ 40 min (difference between the last and first recorded spike time). Details on the recording procedure and the data set can be found in [58] and [50].

Parameters used for embedding optimization

The embedding dimension or number of bins was varied in a range d ∈ [1, dmax], where dmax was either dmax = 20, dmax = 5 (max five bins) or dmax = 1 (one bin). During embedding optimization, we explored Nκ = 10 linearly spaced values of the exponential scaling κ within a range [0, κmax(d)]. The maximum κmax(d) was chosen for each number of bins d ∈ [1, dmax] such that the bin size of the first past bin equaled a minimum bin size, i.e. τ1 = τ1,min, which we chose equal to the time step τ1,min = Δt = 5 ms (a minimal sketch of how κmax(d) can be computed is given after Table 2). To save computation time, we did not consider any embeddings with κ > 0 if the past range T and number of bins d were such that the uniform embedding (κ = 0) already had a first bin size τ1 ≤ Δt. Similarly, for given T and each d, we skipped values of κ during embedding optimization if the difference Δκ to the previous value of κ was less than Δκmin = 0.01. In Table 2 we summarize the relevant parameters that were used for embedding optimization.

Table 2. Parameters used for embedding optimization.

Symbol | Value | Settings variable name | Description
Δt | 0.005 | embedding_step_size | Time step (in seconds) for the discretization of neural spiking activity.
d | 1, 2, …, dmax | embedding_number_of_bins_set | Set of embedding dimensions.
Nκ | 10 | number_of_scalings | Number of linearly spaced values of the exponential scaling κ.
τ1,min | 0.005 | min_first_bin_size | Minimum bin size (in seconds) of the first past bin.
Δκmin | 0.01 | min_step_for_scaling | Minimum required difference between two successive values of κ.
p | 0.05 | bbc_tolerance | Tolerance for the acceptance of estimates for BBC.
- | False | cross_validated_optimization | Whether cross-validation is used for embedding optimization.
- | 250 | number_of_bootstraps_R_max | Number of bootstrap samples used to estimate the standard deviation of R̂max.
l | 1/(rΔt) | block_length_l | Block length used for blocks-of-blocks bootstrapping.
- | all | estimation_method | Estimators for which embeddings are optimized (BBC, Shuffling).

To facilitate reproduction, we added the settings variable names of the parameters as they are used in the toolbox [37].
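For illustration, the maximum scaling κmax(d) referenced above can be obtained by root-finding. The sketch below assumes past-bin widths of the form τj = τ1 · 10^((j−1)κ); if the exact definition of the exponential scaling in the Methods differs, only the helper past_range needs to be adapted. The names are placeholders, not the toolbox API.

```python
import numpy as np

def past_range(kappa, d, tau_1):
    """Past range T covered by d bins with widths tau_j = tau_1 * 10**((j-1)*kappa)."""
    return tau_1 * np.sum(10.0 ** (np.arange(d) * kappa))

def kappa_max(T, d, tau_1_min=0.005):
    """Scaling for which the first past bin equals tau_1_min (0 if T/d <= tau_1_min)."""
    if T / d <= tau_1_min:
        return 0.0
    lo, hi = 0.0, 10.0              # past_range grows monotonically with kappa
    for _ in range(100):            # simple bisection
        mid = 0.5 * (lo + hi)
        if past_range(mid, d, tau_1_min) < T:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# N_kappa = 10 linearly spaced candidate scalings, e.g. for T = 1 s and d = 5
kappas = np.linspace(0.0, kappa_max(1.0, 5), 10)
```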

Details to Fig 3

For Fig 3B, the process was considered for l = 1 and a reactivation probability of m = 0.8. For l = 1, all probabilities can easily be calculated, with marginal probability to be active p(xt = 1) = h/(1 − m + mh), and conditional probabilities p(xt = 1|xt−1 = 1) = h + (1 − h)m and p(xt = 1|xt−1 = 0) = h. From these probabilities, the total mutual information Itot and total history dependence Rtot could be directly computed. We then plotted these quantities as a function of h, where values of h were chosen to vary the firing rate between 0.5 and 10 Hz, with a bin size of Δt = 5 ms. For Fig 3C, the binary autoregressive process was simulated for n = 10^7 time steps with m = 0.8 (l = 1), whereas for l = 5, m was adapted to yield approximately the same Rtot as for l = 1. The input activation probability h was chosen to lead to a fixed probability p(x = 1) ≈ 0.025, corresponding to 5 Hz firing rate with Δt = 5 ms. Autocorrelation C(T) was computed using the MR.estimator toolbox [52], and ΔR(T) and L(T) were estimated using plugin estimation. For Fig 3D, the same procedures were applied as in Fig 3C, but now m was varied between 0.5 and 0.95, and h was adapted for each m to hold the firing rate fixed at 5 Hz. For Fig 3E, the same procedures were applied as in Fig 3C, but now l was varied between 1 and 10, and h and m were adapted for each l to hold the firing rate fixed at 5 Hz and Rtot fixed at the value for l = 1 and m = 0.8.
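For reference, the analytic quantities for the l = 1 case follow directly from the probabilities stated above; a minimal sketch (function names are illustrative only):

```python
import numpy as np

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) variable."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def ar_process_I_and_R(h, m):
    """Analytic I_tot and R_tot of the l = 1 binary autoregressive process.

    For l = 1 the process is first-order Markov, so the previous bin
    carries all past information and I(X_t; past) = I(X_t; X_{t-1}).
    """
    p1 = h / (1 - m + m * h)                     # p(x_t = 1)
    H_x = binary_entropy(p1)
    H_cond = (p1 * binary_entropy(h + (1 - h) * m)
              + (1 - p1) * binary_entropy(h))    # H(X_t | X_{t-1})
    I_tot = H_x - H_cond
    return I_tot, I_tot / H_x                    # (I_tot, R_tot)
```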

Details to Fig 4A and 4B

The branching process was simulated using the MR.estimator toolbox, with a time step of Δt = 4 ms, a population rate of 500 Hz and a subsampling probability of 0.01. Thus, the subsampled spike train had a firing rate of ≈ 5 Hz. The branching parameter was set to m = 0.98 with analytic autocorrelation time τC(m) = 198 ms. For a long simulation, the autocorrelation C(T) was computed using the MR.estimator toolbox, L(T) using plugin estimation, and R(T) using the embedding-optimized Shuffling estimator with dmax = 20. The generalized timescales τR and τL were computed with T0 = 10 ms.
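The quoted analytic autocorrelation time follows from the exponential decay of the branching-process autocorrelation, C(kΔt) = m^k = exp(−kΔt/τC). A one-line check, assuming this standard relation:

```python
import numpy as np

def branching_autocorrelation_time(m, dt):
    """Analytic autocorrelation time tau_C = -dt / ln(m) of a branching process."""
    return -dt / np.log(m)

branching_autocorrelation_time(0.98, 4.0)   # ~198 ms, as quoted above
```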

Details to Fig 4C and 4D

The Izhikevich model was simulated with the PyNN toolbox [70], with parameters set to the chattering mode (a = 0.02, b = 0.2, c = −50, d = 2), simulation time bin dt = 0.01 ms, and noisy input with mean 0.011 and standard deviation 0.001. For the analysis, a time step of Δt = 1 ms was chosen. Apart from that, C(T) and L(T) were computed as for Fig 4B. Here, R(T) was computed with BBC and dmax = 20, which revealed higher Rtot than Shuffling. To compute τR, we set T0 = 0.

Details to Fig 4E and 4F

The GLIF model was simulated as described in Benchmark neuron model (model with 22s past kernel). The analysis time step was Δt = 5 ms. Apart from that, C(T) and L(T) were computed as for Fig 4B. History dependence R(T) was estimated using a GLM as described in Benchmark neuron model. To compute τR, we set T0 = 10 ms.

Details to Fig 5A and 5B

In Fig 5A and 5B, we applied the ML, NSB, BBC and Shuffling estimators of R(τ, d) to a simulated recording of 90 minutes. Embedding parameters were T = dτ and κ = 0, with τ = 20 ms and d ∈ [1, 60]. Since the goal was to show the properties of the estimators, confidence intervals were estimated from 50 repeated 90 minute simulations instead of bootstrap samples from the same recording. Each simulation had a burn-in period of 100 seconds. To estimate the true R(τ, d), a GLM was fitted on a 300 minute recording and evaluated on the full 900 minute recording for the estimation of R.

Details to Fig 5C

In Fig 5C, history dependence R(T) was estimated on a 90 minute recording for 57 different values of T in a range T ∈ [10 ms, 3 s]. Embedding-optimized estimates were computed with up to dmax = 25 past bins, and 95% confidence intervals were computed using the standard deviation over n = 100 bootstrap samples (see Bootstrap confidence intervals). To estimate the true R(T, d*, κ*) for the optimized embedding parameters d*, κ* with either BBC or Shuffling, a GLM was fitted for the same embedding parameters on a 300 minute recording and evaluated on a 900 minute recording for the estimation of R.

Details to Fig 6

For Fig 6, history dependence R(T) was estimated for 61 different values of T in a range T ∈ [10 ms, 5 s]. For each recording, we only analyzed the first 90 minutes to have a comparable recording length. For embedding optimization, we used dmax = 20 as a default for BBC and Shuffling, and compared the estimates with the Shuffling estimator optimized for dmax = 5 (max five bins) and dmax = 1 (one bin). For the GLM, we only estimated R(TD) for the temporal depth TD that was estimated with BBC. To optimize the estimate, we computed GLM estimates of R(TD) with the optimal embedding found by BBC, and for varying embedding dimension d ∈ [1, 2, 3, …, 20, 25, 30, 35, 40, 45, 50], where for each d we chose κ such that τ1 = Δt. We then chose the embedding that minimized the BIC, and took the corresponding estimate R̂(TD) as a best estimate for Rtot. For Fig 6A, we plotted only spike trains of channels that were identified as single units. For Fig 6B, 95% confidence intervals were computed using the standard deviation over n = 100 bootstrap samples. For Fig 6C, embedding-optimized estimates with uniform embedding (κ = 0) were computed with dmax = 20 (BBC and Shuffling) or dmax = 5 (Shuffling). Medians were computed over the n = 28 sorted units in CA1.

Details to Figs 7 and 8

For Figs 7 and 8, history dependence R(T) was estimated for 61 different values of T in a range T ∈ [10 ms, 5 s] using the Shuffling estimator with dmax = 5. The autocorrelation coefficients C(T) were computed with the MR.Estimator toolbox [52], and the autocorrelation time τC was obtained using the exponential_offset fitting function. For each recording, we only analyzed the first 40 minutes to have a comparable recording length. For Fig 7, medians of τR, τC and Rtot were computed over all sorted units that were analyzed, and 95% confidence intervals on the medians were obtained by bootstrapping with n = 10000 resamples of the median. For Fig 8, 95% confidence intervals were computed using the standard deviation over n = 100 bootstrap samples.
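As an illustration of the bootstrap on medians described above, a minimal sketch (not the exact analysis code; names are placeholders) could be:

```python
import numpy as np

def median_bootstrap_ci(values, n_resamples=10000, seed=0):
    """Median across sorted units with a 95% percentile bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    medians = np.array([np.median(rng.choice(values, size=values.size))
                        for _ in range(n_resamples)])
    lo, hi = np.percentile(medians, [2.5, 97.5])
    return np.median(values), lo, hi
```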

Practical guidelines: How to estimate history dependence from neural spike recordings

Estimating history dependence (or any complex statistical dependency) for neural data is notoriously difficult. In the following, we address the main requirements for a practical and meaningful analysis of history dependence, and provide guidelines on how to fulfill these requirements using embedding optimization. A toolbox for Python3 is available online [37], together with default parameters that worked best with respect to the following requirements. It is important that practitioners make sure that their data fulfill the data requirements (points 4 and 5).

1) The embedding of past spiking activity should be individually optimized to account for very different spiking statistics

It is crucial to optimize the embedding for each neuron individually, because history dependence can strongly differ for neurons from different areas or neural systems (Fig 7), or even among neurons within a single area (see examples in Fig 8). Individual optimization enables a meaningful comparison of the information timescale and history dependence R between neurons.

2) The estimation has to capture any non-linear or higher-order statistical dependencies

Embedding optimization with either the BBC or the Shuffling estimator is based on non-parametric estimation, in which the joint probabilities of current and past spiking are directly estimated from data. Thereby, it can account for any higher-order or non-linear dependency among all bins. In contrast, the classical generalized linear model (GLM) that is commonly used to model statistical dependencies in neural spiking activity [20, 21] does not account for higher-order dependencies. We found that the GLM consistently recovered less total history dependence Rtot (Fig 6D). Hence, to capture single-neuron history dependence, higher-order and non-linear dependencies are important, and thus a non-parametric approach is advantageous.

3) Estimation has to be computationally feasible even for a high number of recorded neurons

Strikingly, while higher-order and non-linear dependencies are important, the estimation of history dependence does not require high temporal resolution. Optimizing up to dmax = 5 past bins with variable exponential scaling κ could account for most of the total history dependence that was estimated with up to dmax = 20 bins (Fig 6D). With this reduced setup, embedding optimization is feasible within reasonable computation time. Computing embedding-optimized estimates of the history dependence R(T) for 61 different values of T (for 40 minute recordings, the approach used for Figs 7 and 8) took around 10 minutes per neuron for the Shuffling estimator, and about 8.5 minutes for BBC, on a single computing node. Therefore, we recommend using dmax = 5 past bins when computation time is a constraint. Ideally, however, one should check for a few recordings whether higher choices of dmax lead to different results, in order to validate the choice of dmax = 5 for the given data set.

4) Estimates have to be reliable lower bounds, otherwise one cannot interpret the results

It is required that embedding-optimized estimates do not systematically overestimate history dependence for any given embedding. Otherwise, one cannot guarantee that on average estimates are lower bounds to the total history dependence, and that an increase in history dependence for higher past ranges is not simply caused by overestimation. This guarantee is an important aspect for the interpretation of the results.

For BBC, we found that embedding-optimized estimates are unbiased if the variance of estimators is sufficiently small (S1 Fig). The variance was sufficiently small for recordings of 90 minutes duration. When the variance was too high (short recordings with 3–45 minutes recording length), maximizing estimates for different embedding parameters introduced very mild overestimation due to overfitting (1–3%) (S1 Fig). The overfitting can, however, be avoided by cross-validation, i.e. optimizing the embedding on one half of the recording and computing estimates on the other half. Using cross-validation, we found that embedding-optimized BBC estimates were unbiased even for recordings as short as 3 minutes (S1 Fig).
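Schematically, one round of cross-validation amounts to the following. The two callables are placeholders standing in for the optimization and estimation steps of the toolbox, not its actual API:

```python
def cross_validated_R(spike_times, optimize_embedding, estimate_R):
    """Optimize the embedding on the first half of the recording and
    estimate history dependence on the second half (sketch only).
    """
    t_mid = 0.5 * (spike_times[0] + spike_times[-1])
    first_half = [t for t in spike_times if t < t_mid]
    second_half = [t for t in spike_times if t >= t_mid]
    best_embedding = optimize_embedding(first_half)
    return estimate_R(second_half, best_embedding)
```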

For Shuffling, we also observed overfitting, but the overestimation was small compared to the inherent systematic underestimation of Shuffling estimates. Therefore, we observed no systematic overestimation by embedding-optimized Shuffling estimates on the model neuron, even for shorter recordings (3 minutes and more). Thus, for the Shuffling estimator, we advise applying the estimator without cross-validation as long as recordings are sufficiently long (10 minutes and more, see next point).

5) Spike recordings must be sufficiently long (at least 10 minutes), and of similar length, in order to allow for a meaningful comparison of total history dependence and information timescale across experiments

The recording length affects estimates of the total history dependence Rtot, and especially of the information timescale τR. This is because more data allow more complex embeddings, hence higher history dependence can be estimated. Moreover, complex embeddings are particularly relevant for long past ranges T. Therefore, if recordings are shorter, smaller R(T) will be estimated for long past ranges T, leading to smaller estimates of τR. We found that for shorter recordings, estimates of Rtot were roughly the same as for 90 minutes, but estimates of τR were considerably smaller (S2 and S3 Figs).

To enable a meaningful comparison of the information timescale between neurons, one thus has to ensure that recordings are sufficiently long (in our experience at least 10 minutes), otherwise differences in τR may not be well resolved. Below 10 minutes, we found that estimates of τR could be less than half of the value that was estimated for 90 minutes, and also estimates of Rtot showed a notable decrease. In addition, all recordings should have comparable length to prevent that differences in history dependence or timescale are due to different recording lengths.

Supporting information

S1 Fig. Embedding optimization leads to mild overfitting for short recordings, which can be avoided by cross-validation.

Shown is the relative bias for two versions of the GLIF model with spike adaptation, one with a 1 s and the other with a 22 s past kernel. The relative bias refers to the relative difference between embedding-optimized estimates R̂(T, d*, κ*) and the model's true history dependence R(T, d*, κ*) for the same optimized embedding parameters d*, κ*. The relative bias for R̂tot was computed by first averaging the relative difference (R̂(T, d*, κ*) − R(T, d*, κ*))/R(T, d*, κ*) for T ∈ [TD, Tmax], and second averaging again over 30 different simulations for Trec between 1 and 20 minutes, and 10 different simulations for 45 and 90 minutes. Embedding parameters were optimized for each simulation, respectively, using parameters as in Table 2 with dmax = 25. (Left) For BBC, the relative bias for R̂tot is zero only if recordings are sufficiently long (> 20 minutes for the 1 s kernel, and ≈ 90 minutes for the 22 s kernel). When recordings are shorter, the relative bias increases, and thus estimates mildly overestimate the model's true history dependence for the optimized embedding parameters. For Shuffling, estimates provide lower bounds to the model's true history dependence, hence the relative bias remains negative even in the presence of overfitting. (Right) When one round of cross-validation is applied, i.e. embedding parameters are optimized on the first, and estimates are computed on the second half of the data, the bias is approximately zero for BBC even for short recordings, or more negative for the Shuffling estimator. Therefore, we conclude that the origin of overfitting is the selection of embedding parameters on the same data that are used for the estimation of R. Errorbars show 95% bootstrap confidence intervals on the mean over n = 10 (45 or 90 min) or n = 30 (≤ 20 min) different simulations.

(TIF)

S2 Fig. For the simulated neuron model, recording length has little effect on the estimated total history dependence, but large impact on the estimated information timescale.

(Left) Mean estimated total history dependence R̂tot for different recording lengths, relative to the true total history dependence Rtot of the model (GLIF with spike adaptation with a 1 s or 22 s past kernel). As the recording length decreases, so does R̂tot. However, with only 3 minutes, one still infers about ≈ 95% of the true Rtot. (Right) In contrast, the estimated information timescale τ̂R decreases strongly with decreasing recording length. With 3 minutes and less, only ≈ 75% of the true τR is estimated on average. Note that for the simpler 1 s model (top), an accurate estimation of the true τR is possible for 90 minute recordings, whereas for the 22 s model (bottom), the estimated τ̂R remains below the true value. Shown are mean values for 30 different simulations for Trec between 1 and 20 minutes, and 10 different simulations for 45 and 90 minutes, as well as 95% confidence intervals on the mean based on bootstrapping.

(TIF)

S3 Fig. For experimental data, too, recording length has little effect on estimated total history dependence, but larger impact on the estimated information timescale.

(Left) Total history dependence Rtot for different recording lengths, relative to the total history dependence estimated for a 90 minute recording. As long as recordings are 10 minutes or longer, one still estimates about 95% or more of the Rtot estimated for 90 minutes, for all three recordings. For less than 10 minutes, the estimated total history dependence decreases down to 90% (CA1), or increases again due to overfitting (retina). (Right) Similar to the GLIF model, the estimated information timescale τR decreases more strongly with decreasing recording length. With 10 minutes and more, one estimates around 75% or more of the τR that is estimated on a 90 minute recording. Note that for the experimental data, the estimated timescale of the BBC estimator depends more strongly on the recording time, whereas the Shuffling estimator is more robust, especially for dmax = 5. Shown is the median with 95% bootstrap confidence intervals over n = 10 randomly chosen sorted units for each neural system. Before taking the median over sorted units, for each unit we averaged estimates over 10 excerpts of the full recording, each with 3 or 5 minutes duration, and over 8, 4 and 2 excerpts with 10, 20 and 45 minutes duration, respectively.

(TIF)

S4 Fig. Example estimation results for the generalized leaky integrate-and-fire model (GLIF) with 1s past kernel.

For each recording length, we show the embedding-optimized estimates of history dependence R(T) with and without cross-validation, for BBC (red) and Shuffling (blue) with dmax = 25, as well as the ground truth for the same embeddings that were found during optimization (dashed lines). Dashed lines indicate the estimated information timescale τ^R and total history dependence R^tot. Shaded areas indicate ± two standard deviations obtained by bootstrapping.

(TIF)

S5 Fig. Example estimation results for the generalized leaky integrate-and-fire model (GLIF) with 22s past kernel.

For each recording length, we show the embedding-optimized estimates of history dependence R(T) with and without cross-validation, for BBC (red) and Shuffling (blue) with dmax = 25, as well as the ground truth for the same embeddings that were found during optimization (dashed lines). Dashed lines indicate the estimated information timescale τ^R and total history dependence R^tot. Shaded areas indicate ± two standard deviations obtained by bootstrapping.

(TIF)

S6 Fig. Estimation results for all sorted units in rat dorsal hippocampus (layer CA1).

For each unit, we show the embedding-optimized estimates of history dependence R(T) for BBC with dmax = 20 (red), as well as Shuffling with dmax = 20 (blue), dmax = 5 (green) and dmax = 1 (yellow). Dashed lines indicate estimates of the information timescale τR and total history dependence Rtot. Also shown is the embedding-optimized GLM estimate (violet square) with a past range equal to the temporal depth that was found with the BBC estimator.

(TIF)

S7 Fig. Estimation results for all sorted units in rat cortical culture.

For each unit, we show the embedding-optimized estimates of history dependence R(T) for BBC with dmax = 20 (red), as well as Shuffling with dmax = 20 (blue), dmax = 5 (green) and dmax = 1 (yellow). Dashed lines indicate estimates of the information timescale τR and total history dependence Rtot. Also shown is the embedding-optimized GLM estimate (violet square) with a past range equal to the temporal depth that was found with the BBC estimator.

(TIF)

S8 Fig. Estimation results for all sorted units in salamander retina.

For each unit, we show the embedding-optimized estimates of history dependence R(T) for BBC with dmax = 20 (red), as well as Shuffling with dmax = 20 (blue), dmax = 5 (green) and dmax = 1 (yellow). Dashed lines indicate estimates of the information timescale τR and total history dependence Rtot. Also shown is the embedding-optimized GLM estimate (violet square) with a past range equal to the temporal depth that was found with the BBC estimator.

(TIF)

S9 Fig. Estimation results for all sorted units in mouse primary visual cortex.

For each unit, we show the embedding-optimized Shuffling estimates of history dependence R(T) for dmax = 5. Dashed lines indicate estimates of the information timescale τR and total history dependence Rtot.

(TIF)

S10 Fig. Bootstrapping yields accurate estimates of standard deviation and confidence intervals.

(Left) Shown is the standard deviation on BBC estimates (blue) obtained from 250 “blocks of blocks” bootstrap samples on a single recording (GLIF model with 22s past kernel). It agrees well with the true standard deviation (black), which we estimated from 100 repeated simulations of the same recording length and embedding. As expected, the standard deviation decreases substantially for longer recordings. For each recording length, estimates were computed for typical optimal embedding parameters d*, κ* and T = TD that were found by embedding optimization. Errorbars show mean and standard deviation of the estimated σ(R) over the repeated simulations. (Right) The 95% confidence intervals based on two standard deviations σ(R) have approximately the claimed confidence level (CI accuracy). Standard deviation was estimated from 250 “blocks of blocks” bootstrap samples. For each recording length, we computed estimates R^ and the bootstrap confidence intervals on the 100 simulations. We then computed the confidence level (CI accuracy) by counting how often the true value of R was contained in the estimated confidence interval (green line). Estimates and the true value of R were computed for the same typical embedding parameters d*, κ* and T = TD as before.

(TIF)

S11 Fig. Total history dependence and information timescale for increasing branching parameter m.

Similar to the binary autoregressive process, increasing the branching parameter m increases the total history dependence Rtot, whereas the information timescale τR stays constant, or even decreases for high m. For each m, the input activation probability h was adapted to hold the firing rate fixed at 5 Hz.

(TIF)

S12 Fig. The estimated information timescale varies between estimators.

For each sorted unit (grey dots), estimates of the information timescale τR are plotted relative to the corresponding BBC estimate for dmax = 20. The BBC estimator tends to estimate higher timescales than the Shuffling estimator on recordings of CA1 and cortical culture, whereas for retina the medians of different estimators are more similar. Although estimates of the timescale are highly variable between estimators, Shuffling with only dmax = 5 past bins still estimates timescales of at least 80% of the timescales that are estimated with BBC. Errorbars indicate median over sorted units and 95% bootstrap confidence intervals on the median.

(TIF)

S13 Fig. Total history dependence and information timescale show no clear dependence on the firing rate, whereas the total mutual information tends to increase with the rate.

Shown are the same estimates of the total history dependence Rtot and information timescale τR as in Fig 7 (Shuffling estimator with dmax = 5) versus the firing rates of sorted units (dots). The total mutual information Itot is equal to Rtot times the spiking entropy H(spiking) of the respective unit. While Itot tends to increase with firing rate, no clear relation is visible for Rtot or τR. Errorbars indicate median over sorted units and 95% bootstrap confidence intervals on the median.

(TIF)

S14 Fig. Relationship between total history dependence or information timescale and standard statistical measures of neural spike trains.

Estimates of the total history dependence Rtot tend to decrease with the median interspike interval (ISI), and to increase with the coefficient of variation CV. This result is expected for a measure of history dependence, because a shorter median ISI indicates that spikes tend to occur together, and a higher CV indicates a deviation from independent Poisson spiking. In contrast, the information timescale τR tends to increase with the autocorrelation time, as expected, with no clear relation to the median ISI or the coefficient of variation CV. However, the correlation between the measures depends on the neural system. For example in retina (n = 111), Rtot is significantly anti-correlated with the median ISI (Pearson correlation coefficient: r = −0.69, p < 10−5) and strongly correlated with the coefficient of variation CV (r = 0.90, p < 10−5), and τR is significantly correlated with the autocorrelation time τC (r = 0.75, p < 10−5). In contrast, for mouse primary visual cortex (n = 142), we found no significant correlations between any of these measures. Results are shown for the Shuffling estimator with dmax = 5, and T0 = 10 ms. Errorbars indicate median over sorted units and 95% bootstrap confidence intervals on the median.

(TIF)

S15 Fig. Excluding short-term contributions helps to differentiate the timescales for different neural systems.

By only considering gains ΔR(T) for past ranges T > T0 when computing the information timescale τR, short-term effects that are related to the refractory period and different firing modes are excluded. The higher T0, the larger the difference in the median τR between systems (especially between salamander retina and mouse primary visual cortex). This is because both timescales τR and τC increase with T0 for CA1 and primary visual cortex, whereas they decrease for retina. The same holds for the autocorrelation time τC, where only time lags T > T0 were considered when fitting an exponential decay to the autocorrelograms. Note that if the decay is perfectly exponential, then T0 does not affect the results. Estimates of Rtot and τR are shown for the Shuffling estimator with dmax = 5. Errorbars indicate median over sorted units and 95% bootstrap confidence intervals on the median.

(TIF)

S16 Fig. Total history dependence decreases for small time bins Δt.

The choice of the time bin Δt of the spiking activity has little effect on the information timescale τR, whereas the total history dependence Rtot decreases for small time bins Δt < 5 ms. This is consistent across experiments. The smaller the time bin, the higher the risk that noise in the spike emission reduces the overall predictability or history dependence in the spiking, whereas an overly large time bin holds the risk of destroying coding-relevant timing information in the spike train. Thus, we chose the smallest time bin Δt = 5 ms that does not yet show a substantial decrease in Rtot. We do not plot results for larger Δt, because for larger Δt we observed many instances of multiple spikes in the same time bin. Results are shown for the Shuffling estimator with dmax = 5, and T0 = 10 ms. Errorbars indicate median over sorted units and 95% bootstrap confidence intervals on the median.

(TIF)

Acknowledgments

We thank the Priesemann group, especially Matthias Loidolt, Fabian Mikulasch, Andreas Schneider, David A. Ehrlich, Jonas Dehning, F. Paul Spitzner, as well as Abdullah Makkeh for valuable comments and for reviewing the manuscript.

Data Availability

The data underlying the results presented in the study are available online from the CRCNS (crcns.org) data repository: http://crcns.org/data-sets/hc/hc-2; from the Mendeley Data repository: https://data.mendeley.com/datasets/4ztc7yxngf/1; from the Dryad database: https://datadryad.org/stash/dataset/doi:10.5061/dryad.1f1rc; and from the Janelia figshare repository: https://janelia.figshare.com/articles/dataset/Eight-probe_Neuropixels_recordings_during_spontaneous_behaviors/7739750. All code for Python3 that was used to analyze the data and to generate the figures is available online at https://github.com/Priesemann-Group/historydependence.

Funding Statement

All authors received support from the Max-Planck-Society, https://www.mpg.de/de. L.R. was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) as part of the SPP 2205 - project number 430157073. L.R. acknowledges funding by SMARTSTART, the joint training program in computational neuroscience by the VolkswagenStiftung and the Bernstein Network, https://www.smartstart-compneuro.de/. M. W. is employed at the Campus Institute for Dynamics of Biological Networks funded by the VolkswagenStiftung, https://www.volkswagenstiftung.de. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Barlow HB, et al. Possible principles underlying the transformation of sensory messages. Sensory communication. 1961;1(01).
  • 2. Rieke F. Spikes: exploring the neural code. MIT press; 1999.
  • 3. Pozzorini C, Naud R, Mensi S, Gerstner W. Temporal Whitening by Power-Law Adaptation in Neocortical Neurons. Nature Neuroscience. 2013;16(7):942. doi: 10.1038/nn.3431
  • 4. Atick JJ. Could Information Theory Provide an Ecological Theory of Sensory Processing? Network: Computation in Neural Systems. 1992;3(2):213–251. doi: 10.1088/0954-898X_3_2_009
  • 5. Lizier JT. Computation in Complex Systems. In: Lizier JT, editor. The Local Information Dynamics of Distributed Computation in Complex Systems. Springer Theses. Berlin, Heidelberg: Springer Berlin Heidelberg; 2013. p. 13–52. Available from: 10.1007/978-3-642-32952-4_2.
  • 6. Wibral M, Lizier JT, Vögler S, Priesemann V, Galuske R. Local Active Information Storage as a Tool to Understand Distributed Neural Information Processing. Frontiers in Neuroinformatics. 2014;8. doi: 10.3389/fninf.2014.00001
  • 7. Wibral M, Lizier JT, Priesemann V. Bits from Brains for Biologically Inspired Computing. Frontiers in Robotics and AI. 2015;2. doi: 10.3389/frobt.2015.00005
  • 8. Barlow H. Redundancy reduction revisited. Network: computation in neural systems. 2001;12(3):241–253. doi: 10.1080/net.12.3.241.253
  • 9. Murray JD, Bernacchia A, Freedman DJ, Romo R, Wallis JD, Cai X, et al. A Hierarchy of Intrinsic Timescales across Primate Cortex. Nature Neuroscience. 2014;17(12):1661–1663. doi: 10.1038/nn.3862
  • 10. Cavanagh SE, Towers JP, Wallis JD, Hunt LT, Kennerley SW. Reconciling Persistent and Dynamic Hypotheses of Working Memory Coding in Prefrontal Cortex. Nature Communications. 2018;9(1):3498. doi: 10.1038/s41467-018-05873-3
  • 11. Wasmuht DF, Spaak E, Buschman TJ, Miller EK, Stokes MG. Intrinsic Neuronal Dynamics Predict Distinct Functional Roles during Working Memory. Nature Communications. 2018;9(1):3499. doi: 10.1038/s41467-018-05961-4
  • 12. Hasson U, Yang E, Vallines I, Heeger DJ, Rubin N. A Hierarchy of Temporal Receptive Windows in Human Cortex. Journal of Neuroscience. 2008;28(10):2539–2550. doi: 10.1523/JNEUROSCI.5487-07.2008
  • 13. Wilting J, Dehning J, Pinheiro Neto J, Rudelt L, Wibral M, Zierenberg J, et al. Operating in a Reverberating Regime Enables Rapid Tuning of Network States to Task Requirements. Frontiers in Systems Neuroscience. 2018;12. doi: 10.3389/fnsys.2018.00055
  • 14. Wilting J, Priesemann V. Inferring Collective Dynamical States from Widely Unobserved Systems. Nature Communications. 2018;9(1):2325. doi: 10.1038/s41467-018-04725-4
  • 15. Wilting J, Priesemann V. Between Perfectly Critical and Fully Irregular: A Reverberating Model Captures and Predicts Cortical Spike Propagation. Cerebral Cortex. 2019;29(6):2759–2770. doi: 10.1093/cercor/bhz049
  • 16. Zeraati R, Engel TA, Levina A. Estimation of autocorrelation timescales with Approximate Bayesian Computations. bioRxiv. 2020.
  • 17. Archer E, Park IM, Pillow JW. Bayesian entropy estimation for binary spike train data using parametric prior knowledge. In: NIPS; 2013. p. 1700–1708.
  • 18. Bialek W, Tishby N. Predictive Information. arXiv:cond-mat/9902341. 1999. Available from: https://arxiv.org/abs/cond-mat/9902341.
  • 19. Bialek W, Nemenman I, Tishby N. Predictability, Complexity, and Learning. Neural Computation. 2001;13(11):2409–2463. doi: 10.1162/089976601753195969
  • 20. Pillow JW, Shlens J, Paninski L, Sher A, Litke AM, Chichilnisky EJ, et al. Spatio-Temporal Correlations and Visual Signalling in a Complete Neuronal Population. Nature. 2008;454(7207):995–999. doi: 10.1038/nature07140
  • 21. Quinn CJ, Coleman TP, Kiyavash N, Hatsopoulos NG. Estimating the Directed Information to Infer Causal Relationships in Ensemble Neural Spike Train Recordings. Journal of Computational Neuroscience. 2011;30(1):17–44. doi: 10.1007/s10827-010-0247-2
  • 22. Strong SP, Koberle R, de Ruyter van Steveninck RR, Bialek W. Entropy and Information in Neural Spike Trains. Physical Review Letters. 1998;80(1):197–200. doi: 10.1103/PhysRevLett.80.197
  • 23. Panzeri S, Treves A, Schultz S, Rolls ET. On Decoding the Responses of a Population of Neurons from Short Time Windows. Neural Computation. 1999;11(7):1553–1577. doi: 10.1162/089976699300016142
  • 24. Brenner N, Strong SP, Koberle R, Bialek W, van Steveninck RRdR. Synergy in a Neural Code. Neural Computation. 2000;12(7):1531–1552. doi: 10.1162/089976600300015259
  • 25. Panzeri S, Schultz SR. A Unified Approach to the Study of Temporal, Correlational, and Rate Coding. Neural Computation. 2001;13(6):1311–1349. doi: 10.1162/08997660152002870
  • 26. Stetter O, Battaglia D, Soriano J, Geisel T. Model-Free Reconstruction of Excitatory Neuronal Connectivity from Calcium Imaging Signals. PLoS computational biology. 2012;8(8):e1002653. doi: 10.1371/journal.pcbi.1002653
  • 27. Panzeri S, Treves A. Analytical Estimates of Limited Sampling Biases in Different Information Measures. Network: Computation in Neural Systems. 1996;7(1):87–107. doi: 10.1080/0954898X.1996.11978656
  • 28. Paninski L. Estimation of Entropy and Mutual Information. Neural Computation. 2003;15(6):1191–1253. doi: 10.1162/089976603321780272
  • 29. Panzeri S, Schultz SR, Treves A, Rolls ET. Correlations and the Encoding of Information in the Nervous System. Proceedings of the Royal Society of London Series B: Biological Sciences. 1999;266(1423):1001–1012. doi: 10.1098/rspb.1999.0736
  • 30. Panzeri S, Senatore R, Montemurro MA, Petersen RS. Correcting for the Sampling Bias Problem in Spike Train Information Measures. Journal of Neurophysiology. 2007;98(3):1064–1072. doi: 10.1152/jn.00559.2007
  • 31. Montemurro MA, Senatore R, Panzeri S. Tight Data-Robust Bounds to Mutual Information Combining Shuffling and Model Selection Techniques. Neural Computation. 2007;19(11):2913–2957. doi: 10.1162/neco.2007.19.11.2913
  • 32. Wolpert DH, Wolf DR. Estimating Functions of Probability Distributions from a Finite Set of Samples. Physical Review E. 1995;52(6):6841–6854. doi: 10.1103/PhysRevE.52.6841
  • 33. Nemenman I, Bialek W, de Ruyter van Steveninck R. Entropy and Information in Neural Spike Trains: Progress on the Sampling Problem. Physical Review E. 2004;69(5):056111. doi: 10.1103/PhysRevE.69.056111
  • 34. Archer E, Park IM, Pillow JW. Bayesian entropy estimation for countable discrete distributions. The Journal of Machine Learning Research. 2014;15(1):2833–2868.
  • 35. Small M. Time Series Embedding and Reconstruction. In: Applied Nonlinear Time Series Analysis: Applications in Physics, Physiology and Finance. vol. 52 of World Scientific Series on Nonlinear Science Series A. WORLD SCIENTIFIC; 2005. p. 1–46. Available from: https://www.worldscientific.com/doi/abs/10.1142/9789812567772_0001.
  • 36. Palmigiano A, Geisel T, Wolf F, Battaglia D. Flexible Information Routing by Transient Synchrony. Nature Neuroscience. 2017;20(7):1014. doi: 10.1038/nn.4569
  • 37. Rudelt L, Marx DG, Wibral M, Priesemann V. History Dependence Estimator; 2020. Zenodo. Available from: https://github.com/Priesemann-Group/hdestimator.
  • 38. Shannon CE. A Mathematical Theory of Communication. The Bell System Technical Journal. 1948;27(3):379–423. doi: 10.1002/j.1538-7305.1948.tb01338.x
  • 39. Goodman J, Weare J. Ensemble Samplers with Affine Invariance. Communications in Applied Mathematics and Computational Science. 2010;5(1):65–80. doi: 10.2140/camcos.2010.5.65
  • 40. Brockwell PJ, Davis RA. Time Series: Theory and Methods. 2nd ed. Springer Series in Statistics. New York: Springer-Verlag; 1991. Available from: https://www.springer.com/de/book/9780387974293.
  • 41. Chapeau-Blondeau F. Autocorrelation versus Entropy-Based Autoinformation for Measuring Dependence in Random Signal. Physica A: Statistical Mechanics and its Applications. 2007;380:1–18. doi: 10.1016/j.physa.2007.02.077
  • 42. Albers DJ, Hripcsak G. Using Time-Delayed Mutual Information to Discover and Interpret Temporal Correlation Structure in Complex Populations. Chaos. 2012;22(1):013111. doi: 10.1063/1.3675621
  • 43. Mensi S, Naud R, Pozzorini C, Avermann M, Petersen CCH, Gerstner W. Parameter Extraction and Classification of Three Cortical Neuron Types Reveals Two Distinct Adaptation Mechanisms. Journal of Neurophysiology. 2012;107(6):1756–1775. doi: 10.1152/jn.00408.2011
  • 44. Izhikevich EM. Simple Model of Spiking Neurons. IEEE Transactions on Neural Networks. 2003;14(6):1569–1572. doi: 10.1109/TNN.2003.820440
  • 45. Mizuseki K, Sirota A, Pastalkova E, Buzsáki G. Multi-Unit Recordings from the Rat Hippocampus Made during Open Field Foraging; 2009. Available from: http://crcns.org/data-sets/hc/hc-2.
  • 46. Mizuseki K, Sirota A, Pastalkova E, Buzsáki G. Theta Oscillations Provide Temporal Windows for Local Circuit Computation in the Entorhinal-Hippocampal Loop. Neuron. 2009;64(2):267–280. doi: 10.1016/j.neuron.2009.08.037
  • 47. Loback AR, Tkačik G, Prentice JS, Ioffe ML, Berry MJ II, Marre O, et al. Data from: Error-Robust Modes of the Retinal Population Code; 2017. Available from: http://datadryad.org/stash/dataset/doi:10.5061/dryad.1f1rc.
  • 48. Prentice JS, Marre O, Ioffe ML, Loback AR, Tkačik G, Berry MJ II. Error-Robust Modes of the Retinal Population Code. PLOS Computational Biology. 2016;12(11):e1005148. doi: 10.1371/journal.pcbi.1005148
  • 49. Marom S. MEA Data. Mendeley Data. 2018;V1.
  • 50. Stringer C, Pachitariu M, Carandini M, Harris K. Eight-Probe Neuropixels Recordings during Spontaneous Behaviors; 2019. Available from: https://janelia.figshare.com/articles/dataset/Eight-probe_Neuropixels_recordings_during_spontaneous_behaviors/7739750.
  • 51. Wibral M, Vicente R, Lindner M. Transfer Entropy in Neuroscience. In: Wibral M, Vicente R, Lizier JT, editors. Directed Information Measures in Neuroscience. Understanding Complex Systems. Berlin, Heidelberg: Springer; 2014. p. 3–36. Available from: 10.1007/978-3-642-54474-3_1.
  • 52. Spitzner FP, Dehning J, Wilting J, Hagemann A, Neto JP, Zierenberg J, et al. MR. Estimator, a Toolbox to Determine Intrinsic Timescales from Subsampled Spiking Activity. arXiv:2007.03367v1 [physics, q-bio]. 2020. Available from: https://arxiv.org/abs/2007.03367v1.
  • 53. Shankar KH, Howard MW. Optimally fuzzy temporal memory. The Journal of Machine Learning Research. 2013;14(1):3785–3812.
  • 54. Dong DW, Atick JJ. Temporal Decorrelation: A Theory of Lagged and Nonlagged Responses in the Lateral Geniculate Nucleus. Network: Computation in Neural Systems. 1995;6(2):159–178. doi: 10.1088/0954-898X_6_2_003
  • 55. Wang XJ, Liu Y, Sanchez-Vives MV, McCormick DA. Adaptation and Temporal Decorrelation by Single Neurons in the Primary Visual Cortex. Journal of Neurophysiology. 2003;89(6):3279–3293. doi: 10.1152/jn.00242.2003
  • 56. Moser EI, Kropff E, Moser MB. Place Cells, Grid Cells, and the Brain’s Spatial Representation System. Annual Review of Neuroscience. 2008;31(1):69–89. doi: 10.1146/annurev.neuro.31.061307.090723
  • 57. Masquelier T, Deco G. Network Bursting Dynamics in Excitatory Cortical Neuron Cultures Results from the Combination of Different Adaptive Mechanism. PLOS ONE. 2013;8(10):e75824. doi: 10.1371/journal.pone.0075824
  • 58. Stringer C, Pachitariu M, Steinmetz N, Reddy CB, Carandini M, Harris KD. Spontaneous Behaviors Drive Multidimensional, Brainwide Activity. Science. 2019;364(6437). doi: 10.1126/science.aav7893
  • 59. Meyn SP, Tweedie RL. Markov Chains and Stochastic Stability. Communications and Control Engineering. London: Springer-Verlag; 1993. Available from: www.springer.com/de/book/9781447132691.
  • 60. Kullback S, Leibler RA. On Information and Sufficiency. The Annals of Mathematical Statistics. 1951;22(1):79–86. doi: 10.1214/aoms/1177729694
  • 61. Kiefer J, Wolfowitz J. Consistency of the Maximum Likelihood Estimator in the Presence of Infinitely Many Incidental Parameters. The Annals of Mathematical Statistics. 1956;27(4):887–906. doi: 10.1214/aoms/1177728066
  • 62. Antos A, Kontoyiannis I. Convergence Properties of Functional Estimates for Discrete Distributions. Random Structures & Algorithms. 2001;19(3-4):163–193. doi: 10.1002/rsa.10019
  • 63. Treves A, Panzeri S. The Upward Bias in Measures of Information Derived from Limited Data Samples. Neural Computation. 1995;7(2):399–407. doi: 10.1162/neco.1995.7.2.399
  • 64. Nemenman I, Shafee F, Bialek W. Entropy and Inference, Revisited. arXiv:physics/0108025v2. 2001. Available from: https://arxiv.org/abs/physics/0108025v2.
  • 65. Schwarz G. Estimating the Dimension of a Model. Annals of Statistics. 1978;6(2):461–464. doi: 10.1214/aos/1176344136
  • 66. Davison AC, Hinkley DV. Bootstrap Methods and Their Application. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge: Cambridge University Press; 1997. Available from: https://www.cambridge.org/core/books/bootstrap-methods-and-their-application/ED2FD043579F27952363566DC09CBD6A.
  • 67. Shahaf G, Marom S. Learning in Networks of Cortical Neurons. Journal of Neuroscience. 2001;21(22):8782–8788. doi: 10.1523/JNEUROSCI.21-22-08782.2001
  • 68. Marre O, Amodei D, Deshmukh N, Sadeghi K, Soo F, Holy TE, et al. Mapping a Complete Neural Population in the Retina. Journal of Neuroscience. 2012;32(43):14859–14873. doi: 10.1523/JNEUROSCI.0723-12.2012
  • 69. Wang Q, Ding SL, Li Y, Royall J, Feng D, Lesnar P, et al. The Allen Mouse Brain Common Coordinate Framework: A 3D Reference Atlas. Cell. 2020;181(4):936–953.e20. doi: 10.1016/j.cell.2020.04.007
  • 70. Davison AP, Brüderle D, Eppler J, Kremkow J, Muller E, Pecevski D, et al. PyNN: A Common Interface for Neuronal Network Simulators. Frontiers in Neuroinformatics. 2009;2. doi: 10.3389/neuro.11.011.2008
PLoS Comput Biol. doi: 10.1371/journal.pcbi.1008927.r001

Decision Letter 0

Daniele Marinazzo

1 Dec 2020

Dear Mr. Rudelt,

Thank you very much for submitting your manuscript "Embedding optimization reveals long-lasting history dependence in neural spiking activity" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers.

While the methods look overall correct, with some needed improvements, what is less clear is the potential for application and the impact. Please work on that (of course not overselling it but critically situating it in the current panorama). A resubmission addressing all the outstanding issues could be offered publication in PLOS Computational Biology if the scope is clearer, or in PLOS One if it is not. In this latter case you could of course decline the offer.

In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Daniele Marinazzo

Deputy Editor

PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: This paper proposes new metrics for measuring history dependence in neural spike trains, and uses a particular coarse-graining in combination with existing entropy/mutual information estimation methods to estimate this metric for a range of neural spike trains. The authors then try to draw conclusions about their estimated metrics for various real neural spike trains.

The methods aspect of this seems relatively sound. I do have a suggestion for the authors, though, in terms of presentation: I'd put the vast majority of the methods in the Methods rather than the Results section. Basically, the discussion of the curse of dimensionality and the Data Processing Inequality in various forms (large number of bins is curse of dimensionality, can lead to overestimation and small number of bins yields lower MI due to Data Processing Inequality) seem to me to be well-worn statistical ground and not worthy of so much of the Results section. I'd also emphasize more that your main contribution to estimation of these information quantities is a particularly clever coarse-graining that assumes the recency hypothesis.

But that's not my main worry. I'm mainly worried that the metric isn't necessarily the right one for the job. On the chopping block is not just your R(T) (which I would not call a redundancy, but rather just a version of the predictive information divided by) and T_D (which I have a few comments on later), but also the autocorrelation function (which you discard, for reasons that make sense) and the predictive information (which you essentially have a version of in your numerator, but see Nemenman et al) and all the information measures in "Anatomy of a Bit" by Ryan James et al. Based on my experience playing with these metrics, I'd say the following:

-- it is likely that T_D will grow with the size of your data set, and so what's really relevant is the rate of growth; that may be a better way to distinguish between different time series;

-- it is likely that R(T) has some weird behavior with the time bin size for the present neural patterning that has not yet been discussed and should be;

-- I still have no idea how or if either R(T) or T_D(data set size) capture anything related to history dependence.

Before I recommend acceptance, I would ask for simulations of an Izhikevich neuron that can adopt different neuron types. The strawmen, in my opinion, should be first the autocorrelation function and then the predictive information. I believe that information measures of time series can reveal the type of neuron or aspects of how it behaves, but I don't see why I should switch from using the predictive information to using R(T) or its relative T_D. What am I getting from R(T) that I'm not getting from predictive information? What is the intuition behind introducing this new measure? What do the authors even mean by "history dependence"? If I am to normalize something like predictive information by single symbol entropy, as the authors do here, what neural spike train do I now correctly classify as having long history dependence that I before believed had little history dependence? As I am missing this intuition from the paper, I cannot recommend acceptance-- yet.

Smaller things:

-- I would not say that this measure of history dependence has anything to do with the efficient coding hypothesis, which is more about how stimulus is transformed by a neuron so that the neuron has maximal entropy, or sometimes (depending on who's using the term) is about how mutual information between stimulus and neuron is close to the entropy of the neural activity;

-- I would add some words on when your embedding method is likely to fail, which is precisely when initial conditions really really matter and the recency hypothesis is inaccurate-- e.g. network of Izhikevich neurons-- and which (notably) some might call long-term history dependence.

If the authors can convince me that their metric R(T) and its relative T_D (which should really be some aspect of how T_D changes with recording length) contain useful information that stumps the predictive information, then I will happily recommend acceptance.

Reviewer #2: This paper is a potentially important contribution to the neuroscientific toolbox. The authors propose an extension of existing information-theoretic approaches that allows for an unbiased estimation of a neuron's history dependence and of its temporal depth. The paper presents a thorough approach to controlling bias and overfitting. Further, the method is applied to several open datasets and an intriguing finding is described. Finally, the code to apply the methods described in the paper is made available with thorough documentation.

I am enthusiastic but have one minor concern and a few related requests for additional analyses described below. In addition, I made a pull request on Github that may help improve the usability of this tool; hopefully, the authors will build on it to include a few tests of the code. This is not a requirement for this review, but it would be great to see code coverage increase to >50%.

The concern is the following. History dependence R depends on the entropy of current spiking conditional on the past, as well as on the entropy of current spiking. The average firing rate of a neuron changes its entropy; presumably, this is the reason that the entropy of current spiking is in the denominator. In theory, the resulting ratio does not depend on the neuron's average firing rate; however, it would be nice to get a demonstration that R_tot or T_D do not vary as a function of the GLIF neuron's average firing rate, median ISI, or CV. More importantly, I'd like to see a scatterplot of these quantities vs R_tot and T_D in the datasets from Fig. 5. If the authors find no correlation there, it may be instructive to look for a different connection to traditional statistics, as described in the first section of the Discussion. Surely, we won't find any perfect replacements for history dependence, but if T_D is loosely related to some function of autocorrelation, it will help ground researchers in more familiar terrain.
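
As a minimal sketch of the comparison asked for here (illustrative only, not the authors' code; spike_times, t_start and t_stop are assumed inputs), the traditional statistics can be computed per unit and then scattered or correlated against the estimated R_tot and T_D:

    # Sketch only: traditional spike-train statistics to compare against R_tot and T_D.
    import numpy as np

    def spike_train_stats(spike_times, t_start, t_stop):
        """Return firing rate (Hz), median ISI (s) and ISI coefficient of variation."""
        isis = np.diff(np.sort(spike_times))
        rate = len(spike_times) / (t_stop - t_start)
        median_isi = np.median(isis)
        cv = np.std(isis) / np.mean(isis)
        return rate, median_isi, cv

    # across units, e.g.: scipy.stats.pearsonr(rates, r_tot_estimates)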

The following lists a few minor suggestions.

In most citations, the name of the journal is missing. Is this by design?

In line 78, what does 'discrete past embedding of spiking activity' mean? Do you refer to a 'reduced representation' of the past, or the discrete nature of spiking data? I am trying to discern whether past embedding with binary data has been described in practical terms before.

In line 152, you may wish to say something like 'while minimizing the risk of overestimation'.

Line 163 mentions errorbars, but none are visible in Fig. 2D. I think a different place in the paper mentions 2xSTD errorbars being too small to be visible, but does that come later?

I am confused regarding the status of the GLM in this paper. Line 337 justly points out its systematic underestimation of history dependence, while line 194 claims that the authors used the GLM as ground truth for R(T, d, kappa). Please clarify.

In Fig 4, why are the bootstrapping errorbars not centered around the median (the bars' height)?

When referring to results from extracellular recordings, it may be best to call the units identified through spike sorting "single units" rather than "neurons", to remind us that spike sorting is somewhat subjective.

In Fig 5, would it be possible to include a scatterplot of history dependence estimated from the GLM?

Please attempt to interpret the results of Fig 6 further. Why is it that single unit 3 has such a distinctive shape? What might this mean for the corresponding neuron's information processing? What follow-up would you suggest for researchers using your tool when they see shapes like these? Would inspecting autocorrelograms help? Include any diagnostic information you find helpful.

Related: What is the interpretation of a peak followed by decay in R(T), as in Fig S7, row 2, middle two panels?

There is a typo in line 1298 and in the caption of S4.

The sentence that starts on line 1326 is too long. Also, it may be good to italicize 'blocks of blocks' here.

Reviewer #3: Embedding optimization reveals long-lasting history dependence in neural spiking activity

• Summary of the paper and novelties

This work investigates how to reliably quantify the dependence of a single neuron's spiking on its own preceding activity, called history dependence. Previous studies used limited representations of past activity (the so-called past embedding) to estimate information theory-based measures. Here it is argued that a careful embedding of past activity is crucial.

A novel embedding-optimization method is proposed here that optimizes temporal binning of past spiking to capture most of the magnitude and the temporal depth of history dependence. The new method is validated against simulated data of a LIF neuron model and empirical data from different databases that account for a large variety of spiking statistics.

• Strengths

The main strengths of the work are:

- It is demonstrated that previous ad hoc embedding strategies are likely to capture much less history dependence, or lead to estimates that severely overestimate the true history dependence. The new method maximizes the estimated history dependence while avoiding overestimation.

- The new method is flexible enough to account for the variety of spiking statistics encountered in experiments.

• Weaknesses and suggestions

A weak point of the work is that for spike trains with long temporal depths (e.g., larger than 3 seconds, as in Fig. 3 C), the temporal depth estimated by the optimization method is much smaller (630 ms). This is a critical point to discuss in terms of possible limitations in estimating the timescale of neural processing at different stages of the brain.

Another drawback of the new optimization methods is that they perform worse on short recordings: the history dependence is overestimated when applying BBC to recordings of 3 minutes (S1 Fig), and the estimated temporal depth is underestimated to half of the real temporal depth (S2 Fig). This aspect might be discussed in the paper, analyzing possible limitations of applying the optimization techniques to experimental data of short length.

Some minor suggestions:

- Line 60: Could you comment on why the time bin of current spiking is chosen to be 5 ms?

- Fig. 1 is included in the Methods summary but is not well described in the text. Either move it to Methods, or further explain it here. In the figure caption, please provide more details, e.g., explain what ML, NSB and BBC are.

- Fig S1 and paragraph between lines 272 and 286: how is each half of the data selected for cross-validation? Are multiple rounds of cross-validation performed using different partitions (in this case different halves) of the data?

- Fig 4C: why is BBC computed with d = 20, and Shuffling with d = 5?

- Fig S4 is not referenced in the text.

- Typo: “errorbars” instead of “error bars” (for example, in line 262).

- The publication year is missing in references.

The Methods are written in an appropriate and informative way.

The paper is well written and concepts are provided in a correct, clear and suitable way.

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, PLOS recommends that you deposit laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions, please see http://journals.plos.org/compbiol/s/submission-guidelines#loc-materials-and-methods

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1008927.r003

Decision Letter 1

Daniele Marinazzo

31 Mar 2021

Dear Mr. Rudelt,

We are pleased to inform you that your manuscript 'Embedding optimization reveals long-lasting history dependence in neural spiking activity' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Daniele Marinazzo

Deputy Editor

PLOS Computational Biology


***********************************************************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: The review is uploaded as an attachment-- a pdf with comments. My general comments are that this manuscript is now much improved. The examples have convinced me that R(T) might be of interest to the neuroscience community, and it is clear that the authors have done their due diligence in estimating it as well as possible. I'm less sold on τR, especially since it just seems to be correlated with τC, but I have no fundamental objections.

Reviewer #2: Thank you, that was all for me.

Reviewer #3: The authors addressed the suggestions satisfactorily and the paper has improved in revision. Nice work!

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Maxym Myroshnychenko

Reviewer #3: Yes: Stefano Panzeri

Attachment

Submitted filename: PCOMPBIOL-D-20-01987_R1_reviewer.pdf

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1008927.r004

Acceptance letter

Daniele Marinazzo

26 May 2021

PCOMPBIOL-D-20-01987R1

Embedding optimization reveals long-lasting history dependence in neural spiking activity

Dear Dr Rudelt,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Katalin Szabo

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Fig. Embedding optimization leads to mild overfitting for short recordings, which can be avoided by cross-validation.

    Shown is the relative bias for two versions of the GLIF model with spike adaption, one with 1s and the other with 22s past kernel. The relative bias refers to the relative difference between embedding-optimized estimates R^(T,d*,κ*) and the model's true history dependence R(T, d*, κ*) for the same optimized embedding parameters d*, κ*. The relative bias for R^tot was computed by first averaging the relative difference (R^(T,d*,κ*) − R(T,d*,κ*))/R(T,d*,κ*) for T ∈ [TD, Tmax], and second averaging again over 30 different simulations for Trec between 1 and 20 minutes, and 10 different simulations for 45 and 90 minutes. Embedding parameters were optimized for each simulation, respectively, using parameters as in Table 2 with dmax = 25. (Left) For BBC, the relative bias for R^tot is zero only if recordings are sufficiently long (> 20 minutes for 1s kernel, and ≈ 90 minutes for 22s kernel). When recordings are shorter, the relative bias increases, and thus estimates are mildly overestimating the model's true history dependence for the optimized embedding parameters. For Shuffling, estimates provide lower bounds to the model's true history dependence, hence the relative bias remains negative even in the presence of overfitting. (Right) When one round of cross-validation is applied, i.e. embedding parameters are optimized on the first, and estimates are computed on the second half of the data, the bias is approximately zero for BBC even for short recordings, or more negative for the Shuffling estimator. Therefore, we conclude that the origin of overfitting is the selection of embedding parameters on the same data that are used for the estimation of R. Errorbars show 95% bootstrap confidence intervals on the mean over n = 10 (45 or 90 min) or n = 30 (≤ 20 min) different simulations.

    (TIF)
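
    The one-round cross-validation described above can be sketched as follows (an illustrative outline under assumed names, not the toolbox implementation; optimize_embedding and estimate_R stand in for the actual routines):

        # Sketch: optimize embedding parameters on the first half of the recording,
        # then estimate R on the second half, as described in the caption above.
        import numpy as np

        def cross_validated_R(spike_times, t_start, t_stop, optimize_embedding, estimate_R):
            t_mid = t_start + 0.5 * (t_stop - t_start)
            first = spike_times[(spike_times >= t_start) & (spike_times < t_mid)]
            second = spike_times[(spike_times >= t_mid) & (spike_times <= t_stop)]
            d_star, kappa_star = optimize_embedding(first)    # embedding chosen on half 1
            return estimate_R(second, d_star, kappa_star)     # R estimated on half 2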

    S2 Fig. For the simulated neuron model, recording length has little effect on the estimated total history dependence, but large impact on the estimated information timescale.

    (Left) Mean estimated total history dependence R^tot for different recording lengths, relative to the true total history dependence Rtot of the model (GLIF with spike adaption with 1s or 22s past kernel). As the recording length decreases, so does R^tot. However, even with only 3 minutes, one still infers ≈ 95% of the true Rtot. (Right) In contrast, the estimated information timescale τ^R decreases strongly with decreasing recording length. With 3 minutes or less, only ≈ 75% of the true τR is estimated on average. Note that for the simpler 1s model (top), an accurate estimation of the true τR is possible for 90 minute recordings, whereas for the 22s model (bottom), the estimated τ^R remains below the true value. Shown are mean values for 30 different simulations for Trec between 1 and 20 minutes, and 10 different simulations for 45 and 90 minutes, as well as 95% confidence intervals on the mean based on bootstrapping.

    (TIF)

    S3 Fig. For experimental data, too, recording length has little effect on estimated total history dependence, but larger impact on the estimated information timescale.

    (Left) Total history dependence Rtot for different recording lengths, relative to the total history dependence estimated for a 90 minute recording. As long as recordings are 10 minutes or longer, one still estimates ≈ 95% or more of the Rtot estimated for 90 minutes, for all three recordings. For less than 10 minutes, the estimated total history dependence decreases down to 90% (CA1), or increases again due to overfitting (retina). (Right) Similar to the GLIF model, the estimated information timescale τR decreases more strongly with decreasing recording length. With 10 minutes or more, one estimates ≈ 75% or more of the τR that is estimated on a 90 minute recording. Note that for the experimental data, the estimated timescale of the BBC estimator depends more strongly on the recording time, whereas the Shuffling estimator is more robust, especially for dmax = 5. Shown is the median with 95% bootstrap confidence intervals over n = 10 randomly chosen sorted units for each neural system. Before taking the median over sorted units, for each unit we averaged estimates over 10 excerpts of the full recording, each with 3 or 5 minutes duration, and over 8, 4 and 2 excerpts with 10, 20 and 45 minutes duration, respectively.

    (TIF)

    S4 Fig. Example estimation results for the generalized leaky integrate-and-fire model (GLIF) with 1s past kernel.

    For each recording length, we show the embedding-optimized estimates of history dependence R(T) with and without cross-validation, for BBC (red) and Shuffling (blue) with dmax = 25, as well as the ground truth for the same embeddings that were found during optimization (dashed lines). Dashed lines indicate the estimated information timescale τ^R and total history dependence R^tot. Shaded areas indicate ± two standard deviations obtained by bootstrapping.

    (TIF)

    S5 Fig. Example estimation results for the generalized leaky integrate-and-fire model (GLIF) with 22s past kernel.

    For each recording length, we show the embedding-optimized estimates of history dependence R(T) with and without cross-validation, for BBC (red) and Shuffling (blue) with dmax = 25, as well as the ground truth for the same embeddings that were found during optimization (dashed lines). Dashed lines indicate the estimated information timescale τ^R and total history dependence R^tot. Shaded areas indicate ± two standard deviations obtained by bootstrapping.

    (TIF)

    S6 Fig. Estimation results for all sorted units in rat dorsal hippocampus (layer CA1).

    For each unit, we show the embedding-optimized estimates of history dependence R(T) for BBC with dmax = 20 (red), as well as Shuffling with dmax = 20 (blue), dmax = 5 (green) and dmax = 1 (yellow). Dashed lines indicate estimates of the information timescale τR and total history dependence Rtot. Also shown is the embedding-optimized GLM estimate (violet square) with a past range equal to the temporal depth that was found with the BBC estimator.

    (TIF)

    S7 Fig. Estimation results for all sorted units in rat cortical culture.

    For each unit, we show the embedding-optimized estimates of history dependence R(T) for BBC with dmax = 20 (red), as well as Shuffling with dmax = 20 (blue), dmax = 5 (green) and dmax = 1 (yellow). Dashed lines indicate estimates of the information timescale τR and total history dependence Rtot. Also shown is the embedding-optimized GLM estimate (violet square) with a past range equal to the temporal depth that was found with the BBC estimator.

    (TIF)

    S8 Fig. Estimation results for all sorted units in salamander retina.

    For each unit, we show the embedding-optimized estimates of history dependence R(T) for BBC with dmax = 20 (red), as well as Shuffling with dmax = 20 (blue), dmax = 5 (green) and dmax = 1 (yellow). Dashed lines indicate estimates of the information timescale τR and total history dependence Rtot. Also shown is the embedding-optimized GLM estimate (violet square) with a past range equal to the temporal depth that was found with the BBC estimator.

    (TIF)

    S9 Fig. Estimation results for all sorted units in mouse primary visual cortex.

    For each unit, we show the embedding-optimized Shuffling estimates of history dependence R(T) for dmax = 5. Dashed lines indicate estimates of the information timescale τR and total history dependence Rtot.

    (TIF)

    S10 Fig. Bootstrapping yields accurate estimates of standard deviation and confidence intervals.

    (Left) Shown is the standard deviation on BBC estimates (blue) obtained from 250 “blocks of blocks” bootstrap samples on a single recording (GLIF model with 22s past kernel). It agrees well with the true standard deviation (black), which we estimated from 100 repeated simulations of the same recording length and embedding. As expected, the standard deviation decreases substantially for longer recordings. For each recording length, estimates were computed for typical optimal embedding parameters d*, κ* and T = TD that were found by embedding optimization. Errorbars show mean and standard deviation of the estimated σ(R) over the repeated simulations. (Right) The 95% confidence intervals based on two standard deviations σ(R) have approximately the claimed confidence level (CI accuracy). Standard deviation was estimated from 250 “blocks of blocks” bootstrap samples. For each recording length, we computed estimates R^ and the bootstrap confidence intervals on the 100 simulations. We then computed the confidence level (CI accuracy) by counting how often the true value of R was contained in the estimated confidence interval (green line). Estimates and the true value of R were computed for the same typical embedding parameters d*, κ* and T = TD as before.

    (TIF)
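
    As a rough illustration of the resampling idea behind such error estimates (a generic block bootstrap under assumed names, not the authors' "blocks of blocks" procedure, which is specified in their Methods):

        # Generic block-bootstrap sketch for the standard deviation of a statistic
        # computed on a binned spike train; illustrative only.
        import numpy as np

        def block_bootstrap_std(binned_spikes, statistic, block_len, n_boot=250, seed=0):
            rng = np.random.default_rng(seed)
            x = np.asarray(binned_spikes)
            n_blocks = len(x) // block_len
            blocks = x[:n_blocks * block_len].reshape(n_blocks, block_len)
            estimates = [statistic(blocks[rng.integers(0, n_blocks, n_blocks)].ravel())
                         for _ in range(n_boot)]
            return np.std(estimates)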

    S11 Fig. Total history dependence and information timescale for increasing branching parameter m.

    Similar to the binary autoregressive process, increasing the branching parameter m increases the total history dependence Rtot, whereas the information timescale τR stays constant, or even decreases for high m. For each m, the input activation probability h was adapted to hold the firing rate fixed at 5 Hz.

    (TIF)

    S12 Fig. The estimated information timescale varies between estimators.

    For each sorted unit (grey dots), estimates of the information timescale τR are plotted relative to the corresponding BBC estimate for dmax = 20. The BBC estimator tends to estimate higher timescales than the Shuffling estimator on recordings of CA1 and cortical culture, whereas for retina the medians of different estimators are more similar. Although estimates of the timescale are highly variable between estimators, Shuffling with only dmax = 5 past bins still estimates timescales of at least 80% of the timescales that are estimated with BBC. Errorbars indicate median over sorted units and 95% bootstrap confidence intervals on the median.

    (TIF)

    S13 Fig. Total history dependence and information timescale show no clear dependence on the firing rate, whereas the total mutual information tends to increase with the rate.

    Shown are the same estimates of the total history dependence Rtot and information timescale τR as in Fig 7 (Shuffling estimator with dmax = 5) versus the firing rates of sorted units (dots). The total mutual information Itot is equal to Rtot times the spiking entropy H(spiking) of the respective unit. While Itot tends to increase with firing rate, no clear relation is visible for Rtot or τR. Errorbars indicate median over sorted units and 95% bootstrap confidence intervals on the median.

    (TIF)

    S14 Fig. Relationship between total history dependence or information timescale and standard statistical measures of neural spike trains.

    Estimates of the total history dependence Rtot tend to decrease with the median interspike interval (ISI), and to increase with the coefficient of variation CV. This result is expected for a measure of history dependence, because a shorter median ISI indicates that spikes tend to occur together, and a higher CV indicates a deviation from independent Poisson spiking. In contrast, the information timescale τR tends to increase with the autocorrelation time, as expected, with no clear relation to the median ISI or the coefficient of variation CV. However, the correlation between the measures depends on the neural system. For example, in retina (n = 111), Rtot is significantly anti-correlated with the median ISI (Pearson correlation coefficient: r = −0.69, p < 10^-5) and strongly correlated with the coefficient of variation CV (r = 0.90, p < 10^-5), and τR is significantly correlated with the autocorrelation time τC (r = 0.75, p < 10^-5). In contrast, for mouse primary visual cortex (n = 142), we found no significant correlations between any of these measures. Results are shown for the Shuffling estimator with dmax = 5, and T0 = 10 ms. Errorbars indicate median over sorted units and 95% bootstrap confidence intervals on the median.

    (TIF)

    S15 Fig. Excluding short-term contributions helps to differentiate the timescales for different neural systems.

    By only considering gains ΔR(T) for past ranges T > T0 when computing the information timescale τR, short-term effects that are related to the refractory period and different firing modes are excluded. The higher T0, the larger the difference in the median τR between systems (especially between salamander retina and mouse primary visual cortex). This is because both timescales τR and τC increase with T0 for CA1 and primary visual cortex, whereas they decrease for retina. The same holds for the autocorrelation time τC, where only time lags T > T0 were considered when fitting an exponential decay to the autocorrelograms. Note that if the decay is perfectly exponential, then T0 does not affect the results. Estimates of Rtot and τR are shown for the Shuffling estimator with dmax = 5. Errorbars indicate median over sorted units and 95% bootstrap confidence intervals on the median.

    (TIF)
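
    The autocorrelation time used here can be obtained, for example, by fitting an exponential decay to the autocorrelogram for lags above T0 (a sketch under assumed variable names, not the authors' exact fitting code):

        # Sketch: estimate tau_C by fitting C(T) ~ A * exp(-T / tau_C) to the
        # autocorrelogram, restricted to time lags T > T0 as described above.
        import numpy as np
        from scipy.optimize import curve_fit

        def autocorrelation_time(lags, autocorr, T0):
            mask = lags > T0
            decay = lambda T, A, tau: A * np.exp(-T / tau)
            (A_fit, tau_C), _ = curve_fit(decay, lags[mask], autocorr[mask],
                                          p0=(autocorr[mask][0], np.median(lags[mask])))
            return tau_C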

    S16 Fig. Total history dependence decreases for small time bins Δt.

    The choice of the time bin Δt of the spiking activity has little effect on the information timescale τR, whereas the total history dependence Rtot decreases for small time bins Δt < 5 ms. This is consistent across experiments. The smaller the time bin, the higher the risk that noise in the spike emission reduces the overall predictability or history dependence in the spiking, whereas an overly large time bin risks destroying coding-relevant timing information in the spike train. Thus, we chose the smallest time bin Δt = 5 ms that does not yet show a substantial decrease in Rtot. We do not plot results for larger Δt, because for larger Δt we observed many instances of multiple spikes in the same time bin. Results are shown for the Shuffling estimator with dmax = 5, and T0 = 10 ms. Errorbars indicate median over sorted units and 95% bootstrap confidence intervals on the median.

    (TIF)

    Attachment

    Submitted filename: response_to_reviewers.pdf

    Attachment

    Submitted filename: PCOMPBIOL-D-20-01987_R1_reviewer.pdf

    Data Availability Statement

    The data underlying the results presented in the study are available online: from the CRCNS (crcns.org) data repository, http://crcns.org/data-sets/hc/hc-2; from the Mendeley Data repository, https://data.mendeley.com/datasets/4ztc7yxngf/1; from the Dryad database, https://datadryad.org/stash/dataset/doi:10.5061/dryad.1f1rc; and from the Janelia figshare repository, https://janelia.figshare.com/articles/dataset/Eight-probe_Neuropixels_recordings_during_spontaneous_behaviors/7739750. All code for Python3 that was used to analyze the data and to generate the figures is available online at https://github.com/Priesemann-Group/historydependence.

