



bioRxiv [Preprint]. 2023 Oct 23:2023.01.03.522580. [Version 3] doi: 10.1101/2023.01.03.522580

Fluctuating landscapes and heavy tails in animal behavior

Antonio Carlos Costa 1,*, Massimo Vergassola 1
PMCID: PMC9900741  PMID: 36747746

Abstract

Animal behavior is shaped by a myriad of mechanisms acting on a wide range of scales. This immense variability hampers quantitative reasoning and renders the identification of universal principles elusive. Through data analysis and theory, we here show that slow non-ergodic drives generally give rise to heavy-tailed statistics in behaving animals. We leverage high-resolution recordings of C. elegans locomotion to extract a self-consistent reduced order model for an inferred reaction coordinate, bridging from sub-second chaotic dynamics to long-lived stochastic transitions among metastable states. The slow mode dynamics exhibits heavy-tailed first passage time distributions and correlation functions, and we show that such heavy tails can be explained by dynamics on a time-dependent potential landscape. Inspired by these results, we introduce a generic model in which we separate faster mixing modes that evolve on a quasi-stationary potential, from slower non-ergodic modes that drive the potential landscape, and reflect slowly varying internal states. We show that, even for simple potential landscapes, heavy tails emerge when barrier heights fluctuate slowly and strongly enough. In particular, the distribution of first passage times and the correlation function can asymptote to a power law, with related exponents that depend on the strength and nature of the fluctuations. We support our theoretical findings through direct numerical simulations.

I. INTRODUCTION

Throughout their lives, animals continuously sense, process information and act appropriately to ensure survival. Such ever-changing behavior emerges from orchestrated biological activity across a wide range of scales. The intrinsic high-dimensionality of these far-from-equilibrium dynamics, where multiple timescales continuously interact, poses deep challenges to a quantitative understanding. Yet, time is ripe for theory. Recent advances in machine vision technologies (e.g., [1–3]) make it possible to record an animal’s pose in unconstrained environments with unprecedented resolution. Such data now span several orders of magnitude in space and time [4], challenging our quantitative understanding and demanding modeling approaches that can bridge across scales: from sub-second movements to hours-long strategies.

Despite these technical advances, we are still far from measuring all the variables that determine behavior. Indeed, a complete microscopic theory would require knowledge not only of the current posture of the animal, but also its physiological state, its sensors, the state of its muscle cells, the kinetics of an uncountable number of molecules that determine the interaction between neurons, the genetic expression of different types of neuromodulators, and so on. However, examples from statistical physics highlight how microscopic details might not be required to predict the dynamics of carefully selected collective variables [5]. Indeed, to study how odor molecules diffuse in the air we need not measure the position and momentum of each molecule. Instead, we can simply write down a self-determined equation for the transport of the concentration field itself [6]. In fact, much of the success of statistical mechanics relies on the identification of slowly varying macroscopic modes, which, through a time-scale separation, depend only statistically on the microscopic details of the dynamics. Finding such order parameters, however, is typically not a simple task, and requires a build-up of intuition that is often not accessible for far-from-equilibrium systems as encountered in biology. Nonetheless, we leverage the notion of time scale separation to search for such slowly-varying collective variables and show that it is possible to build tractable reduced-order models directly from imaging data of behaving animals. Additionally, through a combination of data analysis and theory, we show that the ever-changing nature of biological systems, which evolve at a wide range of time scales, provides a minimal yet sufficient mechanism for generating heavy-tailed statistics.

Our inspiration is the foraging behavior of the nematode C. elegans, a pivotal model organism in genetics and neuroscience [7, 8]. On a two-dimensional agar plate, worms move by propagating dorsoventral waves throughout their bodies; on short time scales, they control the frequency, wavelength, and direction of those waves to move forward, backward, or turn. Sequences of such short-lived movements exhibit signatures of chaos, generating temporal variability in the worm’s behavior [9, 10]. Despite this inherent unpredictability, Costa et al. [11] showed that by reconstructing unobserved influences through a time-delay embedding [12–16], it is possible to build a high-fidelity Markov model that accurately predicts C. elegans foraging behavior, rendering simulated worms nearly indistinguishable from real worms across a wide range of scales. This Markov model also recovered coarse-grained descriptions of the worm’s foraging strategy, identifying long-lived metastable states that correspond to transitions between relatively straight paths (“runs”) and abrupt reorientations (“pirouettes”) [11, 17] (a two-state characterization that is akin to the run-and-tumble behavior of bacteria [18, 19]). Recent analysis of posture-scale movements highlights how other organisms also exhibit stereotyped movements [20–22], making stereotypy one of the few general principles in the physics of animal behavior [4, 23].

The emergence of stereotypy from continuous movement stems from an implicit time scale separation between variations within what is defined as a behavioral state and transitions between behavioral states, much like particles hopping among wells in potential landscapes. Here, we make this evocative picture concrete. In the first section, we infer an effective Langevin description for the worm’s “run-and-pirouette” dynamics. Notably, we find long-range correlations and heavy-tailed distributions of the times spent either performing a “run” or a “pirouette”, instead of the exponential timescales expected for independent transition events with a fixed hopping rate. We then show how such non-trivial statistics stem from the slow adaptation of the worm’s search strategy, which is captured by time-dependent model parameters that we infer directly from the data. In the second section, we investigate whether non-ergodic fluctuations (such as the worm’s adaptation) generally give rise to heavy-tailed statistics. We introduce a generic model of animal behavior in which the posture dynamics evolves in potential landscapes that fluctuate in time, and show that even simple potential landscapes can exhibit heavy-tailed first passage times and long-range correlations when barrier heights fluctuate slowly and strongly enough.

II. HEAVY TAILS IN C. ELEGANS BEHAVIOR: THE ROLE OF ADAPTATION

We leverage a previously analyzed dataset in which 12 lab-strain N2 worms are placed on an agar plate and allowed to freely explore for 35 minutes [24] (see Appendix A). From each video frame (sampled every $\delta t = 1/16\,\mathrm{s}$), we extract the worm’s centerline, measure tangent angles equally spaced along the body, and subtract the overall rotation of the worm to obtain a worm-centric representation of the animal’s shape, $\theta_t$. As done in [9, 11], we then stack $K^* = 11$ time delays of the animal’s posture, $X_{K^*} = \theta_{t:t+K^*}$, to obtain a maximally-predictive sequence space, Fig. 1(a). In this way, we subsume the short-term memory resulting from hidden dynamics into an expanded state space that admits an approximately Markovian description. Assuming stationary dynamics and a fully resolved state, the dynamics is then given by
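As a minimal sketch of this delay-stacking step (array shapes and names are ours, not the paper's code), stacking K delays of a multivariate posture time series can be written as:

```python
import numpy as np

def delay_embed(theta, K):
    """Stack K time delays of a (T, d) posture series into (T-K+1, K*d) states."""
    T, d = theta.shape
    return np.stack([theta[k:T - K + 1 + k] for k in range(K)],
                    axis=1).reshape(T - K + 1, K * d)

# toy example: 6 frames of a 2-dimensional posture, K = 3 delays
theta = np.arange(12.0).reshape(6, 2)
X = delay_embed(theta, K=3)
```

Each row of `X` then concatenates K consecutive posture vectors, which is the expanded state on which an approximately Markovian description is built.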

$\frac{d}{dt}X = \Phi(X),$

where $\Phi(X)$ is a nonlinear noisy function that evolves the state $X_{K^*}$. The corresponding evolution of probability densities $\rho_t = \rho(X_{K^*}, t)$ is given by,

$\frac{d}{dt}\rho_t = \mathcal{L}\rho_t \;\Rightarrow\; \rho_{t+\tau} = e^{\tau\mathcal{L}}\rho_t,$

where Φ is encoded into a linear operator $\mathcal{L}$ whose exponential $e^{\tau\mathcal{L}}$ evolves probability densities. Given an appropriate discretization of the state space, it is possible to approximate the action of $e^{\tau\mathcal{L}}$ as a Markov chain [10, 11].
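A minimal sketch of this discretized approximation (the partition labels are assumed given, e.g. from a fine clustering of the embedded states; this is our illustration, not the paper's pipeline): counting transitions at a lag and row-normalizing yields the Markov-chain approximation of the transfer operator.

```python
import numpy as np

def transition_matrix(labels, n_states, lag=1):
    """Row-stochastic matrix P[i, j] ~ Prob(partition j at t + lag | partition i at t)."""
    labels = np.asarray(labels)
    P = np.zeros((n_states, n_states))
    np.add.at(P, (labels[:-lag], labels[lag:]), 1.0)   # accumulate transition counts
    rows = P.sum(axis=1, keepdims=True)
    return np.divide(P, rows, out=np.zeros_like(P), where=rows > 0)

# toy symbolic trajectory on two partitions
P = transition_matrix([0, 1, 0, 1, 1], n_states=2)
```

Rows sum to one, so `P` acts on probability vectors exactly as a discretized transfer operator.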

FIG. 1. A reduced-order model of C. elegans foraging dynamics.


(a) From video imaging data, we extract the body posture in a worm-centric perspective by measuring local tangent angles along the body and rotating them onto a common frame of reference, obtaining $\theta_t$ [26]. These vectors are then stacked over a timescale $K^*$ to yield maximally predictive states $X_{K^*}(t)$ [9–11]. (b) Through an appropriate fine-scale partitioning of $X_{K^*}(t)$, we obtain a high-fidelity Markov model of the dynamics [10, 11]. The eigenspectrum of the inferred Markov chain captures a hierarchy of timescales, with the first non-trivial eigenvector of the reversibilized transition matrix, $\phi_2$, capturing transitions between “runs” and “pirouettes”. We represent the high-dimensional state space $X_{K^*}$ through a 2D UMAP embedding as in [11] (left), and color-code each point in this space by the projection along $\phi_2$ (see Appendix A). Each point in this space corresponds to a $K^*$-long sequence of postures, and different behaviors correspond to different regions of the space. We also plot an example 10 min long centroid trajectory color-coded by $\phi_2$ (right). The example states and the centroid trajectory showcase how $\phi_2 < 0$ corresponds to forward “runs” whereas $\phi_2 > 0$ corresponds to combinations of reversals, ventral and dorsal turns used during “pirouettes”. (c) Example time series of $\phi_2$ illustrating the stochastic hopping between “runs” and “pirouettes”. (d-left) First passage time distribution obtained from the data (black), simulations of the stochastic dynamics of Eq. 1 (blue) and simulations performed with the full model (gray). The bulk of the distribution is captured by a sum of exponential functions (gray dashed line) that are predicted by the simulations, but the data also exhibit heavy tails that are not captured accurately. (d-right) Connected autocorrelation function $C_{\phi_2}(\tau)$ for the data (black) and simulations performed with the reduced-order model of Eq. 1 (blue) and the full model (gray).
While model simulations capture the dynamics over short timescales, they fail to predict the long-range correlations exhibited by the data. Notably, the stochastic model of Eq. 1 gives predictions for the first passage time distribution and autocorrelation functions that are comparable with those obtained with the full model [11], showcasing the self-consistency of this model at capturing the long-lived dynamics. Error bars represent 95% confidence intervals bootstrapped across worms.

Encoding the nonlinear dynamics of $X_{K^*}$ into the linear evolution of the probability distribution offers a means to a principled coarse-graining. Indeed, the eigen-decomposition of $\mathcal{L}$, $e^{\tau\mathcal{L}}\psi_i = e^{\Lambda_i\tau}\psi_i$, yields a set of eigenvalues that capture the hierarchy of timescales by which different eigenfunctions relax to the steady state ($\Lambda_i$ has units of inverse time). For a mixing system, there is a unique largest eigenvalue $\Lambda_1 = 0$ that corresponds to the steady-state distribution $\psi_1 = \pi$, with $e^{\tau\mathcal{L}}\pi = \pi$. The remaining eigenfunctions $\psi_{i>1}$, organized by decreasing real part of $\Lambda_i$, correspond to collective variables that relax to the steady state on faster and faster timescales, set by $\mathrm{Re}\,\Lambda_{i>1}$. For C. elegans foraging dynamics, the eigenspectrum of the reversibilized transition matrix reveals a main slow mode $\phi_2$, which was used to coarse-grain the behavior into “runs” and “pirouettes” [11]. Here, we leverage this timescale-separated eigenfunction to define a slow reaction coordinate [25] that captures transitions along a “run-and-pirouette” axis (see Appendix A), Fig. 1(b). In particular, we project the full posture dynamics onto $\phi_2$, going from the fast chaotic dynamics of the body posture [9] to an effective stochastic description for the hopping between “runs” and “pirouettes”. An example time series of $\phi_2(t)$ is shown in Fig. 1(c).
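To illustrate how a relaxation timescale is read off the spectrum (the two-state chain and the lag below are hypothetical stand-ins, not the worm's inferred operator), the second eigenvalue of a row-stochastic matrix at lag τ gives the implied timescale $t_2 = -\tau/\ln\lambda_2$:

```python
import numpy as np

def implied_timescale(P, tau):
    """Relaxation time of the slowest non-stationary mode of a Markov chain at lag tau."""
    evals = np.sort(np.linalg.eigvals(P).real)[::-1]   # evals[0] = 1 (steady state)
    return -tau / np.log(evals[1])                     # t_2 = -tau / ln(lambda_2)

# hypothetical two-state "run"/"pirouette" chain at lag tau = 0.75 s
P = np.array([[0.95, 0.05],
              [0.20, 0.80]])
t2 = implied_timescale(P, tau=0.75)
```

For this toy chain the non-trivial eigenvalue is 1 − 0.05 − 0.20 = 0.75, giving t₂ ≈ 2.6 s.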

A. Inferring a stationary Langevin equation for the “run-and-pirouette” dynamics

We here aim to infer an explicit coarse-grained model for the apparent stochastic hopping along $\phi_2(t)$. Projecting the full dynamics onto $\phi_2$, however, results in non-Markovian effects due to the fact that the orthogonal projection (the modes we do not take into account) includes non-vanishing temporal correlations coming from the faster-decaying eigenfunctions of $\mathcal{L}$ [27–29]. To deal with this, we sample the dynamics on a timescale $\tau^*$ long enough that the temporal correlations in the noise have decayed [30]. In this way, we can obtain an effective overdamped Langevin description for $\phi_2(t)$ [31],

$\dot{\phi}_2 = F(\phi_2) + \sqrt{2D(\phi_2)}\,\eta(t),$ (1)

in which, since the dynamics are sampled every $\tau^*$, we effectively have $\langle\eta(t)\eta(t')\rangle = \delta_{tt'}$, with $t = n\tau^*$, $t' = n'\tau^*$, $n, n' \in \mathbb{N}$. Indeed, with $\tau^* = 0.75\,\mathrm{s}$ the slow dynamics becomes effectively Markovian [11], and a stochastic model inferred from the time series results in effectively delta-correlated fluctuations, Fig. S1(b). To find $F(\phi_2)$ and $D(\phi_2)$ we leverage a kernel-based approach [32], based on the Kramers-Moyal expansion (e.g., [33]). Instead of estimating the drift and diffusion coefficients in discretized bins, we use kernels to obtain a more robust and continuous estimate of $F(\phi_2)$ and $D(\phi_2)$ (see Appendix A).
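A minimal sketch of such a kernel-based Kramers-Moyal estimate (Gaussian kernel with bandwidth h; this is our illustration, not the exact estimator of [32]), checked on a synthetic Ornstein-Uhlenbeck process with known drift F(x) = −x and diffusion D = 0.5:

```python
import numpy as np

def kernel_drift_diffusion(phi, dt, eval_pts, h):
    """Kernel-weighted Kramers-Moyal estimates of drift F(x) and diffusion D(x)."""
    dphi = np.diff(phi)
    x = phi[:-1]
    F = np.empty(len(eval_pts))
    D = np.empty(len(eval_pts))
    for i, x0 in enumerate(eval_pts):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
        w /= w.sum()
        F[i] = np.dot(w, dphi) / dt              # first Kramers-Moyal coefficient
        D[i] = np.dot(w, (dphi - F[i] * dt) ** 2) / (2 * dt)  # second coefficient
    return F, D

# synthetic Ornstein-Uhlenbeck test: d(phi) = -phi dt + sqrt(2 * 0.5) dW
rng = np.random.default_rng(0)
dt, n, D_true = 1e-2, 200_000, 0.5
phi = np.empty(n)
phi[0] = 0.0
noise = np.sqrt(2 * D_true * dt) * rng.standard_normal(n - 1)
for t in range(n - 1):
    phi[t + 1] = phi[t] - phi[t] * dt + noise[t]
xs = np.linspace(-1.0, 1.0, 5)
F_est, D_est = kernel_drift_diffusion(phi, dt, xs, h=0.1)
```

Unlike binned estimates, the kernel weights yield smooth curves that can be evaluated at any point along the reaction coordinate.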

To probe the ability of this model to reproduce the long-lived properties of the dynamics, we identify “run” and “pirouette” states by maximizing the metastability of both states (see Appendix A) [11], and estimate the time spent in these two behaviors, Fig. 1(d-left), Fig. S2. Interestingly, while the main exponential time scales of the first passage time distribution (FPTD) are captured by the inferred stationary Langevin dynamics, the data also exhibit a heavier tail that this model naturally does not predict. In addition, we estimated the connected autocorrelation function

$C_{\phi_2}(\tau) = \frac{1}{\sigma^2_{\phi_2}}\left\langle \left[\phi_2(t) - \langle\phi_2\rangle_t\right]\left[\phi_2(t+\tau) - \langle\phi_2\rangle_t\right] \right\rangle_t,$

where $\sigma^2_{\phi_2}$ is the variance of $\phi_2(t)$ and $\langle\cdot\rangle_t$ represents a temporal average. In accordance with the first passage time results, we observe that, while the model captures correlations on relatively short timescales (≈ 10 s), the data exhibits long-range correlations that the model fails to predict, Fig. 1(d-right). While this discrepancy could be associated with the projection onto a single slow mode, or with the assumption of Langevin dynamics, we find that simulating the dynamics using the full model [11] results in similar predictions, showcasing the self-consistency of our coarse-graining approach, Fig. 1(d).
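Concretely, the connected autocorrelation can be estimated from a time series as below (a generic estimator of ours, verified on an AR(1) process whose exact autocorrelation is $a^k$):

```python
import numpy as np

def connected_autocorr(x, max_lag):
    """Time-averaged connected autocorrelation, normalized so C(0) = 1."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()              # subtract the temporal mean
    var = np.mean(xc ** 2)
    n = len(x)
    return np.array([np.dot(xc[:n - k], xc[k:]) / ((n - k) * var)
                     for k in range(max_lag + 1)])

# AR(1) check: x_{t+1} = a x_t + noise has C(k) = a^k
rng = np.random.default_rng(1)
a, n = 0.9, 200_000
x = np.empty(n)
x[0] = 0.0
eps = rng.standard_normal(n - 1)
for t in range(n - 1):
    x[t + 1] = a * x[t] + eps[t]
C = connected_autocorr(x, max_lag=10)
```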

B. Fluctuating potential landscapes underlie the emergence of heavy tails in C. elegans foraging

While the inferred model captures C. elegans foraging behavior across several scales [11], it fails to predict the heavy-tailed statistics observed at the longest times. One possible explanation for the inability of the Langevin dynamics of Eq. 1 (or the full Markov model) to capture these observations is the existence of subtle hidden fluctuations that evolve on timescales comparable to the observation time and are not accurately captured by our time-delay embedding, rendering the dynamics non-stationary. Indeed, it has been observed that upon removal from food, worms slowly adapt their search strategy by lowering their rate of “pirouettes” to explore wider areas in search for food [34–39]. To allow for a time-evolving rate of pirouettes, we extend the stationary model to include explicitly time-dependent drift and diffusion terms,

$\dot{\phi}_2 = F(\phi_2, t) + \sqrt{2D(\phi_2, t)}\,\eta(t),$ (2)

that reflect the adaptation of the worm’s foraging strategy throughout the experimental time scales.

We infer time-dependent drift and diffusion coefficients in overlapping windows, defined to be long enough to equilibrate the fast dynamics but short enough that the steady-state distribution does not change significantly (see Appendix A). Interestingly, the time evolution of the effective potential landscape, Fig. 2(a), shows that worms slowly adapt their search strategy by increasingly performing runs, in agreement with previous studies [34–39]. Over time, the “run-and-pirouette” random walk is therefore biased to explore further away in search of food. Notably, this explicitly time-dependent model is sufficient to quantitatively reproduce the heavy tail of the first passage time distribution and the non-trivial long-range correlations exhibited by real worms, Fig. 2(b).

FIG. 2. A time-varying potential landscape captures heavy tails in C. elegans foraging behavior.


(a) We infer a time-dependent potential landscape description from the $\phi_2$ time series by estimating drift and diffusion coefficients in sliding windows (see Appendix A), and show the result for an example worm. Notably, as time goes on (blue to yellow) we observe a biasing of the behavior towards increasingly performing “runs”. (b) First passage time distribution (left) and connected autocorrelation function $C_{\phi_2}(\tau)$ (right) obtained from the data (black), simulations from the static model of Eq. 1 (blue, same as Fig. 1(d)) and simulations using a time-dependent potential landscape, Eq. 2 (orange). Notably, including an explicit time dependence in the model parameters recovers the heavy-tailed first passage times and the non-trivial long-range correlations observed in the data. Error bars represent 95% confidence intervals bootstrapped across worms.

III. LONG TIMESCALES AND THE EMERGENCE OF HEAVY TAILS IN ANIMAL BEHAVIOR

These results show that the observed heavy tails result from the adaptation of the worms’ search strategy. Could similar mechanisms underlie the widespread observation of heavy tails across behaving animals? Animals do modulate their behavior across a vast range of scales, either due to environmental factors or through endogenous fluctuating internal states driven by neuromodulation, such as hunger or stress [40–42]. Such a continuum of scales would inevitably result in non-stationary dynamics, since long-lived modes would prevent the relaxation to a steady-state distribution within a finite observation time $T_{\rm exp}$. We here investigate whether such non-stationary fluctuations give rise to heavy-tailed statistics. In particular, we introduce a general picture of behavior in which the pose dynamics evolves in potential landscapes that fluctuate over time.

A. A fluctuating landscape picture of animal behavior

Given a set of observations of animal locomotion (e.g., from video imaging), we consider that the dynamics can be decomposed into ergodic, x, and non-ergodic, s, components. The former are the state-space variables that mix sufficiently well and define the potential wells that correspond to the stereotyped behaviors; the latter evolve on time scales τs comparable to the observation time and slowly modulate the potential landscape of x. Assuming that we can simplify the dynamics onto an overdamped Langevin description through an appropriate time scale separation, we obtain a phenomenological model of the long-lived dynamics as a system of Itô stochastic differential equations,

$dx_t = -\tau_x^{-1}\,\partial_x U(x_t, s_t)\,dt + \sqrt{2 T_x \tau_x^{-1}}\,dW_t^x,$ (3)
$ds_t = -\tau_s^{-1}\,\partial_s V(s_t)\,dt + \sqrt{2 T_s \tau_s^{-1}}\,dW_t^s,$ (4)

where we set $\tau_x = 1$ without loss of generality (what matters is the ratio $\tau_s/\tau_x$), $dW_t^x$ and $dW_t^s$ are independent increments of a Wiener process, $T_x$ and $T_s$ capture the level of fluctuations in x and s respectively, U is a potential landscape whose wells correspond to long-lived stereotyped behaviors, and V is assumed to be uncoupled from the dynamics of x for simplicity. In the following sections, we will show that the slow modulation of the barrier heights of U(x, s) through the dynamics of s is sufficient to explain the emergence of heavy-tailed first passage time distributions and the non-trivial correlations we observed for the worm behavior, Fig. 2(b).
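These equations can be integrated with a standard Euler-Maruyama scheme. The sketch below uses our own parameter choices (not the paper's) with a double-well landscape U(x, s) = s²(x² − 1)² and a quadratic V, of the kind used in the examples later in the text:

```python
import numpy as np

def simulate(dU_dx, dV_ds, Tx, Ts, tau_s, dt, n_steps, x0=1.0, s0=1.0, seed=0):
    """Euler-Maruyama integration of the coupled Langevin system (tau_x = 1)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    s = np.empty(n_steps)
    x[0], s[0] = x0, s0
    sig_x = np.sqrt(2 * Tx * dt)
    sig_s = np.sqrt(2 * Ts * dt / tau_s)
    xi = rng.standard_normal((2, n_steps - 1))
    for t in range(n_steps - 1):
        x[t + 1] = x[t] - dU_dx(x[t], s[t]) * dt + sig_x * xi[0, t]
        s[t + 1] = s[t] - dV_ds(s[t]) * dt / tau_s + sig_s * xi[1, t]
    return x, s

# double well U = s^2 (x^2 - 1)^2 driven by an Ornstein-Uhlenbeck s, V = (s - 1)^2 / 2
x, s = simulate(dU_dx=lambda x, s: 4 * s**2 * x * (x**2 - 1),
                dV_ds=lambda s: s - 1.0,
                Tx=0.1, Ts=0.05, tau_s=100.0, dt=0.01, n_steps=50_000)
```

With these (illustrative) values the barrier is large compared to $T_x$, so x stays near a well while s slowly reshapes the landscape.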

B. Heavy-tailed first passage times in slowly-driven metastable dynamics

In the context of the Langevin dynamics of Eq. 3, the distribution of times spent in a given behavioral state is given by the statistics of escape from a potential well, a well-studied problem in statistical physics [43]. Despite the general interest in this concept across several fields [44–47], finding analytical expressions for the density of first passage time events is generally a formidable task [48]. Indeed, most results focus on the mean first passage time (MFPT), which is more tractable (see, e.g., [49, 50]). However, the MFPT provides only limited information: when multiple time scales are involved, the MFPT is not representative of the long-time behavior of the distribution [51]. To investigate whether the non-ergodic dynamics of Eqs. 3, 4 can give rise to heavy tails, we here derive the large-time asymptotic behavior of the first passage time distribution.

The measurement time $T_{\rm exp}$ separates ergodic from non-ergodic dynamics. Importantly, it also sets a lower bound on the slowest observed hopping rates, $\omega_{\min} \sim T_{\rm exp}^{-1}$, such that when $\tau_s = \mathcal{O}(T_{\rm exp})$ we can make an adiabatic approximation and assume that transition events occur within a nearly static potential. The long-time behavior of the first passage time distribution is dominated by the deepest potential well, which when stationary yields a first passage time distribution

$f(t, \omega) = \omega e^{-\omega t},$ (5)

where ω is the dominating slow kinetic transition rate, which we assume to depend on s. When we allow s to fluctuate, ω also varies, and the distribution of first passage times f(t) is given by the expectation value of f(t, ω) over the distribution of ω, p(ω), weighted by the effective number of transitions observed within $T_{\rm exp}$, which is proportional to ω. Marginalizing over ω we get

$f(t) \propto \int_{\omega_{\min}}^{\omega_{\max}} p(\omega) \times \omega \times \omega e^{-\omega t}\, d\omega.$ (6)

While the barrier height depends on the dynamics of a slow control parameter s, the tail of the distribution is dominated by instances in which the barrier height is the largest, motivating the use of Kramers approximation (see, e.g., [43, 52]),

$\omega(s) = \omega_0 \exp\!\left(-\frac{\Delta U(s)}{T_x}\right),$ (7)

where $\Delta U(s) = U(x_f, s) - U(x_0, s)$ and $\omega_0$ is a constant (see Appendix B). Assuming that each measurement starts from different initial conditions sampled according to a Boltzmann weight, the distribution of s is given by [53],

$p(s) \propto \exp\!\left(-\frac{V(s)}{T_s}\right).$ (8)

As shown in Appendix B, when the barrier height fluctuations are large enough to yield an $\omega_{\min}^{-1}$ that is comparable to the measurement time, $\omega_{\min}^{-1} \sim T_{\rm exp}$, we can combine Eqs. 6, 7, 8 to obtain an asymptotic approximation of the FPTD in the large-t limit,

$f(t) \sim t^{-2} \exp\!\left(-\frac{V\!\left(\Delta U^{-1}(T_x \log \omega_0 t)\right)}{T_s}\right),$ (9)

where $\Delta U^{-1}(\cdot)$ represents the inverse function of $\Delta U(s)$ and we have kept only the dominant order of the asymptotic approximation (see Appendix B). Importantly, when $T_s \to \infty$ we obtain $f(t) \sim t^{-2}$ under very general assumptions for the form of V(s) and U(x, s). In addition, when V(s) and $\Delta U(s)$ are asymptotically equivalent, f(t) behaves as a power law, $f(t) \sim t^{-2 - c\,T_x/T_s}$, $c \in \mathbb{R}^+$, with an exponent that depends on the ratio between the fluctuations in x and in s. This derivation qualitatively recovers what we observed for the worm behavior, Fig. 2, showcasing how slow non-ergodic drives can indeed give rise to heavy-tailed first passage time distributions. In the following sections, we demonstrate these results with illustrative examples.

1. Poisson process with varying hopping rates

To probe our theoretical predictions, we first assume that hopping events are well captured by a Poisson process and that the modulation of the potential landscape is infinitely slow, such that the adiabatic approximation of Eq. 6 holds exactly. In practice, we sample s according to the Boltzmann distribution, Eq. 8, and, in order to relax the Kramers approximation, we obtain $\omega(s)$ through the backward Kolmogorov equation [52],

$\omega^{-1}(s) = \frac{2}{T_x} \int_{x}^{x_{\max}} e^{U(y,s)/T_x} \left[\int_{x_{\min}}^{y} e^{-U(z,s)/T_x}\, dz\right] dy.$ (10)
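Eq. 10 can be evaluated by straightforward quadrature. The sketch below (discretization choices are ours) does so for U(x, s) = s²(x² − 1)², starting from the left well at x = −1 with the barrier top at x_max = 0, and cross-checks against the Kramers rate of Eq. 7 with the standard prefactor $\sqrt{U''(x_{\min})|U''(x_{\max})|}/2\pi$:

```python
import numpy as np

def hopping_rate(s, Tx, x0=-1.0, x_max=0.0, x_min=-4.0, n=4001):
    """Numerical evaluation of Eq. 10 for U(x, s) = s^2 (x^2 - 1)^2."""
    U = lambda x: s**2 * (x**2 - 1.0) ** 2
    z = np.linspace(x_min, x_max, n)
    dz = z[1] - z[0]
    inner_cum = np.cumsum(np.exp(-U(z) / Tx)) * dz   # I(y) = int_{x_min}^{y} e^{-U/Tx} dz
    y = np.linspace(x0, x_max, n)
    integrand = np.exp(U(y) / Tx) * np.interp(y, z, inner_cum)
    dy = y[1] - y[0]
    outer = dy * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))  # trapezoid
    return Tx / (2.0 * outer)   # omega = [(2/Tx) * double integral]^{-1}

s_val, Tx = 1.0, 0.1
omega_num = hopping_rate(s_val, Tx)
# Kramers comparison: U''(min) = 8 s^2, |U''(max)| = 4 s^2, barrier dU = s^2
omega_kramers = np.sqrt(8 * s_val**2 * 4 * s_val**2) / (2 * np.pi) * np.exp(-s_val**2 / Tx)
```

For a barrier ten times the temperature the two agree to within a few percent, as expected.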

We then sample events according to the distribution of first passage times f(t, ω), Eq. 5, until reaching the measurement time $T_{\rm exp}$, Fig. 3(a). We take $U(x, s) = s^2(x^2 - 1)^2$ to be a symmetric double-well potential and sample s according to a Boltzmann distribution with $p(s) \propto \exp\{-(s - \mu_s)^2 / 2T_s\}$, corresponding to an Ornstein-Uhlenbeck process. Since the tail of the first passage time distribution is dominated by large barrier heights, we take $V(\Delta U^{-1}(x)) \sim x/2$. From the derivation of Eq. 9, we expect that the final distribution of first passage times will be given by,

$f(t) \sim t^{-2 - T_x/2T_s}.$ (11)

Indeed, this is what we find through numerical simulations, Fig. 3(b).
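The sampling scheme can be sketched as follows (parameters are ours; for brevity we use the Kramers form of Eq. 7 for ω(s) rather than the full Eq. 10, and approximate the number of events per realization as Poisson with mean $\omega T_{\rm exp}$). With V(s) = s²/2 and ΔU(s) = s², the predicted tail is $t^{-2 - T_x/2T_s}$, up to logarithmic corrections:

```python
import numpy as np

rng = np.random.default_rng(2)
Tx, Ts, omega0, Texp, N = 1.0, 1.0, 1.0, 100.0, 20_000
s = rng.normal(0.0, np.sqrt(Ts), N)          # Boltzmann sample for V(s) = s^2 / 2
omega = omega0 * np.exp(-s**2 / Tx)          # Kramers-form hopping rates, dU(s) = s^2
counts = rng.poisson(omega * Texp)           # events observed within Texp
durations = rng.exponential(1.0 / np.repeat(omega, counts))

# fit the tail exponent on logarithmic bins (predicted: -2 - Tx / (2 Ts) = -2.5)
bins = np.logspace(np.log10(5.0), np.log10(50.0), 11)
hist, _ = np.histogram(durations, bins=bins, density=True)
centers = np.sqrt(bins[:-1] * bins[1:])
slope = np.polyfit(np.log(centers), np.log(hist), 1)[0]
```

The fitted slope sits near the predicted −2.5, slightly steeper at these times because of the $(\log t)^{-1/2}$ subleading correction.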

FIG. 3. Heavy-tailed first passage time distribution in Poisson process with varying hopping rates.


(a) Schematic of the simulation process. For each realization, we sample s according to the Boltzmann distribution p(s). The hopping rate corresponding to a particular sample $s_i$ is then determined by the backward Kolmogorov equation, Eq. 10, and event durations are sampled according to the first passage time distribution $f(t, \omega) = \omega e^{-\omega t}$ until reaching the experimental timescale $T_{\rm exp}$. This process is then repeated over N = 50,000 realizations (see Appendix A). (b) First passage time distribution for the Poisson process with varying hopping rates. We collect the event durations from our simulations and estimate their probability distribution function (PDF). As predicted, we obtain a power law with exponent $f(t) \sim t^{-2 - T_x/2T_s}$.

2. Slowly-driven double-well potential

We now relax the simplifications of the previous analysis and fully simulate the two-dimensional stochastic dynamics for a double-well potential whose barrier height is slowly modulated according to an Ornstein-Uhlenbeck process, Fig. 4(a). The dynamics are given by an Itô stochastic differential equation,

$dx_t = -4 s_t^2 x_t (x_t^2 - 1)\, dt + \sqrt{2 T_x}\, dW_t^x,$
$ds_t = -\tau_s^{-1}(s_t - \mu_s)\, dt + \sqrt{2 T_s \tau_s^{-1}}\, dW_t^s,$ (12)

where $T_x = 10^{-3}$, $\mu_s = \sqrt{T_x}$, $\tau_s = 10^{3}\, T_{\rm exp}$, and $dW_t^x$, $dW_t^s$ are independent increments of a Wiener process (see Appendix A for simulation details). Since the tail of f(t) is dominated by large s values, we again take $V(s) \sim s^2/2$, and thus $V(\Delta U^{-1}(x)) \sim x/2$. From the derivation of Eq. 9 we find,

$f(t) \sim t^{-2 - T_x/2T_s}.$ (13)

To test this result, we performed direct numerical simulations of Eq. 12 while varying $T_s$ and $\tau_s$, Fig. 4(b,c), S3. We observe a transition from exponential to power-law behavior when $\tau_s$ becomes comparable to $T_{\rm exp}$ for a fixed $T_s$. For small $\tau_s$, the potential landscape relaxes much faster than the time it takes to escape a potential well, resulting in exponential behavior as if the potential U(x, s) were static with $s = \mu_s$. For intermediate $\tau_s$, the first passage time distribution behaves as a truncated power law, with an exponential cut-off emerging for $\tau_s < t < T_{\rm exp}$, Fig. S3. For $\tau_s$ sufficiently large, $\tau_s \sim T_{\rm exp}$, our adiabatic approximation holds and the simulations exhibit a power-law tail. In Fig. 4(c), we keep $\tau_s$ large and vary $T_s$ in the range $[T_x/4, 2T_x]$. The direct numerical simulations quantitatively recover the dependence of the power-law exponent on the ratio between the fluctuations in x and s, Eq. 13, approaching $t^{-2}$ as $T_s \to \infty$.

FIG. 4. Emergence of heavy-tails in the first passage time distribution of a slowly-driven double-well potential.


(a) Schematic of the variation in the double-well potential with s (colored from blue to red; the black line represents $s = \mu_s$). (b) Probability distribution function (PDF) of the first passage times obtained from direct numerical simulations of Eq. 12 for different values of $\tau_s$ and $T_s = T_x/2$ (see Appendix A). When $\tau_s \to 0$, the potential landscape relaxes to its mean value much faster than the time it takes to escape the well, resulting in exponential behavior with a hopping rate corresponding to $\mu_s$ (black line). As $\tau_s$ approaches $T_{\rm exp}$, we start observing a transition from exponential to power-law behavior, and in the limit of large $\tau_s$ we obtain the power-law behavior derived in Eq. 13 (black dashed line). (c) Estimated FPTD from direct numerical simulations of Eq. 12 for large $\tau_s = 10^{3}\, T_{\rm exp}$ and different values of $T_s$ (see Appendix A). As predicted, the tail of the distribution behaves as a power law, $f(t) \sim t^{-2 - T_x/2T_s}$ (colored lines), with an exponent that approaches −2 as $T_s \to \infty$ (black dashed line).

C. Long-range correlations and their finite-size corrections in slowly-driven metastable dynamics

In the previous section, we have shown that slowly varying barrier heights can give rise to power-law tails in the distribution of first passage times. Here, we extend these results to show that the correlation function also exhibits heavy tails, and that finite-size corrections give rise to the long-range anti-correlations observed in the worm behavior, Fig. 2(b-right). The (connected) normalized autocorrelation function of x is given by

$C_x(\tau) = \frac{\langle x(t)\, x(t+\tau)\rangle - \langle x\rangle^2}{\langle x^2\rangle - \langle x\rangle^2},$ (14)

where $\langle\cdot\rangle$ represents the ensemble average over the invariant density. In our generic model of behavior, Eqs. 3, 4, the long-time behavior of the correlation function of x is dominated by the first non-trivial eigenvalue of the Fokker-Planck operator, $\Lambda_1(s)$, which is proportional to the slowest hopping rate, $\Lambda_1(s) \propto \omega(s)$ [33, 54],

$C_x(\tau, s) \sim e^{-\Lambda_1(s)\tau}.$

In the adiabatic limit, the correlation function of x can be obtained through a weighted average over the slowly fluctuating ω,

$C_x(\tau) \sim \int_{\omega_{\min}}^{\omega_{\max}} p(\omega) \times e^{-\omega\tau}\, d\omega.$ (15)

Notice that, compared to the expression for the first passage time distribution f(t), Eq. 6, the integrand is divided by $\omega^2$: one factor of ω is dropped since $C_x(t) \propto f(t, \omega)/\omega$, and the other factor of ω, attributed to the effective number of observed transitions $\omega T_{\rm exp}$, is also dropped since the correlation function is not simply determined by the transition events. Following the same steps as for f(t) (see Appendix B), we expect that

$C_x(\tau) \sim \exp\!\left(-\frac{V\!\left(\Delta U^{-1}(T_x \log \omega_0 \tau)\right)}{T_s}\right),$ (16)

to the dominant order in the asymptotic approximation for large time delays τ. As for the FPTD, when V(s) and $\Delta U(s)$ are asymptotically equivalent, $C_x(\tau)$ behaves as a power law, $C_x(\tau) \sim \tau^{-c\,T_x/T_s}$, $c \in \mathbb{R}^+$. In this case, both the first passage time distribution $f(t) \sim t^{\beta}$ and the correlation function $C_x(\tau) \sim \tau^{\gamma}$ have power-law tails, with exponents related by $\gamma = \beta + 2$.

To illustrate these results, we return to the example of the slowly forced overdamped dynamics on a double-well potential, Eq. 12. The ergodic expectation would be that the correlation function asymptotes to $C_x(\tau) \sim \tau^{-T_x/2T_s}$. In particular, in the ergodic limit, we have $\langle x\rangle = 0$ (since the potential is symmetric around $x_f = 0$) and the connected correlation function is simply given by its non-connected counterpart, $\tilde{C}_x(\tau) \equiv \langle x(t)\, x(t+\tau)\rangle$. Indeed, if we measure the non-connected correlation function from numerical simulations of Eq. 12 (without subtracting the mean and normalizing by the variance, see Appendix A), we recover the theoretical expectation of power-law correlations for large $\tau_s$, Fig. 5(a).

FIG. 5. Power-law correlations and finite-size corrections in a slowly-driven double well potential.


(a) Estimated non-connected correlation function $\tilde{C}_x(\tau) = \langle x(t)\, x(t+\tau)\rangle$ (see Appendix A) for the position of a particle in a double-well potential driven on a timescale $\tau_s = 10^{2}\, T_{\rm exp}$. As predicted, the correlation function exhibits power-law tails, $C_x(\tau) \sim \tau^{-T_x/2T_s}$ (solid lines). Error bars represent 95% confidence intervals across 50,000 simulations. Large-τ estimates become increasingly challenging with the growth of the exponent. (b) Connected autocorrelation function, $C_x(\tau)$, directly estimated through time averages (see Appendix A), for the position of a particle in a double-well potential driven on a timescale $\tau_s = 10^{2}\, T_{\rm exp}$. Due to the existence of timescales comparable to the observation time $T_{\rm exp}$, the connected correlation function exhibits finite-size effects that drive the appearance of long-range anti-correlations, as predicted from the finite-size correction to the correlation function $C_c$ [55] derived in Appendix D (solid lines). For both the empirical $C_x(\tau)$ and $C_c$ we normalize the correlation functions by dividing by their value at $\tau = 1\,\mathrm{lag} = 5\times10^{-4}\, T_{\rm exp}$. Error bars represent 95% confidence intervals across 50,000 simulations. (c) Finite-size correction to the correlation function as a function of $T_s$ (see Appendix D). As we increase the temperature, the range of observed ω grows, and so do the deviations from the ergodic expectation, resulting in more apparent finite-size effects, with clear anti-correlations (blue) appearing for large $T_s/T_x$. Conversely, for very small $T_s$ the finite-size effects become negligible because the longest sampled $\omega^{-1}$ is much shorter than the experimental timescale $T_{\rm exp}$.

Notice, however, that in the empirical estimation of the connected correlation function, the expectation values are typically estimated by averaging in time (see Appendix A). In the presence of slow timescales, such temporal averages $\hat{\mu}_x = \frac{1}{T_{exp}}\int_0^{T_{exp}} x(t)\,dt$ deviate significantly from the ensemble average of $x$ with respect to the invariant measure $\pi(x)$, $\mu_x = \int x\,\pi(x)\,dx$. Such weak ergodicity breaking results in finite-size effects in the estimation of the connected correlation function from time series data [56]. In particular, when $x$ relaxes to its steady-state expectation value $\mu_x$ on timescales comparable to the observation time $T_{exp}$, we expect that on average $x(t) - \hat{\mu}_x$ will change sign as time progresses. This transient behavior results in apparent long-range anti-correlations, since on average $x(t) - \hat{\mu}_x$ and $x(t+\tau) - \hat{\mu}_x$ will have different signs for large $\tau$ [57]. Therefore, we expect our derivation of Eq. 16 to deviate from the direct empirical estimate of the connected autocorrelation function, particularly when slow timescales are present in the dynamics. Indeed, when we estimate the connected correlation function directly through temporal averages (see Appendix A), we observe the appearance of long-range anti-correlations, Fig. 5(b). Importantly, using our analytical derivation of the non-connected correlation $\tilde{C}_x(\tau)$ and the results of [55], we can derive an expression for the finite-size correction $C_c(\tau)$ to the connected correlation (see Appendix D) that correctly approximates the behavior of the empirical estimate of $C_x(\tau)$, Fig. 5(b).

As Ts becomes smaller, the finite-size effects become less apparent, reflecting the fact that the slowest generated timescales are closer to the mean hopping rate, Fig. 5(c). Conversely, finite-size effects become clearer the larger Ts is. In addition, for sufficiently small τs we observe that the non-connected correlation function exhibits exponential tails, which become a power law only when τs~Texp, Fig. S4(a-left), which is the regime in which our adiabatic approximation holds. However, even when the tail of the correlation function is exponential, finite-size corrections are still apparent, as long as the exponential timescales are sufficiently long to be comparable to the measurement time scale, Fig. S4(a-right,b).
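This finite-size mechanism is straightforward to reproduce numerically. The following minimal Python sketch (with illustrative parameters, not the simulation settings of Fig. 5) estimates the connected autocorrelation of a slow Ornstein-Uhlenbeck mode, subtracting the empirical (time-averaged) mean of each finite trace; because the relaxation time is comparable to the trace length, the large-lag estimates come out negative even though the true autocorrelation is positive:

```python
import numpy as np

rng = np.random.default_rng(0)

# A slow Ornstein-Uhlenbeck mode whose relaxation time tau_c is comparable to
# the trace length N, mimicking a mode that does not mix within T_exp.
M, N, tau_c = 200, 2000, 500
a = np.exp(-1.0 / tau_c)
x = np.zeros((M, N))
x[:, 0] = rng.normal(size=M)                 # start from the stationary state
for t in range(1, N):
    x[:, t] = a * x[:, t - 1] + np.sqrt(1 - a**2) * rng.normal(size=M)

# connected autocorrelation, with the *empirical* mean of each finite trace
x0 = x - x.mean(axis=1, keepdims=True)
lags = [1, 10, 100, 1000]
acf = np.array([np.mean((x0[:, :N - l] * x0[:, l:]).mean(axis=1) / x0.var(axis=1))
                for l in lags])

# Since x(t) - mu_hat changes sign along each finite trace, the lag-1000
# estimate is negative, even though the true autocorrelation e^(-2) > 0.
```

The effect disappears if the true (ensemble) mean is subtracted instead, which is why the non-connected correlation function recovers the ergodic prediction.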

We therefore uncover that slowly-fluctuating energy landscapes generally give rise to long-range temporal correlations, which become anti-correlations due to finite-size effects. In addition, we find that even when the modulation of the potential landscape is fast, finite-size effects might still be apparent as long as the variation in the barrier height is sufficient to give rise to hopping rates that are comparable to the measurement time. Our theoretical results thus qualitatively recover the non-trivial correlations observed in foraging worms, Fig. 2(b-right).

IV. DISCUSSION

We combine theory, numerics, and data analysis to show that the multiplicity of timescales inherent to animal behavior is sufficient to give rise to heavy-tailed first passage times and long-range correlations.

We start by analyzing the movement dynamics of the C. elegans nematode, where we build an effective reduced order model from high-resolution measurements of the animal pose, bridging from ~ 0.1 s chaotic posture dynamics to ~ 10 s stochastic hopping among “runs-and-pirouettes” [18]. The spatio-temporal dynamics of the animal’s pose results from a nontrivial combination of neural and biomechanical influences, for which building a first-principles microscopic model is extremely challenging. Nonetheless, we here obtain a one-dimensional stochastic differential equation that self-consistently captures the long-lived properties of the inferred dynamics, Fig. 1(d).

Technically speaking, we build an overdamped description of a partially-observed system with metastable dynamics, combining ideas from reduced order modeling [25, 5860] and stochastic model inference [32, 61, 62]. Our contribution here is mostly conceptual: instead of assuming structure a priori, we reveal it by carefully analyzing the time series. We first found that time delays are required to recover minimal-memory evolution [9, 11], then leveraged transfer operators to discover that the system exhibits timescale-separated long-lived modes [11], and finally recovered a self-consistent effective overdamped Langevin description of the slow mode dynamics. This combination of first principles approaches, which is agnostic to the properties of the system under study, is what allowed us to build such a complete picture of a system for which physical intuition is elusive. We note, however, that each step in our framework could be enhanced by incorporating modern tools from machine learning (see, e.g., [63]), and we leave that for future work.

We find heavy-tailed first passage time distributions and correlation functions in the worm data, Fig. 1(d), that could not be captured by a model featuring autonomous dynamics only. This is perhaps inevitable: behavior is influenced by a wide range of biological mechanisms acting across a multitude of scales, and the fact that an autonomous Markovian description of the posture dynamics is predictive at all is surprising [64, 65]. In fact, while our framework tries to capture the effect of hidden variables through a delay embedding, slow non-ergodic modes would require a prohibitively large number of time delays to be properly encoded in the reconstructed state-space. To capture such non-ergodic modulations, we leverage our stochastic model as a locally-adiabatic approximation of the dynamics, and allow the parameters that define the effective potential landscape to change in time. This approach allows us to improve upon the predictions of the static model, and effectively capture the heavy-tailed first passage time statistics and the long-range correlations exhibited by the worm, Fig. 2. In addition, we find that the slow modulation of the potential landscape effectively encodes the adaptation of the worm’s foraging strategy, biasing their random walk to search for food further away. In this way, we go beyond classical approaches to reduced-order modeling, recognizing the existence of non-ergodic fluctuations and introducing an explicitly non-stationary model to encode them.

Our results indicate that the heavy tails observed in C. elegans foraging behavior result from the slow adaptation of the worm's search strategy. To test this experimentally, one could perturb the neural circuits responsible for the adaptation of “pirouette” rates, such as dopaminergic and glutamatergic signaling [34]. In the absence of adaptation, we expect that, while shorter-timescale movements would remain unaffected, the dwell times in the “run” and “pirouette” states would become exponentially distributed, rather than heavy-tailed, and the correlation function would simply decay exponentially to zero on faster timescales.

The analysis of C. elegans data also suggests a general mechanism for the emergence of heavy-tailed first passage time distributions and long-range correlations in animal behavior. We investigate this by introducing a generic model in which the posture dynamics evolves on slowly fluctuating potential landscapes. We find that when non-ergodic fluctuations are sufficiently slow and strong, the first passage time distribution asymptotes to a power law, $f(t) \sim t^{-2}$, and otherwise exhibits corrections that depend on the ratio between the ergodic and non-ergodic fluctuations and on the details of the dynamics. In addition, we find that estimates of the connected correlation function exhibit long-range anti-correlations due to finite-size effects and that, in the absence of such effects, the correlations would be long-ranged, appearing scale-free when $V(s)$ and $\Delta U(s)$ are asymptotically equivalent.

In the context of animal behavior, heavy-tailed distributions of first passage times (or run lengths, at constant speed) with an exponent $f(t) \sim t^{-2}$ have been found extensively across multiple species, from bacteria [66], termites [67] and rats [68] to marine animals [69, 70], humans [71] and even fossil records [72]. In the context of search behavior, such observations have led researchers to hypothesize that Lévy flights (with an exponent of −2) result in efficient search strategies and are thus evolutionarily favorable [73–77], although this view has been met with some controversy [78, 79]. Indeed, we here show that fat tails appear simply from the fact that the animal continuously adapts its behavior over time, leading to a broad distribution of “run” times (and thus run lengths). Therefore, such emergent behavior need not be fine-tuned by evolution per se, although one might argue that it is a by-product of the evolutionarily favorable ability to perform adaptive behavior.

Interestingly, the notion that time-dependent energy barriers can give rise to power-law waiting time distributions has also been used to explain observations in bacterial chemotaxis [80]. However, the analysis of [80] concerns only a particular limit of our derivation, in which $T_s \to \infty$ and the distribution of hopping rates becomes uniform. Indeed, our analysis is more general: we consider the full dynamics of coupled overdamped Langevin equations, and predict corrections to the power-law behavior that go beyond the limits deployed in [80]. Another mechanism that has been proposed to explain the emergence of Lévy flights in animal behavior is the existence of multiplicative noise terms in the dynamics [81, 82], and this notion has recently been used to explain the emergence of Lévy flights in the collective behavior of midge swarms [83]. Our analysis is somewhat analogous to this argument. Indeed, Eqs. 3,4 give rise to an effectively colored multiplicative noise term for the quasistationary behavioral dynamics. However, our analysis also goes beyond that of [83], as we explicitly explore the dependence of the heavy tails on the relationship between the correlation time of the colored noise $\tau_s$ and the measurement time $T_{exp}$, and between the additive and multiplicative noise terms.

Our starting point is an effective description of the long-timescale dynamics, Eqs. 3,4, and further work will be required to fully bridge between the microscopic dynamics and the emergent long-time behavior, especially when a timescale separation is not evident, or when the adiabatic approximation does not hold. For example, we find that for intermediate values of $\tau_s$ ($1 \lesssim \tau_s \lesssim T_{exp}$) and finite $T_s$, the numerically estimated FPTD behaves as a truncated power law with an effective exponent that can be smaller than −2. In this regime, the barrier heights fluctuate significantly before the particle hops. Intuitively, we expect that if barrier-crossing events become uncorrelated, the extra $\omega$ correction in the FPTD, coming from the increased probability of observing hopping events for larger $\omega$, drops out, resulting in an FPTD with a dominant power law with an exponent of $t^{-1}$ instead of $t^{-2}$ [84]. In the opposite regime, we note that when $\tau_s \gg T_{exp}$, it is the distribution of initial conditions (which we here assumed to be Boltzmann distributed) that determines the emergent behavior. This assumption holds if we consider that behavioral “individuality” is equivalent to having an extremely slow mode driving the dynamics, $\tau_s \gg T_{exp}$. This would mean that in finite observations from a population of conspecifics, different animals will exhibit a degree of “individuality” that matches the steady-state distribution of such long-lived modes. Indeed, such a relationship between inter-individual variability and long-lived temporal variability in behavior has been observed in flies [85]. In this sense, when $\tau_s \gg T_{exp}$, our results are equivalent to explaining the emergence of heavy tails through inter-individual variability [86]. If such variability differs from the Boltzmann assumption, the heavy tails need to be corrected accordingly, following the steps of our derivation but with a corrected $p(s)$.

In addition to heavy-tailed FPTDs, we also derived that when the barrier height is slowly driven, long-range correlations emerge, which become anti-correlations when the ergodic assumptions break and the temporal averages become plagued by finite-size effects. Notably, a recent arXiv preprint that followed the original arXiv version of this manuscript has shown evidence for power-law correlations in the behavior of fruit flies [87]. These observations fit our theoretical predictions, and we argue that they might stem from non-ergodic internal states. Indeed, we expect there to be slow modes that evolve on timescales comparable to the 1-hour recordings used in [87], see [88, 89].

Power laws have been observed in a wide variety of systems, from solar flares [90, 91] to the brain [92], and different hypotheses have been put forward to explain their emergence (for a review see, e.g., [93]). In disordered systems, for example [94, 95], averaging over an exponential distribution of barrier heights can give rise to a broad distribution of waiting times. Note, however, that while this mechanism is at its core analogous to the one presented here, ours relies on the temporal (rather than spatial) variation of barrier heights, resulting in distinct emergent behavior that depends directly on the measurement timescale $T_{exp}$ (which sets the lowest hopping rate $\omega_{min}$) and on the magnitude of the non-ergodic fluctuations.

With respect to power laws in biological systems and, in particular, in neuroscience, work inspired by phase transitions in statistical mechanics associates such power laws with “criticality” [96, 97], since models inferred from data appear to require fine-tuning of the parameters to a special regime between two qualitatively different “phases” (see, e.g., [98]). However, power laws can emerge without fine-tuning and far from “criticality” [99], a clear example being Alder tails in hydrodynamics [100]. Here, we show how apparent “criticality” can emerge from the presence of slow non-ergodic drives, regardless of the details of the dynamics. Indeed, slow modes that evolve on timescales comparable to the observation time are challenging to infer from data, and can give rise to best-fit models that appear “critical”. While some of the arguments we have put forward have also been proposed to explain neural “criticality” [101–103], we here generalize to a wider range of model classes, using the framework of out-of-equilibrium statistical mechanics to explicitly connect the long-timescale emergent behavior with the underlying effective fluctuations. In addition, unlike other approaches [102, 104], our framework does not require explicit external drives, but simply collective modes that evolve in a weakly non-ergodic fashion.

We have used a physics approach to shed light on biological phenomena, leveraging statistical mechanics as a framework for thinking about the effect of slowly-varying internal states on animal behavior. Simultaneously, the observations from animal behavior also inspired new physics, leading to general results regarding the emergence of heavy tails in slowly-driven potential landscapes, a result that we believe is relevant to a wide range of natural systems in chemistry, biology, or finance (see, e.g., [4447, 50, 105, 106] and references therein).

Supplementary Material


ACKNOWLEDGEMENTS

We thank Adrian van Kan, Stéphan Fauve, Federica Ferretti, Tosif Ahamed, Nicola Rigoli and Arghyadip Mukherjee for their comments. This work was partially supported by the LabEx ENS-ICFP: ANR-10-LABX-0010/ANR-10-IDEX-0001-02 PSL*. AC also acknowledges useful discussions at the Aspen Center for Physics, which is supported by National Science Foundation Grant PHY-1607611.

APPENDIX A: METHODS

Software and data availability:

Code for reproducing our results is publicly available: https://github.com/AntonioCCosta/fluctuating_potential. Data can be found in [107].

C. elegans foraging dataset:

We used a previously-analyzed dataset [26], in which N2-strain C. elegans were tracked at f = 16 Hz [24]. Worms were grown at 20 °C under standard conditions [108]. Before imaging, worms were removed from bacteria-strewn agar plates using a platinum worm pick, and rinsed of E. coli by letting them swim for 1 min in NGM buffer. They were then transferred to an assay plate (9 cm Petri dish) that contained a copper ring (5.1 cm inner diameter) pressed into the agar surface, preventing the worm from reaching the sides of the plate. Recording started approximately 5 min after the transfer and lasted for 35 min.

Data-driven reduced order model of C. elegans foraging dynamics:

Building upon previous work [9–11], we extract a slow reaction coordinate that captures transitions between “runs” and “pirouettes” from the posture dynamics of C. elegans. The first step consists of performing a time-delay embedding of the instantaneous posture measurements to include short-term memory in an expanded, maximally predictive state $X_{K^*}$. The number of time delays $K^*$ used to reconstruct the state space is chosen so as to maximize predictive information [10, 11]. In this way, all the dynamics that mix on a sufficiently fast timescale (compared to the measurement time) should be included in the state. We then partition the state space into a large number of discrete symbols through k-means clustering, and choose the number of partitions so as to preserve as much information as possible in the discretization [10, 11]. The outcome of the partitioning is a symbolic sequence, where each symbol $s_i$ corresponds to a small region of the state space. We then build a Markov chain by counting transitions among state-space partitions separated by a timescale $\tau$, $P_{ij}(\tau) = P\left(s_j(t+\tau)\,\middle|\,s_i(t)\right)$ [10, 11], effectively approximating the action of the Perron-Frobenius operator (see, e.g., [109]). The transition time $\tau^* = 0.75\,\mathrm{s}$ was chosen so as to self-consistently capture the long-lived dynamics [10, 11]. The eigenfunctions of the Perron-Frobenius operator, and of its adjoint, the Koopman operator, capture global patterns of the dynamics that relax to the steady-state distribution on different timescales. In particular, the slowest eigenfunctions of these operators offer optimal reaction coordinates that capture the slow dynamics of the system [59, 60, 110]. The slowest left eigenvector of the reversibilized transition matrix, $\phi_2$, captures the transitions among “runs” and “pirouettes” that C. elegans uses to forage [11].
We find the transition point $\phi_2^c$ between “runs” and “pirouettes” by maximizing the overall coherence of the metastable states [10, 11], and recenter and rescale $\phi_2$ so that $\phi_2^c = 0$ at the transition point, with equally-spaced values within $[-2, \phi_2^c = 0]$ and $[\phi_2^c = 0, 2]$ [25]. Finally, each symbol $s_i$ assumes a particular value of $\phi_2$, and so we can translate the symbolic sequence into a stochastic time series $\phi_2(t)$ that captures transitions between “runs” and “pirouettes”, see Fig. 1(a-c).
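The pipeline can be sketched end-to-end as follows. The synthetic bistable signal, the quantile-based partition (a crude stand-in for the k-means step), and the values of the number of delays, the transition time and the number of symbols are all illustrative placeholders, not the criteria-selected values of [10, 11]:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy stand-in for the posture time series: a noisy bistable signal
T = 20000
z = np.sign(np.sin(np.linspace(0, 40 * np.pi, T))) + 0.3 * rng.normal(size=T)

K = 5                                                       # number of time delays
X = np.stack([z[i:T - K + i] for i in range(K)], axis=1)    # delay-embedded states

# crude state-space partition: quantile bins of a 1d projection
# (a stand-in for the k-means clustering used in the text)
proj = X.mean(axis=1)
edges = np.quantile(proj, np.linspace(0, 1, 21)[1:-1])
labels = np.digitize(proj, edges)                           # 20 symbols

# transition matrix P_ij(tau) from symbol counts
tau, n = 3, 20
P = np.zeros((n, n))
for i, j in zip(labels[:-tau], labels[tau:]):
    P[i, j] += 1
P /= P.sum(axis=1, keepdims=True)

# stationary distribution, reversibilization, and the slow eigenvector phi_2
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = np.abs(pi) / np.abs(pi).sum()
R = 0.5 * (P + np.diag(1.0 / pi) @ P.T @ np.diag(pi))
wR, VR = np.linalg.eig(R)
order = np.argsort(-np.real(wR))
phi2 = np.real(VR[:, order[1]])       # slowest non-trivial mode (up to the
                                      # left/right convention, related through pi)
phi2_t = phi2[labels]                 # symbolic sequence -> reaction coordinate
```

For the bistable test signal, the resulting $\phi_2(t)$ takes values of opposite sign in the two metastable regions, mirroring the run/pirouette coordinate of the text.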

Two-dimensional UMAP embedding of the reconstructed state space:

We use the UMAP embedding [111] as a tool to visualize the maximally predictive states XK* of C. elegans posture dynamics [11]. In a nutshell, the UMAP algorithm searches for a low-dimensional representation of the data that preserves its topological structure. We use a publicly available implementation of the algorithm https://github.com/lmcinnes/umap, using Chebyshev distances, n_neighbors=50 nearest neighbors and min_dist=0.05 as the minimum distance.

Stochastic model inference:

The Kramers-Moyal expansion transforms the master equation for the dynamics of Eq. 1 into a Fokker-Planck equation

$$\partial_t \rho = -\partial_{\phi_2} J_{\phi_2} = -\partial_{\phi_2}\left[F(\phi_2)\,\rho\right] + \partial^2_{\phi_2}\left[D(\phi_2)\,\rho\right], \quad (A1)$$

where Jϕ2 is the current, and

$$F(x) = \lim_{\tau\to0}\frac{1}{\tau}\left\langle \phi_2(t+\tau) - \phi_2(t)\,\middle|\,\phi_2(t) = x\right\rangle,$$
$$D(x) = \lim_{\tau\to0}\frac{1}{2\tau}\left\langle \left[\phi_2(t+\tau) - \phi_2(t)\right]^2\,\middle|\,\phi_2(t) = x\right\rangle.$$

We use this expansion to estimate Fϕ2 and Dϕ2. In practice, given a time series Yt, we estimate the averages in the Kramers-Moyal expansion using a kernel approach [32],

$$F(y)_{\tau,h} = \frac{1}{\tau}\,\frac{\left\langle K_h(y - Y_t)\,\left(Y_{t+\tau} - Y_t\right)\right\rangle_t}{\left\langle K_h(y - Y_t)\right\rangle_t},$$
$$D(y)_{\tau,h} = \frac{1}{2\tau}\,\frac{\left\langle K_h(y - Y_t)\,\left(Y_{t+\tau} - Y_t\right)^2\right\rangle_t}{\left\langle K_h(y - Y_t)\right\rangle_t},$$

where Kh(z)=h1κ(z/h) and κ is the Epanechnikov kernel [112, 113],

$$\kappa(y) = \begin{cases} \dfrac{3}{4\sqrt{5}}\left(1 - \dfrac{y^2}{5}\right), & y^2 < 5\\[4pt] 0, & y^2 \geq 5. \end{cases}$$

Importantly, the estimator has an explicit dependence on the time delay τ and the bandwidth h. First, as discussed in the main text, we choose τ long enough such that most of the temporal correlations in the noise have decayed to zero. It has been shown that τ*=0.75s gives an accurate first-order Markov model of the worm dynamics [11], and accordingly we find that a stochastic model inferred with τ*=0.75s yields nearly delta-correlated noise, Fig. S1(b). Given this time delay τ*, we choose the bandwidth through the Δ-algorithm introduced in [32]. In essence, for each bandwidth h we estimate Fτ*,h and Dτ*,h and generate simulations with the estimated Fτ*,h and Dτ*,h. From such simulations, we then re-infer the drift and diffusion from the simulated time series, obtaining Fˆτ*,h and Dˆτ*,h. Finally, we compare the re-inferred drift and diffusion to the ones estimated directly from the time series,

$$\xi(h) = \frac{\int \left|f_{\tau^*,h} - \hat{f}_{\tau^*,h}\right|\,\pi(y)\,\hat{\pi}(y)\,dy}{\int \pi(y)\,\hat{\pi}(y)\,dy}, \quad (A2)$$

where $f$ can be either $F$ or $D$, $\pi(y)$ is the steady-state distribution obtained from $F_{\tau^*,h}$ and $D_{\tau^*,h}$, and $\hat{\pi}(y)$ is the one obtained from $\hat{F}_{\tau^*,h}$ and $\hat{D}_{\tau^*,h}$. We choose $h^*$ as the first minimum of $\xi(h)$ [32], locally minimizing the difference between the original and reconstructed drift and diffusion coefficients while avoiding the trivial minimum at $h \to \infty$ (which yields constant $F$ and $D$). In Fig. S1(a) we plot the change in $\xi(h)$ as a function of $h$, which approaches zero at around $h \approx 0.1$. We choose $h^* = 0.08$ to infer the model from the time series of $\phi_2(t)$.
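A minimal, self-contained version of this kernel estimator (without the Δ-algorithm bandwidth-selection step) can be written as follows; the synthetic Ornstein-Uhlenbeck test process, with known drift $F(y) = -y$ and diffusion $D = 0.5$, is our own illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)

def epanechnikov(u):
    """kappa(y) = 3/(4 sqrt(5)) (1 - y^2/5) for y^2 < 5, and 0 otherwise."""
    return np.where(u**2 < 5, 3 / (4 * np.sqrt(5)) * (1 - u**2 / 5), 0.0)

def km_estimate(Y, tau, h, grid, dt=1.0):
    """Kernel-weighted Kramers-Moyal estimates of the drift F and diffusion D."""
    dY, Y0 = Y[tau:] - Y[:-tau], Y[:-tau]
    F, D = np.zeros(len(grid)), np.zeros(len(grid))
    for i, y in enumerate(grid):
        w = epanechnikov((y - Y0) / h) / h
        F[i] = (w * dY).sum() / w.sum() / (tau * dt)
        D[i] = (w * dY**2).sum() / w.sum() / (2 * tau * dt)
    return F, D

# synthetic overdamped dynamics dx = -x dt + sqrt(2 * 0.5) dW
dt, Nt = 0.01, 200_000
noise = np.sqrt(2 * 0.5 * dt) * rng.normal(size=Nt)
x = np.zeros(Nt)
for t in range(1, Nt):
    x[t] = x[t - 1] * (1 - dt) + noise[t]

grid = np.linspace(-1, 1, 9)
F, D = km_estimate(x, tau=1, h=0.2, grid=grid, dt=dt)
# F should be close to -y on the grid, and D close to 0.5
```

In practice, $\tau$ must be long enough for the noise to decorrelate and $h$ is set by the Δ-algorithm described above; both are fixed by hand in this sketch.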

Non-stationary stochastic model inference:

We proceed as before, but now infer Fτ*,h*w and Dτ*,h*w in overlapping 5 min windows. The window length was chosen long enough to allow for equilibration of the “run” and “pirouette” dynamics, which has a mixing time of ≈ 3 s [11], but also short enough such that the steady-state distribution remains approximately constant.

Reconstructing an effective potential landscape:

From the Fokker-Planck equation, Eq. A1, with natural boundary conditions, $J_{\phi_2} = 0$, we can obtain the steady-state solution $\rho = \pi$, satisfying $\partial_t \pi = 0$, as

$$\pi(\phi_2) \propto \exp\left(\int^{\phi_2} \frac{F(\phi_2') - \partial_{\phi_2'}D(\phi_2')}{D(\phi_2')}\,d\phi_2'\right).$$

Writing the steady-state distribution as a Boltzmann factor [114],

$$\pi(\phi_2) \propto e^{-\beta V(\phi_2)},$$

with β=1, we can identify an effective potential landscape Vϕ2,

$$V(\phi_2) = -\int^{\phi_2} \frac{F(\phi_2') - \partial_{\phi_2'}D(\phi_2')}{D(\phi_2')}\,d\phi_2'.$$

The same approach applies to the time-dependent stochastic model, where each window has its own local steady-state, and the effective potential landscape is time dependent,

$$V(\phi_2, t) = -\int^{\phi_2} \frac{F(\phi_2', t) - \partial_{\phi_2'}D(\phi_2', t)}{D(\phi_2', t)}\,d\phi_2'.$$
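Numerically, this amounts to a cumulative trapezoidal integration of $(F - \partial D)/D$ on the inference grid. A sketch with an assumed double-well drift and constant diffusion (both illustrative choices, not the inferred worm landscape):

```python
import numpy as np

# V(x) = -int [ (F(x) - D'(x)) / D(x) ] dx, so that pi(x) ~ exp(-V(x)) (beta = 1)
x = np.linspace(-2, 2, 401)
F = -(x**3 - x)             # drift of the double-well potential U(x) = x^4/4 - x^2/2
D = 0.5 * np.ones_like(x)   # constant diffusion

integrand = (F - np.gradient(D, x)) / D
V = -np.concatenate(([0.0],
                     np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))))
V -= V.min()
# with constant D = 0.5, V(x) = 2 U(x) up to a constant:
# wells at x = +-1 and a barrier of height 0.5 at x = 0
```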

Stochastic model simulations of ϕ2:

We simulate the dynamics using an Euler scheme with the same sampling time as the data, $\delta t = 1/16\,\mathrm{s}$. For the non-autonomous model, we take the $F^w_{\tau^*,h^*}$ and $D^w_{\tau^*,h^*}$ of the window whose center is closest to the sampled time point.
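The update rule is the standard Euler-Maruyama step; a toy sketch with an assumed double-well drift and constant diffusion (the windowed $F^w$, $D^w$ of the text would simply be swapped in per window):

```python
import numpy as np

rng = np.random.default_rng(5)
dt = 1 / 16                          # sampling time of the data, in seconds

def euler_step(x, F, D, dt, rng):
    """One Euler-Maruyama step of dx = F(x) dt + sqrt(2 D(x)) dW."""
    return x + F(x) * dt + np.sqrt(2 * D(x) * dt) * rng.normal()

F = lambda x: -(x**3 - x)            # illustrative double-well drift
D = lambda x: 0.25                   # illustrative constant diffusion
x = np.empty(1000)
x[0] = 1.0
for i in range(1, 1000):
    x[i] = euler_step(x[i - 1], F, D, dt, rng)
```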

Fine-scale Markov model simulations:

As in [11], we simulate symbolic sequences by sampling the next state according to the conditional probability distribution $P\left(s_j(t+\tau^*)\,\middle|\,s_i(t)\right)$, which is simply the $i$-th row of $P_{ij}(\tau^*)$. From this symbolic sequence, we can then obtain a simulated time series of $\phi_2(t)$ sampled on a timescale $\tau^*$.
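The sampling step can be sketched as follows, with a hypothetical two-state transition matrix standing in for the full $P_{ij}(\tau^*)$ of the worm data:

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_chain(P, s0, n_steps, rng):
    """Sample a symbolic sequence; each step draws the next symbol from the
    row of the transition matrix P indexed by the current symbol."""
    states = [s0]
    for _ in range(n_steps - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

# hypothetical metastable two-state chain (stand-in for "run"/"pirouette")
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])
seq = simulate_chain(P, 0, 5000, rng)
occupancy = (seq == 0).mean()        # should approach pi_0 = 2/3
```

Mapping each symbol to its value of $\phi_2$ then turns the simulated symbolic sequence into a time series, as in the text.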

Estimating the first passage time distributions in C. elegans foraging dynamics:

We estimate the time spent either performing a “run” or a “pirouette” by identifying segments where ϕ2(t)<0 (runs) or ϕ2(t)>0 (pirouettes). To remove short-time fluctuations we subsample the data and the simulated time series by τ*/2.

Empirical estimate of the connected autocorrelation function:

We estimate the connected autocorrelation function from M time traces at each lag τ=lδt, as

$$\hat{C}_x(l\delta t) = \frac{1}{M}\sum_{\alpha=1}^{M} \frac{\frac{1}{N-l}\sum_{i=1}^{N-l} x_{\alpha,i}\,x_{\alpha,i+l} - \left(\frac{1}{N}\sum_{i=1}^{N} x_{\alpha,i}\right)^2}{\frac{1}{N}\sum_{i=1}^{N} x_{\alpha,i}^2 - \left(\frac{1}{N}\sum_{i=1}^{N} x_{\alpha,i}\right)^2}, \quad (A3)$$

where $x_{\alpha,i}$ is the $i$-th frame of the $\alpha$-th trace, which has length $N = T_{exp}/\delta t$.
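Implemented directly, Eq. A3 reads (a minimal sketch; `connected_acf` is our own helper name):

```python
import numpy as np

def connected_acf(X, l):
    """Eq. (A3): connected autocorrelation at lag l, averaged over the M traces.
    X has shape (M, N); means and variances are per-trace time averages."""
    M, N = X.shape
    mu = X.mean(axis=1)
    num = (X[:, :N - l] * X[:, l:]).mean(axis=1) - mu**2
    den = (X**2).mean(axis=1) - mu**2
    return np.mean(num / den)

# sanity check on white noise: exactly 1 at lag 0, ~0 at positive lags
rng = np.random.default_rng(7)
X = rng.normal(size=(100, 1000))
c0, c5 = connected_acf(X, 0), connected_acf(X, 5)
```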

Estimating the first passage time distribution of a Poisson process with varying hopping rates:

We sample $s$ according to the Boltzmann distribution $p(s) \propto \exp\left(-\frac{(s-\mu_s)^2}{2T_s}\right)$, and convert it to a hopping rate $\omega(s)$ by numerically integrating the backward Kolmogorov equation, Eq. 10. We then sample first passage time events according to Eq. 5, until reaching the measurement timescale $T_{exp}$. We repeat this process 50,000 times, and collect the statistics of waiting times to build a normalized histogram of first passage times with logarithmic bins, which we show in Fig. 3.
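A simplified version of this procedure is sketched below. To keep it self-contained we bypass the backward Kolmogorov step (Eq. 10) and use the Arrhenius form $\omega(s) = \omega_0\,e^{-\Delta U(s)/T_x}$ with $\Delta U(s) = s^2$ and $\mu_s = 0$; all parameter values are illustrative, and the number of events within $T_{exp}$ is drawn from a Poisson approximation rather than sampled event-by-event:

```python
import numpy as np

rng = np.random.default_rng(4)

omega0, Tx, Ts, Texp = 1.0, 0.1, 1.0, 1e4     # illustrative parameters
fpts = []
for _ in range(1000):
    s = rng.normal(0.0, np.sqrt(Ts))          # Boltzmann-distributed slow mode
    w = omega0 * np.exp(-s**2 / Tx)           # Arrhenius hopping rate (assumed)
    n = rng.poisson(w * Texp)                 # number of hops within T_exp
    if n > 0:
        fpts.append(rng.exponential(1.0 / w, size=n))
fpts = np.concatenate(fpts)

# log-binned, normalized histogram of first passage times
bins = np.logspace(-1, np.log10(Texp), 40)
hist, edges = np.histogram(fpts, bins=bins, density=True)
centers = np.sqrt(edges[1:] * edges[:-1])

# the tail should fall off roughly as t^(-2 - Tx/(2 Ts)), close to t^-2 here
mask = (centers > 10) & (centers < 1e3) & (hist > 0)
slope = np.polyfit(np.log(centers[mask]), np.log(hist[mask]), 1)[0]
```

Collecting all events within the window naturally weights each realization by its rate $\omega$, which is the origin of the extra factor of $\omega$ in Eq. 6.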

Estimating the first passage time distribution in the slowly-driven double well potential:

We generate 10,000 simulations of the Langevin dynamics of Eq. 12, through an Euler scheme with a sampling time of $\delta t = 10^{-3}\,\mathrm{s}$ for $T_{exp} = 10^7\,\mathrm{s}$. We then vary $\tau_s$ in the range $[10^{-4}\,T_{exp}, 10^{4}\,T_{exp}]$ and $T_s$ in the range $[T_x/4, 2T_x]$, where $T_x = 10^{-3}$ and $\mu_s = T_x$. The initial condition $x(0)$ is sampled randomly as either $x(0) = -1$ or $x(0) = 1$ with equal probability, and $s(0) \sim \mathcal{N}(\mu_s, T_s)$ is sampled according to the Boltzmann distribution. From the simulations of $x(t)$, we then estimate the first passage time distribution by identifying all segments $[t_0, t_f]$ in which $t_0$ is the first time $x$ returns to $x_0 = \pm1$ after reaching $x_f = 0$, and $t_f$ is the first subsequent time at which $x$ reaches $x_f = 0$. Finally, we build a normalized histogram of first passage times with logarithmic bins, which we show in Figs. 4, S3.
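The segment-identification step can be sketched as follows (`first_passage_times` is our own helper name; the simulated $x(t)$ from Eq. 12 would be passed in place of the toy trajectory):

```python
import numpy as np

def first_passage_times(x, t, x0=1.0):
    """Durations from first touching x = +-x0 (after a crossing of x_f = 0)
    to the next crossing of x_f = 0."""
    fpts, t0, side = [], None, 0.0
    for xi, ti in zip(x, t):
        if t0 is None:
            if abs(xi) >= x0:            # reached a well bottom at +-x0
                t0, side = ti, np.sign(xi)
        elif np.sign(xi) != side:        # crossed x_f = 0
            fpts.append(ti - t0)
            t0 = None
    return np.array(fpts)

# toy trajectory: enters the right well at t=1, crosses 0 at t=3,
# enters the left well at t=4 and crosses 0 again at t=6
x = np.array([0.0, 1.0, 1.0, -1.0, -1.0, -1.0, 1.0])
fpts = first_passage_times(x, np.arange(7.0))
# fpts -> [2.0, 2.0]
```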

Estimating the autocorrelation functions in the slowly-driven double well potential:

We generate 50,000 simulations of the Langevin dynamics of Eq. 12, through an Euler scheme with an initial sampling time of $dt = 0.2\,\mathrm{s}$ that is downsampled to $\delta t = 100\,\mathrm{s}$ for $T_{exp} = 10^8\,\mathrm{s}$, with $T_x = 10^{-2}$, $\mu_s = T_x$, $\tau_s$ sampled in the range $[10^{-2}\,T_{exp}, 10^{2}\,T_{exp}]$ and $T_s$ sampled in the range $[1.25\,T_x, 10\,T_x]$. We then estimate the connected autocorrelation function from the simulations using Eq. A3. The non-connected correlation function is estimated as

$$\hat{\tilde{C}}_x(l\delta t) = \frac{1}{M}\sum_{\alpha=1}^{M}\frac{1}{N-l}\sum_{i=1}^{N-l} x_{\alpha,i}\,x_{\alpha,i+l},$$

and then normalized by dividing by C˜ˆx(l=1). The finite-size corrections to the correlation function are detailed in Appendix D.

APPENDIX B: FIRST PASSAGE TIME DISTRIBUTION IN SLOWLY FLUCTUATING POTENTIAL LANDSCAPES

We here derive the expression for the first passage time distribution (FPTD) in a fluctuating potential landscape. As discussed in the main text, we consider the adiabatic limit in which the FPTD can be approximated by

$$f(t) \propto \int_{\omega_{min}}^{\omega_{max}} p(\omega)\,\omega^2\,e^{-\omega t}\,d\omega,$$

where $\omega = \omega_0\,e^{-\Delta U(s)/T_x}$, i.e. $s(\omega) = \Delta U^{-1}\left(-T_x\log(\omega/\omega_0)\right)$, and $\omega_0$ is a typical (fast) frequency of the hopping dynamics [115]. The distribution $p(\omega)$ obeys $p(\omega)\,d\omega = p(s)\,ds$, where $p(s) \propto e^{-V(s)/T_s}$, and is thus given by

$$p(\omega) \propto \exp\left(-\frac{V(s(\omega))}{T_s}\right)\frac{T_x/\omega}{\partial_s\Delta U(s)}.$$

Plugging this into Eq. 6, we get

$$f(t) \propto \int_{\omega_{min}}^{\omega_{max}} \exp\left(-\frac{V(s(\omega))}{T_s}\right)\frac{T_x}{\partial_s\Delta U(s)}\,\omega\,e^{-\omega t}\,d\omega. \quad (B4)$$

The exponential factor $e^{-\omega t}$ restricts the contributions to $\omega \sim 1/t$, which motivates the change of variable $\omega = \theta/t$. The above integral is then recast in the form

$$f(t) \propto t^{-2}\int_{\theta_{min}(t)}^{\theta_{max}(t)} \frac{\exp\left(-\theta - \frac{V(s(\theta))}{T_s} + \log\theta\right)}{\partial_s\Delta U(s(\theta))}\,d\theta, \quad (B5)$$

where $s(\theta) = \Delta U^{-1}\left(-T_x\log\frac{\theta}{\omega_0 t}\right)$, $\theta_{min}(t) = \omega_{min}\,t$ and $\theta_{max}(t) = \omega_{max}\,t$.

To grasp the structure of the integral, it is convenient to first consider the special case where $V$ and $\Delta U$ can be written as power series with an equal dominant exponent $n$ (at large values of the argument, see below): $V(s) \sim a s^n$ and $\Delta U(s) \sim b s^n$, with $a, b \in \mathbb{R}$. The integral then reduces to the form

$$f(t) \propto t^{-2-\frac{aT_x}{bT_s}}\int_{\theta_{min}}^{\theta_{max}} \theta^{1+\frac{aT_x}{bT_s}}\,e^{-\theta}\left[\log\frac{\omega_0 t}{\theta}\right]^{\frac{1}{n}-1} d\theta.$$

It remains to verify that the time dependencies at the denominator of the integrand and in the limits of integration do not spoil the behavior at large times. This is verified by noting that the numerator of the integrand has the structure of an Euler $\Gamma$ function of order $2 + \frac{aT_x}{bT_s}$. The numerator of the integrand has its maximum at $\theta^* = 1 + \frac{aT_x}{bT_s}$, decays over a range of values of order unity (we consider $\frac{aT_x}{bT_s}$ to be small or of order unity) and vanishes at the origin. In that range, the argument of the power at the denominator obeys $\log(\omega_0 t) - \log\theta \approx \log(\omega_0 t)$, which yields the final scaling with subdominant logarithmic corrections

$$f(t) \sim t^{-2-\frac{aT_x}{bT_s}}\times\left[\log(\omega_0 t)\right]^{\frac{1}{n}-1}. \quad (B6)$$

To complete the argument, we note that the time dependency of $\theta_{min}$ is not an issue as long as values $\theta \sim \mathcal{O}(1)$ are in the integration range. In practice, this means that the inverse of the minimum hopping rate should be comparable to (or larger than) the measurement time, $\omega_{min}^{-1} \sim \mathcal{O}(T_{exp})$.

Before moving to the general case, two remarks are in order. First, for $\omega_0 t \gg 1$ the functions $V$ and $\partial_s\Delta U$ that appear in Eq. B5 have their argument $s \gg 1$. The dominant behavior of the two functions should then be understood for large values of their arguments. Second, the denominator $\partial_s\Delta U$ could a priori be included in the exponential at the numerator, but this does not modify our conclusion. It is indeed easy to verify that the maximum $\theta^*$ and the decay range would not be shifted at the dominant order (and this holds also for the general case considered hereafter).

We can now consider the general case with different dominant exponents, $V(s) \sim a s^n$ and $\Delta U(s) \sim b s^k$, with $a, b \in \mathbb{R}$. The argument of the exponential in Eq. B5,

$$L(\theta) = -\theta - \frac{V(s(\theta))}{T_s} + \log\theta, \quad (B7)$$

has its maximum at θ*, defined by the implicit equation

$$\theta^* = 1 + \frac{T_x}{T_s}\,\frac{\partial_s V(s(\theta^*))}{\partial_s\Delta U(s(\theta^*))} = 1 + \frac{T_x}{T_s}\,\frac{an}{bk}\,s^{n-k},$$

where we have used

$$\partial_\theta V(s) = \partial_s V(s)\times\frac{ds(\theta)}{d\theta}\,;\qquad \frac{ds(\theta)}{d\theta} = -\frac{T_x/\theta}{\partial_s\Delta U(s)}.$$

For $n < k$, the maximum $\theta^* \to 1$ (as $s \gg 1$) and the integrand decays in a range of order unity. Indeed, the dominant order of the derivatives $\partial^p_\theta L$ ($p \geq 2$) at $\theta = \theta^*$ coincides with those of $\log\theta$. It follows that $L(\theta) \approx L(\theta^*) + \log(\theta/\theta^*) - (\theta - \theta^*)$. The resulting integral over $\theta$ is an Euler $\Gamma$-function of order two, which indeed forms at values $\mathcal{O}(1)$. In that range, $s \sim \left(\frac{T_x}{b}\log\omega_0 t\right)^{1/k}$ and the integral is then approximated by $\exp\left(L(\theta^*)\right)$, so $f(t)$ becomes

$$f(t) \sim t^{-2}\,\exp\left(-\frac{a}{T_s}\left(\frac{T_x}{b}\log\omega_0 t\right)^{n/k}\right).$$

The factor at the denominator in Eq. B5 is $\mathcal{O}\left(\exp\left[\left(\frac{1}{k}-1\right)\log\log\omega_0 t\right]\right)$, and thus of the same order as terms that we have already discarded in our approximation, so we neglect it as well. Since the integral over $\theta$ forms at values $\mathcal{O}(1)$, the constraint on the minimum hopping rate is the same as for the $n = k$ case, i.e., $\omega_{min}^{-1} \sim \mathcal{O}(T_{exp})$.

For $n > k$, the maximum $\theta^* \sim (\log\omega_0 t)^{n/k - 1}$, which is now large. The dominant order of the derivatives $\partial^p_\theta L$ ($p \geq 2$) at $\theta = \theta^*$ is given by $(-1)^{p-1}(p-1)!\,\theta^{*\,1-p}$, that is, they coincide with those of $\theta^*\log\theta$. It follows that $L(\theta) \approx L(\theta^*) + \theta^*\log(\theta/\theta^*) - (\theta - \theta^*)$. The resulting integral over $\theta$ is an Euler $\Gamma$-function of (large) argument $\theta^* + 1$: its value is approximated by the Stirling formula, which yields $\int (\theta/\theta^*)^{\theta^*}\,e^{-(\theta - \theta^*)}\,d\theta \propto \sqrt{\theta^*}$. The $\sqrt{\theta^*}$ reflects the fact that the integral forms around the maximum of the integrand at $\theta^*$, over a range $\propto \sqrt{\theta^*}$, which implies that the approximation $\log\frac{\theta}{\omega_0 t} \approx -\log\omega_0 t$ still holds, as in the previous cases $n \leq k$. The $\sqrt{\theta^*}$, as well as the $\left(\log\omega_0 t\right)^{\frac{1}{k}-1}$ factor coming from the denominator in Eq. B5, are subdominant with respect to terms that we have neglected in the expansion of $L$. We therefore discard them from our final approximation for $n > k$:

$$f(t) \sim t^{-2}\,\exp\left(-\frac{a}{T_s}\left(\frac{T_x}{b}\log\omega_0 t\right)^{n/k}\right).$$

Since the integral over $\theta$ forms at values $\mathcal{O}\left((\log\omega_0 t)^{n/k-1}\right) \gg 1$, the condition $\omega_{min}^{-1} \sim \mathcal{O}(T_{exp})$ ensures a fortiori that the finite value of $\omega_{min}$ does not affect the above result.

Discarding subdominant terms, in all three cases we thus get the general expression we present in the main text,

$$f(t) \sim t^{-2}\,\exp\left(-\frac{a}{T_s}\left(\frac{T_x}{b}\log\omega_0 t\right)^{n/k}\right). \quad (B8)$$

To verify the validity of the above arguments, we show in Fig. S5 how, to the dominant order, the asymptotic predictions agree with a detailed numerical integration of Eq. B4 for $\Delta U(s) = s^k$ and $V(s) = s^n$.
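This check can be reproduced with a few lines of Python; we integrate Eq. 6 directly in the $s$ variable (so that the Jacobian cancels) for $n = k = 2$, with illustrative parameters rather than those of Fig. S5. Eq. B6 then predicts $f(t) \sim t^{-2 - aT_x/bT_s} = t^{-2.5}$, slightly steepened by the logarithmic correction:

```python
import numpy as np

a, b, Tx, Ts, omega0 = 1.0, 1.0, 0.5, 1.0, 1.0   # illustrative, with n = k = 2

s = np.linspace(1e-6, 25.0, 250001)
ds = s[1] - s[0]
w = omega0 * np.exp(-b * s**2 / Tx)              # omega(s) = omega0 exp(-DU(s)/Tx)
p = np.exp(-a * s**2 / Ts)                       # p(s) ~ exp(-V(s)/Ts), unnormalized

def f(t):
    """f(t) ~ int p(s) omega(s)^2 exp(-omega(s) t) ds (Eq. 6 in the s variable)."""
    y = p * w**2 * np.exp(-w * t)
    return (0.5 * (y[1:] + y[:-1]) * ds).sum()   # trapezoidal rule

ts = np.logspace(2, 5, 25)
slope = np.polyfit(np.log(ts), np.log([f(t) for t in ts]), 1)[0]
# slope should be close to -(2 + a*Tx/(b*Ts)) = -2.5
```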

APPENDIX C: CORRELATION FUNCTIONS IN SLOWLY FLUCTUATING POTENTIAL LANDSCAPES

We here derive the expression for the correlation function in a fluctuating potential landscape. In general, the autocorrelation function can be expressed as a sum over exponential functions [33, 54],

$$C_x(\tau) = \frac{\langle x(t)x(t+\tau)\rangle - \langle x\rangle^2}{\langle x^2\rangle - \langle x\rangle^2} = \sum_i c_i\,e^{-\Lambda_i\tau},$$

where the $\Lambda_i$ (with $\Lambda_1 < \Lambda_2 < \ldots$) are the eigenvalues of the Fokker-Planck operator and $\sum_i c_i = 1$. Since we are interested in the long-term behavior of systems with energy barriers that can fluctuate over time, we assume that the large-$\tau$ behavior of the correlation function asymptotes to

$$C_x(\tau) \sim e^{-\Lambda_1\tau},$$

where $\Lambda_1$ is the first non-trivial eigenvalue, which captures the longest-lived dynamics in the system. In addition, we assume that there is always a deeper well, with an escape rate $\omega$, that dominates the long-lived dynamics. In this case, $\Lambda_1 \approx \omega$ and we have

$$C_x(\tau) \sim e^{-\omega\tau}.$$

As previously discussed, we take the adiabatic approximation to derive the asymptotic behavior of the correlation function in the presence of slow non-ergodic modulation of the potential landscape. In particular, we obtain a weighted average of the correlation function over multiple realizations of ω(s), yielding

C_x(\tau) \sim \int_{\omega_{\min}}^{\omega_{\max}} p(\omega)\, e^{-\omega\tau}\, d\omega. (C9)

Note that, in comparison with Eq. 6, besides dropping an ω factor due to the difference between f(t,ω), Eq. 5, and C_x(τ), we also do not need to take into account the extra factor of ω coming from the finite observation time, which in the estimation of first passage times biases the probability density in a manner proportional to ω. In the case of the correlation function, the dynamics of x is exposed to modulations in s regardless of ω(s). Following the same steps as before, we consider that V and ΔU can be written as series expansions with dominant terms V(s) ∼ a s^n and ΔU(s) ∼ b s^k, with a, b ∈ ℝ. In this case, we find that, to dominant order,

C_x(\tau) \sim \exp\left[-\frac{a}{T_s}\left(\frac{T_x}{b}\log\omega_0 \tau\right)^{n/k}\right]. (C10)

Notably, when n=k, we obtain power law correlations with an exponent that depends on the ratio of temperatures,

C_x(\tau) \sim \tau^{-\frac{a T_x}{b T_s}} \left(\log\omega_0 \tau\right)^{\frac{1}{n}-1}, (C11)

where we have included sub-dominant corrections. In particular, we find that when n = k both the first passage time distribution f(t) ∼ t^{−β} and the correlation function C_x(τ) ∼ τ^{−γ} exhibit power-law behavior at large times, with related exponents γ = β − 2. In addition, when T_s → ∞, correlations decay slowly, as C_x(τ) ∼ (log ω₀τ)^{1/n−1}.
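The same kind of check applies to Eq. C11: integrating the weighted average of Eq. C9 numerically in the s variable, the effective power-law exponent can be compared with −aT_x/(bT_s). A minimal sketch, with illustrative parameter values of our choosing:

```python
import numpy as np
from scipy.integrate import quad

# Adiabatic average of exp(-omega(s) tau) over the Boltzmann weight
# exp(-V(s)/Ts), with V(s) = a s^2 and Delta U(s) = b s^2 (n = k = 2),
# so that omega(s) = omega0 * exp(-b s^2 / Tx).
Tx, Ts, omega0 = 1.0, 1.0, 1.0
a, b = 0.5, 1.0

def C(tau):
    omega = lambda s: omega0 * np.exp(-b * s**2 / Tx)
    s_star = np.sqrt(Tx * np.log(omega0 * tau) / b)  # where omega(s) * tau ~ 1
    val, _ = quad(lambda s: np.exp(-a * s**2 / Ts) * np.exp(-omega(s) * tau),
                  0.0, s_star + 10.0, points=[0.8 * s_star, s_star], limit=200)
    return val

# Eq. C11 predicts C_x(tau) ~ tau^(-a Tx / (b Ts)) = tau^(-1/2) here, up to a
# (log omega0 tau)^(-1/2) correction; measure the effective slope at large tau.
slope = np.log(C(1e8) / C(1e6)) / np.log(1e8 / 1e6)
```

The measured slope sits slightly below −1/2 at finite τ, consistent with the logarithmic correction in Eq. C11.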

APPENDIX D: FINITE-SIZE CORRECTION TO THE CORRELATION FUNCTION

When estimating the correlation function from a collection of M finite time traces sampled at δt and with length N=Texp/δt, we compute,

\hat{C}_x(\tau = l\,\delta t) = \frac{1}{M}\sum_{\alpha=1}^{M} \frac{\frac{1}{N-l}\sum_{i=1}^{N-l} x_{\alpha,i}\, x_{\alpha,i+l} - \left(\frac{1}{N}\sum_{i=1}^{N} x_{\alpha,i}\right)^2}{\frac{1}{N}\sum_{i=1}^{N} x_{\alpha,i}^2 - \left(\frac{1}{N}\sum_{i=1}^{N} x_{\alpha,i}\right)^2},

where x_{α,i} is the i-th frame of the α-th trace. Assuming that the finite-size corrections to the correlation function are dominated by corrections to the mean value (and not the variance), we can leverage the derivation of Desponds et al. [55] to obtain an expression for the finite-size corrections to the correlation function from the non-connected correlation function C̃_x(τ) = ⟨x(t)x(t+τ)⟩,

C_c(\tau) \sim \tilde{C}(l) + \frac{1}{N}\left(\frac{1}{N} - \frac{2}{N-l}\right)\left[N\,\tilde{C}(0) + \sum_{k=1}^{N-1} 2(N-k)\,\tilde{C}(k)\right] + \frac{2}{N(N-l)}\left[l\,\tilde{C}(0) + \sum_{k=1}^{l-1} 2(l-k)\,\tilde{C}(k) + \sum_{m=1}^{N-1} \tilde{C}(m)\left(\min(m+l,N) - \max(l,m)\right)\right]. (D12)
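Both the estimator above and the correction terms of Eq. D12 are straightforward to implement; the sketch below checks them against iid noise, a test case and tolerance of our choosing rather than the paper's:

```python
import numpy as np

def connected_corr(X):
    # Per-trace connected autocorrelation, normalized by the per-trace
    # variance, then averaged over the M traces (the estimator above).
    M, N = X.shape
    out = np.zeros(N)
    for x in X:
        mean2 = x.mean() ** 2
        var = (x**2).mean() - mean2
        for l in range(N):
            out[l] += ((x[:N - l] * x[l:]).mean() - mean2) / var
    return out / M

def d12_bias(Ctil, l):
    # Finite-size correction terms of Eq. D12, given the non-connected
    # correlation Ctil(k), k = 0..N-1, of the underlying process.
    N = len(Ctil)
    S = N * Ctil[0] + sum(2 * (N - k) * Ctil[k] for k in range(1, N))
    cross = (l * Ctil[0]
             + sum(2 * (l - k) * Ctil[k] for k in range(1, l))
             + sum(Ctil[m] * (min(m + l, N) - max(l, m)) for m in range(1, N)))
    return (1 / N) * (1 / N - 2 / (N - l)) * S + 2 / (N * (N - l)) * cross

# Sanity check on iid unit-variance noise, for which Ctil = (1, 0, 0, ...)
# and Eq. D12 predicts a uniform bias of -1/N at every lag.
M, N = 2000, 50
rng = np.random.default_rng(0)
X = rng.standard_normal((M, N))
Chat = connected_corr(X)
Ctil = np.zeros(N)
Ctil[0] = 1.0
bias_pred = d12_bias(Ctil, 3)  # expected: -1/N
bias_meas = Chat[3]            # true connected correlation at lag 3 is 0
```

For iid noise the correction collapses to the familiar −1/N bias of the connected estimator, which the simulated traces reproduce within sampling noise.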

As detailed in the main text, as a case study we take the overdamped dynamics of the position x of a particle in a symmetric double well potential, in which the barrier height fluctuates according to a slow parameter s, Eq. 12. The time scale separation between the hopping events and the relaxation within the wells means that the correlation function is dominated by the first nontrivial eigenvalue, C_x(τ) ∼ e^{−Λ₁τ} = e^{−2ωτ}, where Λ₁ = 2ω because the two potential wells have the same depth [33]. Taking V(s) ∼ s²/2 and ΔU(s) = s², we obtain that, in the asymptotic large-τ limit, C_x(τ) ∼ τ^{−T_x/(2T_s)} (log ω₀τ)^{−1/2}, Fig. 5(a).

To obtain an accurate estimate of the correlation function for all τ, we go beyond the asymptotic approximation and numerically integrate

\tilde{C}_x(\tau) \sim \int e^{-V(s)/T_s}\, e^{-2\omega(s)\tau}\, ds, (D13)

where ω(s) can be estimated directly by integrating the Kolmogorov backward equation. At large τ, the numerical integration of C̃_x(τ) matches the asymptotic behavior τ^{−T_x/(2T_s)}. Plugging Eq. D13 into Eq. D12, we obtain the correction to the autocorrelation function C_c(τ) presented in Figs. 5(b,c).
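As an illustration of estimating ω(s) from the backward Kolmogorov equation, the sketch below assumes a concrete double-well form, U(x) = s(x² − 1)², as a hypothetical stand-in for Eq. 12, and compares the mean-first-passage-time rate with the Kramers approximation:

```python
import numpy as np
from scipy.integrate import quad

# Assumed double-well form U(x) = s * (x^2 - 1)^2: wells at x = +/-1 and a
# barrier of height Delta U = s at x = 0; overdamped dynamics with D = Tx.
Tx = 0.4

def U(x, s):
    return s * (x**2 - 1.0) ** 2

def escape_rate(s):
    # MFPT from the well at x = -1 to the barrier top at x = 0, from the
    # standard double-integral solution of the backward Kolmogorov equation:
    # T = (1/D) int_{-1}^{0} dy e^{U(y)/D} int_{-inf}^{y} dz e^{-U(z)/D}.
    inner = lambda y: quad(lambda z: np.exp(-U(z, s) / Tx), -6.0, y)[0]
    T, _ = quad(lambda y: np.exp(U(y, s) / Tx) * inner(y), -1.0, 0.0)
    # At the top the particle recrosses half the time: omega = 1 / (2 MFPT).
    return Tx / (2.0 * T)

# Kramers: omega ~ (sqrt(U''(well) |U''(top)|) / 2 pi) * exp(-s/Tx), which for
# this potential is (2 sqrt(2) / pi) * s * exp(-s/Tx).
s = 4.0
k_num = escape_rate(s)
k_kramers = (2.0 * np.sqrt(2.0) / np.pi) * s * np.exp(-s / Tx)
```

For a deep barrier (ΔU/T_x = 10 here) the numerically integrated rate and the Kramers estimate agree to within anharmonic corrections of order T_x/ΔU.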

References

  • [1].Pereira T. D., Shaevitz J. W., and Murthy M., Nature Neuroscience 23, 1537 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [2].Mathis M. W. and Mathis A., Current Opinion in Neurobiology 60, 1 (2020). [DOI] [PubMed] [Google Scholar]
  • [3].Hebert L., Ahamed T., Costa A. C., O’Shaughnessy L., and Stephens G. J., PLoS Computational Biology 17, e1008914 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [4].Berman G. J., BMC Biol. 16, 10.1186/s12915-018-0494-7 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [5].Sethna J. P., Statistical Mechanics: Entropy, Order Parameters and Complexity, first edition ed. (Oxford University Press, Great Clarendon Street, Oxford: OX2 6DP, 2006). [Google Scholar]
  • [6].Crank J., The Mathematics of Diffusion, Oxford science publications (Clarendon Press, 1979). [Google Scholar]
  • [7].Brenner S., Genetics 77, 71 (1974). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [8].Bargmann C. I. and Marder E., Nat Methods 10, 10.1016/j.cub.2012.01.061 (2013). [DOI] [PubMed] [Google Scholar]
  • [9].Ahamed T., Costa A. C., and Stephens G. J., Nature Physics 17, 275 (2021). [Google Scholar]
  • [10].Costa A. C., Ahamed T., Jordan D., and Stephens G. J., Chaos: An Interdisciplinary Journal of Nonlinear Science 33, 023136 (2023). [DOI] [PubMed] [Google Scholar]
  • [11].Costa A. C., Ahamed T., Jordan D., and Stephens G. J., A markovian dynamics for C. elegans behavior across scales (2023), arXiv:2310.12883 [physics.bio-ph]. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [12].Takens F., in Dynamical Systems and Turbulence, Warwick 1980, edited by Rand D. and Young L.-S. (Springer Berlin Heidelberg, Berlin, Heidelberg, 1981) pp. 366–381. [Google Scholar]
  • [13].Sugihara G. and May R. M., Nature 344, 734 (1990). [DOI] [PubMed] [Google Scholar]
  • [14].Sauer T., Yorke J. A., and Casdagli M., Journal of Statistical Physics 65, 579 (1991). [Google Scholar]
  • [15].Stark J., Journal of Nonlinear Science 9, 255 (1999). [Google Scholar]
  • [16].Stark J., Broomhead D. S., Davies M., and Huke J., Journal of Nonlinear Science 13, 519 (2003). [Google Scholar]
  • [17].Pierce-Shimomura J. T., Morse T. M., and Lockery S. R., J. Neurosci. 19, 9557 (1999). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [18].Fujiwara M., Sengupta P., and McIntire S. L., Neuron 36, 1091 (2002). [DOI] [PubMed] [Google Scholar]
  • [19].Berg H. C., E. coli in Motion, Biological and medical physics series (Springer, New York, 2004). [Google Scholar]
  • [20].Berman G. J., Choi D. M., Bialek W., and Shaevitz J. W., J. Royal Soc. Interface 11, 1 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [21].Wiltschko A. B., Johnson M. J., Iurilli G., Peterson R. E., Katon J. M., Pashkovski S. L., Abraira V. E., Adams R. P., and Datta S. R., Neuron 88, 1121 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [22].Johnson R. E., Linderman S., Panier T., Wee C. L., Song E., Herrera K. J., Miller A., and Engert F., Current Biology 30, 70 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [23].Brown A. X. and de Bivort B., Nature Physics 14, 653 (2018). [Google Scholar]
  • [24].Broekmans O. D., Rodgers J. B., Ryu W. S., and Stephens G. J., eLife 5(e17227), 10.7554/eLife.17227 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [25].Froyland G., Gottwald G. A., and Hammerlindl A., SIAM Journal on Applied Dynamical Systems 13, 1816 (2014). [Google Scholar]
  • [26].Stephens G. J., Johnson-Kerner B., Bialek W., and Ryu W. S., PLoS Comput. Biol. 4, e1000028 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [27].Mori H., Progress of Theoretical Physics 33, 423 (1965). [Google Scholar]
  • [28].Zwanzig R., Journal of Statistical Physics 9, 215 (1973). [Google Scholar]
  • [29].Rupe A., Vesselinov V. V., and Crutchfield J. P., New Journal of Physics 24, 103033 (2022). [Google Scholar]
  • [30].This timescale is typically referred to as the Markov-Einstein scale [61, 116].
  • [31].We use the Itô interpretation of the stochastic dynamics (see, e.g., [117]).
  • [32].Lamouroux D. and Lehnertz K., Physics Letters A 373, 3507 (2009). [Google Scholar]
  • [33].Risken H. and Haken H., The Fokker-Planck Equation: Methods of Solution and Applications Second Edition (Springer, 1989). [Google Scholar]
  • [34].Hills T., Brockie P. J., and Maricq A. V., Journal of Neuroscience 24, 1217 (2004). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [35].Gray J. M., Hill J. J., and Bargmann C. I., Proceedings of the National Academy of Sciences 102, 10.1073/pnas.0409009101 (2005). [DOI] [Google Scholar]
  • [36].Salvador L. C., Bartumeus F., Levin S. A., and Ryu W. S., Journal of the Royal Society Interface 11, 10.1098/rsif.2013.1092 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [37].Calhoun A. J., Tong A., Pokala N., Fitzpatrick J. A. J., Sharpee T. O., and Chalasani S. H., Neuron 86, 428 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [38].Hums I., Riedl J., Mende F., Kato S., Kaplan H. S., Latham R., Sonntag M., Traunmüller L., and Zimmer M., eLife 5, 1 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [39].Flavell S. W., Raizen D. M., and You Y.-J., Genetics 216, 315 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [40].Bargmann C. I., BioEssays 34, 458 (2012). [DOI] [PubMed] [Google Scholar]
  • [41].Marder E., Neuron 76, 1 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [42].Flavell S. W., Gogolla N., Lovett-Barron M., and Zelikowsky M., Neuron 110, 2545 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [43].Hänggi P., Talkner P., and Borkovec M., Rev. Mod. Phys. 62, 251 (1990). [Google Scholar]
  • [44].Szabo A., Schulten K., and Schulten Z., The Journal of Chemical Physics 72, 4350 (1980). [Google Scholar]
  • [45].Condamin S., Tejedor V., Voituriez R., Bénichou O., and Klafter J., Proceedings of the National Academy of Sciences 105, 10.1073/pnas.0712158105 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [46].Bénichou O. and Voituriez R., Physics Reports 539, 225 (2014). [Google Scholar]
  • [47].Chicheportiche R. and Bouchaud J.-P., Some applications of first-passage ideas to finance, in First-Passage Phenomena and Their Applications (World Scientific, 2014) Chap. 1, pp. 447–476. [Google Scholar]
  • [48].Grebenkov D. S., Journal of Physics A: Mathematical and Theoretical 48, 013001 (2014). [Google Scholar]
  • [49].Hänggi P., Chemical Physics 180, 157 (1994). [Google Scholar]
  • [50].Bénichou O., Guérin T., and Voituriez R., Journal of Physics A: Mathematical and Theoretical 48, 163001 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [51].Godec A. and Metzler R., Phys. Rev. X 6, 041037 (2016). [Google Scholar]
  • [52].Van Kampen N. G., Stochastic processes in physics and chemistry (North-Holland, Amsterdam, 1981). [Google Scholar]
  • [53].We note that this is generally true even for fixed initial conditions as long as τs < Texp. If τs ~ Texp, then p(ω) is primarily defined by the distribution of initial conditions. However, when the initial conditions are well approximated by a normal distribution with variance σ2, the denominator in the Boltzmann weight should be changed accordingly and this will change the final form of the first passage time distribution. Nonetheless, the derivation we present is general and can be adapted for a given p(s), see Appendix B.
  • [54].Coffey W. T., Kalmykov Y. P., and Waldron J. T., The Langevin Equation, 2nd ed. (WORLD SCIENTIFIC, 2004). [Google Scholar]
  • [55].Desponds J., Tran H., Ferraro T., Lucas T., Perez Romero C., Guillou A., Fradin C., Coppey M., Dostatni N., and Walczak A. M., PLOS Computational Biology 12, 1 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [56].Cavagna A., Giardina I., and Grigera T. S., Physics Reports 728, 1 (2018). [Google Scholar]
  • [57].A similar observation has been made in the analysis of spatial correlation functions in flocks of birds [118].
  • [58].Givon D., Kupferman R., and Stuart A., Nonlinearity 17, 1 (2004). [Google Scholar]
  • [59].Coifman R. R., Kevrekidis I. G., Lafon S., Maggioni M., and Nadler B., Multiscale Modeling & Simulation 7, 842 (2008), 10.1137/070696325. [DOI] [Google Scholar]
  • [60].Giannakis D., Applied and Computational Harmonic Analysis 47, 338 (2019). [Google Scholar]
  • [61].Callaham J. L., Loiseau J.-C., Rigas G., and Brunton S. L., Nonlinear stochastic modeling with langevin regression (2020), arXiv:2009.01006 [cond-mat.stat-mech]. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [62].Frishman A. and Ronceray P., Phys. Rev. X 10, 021009 (2020). [Google Scholar]
  • [63].Dietrich F., Makeev A., Kevrekidis G., Evangelou N., Bertalan T., Reich S., and Kevrekidis I. G., Chaos: An Interdisciplinary Journal of Nonlinear Science 33, 023121 (2023). [DOI] [PubMed] [Google Scholar]
  • [64].Berman G. J., Bialek W., and Shaevitz J. W., Proceedings of the National Academy of Sciences 104, 20167 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [65].Alba V., Berman G. J., Bialek W., and Shaevitz J. W., Exploring a strongly non-markovian animal behavior (2020), arXiv:2012.15681 [q-bio.NC]. [Google Scholar]
  • [66].Korobkova E., Emonet T., Vilar J. M. G., Shimizu T. S., and Cluzel P., Nature 428, 574 (2004). [DOI] [PubMed] [Google Scholar]
  • [67].Miramontes O., DeSouza O., Paiva L. R., Marins A., and Orozco S., PLOS ONE 9, 1 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [68].Jung K., Jang H., Kralik J. D., and Jeong J., PLOS Computational Biology 10, 1 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [69].Humphries N. E., Queiroz N., Dyer J. R. M., Pade N. G., Musyl M. K., Schaefer K. M., Fuller D. W., Brunnschweiler J. M., Doyle T. K., Houghton J. D. R., Hays G. C., Jones C. S., Noble L. R., Wearmouth V. J., Southall E. J., and Sims D. W., Nature 465, 1066 (2010). [DOI] [PubMed] [Google Scholar]
  • [70].Sims D. W., Southall E. J., Humphries N. E., Hays G. C., Bradshaw C. J. A., Pitchford J. W., James A., Ahmed M. Z., Brierley A. S., Hindell M. A., Morritt D., Musyl M. K., Righton D., Shepard E. L. C., Wearmouth V. J., Wilson R. P., Witt M. J., and Metcalfe J. D., Nature 451, 1098 (2008). [DOI] [PubMed] [Google Scholar]
  • [71].Raichlen D. A., Wood B. M., Gordon A. D., Mabulla A. Z. P., Marlowe F. W., and Pontzer H., Proceedings of the National Academy of Sciences 111, 10.1073/pnas.1318616111 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [72].Sims D. W., Reynolds A. M., Humphries N. E., Southall E. J., Wearmouth V. J., Metcalfe B., and Twitchett R. J., Proceedings of the National Academy of Sciences 111, 10.1073/pnas.1405966111 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [73].Viswanathan G. M., Buldyrev S. V., Havlin S., da Luz M. G., Raposo E. P., and Stanley H. E., Nature 401, 911 (1999). [DOI] [PubMed] [Google Scholar]
  • [74].Wosniack M. E., Santos M. C., Raposo E. P., Viswanathan G. M., and da Luz M. G. E., Phys. Rev. E 91, 052119 (2015). [DOI] [PubMed] [Google Scholar]
  • [75].Wosniack M. E., Santos M. C., Raposo E. P., Viswanathan G. M., and da Luz M. G. E., PLOS Computational Biology 13, 1 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [76].Guinard B. and Korman A., Science Advances 7, eabe8211 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [77].Clementi A., d’Amore F., Giakkoupis G., and Natale E., in Proceedings of the 2021 ACM Symposium on Principles of Distributed Computing, PODC’21 (Association for Computing Machinery, New York, NY, USA, 2021) p. 81–91. [Google Scholar]
  • [78].Pyke G. H., Methods in Ecology and Evolution 6, 1 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [79].Reynolds A., Physics of Life Reviews 14, 59 (2015). [DOI] [PubMed] [Google Scholar]
  • [80].Tu Y. and Grinstein G., Phys. Rev. Lett. 94, 208101 (2005). [DOI] [PubMed] [Google Scholar]
  • [81].Biró T. S. and Jakovác A., Phys. Rev. Lett. 94, 132302 (2005). [DOI] [PubMed] [Google Scholar]
  • [82].Lubashevsky I., Friedrich R., and Heuer A., Phys. Rev. E 79, 011110 (2009). [DOI] [PubMed] [Google Scholar]
  • [83].Reynolds A. M. and Ouellette N. T., Scientific Reports 6, 30515 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [84].Interestingly, fast-fluctuating hopping rates and scale-invariance arguments have been used to explain heavy-tailed distributions of uncorrelated resting times in mice [119].
  • [85].Hernández D. G., Rivera C., Cande J., Zhou B., Stern D. L., and Berman G. J., eLife 10, e61806 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [86].Petrovskii S., Mashanova A., and Jansen V. A. A., Proceedings of the National Academy of Sciences 108, 8704 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [87].Bialek W. and Shaevitz J. W., Long time scales, individual differences, and scale invariance in animal behavior (2023), arXiv:2304.09608 [q-bio.NC]. [DOI] [PubMed] [Google Scholar]
  • [88].Qiao B., Li C., Allen V. W., Shirasu-Hiza M., and Syed S., eLife 7, e34497 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [89].Overman K. E., Choi D. M., Leung K., Shaevitz J. W., and Berman G. J., PLOS Computational Biology 18, 1 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [90].Wheatland M. S., Sturrock P. A., and McTiernan J. M., The Astrophysical Journal 509, 448 (1998). [Google Scholar]
  • [91].Boffetta G., Carbone V., Giuliani P., Veltri P., and Vulpiani A., Phys. Rev. Lett. 83, 4662 (1999). [Google Scholar]
  • [92].Beggs J. M. and Plenz D., Journal of Neuroscience 23, 10.1523/JNEUROSCI.23-35-11167.2003 (2003). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [93].Newman M., Contemporary Physics 46, 323 (2005), 10.1080/00107510500052444. [DOI] [Google Scholar]
  • [94].Bouchaud J.-P. and Georges A., Physics Reports 195, 127 (1990). [Google Scholar]
  • [95].ben Avraham D. and Havlin S., Diffusion and Reactions in Fractals and Disordered Systems (Cambridge University Press, 2000). [Google Scholar]
  • [96].Cocchi L., Gollo L. L., Zalesky A., and Breakspear M., Progress in Neurobiology 158, 132 (2017). [DOI] [PubMed] [Google Scholar]
  • [97].O’Byrne J. and Jerbi K., Trends in Neurosciences 45 820 (2022). [DOI] [PubMed] [Google Scholar]
  • [98].Mora T. and Bialek W., Journal of Statistical Physics 144, 268 (2011). [Google Scholar]
  • [99].den Hollander F., Long time tails in physics and mathematics, in Probability and Phase Transition, edited by Grimmett G. (Springer Netherlands, Dordrecht, 1994) pp. 123–137. [Google Scholar]
  • [100].Alder B. J. and Wainwright T. E., Phys. Rev. A 1, 18 (1970). [Google Scholar]
  • [101].Touboul J. and Destexhe A., Phys. Rev. E 95, 012413 (2017). [DOI] [PubMed] [Google Scholar]
  • [102].Priesemann V. and Shriki O., PLOS Computational Biology 14, 1 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [103].Morrell M., Nemenman I., and Sederberg A. J., Neural criticality from effective latent variable (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [104].Schwab D. J., Nemenman I., and Mehta P., Physical Review Letters 113, 068102 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [105].Ghusinga K. R., Dennehy J. J., and Singh A., Proceedings of the National Academy of Sciences 114, 10.1073/pnas.1609012114 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [106].de Wit X. M., van Kan A., and Alexakis A., Journal of Fluid Mechanics 939, R2 (2022). [Google Scholar]
  • [107].Costa A. C. and Vergassola M., Fluctuating landscapes and heavy tails in animal behavior, 10.5281/zenodo.10030151 (2023). [DOI] [Google Scholar]
  • [108].Sulston J. E. and Brenner S., Genetics 77, 95 (1974). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [109].Bollt E. M. and Santitissadeekorn N., Applied and computational measurable dynamics (Society for Industrial and Applied Mathematics, Philadelphia, United States, 2013). [Google Scholar]
  • [110].Bittracher A., Koltai P., Klus S., Banisch R., Dellnitz M., and Schütte C., Journal of Nonlinear Science 28, 471 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [111].McInnes L., Healy J., and Melville J., Umap: Uniform manifold approximation and projection for dimension reduction (2018).
  • [112].Epanechnikov V. A., Theory of Probability & Its Applications 14, 153 (1969), 10.1137/1114019. [DOI] [Google Scholar]
  • [113].Härdle W. K., Müller M., Sperlich S., and Werwatz A., Nonparametric and Semiparametric Models (Springer Berlin, Heidelberg, 2006). [Google Scholar]
  • [114].Horsthemke W. and Lefever R., Noise-Induced Transitions: Theory and Applications in Physics, Chemistry, and Biology (Springer Berlin, Heidelberg, 2006). [Google Scholar]
  • [115].We note that for the general dynamics of Eqs. (2,3), ω0 may have an s dependency. However, without loss of generality, we consider that ω0 and ΔU(s) can be redefined to move the s dependency to the exponential as a subdominant contribution.
  • [116].Friedrich R., Peinke J., Sahimi M., and Reza Rahimi Tabar M., Physics Reports 506, 87 (2011). [Google Scholar]
  • [117].van Kampen N. G., Journal of Statistical Physics 24, 175 (1981). [Google Scholar]
  • [118].Cavagna A., Cimarelli A., Giardina I., Parisi G., Santagati R., Stefanini F., and Viale M., Proceedings of the National Academy of Sciences 107, 11865 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [119].Proekt A., Banavar J. R., Maritan A., and Pfaff D. W., Proceedings of the National Academy of Sciences 109, 10.1073/pnas.1206894109 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]


Supplementary Materials


Articles from bioRxiv are provided here courtesy of Cold Spring Harbor Laboratory Preprints
