Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2021 Oct 13; 477(2254): 20210097. doi: 10.1098/rspa.2021.0097

Structured time-delay models for dynamical systems with connections to Frenet–Serret frame

Seth M. Hirsh, Sara M. Ichinaga, Steven L. Brunton, J. Nathan Kutz, Bingni W. Brunton

Abstract

Time-delay embedding and dimensionality reduction are powerful techniques for discovering effective coordinate systems to represent the dynamics of physical systems. Recently, it has been shown that models identified by dynamic mode decomposition on time-delay coordinates provide linear representations of strongly nonlinear systems, in the so-called Hankel alternative view of Koopman (HAVOK) approach. Curiously, the resulting linear model has a matrix representation that is approximately antisymmetric and tridiagonal; for chaotic systems, there is an additional forcing term in the last component. In this paper, we establish a new theoretical connection between HAVOK and the Frenet–Serret frame from differential geometry, and also develop an improved algorithm to identify more stable and accurate models from less data. In particular, we show that the sub- and super-diagonal entries of the linear model correspond to the intrinsic curvatures in the Frenet–Serret frame. Based on this connection, we modify the algorithm to promote this antisymmetric structure, even in the noisy, low-data limit. We demonstrate this improved modelling procedure on data from several nonlinear synthetic and real-world examples.

Keywords: dynamic mode decomposition, time-delay coordinates, Frenet–Serret, Koopman operator, Hankel matrix

1. Introduction

Discovering meaningful models of complex, nonlinear systems from measurement data has the potential to improve characterization, prediction and control. Focus has increasingly turned from first-principles modelling towards data-driven techniques to discover governing equations that are as simple as possible while accurately describing the data [1–4]. However, available measurements may not be in the right coordinates for which the system admits a simple representation. Thus, considerable effort has gone into learning effective coordinate transformations of the measurement data [5–7], especially those that allow nonlinear dynamics to be approximated by a linear system. These coordinates are related to eigenfunctions of the Koopman operator [8–13], with dynamic mode decomposition (DMD) [14] being the leading computational algorithm for high-dimensional spatio-temporal data [11,13,15]. For low-dimensional data, time-delay embedding [16] has been shown to provide accurate linear models of nonlinear systems [5,17,18]. Linear time-delay models have a rich history [19,20], and recently, DMD on delay coordinates [15,21] has been rigorously connected to these linearizing coordinate systems in the Hankel alternative view of Koopman (HAVOK) approach [5,7,17]. In this work, we establish a new connection between HAVOK and the Frenet–Serret frame from differential geometry, which inspires an extension to the algorithm that improves the stability of these models.

Time-delay embedding is a widely used technique to characterize dynamical systems from limited measurements. In delay embedding, incomplete measurements are used to reconstruct a representation of the latent high-dimensional system by augmenting the present measurement with a time history of previous measurements. Takens showed that under certain conditions, time-delay embedding produces an attractor that is diffeomorphic to the attractor of the latent system [16]. Time-delay embeddings have also been extensively used for signal processing and modelling [19,20,22–27], for example, in singular spectrum analysis (SSA) [19,22] and the eigensystem realization algorithm (ERA) [20]. In both cases, a time history of augmented delay vectors are arranged as columns of a Hankel matrix, and the singular value decomposition (SVD) is used to extract eigen-time-delay coordinates in a dimensionality reduction stage. More recently, these historical approaches have been connected to the modern DMD algorithm [15], and it has become commonplace to compute DMD models on time-delay coordinates [15,21]. The HAVOK approach established a rigorous connection between DMD on delay coordinates and eigenfunctions of the Koopman operator [5]; HAVOK [5] is also referred to as Hankel DMD [17] or delay DMD [15].

HAVOK produces linear models where the matrix representation of the dynamics has a peculiar and particular structure. These matrices tend to be skew-symmetric and dominantly tridiagonal, with zero diagonal (see figure 1 for an example). In the original HAVOK paper, this structure was observed in some systems, but not others, with the structure being more pronounced in noise-free examples with an abundance of data. It has been unclear how to interpret this structure and whether or not it is a universal feature of HAVOK models. Moreover, the eigen-time-delay modes closely resemble Legendre polynomials; these polynomials were explored further in Kamb et al. [28]. The present work directly resolves this mysterious structure by establishing a connection to the Frenet–Serret frame from differential geometry.

Figure 1.

Outline of steps in the HAVOK method. First, a single variable $x(t)$ of a dynamical system is measured. Time-shifted copies of $x(t)$ are stacked to form a Hankel matrix $H$. The singular value decomposition (SVD) is applied to $H$, producing a low-dimensional representation $V$. The dynamic mode decomposition (DMD) is then applied to $V$ to form a linear dynamical model and a forcing term.

In this work, we unify key results from dimensionality reduction, time-delay embedding and the Frenet–Serret frame to show that a dynamical system may be decomposed into a sparse linear model plus a forcing term. Furthermore, this linear model has a particular structure: it is an antisymmetric matrix with non-zero elements only along the super- and sub-diagonals. These non-zero elements are interpretable as they are intrinsic curvatures of the system in the Frenet–Serret frame.

The structure of HAVOK models may be understood by introducing intrinsic coordinates from differential geometry [29]. One popular set of intrinsic coordinates is the Frenet–Serret frame, which is formed by applying the Gram–Schmidt procedure to the derivatives of the trajectory $\dot x(t), \ddot x(t), \dddot x(t), \ldots$ [30–32]. Álvarez-Vizoso et al. [33] showed that the SVD of trajectory data converges locally to the Frenet–Serret frame in the limit of an infinitesimal time step. The Frenet–Serret frame results in an orthogonal basis of polynomials, which we will connect to the observed Legendre basis of HAVOK [5,28]. Moreover, we show that the dynamics, when represented in these coordinates, have the same tridiagonal structure as the HAVOK models. Importantly, the terms along the sub- and super-diagonals have a specific physical interpretation as intrinsic curvatures. By enforcing this structure, HAVOK models are more robust to noisy and limited data.

In this work, we present a new theoretical connection between time-delay embedding models and the Frenet–Serret frame from differential geometry. Our unifying perspective sheds light on the antisymmetric, tridiagonal structure of the HAVOK model. We use this understanding to develop structured HAVOK models that are more accurate for noisy and limited data. Section 2 provides a review of dimensionality reduction methods, time-delay embeddings and the Frenet–Serret frame. This section also discusses current connections between these fields. In §3, we establish the main result of this work, connecting linear time-delay models with the Frenet–Serret frame, explaining the tridiagonal, antisymmetric structure seen in figure 1. We then illustrate this theory on a synthetic example. In §4, we explore the limitations and requirements of the theory, giving recommendations for achieving this structure in practice. In §5, based on this theory, we develop a modified HAVOK method, called structured HAVOK (sHAVOK), which promotes tridiagonal, antisymmetric models. We demonstrate this approach on three nonlinear synthetic examples and two real-world datasets, namely measurements of a double pendulum experiment and measles outbreak data, and show that sHAVOK yields more stable and accurate models from significantly less data.

2. Related work

Our work relates and extends results from three fields: dimensionality reduction, time-delay embedding and the Frenet–Serret coordinate frame from differential geometry. There is an extensive literature on each of these fields, and here we give a brief introduction of the related work to establish a common notation on which we build a unifying framework in §3.

(a) . Dimensionality reduction

Recent advancements in sensor and measurement technologies have led to a significant increase in the collection of time-series data from complex, spatio-temporal systems. Although such data are typically high dimensional, in many cases they can be well approximated with a low-dimensional representation. One central goal is to learn the underlying structure of these data. Although there are many data-driven dimensionality reduction methods, here we focus on linear techniques because of their effectiveness and analytic tractability. In particular, given a data matrix $X \in \mathbb{R}^{m\times n}$, the goal of these techniques is to decompose $X$ into the matrix product

$$X = U V^\top, \qquad (2.1)$$

where $U \in \mathbb{R}^{m\times k}$ and $V \in \mathbb{R}^{n\times k}$ are low rank ($k < \min(m, n)$). The task of solving for $U$ and $V$ is highly underdetermined, and different solutions may be obtained when different assumptions are made.

Here, we review two popular linear dimensionality reduction techniques: the SVD [34,35] and DMD [13,15,36]. Both of these methods are components of the HAVOK algorithm and play a key role in determining the underlying tridiagonal, antisymmetric structure in figure 1.

(i) . SVD

The SVD is one of the most popular dimensionality reduction methods and has been applied in a wide range of fields, including genomics [37], physics [38] and image processing [39]. The SVD is the underlying algorithm for principal component analysis.

Given the data matrix $X \in \mathbb{R}^{m\times n}$, the SVD decomposes $X$ into the product of three matrices,

$$X = U \Sigma V^\top,$$

where $U \in \mathbb{R}^{m\times m}$ and $V \in \mathbb{R}^{n\times n}$ are unitary matrices, and $\Sigma \in \mathbb{R}^{m\times n}$ is a diagonal matrix with non-negative entries [34,35]. We denote the $i$th columns of $U$ and $V$ by $u_i$ and $v_i$, respectively. The diagonal elements $\sigma_i$ of $\Sigma$ are known as the singular values of $X$, and they are written in descending order.

The rank of the data is defined to be $R$, which equals the number of non-zero singular values. Consider the low-rank matrix approximation

$$X_r = \sum_{j=1}^{r} \sigma_j u_j v_j^\top,$$

with $r \le R$. An important property of $X_r$ is that it is the best rank-$r$ approximation to $X$ in the least-squares sense. In other words,

$$X_r = \underset{Y}{\operatorname{argmin}}\ \|X - Y\| \quad \text{such that}\ \operatorname{rank}(Y) = r,$$

with respect to both the $\ell_2$ and Frobenius norms. Furthermore, the relative error of this rank-$r$ approximation in the $\ell_2$ norm is

$$\frac{\|X - X_r\|_{\ell_2}}{\|X\|_{\ell_2}} = \frac{\sigma_{r+1}}{\sigma_1}. \qquad (2.2)$$

From (2.2), we immediately see that if the singular values decay rapidly ($\sigma_{j+1} \ll \sigma_j$), then $X_r$ is a good low-rank approximation to $X$. This property makes the SVD a popular tool for compressing data.
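As a concrete numerical check, the following minimal sketch (in Python with NumPy; the random low-rank test matrix is an illustrative assumption, not data from this paper) verifies that the relative error of the rank-$r$ truncation matches (2.2):

```python
import numpy as np

# Sketch: rank-r truncation of the SVD and the error relation in (2.2).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20)) @ rng.standard_normal((20, 200))
X += 1e-3 * rng.standard_normal((100, 200))   # low-rank matrix plus small noise

U, s, Vt = np.linalg.svd(X, full_matrices=False)

r = 10
Xr = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]    # best rank-r approximation

# The relative spectral-norm error equals sigma_{r+1} / sigma_1.
rel_err = np.linalg.norm(X - Xr, ord=2) / np.linalg.norm(X, ord=2)
print(rel_err, s[r] / s[0])                   # the two printed values agree
```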

(ii) . DMD

DMD [13–15] is another linear dimensionality reduction technique, one that incorporates the assumption that the measurements are time-series data generated by a linear dynamical system. DMD has become a popular tool for modelling dynamical systems in diverse fields, including fluid mechanics [11,14], neuroscience [21], disease modelling [40], robotics [41], plasma modelling [42], resolvent analysis [43] and computer vision [44,45].

Like the SVD, DMD begins with a data matrix $X \in \mathbb{R}^{m\times n}$. Here, we assume that our data are generated by an unknown dynamical system, so that the columns of $X$, $x(t_k)$, are time snapshots related by the map $x(t_{k+1}) = F(x(t_k))$. While $F$ may be nonlinear, the goal of DMD is to determine the best-fit linear operator $A: \mathbb{R}^m \to \mathbb{R}^m$ such that

$$x(t_{k+1}) \approx A\,x(t_k).$$

If we define the two time-shifted data matrices

$$X_1^{n-1} = \begin{bmatrix} x(t_1) & x(t_2) & \cdots & x(t_{n-1}) \end{bmatrix} \quad\text{and}\quad X_2^{n} = \begin{bmatrix} x(t_2) & x(t_3) & \cdots & x(t_{n}) \end{bmatrix},$$

then we can equivalently define $A \in \mathbb{R}^{m\times m}$ to be the operator such that

$$X_2^n \approx A\,X_1^{n-1}.$$

It follows that $A$ is the solution to the minimization problem

$$A = \underset{A}{\operatorname{argmin}} \left\|X_2^n - A\,X_1^{n-1}\right\|_F,$$

where $\|\cdot\|_F$ denotes the Frobenius norm.

A unique solution to this problem can be obtained using the exact DMD method and the Moore–Penrose pseudo-inverse, $\hat A = X_2^n \left(X_1^{n-1}\right)^\dagger$ [13,15]. Alternative algorithms have been shown to perform better for noisy measurement data, including optimized DMD [46], forward–backward DMD [47] and total least-squares DMD [48].

One key benefit of DMD is that it builds an explicit temporal model and supports short-term future state prediction. Defining $\{\lambda_j\}$ and $\{v_j\}$ to be the eigenvalues and eigenvectors of $A$, respectively, we can write

$$x(t_k) = \sum_{j=1}^{r} v_j\, e^{\omega_j t_k}, \qquad (2.3)$$

where $\omega_j = \ln(\lambda_j)/\Delta t$ are eigenvalues normalized by the sampling interval $\Delta t$, and the eigenvectors are normalized such that $\sum_{j=1}^{r} v_j = x(t_1)$. Thus, to compute the state at an arbitrary time $t$, we can simply evaluate (2.3) at that time. Furthermore, letting the $v_j$ be the columns of $U$ and $\{e^{\omega_j t_k}\ \text{for } k = 1, \ldots, n\}$ be the columns of $V$, we can express the data in the form of (2.1).
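The following is a minimal sketch of the exact DMD procedure described above; the toy oscillatory signal is an illustrative assumption rather than data or code from any reference implementation:

```python
import numpy as np

def exact_dmd(X, dt):
    """Fit x(t_{k+1}) ~ A x(t_k) by least squares and return the operator
    together with the continuous-time eigenvalues omega_j of (2.3)."""
    X1, X2 = X[:, :-1], X[:, 1:]           # time-shifted snapshot matrices
    A_hat = X2 @ np.linalg.pinv(X1)        # best-fit linear operator
    lam, W = np.linalg.eig(A_hat)
    omega = np.log(lam) / dt               # eigenvalues normalized by dt
    return A_hat, omega, W

# Usage on a toy rotating signal (an assumed example)
dt = 0.01
t = np.arange(0, 10, dt)
X = np.vstack([np.cos(2 * t), np.sin(2 * t)])
_, omega, _ = exact_dmd(X, dt)
print(omega)                               # approximately +2i and -2i
```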

(b) . Time-delay embedding

Suppose we are interested in a dynamical system

$$\frac{d\xi}{dt} = F(\xi),$$

where $\xi(t) \in \mathbb{R}^l$ are states whose dynamics are governed by some unknown nonlinear differential equation. Typically, we measure some possibly nonlinear projection of $\xi$, $x(\xi) \in \mathbb{R}^d$, at discrete time points $t = 0, \Delta t, \ldots, q\Delta t$. In general, the dimensionality of the underlying dynamics is unknown, and the choice of measurements is limited by practical constraints. Consequently, it is difficult to know whether the measurements $x$ are sufficient for modelling the system; for example, $d$ may be smaller than $l$. In this work, we are primarily interested in the case of $d = 1$; in other words, we have only a single one-dimensional time-series measurement of the system.

We can construct an embedding of our system using successive time delays of the measurement $x$, i.e. $x(t - \tau)$. Given a single measurement of our dynamical system $x(t) \in \mathbb{R}$ for $t = 0, \Delta t, \ldots, (q-1)\Delta t$, we can form the Hankel matrix $H \in \mathbb{R}^{m\times n}$ by stacking time-shifted snapshots of $x$ [49],

$$H = \begin{bmatrix} x_1 & x_2 & x_3 & x_4 & \cdots & x_n \\ x_2 & x_3 & x_4 & x_5 & \cdots & x_{n+1} \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ x_m & x_{m+1} & x_{m+2} & x_{m+3} & \cdots & x_q \end{bmatrix}. \qquad (2.4)$$

Each column may be thought of as an augmented state space that includes a short, m-dimensional trajectory in time. Our data matrix H is then this m-dimensional trajectory measured over n snapshots in time.
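A minimal sketch of this construction (the helper name hankel_matrix and the toy sine signal are our own choices for illustration):

```python
import numpy as np

def hankel_matrix(x, m):
    """Stack m time-shifted copies of a 1-D series x into the Hankel
    matrix of (2.4); each column is a short m-dimensional trajectory."""
    n = len(x) - m + 1
    return np.array([x[i : i + n] for i in range(m)])   # H[i, j] = x[i + j]

x = np.sin(np.arange(0, 10, 0.01))
H = hankel_matrix(x, m=41)
print(H.shape)   # (41, 960)
```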

There are several key benefits of using time-delay embeddings. Most notably, given a chaotic attractor, Takens' embedding theorem states that a sufficiently high-dimensional time-delay embedding of the system is diffeomorphic to the original attractor [16], as illustrated in figure 2. In addition, recent results have shown that time-delay matrices are guaranteed to have strongly decaying singular value spectra. In particular, Beckermann & Townsend [50] prove the following theorem:

Figure 2.

Theorem 2.1. —

Let $H_n \in \mathbb{R}^{n\times n}$ be a positive definite Hankel matrix, with singular values $\sigma_1, \ldots, \sigma_n$. Then $\sigma_j \le C\,\rho^{-j/\log n}\,\sigma_1$ for constants $C$ and $\rho$ and for $j = 1, \ldots, n$.

Equivalently, $H_n$ can be approximated up to an accuracy of $\epsilon\|H_n\|_2$ by a matrix of rank $\mathcal{O}(\log n \log(1/\epsilon))$. From this, we see that $H_n$ can be well approximated by a low-rank matrix.

Many methods have been developed to take advantage of this structure of the Hankel matrix, including the ERA [20], SSA [19] and nonlinear Laplacian spectrum analysis [22]. DMD may also be computed on delay coordinates from the Hankel matrix [15,21,51], and it has been shown that this approach may provide a Koopman invariant subspace [5,52]. In addition, this structure has also been incorporated into neural network architectures [53].

This analysis is limited to delay embeddings of one-dimensional signals. However, embeddings of multi-dimensional signals have also been explored [15,54]. Most notably, higher order DMD is particularly powerful for very high dimensional embeddings [5557]. Understanding the structure of these higher dimensional embeddings is also an exciting area of current research.

(c) . HAVOK: dimensionality reduction and time-delay embeddings

Leveraging dimensionality reduction and time-delay embeddings, the HAVOK algorithm constructs low-dimensional models of dynamical systems [5]. Specifically, HAVOK learns effective measurement coordinates of the system and estimates its intrinsic dimensionality. Remarkably, HAVOK models are simple, consisting of a linear model and a forcing term that can be used for short-term forecasting.

We illustrate this method in figure 1 for the Lorenz system (see §5b for details about this system). To do so, we begin with a one-dimensional time series $x(t)$ for $t = 0, \Delta t, \ldots, (q-1)\Delta t$. We construct a higher dimensional representation using time-delay embeddings, producing a Hankel matrix $H \in \mathbb{R}^{m\times n}$ as in (2.4), and compute its SVD,

$$H = U \Sigma V^\top.$$

If $H$ is sufficiently low rank (with rank $r$), then we need only consider the reduced SVD,

$$H_r = U_r \Sigma_r V_r^\top,$$

where $U_r \in \mathbb{R}^{m\times r}$ and $V_r \in \mathbb{R}^{n\times r}$ have orthonormal columns and $\Sigma_r \in \mathbb{R}^{r\times r}$ is diagonal. Rearranging the terms, $V_r^\top = \Sigma_r^{-1} U_r^\top H_r$, and we can think of

$$V_r^\top = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \qquad (2.5)$$

as a lower dimensional representation of our high-dimensional trajectory. For quasi-periodic systems, the SVD of the Hankel matrix results in principal component trajectories [54], which reconstruct dynamical trajectories in terms of periodic orbits.

To discover the linear dynamics, we apply DMD. In particular, we construct the time-shifted matrices

$$V_1 = \begin{bmatrix} v_1 & v_2 & \cdots & v_{n-1} \end{bmatrix} \quad\text{and}\quad V_2 = \begin{bmatrix} v_2 & v_3 & \cdots & v_{n} \end{bmatrix}. \qquad (2.6)$$

We then compute the linear approximation $\hat A$ such that $V_2 \approx \hat A V_1$, where $\hat A = V_2 V_1^\dagger$. This yields a model $v_{i+1} = \hat A v_i$.

In the continuous case,

$$\dot v(t) = A\,v(t), \qquad (2.7)$$

which is related, to first order in $\Delta t$, to the discrete case by

$$A \approx \frac{\hat A - I}{\Delta t}.$$

For a general nonlinear dynamical system, this linear model yields a large root-mean-square error on the training data. Instead, [5] proposed a linear model plus a forcing term in the last component of $v$ (figure 1):

$$\dot v(t) = A\,v(t) + B\,v_r(t), \qquad (2.8)$$

where $v(t) \in \mathbb{R}^{r-1}$, $A \in \mathbb{R}^{(r-1)\times(r-1)}$ and $B \in \mathbb{R}^{r-1}$. In this case, $V_2$ is defined as columns 2 to $n$ of the singular vectors with a rank-$(r-1)$ truncation $V_{r-1}$, and $\hat A \in \mathbb{R}^{(r-1)\times(r-1)}$ and $\hat B \in \mathbb{R}^{(r-1)\times 1}$ are computed as $[\hat A, \hat B] = V_2 V_1^\dagger$. The continuous analogue of $\hat B$ is $B \approx \hat B/\Delta t$. Here, $v(t)$ corresponds to the first $r-1$ rows of $V_r^\top$, while $v_r(t)$ corresponds to the $r$th row of $V_r^\top$. The forcing term $v_r$ is required to capture essential nonlinearities of the system, such as lobe switching, that cannot be captured by the linear model.
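Putting the steps of this subsection together, a minimal HAVOK sketch might look as follows; the function name, the toy input signal and the specific truncation values are illustrative assumptions, not the reference implementation of [5]:

```python
import numpy as np

def havok(x, m, r, dt):
    """Sketch of HAVOK: SVD of the Hankel matrix, then a linear fit on the
    first r-1 delay coordinates with the r-th coordinate as forcing (2.8)."""
    n = len(x) - m + 1
    H = np.array([x[i : i + n] for i in range(m)])      # Hankel matrix (2.4)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    V = Vt[:r].T                                        # n x r delay coordinates
    V1, V2 = V[:-1].T, V[1:].T                          # r x (n-1) shifted states
    AB = V2[: r - 1] @ np.linalg.pinv(V1)               # discrete [A_hat, B_hat]
    A = (AB[:, : r - 1] - np.eye(r - 1)) / dt           # continuous-time A
    B = AB[:, r - 1 :] / dt                             # continuous-time B
    return A, B, V

dt = 0.001
t = np.arange(0, 10, dt)
x = np.sin(t) + np.sin(2 * t)
A, B, V = havok(x, m=41, r=5, dt=dt)
print(A.shape, B.shape)   # (4, 4) and (4, 1)
```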

Once the $A$ and $B$ matrices have been derived, [5] found that HAVOK models could be used to forecast in an online setting. In particular, given the previous snapshots $x_n, x_{n+1}, \ldots, x_q$, we can estimate $v_r$ at the next snapshot by taking the inner product of $[x_n, x_{n+1}, \ldots, x_q]$ with the $r$th column of $U$, scaled by the inverse of the $r$th diagonal entry of $\Sigma$.

HAVOK was shown to be a successful model for a variety of systems, including a double pendulum, switchings of Earth’s magnetic field and measurements of human behaviour [5,58]. In addition, the linear portion of the HAVOK model has been observed to adopt a very particular structure: the dynamics matrix was antisymmetric, with non-zero elements only on the super-diagonal and sub-diagonal (figure 1).

Much work has been done to study the properties of HAVOK. Arbabi et al. [17] showed that, in the limit of an infinite number of time delays ($m \to \infty$), $A$ converges to the Koopman operator for ergodic systems. Bozzo et al. [59] showed that in a similar limit, for periodic data, HAVOK converges to the temporal discrete Fourier transform. Kamb et al. [28] connect HAVOK to the use of convolutional coordinates. The primary goal of the current work is to connect HAVOK to the concept of curvature in differential geometry and, with these new insights, to improve the HAVOK algorithm to take advantage of this structure in the dynamics matrix. In contrast to much of the previous work, we focus on the limit where only small amounts of noisy data are available.

(d) . The Frenet–Serret coordinate frame

Suppose we have a smooth curve $\gamma(t) \in \mathbb{R}^m$ measured over some time interval $t \in [a, b]$. As before, we would like to determine an effective set of coordinates in which to represent our data. When using the SVD or DMD, the basis discovered corresponds to the spatial modes of the data and is constant in time. However, for many systems, it is sometimes natural to express both the coordinates and basis as functions of time [60,61]. One popular method for developing this non-inertial frame is the Frenet–Serret coordinate system, which has been applied in a wide range of fields, including robotics [62,63], aerodynamics [64] and general relativity [65,66].

Let us assume that $\gamma(t)$ has $r$ non-zero continuous derivatives, $\gamma'(t), \gamma''(t), \ldots, \gamma^{(r)}(t)$. We further assume that these derivatives are linearly independent and that $\gamma'(t) \neq 0$ for all $t$. Using the Gram–Schmidt process, we can form the orthonormal basis $e_1, e_2, \ldots, e_r$:

$$e_1(t) = \frac{\gamma'(t)}{\|\gamma'(t)\|}, \qquad e_2(t) = \frac{\gamma''(t) - \langle \gamma''(t), e_1(t)\rangle\, e_1(t)}{\left\|\gamma''(t) - \langle \gamma''(t), e_1(t)\rangle\, e_1(t)\right\|}, \qquad \ldots, \qquad e_r(t) = \frac{\gamma^{(r)}(t) - \sum_{k=1}^{r-1}\langle \gamma^{(r)}(t), e_k(t)\rangle\, e_k(t)}{\left\|\gamma^{(r)}(t) - \sum_{k=1}^{r-1}\langle \gamma^{(r)}(t), e_k(t)\rangle\, e_k(t)\right\|}. \qquad (2.9)$$

Here, $\langle\cdot,\cdot\rangle$ denotes an inner product, and we choose $r \le m$ so that these vectors are linearly independent and hence form an orthonormal basis. This set of basis vectors defines the Frenet–Serret frame.

To derive the evolution of this basis, let us define the matrix formed by stacking these vectors as rows, $Q(t) = [e_1(t), e_2(t), \ldots, e_r(t)]^\top \in \mathbb{R}^{r\times m}$, so that $Q(t)$ satisfies the following time-varying linear dynamics,

$$\frac{dQ}{dt} = \|\gamma'(t)\|\,K(t)\,Q, \qquad (2.10)$$

where $K(t) \in \mathbb{R}^{r\times r}$.

By factoring the term $\|\gamma'(t)\|$ out of $K(t)$, it is guaranteed that $K(t)$ does not depend on the parametrization of the curve (i.e. the speed of the trajectory), but only on its geometry. The matrix $K(t)$ is highly structured and sparse. To understand the structure of $K(t)$, we derive two key properties [33]:

  • (1) $K_{i,j}(t) = -K_{j,i}(t)$ (antisymmetry):

    Proof. — Since $r \le m$, by construction the rows of $Q(t)$ are orthonormal, and thus $QQ^\top = I$. Taking the derivative with respect to $t$, $(dQ/dt)\,Q^\top + Q\,(dQ/dt)^\top = 0$, or equivalently

    $$\frac{dQ}{dt}Q^\top = -\left(\frac{dQ}{dt}Q^\top\right)^{\!\top}.$$

    Right-multiplying (2.10) by $Q^\top$ and using $QQ^\top = I$ gives

    $$K(t) = \frac{1}{\|\gamma'(t)\|}\,\frac{dQ}{dt}Q^\top,$$

    from which we immediately see that $K(t)^\top = -K(t)$.

  • (2) $K_{i,j}(t) = 0$ for $j \ge i + 2$: We first note that since $e_i(t) \in \operatorname{span}\{\gamma'(t), \ldots, \gamma^{(i)}(t)\}$, its derivative must satisfy $e_i'(t) \in \operatorname{span}\{\gamma'(t), \ldots, \gamma^{(i+1)}(t)\}$. Now by construction, using the Gram–Schmidt method, $e_j$ is orthogonal to $\operatorname{span}\{\gamma'(t), \ldots, \gamma^{(i+1)}(t)\}$ for $j \ge i + 2$. Since $e_i'(t)$ is in the span of this set, $e_j$ must be orthogonal to $e_i'$ for $j \ge i + 2$. Thus, $K_{i,j}(t) \propto \langle e_i'(t), e_j(t)\rangle = 0$ for $j \ge i + 2$.

With these two constraints, $K(t)$ takes the form

$$K(t) = \begin{bmatrix} 0 & \kappa_1(t) & & \\ -\kappa_1(t) & 0 & \ddots & \\ & \ddots & \ddots & \kappa_{r-1}(t) \\ & & -\kappa_{r-1}(t) & 0 \end{bmatrix}. \qquad (2.11)$$

Thus $K(t)$ is antisymmetric with non-zero elements only along the super-diagonal and sub-diagonal, and the values $\kappa_1(t), \ldots, \kappa_{r-1}(t)$ are defined to be the curvatures of the trajectory. The curvatures $\kappa_i(t)$, combined with the basis vectors $e_i(t)$, define the Frenet–Serret apparatus, which fully characterizes the trajectory up to translation [33].

From a geometric perspective, $e_1(t), \ldots, e_r(t)$ form an instantaneous (local) coordinate frame, which moves with the trajectory. The curvatures define how quickly this frame changes with time. If the trajectory is a straight line, the curvatures are all zero. If $\kappa_1$ is constant and non-zero, while all other curvatures are zero, then the trajectory lies on a circle. If $\kappa_1$ and $\kappa_2$ are constant and non-zero with all other curvatures zero, then the trajectory lies on a helix. Comparing the structure of (2.11) to figure 1, we immediately see a similarity. Over the following sections, we will shed light on this connection.
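To make the frame and the curvature matrix concrete, the following sketch evaluates (2.9) and (2.10) numerically for a helix, whose curvatures are known analytically ($\kappa_1 = 0.8$ and $\kappa_2 = 0.4$ for this pitch); the helix example and the finite-difference scheme are our own illustrative choices:

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 4001)
dt = t[1] - t[0]
gamma = np.stack([np.cos(t), np.sin(t), 0.5 * t], axis=1)   # helix in R^3

# derivatives of the trajectory by second-order finite differences
d1 = np.gradient(gamma, dt, axis=0)
d2 = np.gradient(d1, dt, axis=0)
d3 = np.gradient(d2, dt, axis=0)

def frenet_frame(derivs):
    """Gram-Schmidt on successive derivatives at one instant, as in (2.9)."""
    basis = []
    for v in derivs:
        for e in basis:
            v = v - np.dot(v, e) * e
        basis.append(v / np.linalg.norm(v))
    return np.array(basis)               # rows are e_1, e_2, e_3

k = 2000                                 # an interior time index
Q = frenet_frame([d1[k], d2[k], d3[k]])
Qnext = frenet_frame([d1[k + 1], d2[k + 1], d3[k + 1]])
K = ((Qnext - Q) / dt) @ Q.T / np.linalg.norm(d1[k])   # K from (2.10)
print(np.round(K, 3))   # approximately [[0, .8, 0], [-.8, 0, .4], [0, -.4, 0]]
```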

(e) . SVD and curvature

Given time-series data, the SVD constructs an orthonormal basis that is fixed in time, whereas the Frenet–Serret frame constructs an orthonormal basis that moves with the trajectory. In recent work, Álvarez-Vizoso et al. [33] showed how these frames are related. In particular, the Frenet–Serret frame converges to the SVD frame in the limit as the time interval of the trajectory goes to zero.

To understand this further, consider a trajectory $\gamma(t) \in \mathbb{R}^m$ as described in §2d. If we assume that our measurements are from a small neighbourhood $t \in (-\epsilon, \epsilon)$ (where $\epsilon \ll 1$), then $\gamma(t)$ is well approximated by its Taylor expansion,

$$\gamma(t) - \gamma(0) = \gamma'(0)\,t + \frac{\gamma''(0)}{2}\,t^2 + \frac{\gamma'''(0)}{6}\,t^3 + \cdots.$$

Writing this in matrix form, we have that

$$\gamma(t) - \gamma(0) = \underbrace{\begin{bmatrix} \gamma'(0) & \gamma''(0) & \gamma'''(0) & \cdots \end{bmatrix}}_{\Gamma}\,\underbrace{\begin{bmatrix} 1 & & & \\ & \tfrac{1}{2} & & \\ & & \tfrac{1}{6} & \\ & & & \ddots \end{bmatrix}}_{\Sigma}\,\underbrace{\begin{bmatrix} t & t^2 & t^3 & \cdots \end{bmatrix}^\top}_{T^\top}. \qquad (2.12)$$

Recall that one key property of the SVD is that the rank-$r$ truncation of this expansion is the best rank-$r$ approximation to the data in the least-squares sense. Since $\epsilon \ll 1$, each subsequent term in this expansion is much smaller than the previous term,

$$\|\gamma'(0)\,t\|_2 \gg \left\|\frac{\gamma''(0)}{2}\,t^2\right\|_2 \gg \left\|\frac{\gamma'''(0)}{6}\,t^3\right\|_2 \gg \cdots. \qquad (2.13)$$

From this, we see that the expansion in (2.12) is strongly related to the SVD. However, in the SVD we have the constraint that the $U$ and $V$ matrices are orthogonal, while for the Taylor expansion $\Gamma$ and $T$ have no such constraint. Álvarez-Vizoso et al. [33] show that in the limit as $\epsilon \to 0$, $U$ is the result of applying the Gram–Schmidt process to the columns of $\Gamma$, and $V$ is the result of applying the Gram–Schmidt process to the columns of $T$. Comparing this to above, we see that

$$U = \begin{bmatrix} e_1(0) & e_2(0) & e_3(0) & \cdots \end{bmatrix} \quad\text{and}\quad V = \begin{bmatrix} p_1(t) & p_2(t) & p_3(t) & \cdots \end{bmatrix},$$

where $e_1(t), e_2(t), \ldots, e_r(t)$ is the basis for the Frenet–Serret frame defined in (2.9) and

$$p_i(t) = \frac{t^i - \sum_{j=1}^{i-1}\langle t^i, p_j(t)\rangle\, p_j(t)}{\left\|t^i - \sum_{j=1}^{i-1}\langle t^i, p_j(t)\rangle\, p_j(t)\right\|} \quad\text{for } i = 1, 2, 3, \ldots \qquad (2.14)$$

We note that the $p_i(t)$ form a set of orthogonal polynomials independent of the dataset. In this limit, the curvatures depend solely on the singular values,

$$\kappa_i(t) = a_i\,\frac{\sigma_{i+1}(t)}{\sigma_1(t)\,\sigma_i(t)},$$

where $a_i$ is a constant that depends only on $i$ [33].

We note that connections between the SVD and the Gram–Schmidt method are well described in the literature and underlie several different DMD frameworks [15,67]. Furthermore, this particular connection is crucial for understanding the structure in HAVOK models.

3. Unifying SVD, time-delay embeddings and the Frenet–Serret frame

In this section, we show that time-series data from a dynamical system may be decomposed into a sparse linear dynamical model with nonlinear forcing, and the non-zero elements along the sub- and super-diagonals of the linear part of this model have a clear geometric meaning: they are curvatures of the system. In §3a, we combine key results about the Frenet–Serret frame, time delays and SVD to explain this structure. Following this theory, §3b illustrates this approach with a simple synthetic example. The decomposition yields a set of orthogonal polynomials that form a coordinate basis for the time-delay embedding. In §3c, we explicitly describe these polynomials and compare their properties with the Legendre polynomials.

(a) . Connecting SVD, time-delay embeddings and Frenet–Serret frame

Here, we connect the properties of the SVD, time-delay embeddings and the Frenet–Serret frame to decompose a dynamical model into a linear dynamical model with nonlinear forcing, where the linear model is both antisymmetric and tridiagonal. To do this, we follow the steps of the HAVOK method with slight modifications and show how they give rise to these structured dynamics. This process is illustrated in figure 3. We emphasize that our key insight is a connection between the global Koopman frame and the local Frenet–Serret frame in the case of time-delay coordinates. To see this, observe that for a low-dimensional time-delay embedding $H$, which is amenable to global analysis, the transpose $H^\top$ is also a time-delay embedding; by construction, $H^\top$ covers a short time interval and is hence amenable to local analysis. The dynamics of these two datasets are closely related, since $H$ and $H^\top$ differ only by a transpose, from which we can connect the local and global dynamics. These two perspectives on the same dataset are only possible because we are using time-delay embeddings: the transpose of a Hankel matrix is also a Hankel matrix.

Figure 3.

An illustration of how a highly structured, antisymmetric linear model arises from time-delay data. Starting with a one-dimensional time series, we construct an $m\times n$ Hankel matrix using time-shifted copies of the data. Assume that $n \gg m$, in which case $H$ can be thought of as an $m$-dimensional trajectory over a long period ($n$ snapshots in time). Similarly, the transpose of $H$ may be thought of as a high-dimensional ($n$-dimensional) trajectory over a short period ($m$ snapshots in time). With this interpretation, by the results of [33], the singular vectors of $H$ after centring yield the Frenet–Serret frame. Regression on the dynamics in the Frenet–Serret frame yields the tridiagonal, antisymmetric linear model with an additional forcing term, which is non-zero only in the last component.

Following the notation introduced in §2c, let us begin with the time series $x(t)$ for $t = 0, \Delta t, \ldots, (q-1)\Delta t$. We construct a time-delay embedding $H \in \mathbb{R}^{m\times n}$, where we assume $m \ll n$.

Next, we compute the SVD of $H$ and show that the singular vectors correspond to the Frenet–Serret frame at a fixed point in time. In particular, to compute the SVD of this matrix, we consider the transpose $H^\top \in \mathbb{R}^{n\times m}$, which is also a Hankel matrix. Thus, the columns of $H^\top$ can be thought of as a trajectory $h(t) \in \mathbb{R}^n$ for $t = 0, \Delta t, \ldots, (m-1)\Delta t$. For simplicity, we shift the origin of time so that $h(t)$ spans $t = -(m-1)\Delta t/2, \ldots, 0, \ldots, (m-1)\Delta t/2$, and we denote $h(i\Delta t)$ by $h_i$. In this form,

$$H^\top = \begin{bmatrix} h_{-(m-1)/2} & \cdots & h_0 & \cdots & h_{(m-1)/2} \end{bmatrix}.$$

Subtracting the central column $h_0$ from $H^\top$ (or equivalently, the central row of $H$) yields the centred matrix

$$\bar H^\top = H^\top - h_0 \mathbf{1}^\top. \qquad (3.1)$$

We can then express $h_i$ as a Taylor expansion about $h_0$,

$$h_i - h_0 = h_0'\,(i\Delta t) + \frac{1}{2}h_0''\,(i\Delta t)^2 + \frac{1}{3!}h_0'''\,(i\Delta t)^3 + \cdots. \qquad (3.2)$$

We note that this is a Taylor expansion in each row of $H$. The top right image in figure 3 shows sample rows for the Lorenz system: the red curves show sample rows of $H$ (left) and of $H^\top$ (right). Many of these curves look nearly linear, so even a low-order Taylor expansion yields a good approximation.

With this in mind, applying the results of [33] described in §2e yields the SVD

$$\bar H^\top = \underbrace{\begin{bmatrix} e_0^1 & e_0^2 & e_0^3 & \cdots \end{bmatrix}}_{V}\,\underbrace{\begin{bmatrix} \sigma_1 & & \\ & \sigma_2 & \\ & & \ddots \end{bmatrix}}_{\Sigma}\,\underbrace{\begin{bmatrix} p_1 & p_2 & p_3 & \cdots \end{bmatrix}^\top}_{U^\top}. \qquad (3.3)$$

The singular vectors in $V$ correspond to the Frenet–Serret frame (the Gram–Schmidt method applied to the vectors $h_0', h_0'', h_0''', \ldots$),

$$e_0^1 = \frac{h_0'}{\|h_0'\|} \quad\text{and}\quad e_0^i = \frac{h_0^{(i)} - \sum_{j=1}^{i-1}\langle h_0^{(i)}, e_0^j\rangle\, e_0^j}{\left\|h_0^{(i)} - \sum_{j=1}^{i-1}\langle h_0^{(i)}, e_0^j\rangle\, e_0^j\right\|}.$$

The matrix $U$ is similarly defined by the discrete orthogonal polynomials

$$p_1 = \frac{1}{c_1}\,p \quad\text{and}\quad p_i = \frac{1}{c_i}\left(p^i - \sum_{j=1}^{i-1}\langle p^i, p_j\rangle\, p_j\right),$$

where $p$ is the vector

$$p = \begin{bmatrix} -\frac{m-1}{2} & -\frac{m-3}{2} & \cdots & 0 & \cdots & \frac{m-3}{2} & \frac{m-1}{2} \end{bmatrix}^\top, \qquad (3.4)$$

and where $c_i$ is a normalization constant such that $\langle p_i, p_i\rangle = 1$. Note that $p^i$ here means $p$ raised to the power $i$ element-wise. These polynomials are similar to the discrete orthogonal polynomials defined in [68], except that there the first polynomial is the normalized ones vector $\frac{1}{c_1}[1\ \cdots\ 1]^\top$. These polynomials will be discussed further in §3c.
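As a numerical check of this correspondence, one can build the polynomials directly by Gram–Schmidt on element-wise powers of $p$ and compare them with the singular vectors of a centred Hankel matrix; the toy signal below is an illustrative assumption:

```python
import numpy as np

m = 41
p = np.arange(m) - (m - 1) / 2           # centred index vector, as in (3.4)

polys = []
for i in range(1, 5):                    # Gram-Schmidt on p^1, p^2, p^3, p^4
    v = p ** i
    for q in polys:
        v = v - np.dot(v, q) * q
    polys.append(v / np.linalg.norm(v))
P = np.array(polys).T                    # columns are the polynomials p_i

dt = 0.001
t = np.arange(0, 10, dt)
x = np.sin(t) + np.sin(2 * t)
n = len(x) - m + 1
H = np.array([x[i : i + n] for i in range(m)])
Hbar = H - H[(m - 1) // 2]               # centring: subtract the middle row
U, s, Vt = np.linalg.svd(Hbar, full_matrices=False)

# The leading columns of U match the polynomials up to sign.
print(np.round(np.abs(P.T @ U[:, :4]), 2))   # approximately the identity
```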

Next, we build a regression model of the dynamics. We first consider the case where the system is closed (i.e. $\bar H$ has rank $r$). By (3.3), $V = [e_0^1\ e_0^2\ \cdots]$ well approximates the Frenet–Serret frame at the fixed point in time $t = 0$. Following the Frenet–Serret equations (2.10),

$$\frac{dV^\top}{dt} = A\,V^\top, \qquad (3.5)$$

where $A = \|h_0'\|\,K$. Here, $K$ is a constant tridiagonal, antisymmetric matrix, which corresponds to the curvatures at $t = 0$. From the dual perspective, we can think of the set of vectors $\{e_0^1, e_0^2, \ldots, e_0^r\}$ as an $r$-dimensional time series over $n$ snapshots in time,

$$V^\top = \begin{bmatrix} v_1(t) \\ v_2(t) \\ \vdots \\ v_r(t) \end{bmatrix} = \begin{bmatrix} (e_0^1)^\top \\ (e_0^2)^\top \\ \vdots \\ (e_0^r)^\top \end{bmatrix} \in \mathbb{R}^{r\times n}. \qquad (3.6)$$

Here, $v(t) = [v_1(t), v_2(t), \ldots, v_r(t)]^\top \in \mathbb{R}^r$ denotes the $r$-dimensional trajectory, which corresponds to the $r$-dimensional coordinates considered in (2.5) for HAVOK. From (3.5), these dynamics must therefore satisfy

$$\dot v(t) = A\,v(t),$$

where $A$ is a skew-symmetric, tridiagonal matrix. If the system is not closed, the dynamics take the form

$$\begin{bmatrix} \dot v_1 \\ \dot v_2 \\ \vdots \\ \dot v_r \\ \dot v_{r+1} \end{bmatrix} = \|h_0'\| \begin{bmatrix} 0 & \kappa_1 & & & \\ -\kappa_1 & 0 & \ddots & & \\ & \ddots & \ddots & \kappa_{r-1} & \\ & & -\kappa_{r-1} & 0 & \kappa_r \\ & & & -\kappa_r & 0 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_r \\ v_{r+1} \end{bmatrix}.$$

We note that, due to the tridiagonal structure of $K$, the governing dynamics of the first $r-1$ coordinates $v_1(t), \ldots, v_{r-1}(t)$ are the same as in the unforced case. The dynamics of the last coordinate include an additional term, $\dot v_r = \|h_0'\|(-\kappa_{r-1} v_{r-1} + \kappa_r v_{r+1})$. The dynamics therefore take the form

$$\frac{dv}{dt} = A\,v(t) + B\,v_{r+1}(t),$$

where $B$ is a vector that is non-zero only in its last coordinate. Thus, we recover a model as in (2.8), but with the desired tridiagonal, skew-symmetric structure. The matrix of curvatures is simply given by $K = A/\|h_0'\|$.

To compute $A$, similar to (2.6), we define two time-shifted matrices

$$V_1 = \begin{bmatrix} v(t_1) & v(t_2) & \cdots & v(t_{n-1}) \end{bmatrix} \quad\text{and}\quad V_2 = \begin{bmatrix} v(t_2) & v(t_3) & \cdots & v(t_n) \end{bmatrix}. \qquad (3.7)$$

The matrix $A$ may then be approximated as

$$A = \frac{dV^\top}{dt}\left(V^\top\right)^\dagger \approx \left(\frac{V_2 - V_1}{\Delta t}\right) V_1^\dagger. \qquad (3.8)$$

In summary, we have shown here that the trajectories of singular vectors $v(t)$ from a time-delay embedding are governed by approximately tridiagonal, antisymmetric dynamics, with a forcing term that is non-zero only in the last component. Comparing these steps with those described in §2c, we see that the estimation of $K$ is nearly identical to the steps in HAVOK. In particular, $\|h_0'\|\,K$ is the linear dynamics matrix $A$ in HAVOK. The only difference is the centring step in (3.1), which is discussed further in §3c.

Note that unlike in the general case for the Frenet–Serret equations, the dynamics matrix here is constant, a surprising result. This is directly due to the time-delay nature of the data and in particular depends on how well h is approximated by its Taylor expansion in (3.2). These assumptions will be explored in more detail in §4.

(b) . HAVOK computes approximate curvatures in a synthetic example

To illustrate the correspondence between the non-zero elements of the HAVOK dynamics matrix and curvatures, we start by considering an analytically tractable synthetic example. We apply the steps of HAVOK as described in [5], with an additional centring step. The resultant modes and the terms on the sub- and super-diagonals of the dynamics matrix are then compared with curvatures computed from an analytic expression, and we show that they are approximately the same, scaled by a factor of $\|h_0'\|$.

We consider data from the one-dimensional system governed by

$$x(t) = \sin(t) + \sin(2t),$$

for $t \in [0, 10]$, sampled at $\Delta t = 0.001$. Following HAVOK, we form the time-delay matrix $H \in \mathbb{R}^{41\times 9961}$, then centre the data, subtracting the middle row $h_0$ from all other rows, which forms $\bar H$. We next apply the SVD, $\bar H^\top = V\Sigma U^\top$.

Figure 4 shows the columns of $U \in \mathbb{R}^{41\times 4}$ and the columns of $V \in \mathbb{R}^{9961\times 4}$. The columns of $U$ correspond to the orthogonal polynomials described in §3c, and the columns of $V$ are the instantaneous basis vectors $e_i$ of the 9961-dimensional Frenet–Serret frame. To compute the derivative of the state, we now treat $V^\top$ as a four-dimensional trajectory with 9961 snapshots. Applying DMD to $V^\top$ yields the $A$ matrix,

$$A = \begin{bmatrix} 1.245\times10^{-3} & 1.205\times10^{-2} & 4.033\times10^{-6} & 1.444\times10^{-7} \\ -1.224\times10^{-2} & 3.529\times10^{-4} & 4.458\times10^{-3} & 2.283\times10^{-6} \\ 9.390\times10^{-4} & -3.467\times10^{-3} & 5.758\times10^{-4} & 6.617\times10^{-3} \\ 3.970\times10^{-4} & 6.568\times10^{-4} & -7.451\times10^{-3} & 2.835\times10^{-4} \end{bmatrix}. \qquad (3.9)$$

This matrix is approximately antisymmetric and tridiagonal as we expect.

Figure 4.

Frenet–Serret frame (a) and corresponding orthogonal polynomials (b) for HAVOK applied to the time series generated by $x(t) = \sin(t) + \sin(2t)$. The orthogonal polynomials and the Frenet–Serret frame are the right singular vectors $U$ and left singular vectors $V$ of $\bar H^\top$, respectively.

Next, we compute the Frenet–Serret frame for the time-delay embedding using analytic expressions and show that HAVOK indeed extracts the curvatures of the system multiplied by $\|h_0'\|$. Forming the time-delay matrix, we can easily compute the middle row,

$$h_0 = [\sin(t) + \sin(2t)\ \text{ for } t = 0.02, 0.021, \ldots, 9.98],$$

and the corresponding derivatives,

$$\begin{aligned} h_0' &= [\cos(t) + 2\cos(2t)\ \text{ for } t = 0.02, 0.021, \ldots, 9.98],\\ h_0'' &= [-\sin(t) - 4\sin(2t)\ \text{ for } t = 0.02, 0.021, \ldots, 9.98],\\ h_0''' &= [-\cos(t) - 8\cos(2t)\ \text{ for } t = 0.02, 0.021, \ldots, 9.98]\\ \text{and}\quad h_0^{(4)} &= [\sin(t) + 16\sin(2t)\ \text{ for } t = 0.02, 0.021, \ldots, 9.98]. \end{aligned}$$

The fifth derivative, $h_0^{(5)} = [\cos(t) + 32\cos(2t)\ \text{for } t = 0.02, 0.021, \ldots, 9.98]$, can be expressed as a linear combination of the previous derivatives, namely $h_0^{(5)} = -5h_0''' - 4h_0'$. This can also be seen from the fact that $x(t)$ satisfies the fourth-order ordinary differential equation $x^{(4)} + 5\ddot x + 4x = 0$.

Since only the first four derivatives are linearly independent, only the first three curvatures are non-zero. Furthermore, exact values of the first three curvatures can be computed analytically using the following formulae from [69]:

$$\kappa_1 = \frac{\sqrt{\det(G_2)}}{\|h_0'\|^{3}}, \qquad \kappa_2 = \frac{\sqrt{\det(G_3)\,\det(G_1)}}{\det(G_2)\,\|h_0'\|} \qquad\text{and}\qquad \kappa_3 = \frac{\sqrt{\det(G_4)\,\det(G_2)}}{\det(G_3)\,\|h_0'\|},$$

where $G_k = [\,h_0'\ h_0''\ \cdots\ h_0^{(k)}\,]^\top[\,h_0'\ h_0''\ \cdots\ h_0^{(k)}\,]$ is the Gram matrix of the first $k$ derivatives.

These formulae yield the values $\kappa_1 = 1.205\times10^{-2}$, $\kappa_2 = 4.46\times10^{-3}$ and $\kappa_3 = 6.62\times10^{-3}$.
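A short numerical sketch of this computation, implementing the Gram-determinant formulae above with the standard dot product on the sampled derivative vectors, reproduces these values approximately:

```python
import numpy as np

def curvatures(derivs):
    """First three generalized curvatures from Gram determinants G_k."""
    def gram_det(k):
        D = np.stack(derivs[:k], axis=1)       # columns h_0', ..., h_0^(k)
        return np.linalg.det(D.T @ D)
    g1, g2, g3, g4 = (gram_det(k) for k in (1, 2, 3, 4))
    k1 = np.sqrt(g2) / g1 ** 1.5               # g1 = ||h_0'||^2
    k2 = np.sqrt(g3 * g1) / (g2 * np.sqrt(g1))
    k3 = np.sqrt(g4 * g2) / (g3 * np.sqrt(g1))
    return k1, k2, k3

t = np.arange(0.02, 9.981, 0.001)              # the interval spanned by h_0
h1 = np.cos(t) + 2 * np.cos(2 * t)             # h_0'
h2 = -np.sin(t) - 4 * np.sin(2 * t)            # h_0''
h3 = -np.cos(t) - 8 * np.cos(2 * t)            # h_0'''
h4 = np.sin(t) + 16 * np.sin(2 * t)            # h_0^(4)
print(curvatures([h1, h2, h3, h4]))            # approx (1.205e-2, 4.46e-3, 6.62e-3)
```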

As expected, these curvature values are very close to those computed with HAVOK in (3.9). In particular, the super-diagonal entries of the matrix are very good approximations to the curvatures. Why the super-diagonal, but not the sub-diagonal, is so close in value to the true curvatures is not yet well understood. Furthermore, in §5, we use the theoretical insights from §3a to propose a modification to the HAVOK algorithm that yields an even better approximation to the curvatures in the Frenet–Serret frame.

(c) . Orthogonal polynomials and centring

In the decomposition in (3.3), we define a set of orthonormal polynomials. Here, we discuss the properties of these polynomials, comparing them with the Legendre polynomials and providing explicit expressions for the first several terms in this series.

In §3a, we apply the SVD to the centred matrix $\bar H$, as in (3.3). The columns of $U$ in this decomposition yield a set of orthonormal polynomials, which are defined by (2.14). In the continuous case, the inner product in (2.14) is $\langle a(t), b(t)\rangle = \int_{-p}^{p} a(t)\,b(t)\,dt$, while in the discrete case it is $\langle a, b\rangle = \sum_{j=-p}^{p} a_j b_j$. The first five polynomials in the discrete case may be found in the electronic supplementary material, Note 1. The first five of these polynomials $p_i(x)$ in the continuous case are

$$\begin{aligned} p_1(x) &= \frac{x}{c_1(p)}, & c_1(p) &= \frac{\sqrt{6p^3}}{3},\\ p_2(x) &= \frac{x^2}{c_2(p)}, & c_2(p) &= \frac{\sqrt{10p^5}}{5},\\ p_3(x) &= \frac{1}{c_3(p)}\left(x^3 - \frac{3}{5}p^2 x\right), & c_3(p) &= \frac{2\sqrt{14p^7}}{35},\\ p_4(x) &= \frac{1}{c_4(p)}\left(x^4 - \frac{5}{7}p^2 x^2\right), & c_4(p) &= \frac{2\sqrt{2p^9}}{21}\\ \text{and}\quad p_5(x) &= \frac{1}{c_5(p)}\left(x^5 - \frac{10}{9}p^2 x^3 + \frac{5}{21}p^4 x\right), & c_5(p) &= \frac{8\sqrt{22p^{11}}}{693}. \end{aligned}$$

By construction, pi(t) form a set of orthonormal polynomials, where pi(t) has degree i.

Interestingly, these orthogonal polynomials are similar to the Legendre polynomials $l_i$ [70,71], which are defined by the recursive relation

$$l_1 = \frac{1}{c_1}\begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix}^\top \quad\text{and}\quad l_i = \frac{1}{c_i}\left(p^{i-1} - \sum_{k=1}^{i-1}\langle p^{i-1}, l_k\rangle\, l_k\right),$$

where $p$ is as defined in (3.4) and $c_i$ normalizes each polynomial. For the corresponding Legendre polynomials normalized over $[-p, p]$, we refer the reader to [68].

The key difference between these two sets of polynomials is that the first polynomial $p_1$ here is linear, while the first Legendre polynomial is constant (i.e. corresponding in the discrete case to the normalized ones vector). In particular, if $H$ is not centred before decomposition by the SVD, the resulting columns of $U$ will be the Legendre polynomials. However, without centring, the resulting $V$ will no longer be the Frenet–Serret frame. Instead, the resulting frame corresponds to applying the Gram–Schmidt method to the set $\{\gamma(t), \gamma'(t), \gamma''(t), \ldots\}$ instead of $\{\gamma'(t), \gamma''(t), \gamma'''(t), \ldots\}$. Recently, it has been shown that centring as a preprocessing step is beneficial for DMD [72]. That being said, since the derivation of the tridiagonal and antisymmetric structure seen in the Frenet–Serret frame is based on the properties of the derivatives and orthogonality, this same structure can be obtained without the centring step.

4. Limits and requirements

Section 3a has shown how HAVOK yields a good approximation to the Frenet–Serret frame in the limit that the time interval spanned by each column of $H$ (equivalently, each row of $H^\top$) goes to zero. To be more precise, HAVOK yields the Frenet–Serret frame if (2.13) is satisfied. However, this property can be difficult to check in practice. Here, we establish several rules for choosing and structuring the data so that the HAVOK dynamics matrix adopts the structure we expect from theory.

Choose $\Delta t$ to be small. The specific constraint we have from (2.13) is

$$\|h_0'\,t_i\| \gg \left\|\frac{h_0''}{2}\,t_i^2\right\| \gg \left\|\frac{h_0'''}{6}\,t_i^3\right\| \gg \cdots \gg \left\|\frac{h_0^{(k)}}{k!}\,t_i^k\right\|,$$

for $-m\Delta t/2 \le t_i \le m\Delta t/2$, or more simply $t_i \sim m\Delta t$, where $\Delta t$ is the sampling period (inverse of the sampling frequency) of the data and $m$ is the number of delays in the Hankel matrix $H$. If we assume that $m\Delta t < 1$, then rearranging,

$$m\Delta t \ll \frac{2\|h_0'\|}{\|h_0''\|},\ \frac{3\|h_0''\|}{\|h_0'''\|},\ \ldots,\ \frac{k\,\|h_0^{(k-1)}\|}{\|h_0^{(k)}\|}. \qquad (4.1)$$

In practice, since the series of ratios of derivatives in (4.1) grows, it is only necessary to check the first inequality. By choosing the sampling period of the data to be small, we can ensure that the data satisfy this inequality. To illustrate the effect of decreasing $\Delta t$, figure 5a–d shows the dynamics matrices $A$ computed by the HAVOK algorithm for the Lorenz system, for a fixed number of rows of data and a fixed time span of the simulation. As $\Delta t$ becomes smaller, $A$ becomes more structured, in that it is antisymmetric and tridiagonal.
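In practice, the first inequality of (4.1) can be checked directly from data, for instance with finite-difference estimates of the derivatives of $h_0$; the following sketch (with an assumed toy signal) illustrates the check:

```python
import numpy as np

def check_sampling(x, m, dt):
    """Sketch: test m*dt << 2 ||h_0'|| / ||h_0''||, the first inequality of (4.1)."""
    n = len(x) - m + 1
    H = np.array([x[i : i + n] for i in range(m)])
    h0 = H[(m - 1) // 2]                       # middle row of the Hankel matrix
    h1 = np.gradient(h0, dt)                   # finite-difference h_0'
    h2 = np.gradient(h1, dt)                   # finite-difference h_0''
    bound = 2 * np.linalg.norm(h1) / np.linalg.norm(h2)
    print(f"m*dt = {m * dt:.3f}  vs  2||h'||/||h''|| = {bound:.3f}")
    return m * dt < bound

dt = 0.001
t = np.arange(0, 10, dt)
x = np.sin(t) + np.sin(2 * t)
print(check_sampling(x, m=41, dt=dt))          # True: the inequality holds
```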

Figure 5.

Increasing sampling frequency and number of columns yields more structured HAVOK models for the Lorenz system. Given the Hankel matrix $H$, the linear dynamical model is plotted for sampling periods $\Delta t$ equal to 0.01, 0.005, 0.001 and 0.0005, for a fixed number of rows and a fixed time span of measurement (a–d). Similarly, the model is plotted for numbers of columns $n$ equal to 1001, 2001, 5001 and 10001, for fixed sampling frequency and number of delays $m$ (e–h). As we increase the sampling frequency and the number of columns of the data, $A$ becomes more antisymmetric, with non-zero elements only on the super- and sub-diagonals. These trends illustrate the results in §4. (Online version in colour.)

Choose the number of columns $n$ to be large. The number of columns enters the Taylor expansion through the derivatives $h_0^{(k)}$, since $h_0^{(k)} \in \mathbb{R}^n$.

For the synthetic example $x(t) = \sin(t) + \sin(2t)$, we can show that the ratio $2\|h_0'\|/\|h_0''\|$ saturates to a fixed value in the limit as $n$ goes to infinity (see the electronic supplementary material, Note 2). However, for short time series (small values of $n$), this ratio can be arbitrarily small, and hence (4.1) will be difficult to satisfy.

We illustrate this in figure 5 using data from the Lorenz system. We compute and plot the HAVOK linear dynamics matrix for a varying number of columns $n$, while fixing the sampling frequency and the number of rows $m$. We see that as we increase the number of columns, the dynamics matrix becomes more skew-symmetric and tridiagonal. In general, due to practical constraints and restrictions, it may be difficult to guarantee that given data satisfy these two requirements. In §§4a and 5, we propose methods to tackle this challenge.

(a) . Interpolation

From the first requirement, we see that the sampling period $\Delta t$ needs to be sufficiently small to recover the antisymmetric structure in $A$. However, in practice, it is not always possible to satisfy this sampling criterion.

One solution to remedy this is to use data interpolation. To be precise, we can increase the sampling rate by spline interpolation, then construct $H$ from interpolated data that satisfy (4.1). The ratios of the derivatives $\|h_0'\|/\|h_0''\|, \|h_0''\|/\|h_0'''\|, \ldots$ may also contain some dependence on $\Delta t$, but we observe that in practice this dependence is not significant.

As an example, we consider a set of time-series measurements generated from the Lorenz system (see §5 for more details about this system). We start with a sampling period of $\Delta t = 0.1$ (figure 6a–c). Note that here we have simulated the Lorenz system at high temporal resolution and then subsampled to produce these time-series data. Applying HAVOK with centring and $m = 201$, we see that $A$ is not antisymmetric and the columns of $U$ are not the orthogonal polynomials, unlike in the synthetic example shown in figure 4.

Figure 6.

In the case where a dynamical system is sparsely sampled, interpolation can be used to recover a more tridiagonal and antisymmetric matrix for the linear model in HAVOK. First, we simulate the Lorenz system, measuring $x(t)$ with a sampling period of $\Delta t = 0.1$. The resulting dynamics model $A$ and corresponding singular vectors $U$ are plotted. Due to the low sampling frequency, these data do not satisfy the requirements in (4.1). Consequently, the dynamics matrix is not antisymmetric and the singular vectors do not correspond to the orthogonal polynomials in §3c. Next, the data are interpolated using cubic splines and subsequently sampled with a sampling period of $\Delta t = 0.001$. In this case, the data satisfy the assumptions in (4.1), which yields the tridiagonal, antisymmetric structure for $A$ and the orthogonal polynomials for $U$, as predicted. (Online version in colour.)

Next, we apply cubic spline interpolation to these data, evaluating at a sampling period of $\Delta t = 0.001$ (figure 6d–f). We note that, especially for real-world data with measurement noise, this interpolation procedure also serves to smooth the data, making the computation of its derivatives more tractable [73]. Applying HAVOK to the interpolated data yields a new antisymmetric $A$ matrix, and $U$ corresponds to the orthogonal polynomials described in §3c.
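A minimal sketch of this preprocessing step, using SciPy's CubicSpline on an assumed stand-in signal:

```python
import numpy as np
from scipy.interpolate import CubicSpline

dt_coarse, dt_fine = 0.1, 0.001
t_coarse = np.arange(0, 50, dt_coarse)
x_coarse = np.sin(t_coarse) + np.sin(2 * t_coarse)   # stand-in for Lorenz x(t)

# Fit a cubic spline to the sparse samples and resample on a fine grid,
# so that the Hankel matrix built from x_fine can satisfy (4.1).
spline = CubicSpline(t_coarse, x_coarse)
t_fine = np.arange(0, t_coarse[-1], dt_fine)
x_fine = spline(t_fine)
```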

5. Promoting structure in the HAVOK decomposition

HAVOK yields a linear model of a dynamical system explained by the Frenet–Serret frame, and by leveraging these theoretical connections, here we propose a modification of the HAVOK algorithm to promote this antisymmetric structure. We refer to this algorithm as sHAVOK and describe it in §5a. Compared with HAVOK, sHAVOK yields structured dynamics matrices that better approximate the Frenet–Serret frame and more closely estimate the curvatures. Importantly, sHAVOK also produces better models of the system using significantly less data. We demonstrate its application to three nonlinear synthetic example systems in §5b and two real-world datasets in §5c.

(a) . The sHAVOK algorithm

We propose a modification to the HAVOK algorithm that more closely induces the antisymmetric structure in the dynamics matrix, especially for shorter time series. The key innovation in sHAVOK is the application of two SVDs applied separately to time-shifted Hankel matrices (compare figures 1 and 7). This simple modification enforces that the singular vector bases on which the dynamics matrix is computed are orthogonal, and thus more closely approximate the Frenet–Serret frame.

Figure 7.

Outline of steps in structured HAVOK (sHAVOK). First, a single variable $x(t)$ of a dynamical system is measured. Time-shifted copies of $x(t)$ are stacked to form a Hankel matrix $H$. $H$ is split into two time-shifted matrices, $H_1$ and $H_2$. The singular value decomposition (SVD) is applied to these two matrices individually. This results in reduced-order representations, $V_1$ and $V_2$, of $H_1$ and $H_2$, respectively. The matrices $V_1$ and $V_2$ are then used to construct an approximation to this low-dimensional state and its derivative. Finally, linear regression is performed on these two matrices to form a linear dynamical model with an additional forcing term in the last component. (Online version in colour.)

Building on the HAVOK algorithm as summarized in §2c, we focus on the step where the singular vectors V are split into V1 and V2. In the Frenet–Serret framework, we are interested in the evolution of the orthonormal frame e1(t),e2(t),,er(t). In HAVOK, V1 and V2 correspond to instances of this orthonormal frame.

Although V is a unitary matrix, V1 and V2—which each consist of removing a column from V—are not. To enforce this orthogonality, we propose to split H¯ into two time-shifted matrices H¯1 and H¯2 (figure 7) and then compute two SVDs with rank truncation r,

$$\bar H_1 = U_1 \Sigma_1 V_1^\top \quad\text{and}\quad \bar H_2 = U_2 \Sigma_2 V_2^\top.$$

By construction, $V_1$ and $V_2$ now have orthonormal columns.

As in HAVOK, our goal is to estimate the dynamics matrix $A$ such that

$$\dot v(t) = A\,v(t).$$

To do so, we use the matrices $V_1$ and $V_2$ to construct the state and its derivative,

$$V^\top = V_1^\top \quad\text{and}\quad \frac{dV^\top}{dt} \approx \frac{V_2^\top - V_1^\top}{\Delta t}.$$

$A$ then satisfies

$$A = \frac{dV^\top}{dt}\left(V^\top\right)^\dagger = \left(\frac{V_2^\top - V_1^\top}{\Delta t}\right)V_1 = \frac{V_2^\top V_1 - I}{\Delta t}. \qquad (5.1)$$

If this system is not closed (non-zero forcing term), then $V_2$ is defined as columns 2 to $n-1$ of the singular vectors with a rank-$(r-1)$ truncation $V_{r-1}$, and $A \in \mathbb{R}^{(r-1)\times(r-1)}$ and $B \in \mathbb{R}^{(r-1)\times 1}$ are computed as $[A, B] = (V_2^\top V_1 - I)/\Delta t$. The corresponding pseudocode is elaborated in the electronic supplementary material, Note 3. We note that sHAVOK requires one additional SVD evaluation compared with HAVOK. For situations in which runtime is a limiting factor, $\bar H_2$ may be expressed using rank-one updates of $\bar H_1$. Using this fact, efficient methods may be leveraged to compute the SVD of $\bar H_2$ from that of $\bar H_1$, with a negligible increase in runtime [74,75].
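A minimal sketch of sHAVOK for the closed (unforced) case follows. The sign-alignment step is a practical detail we add because each SVD determines its singular vectors only up to sign; the toy signal is an illustrative assumption, and the repository linked in §6 contains the reference implementation:

```python
import numpy as np

def shavok(x, m, r, dt):
    """Sketch of sHAVOK: two SVDs of time-shifted centred Hankel
    matrices, then A = (V2^T V1 - I) / dt as in (5.1)."""
    n = len(x) - m + 1
    H = np.array([x[i : i + n] for i in range(m)])
    H = H - H[(m - 1) // 2]                    # centring (see section 3c)
    H1, H2 = H[:, :-1], H[:, 1:]               # time-shifted Hankel matrices
    _, _, V1t = np.linalg.svd(H1, full_matrices=False)
    _, _, V2t = np.linalg.svd(H2, full_matrices=False)
    V1, V2 = V1t[:r].T, V2t[:r].T              # orthonormal columns by construction
    signs = np.sign(np.sum(V1 * V2, axis=0))   # align the SVD sign ambiguity
    V2 = V2 * signs
    return (V2.T @ V1 - np.eye(r)) / dt        # equation (5.1)

dt = 0.001
t = np.arange(0, 10, dt)
x = np.sin(t) + np.sin(2 * t)
A = shavok(x, m=41, r=4, dt=dt)
print(np.round(A, 4))   # nearly antisymmetric and tridiagonal
```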

As a simple analytic example, we apply sHAVOK to the same system described in §3b, generated by $x(t) = \sin(t) + \sin(2t)$. The resulting dynamics matrix is

$$A = \begin{bmatrix} 1.116\times10^{-5} & 1.204\times10^{-2} & 1.227\times10^{-5} & 8.728\times10^{-8} \\ -1.204\times10^{-2} & 1.269\times10^{-5} & 4.458\times10^{-3} & 4.650\times10^{-6} \\ 2.053\times10^{-5} & -4.458\times10^{-3} & 4.897\times10^{-6} & 6.617\times10^{-3} \\ 9.956\times10^{-8} & 1.118\times10^{-7} & -6.617\times10^{-3} & 3.368\times10^{-6} \end{bmatrix}.$$

We see immediately that, with this small modification, A has become much more structured compared with (3.9). Specifically, the estimates of the curvatures both below and above the diagonal are now equal, and the rest of the elements in the matrix, which should be zero, are almost all smaller by an order of magnitude. In addition, the curvatures are equal to the true analytic values up to three decimal places.

We emphasize that, from a theoretical standpoint, sHAVOK aligns much more closely with the findings of §3. In particular, sHAVOK enforces that the singular vector bases on which the dynamics matrix is computed are orthogonal, and thus more closely approximate the Frenet–Serret frame than in HAVOK. Methods with stronger theoretical foundations are beneficial as they allow us to (1) better predict and understand their behaviour on new datasets and (2) more easily understand their underlying assumptions and identify areas for future modifications. For further analysis of the sHAVOK method for varying lengths of data, initial conditions, rank truncations and noise levels, see the electronic supplementary material, Notes 5–8.

(b) . Comparison of HAVOK and sHAVOK for three synthetic examples

The results of HAVOK and sHAVOK converge in the limit of infinite data, and the models they produce are most different in cases of shorter time-series data, where we may not have measurements over long periods of time. Using synthetic data from three nonlinear example systems, we compute models using both methods and compare the corresponding dynamics matrices $A$ (figure 8). In every case, the $A$ matrix computed using the sHAVOK algorithm is more antisymmetric and has a stronger tridiagonal structure than the corresponding matrix computed using HAVOK.

Figure 8.

Structured HAVOK (sHAVOK) yields more structured models from short trajectories than HAVOK. For each system, we simulated a trajectory extracting a single coordinate in time (grey). We then apply HAVOK and sHAVOK to data x(t) from a short subset of this trajectory, shown in black. The middle columns show the resulting dynamics matrices A from the models. The top three rows correspond to different subsets of the Lorenz system, while the fourth and fifth rows correspond to trajectories from the Rössler system and a double pendulum, respectively. Compared with HAVOK, the resulting models for sHAVOK consistently show stronger structure in that they are antisymmetric with non-zero elements only along the sub- and super-diagonals. The corresponding eigenvalue spectra of A for HAVOK and sHAVOK are plotted in teal and maroon, respectively, in addition to eigenvalues from HAVOK for the full (grey) trajectory. In all cases, the sHAVOK eigenvalues are much closer in value to those from the long trajectory limit than HAVOK. (Online version in colour.)

In addition to the dynamics matrices, we also show in figure 8 the eigenvalues of $A$, $\omega_k \in \mathbb{C}$ for $k = 1, \ldots, r$, for HAVOK (teal) and sHAVOK (maroon). We additionally plot the eigenvalues (black crosses) computed from data measured in the large-data limit, but at the same sampling frequency. In this large-data limit, both sHAVOK and HAVOK yield the same antisymmetric, tridiagonal dynamics matrix and corresponding eigenvalues. Comparing the eigenvalues, we immediately see that the eigenvalues from sHAVOK more closely match those computed in the large-data limit. Thus, even with a short trajectory, we can still recover models and key features of the underlying dynamics.

We emphasize here that sHAVOK is robust to initial conditions. In particular, for the first example, corresponding to the Lorenz system, we plot the HAVOK and sHAVOK results for three different subsets of the data. In all of these cases, although the HAVOK dynamics matrix varies significantly in structure, the sHAVOK matrix remains antisymmetric and tridiagonal. Furthermore, the sHAVOK eigenvalues are much closer to those from the long trajectory compared with HAVOK. We describe each of the systems and their configurations below.

Lorenz attractor: We first illustrate these two methods on the Lorenz system. Originally developed in the fluids community, the Lorenz system is governed by three first-order differential equations [76]:

$$\dot x = \sigma(y - x), \qquad \dot y = x(\rho - z) - y \qquad\text{and}\qquad \dot z = xy - \beta z.$$

The Lorenz system has since been used to model systems in a wide variety of fields, including chemistry [77], optics [78] and circuits [79].

We simulate 3000 samples with initial condition $[-8, 8, 27]$ and a step size of $\Delta t = 0.001$, measuring the variable $x(t)$. We use the common parameters $\sigma = 10$, $\rho = 28$ and $\beta = 8/3$. This trajectory is shown in figure 8 and corresponds to a few oscillations about a fixed point. We choose the lengths of these datasets to be short enough that the HAVOK dynamics matrix is visually neither antisymmetric nor tridiagonal. We compare the spectra with that of a longer trajectory containing 300000 samples, which we take to be an approximation of the true spectrum of the system.
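A sketch of this data-generation step with SciPy (the solver and its tolerances are our own choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

dt = 0.001
t_eval = np.arange(0, 3000 * dt, dt)                  # 3000 samples
sol = solve_ivp(lorenz, (0, t_eval[-1]), [-8, 8, 27],
                t_eval=t_eval, rtol=1e-9, atol=1e-9)
x = sol.y[0]                                          # the measured variable x(t)
```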

Rössler attractor: The Rössler attractor is given by the following nonlinear differential equations [80,81]:

$$\dot x = -y - z, \qquad \dot y = x + ay \qquad\text{and}\qquad \dot z = b + z(x - c).$$

We choose to measure the variable $x(t)$. This attractor is a canonical example of chaos, like the Lorenz attractor. Here, we perform a simulation with 70000 samples and a step size of $\Delta t = 0.001$. We choose the common values $a = 0.1$, $b = 0.1$ and $c = 14$ and the initial condition $x_0 = y_0 = z_0 = 1$. We similarly plot the trajectory and dynamics matrices. We compare the spectra in this case with a longer trajectory from a simulation with 300000 samples.

Double pendulum: The double pendulum is another nonlinear system, which models the motion of a pendulum connected at its end to a second pendulum [82]. This system is typically represented by its Lagrangian,

$$L = \frac{1}{6}ml^2\left(\dot\theta_2^2 + 4\dot\theta_1^2 + 3\dot\theta_1\dot\theta_2\cos(\theta_1 - \theta_2)\right) + \frac{1}{2}mgl\left(3\cos\theta_1 + \cos\theta_2\right), \qquad (5.2)$$

where $\theta_1$ and $\theta_2$ are the angles between the top and bottom pendula and the vertical axis, respectively, $m$ is the mass at the end of each pendulum, $l$ is the length of each pendulum and $g$ is the acceleration constant due to gravity. Using the Euler–Lagrange equations,

$$\frac{d}{dt}\frac{\partial L}{\partial \dot\theta_i} - \frac{\partial L}{\partial \theta_i} = 0 \quad\text{for } i = 1, 2,$$

we can construct two second-order differential equations of motion.

The trajectory is computed using a variational integrator to approximate

$$\delta \int_a^b L(\theta_1, \theta_2, \dot\theta_1, \dot\theta_2)\,dt = 0.$$

We simulate this system with a step size of $\Delta t = 0.001$ for 1200 samples. We choose $m_1 = m_2 = l_1 = l_2 = 1$ and $g = 10$, and use initial conditions $\theta_1 = \theta_2 = \pi/2$, $\dot\theta_1 = 0.01$ and $\dot\theta_2 = 0.005$. As our measurement for HAVOK and sHAVOK, we use $x(t) = \sin(\theta_1(t))$, and we compare our data with a long trajectory containing 100000 samples.

(c) . sHAVOK applied to real-world datasets

Here, we apply sHAVOK to two real-world time-series datasets, the trajectory of a double pendulum and measles outbreak data. Similar to the synthetic examples, we find that the dynamics matrix from sHAVOK is much more antisymmetric and tridiagonal compared with the dynamics matrix for HAVOK. In both cases, some of the HAVOK eigenvalues contain positive real components; in other words, these models have unstable dynamics. However, the sHAVOK spectra do not contain positive real components, resulting in much more accurate and stable models (figure 9).

Figure 9.

Comparison of HAVOK and structured HAVOK (sHAVOK) for two real-world systems: a double pendulum and measles outbreak data. For each system, we measure a trajectory extracting a single coordinate (grey). We then apply HAVOK and sHAVOK to a subset of this trajectory, shown in black. The A matrices for the resulting linear dynamical models are shown. sHAVOK yields models with an antisymmetric structure, with non-zero elements only along the sub-diagonal and super-diagonal. The corresponding eigenvalue spectra for HAVOK and sHAVOK are additionally plotted in teal and maroon, respectively, along with eigenvalues from HAVOK for a long trajectory. In both cases, the eigenvalues of sHAVOK are much closer in value to those in the long trajectory limit than HAVOK. Some of the eigenvalues of HAVOK are unstable and have positive real components. The corresponding reconstructions of the first singular vector of the corresponding Hankel matrices are shown along with the real data. Note that the HAVOK models are unstable, growing exponentially due to the unstable eigenvalues, while the sHAVOK models do not. Credit for images on left: (double pendulum) [83] and (measles) CDC/Cynthia S. Goldsmith; William Bellini, PhD. (Online version in colour.)

Double pendulum: We first consider measurements of a physical double pendulum [83]. A picture of the set-up is shown in figure 9. The Lagrangian is very similar to that in (5.2). One key difference is that in the synthetic case all of the mass is concentrated at the joints, while in this experiment the mass is distributed along each arm. To accommodate this, the Lagrangian is slightly modified,

L = \frac{1}{2} \left( m_1 (\dot{x}_1^2 + \dot{y}_1^2) + m_2 (\dot{x}_2^2 + \dot{y}_2^2) \right) + \frac{1}{2} \left( I_1 \dot{\theta}_1^2 + I_2 \dot{\theta}_2^2 \right) - (m_1 y_1 + m_2 y_2) g,

where x_1 = a_1 \sin\theta_1, x_2 = l_1 \sin\theta_1 + a_2 \sin\theta_2, y_1 = -a_1 \cos\theta_1 and y_2 = -l_1 \cos\theta_1 - a_2 \cos\theta_2. Here, m1 and m2 are the masses of the pendula, l1 and l2 are the lengths of the pendula, a1 and a2 are the distances from the joints to the centres of mass of each arm, and I1 and I2 are the moments of inertia of each arm. When m1 = m2 = m, a1 = a2 = l1/2 = l2/2 = l/2 and I1 = I2 = ml²/12, we recover (5.2). We sample the data at Δt = 0.001 s and plot sin(θ2(t)) over a 15 s time interval. The data over this interval appear approximately periodic.

Measles outbreaks: As a second example, we consider measles outbreak data from New York City between 1928 and 1964 [84]. The case history of measles over time has been shown to exhibit chaotic behaviour [85,86], and HAVOK was previously applied to measles data in [5], where it successfully extracted transient behaviour.

For both systems, we apply sHAVOK to a subset of the data corresponding to the black trajectories x(t) shown in figure 9, and we compare with HAVOK applied over the same interval. We use m = 101 delays with an r = 5 rank truncation for the double pendulum, and m = 51 delays with an r = 6 rank truncation for the measles data. For the measles data, prior to applying sHAVOK and HAVOK, the data are first interpolated and sampled at a rate of Δt = 0.0018 years. As in the previous examples, the resulting sHAVOK dynamics matrix is tridiagonal and antisymmetric, while the HAVOK dynamics matrix is not. Next, we plot the corresponding spectra for the two methods, together with the eigenvalues from HAVOK applied to the entire time series. Most noticeably, the eigenvalues from sHAVOK are closer to the long-data-limit values. In addition, two of the HAVOK eigenvalues lie to the right of the imaginary axis and thus have positive real parts; all of the sHAVOK eigenvalues, by contrast, have negative real parts. This difference is most prominent in the reconstructions of the first singular vector. In particular, since two of the HAVOK eigenvalues have positive real parts, the reconstructed time series grows exponentially. By contrast, the sHAVOK time series remains bounded, providing a much better model of the true data.
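A minimal Python sketch of the two regressions as we read them from the text; the function names are ours, the sign-alignment step is our addition to handle the sign ambiguity of the two separate SVDs, and the reference implementation lives in the linked repository:

import numpy as np

def hankel_matrix(x, m):
    # Stack m-sample sliding windows of the series x as columns.
    n = len(x) - m + 1
    return np.column_stack([x[i:i + m] for i in range(n)])

def havok(x, m, r, dt):
    # Single Hankel matrix; finite-difference regression on one set of modes.
    H = hankel_matrix(x, m)
    V = np.linalg.svd(H, full_matrices=False)[2][:r].T
    dV = (V[1:] - V[:-1])/dt
    return np.linalg.lstsq(V[:-1], dV, rcond=None)[0].T

def shavok(x, m, r, dt):
    # Two Hankel matrices shifted by one sample, each with its own SVD.
    H = hankel_matrix(x, m)
    V1 = np.linalg.svd(H[:, :-1], full_matrices=False)[2][:r].T
    V2 = np.linalg.svd(H[:, 1:], full_matrices=False)[2][:r].T
    V2 *= np.sign(np.sum(V1*V2, axis=0))  # align SVD signs across the two bases
    # Forward difference: dV/dt ~ (V2 - V1)/dt, state ~ V1 (orthonormal columns).
    return ((V2 - V1)/dt).T @ V1

The spectra in figure 9 would then correspond, for the double pendulum, to np.linalg.eigvals(havok(x, 101, 5, 1e-3)) and np.linalg.eigvals(shavok(x, 101, 5, 1e-3)).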

6. Discussion

In this paper, we describe a new theoretical connection between models constructed from time-delay embeddings, specifically using the HAVOK approach, and the Frenet–Serret frame from differential geometry. This unifying perspective explains the peculiar antisymmetric, tridiagonal structure of HAVOK models: namely, the sub- and super-diagonal entries of the linear model correspond to the intrinsic curvatures in the Frenet–Serret frame. Inspired by this theoretical insight, we develop an extension we call structured HAVOK that effectively yields models with this structure. Importantly, we demonstrate that this modified algorithm improves the stability and accuracy of time-delay embedding models, especially when data are noisy and limited in length. All code is available at https://github.com/sethhirsh/sHAVOK.

Establishing theoretical connections between time-delay embedding, dimensionality reduction and differential geometry opens the door to a wide variety of applications and future work. This perspective clarifies the requirements and limitations of HAVOK, and it has motivated simple modifications to the method that improve its performance on data. However, the full implications of the theory remain unexplored. Differential geometry, dimensionality reduction and time-delay embedding are all well-established fields, and by understanding the connections between them we can develop more robust and interpretable methods for modelling time series.

For instance, connecting HAVOK to the Frenet–Serret frame revealed the importance of enforcing orthogonality between V1 and V2, which inspired the development of sHAVOK. This theory also suggests further improvements to the method. For example, sHAVOK can be thought of as a first-order forward difference scheme, approximating the derivative and the state by (V2 − V1)/Δt and V1, respectively. Employing a central difference scheme instead, for instance by approximating the state by (V1 + V2)/2, we have observed that the antisymmetry of the dynamics matrix is enforced further still, moving the corresponding eigenvalues towards the imaginary axis.
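A sketch of this central-difference variant, continuing the sHAVOK snippet above (V1, V2 and dt as defined there); taking the midpoint (V1 + V2)/2 as the state is our reading of the scheme, not the paper's reference implementation:

import numpy as np

def shavok_central(V1, V2, dt):
    # Central-difference variant: keep dV/dt ~ (V2 - V1)/dt but evaluate
    # the state at the interval midpoint (V1 + V2)/2.
    dV = (V2 - V1)/dt
    Vm = 0.5*(V1 + V2)
    # Vm is not orthonormal, so solve the least-squares problem explicitly.
    return np.linalg.lstsq(Vm, dV, rcond=None)[0].T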

Throughout this analysis, we have focused purely on linear methods. In recent years, nonlinear methods for dimensionality reduction, such as autoencoders and diffusion maps, have gained popularity [7,87,88]. Nonlinear models similarly benefit from promoting sparsity and interpretability. By understanding the structure of linear models, we hope to generalize these ideas and develop more accurate and robust methods that can model a greater class of systems.

Acknowledgements

We are grateful for discussions with S. H. Singh and K. D. Harris; and to K. Kaheman for providing the double pendulum dataset. We especially thank A. G. Nair for providing valuable insights and feedback in designing the analysis.

Footnotes

1. We define the left singular matrix as V and the right singular matrix as U. This definition can be thought of as taking the SVD of the transpose of the matrix H. This keeps the definitions of the matrices more in line with the notation used in HAVOK.

2. See the electronic supplementary material, Note §4, for more details.

Data accessibility

All code used to reproduce results in the figures is openly available at https://github.com/sethhirsh/sHAVOK.

Authors' contributions

S.M.H. conceived of the study, designed the analyses, carried out the analyses and wrote the manuscript. S.M.I. helped carry out the computational analyses. S.L.B. helped design the analyses and write the manuscript. J.N.K. helped design the analyses and revised the manuscript. B.W.B. helped design the analyses, coordinate the study and write the manuscript.

Competing interests

The authors have no competing interests to declare.

Funding

This work was funded by the Army Research Office (W911NF-17-1-0306 to S.L.B.); the Air Force Office of Scientific Research (FA9550-17-1-0329 to J.N.K.); the Air Force Research Laboratory (FA8651-16-1-0003 to B.W.B.); the National Science Foundation (award no. 1514556 to B.W.B.); and the Alfred P. Sloan Foundation and the Washington Research Foundation (to B.W.B.).

References

1. Schmidt M, Lipson H. 2009. Distilling free-form natural laws from experimental data. Science 324, 81-85. (doi:10.1126/science.1165893)
2. Bongard J, Lipson H. 2007. Automated reverse engineering of nonlinear dynamical systems. Proc. Natl Acad. Sci. USA 104, 9943-9948. (doi:10.1073/pnas.0609476104)
3. Brunton SL, Proctor JL, Kutz JN. 2016. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl Acad. Sci. USA 113, 3932-3937. (doi:10.1073/pnas.1517384113)
4. Brunton SL, Kutz JN. 2019. Data-driven science and engineering: machine learning, dynamical systems, and control. Cambridge, UK: Cambridge University Press.
5. Brunton SL, Brunton BW, Proctor JL, Kaiser E, Kutz JN. 2017. Chaos as an intermittently forced linear system. Nat. Commun. 8, 19. (doi:10.1038/s41467-017-00030-8)
6. Lusch B, Kutz JN, Brunton SL. 2018. Deep learning for universal linear embeddings of nonlinear dynamics. Nat. Commun. 9, 4950. (doi:10.1038/s41467-018-07210-0)
7. Champion K, Lusch B, Kutz JN, Brunton SL. 2019. Data-driven discovery of coordinates and governing equations. Proc. Natl Acad. Sci. USA 116, 22 445-22 451. (doi:10.1073/pnas.1906995116)
8. Koopman BO. 1931. Hamiltonian systems and transformation in Hilbert space. Proc. Natl Acad. Sci. USA 17, 315-318. (doi:10.1073/pnas.17.5.315)
9. Mezić I, Banaszuk A. 2004. Comparison of systems with complex behavior. Physica D 197, 101-133. (doi:10.1016/j.physd.2004.06.015)
10. Mezić I. 2005. Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dyn. 41, 309-325. (doi:10.1007/s11071-005-2824-x)
11. Rowley CW et al. 2009. Spectral analysis of nonlinear flows. J. Fluid Mech. 641, 115-127. (doi:10.1017/S0022112009992059)
12. Mezić I. 2013. Analysis of fluid flows via spectral properties of the Koopman operator. Annu. Rev. Fluid Mech. 45, 357-378. (doi:10.1146/annurev-fluid-011212-140652)
13. Kutz JN, Brunton SL, Brunton BW, Proctor JL. 2016. Dynamic mode decomposition: data-driven modeling of complex systems. Philadelphia, PA: SIAM.
14. Schmid PJ. 2010. Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech. 656, 5-28. (doi:10.1017/S0022112010001217)
15. Tu JH, Rowley CW, Luchtenburg DM, Brunton SL, Kutz JN. 2014. On dynamic mode decomposition: theory and applications. J. Comput. Dyn. 1, 391-421. (doi:10.3934/jcd.2014.1.391)
16. Takens F. 1981. Detecting strange attractors in turbulence. In Dynamical systems and turbulence, pp. 366-381. New York, NY: Springer.
17. Arbabi H, Mezić I. 2017. Ergodic theory, dynamic mode decomposition, and computation of spectral properties of the Koopman operator. SIAM J. Appl. Dyn. Syst. 16, 2096-2126. (doi:10.1137/17M1125236)
18. Champion KP, Brunton SL, Kutz JN. 2019. Discovery of nonlinear multiscale systems: sampling strategies and embeddings. SIAM J. Appl. Dyn. Syst. 18, 312-333. (doi:10.1137/18M1188227)
19. Broomhead DS, Jones R. 1989. Time-series analysis. Proc. R. Soc. Lond. A 423, 103-121.
20. Juang J-N, Pappa RS. 1985. An eigensystem realization algorithm for modal parameter identification and model reduction. J. Guid. Control Dyn. 8, 620-627. (doi:10.2514/3.20031)
21. Brunton BW, Johnson LA, Ojemann JG, Kutz JN. 2016. Extracting spatial-temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition. J. Neurosci. Methods 258, 1-15. (doi:10.1016/j.jneumeth.2015.10.010)
22. Giannakis D, Majda AJ. 2012. Nonlinear Laplacian spectral analysis for time series with intermittency and low-frequency variability. Proc. Natl Acad. Sci. USA 109, 2222-2227. (doi:10.1073/pnas.1118984109)
23. Das S, Giannakis D. 2019. Delay-coordinate maps and the spectra of Koopman operators. J. Stat. Phys. 175, 1107-1145. (doi:10.1007/s10955-019-02272-w)
24. Dhir N, Kosiorek AR, Posner I. 2017. Bayesian delay embeddings for dynamical systems. In NIPS Timeseries Workshop.
25. Giannakis D. 2020. Delay-coordinate maps, coherence, and approximate spectra of evolution operators. (http://arxiv.org/abs/2007.02195)
26. Gilpin W. 2020. Deep learning of dynamical attractors from time series measurements. (http://arxiv.org/abs/2002.05909)
27. Pan S, Duraisamy K. 2020. On the structure of time-delay embedding in linear models of non-linear dynamical systems. Chaos 30, 073135. (doi:10.1063/5.0010886)
28. Kamb M, Kaiser E, Brunton SL, Kutz JN. 2018. Time-delay observables for Koopman: theory and applications. (http://arxiv.org/abs/1810.01479)
29. Do Carmo MP. 2016. Differential geometry of curves and surfaces: revised and updated second edition. Mineola, NY: Courier Dover Publications.
30. O'Neill B. 2014. Elementary differential geometry. New York, NY: Academic Press.
31. Serret J-A. 1851. Sur quelques formules relatives à la théorie des courbes à double courbure. J. de Mathématiques Pures et Appliquées, 193-207.
32. Spivak MD. 1970. A comprehensive introduction to differential geometry. Publish or Perish.
33. Álvarez-Vizoso J, Arn R, Kirby M, Peterson C, Draper B. 2019. Geometry of curves in Rⁿ from the local singular value decomposition. Linear Algebra Appl. 571, 180-202. (doi:10.1016/j.laa.2019.02.006)
34. Golub GH, Reinsch C. 1971. Singular value decomposition and least squares solutions. In Linear algebra, pp. 134-151. New York, NY: Springer.
35. Joliffe I, Morgan B. 1992. Principal component analysis and exploratory factor analysis. Stat. Methods Med. Res. 1, 69-95. (doi:10.1177/096228029200100105)
36. Schmid P, Sesterhenn J. 2008. Dynamic mode decomposition of numerical and experimental data. APS 61, MR-007.
37. Alter O, Brown PO, Botstein D. 2000. Singular value decomposition for genome-wide expression data processing and modeling. Proc. Natl Acad. Sci. USA 97, 10 101-10 106. (doi:10.1073/pnas.97.18.10101)
38. Santolík O, Parrot M, Lefeuvre F. 2003. Singular value decomposition methods for wave propagation analysis. Radio Sci. 38.
39. Muller N, Magaia L, Herbst BM. 2004. Singular value decomposition, eigenfaces, and 3D reconstructions. SIAM Rev. 46, 518-545. (doi:10.1137/S0036144501387517)
40. Proctor JL, Eckhoff PA. 2015. Discovering dynamic patterns from infectious disease data using dynamic mode decomposition. Int. Health 7, 139-145. (doi:10.1093/inthealth/ihv009)
41. Berger E, Sastuba M, Vogt D, Jung B, Amor HB. 2015. Estimation of perturbations in robotic behavior using dynamic mode decomposition. J. Adv. Rob. 29, 331-343. (doi:10.1080/01691864.2014.981292)
42. Kaptanoglu AA, Morgan KD, Hansen CJ, Brunton SL. 2020. Characterizing magnetized plasmas with dynamic mode decomposition. Phys. Plasmas 27, 032108. (doi:10.1063/1.5138932)
43. Herrmann B, Baddoo PJ, Semaan R, Brunton SL, McKeon BJ. 2020. Data-driven resolvent analysis. (http://arxiv.org/abs/2010.02181)
44. Grosek J, Kutz JN. 2014. Dynamic mode decomposition for real-time background/foreground separation in video. (http://arxiv.org/abs/1404.7592)
45. Erichson NB, Brunton SL, Kutz JN. 2019. Compressed dynamic mode decomposition for background modeling. J. Real-Time Image Process. 16, 1479-1492. (doi:10.1007/s11554-016-0655-2)
46. Askham T, Kutz JN. 2018. Variable projection methods for an optimized dynamic mode decomposition. SIAM J. Appl. Dyn. Syst. 17, 380-416. (doi:10.1137/M1124176)
47. Dawson ST, Hemati MS, Williams MO, Rowley CW. 2016. Characterizing and correcting for the effect of sensor noise in the dynamic mode decomposition. Exp. Fluids 57, 1-19. (doi:10.1007/s00348-016-2127-7)
48. Hemati MS, Rowley CW, Deem EA, Cattafesta LN. 2017. De-biasing the dynamic mode decomposition for applied Koopman spectral analysis. Theor. Comput. Fluid Dyn. 31, 349-368. (doi:10.1007/s00162-017-0432-2)
49. Partington JR. 1988. An introduction to Hankel operators, vol. 13. Cambridge, UK: Cambridge University Press.
50. Beckermann B, Townsend A. 2019. Bounds on the singular values of matrices with displacement structure. SIAM Rev. 61, 319-344. (doi:10.1137/19M1244433)
51. Susuki Y, Mezić I. 2015. A Prony approximation of Koopman mode decomposition. In 2015 IEEE 54th Annual Conf. on Decision and Control (CDC), pp. 7022-7027. IEEE.
52. Brunton SL, Brunton BW, Proctor JL, Kutz JN. 2016. Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control. PLoS ONE 11, e0150171. (doi:10.1371/journal.pone.0150171)
53. Waibel A, Hanazawa T, Hinton G, Shikano K, Lang KJ. 1989. Phoneme recognition using time-delay neural networks. IEEE Trans. Acoust. Speech Signal Process. 37, 328-339. (doi:10.1109/29.21701)
54. Dylewsky D, Kaiser E, Brunton SL, Kutz JN. 2020. Principal component trajectories (PCT): nonlinear dynamics as a superposition of time-delayed periodic orbits. (http://arxiv.org/abs/2005.14321)
55. Le Clainche S, Vega JM. 2017. Higher order dynamic mode decomposition. SIAM J. Appl. Dyn. Syst. 16, 882-925. (doi:10.1137/15M1054924)
56. Le Clainche S, Vega JM, Soria J. 2017. Higher order dynamic mode decomposition of noisy experimental data: the flow structure of a zero-net-mass-flux jet. Exp. Therm. Fluid Sci. 88, 336-353. (doi:10.1016/j.expthermflusci.2017.06.011)
57. Le Clainche S, Vega JM. 2017. Higher order dynamic mode decomposition to identify and extrapolate flow patterns. Phys. Fluids 29, 084102. (doi:10.1063/1.4997206)
58. Moulder RG Jr, Martynova E, Boker SM. 2020. Extracting nonlinear dynamics from psychological and behavioral time series through HAVOK analysis.
59. Bozzo E, Carniel R, Fasino D. 2010. Relationship between singular spectrum analysis and Fourier analysis: theory and application to the monitoring of volcanic activity. Comput. Math. Appl. 60, 812-820. (doi:10.1016/j.camwa.2010.05.028)
60. Arnol'd VI. 2013. Mathematical methods of classical mechanics, vol. 60. Springer Science & Business Media.
61. Meirovitch L. 2010. Methods of analytical dynamics. Courier Corporation.
62. Colorado J, Barrientos A, Martinez A, Lafaverges B, Valente J. 2010. Mini-quadrotor attitude control based on hybrid backstepping & Frenet-Serret theory. In 2010 IEEE Int. Conf. on Robotics and Automation, pp. 1617-1622. IEEE.
63. Ravani R, Meghdari A. 2006. Velocity distribution profile for robot arm motion using rational Frenet-Serret curves. Informatica 17, 69-84. (doi:10.15388/Informatica.2006.124)
64. Pilté M, Bonnabel S, Barbaresco F. 2017. Tracking the Frenet-Serret frame associated with a highly maneuvering target in 3D. In 2017 IEEE 56th Annual Conf. on Decision and Control (CDC), pp. 1969-1974. IEEE.
65. Bini D, de Felice F, Jantzen RT. 1999. Absolute and relative Frenet-Serret frames and Fermi-Walker transport. Class. Quantum Gravity 16, 2105. (doi:10.1088/0264-9381/16/6/333)
66. Iyer BR, Vishveshwara C. 1993. Frenet-Serret description of gyroscopic precession. Phys. Rev. D 48, 5706. (doi:10.1103/PhysRevD.48.5706)
67. Sayadi T, Hamman C, Schmid P. 2014. Parallel QR algorithm for data-driven decompositions. In Center for Turbulence Research, Proc. of the Summer Program, pp. 335-343.
68. Gibson JF, Farmer JD, Casdagli M, Eubank S. 1992. An analytic approach to practical state space reconstruction. Physica D 57, 1-30. (doi:10.1016/0167-2789(92)90085-2)
69. Gutkin E. 2011. Curvatures, volumes and norms of derivatives for curves in Riemannian manifolds. J. Geom. Phys. 61, 2147-2161. (doi:10.1016/j.geomphys.2011.06.013)
70. Abramowitz M, Stegun IA. 1948. Handbook of mathematical functions with formulae, graphs, and mathematical tables, vol. 55. Washington, DC: US Government Printing Office.
71. Whittaker ET, Watson GN. 1996. A course of modern analysis. Cambridge, UK: Cambridge University Press.
72. Hirsh SM, Harris KD, Kutz JN, Brunton BW. 2019. Centering data improves the dynamic mode decomposition. (http://arxiv.org/abs/1906.05973)
73. van Breugel F, Kutz JN, Brunton BW. 2020. Numerical differentiation of noisy data: a unifying multi-objective optimization framework. IEEE Access 8, 196 865-196 877. (doi:10.1109/ACCESS.2020.3034077)
74. Gandhi R, Rajgor A. 2017. Updating singular value decomposition for rank one matrix perturbation. (http://arxiv.org/abs/1707.08369v1)
75. Stange P. 2008. On the efficient update of the singular value decomposition. In PAMM: Proc. in Applied Mathematics and Mechanics, vol. 8, no. 1, pp. 10 827-10 828. Wiley Online Library.
76. Lorenz EN. 1963. Deterministic nonperiodic flow. J. Atmos. Sci. 20, 130-141. (doi:10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2)
77. Poland D. 1993. Cooperative catalysis and chemical chaos: a chemical model for the Lorenz equations. Physica D 65, 86-99. (doi:10.1016/0167-2789(93)90006-M)
78. Weiss C, Brock J. 1986. Evidence for Lorenz-type chaos in a laser. Phys. Rev. Lett. 57, 2804. (doi:10.1103/PhysRevLett.57.2804)
79. Hemati N. 1994. Strange attractors in brushless DC motors. IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 41, 40-45. (doi:10.1109/81.260218)
80. Rössler OE. 1976. An equation for continuous chaos. Phys. Lett. A 57, 397-398. (doi:10.1016/0375-9601(76)90101-8)
81. Rössler OE. 1979. An equation for hyperchaos. Phys. Lett. A 71, 155-157. (doi:10.1016/0375-9601(79)90150-6)
82. Shinbrot T, Grebogi C, Wisdom J, Yorke JA. 1992. Chaos in a double pendulum. Am. J. Phys. 60, 491-499. (doi:10.1119/1.16860)
83. Kaheman K, Kaiser E, Strom B, Kutz JN, Brunton SL. 2019. Learning discrepancy models from experimental data. In 58th IEEE Conf. on Decision and Control. IEEE.
84. London WP, Yorke JA. 1973. Recurrent outbreaks of measles, chickenpox and mumps: I. Seasonal variation in contact rates. Am. J. Epidemiol. 98, 453-468. (doi:10.1093/oxfordjournals.aje.a121575)
85. Schaffer WM, Kot M. 1985. Do strange attractors govern ecological systems? BioScience 35, 342-350. (doi:10.2307/1309902)
86. Sugihara G, May RM. 1990. Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series. Nature 344, 734-741. (doi:10.1038/344734a0)
87. Coifman RR, Lafon S. 2006. Diffusion maps. Appl. Comput. Harmon. Anal. 21, 5-30. (doi:10.1016/j.acha.2006.04.006)
88. Ng A. 2011. Sparse autoencoder. CS294A Lecture Notes 72, 1-19.
