Abstract
Deep brain stimulation (DBS) is a promising therapeutic approach for epilepsy treatment. Recently, research has focused on the implementation of stimulation protocols that would adapt to the patient's needs (adaptive stimulation) and deliver electrical stimuli only when most useful. A formal mathematical description of the effects of electrical stimulation on neuronal networks is a prerequisite for the development of adaptive DBS algorithms. Using tools from non-linear dynamic analysis, we describe an evidence-based, mathematical modeling approach that (1) accurately simulates epileptiform activity at time-scales of single and multiple ictal discharges, (2) simulates modulation of neural dynamics during epileptiform activity in response to fixed, low-frequency electrical stimulation, (3) defines a mapping from real-world observations to model state, and (4) defines a mapping from model state to real-world observations. We validate the real-world utility of the model's properties by statistical comparison between the number, duration, and interval of ictal-like discharges observed in vitro and those simulated in silico under conditions of repeated stimuli at fixed frequency. These validation results confirm that the evidence-based modeling approach captures robust, informative features of neural network dynamics of in vitro epileptiform activity under periodic pacing and support its use for further implementation of adaptive DBS protocols for epilepsy treatment.
Keywords: Deep brain stimulation, Epilepsy, Manifold embedding, Modeling, Neural network dynamics
1. Introduction
It is well established that electrical stimulation can manipulate the dynamics of the neuronal activity generated by brain slices maintained in vitro to reduce the frequency, duration, or amplitude of ictal-like discharges (Bawin et al., 1986; Nakagawa and Durand, 1991; Durand, 1993; Durand and Warman, 1994; Schiff et al., 1994; Jerger and Schiff, 1995; Gluckman et al., 1996; Barbarosie and Avoli, 1997; Warren and Durand, 1998; D’Arcangelo et al., 2005). In particular, fixed-frequency stimulation, also known as periodic pacing, has achieved robust suppression of epileptiform activity in a diversity of in vitro chemical models of ictogenesis, e.g., high-K+ (Jerger and Schiff, 1995), low-Mg2+ (Schiff et al., 1994), and 4-aminopyridine (4AP) (Barbarosie and Avoli, 1997; D’Arcangelo et al., 2005). The next logical step beyond periodic pacing is the deployment of an adaptive control system that intelligently selects the timing of stimulations to maximize the suppression of ictal-like discharges while minimizing current pulses. Such intelligent control systems rely on accurate knowledge of causal relationships in the dynamics of the system of interest. Therefore, they are feasible only to the degree with which the modulating effects of electrical stimulation can be accurately modeled and predicted. However, a complete understanding of the dynamics underlying the response of neuronal networks to electrical stimulation is still lacking.
Predictive models must accurately capture both the system’s state (a numerical representation of the system’s current behavior) and its transition function (a mapping from the current state and control action, e.g., stimulation, onto the future state). The state and transition function can be modeled by one of two distinct approaches. The first principles approach builds up the model from neurophysiology theory. The evidence-based approach builds up the model from abstract rules that best describe observations of the real-world system. With respect to neural network dynamics of in vitro epileptiform activity under periodic pacing, only first principles models have been studied (see for example Biswal and Dasgupta, 2002; Franaszczuk et al., 2003). However, two weaknesses make them poorly suited for use in the construction and/or validation of neural control systems. First, they are unable to quantitatively reproduce the efficacy of dynamic modulation of neuronal activity as a function of stimulation frequency with the accuracy necessary to make effective control decisions. Second, they provide no mapping between the model’s state space and real-world observations.
Here, we describe an evidence-based modeling approach that combines non-linear dynamic analysis (time-delay embeddings) to identify the neural system’s state with machine learning (nearest neighbor methods) to approximate the neural system’s transition function. Directly from the data, this approach (1) simulates the occurrence of epileptiform activity in vitro at time-scales of single and multiple ictal discharges, (2) simulates modulation of epileptiform activity in response to periodic pacing protocols, (3) defines a mapping from real-world observations to model state (allowing the model to act as a stand-alone predictor or as part of a control system), and (4) defines a mapping from model state to real-world observations (allowing the model to act as a simulator). We quantitatively validate the model’s predictive accuracy under multiple stimulation protocols by first-order and higher-order statistical comparison between dynamics of the model and of a previously unseen dataset. We also compare the model’s simulated field potentials directly against previously unseen real-world field potential recordings as qualitative, visual evidence of the model’s efficacy in describing neural network dynamics of in vitro epileptiform activity under periodic pacing.
2. Materials and methods
2.1. Theory
The state of a dynamic system is a real-valued vector that uniquely determines the future state of the system according to the system's rules. These rules are termed the transition function. Models of real-world dynamic systems are founded on accurate identification of the system's underlying state and transition function. With regard to neuronal networks studied in vitro, obstacles to system identification are numerous: dynamic variance, non-stationarity, limited exploration, noise, transients, and limited knowledge of first principles. The most challenging obstacle, however, is partial observability: not all of the components required to uniquely quantify the neuronal network's state can be directly observed. Indeed, network activity generated by brain slices maintained in vitro contains millions of hypothetically measurable parameters, yet it can only be observed through a small number of field potentials.
Nonlinear dynamic systems theory provides a means of reconstructing complete state observability from partial observability via the method of delayed embeddings, formalized by Takens' Theorem (Takens, 1980). Here we present the key points of Takens' Theorem for a deterministically forced system, utilizing the notation of Huke (2006).
Let the system state, s, be an M-dimensional, real-valued, bounded vector indexed by time, t, and let a be a real-valued control action. Consider the state update function, f, a deterministic function:
s(t + 1) = f(s(t), a(t))    (1)
Let the action choice a(t) be set by a deterministic control function, π:
a(t) = π(s(t))    (2)
then Eq. (2) may be composed with Eq. (1):
s(t + 1) = f(s(t), π(s(t))) ≡ ϕ(s(t))    (3)
where ϕ specifies the discrete time evolution of the forced system. If ϕ is a smooth map, ϕ : ℝM → ℝM, and the system is observed through partial state, s̃, via function, y, such that:

s̃(t) = y(s(t))

and y : ℝM → ℝ, then if ϕ−1 exists, and ϕ, ϕ−1, and y are continuously differentiable, we may apply Takens' Theorem (Takens, 1980) to reconstruct the complete state space of the observed system. Thus, for each s̃(t), we can construct a vector sE(t),

sE(t) = [s̃(t), s̃(t − τ), s̃(t − 2τ), …, s̃(t − (E − 1)τ)]

for a fixed time delay, τ, such that sE lies on a subset of ℝE which is an embedding of s. Because embeddings preserve the connectivity of the original vector-space, in the context of system dynamics, a mapping ψ,
sE(t + 1) = ψ(sE(t))    (4)
may be substituted for ϕ (Eq. (3)), and vectors sE(t) may be substituted for vectors s(t) without loss of generality.
The properties of a correctly constructed embedding vector, sE, guarantee that we can reconstruct the complete state of a large class of partially observable dynamical systems directly from a sequence of their observations and that this reconstruction may occur independently of identification of the transition function, ψ. We exploit both of these facts in our methodology to use simple but powerful semi-parametric modeling methods, which differentiates our approach from important earlier approaches to modeling forced physiological systems using parametric functions of Volterra–Wiener kernels (Marmarelis and Orme, 1993; Marmarelis, 1997; Marmarelis et al., 1999).
Takens' Theorem does not define how to compute the embedding dimension, E, of arbitrary sequences of observations, nor does it provide a test to determine whether the theorem is applicable to a specific dataset. In practice the intrinsic dimension, M, of a system is often unknown. Finding high-quality embedding parameters for challenging domains, such as chaotic or noise-corrupted nonlinear signals, occupies much of the fields of subspace identification and nonlinear dynamic analysis. Numerous methods of note exist, drawn from both disciplines (Galka, 2000; Kennel and Abarbanel, 2002; Katayama, 2005). We build upon a spectral approach (Galka, 2000) premised on the singular value decomposition (SVD). It has the advantage of being non-parametric, computationally efficient, and robust to additive noise, all of which are useful in practical application.
We summarize the spectral parameter selection algorithm as follows. Given a sequence of state observations, s̃, of length S̃, we choose a sufficiently large embedding dimension, Ê; here, sufficiently large means a dimension certain to be greater than twice the dimension in which the actual state space resides. For each embedding window size, tmin ∈ {Ê, …, S̃}, we define a matrix SÊ(tmin) having row vectors, sÊ(t), t ∈ {tmin, …, S̃}, constructed according to
sÊ(t) = [s̃(t), s̃(t − τ), s̃(t − 2τ), …, s̃(t − (Ê − 1)τ)]    (5)
where τ = tmin/(Ê − 1). We compute the SVD of the matrix SÊ(tmin), and we record the vector of singular values, σ(tmin). Embedding parameters of s̃ are found by analysis of the sequence of second singular values, σ2(tmin), tmin ∈ {Ê, …, S̃}. The value of tmin at the first local maximum of this sequence is the approximate embedding window size, Tmin. The approximate embedding dimension, E, is the number of non-trivial singular values of σ(Tmin), where we define non-trivial as a value greater than the long-term trend of σ(tmin) with respect to σ(Tmin), tmin ≫ Tmin.
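As an illustration, the following Python sketch implements the spectral parameter-selection procedure described above under simplifying assumptions; the function names (delay_matrix, spectral_parameters), the integer rounding of the delays, the grid of candidate window sizes, and the trend estimate taken over the largest windows are our choices and are not prescribed by the text.

```python
import numpy as np

def delay_matrix(obs, E_hat, t_min):
    """Rows are the embedded observations of Eq. (5):
    [obs(t), obs(t - tau), ..., obs(t - (E_hat - 1) * tau)], with tau = t_min / (E_hat - 1)."""
    obs = np.asarray(obs, dtype=float)
    lags = np.round(np.arange(E_hat) * t_min / (E_hat - 1)).astype(int)
    return np.array([obs[t - lags] for t in range(t_min, len(obs))])

def spectral_parameters(obs, E_hat=15, window_sizes=range(20, 1000, 10)):
    """Scan embedding window sizes, record the singular-value spectra, and return
    (T_min, E): the window at the first local maximum of sigma_2 and the number of
    singular values at T_min that exceed the long-term trend of the spectrum."""
    spectra = {w: np.linalg.svd(delay_matrix(obs, E_hat, w), compute_uv=False)
               for w in window_sizes}
    sigma2 = np.array([spectra[w][1] for w in window_sizes])
    # first local maximum of the sigma_2 sequence
    i = next(k for k in range(1, len(sigma2) - 1)
             if sigma2[k] >= sigma2[k - 1] and sigma2[k] > sigma2[k + 1])
    T_min = list(window_sizes)[i]
    # long-term trend: mean of the non-leading singular values at the largest windows
    trend = np.mean([spectra[w][1:] for w in list(window_sizes)[-10:]])
    E = int(np.sum(spectra[T_min] > trend))
    return T_min, E
```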
The spectral method provides a mechanism for efficiently projecting into, and out of, the E-dimensional state space. These projections exist because the SVD returns a decomposition of a matrix, X, as X = UΣVᵀ. It is well known that matrix V defines an orthonormal basis of the covariance matrix, XᵀX, with vectors ordered by the corresponding singular values, diag(Σ). Thus, VÊ[1, …, E](Tmin), i.e., the first E columns of matrix V returned by the SVD of the embedding SÊ(Tmin), form a basis, renamed VE, of a low-dimensional manifold, SE, which maximally captures the variance of the embedded observations. Therefore, we make a change of coordinates such that VESE = SÊ(Tmin). We achieve this by computing the Moore–Penrose pseudoinverse, VE⁺, of VE and left-multiplying such that SE = VE⁺SÊ(Tmin). SE is the model's state space.
The matrix VE⁺ defines the projection from an Ê-dimensional vector of embedded observations, given by Eq. (5), into the reconstructed, reduced E-dimensional state space, SE. VE defines the opposite projection (from the E-dimensional state space to the space of embedded observations). Thus, any numerical system state can be mapped, without loss of generality, to an approximate sequence of observations. Likewise, sequences of observations can be mapped to a numerical state.
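Continuing the sketch above, the change of coordinates can be written compactly with NumPy; here VE is taken as the first E right-singular vectors and, because its columns are orthonormal, its Moore–Penrose pseudoinverse is simply its transpose. Variable names (S_hat, V_E, S_E) are ours.

```python
import numpy as np

# SVD of the embedding at the selected window size (rows are embedded observations).
S_hat = delay_matrix(obs, E_hat, T_min)
U, Sigma, Vt = np.linalg.svd(S_hat, full_matrices=False)

V_E = Vt[:E].T                    # basis of the reduced manifold (E_hat x E)
V_E_pinv = np.linalg.pinv(V_E)    # Moore-Penrose pseudoinverse; equals V_E.T here

S_E = S_hat @ V_E                 # project embedded observations into the E-dimensional state space
S_hat_approx = S_E @ V_E.T        # map model states back to approximate embedded observations
```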
The preservation of locality and dynamics afforded by such an embedding allows approximation of the underlying dynamic system. To model these dynamics we assume that the derivative in the region surrounding each state is well approximated by the derivative at the state itself, a nearest-neighbors derivative (Parlitz and Merkwirth, 2000). We simulate trajectories as iterative numerical integration of the local state and gradient. We define the model and integration process formally.
Consider a dataset 𝒟 of observations, s̃(t), t ∈ {1, …, S̃}. Applying the spectral embedding method to 𝒟 yields parameters E and Tmin. Embedding 𝒟 according to Eq. (5) and projecting into state space via yields a sequence of vectors sE(t) in ℝE indexed by t ∈ {Tmin, …, S̃}. A model, ℳ, of 𝒟 is the set of vectors, m(t) = sE(t), t ∈ {Tmin, …, S̃}.
Consider a state vector x(i) in ℝE indexed by simulation time, i. To numerically integrate this state we define the gradient according to our definition of locality: nearest neighbor. The model's nearest neighbor of x(i), denoted m(tx(i)), is defined as¹

m(tx(i)) = argmin_{m(t) ∈ ℳ} ‖x(i) − m(t)‖    (6)
The model gradient and numerical integration are defined, respectively, as,
∇m(tx(i)) = m(tx(i) + 1) − m(tx(i))    (7)
and
x(i + 1) = x(i) + ∇m(tx(i)) + η    (8)
where η is a vector of noise. Applying Eqs. (6)–(8) iteratively simulates a trajectory of the underlying system, termed a surrogate dataset, 𝒟̃, or simply a surrogate. Surrogates are initialized from some state, x(0). Eq. (8) includes the noise term because dataset 𝒟 contains noise, which biases the derivative estimate in ℝE via Eq. (5). In practice, a small amount of additive noise facilitates generalization. Note that our technique does not preclude the use of non-local function approximation, but here we assume a sufficient density of data exists to reconstruct the embedded state space with minimal bias.
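A minimal sketch of the numerical integration defined by Eqs. (6)–(8), assuming the model states are stored as rows of an array; the brute-force nearest-neighbor search and the Gaussian form of the noise term η are our simplifications.

```python
import numpy as np

def simulate_surrogate(model_states, x0, n_steps, eta_scale, rng=np.random.default_rng(0)):
    """Iterate Eqs. (6)-(8): find the model's nearest neighbor of the current state,
    take that neighbor's one-step difference as the local gradient, and add it
    (plus a small Gaussian noise term) to the current state."""
    M = np.asarray(model_states, dtype=float)   # rows: m(t) = s_E(t), t = T_min .. S
    x = np.asarray(x0, dtype=float)
    traj = [x]
    for _ in range(n_steps):
        t_x = np.argmin(np.linalg.norm(M[:-1] - x, axis=1))        # Eq. (6); exclude last state
        grad = M[t_x + 1] - M[t_x]                                  # Eq. (7)
        x = x + grad + rng.normal(scale=eta_scale, size=x.shape)    # Eq. (8)
        traj.append(x)
    return np.asarray(traj)
```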
As defined, the model generates surrogates that mimic real-world neural network dynamics without stimulation (experimental control condition). We extend this model to account for electrical stimulation effects as follows. Let dataset 𝒟 also include sequence a, comprised of actions, a(t), t ∈ {1, …, S̃}. We now define the model, ℳ, of 𝒟 to be the set of 2-tuples, m(t) = {sE(t), a(t)}, t ∈ {Tmin, …, S̃}, and we add the operation 𝒵(m(t)) ≡ [sE(t), ωa(t)], where ω is a parameter which scales the action dimension relative to the model's state space. In this formulation, we redefine the nearest neighbor to be conditioned on the action input to the model, a(i):
m(tx(i)) = argmin_{m(t) ∈ ℳ} ‖[x(i), ωa(i)] − 𝒵(m(t))‖    (9)
Using Eq. (9), numerical integration of the model can generate surrogates for arbitrary stimulation protocols.
We extend the model definition further by introducing the notion of a labeled state space. Let the dataset 𝒟 also include the sequence l, comprised of labels l(t), t ∈ {1, …, S̃}. We define the model, ℳ, of 𝒟 to be the set of 3-tuples, m(t) = {sE(t), a(t), l(t)}, t ∈ {Tmin, …, S̃}, and we add the operation ℒ(m(t)) ≡ l(t), which extracts the label assigned to model state, m(t). This extension provides the necessary mathematical tools for analysis of the model’s simulation performance described in Eq. (11).
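The action-conditioned nearest neighbor of Eq. (9) and the label operation ℒ can be sketched in the same style; the interface (a per-step action sequence a_seq, scalar ω, and noise scale) is illustrative rather than the authors' implementation.

```python
import numpy as np

def simulate_forced_surrogate(states, actions, labels, x0, a_seq, omega, eta_scale,
                              rng=np.random.default_rng(0)):
    """Action-conditioned surrogate generation: the nearest neighbor (Eq. (9)) is
    found in the joint space Z(m(t)) = [s_E(t), omega * a(t)], and each simulated
    state inherits the label of its nearest model state (the L operation)."""
    S = np.asarray(states, dtype=float)                          # s_E(t), rows
    Z = np.hstack([S, omega * np.asarray(actions, dtype=float)[:, None]])
    x, traj, labs = np.asarray(x0, dtype=float), [], []
    for a_i in a_seq:                                            # one action per simulation step
        q = np.hstack([x, omega * a_i])
        t_x = np.argmin(np.linalg.norm(Z[:-1] - q, axis=1))      # Eq. (9); exclude last state
        labs.append(labels[t_x])                                 # L(m(t_x))
        x = x + (S[t_x + 1] - S[t_x]) + rng.normal(scale=eta_scale, size=x.shape)
        traj.append(x)
    return np.asarray(traj), np.asarray(labs)
```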
2.2. Calculation
To construct and validate the evidence-based modeling approach using real-world observations, we collect a dataset, 𝒟, containing field potential recordings under each stimulation protocol, ρ, from the set of all protocols, P. For each observation, s̃(t), we assign the label, l(t), and the control action, c(t), as either idle = 0 or stimulation = 1. We then perform spectral embedding (cf. Section 2.1) to identify the parameters E and Tmin. For each observation, we define the time elapsed since the last c = 1, termed time-since-stimulation or tss, and the maximum tss, denoted tmax, as the largest interval between stimulations in the dataset. We clip the data such that if tss(t) > tmax then tss(t) = tmax (i.e., the tss of the control protocol is set to tmax) and then normalize tss by tmax. Finally, we define the model action, a(t) = tss(t). We then partition the dataset into a disjoint training dataset, 𝒟trn, and test dataset, 𝒟tst, ensuring that both datasets contain approximately the same distribution of stimulation protocols. Assuming that N unique partitions are possible, we term this N-fold cross-validation and we define the set of cross-validation tests, 𝒩 ≡ {1, …, N}. For all datasets 𝒟trnⁿ and 𝒟tstⁿ, n ∈ 𝒩, we embed the datasets via parameters E and Tmin and compute the projections VE and VE⁺. We then compute the reduced-dimensional states via the change of coordinates and combine the states with the corresponding actions and labels to form the training models, ℳtrnⁿ, and testing models, ℳtstⁿ, n ∈ 𝒩.
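For concreteness, the conversion of the binary stimulation indicator c(t) into the normalized time-since-stimulation action a(t) described above might be written as follows; the function name and the convention that tss equals tmax before the first pulse are our assumptions.

```python
import numpy as np

def time_since_stimulation(c, t_max_samples):
    """Convert the binary stimulation indicator c(t) into the model action
    a(t) = tss(t): samples elapsed since the last pulse, clipped at t_max
    and normalized to [0, 1]. Before the first pulse, tss is set to t_max
    (the control-condition convention assumed here)."""
    tss = np.empty(len(c), dtype=float)
    last_pulse = -t_max_samples          # so that tss = t_max until the first pulse
    for t, pulse in enumerate(c):
        if pulse:
            last_pulse = t
        tss[t] = min(t - last_pulse, t_max_samples)
    return tss / t_max_samples
```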
We define measurements, Θ, and we measure each training dataset, 𝒟trnⁿ, n ∈ 𝒩, yielding the results vector, Θ(𝒟trnⁿ), n ∈ 𝒩. These measurements summarize important characteristics of the system that should be reproduced by the model. As an example, measurements of stimulated epileptiform activity could include the efficacy of various fixed-frequency policies in suppressing ictal-like discharges. For a wide range of parameters η and ω, we numerically integrate ℳtrnⁿ, forming surrogates, 𝒟̃trnⁿ, and then measure the surrogates, Θ(𝒟̃trnⁿ). The best training model parameters, for each training dataset, are defined as:
[ηn, ωn] = argmin_{η, ω} ‖Θ(𝒟trnⁿ) − Θ(𝒟̃trnⁿ(η, ω))‖    (10)
This minimization over parameters ηn and ωn maximizes, according to the measurement Θ, the likelihood that dataset 𝒟trnⁿ would be observed by simulation of model ℳtrnⁿ.
Using parameters ηn and ωn, we numerically integrate the test models ℳtstⁿ, n ∈ 𝒩, according to Eqs. (6)–(8), yielding test surrogates, 𝒟̃tstⁿ, n ∈ 𝒩: these are the model's predictions. We report predictions as distributions of N means of K randomly initialized trials executed on each test model. Each prediction, therefore, requires the generation of K × N surrogates.
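A sketch of the parameter selection of Eq. (10) as a simple grid search; measure is a placeholder callable that simulates K surrogates from a training model under a given protocol and returns the mean measurement, and is not part of the original text.

```python
import numpy as np
from itertools import product

def fit_integration_parameters(measure, train_model, train_stats,
                               eta_grid, omega_grid, protocols, K=30):
    """Grid search of Eq. (10): choose (eta, omega) minimizing the distance between
    the measurements of the training dataset (train_stats, one value per protocol)
    and the corresponding measurements of surrogates simulated from the training model."""
    best_params, best_err = None, np.inf
    for eta, omega in product(eta_grid, omega_grid):
        surrogate_stats = np.array([measure(train_model, eta, omega, p, K)
                                    for p in protocols])
        err = np.linalg.norm(np.asarray(train_stats) - surrogate_stats)
        if err < best_err:
            best_params, best_err = (eta, omega), err
    return best_params
```

With the grids used in Section 2.3 (η sampled at 0.0001 over [0, 0.001] and ω at 0.05 over [0, 1]), this search evaluates 11 × 21 = 231 parameter pairs per training model.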
2.3. Experimental
Male, adult Sprague–Dawley rats (250–300 g) were decapitated under deep isoflurane anesthesia. The brain was quickly removed and placed in cold (0–2 °C) artificial cerebro-spinal fluid (ACSF) of the following composition (mM): 124 NaCl, 2 KCl, 2 MgSO4, 2 CaCl2, 1.25 KH2PO4, 26 NaHCO3 and 10 D-glucose, continuously bubbled with a gas mixture (5% CO2–95% O2) to equilibrate at pH ~7.35. Partially disconnected combined hippocampus-entorhinal cortex (EC) slices (450 μm thick) were cut as previously described (Panuccio et al., 2010) using a VT1000S vibratome (Leica, Germany). In these brain slices, which included the most ventral part of the hippocampal formation, fast CA3-driven interictal-like activity disclosed by 4AP bath-application was restrained to the hippocampus proper and did not propagate to the EC (cf. Avoli et al., 1996; but see also Avoli et al., 2002). Slices were then transferred to an interface recording chamber, where they lay between warm (~32 °C) ACSF and humidified gas (5% CO2–95% O2) and were allowed to recover for ≥1 h before beginning continuous bath-application of 4AP (~1 ml/min). Chemicals were acquired from Sigma–Aldrich Canada, Ltd. (Oakville, Ontario, Canada). All procedures were carried out in accordance with the Canadian Council on Animal Care and McGill University guidelines. Field potential recordings were performed with ACSF-filled pipettes (tip diameter <10 μm; resistance <10 MΩ) pulled from borosilicate capillary tubing (World Precision Instruments Inc., Sarasota, FL, USA) using a P-97 puller (Sutter Instrument, Novato, CA, USA). Extracellular signals were fed to a Cyberamp 380 amplifier (Molecular Devices, Palo Alto, CA, USA) connected to a digital interface device (Digidata 1320A, Molecular Devices). Data were acquired at a sampling rate of 5 kHz and low-pass filtered at 2 kHz using the software Clampex 8.2 (Molecular Devices), stored on the hard drive, and analyzed off-line. Recording electrodes were placed in the EC deep layers and in the subiculum. Extracellular current pulses (100–250 μA, pulse width 100 μs) were delivered in the subiculum through a bipolar concentric Pt–Ir electrode (FHC, Bowdoin, ME, USA) plugged onto a high-voltage stimulus isolator unit (A360, WPI Inc., Sarasota, FL, USA) connected to the pulse generator Pulsemaster A300 (WPI Inc., Sarasota, FL, USA). Stimulus intensity was established prior to beginning the experimental protocols in order to reliably induce an interictal-like event in the EC. The following periodic pacing protocols were implemented: 0.2 Hz, 0.5 Hz, 1 Hz, and 2 Hz. Each stimulation phase was preceded by a control period and followed by a post-stimulation recovery period, which served as the control recording for the following stimulation protocol. Recordings were continued until at least 4 ictal-like discharges were generated (control and ineffective stimulation protocols) or for a period of ≥3 times the previously observed interval between ictal-like discharges (effective stimulation).
We assigned a label to each data point of the recordings: ictal = 1 and non-ictal = 0; the latter includes both baseline and interictal-like discharges. We also assigned a time-since-stimulation value, tss, where tmax, the maximum interval between pulses, is 5.0 s, corresponding to the 0.2 Hz protocol. We low-pass filtered (50 Hz) the recordings and downsampled them to 500 Hz. We then centered the data with a filter such that baseline data points had a mean potential of 0.0 mV. We combined recordings from 5 brain slices to form a dataset of 30,421 s, including 46 ictal-like discharges. We partitioned the dataset into cross-validation training datasets, 𝒟trnⁿ, and test datasets, 𝒟tstⁿ, n ∈ 𝒩, 𝒩 ≡ {1, …, 6}, comprised of data from 3 and 2 in vitro experiments, respectively (6 partitions constructed from the 5 slice recordings ensured that each training and testing pair contained an approximately balanced composition of control and stimulation protocols). We formed models, ℳtrnⁿ and ℳtstⁿ, n ∈ 𝒩, from each of these datasets. A visual example of the embedding spectrum of one training dataset is shown in Fig. 1a. Here, by the methods described in Sections 2.1 and 2.2, we computed embedding parameters E = 3 and Tmin = 0.5 s. A projection of the data, given by Fig. 1b and c, illustrates the geometry of the embedding as well as the relative positions of non-ictal and ictal data points.
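The preprocessing described above (50 Hz low-pass filtering, downsampling from 5 kHz to 500 Hz, and baseline centering) could be approximated with SciPy as follows; the filter order and the median-based centering are our assumptions, not details taken from the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(field_potential, fs_in=5000, fs_out=500, cutoff_hz=50.0):
    """Low-pass filter the raw recording at cutoff_hz, downsample it from fs_in to
    fs_out, and subtract a baseline estimate so that inter-discharge segments sit
    near 0.0 mV. Filter order and median-based centering are illustrative choices."""
    b, a = butter(4, cutoff_hz / (fs_in / 2.0), btype="low")
    filtered = filtfilt(b, a, np.asarray(field_potential, dtype=float))
    downsampled = filtered[:: fs_in // fs_out]       # 5 kHz -> 500 Hz (factor of 10)
    return downsampled - np.median(downsampled)      # crude baseline centering
```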
Fig. 1.
(a) Embedding spectrum assuming Ê = 15; Ê was determined empirically to be sufficiently large to contain all the observable dynamics (the magnitudes of the additional singular values at greater Ê are indistinguishable within a small constant). The maximum of σ2 occurred at approximately Tmin = 0.5 s. The mean and standard deviation of the spectra of singular values (σ2–σ15 over 0.0–2.0 s) are presented as bold dashed (mean) and dashed (±std. dev.) horizontal lines. Singular values 1–3 fall above the first standard deviation of this trend at Tmin = 0.50 s, which indicates that the intrinsic dimension of the embedded system is E = 3. (b) and (c) are examples of a real training dataset after embedding and projection into the model's state space; the dataset is viewed along the 1st and 2nd as well as the 2nd and 3rd principal axes of VE, respectively.
A model of neural network dynamics of in vitro epileptiform activity under periodic pacing should reproduce modulation of ictal-like discharges in response to stimulation. To enforce these characteristics in the model, we defined the fraction of ictal-like discharges to be μ,
μ = (1/S̃) ∑_{t=1}^{S̃} l(t)    (11)
as the measurement, Θ, to be optimized during selection of the numerical integration parameters, η and ω.
Thus, for each ℳtrnⁿ and for each pair of parameters η and ω drawn from the set of pairs defined by the Cartesian product of the ranges η = [0.0, 0.001], sampled at intervals of 0.0001, and ω = [0.0, 1.0], sampled at intervals of 0.05, we simulated K = 30 surrogates of 1800 s length for each protocol, ρ ∈ P = {control, 0.2 Hz, 0.5 Hz, 1.0 Hz, 2.0 Hz}. We computed Θ(𝒟̃trnⁿ) by averaging all K values of μ for each stimulation protocol. We computed similar measurements of the training datasets, and we solved for the parameter pair ηn and ωn that yielded the minimum error according to Eq. (10). Once found, parameters ηn and ωn remained unchanged throughout the experiments.
3. Results
3.1. State validation
Embedding theory implies that a single E-dimensional state of the model captures the dynamic information contained in a sequence of historical observations. In the context of modeling neural network dynamics of in vitro epileptiform activity under periodic pacing, the dynamic information is the state’s label. If the distribution of labels in the model’s state space (see Fig. 1b and c) is both accurate and generalizable, then test data projected into the model’s state space will have similar structure and distribution. Adherence of the model to embedding theory, therefore, can be measured experimentally by using the model to predict the labels of previously unseen observations.
For each ℳtrnⁿ and 𝒟tstⁿ, n ∈ 𝒩, we embedded dataset 𝒟tstⁿ using the model's E and Tmin parameters and mapped these data into the state space using the model's projection, VE⁺. For each element in this projection, we applied the ℒ operation to the result of Eq. (6); the resulting label is the model's prediction, which we compared against the actual label of the corresponding element of 𝒟tstⁿ.
To assess the structural quality of our embedding we computed the performance of each training model in classifying the corresponding test dataset. We measured performance in terms of positive likelihood ratios, LR+, of ictal classification (Hogg et al., 2004). Likelihood ratios measure how much the odds of a positive event change in response to a classifier's positive prediction of that event. A random classifier yields LR+ = 1.0, indicating that the prediction provides zero information. For an ideal classifier, LR+ approaches ∞. Our classifier (i.e., model) achieved LR+ = 9.33 (2.37–16.28, 95% confidence limits), indicating that a positive (ictal) classification increases the odds that a point is actually ictal by a factor of 9.33.
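For reference, LR+ and an approximate 95% confidence interval can be computed from the pooled confusion counts as follows; the log-scale standard-error formula is the standard one and is not necessarily the exact procedure used here.

```python
import numpy as np

def positive_likelihood_ratio(true_labels, predicted_labels):
    """LR+ = sensitivity / (1 - specificity) for binary ictal (= 1) classification,
    with an approximate 95% confidence interval computed on the log scale."""
    y = np.asarray(true_labels)
    p = np.asarray(predicted_labels)
    tp = np.sum((p == 1) & (y == 1))
    fp = np.sum((p == 1) & (y == 0))
    fn = np.sum((p == 0) & (y == 1))
    tn = np.sum((p == 0) & (y == 0))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1.0 - specificity)
    se_log = np.sqrt(1/tp - 1/(tp + fn) + 1/fp - 1/(fp + tn))   # standard error of ln(LR+)
    ci_low, ci_high = np.exp(np.log(lr_pos) + np.array([-1.96, 1.96]) * se_log)
    return lr_pos, (ci_low, ci_high)
```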
3.2. Transition function validation
Embedding theory also implies that transitions between model states are equivalent to transitions in the underlying dynamic system. Thus, state sequences, and hence label sequences, of both real and surrogate datasets should be similar. We compared similarities between sequences of labels in the test datasets and the test surrogates at two time-scales: macroscopic (multiple ictal discharges) and mesoscopic (single ictal discharge).
To validate macroscopic temporal predictions of the model we compared the distributions of fractions of ictal labels (see Eq. 11) between the test datasets and test surrogates for each stimulation protocol. As illustrated in Fig. 2a, the model accurately captures the non-linear functional relationship between the stimulation policy and the fraction of ictal labels observed (i.e., the model accurately reproduces the degree of control of ictal-like discharges in response to stimulation). To validate mesoscopic temporal predictions, we compared the distributions of ictal-like discharge duration (Fig. 2b) and interval (Fig. 2c) between the test datasets and test surrogates for each stimulation protocol. Our results confirm that the model is faithfully capturing higher-order, non-linear functional relationships between the stimulation policy and the dynamic response of stimulated neural networks.
Fig. 2.
(a) Comparison of the fraction of ictal-like discharges observed as a function of stimulation protocol between test datasets and test surrogates. Bars summarize the means of the distributions of the cross-validation experiments and error bars indicate the 95% confidence limits. Numbers above the error bars indicate the number of cross-validation experiments comprising the distribution. An asterisk (*) indicates that the distribution has zero mean and variance and cannot be plotted. (b) Comparison of ictal-like discharge durations as a function of stimulation protocol between test datasets and test surrogates. (c) Comparison of ictal-like discharge intervals as a function of stimulation protocol between test datasets and test surrogates. To construct these comparisons we generated K = 30 randomly initialized surrogates of 1800 s length for each of the N = 6 test models, ℳtstⁿ, n ∈ 𝒩, for each stimulation protocol. For each surrogate we computed the fraction of ictal labels, according to Eq. (11), as well as the duration and interval of ictal-like discharges. We separated the data by stimulation protocol and averaged together the K quantities (i.e., fractions of ictal labels or ictal-like discharge durations and intervals) computed for each protocol on each test model, yielding N = 6 quantities for each protocol. We computed the analogous N quantities of the test datasets for each stimulation protocol.
These results do, however, illustrate a difficulty of fairly evaluating the model and real data using cross-validation. When ictal samples are sparsely distributed in the dataset, as is the case for periodic pacing results in both the test dataset and the test surrogate, the impact of outliers can be amplified. Only a single ictal discharge interval exists in the test dataset under the 0.2 Hz protocol. However, due to the 6-fold partitioning of the dataset as part of cross-validation of the in vitro experiments, this single ictal discharge interval appears three times, producing a large mean value with zero variance. A similar artifact occurs for the test surrogate at 1.0 Hz periodic pacing. Only one of the six cross-validation models produced a mean ictal discharge duration. This single value, likely caused by noise in the model, dominates the mean.
3.3. Generation of surrogate data
The spectral embedding method yields a projection, VE, from the model’s state space to a vector of observations (given by Eq. 5). The first element of the resultant vector is the simulated observation of the field potential corresponding to the model’s current state. During numerical integration of the model, each state may be projected onto such a field potential; the resulting sequence is a surrogate field potential recording. Fig. 3 illustrates examples of test surrogate field potentials alongside examples of field potential recordings for each of the five stimulation protocols.
Fig. 3.
Comparison of selected segments of surrogate field potential traces versus selected segments of the test dataset that share similar dynamics. Dots indicate the timing of stimulations. Horizontal black lines indicate ictal labels. The test dataset labels were assigned by humans whereas the test surrogate labels were assigned automatically by the model. No effort was made to identify traces that are visually similar. Rather, visual similarity between the dynamic structure of the surrogates and datasets was observed to be a general attribute of the evidence-based modeling approach.
The time correspondence between the test dataset and surrogate (which is, in general, very good) arises from structural consistencies in the dataset that are exploited by the evidence-based modeling approach. However, excellent temporal reproduction does not imply the ability to accurately predict specific trajectories, particularly, future epileptiform discharges. The presence of noise and the nonlinear (potentially chaotic) nature of this dynamical system make prediction of specific closed-loop trajectories difficult; previous work in dynamic modeling of epileptiform systems suggests that such predictions may be infeasible (Lopes da Silva et al., 2003).
Another visual aspect of the plots in Fig. 3 is the degradation of temporal detail reproduced in the surrogate as the frequency of stimulation increases. This observation stands in contrast to the agreement between the test surrogate and test dataset results depicted in Fig. 2a across stimulation frequencies. Because the spectral embedding method extracts embeddings that maximally capture dataset variance, we speculate that model fidelity is preserved in high-variance regimes of the dataset (i.e., lower-frequency stimulation and control) at the expense of low-variance regimes (i.e., higher-frequency stimulation), which is supported by the visual evidence.
4. Discussion
We proposed and validated an evidence-based model that faithfully reproduces epileptiform discharges as well as the effects of low-frequency periodic pacing on ictal-like activity generated by a brain slice preparation in vitro. Our use of cross-validation for reporting the model performance is a well-known machine learning technique for identifying how model predictions will generalize to previously unseen data. The technique succeeds because it approximates an unbiased estimate of the actual distribution to which the model is being fit (Kohavi, 1995).
The key physiological insight to be drawn from successful application of the evidence-based approach to this dataset is its ability to validate the existence of a low-dimensional manifold that captures canonical dynamics of a complex neural circuit governed by millions of variables. Cross-validated simulations using this manifold confirm, topologically, the robustness of neural dynamic modulation during epileptiform activity in response to fixed, low-frequency electrical stimulation that has been reported in the literature. Further, we know of no other computational modeling approach that is capable of generating surrogate data featuring: (1) spontaneous ictal-like discharges having frequency, duration, amplitude, and higher-order dynamics similar to that of real-world observations, (2) spontaneous interictal-like discharges that do not lead to ictal-like events, and (3) post-ictal depression.
The versatility of this approach, however, comes at a cost: evidence-based models are limited in the testable hypotheses that they can inform. We consider this approach most appropriate for hypotheses in which accurate prediction of causal relationships is paramount to understanding the physiological roots of these relationships.
One can easily imagine a large set of testable hypotheses that require designing and implementing neural control systems, be it automated exploration of neural dynamics or treatment of neurological diseases (Sun et al., 2008; Jahangiri et al., 1997; Schiff et al., 1994; Pineau et al., 2009; Schiff and Sauer, 2008). Currently, first-principles approaches, while unlimited in the testable hypotheses that they can inform, are unsuitable for control applications because they do not model the causal relationships in real-world neural networks with the accuracy necessary to make effective control decisions; they also provide no mapping between real-world observations and the model's state, making it difficult to query the model in a specific real-world scenario.
We anticipate specific need for the evidence-based modeling approach in the fields of intelligent and adaptive control. Training algorithms for these classes of control systems rely on accurate representations of the neural system’s state and its transition function to succeed. We leave the application of the model to control systems for future work; however, the evidence-based approach does solve a key challenge posited by prior neural control research using Kalman filtering (Schiff and Sauer, 2008) in that it does not assume the form of the system’s dynamics a priori. Compared to past methods, this difference makes the evidence-based approach scalable to real-world neural control systems.
Acknowledgments
Funding for this work was provided by the Natural Sciences and Engineering Research Council of Canada (grant 311949-08), the Canadian Institutes of Health Research (grants MOP8109 and MOP97907), and the National Institutes of Health (grant R21 DA019800). GP received support from Epilepsy Canada and the Savoy Foundation.
Footnotes
1. The argmin operation returns the argument, x, at which the function, f(x), attains its global minimum.
References
- Avoli M, Barbarosie M, Lücke A, Nagao T, Lopantsev V, Köhling R. Synchronous GABA-mediated potentials and epileptiform discharges in the rat limbic system in vitro. Journal of Neuroscience. 1996;16:3912–24. doi: 10.1523/JNEUROSCI.16-12-03912.1996.
- Avoli M, D’Antuono M, Louvel J, Köhling R, Biagini G, Pumain R, et al. Network and pharmacological mechanisms leading to epileptiform synchronization in the limbic system in vitro. Progress in Neurobiology. 2002;68:167–207. doi: 10.1016/s0301-0082(02)00077-1.
- Barbarosie M, Avoli M. CA3-driven hippocampal-entorhinal loop controls rather than sustains in vitro limbic seizures. The Journal of Neuroscience. 1997;17(23):9308–14. doi: 10.1523/JNEUROSCI.17-23-09308.1997.
- Bawin S, Abu-Assal M, Sheppard A, Mahoney M, Adey W. Long-term effects of sinusoidal electric fields in penicillin-treated rat hippocampal slices. Brain Research. 1986;399:194–9. doi: 10.1016/0006-8993(86)90619-0.
- Biswal B, Dasgupta C. Neural network model for apparent deterministic chaos in spontaneously bursting hippocampal slices. Physical Review Letters. 2002;88:088102. doi: 10.1103/PhysRevLett.88.088102.
- D’Arcangelo G, Panuccio G, Tancredi V, Avoli M. Repetitive low-frequency stimulation reduces epileptiform synchronization in limbic neuronal networks. Neurobiology of Disease. 2005;19:119–28. doi: 10.1016/j.nbd.2004.11.012.
- Durand D. Ictal patterns in animal models of epilepsy. Journal of Clinical Neurophysiology. 1993;10:181–297. doi: 10.1097/00004691-199307000-00004.
- Durand D, Warman E. Desynchronization of epileptiform activity by extracellular current pulses in rat hippocampal slices. Journal of Physiology. 1994;71:2033–45. doi: 10.1113/jphysiol.1994.sp020381.
- Franaszczuk P, Kudela P, Bergey G. External excitatory stimuli can terminate bursting in neural network models. Epilepsy Research. 2003;53:65–80. doi: 10.1016/s0920-1211(02)00248-6.
- Galka A. Topics in nonlinear time series analysis: with implications for EEG analysis. World Scientific; 2000.
- Gluckman B, Neel E, Netoff T, Ditto W, Spano M, Schiff S. Electric field suppression of epileptiform activity in hippocampal slices. Journal of Neurophysiology. 1996;76:4202–5. doi: 10.1152/jn.1996.76.6.4202.
- Hogg RV, Craig A, McKean JW. Introduction to mathematical statistics. Prentice Hall; 2004.
- Huke J. Embedding nonlinear dynamical systems: a guide to Takens’ Theorem. Technical report. Manchester Institute for Mathematical Sciences, University of Manchester; March 2006.
- Jahangiri A, Durand D, Lin J. Singular stimulus parameters to annihilate spontaneous activity in Hodgkin–Huxley model with elevated potassium. Proceedings of the 19th International Conference of the IEEE Engineering in Medicine and Biology Society; 1997. pp. 165–78.
- Jerger K, Schiff S. Periodic pacing and in vitro epileptic focus. Journal of Neurophysiology. 1995;73(2):876–9. doi: 10.1152/jn.1995.73.2.876.
- Katayama T. Subspace methods for system identification. Springer; 2005.
- Kennel M, Abarbanel H. False neighbors and false strands: a reliable minimum embedding dimension algorithm. Physical Review E. 2002;66:026209. doi: 10.1103/PhysRevE.66.026209.
- Kohavi R. A study of cross-validation and bootstrap for accuracy estimation and model selection. International Joint Conference on Artificial Intelligence; 1995. pp. 1137–45.
- Lopes da Silva F, Blanes W, Kalitzin S, Parra J, Suffczynski P, Velis D. Dynamical diseases of brain systems: different routes to epileptic seizures. IEEE Transactions on Biomedical Engineering. 2003;50(5):540–8. doi: 10.1109/TBME.2003.810703.
- Marmarelis V. Modeling methodology for nonlinear physiological systems. Annals of Biomedical Engineering. 1997;25:239–51. doi: 10.1007/BF02648038.
- Marmarelis V, Juusola M, French A. Principal dynamic mode analysis of nonlinear transduction in a spider mechanoreceptor. Annals of Biomedical Engineering. 1999;27:391–402. doi: 10.1114/1.149.
- Marmarelis V, Orme M. Modeling of neural systems by use of neuronal modes. IEEE Transactions on Biomedical Engineering. 1993;40(11):1149–58. doi: 10.1109/10.245633.
- Nakagawa M, Durand D. Suppression of spontaneous epileptiform activity with applied currents. Brain Research. 1991;567:241–7. doi: 10.1016/0006-8993(91)90801-2.
- Panuccio G, D’Antuono M, de Guzman P, Lannoy LD, Biagini G, Avoli M. In vitro ictogenesis and parahippocampal networks in a rodent model of temporal lobe epilepsy. Neurobiology of Disease. 2010. doi: 10.1016/j.nbd.2010.05.003.
- Parlitz U, Merkwirth C. Prediction of spatiotemporal time series based on reconstructed local states. Physical Review Letters. 2000;84(9):1890–3. doi: 10.1103/PhysRevLett.84.1890.
- Pineau J, Guez A, Vincent R, Panuccio G, Avoli M. Treating epilepsy via adaptive neurostimulation: a reinforcement learning approach. International Journal of Neural Systems. 2009;19(4):227–40. doi: 10.1142/S0129065709001987.
- Schiff S, Jerger K, Duong D, Chang T, Spano M, Ditto W. Controlling chaos in the brain. Nature. 1994;370:615–20. doi: 10.1038/370615a0.
- Schiff S, Sauer T. Kalman filter control of a model of spatiotemporal cortical dynamics. Journal of Neural Engineering. 2008;5:1–8. doi: 10.1088/1741-2560/5/1/001.
- Sun F, Morrell M, Wharen R. Responsive cortical stimulation for the treatment of epilepsy. Neurotherapeutics: The Journal of the American Society for Experimental NeuroTherapeutics. 2008;5:68–74. doi: 10.1016/j.nurt.2007.10.069.
- Takens F. Detecting strange attractors in turbulence. In: Rand DA, Young LS, editors. Dynamical systems and turbulence, Warwick 1980. Lecture Notes in Mathematics, Vol. 898. Springer; 1980. pp. 366–81.
- Warren R, Durand D. Effects of applied currents on spontaneous epileptiform activity induced by low calcium in the rat hippocampus. Brain Research. 1998;806:1078–85. doi: 10.1016/s0006-8993(98)00723-9.



