Abstract
Quasi-stationarity is ubiquitous in complex dynamical systems. In brain dynamics, there is ample evidence that event-related potentials (ERPs) reflect such quasi-stationary states. In order to detect them from time series, several segmentation techniques have been proposed. In this study, we elaborate a recent approach for detecting quasi-stationary states as recurrence domains by means of recurrence analysis and subsequent symbolization methods. We address two pertinent problems of contemporary recurrence analysis: optimizing the size of recurrence neighbourhoods and identifying symbols from different realizations for sequence alignment. As possible solutions for these problems, we suggest a maximum entropy criterion and a Hausdorff clustering algorithm. The resulting recurrence domains for single-subject ERPs are obtained as partition cells reflecting quasi-stationary brain states.
Keywords: recurrence analysis, symbolic dynamics, electroencephalography, brain microstates, language processing
1. Introduction
The brain is an open complex system that receives a broad range of sensory inputs, carries out computations and triggers various sorts of output. The corresponding neural information is processed in time either sequentially, for example, as in low-level responses to sensory input [1–4], or in parallel in different brain areas as in most higher-level neural processes, for example, in motor action [5] and cognition [6,7]. In both sequential and parallel processes, electrophysiological recordings exhibit a sequence of quasi-stationary signal states that reflect the underlying neural processing steps [8–11]. Here, quasi-stationary states evolve on a long time scale, whereas transitions between them are much faster.
Sequential patterns of quasi-stationary states have been observed in quite different natural systems. For instance, the chaotic Lorenz attractor exhibits two quasi-stationary states (the so-called Lorenz wings), which are visited recurrently over time. The Lorenz model was one of the first nonlinear atmospheric models applied in weather forecasting [12], and it also describes the dynamics of solid-state laser systems [13]. In neural systems, recurrent temporal activity patterns are known from the replay of memory in the hippocampus [14], from bird songs [9] and from electroencephalographic (EEG) data obtained during epileptic seizures [15–17]. A more detailed analysis of the dynamics of quasi-stationary neural states reveals properties of metastable attractors, which attract in some directions of phase space and repel in others. Good examples of such metastable attractors are saddle nodes with an attracting and a repelling manifold. Hence, sequences of metastable attractors that are connected along their repelling and attracting directions could generate measurable signals of neural activity exhibiting sequences of quasi-stationary states. Mazor & Laurent [18] found such a sequence of metastable attractors in the activity of projection neurons in the insect antennal lobe [10,18], whereas Freeman & Rogers [19] revealed sequential rapid phase transitions between spatially synchronized states in EEG signals. Most recently, Hudson et al. [20] discovered metastable transition networks in the recovery from anaesthesia.
During cognitive tasks, Lehmann and co-workers [8,21–24] observed segments of quasi-stationary EEG topographies, which they called brain microstates. The EEG related to cognitive tasks exhibits a sequence of signal components and is termed event-related potential (ERP). Lehmann et al. were able to identify the extracted microstates with ERP components, which, in turn, are signal components well established in neuropsychology as markers of certain neural cognitive processes. This link between quasi-stationary signal states and ERP components corroborates the working hypothesis of this study: ERP components reflect quasi-stable attractor states in brain dynamics [25–28]. This hypothesis implies that the brain performs information processing in a sequence of steps [29] that is reflected in quasi-stationary signal states. Therefore, improving the detection of such states in neurophysiological data promises to advance the study of these states, their neural origin and, ultimately, their meaning in neural information processing.
For the detection of quasi-stationary states from real-world data, several methods have been proposed. Obviously, one may first plot the signal trajectory in signal space and extract the time windows of apparent quasi-stationarity by visual inspection. To this end, the dimensionality of multivariate signals may be reduced by principal component analysis or related techniques, cf. [18,20]. These methods give first insights into the dynamics, but allow neither classification of the quasi-stationary states nor extraction of their durations and their transition phases. More advanced methods for segmenting quasi-stationary states have to consider (i) non-stationary signal features, i.e. applying temporally instantaneous methods, (ii) multivariate time series, i.e. multidimensional signals exhibiting heterogeneous signal characteristics in different dimensions, and (iii) low signal-to-noise ratio where the noise level is not known a priori [30,31].
Techniques applied in the context of brain–computer interfaces aim to extract signal features from multivariate signals [32,33]. Most of them first perform a supervised learning task in order to determine the specific features to be extracted from the EEG. However, such a learning phase is not applicable to the detection of quasi-stationary states: this work aims to extract signal features that are unknown a priori and which may differ slightly between different trials of an experiment.
One of the first successful methods dealing with these signal characteristics is the microstate segmentation technique of Lehmann et al. [8,21,23,24,34], which extracts transient quasi-stationary states in multivariate EEG signals based on the similarity of their spatial scalp distributions. It considers the multivariate EEG as a temporal sequence of spatial activity maps and extracts the time windows of the microstates by computing the temporal difference between successive maps. This procedure allows one to compute the time windows of microstates from the signal and classifies them by the spatial averages over EEG electrodes in the extracted state time window. This method works well for high-dimensional EEG signals with good spatial resolution and high signal-to-noise ratio. Other similar methods extract signal features by clustering techniques [26–28] taking into account the similarity of the time series in different electrodes. Such methods are less sensitive to noise compared with the microstates method because they avoid the computation of temporal differences.
Although methods for experimental single-trial EEG data exist, they lack a statistical evaluation of the detected time windows, i.e. one cannot ensure that the states found reflect an underlying metastable neural attractor rather than mere noise. Hence, the results have to be evaluated statistically over several trials. This, in turn, requires mapping the temporal segments detected in different trials onto each other and, ideally, aligning them. This task is difficult because single trials are known to exhibit temporal jitter relative to each other [35,36], i.e. they are mutually delayed and compressed. Previous studies have successfully dealt with this problem by applying dynamic time warping to EEG [37]. This technique takes a given signal template and detects it in the dataset under study, allowing for dilation and shrinking operations. It is not well suited to our problem, however, because it relies on a predefined signal template, which is not known in practice.
This study elaborates and further improves our recently developed recurrence domain analysis method [38] in two ways. First, we suggest an alternative method for optimizing the neighbourhood size in the recurrence plot (RP), because the earlier proposal turned out to exhibit some bias. We modify our maximum entropy principle by an additional recoding step that maps transients in the symbolic dynamics onto one distinguished symbol. Second, we suggest a new technique for identifying and aligning recurrence domains obtained from different realizations of a dynamic process (e.g. different trajectories starting from randomly prepared initial conditions or different trials in an EEG experiment). We introduce a Hausdorff partition clustering method that achieves this alignment. Section 2 introduces the technique of recurrence domain analysis and applies the new method to single realizations, i.e. single datasets. Then, several extensions of this method are presented, followed by a novel method based on the Hausdorff distance for aligning multiple symbolic sequences obtained from multiple datasets. Section 3 concludes.
2. Symbolic recurrence domain analysis
Nonlinear dynamical systems generally obey Poincaré's famous recurrence theorem1 [39], which states that almost all trajectories starting in a ‘ball’ Bɛ(x0) of radius ɛ>0 centred at an initial condition x0∈X (where X denotes the system's phase space of dimension d) return infinitely often to Bɛ(x0) as time elapses. For time-discrete (or discretely sampled) dynamical systems, these recurrences can be visualized by means of Eckmann et al.'s [40] RP technique, where the element
R_{ij} = \Theta\bigl(\varepsilon - \lVert x_i - x_j \rVert\bigr), \qquad i, j = 1, \ldots, N \qquad (2.1)
of the recurrence matrix R=(Rij) is unity if the state xj at time j belongs to an ɛ-ball centred at state xi at time i, i.e. xj∈Bɛ(xi), and zero otherwise [40,41]; here, Θ denotes the Heaviside step function and N the number of sampling points.
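For concreteness, a minimal MATLAB sketch of equation (2.1) could look as follows; the function name, interface and the loop-based distance computation are our own illustration (not the original implementation), assuming the trajectory is stored as an N × d matrix with one sampling point per row.

```matlab
function R = recurrence_matrix(X, epsilon)
% Recurrence matrix of a multivariate time series X (N samples x d dimensions),
% following equation (2.1): R_ij = 1 if ||x_i - x_j|| <= epsilon, else 0.
    N = size(X, 1);
    R = false(N);
    for i = 1:N
        % Euclidean distances of sample i to all samples
        di = sqrt(sum((X - repmat(X(i, :), N, 1)).^2, 2));
        R(i, :) = (di <= epsilon)';
    end
end
```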
Often, RPs exhibit a characteristic ‘chequerboard texture’ [40] indicating the system's recurrence domains, which are quasi-stable regions in phase space with relatively large dwell times that are connected by transients. Figure 1c, for example, depicts a typical RP for a so-called heteroclinic contour connecting saddle nodes (details are reported in §2a). Because states moving along such a trajectory slow down in the vicinity of the saddles, these regions can be regarded as quasi-stationary states. Hence, they appear as ‘blobs’ in the RP of figure 1c. Every time the system returns into one of these recurrence domains, the RP displays such a texture, which is symmetric with respect to parallel translations along the horizontal and vertical time axes, as is the case for the ‘blobs’ in figure 1c. Other paradigmatic examples of recurrence domains are the Lorenz wings centred around the unstable foci of the Lorenz attractor [12], or saddle sets (such as saddle nodes or saddle tori) that are connected by stable heteroclinic sequences (SHS) as shown in figure 1c [10,42,43].
Figure 1.
Stable heteroclinic contour. (a) Phase portrait with indicated recurrence domain partition. (b) Multivariate time series (upper panel) and symbolic segmentation into recurrence domains (bottom panel). (c) RP for ɛ=0.15. (d) Symbolic RP resulting from recurrence grammar partitioning. (Online version in colour.)
In a recent study [38], we proposed a method for the detection of recurrence domains by means of symbolic dynamics [44–46]. Our approach is motivated by the fact that a recurrence Rij=1 implies overlapping ɛ-balls, Bɛ(xi)∩Bɛ(xj)≠Ø, which can be merged into equivalence classes; together with their complements and the still isolated balls, these classes eventually partition the phase space into its respective recurrence domains. Merging Bɛ(xi)∪Bɛ(xj) into a set Aj whenever the states xi and xj are recurrent (Rij=1) and i>j, we simply replace the larger time index i in the RP R by the smaller one j. Let 𝒫={A1, A2, …, An} be a finite partition of the phase space X into n disjoint sets Ak. Then, the discrete trajectory (xi) is mapped onto a symbolic sequence s=(si) according to the cells of 𝒫 being visited by the states xi, i.e. si=k when xi∈Ak.
In the study [38], we suggested creating the symbolic sequence s=(si) from an initial sequence of time indices ri=i subjected to a rewriting grammar [47], which we refer to henceforth as the recurrence grammar because this grammar is simply another interpretation of the recurrence matrix R obtained in equation (2.1). It contains rewriting rules that replace large time indices i by smaller ones j (i>j) when the corresponding states xi and xj are recurrent, Rij=1. Moreover, if three states xi, xj and xk are recurrent, Rij=1 and Rik=1, for i>j>k, the grammar replaces the two larger indices i, j by the smallest one k. In other words, the recurrence grammar comprises rewriting rules
i \to j \quad \text{if } R_{ij} = 1 \text{ and } i > j, \qquad (2.2)
and
i \to k, \; j \to k \quad \text{if } R_{ij} = R_{ik} = 1 \text{ and } i > j > k. \qquad (2.3)
We illustrate the action of the recurrence grammar with a simple example. Let us assume that a trajectory consists of only five data points (x1,x2,…,x5) giving rise to the recurrence matrix
R = \begin{pmatrix} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 \end{pmatrix}. \qquad (2.4)
The algorithm commences with the fifth row, detecting a recurrence R51=1. Because 5>1, a rewriting rule 5→1 is generated. Because the next recurrence in row 5 is trivial, the algorithm continues with row 4, where R42=R43=1. Now, two rules 4→2 and 3→2 are created. The remaining rows 3, 2 and 1 can be neglected owing to the symmetry. Thus, we obtain the following recurrence grammar
\{\, 5 \to 1, \; 4 \to 2, \; 3 \to 2 \,\}. \qquad (2.5)
Recursively applying this grammar to the series of time indices r=12345 yields s=12221, i.e. a system with two recurrence domains 1 and 2. Note that recurrence grammars can be ambiguous, i.e. one left-hand symbol may be rewritten into different right-hand symbols by several rules. Yet this poses no problem for the segmentation of the trajectory into recurrence domains when the grammar is applied recursively: after at most two iterations, all ambiguities are resolved in favour of the smallest time index, which symbolizes the earliest recurrence domain.
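Rule generation and recursive rewriting can be sketched in MATLAB along the following lines; the bookkeeping (one stored target per left-hand symbol, fixed-point iteration) is our own illustration of rules (2.2) and (2.3), not necessarily the published implementation. Applied to the matrix (2.4), the sketch reproduces the sequence s = 12221 discussed above.

```matlab
function s = recurrence_grammar(R)
% Derive the recurrence grammar from a (logical) recurrence matrix R and
% apply it recursively to the sequence of time indices r = 1:N.
    N = size(R, 1);
    rule = 1:N;                     % rule(i) = j encodes the rewriting i -> j
    for i = N:-1:2
        part = find(R(i, 1:i-1));   % recurrence partners of i with index < i
        if ~isempty(part)
            j = part(1);            % smallest partner index
            rule([part i]) = j;     % rules (2.2)/(2.3): i and all partners -> j
        end
    end
    s = 1:N;                        % initial sequence of time indices
    t = rule(s);
    while ~isequal(t, s)            % recursive application until a fixed point
        s = t;
        t = rule(s);
    end
    s = t;
end
```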
Furthermore, the symbolic alphabet of the rewritten sequence may contain several gaps. We therefore augmented our algorithm by a search-and-replace mechanism that recodes the used larger indices with unused smaller ones, thereby squeezing the symbolic repertoire together.
From the resulting symbolic sequence s, a symbolic recurrence plot [48,49] can be computed. Doing this for our example above, we obtain figure 1d where the ‘blobs’ have been converted into squares. Now, the chequerboard texture is clearly visible: each square corresponds to a quasi-stationary state captured by a recurrence domain in phase space.
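In code, the symbolic RP follows directly from the rewritten sequence; the lines below are our sketch of this step (two time points are recurrent in the symbolic sense iff they carry the same symbol).

```matlab
% Symbolic recurrence plot from the rewritten sequence s, as in figure 1d
S = bsxfun(@eq, s(:), s(:)');   % S_ij = 1 if s_i == s_j, else 0
spy(S);                         % squares instead of 'blobs'
```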
(a). Numerical simulations
For further illustration and validation of our algorithm, we carry out a numerical simulation study of a stable heteroclinic contour, i.e. a closed SHS connecting three saddle nodes [10,42,43], which may be regarded as a model for event-related brain potentials according to our working hypothesis [26,28].
The stable heteroclinic contour was obtained from a generalized Lotka–Volterra dynamics of n=3 competing populations
\dot{x}_k = x_k \Bigl( \sigma_k - \sum_{j=1}^{n} \rho_{kj}\, x_j \Bigr), \qquad k = 1, \ldots, n, \qquad (2.6)
with growth rates σk>0, and interaction weights ρkj>0, ρkk=1. The particular parameter settings were σ1=1, σ2=1.2 and σ3=1.6.
Lotka–Volterra systems are well studied in theoretical ecology where they describe the intra- and interspecific interactions, i.e. competition for limited resources and predator–prey networks, among n populations with abundances xk in an ecosystem. Moreover, these systems have a long-standing tradition in computational neuroscience [50,51] as a suitable approximation [52] of the Wilson–Cowan neural mass model [53].
The Lotka–Volterra equations often describe limit cycles in appropriate parameter regimes. However, they also exhibit SHS [10,42,43] and stable heteroclinic channels (SHCs) [10,54] for asymmetric interactions, which is of particular importance for neural applications.
(i). Recurrence grammars
The heteroclinic contour in our simulation is governed by: ρ3,1=σ3/σ1+0.5, ρ2,1=σ2/σ1−0.5 for saddle 1; ρ1,2=σ1/σ2+0.5, ρ3,2=σ3/σ2−0.5 for saddle 2; and ρ2,3=σ2/σ3+0.5, ρ1,3=σ1/σ3−0.5 for saddle 3. The system was initialized with x=(1,0.17,0.01)T and numerically integrated over [0,100] with Δt=0.1429 and the Runge–Kutta algorithm ode45 of MATLAB. Figure 1 shows the resulting trajectory x(t) in figure 1a and the three coordinates, x1(t), x2(t) and x3(t) in the upper panel of figure 1b.
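For reference, the simulation can be reproduced roughly as follows; this is a sketch using the parameters quoted above, with variable names of our own choosing, and it reuses the recurrence_matrix sketch from earlier to obtain an RP comparable to figure 1c.

```matlab
% Generalized Lotka-Volterra system (2.6) with the heteroclinic-contour parameters
sigma = [1; 1.2; 1.6];
rho = ones(3);                                                            % rho_kk = 1
rho(3,1) = sigma(3)/sigma(1) + 0.5;  rho(2,1) = sigma(2)/sigma(1) - 0.5;  % saddle 1
rho(1,2) = sigma(1)/sigma(2) + 0.5;  rho(3,2) = sigma(3)/sigma(2) - 0.5;  % saddle 2
rho(2,3) = sigma(2)/sigma(3) + 0.5;  rho(1,3) = sigma(1)/sigma(3) - 0.5;  % saddle 3

glv = @(t, x) x .* (sigma - rho * x);                  % right-hand side of (2.6)
[~, X] = ode45(glv, 0:0.1429:100, [1; 0.17; 0.01]);    % Delta t = 0.1429

R = recurrence_matrix(X, 0.15);                        % RP as in figure 1c
imagesc(R); axis xy square;
```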
Regarding the coordinates as ‘recording channels’ of an (idealized) EEG experiment, we can identify the traces of x1 (blue online), x2 (yellow online) and x3 (green online) as ‘ERP components’, peaking at the respective saddle nodes in phase space.
For illustration purposes only, we show a typical RP of the dynamics obtained for ɛ=0.15 in figure 1c. The RP exhibits a ‘skeleton’ of parallel diagonal lines which is characteristic for oscillatory dynamics: the distance of two lines reflects the oscillation's period. As already mentioned above, the saddles appear as ‘blobs’ owing to the system's slowing down around these quasi-stationary states. Applying our recurrence domain algorithm to the simulated data with a different ɛ*=0.017 yields the symbolic segmentation displayed in the bottom panel of figure 1b. Each peak is now assigned to one recurrence domain symbolized by the same colour. The corresponding phase space partition is depicted by the coloured balls in figure 1a. Moreover, figure 1d depicts the symbolic RP resulting from this encoding, where the ‘blobs’ eventually became squares, thus indicating the partitioning of the system's phase space into equivalence classes.
One of the pertinent problems in recurrence analysis is optimizing the ball size ɛ [55]. In our previous study [38], we suggested maximizing the ratio of symbol entropy
H(\varepsilon) = - \sum_{k=1}^{M(\varepsilon)} p_k \log p_k \qquad (2.7)
over the cardinality of the symbolic repertoire M(ɛ), where pk is the relative frequency of symbol k in the symbolic sequence s. However, this utility function strongly penalizes large alphabets through the denominator M(ɛ), such that the proposed procedure is biased towards binary partitions, which is not appropriate in most applications, including our SHS example. Therefore, here we remedy this bias by an additional recoding of s. The symbolic dynamics obtained from recurrence grammar encoding still contains subsequences (or ‘words’) of monotonically increasing time indices, resulting from isolated ɛ-balls. These isolated balls, reflecting the transients of the dynamics, entail large symbol alphabets that are penalized by the utility function above.
For our new optimization method, we scan the sequence s for monotonically increasing time indices, such as 4567, for example, which are then replaced by a new symbol 0, resulting in the subsequence 0000. Thereby, all isolated balls receive the same label 0, now indicating ‘transient’ in the symbolic dynamics. This new symbol is plotted in red (online) in figure 1; red segments and partition cells therefore denote transients in our encoding.
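A possible implementation of this recoding step is sketched below. It rests on our reading that the symbols forming such monotonically increasing runs are exactly those stemming from isolated ɛ-balls, i.e. symbols occurring only once in s; the published implementation may differ in detail.

```matlab
function s = recode_transients(s)
% Map symbols that occur only once in s (isolated epsilon-balls, i.e. transients)
% onto the distinguished transient symbol 0.
    [u, ~, idx] = unique(s);
    counts = accumarray(idx(:), 1);    % number of occurrences of each symbol
    singles = u(counts == 1);          % symbols used exactly once
    s(ismember(s, singles)) = 0;       % relabel them as 'transient'
end
```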
Hence, the optimization algorithm performs a sequence of three encoding steps: (i) rewriting by the recurrence grammar, (ii) compressing the alphabet for closing gaps, and (iii) mapping transients onto 0. Figure 2 shows the intermediate results of this optimization procedure. In figure 2a, we plot the final symbolic sequences (called s, again) as a function of the ball size ɛ.
Figure 2.
Dependence of recurrence domain partition on ball size ɛ. (a) Symbolic segmentation of time series from heteroclinic contour into recurrence domains for varying ɛ. Same colours encode the same symbol. (b) Symbol entropy (2.7) dependent on ɛ. (Online version in colour.)
The symbolic dynamics plotted in figure 2a starts with ɛ=0.001 in the first row. Here, almost all balls remain isolated, which is shown by the symbol 0 printed in olive here. For increasing ɛ, the number of recurrences increases owing to the trajectory's slowing down in the vicinity of the saddle nodes. These saddles correspond to the ‘tongues’ widening up to the recurrence domains. For some range of ɛ, the results are quite robust, indicating the stability of our algorithm. At the bottom row of figure 2a, ɛ=0.032, eventually all balls merge together into one domain covering the entire trajectory. This trivial domain is printed in dark brown here.
Computing the Shannon entropy equation (2.7) for each row in figure 2a yields the distribution in figure 2b. For small and large ɛ, the symbolic dynamics is highly degenerate: for small values, it consists mainly of the transient symbol 0; for large values, there is essentially only one remaining symbol. In both cases, these symbols are highly predictable, which is reflected by low entropies. Because pk is the relative frequency of symbol k in the symbolic sequence s, it can be regarded as the system's dwell time in partition cell Ak and therefore as the duration of either a quasi-stationary state or of a transient regime. Considering now an approximately uniform symbol distribution with maximal entropy, we arrive at an optimization constraint maximizing both the durations of quasi-stationary states and their separation through transient regimes of comparable duration. In our particular example of figure 2b, the entropy distribution is relatively broad and exhibits several local maxima. Therefore, we suggest looking not for one of these (probably) spurious maxima for optimizing ɛ, but rather for the median of a symmetric distribution, or likewise for another quantile in the case of an asymmetric entropy distribution. In this way, we obtained the optimal ball size ɛ* for figure 1 through the median (the q=0.5 quantile).
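The following sketch ties the three encoding steps and the quantile criterion together. It assumes the helper functions recurrence_matrix, recurrence_grammar and recode_transients sketched earlier; moreover, the quantile criterion is implemented under our reading that H(ɛ) is treated as an unnormalized weight over the ɛ grid and ɛ* is taken where the cumulative entropy mass reaches q.

```matlab
function eps_opt = optimize_epsilon(X, eps_grid, q)
% Symbol entropy (2.7) on a grid of ball sizes and quantile-based choice of eps*.
    H = zeros(size(eps_grid));
    for m = 1:numel(eps_grid)
        R = recurrence_matrix(X, eps_grid(m));   % recurrence plot, eq. (2.1)
        s = recurrence_grammar(R);               % step (i): grammar rewriting
        [~, ~, s] = unique(s);                   % step (ii): compress the alphabet
        s = recode_transients(s(:)');            % step (iii): map transients onto 0
        [~, ~, ic] = unique(s);
        p = accumarray(ic(:), 1) / numel(s);     % relative symbol frequencies p_k
        H(m) = -sum(p .* log(p));                % symbol entropy, eq. (2.7)
    end
    c = cumsum(H) / sum(H);                      % cumulative entropy mass over eps
    eps_opt = eps_grid(find(c >= q, 1));         % q-quantile (q = 0.5: median)
end
```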
(ii). Hausdorff partition clustering
Lotka–Volterra systems as in equation (2.6) do not only possess SHS; their trajectories are additionally captured by SHCs [54], which illustrate another important property of event-related brain potentials. Adopting our working hypothesis again, that ERP components correspond to saddle sets, or, more generally, to recurrence domains in the brain's phase space [26,28], we need to understand the emergence of ERP waveforms from ensemble averages over realizations of the system's dynamics starting from randomly distributed initial conditions [3,56].
To this end, we simulated an ensemble of 20 trajectories starting from randomly distributed initial conditions in the vicinity of the first saddle node (the blue recurrence domain in figure 1). The phase portrait of this ensemble (each realization plotted in a different colour) is shown in figure 3a and the x3-components in figure 3b. In addition, we present the ensemble averages as the thick black curves in both panels.
Figure 3.
Stable heteroclinic channel (SHC). (a) Phase portrait of multiple realizations (coloured lines) and ensemble average (thick black line). (b) Projections onto x1-axis. (Online version in colour.)
Figure 3 nicely demonstrates that all simulated trajectories are confined to a ‘tube’, namely the SHC, stretched across the saddle nodes. This channel is stable against small perturbations in the initial conditions and also against additive noise [54], i.e. for moderate noise levels, the dynamically evolving states never leave the SHC. Moreover, figure 3 shows the dispersion of the ensemble average which is inevitable in standard ERP analyses (yet see [36,56] for alternative approaches). This dispersion is due to the velocity differences dependent on the distance of the actual state from the saddles. The closer a state comes to the saddle, the larger is its acceleration along the unstable and its deceleration along the stable manifold. Hence, the velocities with which states explore their available phase space crucially depend on the initial conditions. As the system evolves, the dynamics accumulates phase differences, deteriorating the averaged ensemble trajectory. This finding is consistent with the phase-resetting hypothesis of ERPs [57] as phase-resetting takes place in the preparation of initial conditions.
Our example is also relevant for the larger scope of nonlinear data analysis problems. Computing RPs and subsequent recurrence quantification analysis [55,58] is numerically rather expensive and therefore restricted to relatively short time series. In practice, a long-lasting time series is therefore cut into many segments for which recurrence analyses are carried out individually. The pertinent problem is then comparison and alignment of the individual results.
In the framework of our symbolic recurrence domain analysis, we suggest here a solution based on a clustering algorithm whose results are presented in figure 4. In figure 4a, we plot the symbolic sequences of the 20 SHS realizations from figure 3, now encoded with the q=0.7 quantile.
Figure 4.
Symbolic segmentation of the realizations from figure 3 into recurrence domains: (a) before and (b) after Hausdorff partition clustering. Black (dark blue online) encodes transients. (Online version in colour.)
Figure 4a shows that the different realizations use essentially the same colour palette, which is due to the recurrence grammar and recoding algorithm. However, the same symbol (i.e. colour) does not necessarily refer to the same recurrence domain in the system's phase space (with one exception: transients are always encoded as 0, here plotted in dark blue). Therefore, we have to render the sequences in figure 4a comparable with respect to corresponding regions in phase space.
For that aim, we first gather all discrete sampling points belonging to the same phase space cell Ak that is labelled by the symbol k, i.e. Ak={xi|si=k}⊂X. Because high-dimensional phase spaces are usually very sparsely populated with sampling points from experimental or simulated data, we then project the cells onto the d-dimensional hypersphere through yi=xi/∥xi∥. This is also justified by the goal of clustering recurrent topographies of ERP data [8,22,34].2
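In code, this gathering and projection step may be sketched as follows, with X the N × d data matrix and s the symbolic sequence obtained before (variable names are ours).

```matlab
% Project all sampling points onto the unit hypersphere, y_i = x_i / ||x_i||,
% and collect the points of every recurrence domain (ignoring the transient symbol 0).
Y = X ./ repmat(sqrt(sum(X.^2, 2)), 1, size(X, 2));
symbols = unique(s(s > 0));
domains = cell(numel(symbols), 1);
for k = 1:numel(symbols)
    domains{k} = Y(s == symbols(k), :);   % topographies belonging to one domain
end
```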
In the next step, the recurrence domain partitions 𝒫1 and 𝒫2 of two subsequent realizations are merged together into a set system 𝒬 by regarding all symbols (apart from 0) as being different. This set system is, in general, not a partition any more. From the members of 𝒬 we then calculate the pairwise Hausdorff distances3
d_H(A_i, A_j) = \max \Bigl\{ \sup_{x \in A_i} d(x, A_j), \; \sup_{x \in A_j} d(x, A_i) \Bigr\}, \qquad (2.8)
where
d(x, A) = \inf_{y \in A} \lVert x - y \rVert \qquad (2.9)
measures the ‘distance’ of the point x from the compact set A⊂X. The Hausdorff distance between two compact sets vanishes when they coincide and remains small when they closely overlap each other. The recurrence domains taken into account here are clearly compact sets as they are finite unions of intersecting ɛ-balls.
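For the finite point sets that represent the recurrence domains in practice, the suprema and infima in (2.8) and (2.9) reduce to maxima and minima, so the Hausdorff distance may be sketched as follows (our illustration, not an optimized implementation).

```matlab
function dH = hausdorff_distance(A, B)
% Hausdorff distance (2.8)/(2.9) between two finite point sets A and B
% (one point per row).
    D = zeros(size(A, 1), size(B, 1));
    for i = 1:size(A, 1)
        D(i, :) = sqrt(sum((B - repmat(A(i, :), size(B, 1), 1)).^2, 2))';
    end
    dH = max(max(min(D, [], 2)), max(min(D, [], 1)));   % max of the two directed distances
end
```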
From the pairwise Hausdorff distances (2.8), we compute the θ-similarity matrix S with elements
S_{ij} = \Theta\bigl(\theta - d_H(A_i, A_j)\bigr), \qquad (2.10)
which has essentially the same properties as the recurrence matrix R from equation (2.1). Therefore, we merge the members of 𝒬 into new partition cells by interpreting S as another recurrence grammar, rewriting the large index i of Ai by the smaller index j of Aj (i>j) when the two cells are θ-similar (Sij=1).
For numerical implementation, we recursively decompose the ensemble of realizations into halves until only one or two trajectories are obtained. In the latter case, their recurrence domain partitions are merged together and subjected to the recurrence grammar given by the Hausdorff similarity matrix S. The resulting partition contains the unions of similar recurrence domains that are passed to the next iteration of the algorithm.
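A single merging step of this procedure may be sketched as follows; the recursive decomposition of the ensemble into halves and the bookkeeping across iterations are omitted, and the function names are ours. The input is the pooled cell array of recurrence domains (e.g. built as in the projection sketch above for two realizations).

```matlab
function labels = merge_domains(domains, theta)
% theta-similarity matrix (2.10) of pooled recurrence domains and relabelling
% via the recurrence grammar: theta-similar domains obtain the same (smallest) index.
    n = numel(domains);
    S = false(n);
    for i = 1:n
        for j = 1:i
            S(i, j) = hausdorff_distance(domains{i}, domains{j}) <= theta;
            S(j, i) = S(i, j);           % S plays the same role as R in (2.1)
        end
    end
    labels = recurrence_grammar(S);      % large indices rewritten by smaller ones
end
```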
The result of the Hausdorff partition clustering algorithm applied to our SHC simulation is shown in figure 4b for θ=0.7. Now, all symbols (colours) refer to the same recurrence domains in phase space (respectively, to their topographic projections). Realizations 1 to 12 exhibit very similar behaviours, as they explore the same recurrence domains. Note also the differences in phase and duration between the individual realizations. Although the algorithm converges nicely, it is numerically very demanding in the interpreted MATLAB language: clustering the simulated recurrence domains took approximately 100 h of computing time on a conventional PC. However, we have been able to substantially improve and speed up the Hausdorff clustering algorithm; these results will be published in a subsequent publication. Considered as a model of ERP data, figure 4b shows great resemblance to statically encoded ERPs [56], where ERP components correspond to meandering vertical stripes.
(b). Event-related brain potentials
The analysis of electroencephalographic data by means of symbolic dynamics techniques has a long-standing tradition. It can be traced back to Lehmann [60] and Callaway & Halliday [61], who encoded extremal EEG voltages in a binary fashion and estimated the relative frequencies of these symbols across trials. The former approach later led to the so-called half-wave encoding [62], whereas the latter was taken up through cylinder measures and word statistics [56]. Further developments in this field were the symbolic resonance analysis [63,64] and order pattern analyses [65–67]. In addition, the segmentation of the EEG into quasi-stable brain microstates can be subsumed under these techniques [8,22,34], which were recently combined with spectral clustering methods on approximate Markov chains [17,68,69].
In this section, we reanalyse an ERP experiment on the processing of ungrammaticalities in German [70] (see [71–74] for other studies on the symbolic dynamics of language-related brain potentials). Frisch et al. [70] examined processing differences for different violations of lexical and grammatical rules. Here, we focus only on the contrast between a so-called phrase structure violation, condition (11-b), indicated by the asterisk, and grammatical control sentences, condition (11-a).
Sentences of type (11-b) are ungrammatical in German, because the preposition am is followed by a past participle instead of a noun. A correct continuation could be, for example, im Garten wurde am Zaun gearbeitet (‘work was going on in the garden at the fence’) with am Zaun (‘at the fence’) comprising an admissible prepositional phrase.
The ERP study was conducted with 17 subjects in a visual word-by-word presentation paradigm. Subjects were presented with 40 structurally identical examples per condition (i.e. 40 trials). The critical word in all conditions was the past participle printed in bold font above. EEG and an additional electrooculogram (EOG) for controlling eye movements were recorded with a 64-electrode montage, of which 59 channels measured EEG proper; these were selected for the recurrence domain analysis.
(i). Grand averages
First, we carry out the recurrence domain analysis for the grand averages over all 17 subjects, which are presented in the upper panels of figure 5. On the left-hand side, the multivariate ERP time series for the correct condition (11-a) are plotted as coloured traces, one for each EEG electrode.4 The right-hand side shows the corresponding time series for the phrase structure violation condition (11-b). Both plots start 200 ms before stimulus presentation at t=0. In both conditions, similar N100/P200 ERP complexes are evoked. Moreover, condition (11-b) exhibits a large positive P600 ERP at central and posterior electrodes around 600 ms after stimulus presentation.
Figure 5.
ERP grand averages and recurrence domain segmentation from [70]. (a) Correct condition (11-a). (b) Phrase structure violation condition (11-b). (Top panels) Grand average ERPs over 17 subjects for all EEG channels, e.g. (online) blue,C5; green,T7; red,FC5. Each trace shows one recording channel. (Bottom panels) Symbolic dynamics of recurrence domains. (Online version in colour.)
For the recurrence domain analysis, we regard the observation space [75] spanned by the instantaneous EEG voltages v=(vi), with 1≤i≤d as EEG electrode index and d the number of recording channels, as the system's d-dimensional phase space [76,77] and compute entropy distributions H(ɛ) (2.7) in the range ɛ∈[0.2 μV, 5 μV]. These distributions are very asymmetric. Therefore, we chose the q=0.77 quantile for optimization by visual inspection of both segmentations, yielding ɛ*=2.2 μV for the control condition. We then compute the recurrence grammars and the resulting segmentation into d-dimensional recurrence domains with this parameter for both conditions. The results are shown in the bottom panels of figure 5.
From figure 5 (bottom panels), we draw three important conclusions. (i) The pre-stimulus interval and also the first 100 ms are assigned to the same recurrence domain, reflecting resting-state brain activity. (ii) In the time window from 100 to 300 ms, a similar partitioning into successive recurrence domains is observed for both conditions. This indicates similarities in the early attentional processes assigned to the N100/P200 ERP complexes [4]. (iii) Both conditions differ crucially after 300 ms. In the correct control condition (11-a), there is only one large recurrence domain in this time window, whereas the phrase structure violation condition (11-b) exhibits two recurrence domains: a first one in the time window of lexical access and syntactic diagnosis processes around 400–700 ms, and another one in the time window of syntactic repair and reanalysis processes around 700–1200 ms, which is consistent with other findings about subcomponents of the late positivity [73].
(ii). Single subject clustering
Finally, we carry out the Hausdorff partition cluster analysis for the phrase structure violation condition (11-b). Here, we present only a proof of concept using the single-subject ERPs shown in figure 6 for one characteristic electrode Pz that is located above the brain's parietal lobe at the midline of the head. Now, each trace depicts the ERP average of one subject.
Figure 6.
Single-subject ERP averages from [70]. Each trace shows one subject at channel Pz. (Online version in colour.)
Single-subject ERPs as shown in figure 6 exhibit much more variation than grand averages from figure 5, which is reflected by the amplitude scales: 40 μV in figure 6 versus 20 μV in figure 5. Furthermore, large differences in the P600 waveforms around 800 ms are also visible between subjects. Thus, single-subject ERPs provide a good benchmark for ensembles of realizations of noisy dynamics.
All individual single-subject ERPs of condition (11-b) are subjected to the same recurrence analysis with the q=0.77 quantile of the entropy distribution (2.7). The results are shown in figure 7a.
Figure 7.
Hausdorff partition clustering of single-subject ERPs for condition (11-b). (a) Symbolic dynamics of individual recurrence domains of 17 subject ERPs. (b) Recurrence domain symbolic dynamics after Hausdorff partition clustering. (Online version in colour.)
Figure 7a displays large differences between individual single-subject ERPs, although some resemblances with the late grand average recurrence domains from figure 5 (bottom right) are present, for example, in subjects 1, 5, 10, 13 and 17. As every symbol must be treated differently across realizations, the cardinality of the alphabet for this representation is about 272.
In order to obtain a good clustering despite the apparent differences, we chose a similarity threshold θ=0.985 for our Hausdorff algorithm. The resulting clustering of recurrence domain topographies is depicted in figure 7b. The cardinality of the alphabet is drastically reduced to 54 different symbols. Obviously, some recurrence domains are shared by many realizations, as reflected by the repeating symbol appearing in medium blue in subjects 1–3, 7–9, as well as 11, 12 and 16 around 400–800 ms. However, the desired ‘meandering vertical stripes’ are not yet visible.
For further evaluation, the symbolic dynamics obtained from different experimental conditions, different subjects and eventually from single trials could be subjected to statistical analysis by computing word distributions, cylinder entropies and statistical hypothesis tests (e.g. χ2 tests on word distributions or permutation tests on symbolic ensembles) [56,64]. We leave this as well as further parameter search for optimizing encodings for a future publication.
3. Conclusion
Starting from the working hypothesis that event-related brain potentials (ERPs) reflect quasi-stationary states in brain dynamics, we have elaborated an earlier proposal for the detection of quasi-stationary states and for the segmentation of electroencephalographic time series into recurrence domains. This segmentation technique interprets the recurrence matrix as a rewriting grammar that applies to the time indices of an ERP dataset. After further recoding steps, the quasi-stationary states are detected as recurrence domains in phase space as indicated by a symbolic dynamics. Moreover, we have suggested a method for the alignment and unification of multiple EEG trials (i.e. single-subject ERPs) by means of Hausdorff partition clustering.
We think that the presented methods could be of significance for the greater community of nonlinear time series analysis, as we have addressed some pertinent problems in recurrence analysis and symbolic dynamics. We have suggested an optimization procedure for choosing the ball size of RPs by maximizing the entropy of the symbol distribution that is obtained by applying recurrence grammars and subsequently recoding transients. In addition, the alignment and comparison of RPs of different time windows or different realizations has so far been an unsolved problem. We have addressed it by merging together recurrence domains from different time series based on their similarity with respect to the Hausdorff distance, either in phase space or in projection onto the unit sphere.
Acknowledgements
We thank the guest editors for their kind invitation to contribute to this Theme Issue of the Philosophical Transactions A. In this study, we reanalysed a language processing EEG dataset by courtesy of Stefan Frisch, Anja Hahne and Angela Friederici.
Footnotes
1. Under the assumptions that the dynamics possesses an invariant measure and is restricted to a finite portion of phase space.
2. A classification method based on the estimation of phase space probability densities was reported by Wright & Schult [59]. However, because probability density functions are even harder to estimate for sparsely sampled data, we use the recurrence domain partitioning instead.
3. Note that Wackermann et al. [8] used a very similar method for clustering brain microstate topographies.
4. For a legend of electrode labels, see the electronic supplementary material.
Funding statement
A.H. and P.b.G. acknowledge funding from the European Research Council for support under the European Union's Seventh Framework Programme (FP7/2007-2013) ERC grant agreement no. 257253. In addition, P.b.G. acknowledges support by a Heisenberg Fellowship of the German Research Foundation DFG (GR 3711/1-2).
References
1. Näätänen R, Picton T. 1987. The N1 wave of the human electric and magnetic response to sound: a review and an analysis of the component structure. Psychophysiology 24, 375–425. (doi:10.1111/j.1469-8986.1987.tb00311.x)
2. Regan D. 1989. Human brain electrophysiology: evoked potentials and evoked magnetic fields in science and medicine. New York, NY: Elsevier.
3. Başar E. 1980. EEG-brain dynamics. Relations between EEG and brain evoked potentials. Amsterdam, The Netherlands: Elsevier/North Holland Biomedical Press.
4. Woods DL. 1990. The physiological basis of selective attention: implications of event-related potential studies. In Event-related potentials. Basic issues and applications (eds Rohrbaugh JW, Parasuraman R, Johnson R), pp. 178–209. New York, NY: Oxford University Press.
5. Latash ML, Scholz JP, Schöner G. 2007. Toward a new theory of motor synergies. Motor Control 11, 276–308.
6. Ungerleider LG, Pessoa L. 2008. What and where pathways. Scholarpedia 3, 5342. (doi:10.4249/scholarpedia.5342)
7. Haxby JV, Hoffman EA, Gobbini MI. 2000. The distributed human neural system for face perception. Trends Cogn. Sci. 4, 223–233. (doi:10.1016/S1364-6613(00)01482-0)
8. Wackermann J, Lehmann D, Michel CM, Strik WK. 1993. Adaptive segmentation of spontaneous EEG map series into spatially defined microstates. Int. J. Psychophysiol. 14, 269–283. (doi:10.1016/0167-8760(93)90041-M)
9. Yildiz IB, Kiebel SJ. 2011. A hierarchical neuronal model for generation and online recognition of birdsongs. PLoS Comput. Biol. 7, e1002303. (doi:10.1371/journal.pcbi.1002303)
10. Rabinovich MI, Huerta R, Laurent G. 2008. Transient dynamics for neural processing. Science 321, 48–50. (doi:10.1126/science.1155564)
11. Gazzaniga M, Ivry RB, Mangun GR. 2013. Cognitive neuroscience: the biology of the mind, 4th edn. New York, NY: W. W. Norton.
12. Lorenz EN. 1963. Deterministic nonperiodic flow. J. Atmos. Sci. 20, 130–141.
13. Haken H. 1975. Analogy between higher instabilities in fluids and lasers. Phys. Lett. A 53, 77–78. (doi:10.1016/0375-9601(75)90353-9)
14. Pavlides C, Winson J. 1989. Influences of hippocampal place cell firing in the awake state on the activity of these cells during subsequent sleep episodes. J. Neurosci. 9, 2907–2918. See http://www.jneurosci.org/content/9/8/2907.long.
15. Friedrich R, Uhl C. 1996. Spatio-temporal analysis of human electroencephalograms: petit-mal epilepsy. Physica D 98, 171–182. (doi:10.1016/0167-2789(96)00059-0)
16. Wendling F. 2008. Computational models of epileptic activity: a bridge between observation and pathophysiological interpretation. Expert Rev. Neurother. 8, 889–896. (doi:10.1586/14737175.8.6.889)
17. Allefeld C, Atmanspacher H, Wackermann J. 2009. Mental states as macrostates emerging from EEG dynamics. Chaos 19, 015102. (doi:10.1063/1.3072788)
18. Mazor O, Laurent G. 2005. Transient dynamics versus fixed points in odor representations by locust antennal lobe projection neurons. Neuron 48, 661–673. (doi:10.1016/j.neuron.2005.09.032)
19. Freeman WJ, Rogers LJ. 2002. Fine temporal resolution of analytic phase reveals episodic synchronization by state transitions in gamma EEGs. J. Neurophysiol. 87, 937–945. See http://jn.physiology.org/content/87/2/937.long.
20. Hudson AE, Calderon DP, Pfaff DW, Proekt A. 2014. Recovery of consciousness is mediated by a network of discrete metastable activity states. Proc. Natl Acad. Sci. USA 111, 9283–9288. (doi:10.1073/pnas.1408296111)
21. Lehmann D, Skrandies W. 1980. Reference-free identification of components of checkerboard-evoked multichannel potential fields. Electroencephalogr. Clin. Neurophysiol. 48, 609–621. (doi:10.1016/0013-4694(80)90419-8)
22. Lehmann D, Ozaki H, Pal I. 1987. EEG alpha map series: brain micro-states by space-oriented adaptive segmentation. Electroencephalogr. Clin. Neurophysiol. 67, 271–288. (doi:10.1016/0013-4694(87)90025-3)
23. Brandeis D, Lehmann D, Michel CM, Mingrone W. 1995. Mapping event-related brain potential microstates to sentence endings. Brain Topogr. 8, 145–159. (doi:10.1007/BF01199778)
24. Lehmann D, Pascual-Marqui RD, Michel C. 2009. EEG microstates. Scholarpedia 4, 7632. (doi:10.4249/scholarpedia.7632)
25. Hutt A, Munk MH. 2006. Mutual phase synchronization in single trial data. Chaos Complex. Lett. 2, 225–246.
26. Hutt A, Riedel H. 2003. Analysis and modeling of quasi-stationary multivariate time series and their application to middle latency auditory evoked potentials. Physica D 177, 203–232. (doi:10.1016/S0167-2789(02)00747-9)
27. Hutt A, Schrauf M. 2007. Detection of transient synchronization in multivariate brain signals, application to event-related potentials. Chaos Complex. Lett. 3, 1–24.
28. Hutt A. 2004. An analytical framework for modeling evoked and event-related potentials. Int. J. Bifurc. Chaos 14, 653–666. (doi:10.1142/S0218127404009351)
29. beim Graben P, Potthast R. 2009. Inverse problems in dynamic cognitive modeling. Chaos 19, 015103. (doi:10.1063/1.3097067)
30. Effern A, Lehnertz K, Schreiber T, Grunwald T, David P, Elger CE. 2000. Nonlinear denoising of transient signals with application to event-related potentials. Physica D 140, 257–266. (doi:10.1016/S1386-9477(00)00111-9)
31. Effern A, Lehnertz K, Fernández G, Grunwald T, David P, Elger CE. 2000. Single trial analysis of event related potentials: non-linear de-noising with wavelets. Clin. Neurophysiol. 111, 2255–2263. (doi:10.1016/S1388-2457(00)00463-6)
32. Bostanov V. 2004. BCI competition 2003—data sets Ib and IIb: feature extraction from event-related brain potentials with the continuous wavelet transform and the t-value scalogram. IEEE Trans. Biomed. Eng. 51, 1057–1061. (doi:10.1109/TBME.2004.826702)
33. Liang N, Bougrain L. 2012. Decoding finger flexion from band-specific ECoG signals in humans. Front. Neurosci. (Neuroprosth.) 6, 91. (doi:10.3389/fnins.2012.00091)
34. Pascual-Marqui RD, Michel CM, Lehmann D. 1995. Segmentation of brain electrical activity into microstates: model estimation and validation. IEEE Trans. Biomed. Eng. 42, 658–665. (doi:10.1109/10.391164)
35. Quian Quiroga R, Atienza M, Cantero JL, Jongsma MLA. 2007. What can we learn from single-trial event-related potentials? Chaos Complex. Lett. 2, 345–363.
36. Ouyang G, Herzmann G, Zhou C, Sommer W. 2011. Residue iteration decomposition (RIDE): a new method to separate ERP components on the basis of latency variability in single trials. Psychophysiology 48, 1631–1647. (doi:10.1111/j.1469-8986.2011.01269.x)
37. Huang HC, Jansen BH. 1985. EEG waveform analysis by means of dynamic time-warping. Int. J. Biomed. Comput. 17, 135–144. (doi:10.1016/0020-7101(85)90084-4)
38. beim Graben P, Hutt A. 2013. Detecting recurrence domains of dynamical systems by symbolic dynamics. Phys. Rev. Lett. 110, 154101. (doi:10.1103/PhysRevLett.110.154101)
39. Poincaré H. 1890. Sur le problème des trois corps et les équations de la dynamique. Acta Math. 13, 1–270.
40. Eckmann JP, Kamphorst SO, Ruelle D. 1987. Recurrence plots of dynamical systems. Europhys. Lett. 4, 973–977. (doi:10.1209/0295-5075/4/9/004)
41. Marwan N, Kurths J. 2005. Line structures in recurrence plots. Phys. Lett. A 336, 349–357. (doi:10.1016/j.physleta.2004.12.056)
42. Afraimovich VS, Rabinovich MI, Varona P. 2004. Heteroclinic contours in neural ensembles and the winnerless competition principle. Int. J. Bifurc. Chaos 14, 1195–1208. (doi:10.1142/S0218127404009806)
43. Afraimovich VS, Zhigulin VP, Rabinovich MI. 2004. On the origin of reproducible sequential activity in neural circuits. Chaos 14, 1123–1129. (doi:10.1063/1.1819625)
44. Daw CS, Finney CEA, Tracy ER. 2003. A review of symbolic analysis of experimental data. Rev. Sci. Instrum. 74, 915–930. (doi:10.1063/1.1531823)
45. Hao BL. 1991. Symbolic dynamics and characterization of complexity. Physica D 51, 161–176. (doi:10.1016/0167-2789(91)90229-3)
46. Lind D, Marcus B. 1995. An introduction to symbolic dynamics and coding. Cambridge, UK: Cambridge University Press.
47. Hopcroft JE, Ullman JD. 1979. Introduction to automata theory, languages, and computation. Menlo Park, CA: Addison-Wesley.
48. Donner R, Hinrichs U, Scholz-Reiter B. 2008. Symbolic recurrence plots: a new quantitative framework for performance analysis of manufacturing networks. Eur. Phys. J. Spec. Top. 164, 85–104. (doi:10.1140/epjst/e2008-00836-2)
49. Faure P, Lesne A. 2010. Recurrence plots for symbolic sequences. Int. J. Bifurc. Chaos 20, 1731–1749. (doi:10.1142/S0218127410026794)
50. Cowan J. 2014. A personal account of the development of the field theory of large-scale brain activity from 1945 onward. In Neural fields: theory and applications (eds Coombes S, beim Graben P, Potthast R, Wright JJ), pp. 47–96. Berlin, Germany: Springer.
51. Rabinovich MI, Varona P, Selverston AI, Abarbanel HDI. 2006. Dynamical principles in neuroscience. Rev. Mod. Phys. 78, 1213–1265. (doi:10.1103/RevModPhys.78.1213)
52. Fukai T, Tanaka S. 1997. A simple neural network exhibiting selective activation of neuronal ensembles: from winner-take-all to winners-share-all. Neural Comput. 9, 77–97. (doi:10.1162/neco.1997.9.1.77)
53. Wilson HR, Cowan JD. 1972. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 12, 1–24. (doi:10.1016/S0006-3495(72)86068-5)
54. Rabinovich MI, Huerta R, Varona P, Afraimovich VS. 2008. Transient cognitive dynamics, metastability, and decision making. PLoS Comput. Biol. 4, e1000072. (doi:10.1371/journal.pcbi.1000072)
55. Marwan N. 2011. How to avoid potential pitfalls in recurrence plot based data analysis. Int. J. Bifurc. Chaos 21, 1003–1017. (doi:10.1142/S0218127411029008)
56. beim Graben P, Saddy JD, Schlesewsky M, Kurths J. 2000. Symbolic dynamics of event-related brain potentials. Phys. Rev. E 62, 5518–5541. (doi:10.1103/PhysRevE.62.5518)
57. Makeig S, Westerfield M, Jung TP, Enghoff S, Townsend J, Courchesne E, Sejnowski TJ. 2002. Dynamic brain sources of visual evoked responses. Science 295, 690–694. (doi:10.1126/science.1066168)
58. Zbilut JP, Hu Z, Giuliani A, Webber CL Jr. 2000. Singularities of the heart beat as demonstrated by recurrence quantification analysis. In Proc. 22nd Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society 2000, Chicago, IL, 23–28 July 2000, vol. 4, pp. 2406–2409. Piscataway, NJ: IEEE. (doi:10.1109/IEMBS.2000.901283)
59. Wright J, Schult RL. 1993. Recognition and classification of nonlinear chaotic signals. Chaos 3, 295–304. (doi:10.1063/1.165938)
60. Lehmann D. 1971. Multichannel topography of human alpha EEG fields. Electroencephalogr. Clin. Neurophysiol. 31, 439–449. (doi:10.1016/0013-4694(71)90165-9)
61. Callaway E, Halliday RA. 1973. Evoked potential variability: effects of age, amplitude and methods of measurement. Electroencephalogr. Clin. Neurophysiol. 34, 125–133. (doi:10.1016/0013-4694(73)90039-4)
62. beim Graben P, Frisch S. 2004. Is it positive or negative? On determining ERP components. IEEE Trans. Biomed. Eng. 51, 1374–1382. (doi:10.1109/TBME.2004.827558)
63. beim Graben P, Kurths J. 2003. Detecting subthreshold events in noisy data by symbolic dynamics. Phys. Rev. Lett. 90, 100602. (doi:10.1103/PhysRevLett.90.100602)
64. beim Graben P, Frisch S, Fink A, Saddy D, Kurths J. 2005. Topographic voltage and coherence mapping of brain potentials by means of the symbolic resonance analysis. Phys. Rev. E 72, 051916. (doi:10.1103/PhysRevE.72.051916)
65. Bandt C, Pompe B. 2002. Permutation entropy: a natural complexity measure for time series. Phys. Rev. Lett. 88, 174102. (doi:10.1103/PhysRevLett.88.174102)
66. Keller K, Wittfeld K. 2004. Distances of time series components by means of symbolic dynamics. Int. J. Bifurc. Chaos 14, 693–703. (doi:10.1142/S0218127404009387)
67. Schinkel S, Marwan N, Kurths J. 2007. Order patterns recurrence plots in the analysis of ERP data. Cogn. Neurodyn. 1, 317–325. (doi:10.1007/s11571-007-9023-z)
68. Froyland G. 2005. Statistically optimal almost-invariant sets. Physica D 200, 205–219. (doi:10.1016/j.physd.2004.11.008)
69. Gaveau B, Schulman LS. 2006. Multiple phases in stochastic dynamics: geometry and probabilities. Phys. Rev. E 73, 036124. (doi:10.1103/PhysRevE.73.036124)
70. Frisch S, Hahne A, Friederici AD. 2004. Word category and verb-argument structure information in the dynamics of parsing. Cognition 91, 191–219. (doi:10.1016/j.cognition.2003.09.009)
71. Frisch S, beim Graben P, Schlesewsky M. 2004. Parallelizing grammatical functions: P600 and P345 reflect different cost of reanalysis. Int. J. Bifurc. Chaos 14, 531–549. (doi:10.1142/S0218127404009533)
72. Frisch S, beim Graben P. 2005. Finding needles in haystacks: symbolic resonance analysis of event-related potentials unveils different processing demands. Cogn. Brain Res. 24, 476–491. (doi:10.1016/j.cogbrainres.2005.03.004)
73. Drenhaus H, beim Graben P, Saddy D, Frisch S. 2006. Diagnosis and repair of negative polarity constructions in the light of symbolic resonance analysis. Brain Lang. 96, 255–268. (doi:10.1016/j.bandl.2005.05.001)
74. beim Graben P, Gerth S, Vasishth S. 2008. Towards dynamical system models of language-related brain potentials. Cogn. Neurodyn. 2, 229–255. (doi:10.1007/s11571-008-9041-5)
75. Birkhoff G, von Neumann J. 1936. The logic of quantum mechanics. Ann. Math. 37, 823–843. (doi:10.2307/1968621)
76. Porta A, et al. 2014. Effect of age on complexity and causality of the cardiovascular control: comparison between model-based and model-free approaches. PLoS ONE 9, e89463. (doi:10.1371/journal.pone.0089463)
77. Stam CJ. 2005. Nonlinear dynamical analysis of EEG and MEG: review of an emerging field. Clin. Neurophysiol. 116, 2266–2301. (doi:10.1016/j.clinph.2005.06.011)