eLife. 2019 Oct 16;8:e44287. doi: 10.7554/eLife.44287

A statistical framework to assess cross-frequency coupling while accounting for confounding analysis effects

Jessica K Nadalin 1, Louis-Emmanuel Martinet 2, Ethan B Blackwood 3, Meng-Chen Lo 3, Alik S Widge 3, Sydney S Cash 2, Uri T Eden 1, Mark A Kramer 1
Editors: Frances K Skinner4, Laura L Colgin5
PMCID: PMC6821458  PMID: 31617848

Abstract

Cross frequency coupling (CFC) is emerging as a fundamental feature of brain activity, correlated with brain function and dysfunction. Many different types of CFC have been identified through application of numerous data analysis methods, each developed to characterize a specific CFC type. Choosing an inappropriate method weakens statistical power and introduces opportunities for confounding effects. To address this, we propose a statistical modeling framework to estimate high frequency amplitude as a function of both the low frequency amplitude and low frequency phase; the result is a measure of phase-amplitude coupling that accounts for changes in the low frequency amplitude. We show in simulations that the proposed method successfully detects CFC between the low frequency phase or amplitude and the high frequency amplitude, and outperforms an existing method in biologically-motivated examples. Applying the method to in vivo data, we illustrate examples of CFC during a seizure and in response to electrical stimuli.

Research organism: Human, Rat

Introduction

Brain rhythms - as recorded in the local field potential (LFP) or scalp electroencephalogram (EEG) - are believed to play a critical role in coordinating brain networks. By modulating neural excitability, these rhythmic fluctuations provide an effective means to control the timing of neuronal firing (Engel et al., 2001; Buzsáki and Draguhn, 2004). Oscillatory rhythms have been categorized into different frequency bands (e.g., theta [4–10 Hz], gamma [30–80 Hz]) and associated with many functions: the theta band with memory, plasticity, and navigation (Engel et al., 2001); the gamma band with local coupling and competition (Kopell et al., 2000; Börgers et al., 2008). In addition, gamma and high-gamma (80–200 Hz) activity have been identified as surrogate markers of neuronal firing (Rasch et al., 2008; Mukamel et al., 2005; Fries et al., 2001; Pesaran et al., 2002; Whittingstall and Logothetis, 2009; Ray and Maunsell, 2011), observable in the EEG and LFP.

In general, lower frequency rhythms engage larger brain areas and modulate spatially localized fast activity (Bragin et al., 1995; Chrobak and Buzsáki, 1998; von Stein and Sarnthein, 2000; Lakatos et al., 2005; Lakatos et al., 2008). For example, the phase of low frequency rhythms has been shown to modulate and coordinate neural spiking (Vinck et al., 2010; Hyafil et al., 2015b; Fries et al., 2007) via local circuit mechanisms that provide discrete windows of increased excitability. This interaction, in which fast activity is coupled to slower rhythms, is a common type of cross-frequency coupling (CFC). This particular type of CFC has been shown to carry behaviorally relevant information (e.g., related to position [Jensen and Lisman, 2000; Agarwal et al., 2014], memory [Siegel et al., 2009], decision making and coordination [Dean et al., 2012; Pesaran et al., 2008; Wong et al., 2016; Hawellek et al., 2016]). More generally, CFC has been observed in many brain areas (Bragin et al., 1995; Chrobak and Buzsáki, 1998; Csicsvari et al., 2003; Tort et al., 2008; Mormann et al., 2005; Canolty et al., 2006), and linked to specific circuit and dynamical mechanisms (Hyafil et al., 2015b). The degree of CFC in those areas has been linked to working memory, neuronal computation, communication, learning and emotion (Tort et al., 2009; Jensen et al., 2016; Canolty and Knight, 2010; Dejean et al., 2016; Karalis et al., 2016; Likhtik et al., 2014; Jones and Wilson, 2005; Lisman, 2005; Sirota et al., 2008), and clinical disorders (Gordon, 2016; Widge et al., 2017; Voytek and Knight, 2015; Başar et al., 2016; Mathalon and Sohal, 2015), including epilepsy (Weiss et al., 2015). Although the cellular mechanisms giving rise to some neural rhythms are relatively well understood (e.g. gamma [Whittington et al., 2000; Whittington et al., 2011; Mann and Mody, 2010]), the neuronal substrate of CFC itself remains obscure.

Analysis of CFC focuses on relationships between the amplitude, phase, and frequency of two rhythms from different frequency bands. The notion of CFC, therefore, subsumes more specific types of coupling, including: phase-phase coupling (PPC), phase-amplitude coupling (PAC), and amplitude-amplitude coupling (AAC) (Hyafil et al., 2015b). PAC has been observed in rodent striatum and hippocampus (Tort et al., 2008) and human cortex (Canolty et al., 2006), AAC has been observed between the alpha and gamma rhythms in dorsal and ventral cortices (Popov et al., 2018), and between theta and gamma rhythms during spatial navigation (Shirvalkar et al., 2010), and both PAC and AAC have been observed between alpha and gamma rhythms (Osipova et al., 2008). Many quantitative measures exist to characterize different types of CFC, including: mean vector length or modulation index (Canolty et al., 2006; Tort et al., 2010), phase-locking value (Mormann et al., 2005; Lachaux et al., 1999; Vanhatalo et al., 2004), envelope-to-signal correlation (Bruns and Eckhorn, 2004), analysis of amplitude spectra (Cohen, 2008), coherence between amplitude and signal (Colgin et al., 2009), coherence between the time course of power and signal (Osipova et al., 2008), and eigendecomposition of multichannel covariance matrices (Cohen, 2017). Overall, these different measures have been developed from different principles and made suitable for different purposes, as shown in comparative studies (Tort et al., 2010; Cohen, 2008; Penny et al., 2008; Onslow et al., 2011).

Despite the richness of this methodological toolbox, it has limitations. For example, because each method focuses on one type of CFC, the choice of method restricts the type of CFC detectable in data. Applying a method to detect PAC in data with both PAC and AAC may: (i) falsely report no PAC in the data, or (ii) miss the presence of significant AAC in the same data. Changes in the low frequency power can also affect measures of PAC; increases in low frequency power can increase the signal to noise ratio of phase and amplitude variables, increasing the measure of PAC, even when the phase-amplitude coupling remains constant (Aru et al., 2015; van Wijk et al., 2015; Jensen et al., 2016). Furthermore, many experimental or clinical factors (e.g., stimulation parameters, age or sex of subject) can impact CFC in ways that are difficult to characterize with existing methods (Cole and Voytek, 2017). These observations suggest that an accurate measure of PAC would control for confounding variables, including the power of low frequency oscillations.

To that end, we propose here a generalized linear model (GLM) framework to assess CFC between the high-frequency amplitude and, simultaneously, the low frequency phase and amplitude. This formal statistical inference framework builds upon previous work (Kramer and Eden, 2013; Penny et al., 2008; Voytek et al., 2013; van Wijk et al., 2015) to address the limitations of existing CFC measures. In what follows, we show that this framework successfully detects CFC in simulated signals. We compare this method to the modulation index, and show that in signals with CFC dependent on the low-frequency amplitude, the proposed method more accurately detects PAC than the modulation index. We apply this framework to in vivo recordings from human and rodent cortex to show examples of PAC and AAC detected in real data, and how to incorporate new covariates directly into the model framework.

Materials and methods

Estimation of the phase and amplitude envelope

To study CFC we estimate three quantities: the phase of the low frequency signal, ϕlow; the amplitude envelope of the high frequency signal, Ahigh; and the amplitude envelope of the low frequency signal, Alow. To do so, we first bandpass filter the data into low frequency (4–7 Hz) and high frequency (100–140 Hz) signals, Vlow and Vhigh, respectively, using a least-squares linear-phase FIR filter of order 375 for the high frequency signal, and order 50 for the low frequency signal. Here we choose specific high and low frequency ranges of interest, motivated by previous in vivo observations (Canolty et al., 2006; Tort et al., 2008; Scheffer-Teixeira et al., 2013). However, we note that this method is flexible and not dependent on this choice. We select a wide high frequency band consistent with recommendations from the literature (Aru et al., 2015) and the mechanistic explanation that extracellular spikes produce this broadband high frequency activity (Scheffer-Teixeira et al., 2013). We use the Hilbert transform to compute the analytic signals of Vlow and Vhigh, and from these compute the phase and amplitude of the low frequency signal (Alow and ϕlow) and the amplitude of the high frequency signal (Ahigh).
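To make this concrete, the sketch below (MATLAB, Signal Processing Toolbox) extracts ϕlow, Alow, and Ahigh from a single voltage trace. The filter orders and pass bands follow the text; the sampling rate, the transition-band widths, and the use of zero-phase filtering (filtfilt) are assumptions made for illustration, not a copy of the authors' GLM-CFC code.

```matlab
% Sketch: estimate phi_low, A_low, and A_high from a voltage trace V.
Fs  = 1000;                 % assumed sampling rate (Hz)
V   = randn(20*Fs, 1);      % placeholder signal; replace with recorded data
nyq = Fs/2;

% Least-squares linear-phase FIR bandpass filters (orders from the text;
% 10% transition bands are an assumption).
bLow  = firls(50,  [0 3.6 4 7 7.7 nyq]/nyq,    [0 0 1 1 0 0]);   % 4-7 Hz
bHigh = firls(375, [0 90 100 140 154 nyq]/nyq, [0 0 1 1 0 0]);   % 100-140 Hz

Vlow  = filtfilt(bLow,  1, V);          % low frequency signal
Vhigh = filtfilt(bHigh, 1, V);          % high frequency signal

philow = angle(hilbert(Vlow));          % low frequency phase
Alow   = abs(hilbert(Vlow));            % low frequency amplitude envelope
Ahigh  = abs(hilbert(Vhigh));           % high frequency amplitude envelope
```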

Modeling framework to assess CFC

Generalized linear models (GLMs) provide a principled framework to assess CFC (Penny et al., 2008; Kramer and Eden, 2013; van Wijk et al., 2015). Here, we present three models to analyze different types of CFC. The fundamental logic behind this approach is to model the distribution of Ahigh as a function of different predictors. In existing measures of PAC, the distribution of Ahigh versus ϕlow is assessed using a variety of different metrics (e.g., Tort et al., 2010). Here, we estimate statistical models to fit Ahigh as a function of ϕlow, Alow, and their combinations. If these models fit the data sufficiently well, then we estimate distances between the modeled surfaces to measure the impact of each predictor.

The ϕlow model

The ϕlow model relates Ahigh, the response variable, to a linear combination of ϕlow, the predictor variable, expressed in a spline basis:

$$A_{\text{high}} \mid \phi_{\text{low}} \sim \text{Gamma}[\mu,\nu], \tag{1}$$
$$\log \mu = \sum_{k=1}^{n} \beta_k f_k(\phi_{\text{low}}),$$

where the conditional distribution of Ahigh given ϕlow is modeled as a Gamma random variable with mean parameter μ and shape parameter ν, and βk are undetermined coefficients, which we refer to collectively as βϕlow. We choose this distribution as it guarantees real, positive amplitude values; we note that this distribution provides an acceptable fit to the example human data analyzed here (Figure 1). The functions {f1, …, fn} correspond to spline basis functions, with n control points equally spaced between 0 and 2π, used to approximate ϕlow. We note that the spline functions sum to 1, and therefore we omit a constant offset term. We use a tension parameter of 0.5, which controls the smoothness of the splines. We note that, because the link function of the conditional mean of the response variable (Ahigh) varies linearly with the model coefficients βk, the model is a GLM, though the spline basis functions situate the model in the larger class of Generalized Additive Models (GAMs). Here we fix n=10, which is a reasonable choice for smooth PAC with one or two broad peaks (Kramer and Eden, 2013). To support this choice, we apply an AIC-based selection procedure to 1000 simulated instances of signals of duration 20 s with phase-amplitude coupling and amplitude-amplitude coupling (see Materials and methods: Synthetic Time Series with PAC and Synthetic Time Series with AAC, below, for simulation details). For each simulation, we fit the model in Equation 1 to these data for 27 different values of n from n=4 to n=30. For each simulated signal, we record the value of n that minimizes the AIC, defined as

$$\text{AIC} = \Delta + 2n,$$

where Δ is the deviance from the model in Equation 1. The values of n that minimize the AIC tend to lie between n=7 and n=12 (Figure 2). These simulations support the choice of n=10 as a sufficient number of splines.
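The sketch below illustrates this selection procedure (MATLAB, Statistics and Machine Learning Toolbox), continuing from the variables philow and Ahigh extracted above. The authors use a cardinal spline basis with tension 0.5; here a normalized set of periodic bump functions stands in for that basis, so this is an illustration of the procedure rather than a reproduction of the GLM-CFC implementation.

```matlab
% Sketch: choose the number of basis functions n by AIC for the phi_low
% model (Eq. 1). The periodic bump basis below is a stand-in for the
% authors' cardinal spline basis (tension 0.5).
bestAIC = inf;  bestN = NaN;
for n = 4:30
    c = 2*pi*(0:n-1)/n;                               % control-point centers
    B = exp(4*cos(bsxfun(@minus, philow(:), c)));     % periodic bumps over phase
    B = bsxfun(@rdivide, B, sum(B, 2));               % rows sum to 1, so no intercept needed
    mdl = fitglm(B, Ahigh(:), 'linear', ...
                 'Distribution', 'gamma', 'Link', 'log', 'Intercept', false);
    if mdl.ModelCriterion.AIC < bestAIC
        bestAIC = mdl.ModelCriterion.AIC;  bestN = n;
    end
end
fprintf('AIC-selected number of basis functions: n = %d\n', bestN);
```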

Figure 1. The gamma distribution provides a good fit to example human data.


Three examples of 20 s duration recorded from a single electrode during a human seizure. In each case, the gamma fit (red curve) provides an acceptable fit to the empirical distributions of the high frequency amplitude.

Figure 2. Distribution of the number of control points (n) that minimize the AIC.


Values of n between 7 and 12 minimize the AIC in a simulation with phase-amplitude coupling and amplitude-amplitude coupling.

For a more detailed discussion and simulation examples of the PAC model, see Kramer and Eden (2013). We note that the choices of distribution and link function differ from those in Penny et al. (2008) and van Wijk et al. (2015), where the normal distribution and identity link are used instead.

The Alow model

The Alow model relates the high frequency amplitude to the low frequency amplitude:

$$A_{\text{high}} \mid A_{\text{low}} \sim \text{Gamma}[\mu,\nu], \tag{2}$$
$$\log \mu = \beta_1 + \beta_2 A_{\text{low}},$$

where the conditional distribution of Ahigh given Alow is modeled as a Gamma random variable with mean parameter μ and shape parameter ν. The predictor consists of a single variable and a constant, and the length of the coefficient vector βAlow={β1,β2} is 2.

The Alow,ϕlow model

The Alow,ϕlow model extends the ϕlow model in Equation 1 by including three additional predictors in the GLM: Alow, the low frequency amplitude; and interaction terms between the low frequency amplitude and the low frequency phase: Alow sin(ϕlow), and Alow cos(ϕlow). These new terms allow assessment of phase-amplitude coupling while accounting for a linear amplitude-amplitude dependence and for more complicated phase-dependent relationships on the low frequency amplitude, without introducing many more parameters. Compared to the original ϕlow model in Equation 1, including these new terms increases the number of predictors, and hence the length of the coefficient vector βAlow,ϕlow, to n+3. These changes result in the following model:

$$A_{\text{high}} \mid \phi_{\text{low}}, A_{\text{low}} \sim \text{Gamma}[\mu,\nu], \tag{3}$$
$$\log \mu = \sum_{k=1}^{n} \beta_k f_k(\phi_{\text{low}}) + \beta_{n+1} A_{\text{low}} + \beta_{n+2} A_{\text{low}} \sin(\phi_{\text{low}}) + \beta_{n+3} A_{\text{low}} \cos(\phi_{\text{low}}).$$

Here, the conditional distribution of Ahigh given ϕlow and Alow is modeled as a Gamma random variable with mean parameter μ and shape parameter ν, and βk are undetermined coefficients. We note that we only consider two interaction terms, rather than the spline basis function of phase, to limit the number of parameters in the model.
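Expressed as design matrices, the three models differ only in their predictor columns. The sketch below fits Equations 1-3 with a Gamma GLM and log link using MATLAB's fitglm, again using the stand-in phase basis described above in place of the authors' splines.

```matlab
% Sketch: fit the phi_low model (Eq. 1), the A_low model (Eq. 2), and the
% A_low,phi_low model (Eq. 3). philow, Alow, Ahigh as computed earlier.
n = 10;
c = 2*pi*(0:n-1)/n;
B = exp(4*cos(bsxfun(@minus, philow(:), c)));     % stand-in phase basis
B = bsxfun(@rdivide, B, sum(B, 2));

% Eq. 1: phase basis only (no intercept; the basis functions sum to 1).
mdlP  = fitglm(B, Ahigh(:), 'linear', 'Distribution', 'gamma', 'Link', 'log', 'Intercept', false);

% Eq. 2: intercept plus A_low (fitglm adds the intercept by default).
mdlA  = fitglm(Alow(:), Ahigh(:), 'linear', 'Distribution', 'gamma', 'Link', 'log');

% Eq. 3: phase basis, A_low, and the interactions A_low*sin(phi), A_low*cos(phi).
X3    = [B, Alow(:), Alow(:).*sin(philow(:)), Alow(:).*cos(philow(:))];
mdlAP = fitglm(X3, Ahigh(:), 'linear', 'Distribution', 'gamma', 'Link', 'log', 'Intercept', false);
```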

The statistics RPAC and RAAC

We compute two measures of CFC, RPAC and RAAC, which use the three models defined in the previous section. We evaluate each model in the three-dimensional space (ϕlow, Alow, Ahigh) and calculate the statistics RPAC and RAAC. We use the MATLAB (RRID:SCR_001622) function fitglm to estimate the models; we note that this procedure estimates the dispersion directly for the gamma distribution. In what follows, we first discuss the three model surfaces estimated from the data, and then how we use these surfaces to compute the statistics RPAC and RAAC.

To create the surface SAlow,ϕlow, which fits the Alow,ϕlow model in the three-dimensional (Alow, ϕlow, Ahigh) space, we first compute estimates of the parameters βAlow,ϕlow in Equation 3. We then estimate Ahigh by fixing Alow at one of 640 evenly spaced values between the 5th and 95th quantiles of Alow observed; we choose these quantiles to avoid extremely small or large values of Alow. Finally, at the fixed Alow, we compute the high frequency amplitude values from the Alow,ϕlow model over 100 evenly spaced values of ϕlow between -π and π. This results in a two-dimensional curve CAlow,ϕlow in the two-dimensional (ϕlow, Ahigh) space with fixed Alow. We repeat this procedure for all 640 values of Alow to create a surface SAlow,ϕlow in the three-dimensional space (Alow, ϕlow, Ahigh) (Figure 3C). To create the surface SAlow, which fits the Alow model in the three-dimensional (Alow, ϕlow, Ahigh) space, we estimate the coefficient vector βAlow for the model in Equation 2. We then estimate the high frequency amplitude over 640 evenly spaced values between the 5th and 95th quantiles of Alow observed, again to avoid extremely small or large values of Alow. This creates a mean response function which appears as a curve CAlow in the two-dimensional (Alow, Ahigh) space. We extend this two-dimensional curve to a three-dimensional surface SAlow by extending CAlow along the ϕlow dimension (Figure 3A).

Figure 3. Example model surfaces used to determine RPAC and RAAC.


(A,B,C) Three example surfaces: (A) SAlow, (B) Sϕlow, and (C) SAlow,ϕlow in the three-dimensional space (Alow, ϕlow, Ahigh). (D) The maximal distance between the surfaces SAlow (red) and SAlow,ϕlow (yellow) is used to compute RPAC. (E) The maximal distance between the surfaces Sϕlow (blue) and SAlow,ϕlow (yellow) is used to compute RAAC.

To create the surface Sϕlow, which fits the ϕlow model in the three-dimensional (Alow, ϕlow, Ahigh) space, we first estimate the coefficients βϕlow for the model in Equation 1. From this, we then compute estimates for the high frequency amplitude using the ϕlow model with 100 evenly spaced values of ϕlow between -π and π. This results in the mean response function of the ϕlow model. We extend this curve Cϕlow in the Alow dimension to create a surface Sϕlow in the three-dimensional (Alow, ϕlow, Ahigh) space. The surface Sϕlow has the same structure as the curve Cϕlow in the (ϕlow, Ahigh) space, and remains constant along the dimension Alow (Figure 3B).

The statistic RPAC measures the effect of low frequency phase on high frequency amplitude, while accounting for fluctuations in the low frequency amplitude. To compute this statistic, we note that the model in Equation 3 measures the combined effect of Alow and ϕlow on Ahigh, while the model in Equation 2 measures only the effect of Alow on Ahigh. Hence, to isolate the effect of ϕlow on Ahigh, while accounting for Alow, we compare the difference in fits between the models in Equations 2 and 3. We fit the mean response functions of the models in Equations 2 and 3, and calculate RPAC as the maximum absolute fractional difference between the resulting surfaces SAlow,ϕlow and SAlow (Figure 3D):

$$R_{\text{PAC}} = \max\Big[\text{abs}\big[1 - S_{A_{\text{low}}} / S_{A_{\text{low}},\phi_{\text{low}}}\big]\Big], \tag{4}$$

That is, we measure the largest distance between the Alow and the Alow,ϕlow models. We expect fluctuations in SAlow,ϕlow not present in SAlow to be the result of ϕlow, that is, PAC. In the absence of PAC, we expect the surfaces SAlow,ϕlow and SAlow to be very close, resulting in a small value of RPAC. However, in the presence of PAC, we expect SAlow,ϕlow to deviate from SAlow, resulting in a large value of RPAC. We note that this measure, unlike R² metrics for linear regression, is not meant to measure the goodness-of-fit of these models to the data, but rather the differences in fits between the two models. We also note that RPAC is an unbounded measure, as it equals the maximum absolute fractional difference between distributions, which may exceed 1.

To compute the statistic RAAC, which measures the effect of low frequency amplitude on high frequency amplitude while accounting for fluctuations in the low frequency phase, we compare the fit of the model in Equation 3 to the fit of the model in Equation 1. We note that the model in Equation 3 predicts Ahigh as a function of Alow and ϕlow, while the model in Equation 1 predicts Ahigh as a function of ϕlow only. Therefore we expect that a difference in fits between the models in Equations 1 and 3 results from the effects of Alow on Ahigh. We fit the mean response functions of the models in Equations 1 and 3 in the three-dimensional (ϕlow, Alow, Ahigh) space, and calculate RAAC as the maximum absolute fractional difference between the resulting surfaces SAlow,ϕlow and Sϕlow (Figure 3E):

$$R_{\text{AAC}} = \max\Big[\text{abs}\big[1 - S_{\phi_{\text{low}}} / S_{A_{\text{low}},\phi_{\text{low}}}\big]\Big]. \tag{5}$$

That is, we measure the distance between the ϕlow and the Alow,ϕlow models. We expect fluctuations in SAlow,ϕlow not present in Sϕlow to be the result of Alow, that is, AAC. In the absence of AAC, we expect the surfaces SAlow,ϕlow and Sϕlow to be very close, resulting in a small value for RAAC. Alternatively, in the presence of AAC, we expect SAlow,ϕlow to deviate from Sϕlow, resulting in a large value of RAAC.
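A sketch of the surface construction and of the two statistics, continuing from the fitted models above, is given below; the grid sizes (640 amplitude values, 100 phase values) follow the text.

```matlab
% Sketch: evaluate the fitted models on a common (A_low, phi_low) grid and
% compute R_PAC (Eq. 4) and R_AAC (Eq. 5).
aGrid   = linspace(quantile(Alow, 0.05), quantile(Alow, 0.95), 640);  % avoid extreme A_low
phiGrid = linspace(-pi, pi, 100);
[Agrid, PhiGrid] = meshgrid(aGrid, phiGrid);

Bg = exp(4*cos(bsxfun(@minus, PhiGrid(:), c)));     % stand-in phase basis on the grid
Bg = bsxfun(@rdivide, Bg, sum(Bg, 2));
Xg = [Bg, Agrid(:), Agrid(:).*sin(PhiGrid(:)), Agrid(:).*cos(PhiGrid(:))];

S_A  = predict(mdlA,  Agrid(:));   % S_{A_low}: constant along the phi_low dimension
S_P  = predict(mdlP,  Bg);         % S_{phi_low}: constant along the A_low dimension
S_AP = predict(mdlAP, Xg);         % S_{A_low,phi_low}

R_PAC = max(abs(1 - S_A ./ S_AP)); % Eq. 4
R_AAC = max(abs(1 - S_P ./ S_AP)); % Eq. 5
```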

Estimating 95% confidence intervals for RPAC and RAAC

We compute 95% confidence intervals for RPAC and RAAC via a parametric bootstrap method (Kramer and Eden, 2013). Given a vector of estimated coefficients βx, for x ∈ {Alow; ϕlow; Alow,ϕlow}, we use its estimated mean and covariance to generate 10,000 normally distributed coefficient sample vectors βxj, j ∈ {1, …, 10,000}. For each βxj, we then compute the high frequency amplitude values from the Alow, ϕlow, or Alow,ϕlow model, Sxj. Finally, we compute the statistics RPACj and RAACj for each j as,

$$R_{\text{PAC}}^{\,j} = \max\Big[\text{abs}\big[1 - S_{A_{\text{low}}}^{\,j} / S_{A_{\text{low}},\phi_{\text{low}}}^{\,j}\big]\Big], \tag{6}$$
$$R_{\text{AAC}}^{\,j} = \max\Big[\text{abs}\big[1 - S_{\phi_{\text{low}}}^{\,j} / S_{A_{\text{low}},\phi_{\text{low}}}^{\,j}\big]\Big]. \tag{7}$$

The 95% confidence intervals for the statistics are the values of RPACj and RAACj at the 0.025 and 0.975 quantiles (Kramer and Eden, 2013).
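For example, the interval for RPAC can be approximated as in the sketch below, which resamples the coefficients of the Alow and Alow,ϕlow models from a Gaussian with their estimated means and covariances (mvnrnd, Statistics and Machine Learning Toolbox) and reuses the grid from the previous sketch.

```matlab
% Sketch: parametric bootstrap 95% confidence interval for R_PAC.
nBoot  = 10000;
betaA  = mvnrnd(mdlA.Coefficients.Estimate',  mdlA.CoefficientCovariance,  nBoot);
betaAP = mvnrnd(mdlAP.Coefficients.Estimate', mdlAP.CoefficientCovariance, nBoot);

XA = [ones(numel(Agrid), 1), Agrid(:)];   % design matrix of the A_low model on the grid

Rboot = zeros(nBoot, 1);
for j = 1:nBoot
    SjA  = exp(XA * betaA(j, :)');        % inverse of the log link gives the mean surface
    SjAP = exp(Xg * betaAP(j, :)');
    Rboot(j) = max(abs(1 - SjA ./ SjAP));
end
ciPAC = quantile(Rboot, [0.025 0.975]);   % 95% confidence interval for R_PAC
```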

Assessing significance of AAC and PAC with bootstrap p-values

To assess whether evidence exists for significant PAC or AAC, we implement a bootstrap procedure to compute p-values as follows. Given two signals Vlow and Vhigh, and the resulting estimated statistics RPAC and RAAC, we apply the Amplitude Adjusted Fourier Transform (AAFT) algorithm (Theiler et al., 1992) to Vhigh to generate a surrogate signal Vhighi. In the AAFT algorithm, we first reorder the values of Vhigh by creating a random Gaussian signal W and ordering the values of Vhigh to match W. For example, if the highest value of W occurs at index j, then the highest value of Vhigh will be reordered to occur at index j. Next, we apply the Fourier Transform (FT) to the reordered Vhigh and randomize the phase of the frequency domain signal. This signal is then inverse Fourier transformed and rescaled to have the same amplitude distribution as the original signal Vhigh. In this way, the algorithm produces a permutation Vhighi of Vhigh such that the power spectrum and amplitude distribution of the original signal are preserved.

We create 1000 such surrogate signals Vhighi, and calculate RPACi and RAACi between Vlow and each Vhighi. We define the p-values pPAC and pAAC as the proportion of values in {RPACi : i = 1, …, 1000} and {RAACi : i = 1, …, 1000} greater than the estimated statistics RPAC and RAAC, respectively. If the proportion is zero, we set p=0.0005.
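A compact sketch of the AAFT surrogate construction and the resulting p-value is shown below. Here compute_R_PAC is a hypothetical helper that stands in for refitting the models in Equations 1-3 and evaluating Equation 4 on the surrogate pair, and R_PAC is the statistic computed for the original signals in the earlier sketch.

```matlab
% Sketch: AAFT surrogates of Vhigh and the bootstrap p-value for R_PAC.
nSurr = 1000;
T     = numel(Vhigh);
Rsurr = zeros(nSurr, 1);
for i = 1:nSurr
    % 1. Reorder Vhigh to follow the rank order of a Gaussian random signal W.
    [~, rankW] = sort(randn(T, 1));
    reordered        = zeros(T, 1);
    reordered(rankW) = sort(Vhigh(:));
    % 2. Randomize the Fourier phases of the reordered signal (conjugate symmetric).
    F      = fft(reordered);
    phases = angle(F);
    half   = 2:ceil(T/2);
    phases(half)     = 2*pi*rand(numel(half), 1);
    phases(T-half+2) = -phases(half);
    shuffled = real(ifft(abs(F) .* exp(1i*phases)));
    % 3. Rescale to the amplitude distribution of the original Vhigh.
    [~, rankS]       = sort(shuffled);
    surrogate        = zeros(T, 1);
    surrogate(rankS) = sort(Vhigh(:));
    % 4. Recompute the statistic on the surrogate pair (hypothetical helper).
    Rsurr(i) = compute_R_PAC(Vlow, surrogate);
end
p_PAC = max(mean(Rsurr > R_PAC), 0.0005);   % proportion exceeding R_PAC, floored at 0.0005
```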

We calculate p-values for the modulation index in the same way. The modulation index calculates the distribution of high frequency amplitudes versus low frequency phases and measures the distance from this distribution to a uniform distribution of amplitudes. Given the signals Vlow and Vhigh, and the resulting modulation index MI between them, we calculate the modulation index between Vlow and 1000 surrogate permutations of Vhigh using the AAFT algorithm. We set pMI to be the proportion of these resulting values greater than the MI value estimated from the original signals.

Synthetic time series with PAC

We construct synthetic time series to examine the performance of the proposed method as follows. First, we simulate 20 s of pink noise data such that the power spectrum scales as 1/f. We then filter these data into low (4–7 Hz) and high (100–140 Hz) frequency bands, as described in Materials and methods: Estimation of the phase and amplitude envelope, creating signals Vlow and Vhigh. Next, we couple the amplitude of the high frequency signal to the phase of the low frequency signal. To do so, we first locate the peaks of Vlow and determine the times tk, k = 1, 2, 3, …, K, of the K relative extrema. We note that these times correspond approximately to ϕlow=0. We then create a smooth modulation signal M which consists of a 42 ms Hanning window of height 1+IPAC centered at each tk, and a value of 1 at all other times (Figure 4A). The intensity parameter IPAC in the modulation signal corresponds to the strength of PAC. IPAC=0.0 corresponds to no PAC, while IPAC=1.0 results in a 100% increase in the high frequency amplitude at each tk, creating strong PAC. We create a new signal V*high with the same phase as Vhigh, but with amplitude dependent on the phase of Vlow by setting,

$$V^{*}_{\text{high}} = M \, V_{\text{high}}.$$

Figure 4. Illustration of synthetic time series with PAC and AAC.


(A) Example simulation of Vlow (blue) and modulation signal M (red). When the phase of Vlow is near 0 radians, M increases. (B) Example simulation of PAC. When the phase of Vlow is approximately 0 radians, the high frequency amplitude (yellow) increases. (C) Example simulations of AAC. When the amplitude of Vlow is large, so is the amplitude of the high frequency signal (purple).

We create the final voltage trace V as

$$V = V_{\text{low}} + V^{*}_{\text{high}} + c \, V_{\text{pink}},$$

where Vpink is a new instance of pink noise multiplied by a small constant c=0.01. In the signal V, brief increases of the high frequency activity occur at a specific phase (0 radians) of the low frequency signal (Figure 4B).
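The sketch below generates such a signal; the pink-noise construction and the handling of peaks near the signal edges are simplifications, and the filters bLow and bHigh come from the earlier filtering sketch (so Fs = 1000 Hz is assumed).

```matlab
% Sketch: synthetic 20 s signal with PAC of intensity I_PAC.
Fs = 1000;  T = 20*Fs;  I_PAC = 1.0;  c = 0.01;

% Pink noise: scale white-noise Fourier amplitudes by 1/sqrt(f) so power ~ 1/f.
fIdx  = [1, 1:T/2, T/2-1:-1:1]';                   % |frequency| index (DC slot set to 1)
pink  = @() real(ifft(fft(randn(T, 1)) ./ sqrt(fIdx)));
Vpink = pink();

Vlow  = filtfilt(bLow,  1, Vpink);                 % 4-7 Hz component
Vhigh = filtfilt(bHigh, 1, Vpink);                 % 100-140 Hz component

% Modulation signal M: 42 ms Hanning bump of height 1+I_PAC at each peak of Vlow.
[~, locs] = findpeaks(Vlow);
winLen = round(0.042*Fs);
w      = hann(winLen);
M      = ones(T, 1);
for k = 1:numel(locs)
    if locs(k) > winLen && locs(k) < T - winLen    % skip peaks too close to the edges
        idx    = locs(k) - floor(winLen/2) + (0:winLen-1)';
        M(idx) = max(M(idx), 1 + I_PAC*w);
    end
end

VhighPAC = M .* Vhigh;                             % A_high now depends on phi_low
V        = Vlow + VhighPAC + c*pink();             % final voltage trace with PAC
```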

Synthetic time series with AAC

To generate synthetic time series with dependence on the low frequency amplitude, we follow the procedure in the preceding section to generate Vlow, Vhigh, and Alow. We then induce amplitude-amplitude coupling between the low and high frequency components by creating a new signal Vhigh* such that

$$V^{*}_{\text{high}} = V_{\text{high}} \left(1 + I_{\text{AAC}} \, \frac{A_{\text{low}}}{\max(A_{\text{low}})}\right),$$

where IAAC is the intensity parameter corresponding to the strength of amplitude-amplitude coupling. We define the final voltage trace V as

$$V = V_{\text{low}} + V^{*}_{\text{high}} + c \, V_{\text{pink}},$$

where Vpink is a new instance of pink noise multiplied by a small constant c=0.01 (Figure 4C).
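Starting from the components generated above, the AAC construction is a direct rescaling of the high frequency signal by the normalized low frequency envelope, as sketched here.

```matlab
% Sketch: synthetic signal with AAC of intensity I_AAC, reusing Vlow, Vhigh,
% the pink-noise helper, and the constant c from the previous sketch.
I_AAC    = 1.0;
Alow     = abs(hilbert(Vlow));                          % low frequency amplitude envelope
VhighAAC = Vhigh .* (1 + I_AAC * Alow / max(Alow));     % amplitude-amplitude coupling
V_AAC    = Vlow + VhighAAC + c*pink();                  % final voltage trace with AAC
```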

Human subject data

A patient (male, age 32 years) with medically intractable focal epilepsy underwent clinically indicated intracranial cortical recordings for epilepsy monitoring. In addition to clinical electrode implantation, the patient was also implanted with a 10 × 10 (4 mm × 4 mm) NeuroPort microelectrode array (MEA; Blackrock Microsystems, Utah) in a neocortical area of the temporal gyrus expected to be resected with high probability. The MEA consists of 96 platinum-tipped silicon probes, with a length of 1.5 mm, corresponding to neocortical layer III as confirmed by histology after resection. Signals from the MEA were acquired continuously at 30 kHz per channel. Seizure onset times were determined by an experienced encephalographer (S.S.C.) through inspection of the macroelectrode recordings, referral to the clinical report, and clinical manifestations recorded on video. For a detailed clinical summary, see patient P2 of Wagner et al. (2015). For these data, we analyze the 100–140 Hz and 4–7 Hz frequency bands to illustrate the proposed method; a more rigorous study of CFC in these data may require a more principled choice of high frequency band. All patients were enrolled after informed consent and consent to publish was obtained, and approval was granted by local Institutional Review Boards at Massachusetts General Hospital and Brigham and Women’s Hospital (Partners Human Research Committee), and at Boston University according to National Institutes of Health guidelines.

Code availability

The code to perform this analysis is available for reuse and further development at https://github.com/Eden-Kramer-Lab/GLM-CFC (Nadalin and Kramer, 2019; copy archived at https://github.com/elifesciences-publications/GLM-CFC).

Results

We first examine the performance of the CFC measure through simulation examples. In doing so, we show that the statistics 𝐑PAC and 𝐑AAC accurately detect different types of cross-frequency coupling, increase with the intensity of coupling, and detect weak PAC coupled to the low frequency amplitude. We show that the proposed method is less sensitive to changes in low frequency power, and outperforms an existing PAC measure that lacks dependence on the low frequency amplitude. We conclude with example applications to human and rodent in vivo recordings, and show how to extend the modeling framework to include a new covariate.

The absence of CFC produces no significant detections of coupling

We first consider simulated signals without CFC. To create these signals, we follow the procedure in Materials and methods: Synthetic Time Series with PAC with the modulation intensity set to zero (IPAC=0). In the resulting signals, Ahigh is approximately constant and does not depend on ϕlow or Alow (Figure 5A). We estimate the ϕlow model, the Alow model, and the Alow,ϕlow model from these data; we show example fits of the model surfaces in Figure 5B. We observe that the models exhibit small modulations in the estimated high frequency amplitude envelope as a function of the low frequency phase and amplitude.

Figure 5. The statistical modeling framework successfully detects different types of cross-frequency coupling.


(A–C) Simulations with no CFC. (A) When no CFC occurs, the low frequency signal (blue) and high frequency signal (orange) evolve independently. (B) The surfaces SAlow, Sϕlow, and SAlow,ϕlow suggest no dependence of Ahigh on ϕlow or Alow. (C) Significant (p<0.05) values of 𝐑PAC and 𝐑AAC from 1000 simulations. Very few significant values for the statistics R are detected. (D–G) Simulations with PAC only. (D) When the phase of the low frequency signal is near 0 radians (red tick marks), the amplitude of the high frequency signal increases. (E) The surfaces SAlow, Sϕlow, and SAlow,ϕlow suggest dependence of Ahigh on ϕlow. (F) In 1000 simulations, significant values of RPAC frequently appear, while significant values of 𝐑AAC rarely appear. (G) As the intensity of PAC increases, so do the significant values of 𝐑PAC (black), while any significant values of 𝐑AAC remain small. (H–K) Simulations with AAC only. (H) The amplitudes of the high frequency signal and low frequency signal are positively correlated. (I) The surfaces SAlow, Sϕlow, and SAlow,ϕlow suggest dependence of Ahigh on Alow. (J) In 1000 simulations, significant values of 𝐑AAC frequently appear. (K) As the intensity of AAC increases, so do the significant values of 𝐑AAC (blue), while any significant values of 𝐑PAC remain small. (L–O) Simulations with PAC and AAC. (L) The amplitude of the high frequency signal increases when the phase of the low frequency signal is near 0 radians and the amplitude of the low frequency signal is large. (M) The surfaces SAlow, Sϕlow, and SAlow,ϕlow suggest dependence of Ahigh on ϕlow and Alow. (N) In 1000 simulations, significant values of 𝐑PAC and 𝐑AAC frequently appear. (O) As the intensity of PAC and AAC increase, so do the significant values of 𝐑PAC and 𝐑AAC. In (G,K,O), circles indicate the median, and x’s the 5th and 95th quantiles.

To assess the distribution of significant R values in the case of no cross-frequency coupling, we simulate 1000 instances of the pink noise signals (each of 20 s) and apply the R measures to each instance, plotting significant R values in Figure 5C. We find that pPAC and pAAC are less than 0.05 in only 0.6% and 0.2% of the 1000 simulations, respectively, indicating no significant evidence of PAC or AAC, as expected.

We also used these simulated signals to assess the performance of two standard model comparison procedures for GLMs. Simulating 1000 instances of pink noise signals (each of 20 s) with no induced PAC or AAC, we performed a chi-squared test for nested models (Kramer and Eden, 2016) between models Alow and Alow,ϕlow, and detected significant PAC (p < 0.05) in 59.7% of simulations. Similarly, performing a chi-squared test for nested models between models ϕlow and Alow,ϕlow, we detected significant AAC (p < 0.05) in 41.5% of simulations. Using an AIC-based model comparison, we found a decrease in AIC from the Alow model to the Alow,ϕlow model (consistent with significant PAC) in 98.6% of simulations, and a decrease in AIC from the ϕlow model to the Alow,ϕlow model (consistent with significant AAC) in 87.2% of simulations. By contrast, we rarely detect significant PAC (<0.6% of simulations) or AAC (<0.2% of simulations) in the pink noise signals using the two statistics 𝐑PAC and 𝐑AAC implemented here. We conclude that, in this modeling regime, two deviance-based model comparison procedures for GLMs are less robust measures of significant PAC and AAC.

The proposed method accurately detects PAC

We next consider signals that possess phase-amplitude coupling, but lack amplitude-amplitude coupling. To do so, we simulate a 20 s signal with Ahigh modulated by ϕlow (Figure 5D); more specifically, Ahigh increases when ϕlow is near 0 radians (see Materials and methods, IPAC=1). We then estimate the ϕlow model, the Alow model, and the Alow,ϕlow model from these data; we show example fits in Figure 5E. We find that in the ϕlow model Ahigh is higher when ϕlow is close to 0 radians, and the Alow,ϕlow model follows this trend. We note that, because the data do not depend on the low frequency amplitude (Alow), the ϕlow and Alow,ϕlow models have very similar shapes in the (ϕlow, Alow, Ahigh) space, and the Alow model is nearly flat.

Simulating 1000 instances of these 20 s signals with induced phase-amplitude coupling, we find pAAC<0.05 for only 0.6% of the simulations, while pPAC<0.05 for 96.5% of the simulations. We find that the significant values of 𝐑PAC lie well above 0 (Figure 5F), and that as the intensity of the simulated phase-amplitude coupling increases, so does the statistic 𝐑PAC (Figure 5G). We conclude that the proposed method accurately detects the presence of phase-amplitude coupling in these simulated data.

The proposed method accurately detects AAC

We next consider signals with amplitude-amplitude coupling, but without phase-amplitude coupling. We simulate a 20 s signal such that Ahigh is modulated by Alow (see Materials and methods, IAAC=1); when Alow is large, so is Ahigh (Figure 5H). We then estimate the ϕlow model, the Alow model, and the Alow,ϕlow model (example fits in Figure 5I). We find that the Alow model increases along the Alow axis, and that the Alow,ϕlow model closely follows this trend, while the ϕlow model remains mostly flat, as expected.

Simulating 1000 instances of these signals we find that pAAC<0.05 for 97.9% of simulations, while pPAC<0.05 for 0.3% of simulations. The significant values of 𝐑AAC lie above 0 (Figure 5J), and increases in the intensity of AAC produce increases in 𝐑AAC (Figure 5K). We conclude that the proposed method accurately detects the presence of amplitude-amplitude coupling.

The proposed method accurately detects the simultaneous occurrence of PAC and AAC

We now consider signals that possess both phase-amplitude coupling and amplitude-amplitude coupling. To do so, we simulate time series data with both AAC and PAC (Figure 5L). In this case, Ahigh increases when ϕlow is near 0 radians and when Alow is large (see Materials and methods, IPAC=1 and IAAC=1). We then estimate the ϕlow model, the Alow model, and the Alow,ϕlow model from the data and visualize the results (Figure 5M). We find that the ϕlow model increases near ϕlow=0, and that the Alow model increases linearly with Alow. The Alow,ϕlow model exhibits both of these behaviors, increasing at ϕlow=0 and as Alow increases.

Simulating 1000 instances of signals with both AAC and PAC present, we find that pAAC<0.05 in 96.7% of simulations and pPAC<0.05 in 98.1% of simulations. The distributions of significant 𝐑PAC and 𝐑AAC values lie above 0, consistent with the presence of both PAC and AAC (Figure 5N), and as the intensity of PAC and AAC increases, so do the values of 𝐑PAC and 𝐑AAC (Figure 5O). We conclude that the model successfully detects the concurrent presence of PAC and AAC.

𝐑PAC and modulation index are both sensitive to weak modulations

To investigate the ability of the proposed method and the modulation index to detect weak coupling between the low frequency phase and high frequency amplitude, we perform the following simulations. For each intensity value IPAC between 0 and 0.5 (in steps of 0.025), we simulate 1000 signals (see Materials and methods) and compute 𝐑PAC and a measure of PAC in common use: the modulation index MI (Tort et al., 2010) (Figure 6). We find that both MI and 𝐑PAC, while small, increase with IPAC; in this way, both measures are sensitive to small values of IPAC. However, we note that 𝐑PAC is not significant for very small intensity values (IPAC < 0.3), while MI is significant at these small intensities. Significant 𝐑PAC appears when the MI exceeds 0.7 × 10⁻³, a value below the range of MI values detected in many existing studies (Tort et al., 2008; Zhong et al., 2017; Jackson et al., 2019; Axmacher et al., 2010; Tort et al., 2018). We conclude that, while the modulation index may be more sensitive than 𝐑PAC to very weak phase-amplitude coupling, 𝐑PAC can detect phase-amplitude coupling at MI values consistent with those observed in the literature.

Figure 6. The two measures of PAC increase with intensities near zero.


The mean (circles) and 5th to 95th quantiles (x’s) of (A) 𝐑PAC and (B) MI for intensity values between 0 and 0.5. Black bars indicate pPAC or pMI is below 0.05 for ≥95% of simulations; gray bars indicate pPAC is not below 0.05 for ≥95% of simulations. While both measures increase with intensity, MI detects more instances of significant PAC than does 𝐑PAC for very small values of IPAC.

The proposed method is less affected by fluctuations in low-frequency amplitude and AAC

Increases in low frequency power can increase measures of phase-amplitude coupling, although the underlying PAC remains unchanged (Aru et al., 2015; Cole and Voytek, 2017). Characterizing the impact of this confounding effect is important both to understand measure performance and to produce accurate interpretations of analyzed data. To examine this phenomenon, we perform the following simulation. First, we simulate a signal V with fixed PAC (intensity IPAC=1, see Materials and methods). Second, we filter V into its low and high frequency components Vlow and Vhigh, respectively. Then, we create a new signal V* as follows:

$$V^{*} = 2 \, V_{\text{low}} + V_{\text{high}} + V_{\text{noise}}, \tag{8}$$

where Vnoise is a pink noise term (see Materials and methods). We note that we only alter the low frequency component of V and do not alter the PAC. To analyze the PAC in this new signal we compute 𝐑PAC and MI.

We show in Figure 7 population results (1000 realizations each of the simulated signals V and V*) for the R and MI values. We observe that increases in the amplitude of Vlow produce increases in MI and 𝐑PAC. However, this increase is more dramatic for MI than for 𝐑PAC; we note that the distributions of 𝐑PAC almost completely overlap (Figure 7A), while the distribution of MI shifts to larger values when the amplitude of Vlow increases (Figure 7B). We conclude that the statistic 𝐑PAC — which includes the low frequency amplitude as a predictor in the GLM — is more robust to increases in low frequency power than a method that only includes the low frequency phase.

Figure 7. Increases in the amplitude of the low frequency signal, and the amplitude-amplitude coupling (AAC), increase the modulation index more than RPAC.


(A,B) Distributions of (A) RPAC and (B) MI when Alow is small (blue) and when Alow is large (red). (C,D) Distributions of (C) RPAC and (D) MI when AAC is small (blue) and when AAC is large (red).

We also investigate the effect of increases in amplitude-amplitude coupling (AAC) on the two measures of PAC. As before, we simulate a signal V with fixed PAC (intensity IPAC=1) and no AAC (intensity IAAC=0). We then simulate a second signal V* with the same fixed PAC as V, and with additional AAC (intensity IAAC=10). We simulate 1000 realizations of V and V* and compute the corresponding 𝐑PAC and MI values. We observe that the increase in AAC produces a small increase in the distribution of 𝐑PAC values (Figure 7C), but a large increase in the distribution of MI values (Figure 7D). We conclude that the statistic 𝐑PAC is more robust to increases in AAC than MI.

These simulations show that at a fixed, non-zero PAC, the modulation index increases with increased Alow and AAC. We now consider the scenario of increased Alow and AAC in the absence of PAC. To do so, we simulate 1000 signals of 200 s duration, with no PAC (intensity IPAC=0). For each signal, at time 100 s (i.e., the midpoint of the simulation) we increase the low frequency amplitude by a factor of 10 (consistent with observations from an experiment in rodent cortex, as described below), and include AAC between the low and high frequency signals (intensity IAAC=0 for t < 100 s and intensity IAAC=2 for t ≥ 100 s). We find that, in the absence of PAC, 𝐑PAC detects significant PAC (p<0.05) in 0.4% of the simulated signals, while MI detects significant PAC in 34.3% of simulated signals. We conclude that in the presence of increased low frequency amplitude and amplitude-amplitude coupling, MI may detect PAC where none exists, while RPAC, which accounts for fluctuations in low frequency amplitude, does not.

Sparse PAC is detected when coupled to the low frequency amplitude

While the modulation index has been successfully applied in many contexts (Canolty and Knight, 2010; Hyafil et al., 2015b), instances may exist where this measure is not optimal. For example, because the modulation index was not designed to account for the low frequency amplitude, it may fail to detect PAC when Ahigh depends not only on ϕlow, but also on Alow. For example, since the modulation index considers the distribution of Ahigh at all observed values of ϕlow, it may fail to detect coupling events that occur sparsely at only a subset of appropriate ϕlow occurrences. RPAC, on the other hand, may detect these sparse events if these events are coupled to Alow, as RPAC accounts for fluctuations in low frequency amplitude. To illustrate this, we consider a simulation scenario in which PAC occurs sparsely in time.

We create a signal V with PAC, and corresponding modulation signal M with intensity value IPAC=1.0 (see Materials and methods, Figure 8A–B). We then modify this signal to reduce the number of PAC events in a way that depends on Alow. To do so, we preserve PAC at the peaks of Vlow (i.e., when ϕlow=0), but now only when these peaks are large, more specifically in the top 5% of peak values.

Figure 8. PAC events restricted to a subset of occurrences are still detectable.


(A) The low frequency signal (blue), amplitude envelope (yellow), and threshold (black dashed). (B–C) The modulation signal increases (B) at every occurrence of ϕlow=0, or (C) only when Alow exceeds the threshold and ϕlow=0.

We define a threshold value T to be the 95th quantile of the peak Vlow values, and modify the modulation signal M as follows. When M exceeds 1 (i.e., when ϕlow=0) and the low frequency amplitude exceeds T (i.e., Alow>T), we make no change to M. Alternatively, when M exceeds 1 and the low frequency amplitude lies below T (i.e., Alow<T), we decrease M to 1 (Figure 8C). In this way, we create a modified modulation signal M1 such that in the resulting signal V1, when ϕlow=0 and Alow is large enough, Ahigh is increased; and when ϕlow=0 and Alow is not large enough, there is no change to Ahigh. This signal V1 hence has fewer phase-amplitude coupling events than the number of times ϕlow=0.

We generate 1000 realizations of the simulated signals V1, and compute RPAC and MI. We find that while MI detects significant PAC in only 37% of simulations, RPAC detects significant PAC in 72% of simulations. In this case, although the PAC occurs infrequently, these occurrences are coupled to Alow, and RPAC, which accounts for changes in Alow, successfully detects these events much more frequently. We conclude that when the PAC is dependent on Alow, RPAC more accurately detects these sparse coupling events.

The CFC model detects simultaneous PAC and AAC missed in an existing method

To further illustrate the utility of the proposed method, we consider another scenario in which Alow impacts the occurrence of PAC. More specifically, we consider a case in which Ahigh increases at a fixed low frequency phase for high values of Alow, and Ahigh decreases at the same phase for small values of Alow. In this case, we expect that the modulation index may fail to detect the coupling because the distribution of Ahigh over ϕlow would appear uniform when averaged over all values of Alow; the dependence of Ahigh on ϕlow would only become apparent after accounting for Alow.

To implement this scenario, we consider the modulation signal M (see Materials and methods) with an intensity value IPAC=1. We consider all peaks of Alow and set the threshold T to be the 50th quantile (Figure 9A). We then modify the modulation signal M as follows. When M exceeds 1 (i.e., when ϕlow=0) and the low frequency amplitude exceeds T (i.e., Alow>T), we make no change to M. Alternatively, when M exceeds 1 and the low frequency amplitude lies below T (i.e., Alow<T), we decrease M to 0 (Figure 9B). In this way, we create a modified modulation signal M such that when ϕlow=0 and Alow is large enough, Ahigh is increased; and when ϕlow=0 and Alow is small enough, Ahigh is decreased (Figure 9C).

Figure 9. PAC with AAC is accurately detected with the proposed method, but not with the modulation index.


(A) The low frequency signal (blue), amplitude envelope (yellow), and threshold (black dashed). (B) The modulation signal (red) increases when ϕlow=0 and Alow>T, and decreases when ϕlow=0 and Alow<T. (C) The modulated Ahigh signal (purple) increases and decreases with the modulation signal. (D) The proportion of significant detections (out of 1000) for MI and RPAC.

Using this method, we simulate 1000 realizations of this signal, and calculate MI and RPAC for each signal (Figure 9D). We find that RPAC detects significant PAC in nearly all (96%) of the simulations, while MI detects significant PAC in only 58% of the simulations. We conclude that, in this simulation, RPAC more accurately detects PAC coupled to low frequency amplitude.

A simple stochastic spiking neural model illustrates the utility of the proposed method

In the previous simulations, we created synthetic data without a biophysically principled generative model. Here we consider an alternative simulation strategy with a more direct connection to neural dynamics. While many biophysically motivated models of cross-frequency coupling exist (Sase et al., 2017; Chehelcheraghi et al., 2017; Sotero, 2016; Hyafil et al., 2015a; Lepage and Vijayan, 2015; Onslow et al., 2014; Fontolan et al., 2013; Malerba and Kopell, 2013; Jirsa and Müller, 2013; Spaak et al., 2012; Wulff et al., 2009; Tort et al., 2007), we consider here a relatively simple stochastic spiking neuron model (Aljadeff et al., 2016). In this stochastic model, we generate a spike train (Vhigh) in which an externally imposed signal Vlow modulates the probability of spiking as a function of Alow and ϕlow. We note that high frequency activity is thought to represent the aggregate spiking activity of local neural populations (Ray and Maunsell, 2011; Buzsáki and Wang, 2012; Ray et al., 2008a; Jia and Kohn, 2011); while here we simulate the activity of a single neuron, the spike train still produces temporally focal events of high frequency activity. In this framework, we allow the target phase (ϕ*low) modulating Ahigh to change as a function of Alow: when Alow is large, the probability of spiking is highest near ϕlow=±π, and when Alow is small, the probability of spiking is highest near ϕlow=0. More precisely, we define ϕ*low as

$$\phi^{*}_{\text{low}} = \pi(1 + A_{\text{low}}),$$

where Alow is a sinusoid oscillating between 1 and 2 with frequency 0.1 Hz. We define the spiking probability, λ, as

$$\lambda = \lambda_0 \exp\!\left[-\frac{\big(1 + s(\phi_{\text{low}} - \phi^{*}_{\text{low}})\big)^{2}}{2\sigma^{2}}\right],$$

where σ=0.01, s(ϕ) is a triangle wave, and we choose λ0 so that the maximum value of λ is 2. We note that the spiking probability λ is zero except near times when the phase of the low frequency signal (ϕlow) is near ϕ*low. We then define Ahigh as:

$$A_{\text{high}} = S + n,$$

where S is the binary sequence generated by the stochastic spiking neuron model, and n is Gaussian noise with mean zero and standard deviation 0.1. In this scenario, the distribution of Ahigh over ϕlow appears uniform when averaged over all values of Alow. We therefore expect the modulation index to remain small, despite the presence of PAC with maximal phase dependent on Alow. However, we expect that RPAC, which accounts for fluctuations in low frequency amplitude, will detect this PAC. We show an example signal from this simulation in Figure 10A. As expected, we find that RPAC detects PAC (RPAC=0.172, p=0.02); we note that the (Alow, ϕlow) surface exhibits a single peak near ϕlow=0 at small values of Alow, and at ϕlow=±π at large values of Alow (Figure 10B). The (Alow, ϕlow) surface deviates significantly from the Alow surface, resulting in a large RPAC value. However, the non-uniform shape of the (Alow, ϕlow) surface is lost when we fail to account for Alow. In this scenario, the distribution of Ahigh over ϕlow appears uniform, resulting in a low MI value (Figure 10C).
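A compact simulation consistent with this description is sketched below; the 6 Hz carrier for Vlow, the use of MATLAB's sawtooth(·, 0.5) as the triangle wave s(·), the sign convention in the exponent, and the conversion of the rate λ to a per-sample spike probability are all assumptions made for illustration.

```matlab
% Sketch: stochastic spiking model in which the preferred phase phi*_low
% depends on A_low (assumptions noted in the text above).
Fs = 1000;  T = 100*Fs;  t = (0:T-1)'/Fs;

Alow   = 1.5 + 0.5*sin(2*pi*0.1*t);          % envelope oscillating between 1 and 2 at 0.1 Hz
Vlow   = Alow .* sin(2*pi*6*t);              % imposed low frequency signal (6 Hz carrier assumed)
philow = angle(hilbert(Vlow));               % low frequency phase

phiStar = pi*(1 + Alow);                     % preferred phase shifts with A_low
sigma   = 0.01;
s       = sawtooth(philow - phiStar, 0.5);   % triangle wave; equals -1 at the preferred phase
lambda0 = 2;                                 % so that the maximum of lambda is 2
lambda  = lambda0 * exp(-(1 + s).^2 / (2*sigma^2));

S     = rand(T, 1) < lambda/Fs;              % Bernoulli spikes (rate-to-probability scaling assumed)
Ahigh = double(S) + 0.1*randn(T, 1);         % spike train plus Gaussian noise (sd 0.1)
```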

Figure 10. RPAC, but not MI, detects phase-amplitude coupling in a simple stochastic spiking neuron model.


(A) The phase and amplitude of the low frequency signal (blue) modulate the probability of a high frequency spike (orange). (B) The surfaces SAlow (red) and SAlow,ϕlow (yellow). The phase of maximal Ahigh modulation depends on Alow. (C) The modulation index fails to detect this type of PAC.

Application to in vivo human seizure data

To evaluate the performance of the proposed method on in vivo data, we first consider an example recording from human cortex during a seizure (see Materials and methods: Human subject data). Visual inspection of the LFP data (Figure 11A) reveals the emergence of large amplitude voltage fluctuations during the approximately 80 s seizure. We compute the spectrogram over the entire seizure, using windows of width 0.8 s with 0.002 s overlap, and identify a distinct 10 s interval of increased power in the 4–7 Hz band (Figure 11B). We analyze this section of the voltage trace V, filtering into Vhigh (100–140 Hz) and Vlow (4–7 Hz), and extracting Ahigh, Alow, and ϕlow as in Methods (Figure 11C). Visual inspection reveals the occurrence of large amplitude, low frequency oscillations and small amplitude, high frequency oscillations.

Figure 11. The proposed method detects cross-frequency coupling in an in vivo human recording.


(A,B) Voltage recording (A) and spectrogram (B) from one MEA electrode over the course of a seizure; PAC and AAC were computed for the time segment outlined in red. (C) The 10 s voltage trace (blue) corresponding to the outlined segment in (A), and Vlow (red), Vhigh (yellow), and Alow (purple). (D) A 2 s subinterval of the voltage trace (blue), Vlow (red), Vhigh (yellow), Alow (purple), and ϕlow (green). (E) Alow (purple) and Ahigh (red) for the 10 s segment in (C), normalized and smoothed.

We find during this interval significant phase-amplitude coupling computed using RPAC (RPAC=1.55, pPAC=0.005; Figure 12), and using the modulation index (MI=0.03, pMI=5.0×10⁻⁴). To examine the phase-amplitude coupling in more detail, we isolate a 2 s segment (Figure 11D) and display the signal V, the high frequency signal Vhigh, the low frequency phase ϕlow, and the low frequency amplitude Alow. We observe that when ϕlow is near π, the amplitude of Vhigh tends to increase, consistent with the presence of PAC and a significant value of RPAC and MI.

Figure 12. The SAlow,ϕlow surface shows how PAC changes with the low frequency amplitude and phase during an interval of human seizure.


(A) The full model surface (blue) in the (ϕlow, Alow, Ahigh) space. (B) Components of that surface when Alow is small (black) and when Alow is large (red).

We also find significant amplitude-amplitude coupling computed using RAAC (RAAC=0.85, pAAC=0.005). Comparing Ahigh and Alow over the 10 s interval (each smoothed using a 1 s moving average filter and normalized), we observe that both Ahigh and Alow steadily increase over the duration of the interval (Figure 11E).

Application to in vivo rodent data

As a second example to illustrate the performance of the new method, we consider LFP recordings from the infralimbic cortex (IL) and basolateral amygdala (BLA) of an outbred Long-Evans rat before and after the delivery of an experimental electrical stimulation intervention described in Blackwood et al. (2018). Eight microwires in each region, referenced as bipolar pairs, sampled the LFP at 30 kHz, and electrical stimulation was delivered to change inter-regional coupling (see Blackwood et al., 2018 for a detailed description of the experiment). Here we examine how cross-frequency coupling between low frequency (5–8 Hz) IL signals and high frequency (70–110 Hz) BLA signals changes from the pre-stimulation to the post-stimulation condition. To do so, we filter the data V into low and high frequency signals (see Materials and methods), and compute the MI, RPAC and RAAC between each possible BLA-IL pairing, sixteen in total.

We find three separate BLA-IL pairings where RPAC reports no significant PAC pre- or post-stimulation, but MI reports significant coupling post-stimulation. Investigating further, we note that in all three cases, the amplitude of the low frequency IL signal increases from pre- to post-stimulation, and RAAC, the measure of amplitude-amplitude coupling, increases from pre- to post-stimulation. These observations are consistent with the simulations in Results: The proposed method is less affected by fluctuations in low-frequency amplitude and AAC, in which we showed that increases in the low frequency amplitude and AAC produced increases in MI, although the PAC remained fixed. We therefore propose that, consistent with these simulation results, the increase in MI observed in these data may result from changes in the low frequency amplitude and AAC, not in PAC.

Using the flexibility of GLMs to improve detection of phase-amplitude coupling in vivo

One advantage of the proposed framework is its flexibility: covariates are easily added to the generalized linear model and tested for significance. For example, we could include covariates for trial, sex, and stimulus parameters and explore their effects on PAC, AAC, or both.

Here, we illustrate this flexibility through continued analysis of the rodent data. We select a single electrode recording from these data, and hypothesize that the condition, either pre-stimulation or post-stimulation, affects the coupling. To incorporate this new covariate into the framework, we consider the concatenated voltage recordings from the pre-stimulation condition Vpre and the post-stimulation condition Vpost:

$$V = [\,V_{\text{pre}}, \; V_{\text{post}}\,].$$

From V, we obtain the corresponding high frequency signal Vhigh and low frequency signal Vlow, and subsequently the high frequency amplitude Ahigh, low frequency phase ϕlow, and low frequency amplitude Alow. We use these data to generate two new models:

$$A_{\text{high}} \mid \phi_{\text{low}}, A_{\text{low}}, P \sim \text{Gamma}[\mu,\nu], \tag{9}$$
$$\log \mu = \sum_{k=1}^{n} \beta_k f_k(\phi_{\text{low}}) + \beta_{n+1} A_{\text{low}} + \beta_{n+2} A_{\text{low}} \sin(\phi_{\text{low}}) + \beta_{n+3} A_{\text{low}} \cos(\phi_{\text{low}}) + P\left(\sum_{j=1}^{n} \beta_{n+3+j} f_j(\phi_{\text{low}}) + \beta_{2n+4} A_{\text{low}}\right),$$
$$A_{\text{high}} \mid \phi_{\text{low}}, A_{\text{low}}, P \sim \text{Gamma}[\mu,\nu], \tag{10}$$
$$\log \mu = \sum_{k=1}^{n} \beta_k f_k(\phi_{\text{low}}) + \beta_{n+1} A_{\text{low}} + \beta_{n+2} A_{\text{low}} \sin(\phi_{\text{low}}) + \beta_{n+3} A_{\text{low}} \cos(\phi_{\text{low}}) + P\left(\beta_{n+4} A_{\text{low}}\right),$$

where P is an indicator function specifying whether the signal is in the pre-stimulation (P=0) or post-stimulation (P=1) condition. The indicator function includes the effect of stimulus condition on the high frequency amplitude. The models in Equations 9 and 10 now include the effects of low frequency amplitude, low frequency phase, and condition on the high frequency amplitude. To determine whether the condition has an effect on PAC, we test whether the condition-by-phase interaction term P(Σ_{j=1}^{n} β_{n+3+j} f_j(ϕlow)) in Equation 9 is significant, that is, whether there is a significant difference between the models in Equations 9 and 10. If the difference between the two models is very small, we gain no improvement in modeling Ahigh by including the interaction between P and ϕlow. In that case, the impact of ϕlow on Ahigh can be modeled without considering stimulus condition P, that is, the impact of stimulus condition on PAC is negligible.

To measure the difference between the models in Equations 9 and 10, we construct a surface SP,ϕlow from the model in Equation 9, and a surface SP from the model in Equation 10, in the (Alow, ϕlow, Ahigh, P) space, assessing the models at P=1. We compute RPAC, condition, which measures the impact of stimulus condition on PAC, as:

$$R_{\text{PAC, condition}} = \max\Big[\text{abs}\big[1 - S_{P} / S_{P,\phi_{\text{low}}}\big]\Big]. \tag{11}$$

We find for the example rodent data an RPAC, condition value of 0.3608, with a p-value of 0.0005. Hence, we find evidence for a significant effect of stimulus on PAC.
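The sketch below shows how the indicator P enters the design matrix and how RPAC, condition is computed on the grid at P=1; the phase basis is again the stand-in used in the earlier sketches, and Vpre and Vpost denote the two concatenated recordings.

```matlab
% Sketch: condition indicator P in the GLM (Eqs. 9-10) and R_PAC,condition (Eq. 11).
% philow, Alow, Ahigh are extracted from the concatenated signal V = [Vpre, Vpost].
P = [zeros(numel(Vpre), 1); ones(numel(Vpost), 1)];   % 0 = pre-stimulation, 1 = post-stimulation

n = 10;  c = 2*pi*(0:n-1)/n;
B = exp(4*cos(bsxfun(@minus, philow(:), c)));          % stand-in phase basis
B = bsxfun(@rdivide, B, sum(B, 2));
base = [B, Alow(:), Alow(:).*sin(philow(:)), Alow(:).*cos(philow(:))];

X9  = [base, bsxfun(@times, P, B), P.*Alow(:)];        % Eq. 9: condition-by-phase and condition-by-amplitude terms
X10 = [base, P.*Alow(:)];                              % Eq. 10: condition-by-amplitude term only
mdl9  = fitglm(X9,  Ahigh(:), 'linear', 'Distribution', 'gamma', 'Link', 'log', 'Intercept', false);
mdl10 = fitglm(X10, Ahigh(:), 'linear', 'Distribution', 'gamma', 'Link', 'log', 'Intercept', false);

% Evaluate both models on the (A_low, phi_low) grid at P = 1 and compare.
aGrid   = linspace(quantile(Alow, 0.05), quantile(Alow, 0.95), 640);
phiGrid = linspace(-pi, pi, 100);
[Agrid, PhiGrid] = meshgrid(aGrid, phiGrid);
Bg = exp(4*cos(bsxfun(@minus, PhiGrid(:), c)));  Bg = bsxfun(@rdivide, Bg, sum(Bg, 2));
baseG = [Bg, Agrid(:), Agrid(:).*sin(PhiGrid(:)), Agrid(:).*cos(PhiGrid(:))];

S_eq9  = predict(mdl9,  [baseG, Bg, Agrid(:)]);        % Eq. 9 surface at P = 1
S_eq10 = predict(mdl10, [baseG, Agrid(:)]);            % Eq. 10 surface at P = 1
R_PAC_condition = max(abs(1 - S_eq10 ./ S_eq9));       % Eq. 11
```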

To further explore this assessment of stimulus condition on PAC, we simulate 1000 instances of a 40 s signal divided into two conditions: no PAC for the first 20 s (IPAC=0) and non-zero PAC for the final 20 s (IPAC=1). We design this simulation to mimic an increase in PAC from pre-stimulation to post-stimulation (Figure 13A). Using the models in Equations 9 and 10, and computing RPAC, condition, we find p<0.05 for 100% of simulated signals. We also simulate 1000 instances of a 40 s signal with no PAC (IPAC=0) for the entire 40 s, that is, PAC does not change from pre-stimulation to post-stimulation (Figure 13B), and find in this case p<0.05 for only 4.6% of simulations. Finally, we simulate 1000 instances of a 40 s signal with fixed PAC (IPAC=1), and with a doubling of the low frequency amplitude occurring at 20 s (i.e., pre-stimulation the low frequency amplitude is 1, and post-stimulation the low frequency amplitude is 2). We find p<0.05 for only 3.6% of simulations. We conclude that this method effectively determines whether stimulation condition significantly changes PAC.

Figure 13. Example simulated Vlow (blue) and Vhigh (orange) signals for which (A) PAC increases at 20 s (indicated by black dashed line), and (B) no increase in PAC occurs.


This example illustrates the flexibility of the statistical modeling framework. Extending the framework is straightforward, and each extension provides a common, principled approach to test the impact of new predictors. Here we considered an indicator variable that divides the data into two states (pre- and post-stimulation); the models are easily extended to account for multiple discrete predictors, such as gender or participation in a drug trial, or for continuous predictors, such as age or time since stimulus.

Discussion

In this paper, we proposed a new method for measuring cross-frequency coupling that accounts for both phase-amplitude coupling and amplitude-amplitude coupling, along with a principled statistical modeling framework to assess the significance of this coupling. We have shown that this method effectively detects CFC, both as PAC and AAC, and is more sensitive to weak PAC obscured by or coupled to low-frequency amplitude fluctuations. Compared to an existing method, the modulation index (Tort et al., 2010), the newly proposed method more accurately detects scenarios in which PAC is coupled to the low-frequency amplitude. Finally, we applied this method to in vivo data to illustrate examples of PAC and AAC in real systems, and show how to extend the modeling framework to include a new covariate.

One of the most important features of the new method is an increased ability to detect weak PAC coupled to AAC. For example, when sparse PAC events occur only when the low frequency amplitude (Alow) is large, the proposed method detects this coupling, while a method that does not account for Alow misses it. While PAC often occurs in neural data, and has been associated with numerous neurological functions (Canolty and Knight, 2010; Hyafil et al., 2015b), the simultaneous occurrence of PAC and AAC is less well studied (Osipova et al., 2008). Here, we showed examples of simultaneous PAC and AAC recorded from human cortex during a seizure, and we note that this phenomenon has been simulated in other work (Mazzoni et al., 2010).

While the exact mechanisms that support CFC are not well understood (Hyafil et al., 2015b), general mechanisms for low and high frequency rhythms have been proposed. Low frequency rhythms are associated with the aggregate activity of large neural populations and modulations of neuronal excitability (Engel et al., 2001; Varela et al., 2001; Buzsáki and Draguhn, 2004), while high frequency rhythms provide a surrogate measure of neuronal spiking (Rasch et al., 2008; Mukamel et al., 2005; Fries et al., 2001; Pesaran et al., 2002; Whittingstall and Logothetis, 2009; Ray and Maunsell, 2011; Ray et al., 2008b). These two observations provide a physical interpretation for PAC: when a low frequency rhythm modulates the excitability of a neural population, we expect spiking (i.e., an increase in Ahigh) to occur at the particular phase of the low frequency rhythm (ϕlow) at which excitation is maximal. The same notions provide a physical interpretation for AAC: increases in Alow produce larger modulations in neural excitability, and therefore increased intervals of neuronal spiking (i.e., increases in Ahigh); conversely, decreases in Alow reduce excitability and neuronal spiking (i.e., decreases in Ahigh).

The function of concurrent PAC and AAC, both for healthy brain function and during a seizure as illustrated here, is not well understood. As PAC occurs normally in healthy brain signals, for example during working memory, neuronal computation, communication, learning and emotion (Tort et al., 2009; Jensen et al., 2016; Canolty and Knight, 2010; Dejean et al., 2016; Karalis et al., 2016; Likhtik et al., 2014; Jones and Wilson, 2005; Lisman, 2005; Sirota et al., 2008), these preliminary results may suggest a pathological aspect of strong AAC occurring concurrently with PAC.

Proposed functions of PAC include multi-item encoding, long-distance communication, and sensory parsing (Hyafil et al., 2015b). Each of these functions takes advantage of the low frequency phase, encoding different objects or pieces of information in distinct phase intervals of ϕlow. PAC can be interpreted as a type of focused attention; Ahigh modulation occurring only in a particular interval of ϕlow organizes neural activity - and presumably information - into discrete packets of time. Similarly, a proposed function of AAC is to encode the number of represented items, or the amount of information encoded in the modulated signal (Hyafil et al., 2015b). A pathological increase in AAC may support the transmission of more information than is needed, overloading the communication of relevant information with irrelevant noise. The attention-based function of PAC, that is, having reduced high frequency amplitude at phases not containing the targeted information, may be lost if the amplitude of the high frequency oscillation is increased across wide intervals of low frequency phase.

Like all measures of CFC, the proposed method possesses specific limitations; we discuss five here. First, the choice of spline basis to represent the low frequency phase may be inaccurate, for example if the PAC changes rapidly with ϕlow. Second, the value of RAAC depends on the range of Alow observed. This is due to the linear relationship between Alow and Ahigh in the Alow model, which causes the maximum distance between the surfaces $S_{A_{\text{low}}}$ and $S_{A_{\text{low}},\phi_{\text{low}}}$ to occur at the largest or smallest value of Alow. To mitigate the impact of extreme Alow values on RAAC, we evaluate the surfaces $S_{A_{\text{low}}}$ and $S_{A_{\text{low}},\phi_{\text{low}}}$ over the 5th to 95th percentiles of Alow. We note that an alternative metric of AAC could instead evaluate the slope of the $S_{A_{\text{low}}}$ surface; to maintain consistency of the PAC and AAC measures, we chose not to implement that alternative here. Third, the frequency bands for Vhigh and Vlow must be established before R values are calculated; if the wrong frequency bands are chosen, coupling may be missed. It is possible, though computationally expensive, to scan over all reasonable frequency bands for both Vhigh and Vlow, calculating R values for each frequency band pair. Fourth, we note that the proposed modeling framework assumes the data contain approximately sinusoidal signals, which have been appropriately isolated for analysis. In general, CFC measures are sensitive to non-sinusoidal signals, which may confound interpretation of cross-frequency analyses (Cole and Voytek, 2017; Kramer et al., 2008; Aru et al., 2015). While the modeling framework proposed here does not directly account for the confounds introduced by non-sinusoidal signals, the inclusion of additional predictors (e.g. detections of sharp changes in the unfiltered data) in the model may help mitigate these effects. Fifth, we simulate time series with known PAC and AAC, and then test whether the proposed analysis framework detects this coupling. The simulated relationships between Ahigh and (ϕlow, Alow) may result in time series with simpler structure than those observed in vivo. For example, a latent signal may drive both Ahigh and ϕlow, and in this way establish nonlinear relationships between the two observables Ahigh and ϕlow. We note that, if this were the case, the latent signal could also be incorporated in the statistical modeling framework (Yousefi et al., 2019).
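As one illustration of the band-scanning idea mentioned above, the loop below bandpass-filters the raw signal into candidate low and high bands, extracts phase and amplitude with the Hilbert transform, and computes an R value for each band pair. The filter design, the candidate bands, and the r_pac routine are assumptions made for this sketch; the authors' repository implements its own filtering and statistic.

```python
# Minimal sketch of a scan over (low band, high band) pairs. Filter design and
# candidate bands are assumptions; r_pac() is a hypothetical stand-in for the
# proposed R_PAC computation.
from scipy.signal import firwin, filtfilt, hilbert

def band_phase_amp(v, fs, band):
    """Bandpass-filter v, then return instantaneous phase and amplitude."""
    numtaps = int(3 * fs / band[0]) + 1          # rough rule of thumb for FIR length
    b = firwin(numtaps, band, fs=fs, pass_zero=False)
    v_filt = filtfilt(b, [1.0], v)
    analytic = hilbert(v_filt)
    return np.angle(analytic), np.abs(analytic)

def scan_band_pairs(v, fs, low_bands, high_bands):
    """Compute an R value for every combination of low and high frequency band."""
    results = {}
    for lo in low_bands:
        phi_low, a_low = band_phase_amp(v, fs, lo)
        for hi in high_bands:
            _, a_high = band_phase_amp(v, fs, hi)
            results[(lo, hi)] = r_pac(a_high, phi_low, a_low)
    return results

# Example (illustrative) candidate bands, in Hz:
# results = scan_band_pairs(V, fs, low_bands=[(4, 7), (8, 12)],
#                           high_bands=[(80, 120), (100, 140)])
```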

We chose the statistics RPAC and RAAC for two reasons. First, we found that two common methods of model comparison for GLMs provided less robust measures of significance than RPAC and RAAC; although RPAC and RAAC are less powerful than standard model comparison tests, the large amount of data typically assessed in CFC analysis may compensate for this loss. Second, these statistics are directly interpretable, and we showed that they performed well in simulations. While many model comparison methods exist - and another method may provide specific advantages - we found the framework implemented here sufficiently powerful, interpretable, and robust for real-world neural data analysis.
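For contrast, the kind of standard nested-model comparison referred to above could look like the sketch below: an approximate likelihood-ratio test between Equations 9 and 10, reusing the fits from the earlier sketch. This is shown only to illustrate the conventional alternative; it is not the statistic adopted in the paper.

```python
# Minimal sketch of a standard alternative: an approximate likelihood-ratio test
# between the nested models of Equations 9 and 10 (not the statistic used here).
from scipy.stats import chi2

def likelihood_ratio_p(fit_full, fit_reduced):
    lr = 2.0 * (fit_full.llf - fit_reduced.llf)            # log-likelihood ratio
    df = fit_full.params.size - fit_reduced.params.size    # extra parameters in Eq. 9
    return chi2.sf(lr, df)
```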

The proposed method can easily be extended by including additional predictors in the GLM. Polynomial Alow predictors, rather than the current linear Alow predictors, may better capture the relationship between Alow and Ahigh. One could also include different types of covariates, for example classes of drugs administered to a patient, or time since an administered stimulus during an experiment. To capture more complex relationships between the predictors (Alow, ϕlow) and Ahigh, the GLM could be replaced by a more general form of generalized additive model (GAM). Choosing a GAM would remove the restriction that the link-transformed conditional mean of Ahigh be linear in the model parameters (which would allow us to estimate knot locations directly from the data, for example), at the cost of greater computational time to estimate the parameters. The code developed to implement the method is flexible and modular, which facilitates modifications and extensions motivated by the particular data analysis scenario. This modular code, available at https://github.com/Eden-Kramer-Lab/GLM-CFC, also allows the user to change underlying assumptions, such as the choice of frequency bands and filtering method, and is freely available for reuse and further development.
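As a small example of the first extension mentioned above, polynomial Alow terms can be appended to the design matrix from the earlier sketch; the choice of a cubic polynomial here is an assumption, not a recommendation from the paper.

```python
# Minimal sketch: augment an existing design matrix with polynomial A_low terms.
def add_polynomial_alow(X, a_low, degree=3):
    a_low = np.asarray(a_low)
    extra = np.column_stack([a_low ** d for d in range(2, degree + 1)])
    return np.column_stack([X, extra])

# X_full_poly = add_polynomial_alow(X_full, A_low)
# fit_poly = fit_gamma_glm(A_high, X_full_poly)
```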

Rhythms, and particularly the interactions of rhythms at different frequencies, are an important component of a complete understanding of neural activity. While the mechanisms and functions of some rhythms are well understood, how and why rhythms interact remains uncertain. A first step in addressing these uncertainties is the application of appropriate data analysis tools. Here we provide a new tool to measure coupling between different brain rhythms: a statistical modeling framework that is flexible and captures subtle differences in cross-frequency coupling. We hope that this method will better enable practicing neuroscientists to measure and relate brain rhythms, and ultimately better understand brain function and interactions.

Acknowledgements

This work was supported in part by the National Science Foundation Award #1451384, in part by R01 EB026938, in part by R21 MH109722, and in part by the National Science Foundation (NSF) under a Graduate Research Fellowship.

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Mark A Kramer, Email: mak@bu.edu.

Frances K Skinner, Krembil Research Institute, University Health Network, Canada.

Laura L Colgin, University of Texas at Austin, United States.

Funding Information

This paper was supported by the following grants:

  • National Science Foundation NSF DMS #1451384 to Jessica K Nadalin, Mark A Kramer.

  • National Science Foundation GRFP to Jessica K Nadalin.

  • National Institutes of Health R21 MH109722 to Alik S Widge.

  • National Institutes of Health R01 EB026938 to Alik S Widge, Uri T Eden, Mark A Kramer.

Additional information

Competing interests

No competing interests declared.

Author contributions

Conceptualization, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing—original draft, Writing—review and editing.

Data curation, Writing—review and editing.

Resources, Data curation, Software.

Resources, Data curation, Software.

Resources, Investigation, Writing—review and editing.

Investigation, Writing—review and editing.

Software, Methodology, Writing—review and editing.

Conceptualization, Software, Supervision, Methodology, Writing—review and editing.

Ethics

Human subjects: All patients were enrolled after informed consent, and consent to publish, was obtained, and approval was granted by the local Institutional Review Boards at Massachusetts General Hospital and Brigham and Women's Hospital (Partners Human Research Committee), and at Boston University according to National Institutes of Health guidelines (IRB Protocol # 1558X).

Animal experimentation: The animal experimentation received IACUC approval from the University of Minnesota (IACUC Protocol # 1806-36024A).

Additional files

Transparent reporting form
DOI: 10.7554/eLife.44287.015

Data availability

In vivo human data available at https://github.com/Eden-Kramer-Lab/GLM-CFC (copy archived at https://github.com/elifesciences-publications/GLM-CFC). In vivo rat data available at https://github.com/tne-lab/cl-example-data (copy archived at https://github.com/elifesciences-publications/cl-example-data).

References

  1. Agarwal G, Stevenson IH, Berényi A, Mizuseki K, Buzsáki G, Sommer FT. Spatially distributed local fields in the Hippocampus encode rat position. Science. 2014;344:626–630. doi: 10.1126/science.1250444.
  2. Aljadeff J, Lansdell BJ, Fairhall AL, Kleinfeld D. Analysis of neuronal spike trains, deconstructed. Neuron. 2016;91:221–259. doi: 10.1016/j.neuron.2016.05.039.
  3. Aru J, Aru J, Priesemann V, Wibral M, Lana L, Pipa G, Singer W, Vicente R. Untangling cross-frequency coupling in neuroscience. Current Opinion in Neurobiology. 2015;31:51–61. doi: 10.1016/j.conb.2014.08.002.
  4. Axmacher N, Henseler MM, Jensen O, Weinreich I, Elger CE, Fell J. Cross-frequency coupling supports multi-item working memory in the human Hippocampus. PNAS. 2010;107:3228–3233. doi: 10.1073/pnas.0911531107.
  5. Başar E, Schmiedt-Fehr C, Mathes B, Femir B, Emek-Savaş DD, Tülay E, Tan D, Düzgün A, Güntekin B, Özerdem A, Yener G, Başar-Eroğlu C. What does the broken brain say to the neuroscientist? oscillations and connectivity in schizophrenia, Alzheimer’s disease, and bipolar disorder. International Journal of Psychophysiology. 2016;103:135–148. doi: 10.1016/j.ijpsycho.2015.02.004.
  6. Blackwood E, Lo M, Widge SA. Continuous phase estimation for phase-locked neural stimulation using an autoregressive model for signal prediction. Conference of the IEEE Engineering in Medicine and Biology Society. 2018;2018:4736–4739. doi: 10.1109/EMBC.2018.8513232.
  7. Börgers C, Epstein S, Kopell NJ. Gamma oscillations mediate stimulus competition and attentional selection in a cortical network model. PNAS. 2008;105:18023–18028. doi: 10.1073/pnas.0809511105.
  8. Bragin A, Jandó G, Nádasdy Z, Hetke J, Wise K, Buzsáki G. Gamma (40-100 hz) oscillation in the Hippocampus of the behaving rat. The Journal of Neuroscience. 1995;15:47–60. doi: 10.1523/JNEUROSCI.15-01-00047.1995.
  9. Bruns A, Eckhorn R. Task-related coupling from high- to low-frequency signals among visual cortical Areas in human subdural recordings. International Journal of Psychophysiology. 2004;51:97–116. doi: 10.1016/j.ijpsycho.2003.07.001.
  10. Buzsáki G, Draguhn A. Neuronal oscillations in cortical networks. Science. 2004;304:1926–1929. doi: 10.1126/science.1099745.
  11. Buzsáki G, Wang XJ. Mechanisms of gamma oscillations. Annual Review of Neuroscience. 2012;35:203–225. doi: 10.1146/annurev-neuro-062111-150444.
  12. Canolty RT, Edwards E, Dalal SS, Soltani M, Nagarajan SS, Kirsch HE, Berger MS, Barbaro NM, Knight RT. High gamma power is phase-locked to theta oscillations in human neocortex. Science. 2006;313:1626–1628. doi: 10.1126/science.1128115.
  13. Canolty RT, Knight RT. The functional role of cross-frequency coupling. Trends in Cognitive Sciences. 2010;14:506–515. doi: 10.1016/j.tics.2010.09.001.
  14. Chehelcheraghi M, van Leeuwen C, Steur E, Nakatani C. A neural mass model of cross frequency coupling. PLOS ONE. 2017;12:e0173776. doi: 10.1371/journal.pone.0173776.
  15. Chrobak JJ, Buzsáki G. Gamma oscillations in the entorhinal cortex of the freely behaving rat. The Journal of Neuroscience. 1998;18:388–398. doi: 10.1523/JNEUROSCI.18-01-00388.1998.
  16. Cohen MX. Assessing transient cross-frequency coupling in EEG data. Journal of Neuroscience Methods. 2008;168:494–499. doi: 10.1016/j.jneumeth.2007.10.012.
  17. Cohen MX. Multivariate cross-frequency coupling via generalized eigendecomposition. eLife. 2017;6:e21792. doi: 10.7554/eLife.21792.
  18. Cole SR, Voytek B. Brain oscillations and the importance of waveform shape. Trends in Cognitive Sciences. 2017;21:137–149. doi: 10.1016/j.tics.2016.12.008.
  19. Colgin LL, Denninger T, Fyhn M, Hafting T, Bonnevie T, Jensen O, Moser MB, Moser EI. Frequency of gamma oscillations routes flow of information in the Hippocampus. Nature. 2009;462:353–357. doi: 10.1038/nature08573.
  20. Csicsvari J, Jamieson B, Wise KD, Buzsáki G. Mechanisms of gamma oscillations in the Hippocampus of the behaving rat. Neuron. 2003;37:311–322. doi: 10.1016/S0896-6273(02)01169-8.
  21. Dean HL, Hagan MA, Pesaran B. Only coherent spiking in posterior parietal cortex coordinates looking and reaching. Neuron. 2012;73:829–841. doi: 10.1016/j.neuron.2011.12.035.
  22. Dejean C, Courtin J, Karalis N, Chaudun F, Wurtz H, Bienvenu TC, Herry C. Prefrontal neuronal assemblies temporally control fear behaviour. Nature. 2016;535:420–424. doi: 10.1038/nature18630.
  23. Engel AK, Fries P, Singer W. Dynamic predictions: oscillations and synchrony in top-down processing. Nature Reviews Neuroscience. 2001;2:704–716. doi: 10.1038/35094565.
  24. Fontolan L, Krupa M, Hyafil A, Gutkin B. Analytical insights on theta-gamma coupled neural oscillators. The Journal of Mathematical Neuroscience. 2013;3:16. doi: 10.1186/2190-8567-3-16.
  25. Fries P, Reynolds JH, Rorie AE, Desimone R. Modulation of oscillatory neuronal synchronization by selective visual attention. Science. 2001;291:1560–1563. doi: 10.1126/science.1055465.
  26. Fries P, Nikolić D, Singer W. The gamma cycle. Trends in Neurosciences. 2007;30:309–316. doi: 10.1016/j.tins.2007.05.005.
  27. Gordon JA. On being a circuit psychiatrist. Nature Neuroscience. 2016;19:1385–1386. doi: 10.1038/nn.4419.
  28. Hawellek DJ, Wong YT, Pesaran B. Temporal coding of reward-guided choice in the posterior parietal cortex. PNAS. 2016;113:13492–13497. doi: 10.1073/pnas.1606479113.
  29. Hyafil A, Fontolan L, Kabdebon C, Gutkin B, Giraud AL. Speech encoding by coupled cortical theta and gamma oscillations. eLife. 2015a;4:e06213. doi: 10.7554/eLife.06213.
  30. Hyafil A, Giraud AL, Fontolan L, Gutkin B. Neural Cross-Frequency coupling: connecting architectures, mechanisms, and functions. Trends in Neurosciences. 2015b;38:725–740. doi: 10.1016/j.tins.2015.09.001.
  31. Jackson N, Cole SR, Voytek B, Swann NC. Characteristics of waveform shape in Parkinson's Disease Detected with Scalp Electroencephalography. Eneuro. 2019;6:ENEURO.0151-19.2019. doi: 10.1523/ENEURO.0151-19.2019.
  32. Jensen O, Spaak E, Park H. Discriminating valid from spurious indices of Phase-Amplitude coupling. Eneuro. 2016;3:ENEURO.0334-16.2016. doi: 10.1523/ENEURO.0334-16.2016.
  33. Jensen O, Lisman JE. Position reconstruction from an ensemble of hippocampal place cells: contribution of theta phase coding. Journal of Neurophysiology. 2000;83:2602–2609. doi: 10.1152/jn.2000.83.5.2602.
  34. Jia X, Kohn A. Gamma rhythms in the brain. PLOS Biology. 2011;9:e1001045. doi: 10.1371/journal.pbio.1001045.
  35. Jirsa V, Müller V. Cross-frequency coupling in real and virtual brain networks. Frontiers in Computational Neuroscience. 2013;7:78. doi: 10.3389/fncom.2013.00078.
  36. Jones MW, Wilson MA. Theta rhythms coordinate hippocampal–prefrontal interactions in a spatial memory task. PLOS Biology. 2005;11:e402. doi: 10.1371/journal.pbio.0030402.
  37. Karalis N, Dejean C, Chaudun F, Khoder S, Rozeske RR, Wurtz H, Bagur S, Benchenane K, Sirota A, Courtin J, Herry C. 4-Hz oscillations synchronize prefrontal-amygdala circuits during fear behavior. Nature Neuroscience. 2016;19:605–612. doi: 10.1038/nn.4251.
  38. Kopell N, Ermentrout GB, Whittington MA, Traub RD. Gamma rhythms and beta rhythms have different synchronization properties. PNAS. 2000;97:1867–1872. doi: 10.1073/pnas.97.4.1867.
  39. Kramer MA, Tort AB, Kopell NJ. Sharp edge artifacts and spurious coupling in EEG frequency comodulation measures. Journal of Neuroscience Methods. 2008;170:352–357. doi: 10.1016/j.jneumeth.2008.01.020.
  40. Kramer MA, Eden UT. Assessment of cross-frequency coupling with confidence using generalized linear models. Journal of Neuroscience Methods. 2013;220:64–74. doi: 10.1016/j.jneumeth.2013.08.006.
  41. Kramer MA, Eden UT. Case Studies in Neural Data Analysis: A Guide for the Practicing Neuroscientist. The MIT Press; 2016.
  42. Lachaux JP, Rodriguez E, Martinerie J, Varela FJ. Measuring phase synchrony in brain signals. Human Brain Mapping. 1999;8:194–208. doi: 10.1002/(SICI)1097-0193(1999)8:4<194::AID-HBM4>3.0.CO;2-C.
  43. Lakatos P, Shah AS, Knuth KH, Ulbert I, Karmos G, Schroeder CE. An oscillatory hierarchy controlling neuronal excitability and stimulus processing in the auditory cortex. Journal of Neurophysiology. 2005;94:1904–1911. doi: 10.1152/jn.00263.2005.
  44. Lakatos P, Karmos G, Mehta AD, Ulbert I, Schroeder CE. Entrainment of neuronal oscillations as a mechanism of attentional selection. Science. 2008;320:110–113. doi: 10.1126/science.1154735.
  45. Lepage KQ, Vijayan S. A Time-Series model of phase amplitude cross frequency coupling and comparison of spectral characteristics with neural data. BioMed Research International. 2015;2015:1–8. doi: 10.1155/2015/140837.
  46. Likhtik E, Stujenske JM, Topiwala MA, Harris AZ, Gordon JA. Prefrontal entrainment of amygdala activity signals safety in learned fear and innate anxiety. Nature Neuroscience. 2014;17:106–113. doi: 10.1038/nn.3582.
  47. Lisman J. The theta/gamma discrete phase code occuring during the hippocampal phase precession may be a more general brain coding scheme. Hippocampus. 2005;15:913–922. doi: 10.1002/hipo.20121.
  48. Malerba P, Kopell N. Phase resetting reduces theta-gamma rhythmic interaction to a one-dimensional map. Journal of Mathematical Biology. 2013;66:1361–1386. doi: 10.1007/s00285-012-0534-9.
  49. Mann EO, Mody I. Control of hippocampal gamma oscillation frequency by tonic inhibition and excitation of interneurons. Nature Neuroscience. 2010;13:205–212. doi: 10.1038/nn.2464.
  50. Mathalon DH, Sohal VS. Neural oscillations and synchrony in brain dysfunction and neuropsychiatric disorders: it's about time. JAMA Psychiatry. 2015;72:840. doi: 10.1001/jamapsychiatry.2015.0483.
  51. Mazzoni A, Whittingstall K, Brunel N, Logothetis NK, Panzeri S. Understanding the relationships between spike rate and Delta/gamma frequency bands of LFPs and EEGs using a local cortical network model. NeuroImage. 2010;52:956–972. doi: 10.1016/j.neuroimage.2009.12.040.
  52. Mormann F, Fell J, Axmacher N, Weber B, Lehnertz K, Elger CE, Fernández G. Phase/amplitude reset and theta–gamma interaction in the human medial temporal lobe during a continuous word recognition memory task. Hippocampus. 2005;15:890–900. doi: 10.1002/hipo.20117.
  53. Mukamel R, Gelbard H, Arieli A, Hasson U, Fried I, Malach R. Coupling between neuronal firing, field potentials, and FMRI in human auditory cortex. Science. 2005;309:951–954. doi: 10.1126/science.1110913.
  54. Nadalin J, Kramer M. GitHub; 2019. https://github.com/Eden-Kramer-Lab/GLM-CFC
  55. Onslow AC, Bogacz R, Jones MW. Quantifying phase-amplitude coupling in neuronal network oscillations. Progress in Biophysics and Molecular Biology. 2011;105:49–57. doi: 10.1016/j.pbiomolbio.2010.09.007.
  56. Onslow AC, Jones MW, Bogacz R. A canonical circuit for generating phase-amplitude coupling. PLOS ONE. 2014;9:e102591. doi: 10.1371/journal.pone.0102591.
  57. Osipova D, Hermes D, Jensen O. Gamma power is phase-locked to posterior alpha activity. PLOS ONE. 2008;3:e3990. doi: 10.1371/journal.pone.0003990.
  58. Penny WD, Duzel E, Miller KJ, Ojemann JG. Testing for nested oscillation. Journal of Neuroscience Methods. 2008;174:50–61. doi: 10.1016/j.jneumeth.2008.06.035.
  59. Pesaran B, Pezaris JS, Sahani M, Mitra PP, Andersen RA. Temporal structure in neuronal activity during working memory in macaque parietal cortex. Nature Neuroscience. 2002;5:805–811. doi: 10.1038/nn890.
  60. Pesaran B, Nelson MJ, Andersen RA. Free choice activates a decision circuit between frontal and parietal cortex. Nature. 2008;453:406–409. doi: 10.1038/nature06849.
  61. Popov T, Jensen O, Schoffelen JM. Dorsal and ventral cortices are coupled by cross-frequency interactions during working memory. NeuroImage. 2018;178:277–286. doi: 10.1016/j.neuroimage.2018.05.054.
  62. Rasch MJ, Gretton A, Murayama Y, Maass W, Logothetis NK. Inferring spike trains from local field potentials. Journal of Neurophysiology. 2008;99:1461–1476. doi: 10.1152/jn.00919.2007.
  63. Ray S, Crone NE, Niebur E, Franaszczuk PJ, Hsiao SS. Neural correlates of high-gamma oscillations (60-200 hz) in macaque local field potentials and their potential implications in electrocorticography. Journal of Neuroscience. 2008a;28:11526–11536. doi: 10.1523/JNEUROSCI.2848-08.2008.
  64. Ray S, Hsiao SS, Crone NE, Franaszczuk PJ, Niebur E. Effect of stimulus intensity on the spike-local field potential relationship in the secondary somatosensory cortex. Journal of Neuroscience. 2008b;28:7334–7343. doi: 10.1523/JNEUROSCI.1588-08.2008.
  65. Ray S, Maunsell JH. Different origins of gamma rhythm and high-gamma activity in macaque visual cortex. PLOS Biology. 2011;9:e1000610. doi: 10.1371/journal.pbio.1000610.
  66. Sase T, Katori Y, Komuro M, Aihara K. Bifurcation analysis on Phase-Amplitude Cross-Frequency coupling in neural networks with dynamic synapses. Frontiers in Computational Neuroscience. 2017;11:18. doi: 10.3389/fncom.2017.00018.
  67. Scheffer-Teixeira R, Belchior H, Leão RN, Ribeiro S, Tort AB. On high-frequency field oscillations (>100 hz) and the spectral leakage of spiking activity. Journal of Neuroscience. 2013;33:1535–1539. doi: 10.1523/JNEUROSCI.4217-12.2013.
  68. Shirvalkar PR, Rapp PR, Shapiro ML. Bidirectional changes to hippocampal theta-gamma comodulation predict memory for recent spatial episodes. PNAS. 2010;107:7054–7059. doi: 10.1073/pnas.0911184107.
  69. Siegel M, Warden MR, Miller EK. Phase-dependent neuronal coding of objects in short-term memory. PNAS. 2009;106:21341–21346. doi: 10.1073/pnas.0908193106.
  70. Sirota A, Montgomery S, Fujisawa S, Isomura Y, Zugaro M, Buzsáki G. Entrainment of neocortical neurons and gamma oscillations by the hippocampal theta rhythm. Neuron. 2008;60:683–697. doi: 10.1016/j.neuron.2008.09.014.
  71. Sotero RC. Topology, Cross-Frequency, and Same-Frequency band interactions shape the generation of Phase-Amplitude coupling in a neural mass model of a cortical column. PLOS Computational Biology. 2016;12:e1005180. doi: 10.1371/journal.pcbi.1005180.
  72. Spaak E, Bonnefond M, Maier A, Leopold DA, Jensen O. Layer-specific entrainment of γ-band neural activity by the α rhythm in monkey visual cortex. Current Biology. 2012;22:2313–2318. doi: 10.1016/j.cub.2012.10.020.
  73. Theiler J, Eubank S, Longtin A, Galdrikian B, Doyne Farmer J. Testing for nonlinearity in time series: the method of surrogate data. Physica D: Nonlinear Phenomena. 1992;58:77–94. doi: 10.1016/0167-2789(92)90102-S.
  74. Tort AB, Rotstein HG, Dugladze T, Gloveli T, Kopell NJ. On the formation of gamma-coherent cell assemblies by Oriens lacunosum-moleculare interneurons in the Hippocampus. PNAS. 2007;104:13490–13495. doi: 10.1073/pnas.0705708104.
  75. Tort AB, Kramer MA, Thorn C, Gibson DJ, Kubota Y, Graybiel AM, Kopell NJ. Dynamic cross-frequency couplings of local field potential oscillations in rat striatum and Hippocampus during performance of a T-maze task. PNAS. 2008;105:20517–20522. doi: 10.1073/pnas.0810524105.
  76. Tort AB, Komorowski RW, Manns JR, Kopell NJ, Eichenbaum H. Theta-gamma coupling increases during the learning of item-context associations. PNAS. 2009;106:20942–20947. doi: 10.1073/pnas.0911331106.
  77. Tort AB, Komorowski R, Eichenbaum H, Kopell N. Measuring phase-amplitude coupling between neuronal oscillations of different frequencies. Journal of Neurophysiology. 2010;104:1195–1210. doi: 10.1152/jn.00106.2010.
  78. Tort ABL, Brankačk J, Draguhn A. Respiration-Entrained brain rhythms are global but often overlooked. Trends in Neurosciences. 2018;41:186–197. doi: 10.1016/j.tins.2018.01.007.
  79. van Wijk BC, Jha A, Penny W, Litvak V. Parametric estimation of cross-frequency coupling. Journal of Neuroscience Methods. 2015;243:94–102. doi: 10.1016/j.jneumeth.2015.01.032.
  80. Vanhatalo S, Palva JM, Holmes MD, Miller JW, Voipio J, Kaila K. Infraslow oscillations modulate excitability and interictal epileptic activity in the human cortex during sleep. PNAS. 2004;101:5053–5057. doi: 10.1073/pnas.0305375101.
  81. Varela F, Lachaux JP, Rodriguez E, Martinerie J. The brainweb: phase synchronization and large-scale integration. Nature Reviews Neuroscience. 2001;2:229–239. doi: 10.1038/35067550.
  82. Vinck M, Lima B, Womelsdorf T, Oostenveld R, Singer W, Neuenschwander S, Fries P. Gamma-phase shifting in awake monkey visual cortex. Journal of Neuroscience. 2010;30:1250–1257. doi: 10.1523/JNEUROSCI.1623-09.2010.
  83. von Stein A, Sarnthein J. Different frequencies for different scales of cortical integration: from local gamma to long range alpha/theta synchronization. International Journal of Psychophysiology. 2000;38:301–313. doi: 10.1016/S0167-8760(00)00172-0.
  84. Voytek B, D'Esposito M, Crone N, Knight RT. A method for event-related phase/amplitude coupling. NeuroImage. 2013;64:416–424. doi: 10.1016/j.neuroimage.2012.09.023.
  85. Voytek B, Knight RT. Dynamic network communication as a unifying neural basis for cognition, development, aging, and disease. Biological Psychiatry. 2015;77:1089–1097. doi: 10.1016/j.biopsych.2015.04.016.
  86. Wagner FB, Eskandar EN, Cosgrove GR, Madsen JR, Blum AS, Potter NS, Hochberg LR, Cash SS, Truccolo W. Microscale spatiotemporal dynamics during neocortical propagation of human focal seizures. NeuroImage. 2015;122:114–130. doi: 10.1016/j.neuroimage.2015.08.019.
  87. Weiss SA, Lemesiou A, Connors R, Banks GP, McKhann GM, Goodman RR, Zhao B, Filippi CG, Nowell M, Rodionov R, Diehl B, McEvoy AW, Walker MC, Trevelyan AJ, Bateman LM, Emerson RG, Schevon CA. Seizure localization using ictal phase-locked high gamma. Neurology. 2015;84:2320–2328. doi: 10.1212/WNL.0000000000001656.
  88. Whittingstall K, Logothetis NK. Frequency-band coupling in surface EEG reflects spiking activity in monkey visual cortex. Neuron. 2009;64:281–289. doi: 10.1016/j.neuron.2009.08.016.
  89. Whittington MA, Traub RD, Kopell N, Ermentrout B, Buhl EH. Inhibition-based rhythms: experimental and mathematical observations on network dynamics. International Journal of Psychophysiology. 2000;38:315–336. doi: 10.1016/S0167-8760(00)00173-2.
  90. Whittington MA, Cunningham MO, LeBeau FE, Racca C, Traub RD. Multiple origins of the cortical γ rhythm. Developmental Neurobiology. 2011;71:92–106. doi: 10.1002/dneu.20814.
  91. Widge AS, Ellard KK, Paulk AC, Basu I, Yousefi A, Zorowitz S, Gilmour A, Afzal A, Deckersbach T, Cash SS, Kramer MA, Eden UT, Dougherty DD, Eskandar EN. Treating refractory mental illness with closed-loop brain stimulation: progress towards a patient-specific transdiagnostic approach. Experimental Neurology. 2017;287:461–472. doi: 10.1016/j.expneurol.2016.07.021.
  92. Wong YT, Fabiszak MM, Novikov Y, Daw ND, Pesaran B. Coherent neuronal ensembles are rapidly recruited when making a look-reach decision. Nature Neuroscience. 2016;19:327–334. doi: 10.1038/nn.4210.
  93. Wulff P, Ponomarenko AA, Bartos M, Korotkova TM, Fuchs EC, Bähner F, Both M, Tort AB, Kopell NJ, Wisden W, Monyer H. Hippocampal theta rhythm and its coupling with gamma oscillations require fast inhibition onto parvalbumin-positive interneurons. PNAS. 2009;106:3561–3566. doi: 10.1073/pnas.0813176106.
  94. Yousefi A, Basu I, Paulk AC, Peled N, Eskandar EN, Dougherty DD, Cash SS, Widge AS, Eden UT. Decoding hidden cognitive states from behavior and physiology using a bayesian approach. Neural Computation. 2019;31:1751–1788. doi: 10.1162/neco_a_01196.
  95. Zhong W, Ciatipis M, Wolfenstetter T, Jessberger J, Müller C, Ponsel S, Yanovsky Y, Brankačk J, Tort ABL, Draguhn A. Selective entrainment of gamma subbands by different slow network oscillations. PNAS. 2017;114:4519–4524. doi: 10.1073/pnas.1617249114.

Decision letter

Editor: Frances K Skinner
Reviewed by: Alexandre Hyafil, Jan-Mathijs Schoffelen

In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.

[Editors’ note: the authors were asked to provide a plan for revisions before the editors issued a final decision. What follows is the editors’ letter requesting such plan.]

Thank you for sending your article entitled "A statistical modeling framework to assess cross-frequency coupling while accounting for confounding effects" for peer review at eLife. Your article is being evaluated by Laura Colgin as the Senior Editor, a Reviewing Editor, and three reviewers.

Given the list of essential revisions, the editors and reviewers invite you to respond with an action plan and timetable for the completion of the additional work. We plan to share your responses with the reviewers and then issue a binding recommendation.

While the work was appreciated (that is, the importance of including statistical approaches), several aspects were raised that require clarification and additional work (e.g., comparison to other methods, confounding factor issues, etc.). It is unclear whether this would be addressable by the authors and in a timely fashion. As this is understood to be a methods paper, it was not deemed critical for the authors to provide explanations per se of the meaning of results, although they would be welcome to do so.

Essential revisions:

The article by Jessica Nadalin and colleagues makes a key improvement in the statistical methods to detect cross-frequency coupling in neural signals. As the coupling between neural oscillations has become the focus of intense research and has been linked to a wide array of cognitive functions, the proposed method has very general implications for neuroscience. It may constitute the first attempt to assess conjointly different types of CFC. The use of generative models, in particular GLMs, seems like a solid way for building complex statistical approaches – they are increasingly popular in neuroscience to analyze behavioral and neural spiking data. There are nevertheless quite a number of points that should be resolved or clarified in the manuscript to provide a clear validation of the method.

1) The model for PAC and CFC is a Generalized Additive Model (GAM) rather than a GLM, as it takes the form: log μ = f(Фlow).

Here the function f is approximated by splines, which is one typical decomposition of GAMs. See the relevant literature, e.g. textbook by Wood, which provides principled ways of selecting the number of splines, assessing the uncertainty about the inferred function f, performing model comparison, etc. In particular it would be nice to show a few examples of the estimated function f, the typical modulation of Ahigh by Фlow.

2) Presumably, the formula for R is an attempt to mimic the R2 metrics used in linear regression. However, there already exists a series of pseudo-R2 metrics for GLMs, for which pros and cons have been discussed, see e.g. https://web.archive.org/web/20130701052120/http://www.ats.ucla.edu:80/stat/mult_pkg/faq/general/Psuedo_RSquareds.htm. These measures should not suffer from the problem of using an unbounded space as is the case here for RAAC and RCFC. The authors should pick one of these and apply it instead of the R measure (unless authors provide a clear explanation of why they used such definition of R).

3) Comparing statistically CFC values between distinct conditions is a key experimental method. The method described in rat data looks really nice, but it would need to be assessed beforehand on synthetic data. Will the method display false alarms (incorrect detection of changes in CFC) when the overall amplitude of the low frequency changes between conditions, as would direct comparison of CFC values between conditions? A method that gets rid of this confound would be a major advancement. Moreover, authors fail to explain the discrepancy in rat data between MI results and GLM results. Is it because of variations of the amplitude of the low frequency signal between conditions?

4) The authors quite surprisingly test their different methods of computing p-values on different synthetic datasets, and never on the same one. It is frustrating that one cannot conclude in the end which method provides better results. Moreover, the two methods for generic synthetic data should be presented next to each other to allow comparison (the presentations of the equations could be made more similar).

5) It is important to see how the GLM method compares to other PAC detection methods (e.g. MI) for weaker modulations. Is it as sensitive? Moreover, reviewers felt that the comparison of the sensitivity of the PAC measures to changes in Alow (Figure 5) was quite unfair. MI is an unbounded measure while R cannot be larger than 1. If reviewers understood correctly, the base value for the scale factor should be around 0.5 (from Figure 4F), so that it cannot increase more than twofold. This could explain the plateau in Figure 5. Is this correct? Perhaps performing the same analysis with a much weaker coupling would remove this concern.

6) The modulation of PAC by the low-frequency amplitude is an important contribution of the paper. It would deserve an actual figure for the second patient data, illustrating it with the method and showing modulations of Ahigh by Фlow splitting the dataset into low and high Alow. From a mechanistic point of view, the fact that PAC is larger when Alow is larger makes perfect sense, and it has been linked to the generation of AAC (Hyafil et al., 2015 Figure 3). Is there an intuitive generative mechanism that would create lower PAC for larger Alow (as in Figure 7)?

7) Figure 9 and Figure 10 make apparent that the recovered amplitude of the low frequency signal Alow fluctuates within each cycle, which goes against the very notion of amplitude of an oscillation. This can lead to falsely detecting AAC when there is PAC, as is evident for example in Figure 9: Ahigh is higher at the phase of the low-frequency signal where Alow is higher. Proper AAC should mean that Ahigh is higher for entire cycles of low frequency where Alow is high/low.

8) Clarification of confounding aspect intention and circularity perception in the method:

8.1) The simulation part is suboptimal and to some extent based on circularity. Specifically, the method is based on modelling the Hilbert-envelope of high frequency band limited activity with different GLMs, using regressors that are functions of the Hilbert envelope and phase of low frequency band limited activity. The test statistic is derived from the response functions, and statistical inference can be done with parametric methods, or, more appropriately for real data, using bootstrap techniques.

For the simulations, a generative model was used which essentially builds the high frequency amplitude component as a function of low frequency phase component and/or the low frequency amplitude component. Thus, it is not really surprising that the method works, specifically if in the processing and generation of the data similar filter passbands etc. have been used. It is unclear how this can be alleviated, unless the perceived circularity is actually not there, or the authors are able to come up with a fairer way to simulate the data in order to convincingly show that their method works in general.

8.2) Next, in support of the claim that the proposed test-statistic 'accounts for confounding effects', I feel that the evidence presented at most shows that the RCFC metric scales less strongly with low frequency amplitude (Figure 5); this property should not be oversold.

8.3) With respect to the application to real data, the abstract mentions that "we illustrate how CFC evolves during seizures and is affected by electrical stimuli". This indeed is illustrated by the data, in that the CFC metric is modulated. Yet, it is questionable (specifically for the human ictal data) that this reflects (patho)physiologically meaningful interactions between band-limited neural signal components. If anything, the increased CFC metric highlights the highly non-sinusoidal nature of the ictal spikes, and demonstrates the generic sensitivity of CFC-measures to the non-sinusoidality of the associated periodic signal components. This is a well-known feature, and the most important confounding interpretational issue in most cross-frequency analyses. Therefore, the high expectations based on the manuscript's title were not met in that respect. This important issue needs to be discussed in more detail, and the title should be adjusted for the sake of the reader's expectation management.

9) The authors proposed new measures of cross frequency coupling (CFC) in neurophysiological data where the emphasis is placed on phase-amplitude coupling (PAC) and amplitude-amplitude coupling (AAC). Statistical properties of the new estimators are discussed. These methods are tested in simulated data and in neural recordings from human epilepsy patients and rodents undergoing electrical stimulation. Whereas CFC is an important research area, and better measures are always welcome, it was not clear if the proposed methods are sufficiently novel to potentially offer new physiological insights into the neural operations represented by the data.

Estimating phase and amplitude from the Hilbert transform requires the signal to be narrow band. Whereas the 4-7 Hz filtering band for theta phase and amplitude estimation is adequate, the 100-140 Hz filtering band for estimating high gamma phase and amplitude is too wide; it is thus likely that the phase and amplitude estimated this way are not accurate. A way to empirically test whether the filtering band is sufficiently narrow is to plot the real and the imaginary part of the filtered signal in a phase portrait to see whether the trajectory moves around a well-formed center. It is the rotation around this center that allows us to define instantaneous phase and amplitude properly from Hilbert transforms.

Numerous ad hoc assumptions go into Equations (1)-(4). It is difficult to follow the logic behind the method. The beauty of the original CFC measures (e.g., Canolty's or Tort's definition of PAC) is that they are simple and intuitive.

The synthetic time series considered here are not biologically realistic. First, spiking neural models should be used to generate such time series. In particular, the model should incorporate the property that gamma and high gamma in some way reflect neural spiking. Second, noise should be added to the synthetic time series to mimic the real world recordings; the amplitude of the noise can be used to assess the influence of signal to noise ratio on the various CFC quantities defined.

For the human seizure data, the authors only evaluated their new measures on the data, but did not compute the commonly applied PAC measures for comparison.

[Editors' note: further revisions were requested prior to acceptance, as described below.]

Thank you for resubmitting your work entitled "A statistical framework to assess cross-frequency coupling while accounting for modeled confounding effects" for further consideration at eLife. Your revised article has been favorably evaluated by Laura Colgin (Senior Editor), a Reviewing Editor (Frances Skinner), and 3 reviewers (Reviewer #1: Alexandre Hyafil; Reviewer #2: Jan-Mathijs Schoffelen; Reviewer #3 has opted to remain anonymous).

The reviewers appreciated the new figures and improvements throughout. The manuscript is likely to be accepted after the remaining essential issues below are addressed; however, they still need to be addressed. The main thrust of most of these requests is to ensure clarity for readers of where this approach is situated relative to others, and in some cases, there may have been misinterpretations of previous requests.

1) Title: Another suggestion is that one could use "analysis", rather than "modeling" to have a title of "… confounding analysis effects". This is a suggestion, not required.

2) The new Figure 2 is fine. It was thought that the comment regarding GLM vs. GAM was misinterpreted. The point was conceptual rather than algorithmic. The essence of the models described here is to capture a nonlinear mapping of the regressor(s), that is then transformed into the observable using a link function and some observation noise: this is exactly what a GAM is. As described in the textbook by Wood (section 4.2), using basis functions such as splines, a GAM can be transformed into a (penalized) GLM, and then exactly fit using the standard GLM procedure that authors have used. (I believe backfitting is rather an outdated form of fitting GAMs). Please simply add some wording to make this clear. That is, the suggestion is not to change the core model fitting part, only to acknowledge the direct link with GAM.

3) We understand that the R measure captures how strongly the regressors modulate high frequency amplitude: this is exactly what R2 does in linear regression (assessing the part of the variance explained by the model), although it is sometimes described incorrectly as measuring goodness-of-fit. In their response, the authors claim a somewhat different thing, that their measure "estimate[s] differences between fits from two different models". Now, if this refers to model comparison, again, there is a large literature on how to perform model comparison between GLMs (including AIC, cross-validation, etc.). So I can see no reason to create a new method that suffers some important drawbacks (not derived from any apparent principle, arbitrariness for unbounded regressors, PAC measure is modulated by levels of AAC and low frequency amplitude, etc.) and has not been tested against those standard measures. We fully support the idea of using GLMs as statistical tools for complex signals; it is a pity though not to leverage the rich tools that have been developed within the GLM framework (and beyond that in machine learning). Please either use these rich tools or provide a proper argumentation of why your measure is used.

4) The new simulations represent an important step in the good direction but the analysis performed on the synthetic data is not quite the one performed on the experimental data. Figure 7 shows that the R measure is modulated by the amplitude of the low frequency signal as well as AAC (although less so than the MI measure), preventing any direct comparison of PAC between differing conditions. Now, on the rodent data, the authors very astutely use a single model for both conditions, and indicator functions (Equation 10). For some reason, the model comparison performed in the previous version of the manuscript has disappeared. This was viewed as regrettable, as this indicated that PAC was modulated between the two experimental conditions, something that direct comparison of MI/R values cannot afford to test. This is also the manipulation that we would like to see tested on synthetic data, to determine whether the authors' method isolates modulations of PAC from modulations of low frequency amplitude and AAC.

In other words, it would be great to see this method tested on synthetic data to see if they get rid of the confounds, and then possibly applied on rat data as in previous manuscript. This was viewed as something that could make the paper much better. However, it is acceptable to also choose to trim the comparison between conditions part. If so, the authors are requested to be very explicit in their wording that by using separate models for different conditions, they cannot get rid of possible confounds.

5) The R measure seems pretty much as sensitive as the MI measure to low PAC. However, the two measures cannot be compared directly. Since most research is geared towards assessing significance levels of PAC, it would be interesting to rather show how the two methods compare in terms of statistical power: comparing type I error rates for low levels of PAC (although reviewers are a bit wary of what could be the results, since the type II error rate related to Figure 5 was less than 1% when it is supposedly calibrated to be 5%, suggesting the method could be too conservative).

6) The response to point 7 misses the important message: the extraction of the amplitude of the low frequency signal is intrinsically flawed as the recovered amplitude fluctuates within cycles of the low frequency rhythm. Looking at Figure 9C, peaks of Alow and Ahigh coincide within these cycles, which will very likely give rise to spurious AAC for this segment (irrespective of whether slow fluctuations of Alow do modulate Ahigh). It is unclear why this is apparent here but was not detected in the analysis of synthetic data, but in any case, it would require in our opinion a serious assessment of the properties of the algorithm to extract Alow. The authors' solution (changing the frequency range of the simulation to make the problem less apparent) is not a valid one. In any case, we can still see the issue in Figure 11F: sub-second fluctuations in Alow. Perhaps one quick fix would simply be to use a lowpass filter for Alow to remove these spurious fluctuations. It makes no sense conceptually that a signal assessing the amplitude of an oscillation would fluctuate rapidly within each cycle of that oscillation.

7) Regarding point 8.1 in the rebuttal: the change proposed in generating the signals was viewed as just a minor cosmetic change. It does not take away our concern for circularity, since the high frequency amplitude is still a direct function of the phase time series of the low frequency signal. Also, a change in signal generation was only done for the PAC, and not for the AAC, which suffers from the same circularity concern. As mentioned before, perhaps this concern cannot be alleviated at all (although we think that the authors could have done a better job by generating both the low frequency phase time series and the high frequency amplitude time series as a (possibly non-linear) function of a third unobserved time series). Either way, the authors need to at least discuss this concern for circularity explicitly, and argue why this in their view is not a problem in convincingly showing the utility of their new method.

8) Regarding point 8.3 in the rebuttal: we find the proposed fourth concern far too theoretical to be of practical use. The readership really should be informed about the interpretational confounds of CFC metrics, as mentioned in an original comment. We are not convinced that 'appropriate filtering of the data into high and low frequency components' is fundamentally possible, but we'd like to stand corrected with a convincing argumentation. In other words, the authors are requested to be very explicit in the interpretational limitations of any CFC measure, which is independent of the signal processing that is applied to the data before the measure is computed. This is important to be very clear about since 'non-methods' people may use the method without too much thinking about the potential shortcomings.

9) The authors are asked to edit a sentence in their Introduction about cellular mechanisms being ‘well-understood’ for gamma and theta – this was viewed as somewhat arguable for theta. Tort et al. is cited for theta which does not seem to be an appropriate reference for cellular perspectives. A recent paper for cellular mechanisms for theta could be used (Ferguson et al., 2018).

10) Please also consider the following: a) Some of the figures/panels could be placed as figure supplements to avoid distracting the reader away from the main points of the manuscript.

b) The figures could be polished a bit more, notably:

- Use labels such as 'π/2' for phase variables.

- Merge single-line panels when possible (e.g. Figure 12B,C).

- Improve the readability of the 3D figures (e.g. using meshgrids; perhaps plotting only the full model surface in Figure 5).

- Adjust font size.

- Adjust axis limit and panel size for better readability (for some panels it's hard to see anything beyond noise, e.g. Figure 4, Figure 9B,C, Figure 10B).

- Figure 5G,K and O are difficult to read.

- Figure 11B: are these stacked bars or is RPAC always larger than RAAC? In the former case, it makes RPAC difficult to read: plot unstacked bars or curves instead.

- Figure 11F: use different scales for Ahigh and Alow to make Ahigh visible (or normalize signals).

c) Names of the models: why name them “Фlow” and “Alow” rather than PAC and AAC models?

d) Isn't the constant offset β0 missing from Equation 1 and Equation 3?

e) Please use semi-colons instead of commas to separate possible values of x (it took me a while to understand this).

f) In the spiking model, specify that the slow oscillation is imposed externally, not generated by the network.

g) "at the segment indicated by the asterisk in Figure 11B (…)": there is no asterisk in Figure 11B.

h) Figure 11F: " Ahigh increases with Alow over time"-> not clear. Ahigh increases over time but it seems that it is higher for intermediate than for high value of Alow.

i) A concern was raised about whether the frequency band corresponding to the low signal in the human data is described in the text.

eLife. 2019 Oct 16;8:e44287. doi: 10.7554/eLife.44287.018

Author response


Essential revisions:

1) The model for PAC and CFC is a Generalized Additive Model (GAM) rather than a GLM, as it takes the form: log μ = f(Фlow).

Here the function f is approximated by splines, which is one typical decomposition of GAMs. See the relevant literature, e.g. textbook by Wood, which provides principled ways of selecting the number of splines, assessing the uncertainty about the inferred function f, performing model comparison, etc. In particular it would be nice to show a few examples of the estimated function f, the typical modulation of Ahigh by Фlow.

To address these comments, we have updated the manuscript in the following ways. First, we now describe a principled procedure to select the number of splines. We use an AIC-based selection procedure, as described in (Kramer and Eden, 2013). We have updated the manuscript as follows:

“Here, we fix 𝑛 = 10, which is a reasonable choice for smooth PAC with one or two broad peaks (Karalis et al., 2016). To support this choice, we apply an AIC-based selection procedure to 1000 simulated instances of signals of duration 20 s with phase-amplitude coupling and amplitude-amplitude coupling (see Methods: Synthetic Time Series with PAC and Synthetic Time Series with AAC, below, for simulation details). For each simulation, we fit Model 1 to these data for 27 different values of 𝑛 from 𝑛 = 4 to 𝑛 = 30. For each simulated signal, we record the value of 𝑛 such that we minimize the AIC, defined as

AIC = Δ + 2𝑛,

where Δ is the deviance from Model ​1​. The values of 𝑛 that minimize the AIC tend to lie between 𝑛 = 7 and 𝑛 = 12 (Figure ​2​). These simulations support the choice of 𝑛 = 10 as a sufficient number of​ ​splines.​”
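For readers who would like a concrete picture of this selection procedure, a minimal sketch might look as follows. The Gamma-family GLM with a log link, the patsy cyclic-spline basis, and all function and variable names are illustrative assumptions, not the code used for the manuscript:

```python
import numpy as np
import statsmodels.api as sm
from patsy import dmatrix

def select_n_splines(a_high, phi_low, candidate_n=range(4, 31)):
    """Return the spline count n (4..30) that minimizes AIC = deviance + 2n."""
    best_n, best_aic = None, np.inf
    for n in candidate_n:
        # Cyclic (periodic) cubic spline basis of the low-frequency phase.
        X = dmatrix(f"cc(phi, df={n}) - 1", {"phi": phi_low}, return_type="dataframe")
        fit = sm.GLM(a_high, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
        aic = fit.deviance + 2 * n  # AIC as defined in the quoted text
        if aic < best_aic:
            best_n, best_aic = n, aic
    return best_n
```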

Second, we note that one purpose of our method is to examine the impact of Фlow and Alow on Ahigh in 3-dimensional space. However, in response to this comment, we have also updated the manuscript to show examples of the requested projection: the estimated modulation of Ahigh by Фlow. We now include examples of this estimation in Figure 12. Please see our response to comment (6) for examples.

Third, we would like to thank the reviewer for bringing up the distinction between GAMs and GLMs in our approach. We note that although the models [1] and [3] incorporate spline basis functions of low frequency phase as predictors, these models are still GLMs, as the link function of the conditional mean of the response variable (Ahigh) varies linearly with all of the model parameters to be estimated. More specifically, we note that the coefficients 𝛽k multiply the spline basis functions, remaining outside of the functions themselves, consistent with the definition of GLMs. This allows all of the parameters to be estimated directly via an iteratively reweighted least squares procedure, as is common for GLM fitting, as opposed to a more computationally intensive backfitting procedure often used for GAM fitting. We now make this distinction clear in the revised manuscript as follows:

“We use a tension parameter of 0.5, which controls the smoothness of the splines. ​We note that, because the link function of the conditional mean of the response variable (Ahigh​) varies linearly with the model coefficients 𝛽​k​, the model is a GLM. ​Here, we fix n=10, which is a reasonable choice […]”

We chose to use GLMs in part for computational efficiency: as noted above, we can fit the GLMs in models [1]-[3] directly using iteratively reweighted least squares, whereas GAMs would require a more complex backfitting algorithm, adding considerable computation time to our method. This computational efficiency is especially important in our approach. For example, to create each surface ​S​, we fit a separate GLM 640 times (once for each value of A​low), and to compute ​p​-values, we repeat this entire procedure 1000 times. Hence, any small increase in computation time would have a multiplicatively large impact, resulting in a computationally prohibitive measure.
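As an illustration of the kind of nested fits described here, a sketch of the three models as we describe them might look like the following. Statsmodels fits GLMs by iteratively reweighted least squares by default; the Gamma family with log link and the spline-basis construction are assumptions for illustration only:

```python
import numpy as np
import statsmodels.api as sm
from patsy import dmatrix

def fit_cfc_glms(a_high, phi_low, a_low, n=10):
    """Fit the phi_low-only, A_low-only, and combined GLMs (statsmodels uses IRLS)."""
    family = sm.families.Gamma(link=sm.families.links.Log())
    # Periodic spline basis of phi_low; intercept omitted to mirror the quoted model (our design choice).
    X_phi = np.asarray(dmatrix(f"cc(phi, df={n}) - 1", {"phi": phi_low}))
    X_amp = sm.add_constant(np.asarray(a_low, dtype=float))            # constant plus linear A_low
    X_full = np.column_stack([X_phi, np.asarray(a_low, dtype=float)])  # splines of phi_low plus A_low
    fit_phi = sm.GLM(a_high, X_phi, family=family).fit()
    fit_amp = sm.GLM(a_high, X_amp, family=family).fit()
    fit_full = sm.GLM(a_high, X_full, family=family).fit()
    return fit_phi, fit_amp, fit_full
```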

Finally, we note that the primary focus of our method is to determine the impact of predictors (ɸlow, Alow) on the response variable Ahigh by fitting a full model with functions of both predictors and a smaller, nested model with functions of only a single predictor, and comparing the difference in fits between these models. We use this difference to measure the impact of ɸlow or Alow on Ahigh. We have shown in simulation and in our data that the GLMs in models [1]-[3] are sufficiently sensitive to differences in these model fits, detecting even weak impacts of ɸlow and Alow on Ahigh. However, in a case where greater flexibility is needed, that is, where GLMs fail to sufficiently capture subtle impacts of these predictors, it could be beneficial to explore the broader class of GAMs. Extending our method to use a broader class of GAMs in lieu of GLMs would be relatively straightforward: we would construct the surfaces Sɸlow,Alow, SAlow, and Sɸlow as before, but would replace the models [1]-[3] with GAMs, which could include additional parameters related to the splines. We now mention this important extension in the Discussion section as follows:

“​The proposed method can easily be extended by inclusion of additional predictors in the GLM. Polynomial Alow​ predictors, rather than the current linear A​low​ predictors, may better capture the relationship between A​low ​and Ahigh​. […] The code developed to implement the method is flexible and modular, which facilitates ​modifications and extensions motivated by the particular data analysis scenario. ​This modular code, available at […]​”

2) Presumably, the formula for R is an attempt to mimic the R2 metrics used in linear regression. However there already exists a series of pseudo-R2 metrics for GLMs, for which pros and cons have been discussed, see e.g. https://web.archive.org/web/20130701052120/http://www.ats.ucla.edu:80/stat/mult_pkg/faq/general/Psuedo_RSquareds.htm. These measures should not suffer from the problem of using an unbounded space as is the case here for RAAC and RCFC. The authors should pick one of these and apply it instead of the R measure (unless the authors provide a clear explanation of why they used such a definition of R).

First, we agree that – in retrospect – the choice of symbol R may be confusing. In this manuscript, the measure R is based on the distance between fitted distributions. This notation is motivated by our previous work in (Kramer and Eden, 2013). Unlike the R2 metrics for linear regression, our measure is not meant to estimate the goodness-of-fit of the models to the data, but rather to estimate differences between fits from two different models. To make clear this distinction, we have updated our manuscript to include the following text:

“​The statistic ​R​PAC​ measures the effect of low frequency phase on high frequency amplitude, while accounting for fluctuations in the low frequency amplitude. To compute this statistic, we note that the model in Equation ​3 ​measures the combined effect of 𝐴​low​ and 𝜙​low​ on 𝐴​high​, while the model in Equation ​2 ​measures only the effect of 𝐴low on 𝐴high. Hence, to isolate the effect of 𝜙​low​ on 𝐴​high​, while accounting for 𝐴​low​, we compare the difference in fits between the models in Equations ​2 ​and ​3​.”

“​However, in the presence of PAC, we expect 𝑆​𝐴low, 𝜙low​ to deviate from 𝑆​𝐴low​, resulting in a large value of ​R​PAC​. We note that this measure, unlike R2 metrics for linear regression, is not meant to measure the goodness-of-fit of these models to the data, but rather the differences in fits between the two models.​”
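A deliberately simplified sketch of this distance-based statistic is shown below; it compares the two models' fitted values at the observed samples rather than constructing the full (Alow, ɸlow) surfaces described in the text, but it conveys the maximum-absolute-fractional-difference idea behind RPAC and RAAC. All names are hypothetical:

```python
import numpy as np

def r_statistic(fit_reduced, fit_full):
    """Maximum absolute fractional difference between two models' fitted values."""
    s_reduced = np.asarray(fit_reduced.fittedvalues, dtype=float)
    s_full = np.asarray(fit_full.fittedvalues, dtype=float)
    return float(np.max(np.abs(1.0 - s_reduced / s_full)))

# Usage with the fits from the earlier sketch (hypothetical names):
# R_PAC compares the A_low-only model against the full (A_low, phi_low) model,
# R_AAC compares the phi_low-only model against the full model.
# r_pac = r_statistic(fit_amp, fit_full)
# r_aac = r_statistic(fit_phi, fit_full)
```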

3) Comparing statistically CFC values between distinct conditions is a key experimental method. The method described in rat data looks really nice, but it would need to be assessed beforehand on synthetic data. Will the method display false alarms (incorrect detection of changes in CFC) when the overall amplitude of the low frequency changes between conditions, as would direct comparison of CFC values between conditions? A method that gets rid of this confound would be a major advancement. Moreover, authors fail to explain the discrepancy in rat data between MI results and GLM results. Is it because of variations of the amplitude of the low frequency signal between conditions?

The reviewer makes an important point: a simulation that mimics the results observed in the rodent data would enhance interpretation of these results. To that end, we now include in the revised manuscript new simulations to mimic the observed results in the rodent data. As recommended by the reviewer, we change the overall amplitude of the low frequency signal (𝐴low) and the AAC between two conditions, and compare the MI and RPAC between these two conditions. We find, in the absence of actual PAC, significant MI values while the RPAC values remain insignificant (please see text in comment 8.2).

4) Authors quite surprisingly test their different methods of computing p-values on different synthetic datasets, and never on the same one. It is frustrating that one cannot conclude in the end which method provides better results. Moreover, the two methods for generating synthetic data should be presented next to each other to allow comparison (the presentations of the equations could be made more similar).

We agree with the reviewer that, in retrospect, computing p-values in different ways for different simulations was confusing. To address this, we have eliminated the use of analytic p-values in favor of bootstrap p-values. Doing so better aligns our analysis of the simulated data with the in​ vivo​ data, and simplifies the analysis presentation. Similarly, in the revised manuscript, we now generate synthetic data with PAC using only one method. Doing so greatly simplifies our analysis and presentation, and additionally circumvents the circularity concern raised in point (8) below.

To make these changes, we have removed the section ​“Assessing significance of AAC, PAC, and CFC with analytic p-values”, ​and have renamed the section ​“Assessing significance of AAC, PAC, and CFC with bootstrap p-values”​ as​ “Assessing significance of AAC and PAC with bootstrap p-values”.​ We note that, in this revised section, we no longer compute a p-value for the CFC. We have chosen to eliminate p-value calculations for CFC to focus on the specific CFC types of interest (i.e., PAC and AAC); doing so further simplifies the manuscript presentation. This section begins:

“​Assessing significance of AAC and PAC with bootstrap p-values. ​To assess whether evidence exists for significant PAC or AAC, we implement a bootstrap procedure​ ​to compute p-values ​as follows. Given two signals 𝑉low and 𝑉high, and the resulting ​estimated statistics R​PAC​ and ​R​AAC​ we apply the Amplitude Adjusted Fourier Transform (AAFT) algorithm (Siegel et al., 2009) on 𝑉high to generate a surrogate …”
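To make the surrogate procedure concrete, the following sketch implements a generic AAFT surrogate and a bootstrap p-value. The `statistic` callable that recomputes RPAC or RAAC from a surrogate, and the +1 correction in the p-value, are our illustrative choices rather than the exact implementation:

```python
import numpy as np

def aaft_surrogate(x, rng):
    """Amplitude Adjusted Fourier Transform surrogate of a 1-d signal."""
    x = np.asarray(x, dtype=float)
    ranks = np.argsort(np.argsort(x))
    # 1) Rank-remap Gaussian noise onto the ordering of x.
    gauss = np.sort(rng.standard_normal(x.size))[ranks]
    # 2) Randomize the Fourier phases of the Gaussianized series.
    spec = np.fft.rfft(gauss)
    phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
    phases[0] = 0.0
    shuffled = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)
    # 3) Rank-remap the original values onto the phase-randomized series.
    return np.sort(x)[np.argsort(np.argsort(shuffled))]

def bootstrap_p(r_observed, v_high, statistic, n_surrogates=1000, seed=0):
    """Fraction of surrogate statistics at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    r_surr = np.array([statistic(aaft_surrogate(v_high, rng)) for _ in range(n_surrogates)])
    return (1 + np.sum(r_surr >= r_observed)) / (n_surrogates + 1)
```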

5) It is important to see how the GLM method compares to other PAC detection methods (e.g. MI) for weaker modulations. Is it as sensitive? Moreover, reviewers felt that the comparison of the sensitivity of the PAC measures to changes in Alow (Figure 5) was quite unfair. MI is an unbounded measure while R cannot be larger than 1. If reviewers understood correctly, the base value for the scale factor should be around 0.5 (from Figure 4F), so that it cannot show more than a twofold increase. This could explain the plateau in Figure 5. Is this correct? Perhaps performing the same analysis with a much weaker coupling would remove this concern.

To address this comment, we have included new simulations to compare the GLM method and the modulation index for weaker modulations. We have updated the manuscript to include the following new subsection, “RPAC and modulation index are both sensitive to weak modulations”:

“To investigate the ability of the proposed method and the modulation index to detect weak coupling between the low frequency phase and high frequency amplitude, we perform the following simulations. For each intensity value 𝐼​PAC​ between 0 and 0.5 (in steps of 0.025), we simulate 1000​ ​signals (see Methods) and compute ​R​PAC​ and a measure of PAC in common use: the modulation index ​MI (Theller et al., 1992) (Figure ​6​). We find that both ​MI ​and ​R​PAC​, while small, increase with 𝐼​PAC​; in this way,​ ​both measures are sensitive to small values of 𝐼PAC​.​”

We also note that R is an unbounded measure, as it equals the maximum absolute fractional difference between distributions, which may exceed 1. We now state this clarification in the revised manuscript as follows:

“​However, in the presence of PAC, we expect 𝑆​𝐴low, 𝜙low​ to deviate from 𝑆​𝐴low​, resulting in a large value of ​R​PAC​. We note that this measure, unlike R2 metrics for linear regression, is not meant to measure the goodness-of-fit of these models to the data, but rather the differences in fits between the two models. We also note that ​R​PAC​ is an unbounded measure, as it equals the maximum absolute fractional difference between distributions, which may exceed 1.​”
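For readers comparing against the modulation index, the sketch below shows one widely used variant (the KL-divergence formulation of Tort and colleagues); we assume this is representative of the MI referred to above, and the bin count and names are illustrative:

```python
import numpy as np

def modulation_index(phi_low, a_high, n_bins=18):
    """KL-divergence-based modulation index of the phase-binned mean amplitude."""
    a_high = np.asarray(a_high, dtype=float)
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    bins = np.clip(np.digitize(phi_low, edges) - 1, 0, n_bins - 1)
    # Mean high-frequency amplitude in each phase bin (assumes every bin contains samples).
    mean_amp = np.array([a_high[bins == b].mean() for b in range(n_bins)])
    p = mean_amp / mean_amp.sum()        # normalized phase-amplitude distribution
    kl = np.sum(p * np.log(p * n_bins))  # KL divergence from the uniform distribution
    return kl / np.log(n_bins)           # normalized to lie in [0, 1]
```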

6) The modulation of PAC by the low-frequency amplitude is an important contribution of the paper. It would deserve an actual figure for the second patient data, illustrating it with the method and showing modulations of Ahigh by Фlow splitting the dataset into low and high Alow. From a mechanistic point of view, the fact that PAC is larger when Alow is larger makes perfect sense, and it has been linked to the generation of AAC (Hyafil et al., 2015 Figure 3). Is there an intuitive generative mechanism that would create lower PAC for larger Alow (as in Figure 7)?

As recommended by the reviewer, we now include in the revised manuscript a new figure showing the modulation of Ahigh by Фlow for the data from the second patient. We show the results of the complete model surface in the three-dimensional space (Фlow, Alow, Ahigh), and the components of this surface when Alow is small, and when Alow is large. We describe this new figure in subsection “Application to in vivo human seizure data” as follows:

“​We show an example 𝑆​𝐴low,𝜙low​ surface, and visualizations of this surface at small and large Alow values, in Figure ​12​.​”

We agree with the reviewer that linking the simulated and observed CFC to candidate biological mechanisms is an important – and very interesting – goal. However, as suggested by the editor, we refrain from speculating on these generative mechanisms in this methods-focused manuscript.

7) Figure 9 and Figure 10 make apparent that the recovered amplitude of the low frequency signal Alow fluctuates within each cycle, which goes against the very notion of amplitude of an oscillation. This can lead to falsely detecting AAC when there is PAC, as is evident for example in Figure 9: Ahigh is higher at the phase of the low-frequency signal where Alow is higher. Proper AAC should mean that Ahigh is higher for entire cycles of low frequency where Alow is high/low.

The reviewer raises an important issue, which made clear the difficulty in interpreting Figure 9 and Figure 10 of the original manuscript. As noted by the reviewer, in the original Figure 9, the low frequency signal visible in the unfiltered trace (V, blue) was slower than the low frequency band we isolated to study. To address this, we now select a low frequency band (1-3 Hz) more consistent with the dominant rhythms visible in the unfiltered signal. In addition, to allow a more direct comparison between Ahigh and Alow, we have updated the figures to include Alow. Finally, we have removed the second example of CFC in human seizure data, to reduce the number of examples and figures in the paper.

8) Clarification of confounding aspect intention and circularity perception in the method:

8.1) The simulation part is suboptimal and to some extent based on circularity. Specifically, the method is based on modelling the Hilbert-envelope of high frequency band limited activity with different GLMs, using regressors that are functions of the Hilbert envelope and phase of low frequency band limited activity. The test statistic is derived from the response functions, and statistical inference can be done with parametric methods, or, more appropriately for real data, using bootstrap techniques.

For the simulations, a generative model was used which essentially builds the high frequency amplitude component as a function of low frequency phase component and/or the low frequency amplitude component. Thus, it is not really surprising that the method works, specifically if in the processing and generation of the data similar filter passbands etc. have been used. It is unclear how this can be alleviated, unless the perceived circularity is actually not there, or the authors are able to come up with a fairer way to simulate the data in order to convincingly show that their method works in general.

We agree with the reviewer that simulating and measuring PAC with the same generative models weakens the significance of the results. Therefore, we have updated the manuscript to simulate all instances of PAC using the pink noise based method, rather than the GLM-based method. In addition, in the revised manuscript, we now only utilize bootstrap p-values to assess significance. These two changes focus the results on methods applicable to real world data, make the manuscript less verbose, and address the circularity concern.

In the revised manuscript, we now omit the sections ​Assessing significance of AAC, PAC, and CFC with analytic p-values, ​and we have revised the section ​Synthetic Time Series with PAC ​to reflect our use of the pink noise based method to generate simulated time series as follows:

“​We construct synthetic time series to examine the performance of the ​proposed method ​as follows. First, we simulate 20 s of pink noise data such that the power spectrum scales as 1⁄𝑓. […] We create a new signal 𝑉′ with the same phase as 𝑉​high​, but with amplitude dependent on the phase of 𝑉​low​ by setting,

𝑉’​high​ = ​M ​𝑉​high​ ​.

We create the final voltage trace 𝑉 as

𝑉 = 𝑉​low​ +𝑉​′high​ +𝑐∗𝑉​pink,

where 𝑉​pink​ is a new instance of pink noise multiplied by a small constant 𝑐 = 0.01. ​”
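An illustrative reconstruction of this pink-noise-based recipe is sketched below; the specific modulation signal M (a smooth function of the low-frequency phase scaled by an intensity parameter), the filter choices, and the parameter values are stand-ins for the elided details in the quoted text:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def pink_noise(n_samples, rng):
    """Noise whose power spectrum scales approximately as 1/f."""
    freqs = np.fft.rfftfreq(n_samples)
    spec = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    spec[1:] /= np.sqrt(freqs[1:])
    spec[0] = 0.0
    return np.fft.irfft(spec, n=n_samples)

def simulate_pac(duration_s=20.0, fs=1000.0, i_pac=1.0, c=0.01, seed=0):
    """Pink-noise-based signal with phase-amplitude coupling, loosely following the quoted recipe."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    noise = pink_noise(n, rng)
    v_low = filtfilt(*butter(2, [4, 7], btype="bandpass", fs=fs), noise)
    v_high = filtfilt(*butter(2, [100, 140], btype="bandpass", fs=fs), noise)
    phi_low = np.angle(hilbert(v_low))
    # Stand-in modulation M: boosts the high-frequency amplitude near phi_low = 0,
    # scaled by the coupling intensity i_pac (our choice of functional form).
    m = 1.0 + i_pac * 0.5 * (1.0 + np.cos(phi_low))
    v_high_prime = m * v_high
    return v_low + v_high_prime + c * pink_noise(n, rng)
```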

8.2) Next, in support of the claim that the proposed test-statistic 'accounts for confounding effects', I feel that the evidence presented at most shows that the RCFC metric scales less strongly with low frequency amplitude (Figure 5), this property should not be oversold.

To address further this important concern, we performed two additional simulations (Figure 7 in the revised manuscript) to compare how RPAC and MI behave under fixed PAC and increased Alow and AAC. These simulations are motivated in part by the results of the in vivo rodent data. In the first set of simulations, we fix PAC at a non-zero value, and increase Alow and AAC. We find that both RPAC and MI increase with increased Alow and AAC, but this increase is much less dramatic for RPAC. In the second set of simulations, we consider the absence of PAC, under increased AAC and Alow. We find that MI frequently detects significant PAC while RPAC does not. We include these simulation results in the revised subsection “The proposed method is less affected by fluctuations in low-frequency amplitude and AAC”.

“Increases in low frequency power can increase measures of ​phase-amplitude ​coupling, although the underlying ​PAC ​remains unchanged (Aru et al., 2016; Cole and Voytek et al., 2017). […] We conclude that in the presence of increased low frequency amplitude and amplitude-amplitude coupling, ​MI ​may detect PAC where none exists, while ​R​PAC​, which accounts for fluctuations in low frequency amplitude, does not.​”

8.3) With respect to the application to real data, the abstract mentions that "we illustrate how CFC evolves during seizures and is affected by electrical stimuli". This indeed is illustrated by the data, in that the CFC metric is modulated. Yet, it is questionable (specifically for the human ictal data) that this reflects (patho)physiologically meaningful interactions between band-limited neural signal components. If anything, the increased CFC metric highlights the highly non-sinusoidal nature of the ictal spikes, and demonstrates the generic sensitivity of CFC-measures to the non-sinusoidality of the associated periodic signal components. This is a well-known feature, and the most important confounding interpretational issue in most cross-frequency analyses. Therefore, the high expectations based on the manuscript's title were not met in that respect. This important issue needs to be discussed in more detail, and the title should be adjusted for the sake of the reader's expectation management.

We agree with the reviewer that the non-sinusoidal nature of real brain data is an important confound in CFC analysis; it is an issue that has bothered us for some time (Kramer, Tort and Kopell, ​2008). As recommended by the reviewer, we have adjusted the Title to manage better the reader’s expectations: “​A statistical framework to assess cross-frequency coupling while accounting for modeled​ confounding effects​”

We also now mention this important issue in the revised manuscript as follows:

“​Like all measures of CFC, the proposed method possesses specific limitations. We discuss ​four limitations here. […] ​Fourth, we note that the proposed modeling framework assumes appropriate filtering of the data into high and low frequency bands. This filtering step is a fundamental component of CFC analysis, and incorrect filtering may produce spurious or misinterpreted results (Aru et al., 2015; Kramer et al., 2008; Scheffer-Texeira et al., 2013). While the modeling framework proposed here does not directly account for artifacts introduced by filtering, additional predictors (e.g., detections of sharp changes in the unfiltered data) in the model may help mitigate these filtering effects.​”

9) Estimating phase and amplitude from Hilbert transform requires the signal to be narrow band. Whereas the 4-7 Hz filtering band for theta phase and amplitude estimation is adequate, the 100-140 Hz filtering band for estimating high gamma phase and amplitude is too wide; it is thus likely that the phase and amplitude estimated this way is not accurate. A way to empirically test whether the filtering band is sufficiently narrow is to plot the real and the imaginary part of the filtered signal in a phase portrait to see whether the trajectory moves around a well-formed center. It is the rotation around this center that allows us to define instantaneous phase and amplitude properly from Hilbert transforms.

The reviewer raises an important point. We agree that using the Hilbert transform to estimate the instantaneous phase is ill-suited for a wide frequency band, and therefore choose this low-frequency band to be narrow. We note that, for the high frequency band, we only estimate the amplitude (and not the phase). We do so motivated by the existing neuroscience literature that utilizes wide, high frequency bands in practice (e.g., Canolty et al., 2006) and advocates for choosing high frequency bands that are wide enough (Aru et al., 2015). We also note that the choice of a wide high frequency band is consistent with the mechanistic explanation that extracellular spikes produce this broadband high frequency activity (Ray and Maunsell, 2011).

To state this clearly in the revised manuscript, we now include the following text:

“​However, we note that this method is flexible and not dependent on this choice, ​and that we select a wide high frequency band consistent with recommendations from the literature (Aru et al., 2015) and the mechanistic explanation that extracellular spikes produce this broadband high frequency activity (Sase et al., 2017). ​We use the Hilbert transform to compute the analytic signals.…​”
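The diagnostic suggested in the comment above is straightforward to script; a hedged sketch follows, in which the Butterworth filter, band edges, and sampling rate are placeholders. A trajectory that rotates around a well-formed center supports treating the band as narrow enough for instantaneous phase estimation:

```python
import matplotlib.pyplot as plt
from scipy.signal import butter, filtfilt, hilbert

def phase_portrait(v, fs, band):
    """Plot Re vs. Im of the analytic signal to check that the band is narrow enough."""
    b, a = butter(2, band, btype="bandpass", fs=fs)
    analytic = hilbert(filtfilt(b, a, v))
    plt.plot(analytic.real, analytic.imag, lw=0.5)
    plt.axhline(0, color="k", lw=0.5)
    plt.axvline(0, color="k", lw=0.5)
    plt.xlabel("Re(analytic signal)")
    plt.ylabel("Im(analytic signal)")
    plt.title(f"{band[0]}-{band[1]} Hz phase portrait")
    plt.show()

# e.g. phase_portrait(v, fs=1000.0, band=(4, 7)) versus phase_portrait(v, fs=1000.0, band=(100, 140))
```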

Numerous ad hoc assumptions go into Equations (1)-(4). It is difficult to follow the logic behind the method. The beauty of the original CFC measures (e.g., Canolty's or Tort's definition of PAC) is that they are simple and intuitive.

To better describe the logic of the new method, we further explain the reasoning that motivates the method, and include the intuition that the proposed analysis measures the distances between distributions fit to the data. We now include this explanation in the revised manuscript as follows:

“​Generalized linear models (GLMs) provide a principled framework to assess CFC (Kramer and Eden, 2013; Osipova et al., 2008; Tort et al., 2007). […] If these models fit the data sufficiently well, then we estimate distances between the modeled surfaces to measure the impact of each predictor. ​”

In addition, to enhance the flow of the logic, we have simplified the presentation by: (i) removing the calculation of analytic p-values, (ii) removing the null model, (iii) removing the model based simulations, (iv) making the names of the surfaces more intuitive, (v) eliminating p-value calculations for the CFC surface and focusing instead on the specific couplings of interest, PAC and AAC. By simplifying the presentations, we hope that we have made the manuscript’s logic easier to follow.

The synthetic time series considered here are not biologically realistic. First, spiking neural models should be used to generate such time series. In particular, the model should incorporate the property that gamma and high gamma in some way reflect neural spiking. Second, noise should be added to the synthetic time series to mimic the real world recordings; the amplitude of the noise can be used to assess the influence of signal to noise ratio on the various CFC quantities defined.

To address this concern, we first note that, in the simulated data, we do include a pink noise term to mimic the 1/f distribution of power observed in ​in vivo​ field recordings of the voltage. While we do not adjust the noise term directly, we do vary the intensity of PAC and AAC (e.g., Figure 5) which illustrates how the signal to noise ratio impacts these quantities.

In addition, as recommended by the reviewer, we have updated the manuscript to include a simple spiking neural model, and now use it to provide an additional simulation demonstrating the ability of RPAC to detect PAC in the presence of fluctuations in low frequency amplitude, while MI is unable to detect this coupling. We have updated the text to include the following new subsection “A simple stochastic spiking neural model illustrates the utility of the proposed method”:

“In the previous simulations, we created synthetic data without a biophysically principled generative model. Here we consider an alternative simulation strategy with a more direct connection to neural dynamics. […] However, the non-uniform shape of the (𝐴​low​, 𝜙​low​) surface is lost when we fail to account for 𝐴​low​. In this scenario, the distribution of 𝐴​high​ over 𝜙​low​ appears uniform, resulting in a low ​MI ​value (Figure ​10​C).​”
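A minimal stand-in for this kind of stochastic spiking construction is sketched below; the externally imposed 6 Hz oscillation, its drifting envelope, and the particular rate function are our assumptions, not the model used in the manuscript:

```python
import numpy as np
from scipy.signal import hilbert

def simulate_modulated_spikes(duration_s=20.0, fs=1000.0, base_p=0.05, seed=0):
    """Spike train whose per-sample spike probability depends on A_low and phi_low."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    # Externally imposed slow (6 Hz) oscillation with a slowly drifting envelope.
    envelope = 1.0 + 0.5 * np.sin(2.0 * np.pi * 0.05 * t)
    v_low = envelope * np.sin(2.0 * np.pi * 6.0 * t)
    analytic = hilbert(v_low)
    a_low, phi_low = np.abs(analytic), np.angle(analytic)
    # Spike probability grows with A_low and peaks near phi_low = 0 (our choice of form).
    p_spike = base_p * (1.0 + 0.8 * (a_low / a_low.max()) * 0.5 * (1.0 + np.cos(phi_low)))
    v_high = (rng.uniform(size=t.size) < p_spike).astype(float)  # binary spike train
    return v_low, v_high
```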

For the human seizure data, the authors only evaluated their new measures on the data, but did not compute the commonly applied PAC measures for comparison.

As recommended by the reviewer, we now also apply the modulation index to the human seizure data, and include the results as a new panel in the revised Figure 11 (see Comment 7).

We have updated the Results to describe the inclusion of the modulation index as follows: “​Repeating this analysis with the modulation index (Figure ​11​C), we find qualitatively similar changes in the PAC over the duration of the recording. However, we note that differences do occur. For example, at the segment indicated by the asterisk in Figure ​11​B, we find large ​R​AAC and an increase in ​RPAC​ relative to the prior 20 s time segment, while increases in PAC and AAC remain undetected by ​MI​.​”

[Editors' note: further revisions were requested prior to acceptance, as described below.]

1) Title: Another suggestion is that one could use "analysis", rather than "modeling" to have a title of "… confounding analysis effects". This is a suggestion, not required.

As recommended, we have updated the Title to read:

“​A statistical framework to assess cross-frequency coupling while accounting for confounding ​analysis effects​”

2) The new Figure 2 is fine. It was thought that the comment regarding GLM vs. GAM was misinterpreted. The point was conceptual rather than algorithmic. The essence of the models described here is to capture a nonlinear mapping of the regressor(s), that is then transformed into the observable using a link function and some observation noise: this is exactly what a GAM is. As described in the textbook by Wood (section 4.2), using basis functions such as splines, a GAM can be transformed into a (penalized) GLM, and then exactly fit using the standard GLM procedure that authors have used. (I believe backfitting is rather an outdated form of fitting GAMs). Please simply add some wording to make this clear. That is, the suggestion is not to change the core model fitting part, only to acknowledge the direct link with GAM.

We thank the reviewer for this clarification. We agree that the models are situated within the class of GAMs. To acknowledge the direct link with GAMs, we have added the following text to the revised manuscript:

“We note that, because the link function of the conditional mean of the response variable Ahigh varies linearly with the model coefficients 𝛽k, the model is a GLM, though the spline basis functions situate the model in the larger class of generalized additive models (GAMs).”

3) We understand that the R measure captures how strongly the regressors modulate high frequency amplitude: this is exactly what R2 does in linear regression (assessing the part of the variance explained by the model), although it is sometimes described incorrectly as measuring goodness-of-fit. In their response, the authors claim something somewhat different: that their measure is meant to "estimate differences between fits from two different models". Now, if this refers to model comparison, again, there is a large literature on how to perform model comparison between GLMs (including AIC, cross-validation, etc.). So I can see no reason to create a new method that suffers some important drawbacks (not derived from any apparent principle, arbitrariness for unbounded regressors, PAC measure is modulated by levels of AAC and low frequency amplitude, etc.) and has not been tested against those standard measures. We fully support the idea of using GLMs as statistical tools for complex signals; it is a pity though not to leverage the rich tools that have been developed within the GLM framework (and beyond that in machine learning). Please either use these rich tools or provide a proper argument for why your measure is used.

To address this important concern, we have added new results and additional discussion to the revised manuscript. We show that two standard model comparison methods for nested GLMs frequently detect PAC and AAC in pink noise signals. We have added the following text to subsection “​The absence of CFC produces no significant detections of coupling​”:

“​We also applied these simulated signals to assess the performance of two standard model comparison procedures for GLMs. […] We conclude that, in this modeling regime, two deviance-based model comparison procedures for GLMs are less robust measures of significant PAC and AAC.​”

We have also added the following text to the Discussion section:

“​We chose the statistics R​PAC​ and R​AAC​ for two reasons. […] While many model comparison methods exist – and another method may provide specific advantages – we found that the framework implemented here is sufficiently powerful, interpretable, and robust for real-world neural data analysis.​”

4) The new simulations represent an important step in the right direction but the analysis performed on the synthetic data is not quite the one performed on the experimental data. Figure 7 shows that the R measure is modulated by the amplitude of the low frequency signal as well as AAC (although less so than the MI measure), preventing any direct comparison of PAC between differing conditions. Now on the rodent data, authors very astutely use a single model for both conditions, and indicator functions (Equation 10). For some reason, the model comparison performed in the previous version of the manuscript has disappeared. This was viewed as regrettable as this indicated that PAC was modulated between the two experimental conditions, something that direct comparison of MI/R values cannot test. This is also the manipulation that we would like to see tested on synthetic data, to determine whether the authors' method isolates modulations of PAC from modulations of low frequency amplitude and AAC.

In other words, it would be great to see this method tested on synthetic data to see if they get rid of the confounds, and then possibly applied on rat data as in previous manuscript. This was viewed as something that could make the paper much better. However, it is acceptable to also choose to trim the comparison between conditions part. If so, the authors are requested to be very explicit in their wording that by using separate models for different conditions, they cannot get rid of possible confounds.

We agree that this model comparison is an important result and have updated the manuscript to include this result in subsection “Using the flexibility of GLMs to improve detection of phase-amplitude coupling in vivo​​” as follows:

“To determine whether the condition has an effect on PAC, we test whether the term

P ∑_{j=1}^{n} β_{n+3+j} f_j(ɸ_low) in Equation 9 is significant, i.e. whether there is a significant difference between the models in Equations 9 and 10. […] We conclude that this method effectively determines whether stimulation condition significantly changes PAC.”
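The nested-model test described here can be illustrated schematically as follows. The design matrices, the Gamma GLM with log link, and the chi-squared treatment of the deviance difference are our reading of the comparison between Equations 9 and 10, not a reproduction of them:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2
from patsy import dmatrix

def test_condition_effect_on_pac(a_high, phi_low, a_low, condition, n=10):
    """Deviance-based test for a condition-by-phase interaction (illustrative sketch)."""
    family = sm.families.Gamma(link=sm.families.links.Log())
    spl = np.asarray(dmatrix(f"cc(phi, df={n}) - 1", {"phi": phi_low}))
    p_ind = np.asarray(condition, dtype=float)[:, None]        # 0/1 indicator of condition
    a_col = np.asarray(a_low, dtype=float)[:, None]
    X_reduced = np.column_stack([spl, a_col, p_ind])            # no interaction term
    X_full = np.column_stack([spl, a_col, p_ind, p_ind * spl])  # adds P * spline-of-phase terms
    fit_reduced = sm.GLM(a_high, X_reduced, family=family).fit()
    fit_full = sm.GLM(a_high, X_full, family=family).fit()
    dev_drop = fit_reduced.deviance - fit_full.deviance
    # Treat the drop in deviance as approximately chi-squared with n extra parameters.
    return chi2.sf(dev_drop, df=n)
```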

5) The R measure seems pretty much as sensitive as the MI measure to low PAC. However, the two measures cannot be compared directly. Since most research is geared towards assessing significance levels of PAC, it would be interesting to rather show how the two methods compare in terms of statistical power: comparing type I error rates for low levels of PAC (although reviewers are a bit wary of what could be the results, since the type II error rate related to Figure 5 was less than 1% when it is supposedly calibrated to be 5%, suggesting the method could be too conservative).

In the revised manuscript, we now show how the two methods compare in terms of statistical power. We have updated Figure 5.

We have also updated the results text in subsection “​R​PAC​ and modulation index are both sensitive to weak modulations​” as follows:

“We find that both ​MI​ and ​R​PAC​, while small, increase with ​IPAC​; in this way, both measures are sensitive to small values of ​IPAC. ​However, we note that R​PAC​ is not significant for very small intensity values (I​PAC​ <= 0.3), while ​MI​ is significant at these small intensities. We conclude that the modulation index may be more sensitive than R​PAC​ to weak phase amplitude coupling.​”

6) The authors do not seem to have taken into account a reviewer's response to their revision workplan, so it is repeated here. The response to point 7 misses the important message: the extraction of the amplitude of the low frequency signal is intrinsically flawed as the recovered amplitude fluctuates within cycles of the low frequency rhythm. Looking at Figure 9C, peaks of Alow and Ahigh coincide within these cycles, which will very likely give rise to spurious AAC for this segment (irrespective of whether slow fluctuations of Alow do modulate Ahigh). It is unclear why this is apparent here but was not detected in the analysis of synthetic data, but in any case, it would require in our opinion a serious assessment of the properties of the algorithm to extract Alow. The authors' solution (changing the frequency range of the simulation to make the problem less apparent) is not a valid one. In any case, we can still see the issue in Figure 11F: sub-second fluctuations in Alow. Perhaps one quick fix would simply be to use a lowpass filter for Alow to remove these spurious fluctuations. It makes no sense conceptually that a signal assessing the amplitude of an oscillation would fluctuate rapidly within each cycle of that oscillation.

We would like to thank the reviewer for pointing out this filtering issue. To address this concern, we re-examined the spectrogram of the full signal (updated Figure 11B), and identified a limited region of increased power in the 4-7 Hz range from 130 s to 140 s. We then focused our analysis only on this time interval, and selected a filter to isolate the 4-7 Hz range. We note that, in our previous analysis, we applied the same filter over the entire duration of the seizure, which – due to the changing dominant rhythms during a seizure – complicated the subsequent analysis. By focusing on a time period with a clear 4-7 Hz rhythm, we greatly improved the estimate of Alow (Figure 11C). We have updated this subsection “​Application to in vivo human seizure data​” as follows:

“To evaluate the performance of the proposed method on in vivo data, we first consider an example recording from human cortex during a seizure (see subsection “Human subject data”). […] Comparing Alow and Ahigh over the 10 s interval (each smoothed using a 1 s moving average filter and normalized), we observe that both Alow and Ahigh steadily increase over the duration of the interval.”
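The smoothing step mentioned in this passage can be sketched as below, under the assumptions of a 1 s moving-average filter and min-max normalization (the exact normalization is not specified here):

```python
import numpy as np

def smooth_and_normalize(envelope, fs, window_s=1.0):
    """1 s moving-average smoothing followed by min-max normalization."""
    w = max(1, int(window_s * fs))
    smoothed = np.convolve(np.asarray(envelope, dtype=float), np.ones(w) / w, mode="same")
    return (smoothed - smoothed.min()) / (smoothed.max() - smoothed.min())

# e.g. compare smooth_and_normalize(a_low, fs) and smooth_and_normalize(a_high, fs) over the 10 s window
```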

7) Regarding point 8.1 in the rebuttal: the change proposed in generating the signals was viewed as just a minor cosmetic change. It does not take away our concern for circularity, since the high frequency amplitude is still a direct function of the phase time series of the low frequency signal. Also, a change in signal generation was only done for the PAC, and not for the AAC, which suffers from the same circularity concern. As mentioned before, perhaps this concern cannot be alleviated at all (although we think that the authors could have done a better job by generating both the low frequency phase time series and the high frequency amplitude time series as a (possibly non-linear) function of a third unobserved time series). Either way, the authors need to at least discuss this concern for circularity explicitly, and argue why this in their view is not a problem in convincingly showing the utility of their new method.

Our goal in simulating PAC and AAC was to create time series that explicitly contained these types of CFC. We agree with the reviewer’s point that detecting the simulated PAC and AAC is not surprising; we expect that GLMs are able to capture these relationships. However, we feel that the broad, interdisciplinary audience of this journal would benefit from a demonstration of the efficacy of the analysis framework on data that are known to have PAC and AAC, i.e. on the simulated data. To that end, we simulated data of the size we expect to analyze in real neural systems, and have shown that the proposed method has enough power to detect different types of CFC. This type of demonstration is common for new methods in neural data analysis (and in CFC methods specifically), and simulated PAC data with this particular structure are also common in the neuroscience community (e.g., Lepage and Vijayan, 2015,​ Section 2.1, Equation 6).

We have updated the revised manuscript to note this point in the Discussion as follows:

“​Fifth, we simulate time series with known PAC and AAC, and then test whether the proposed analysis framework detects this coupling. The simulated relationships between Ahigh and (Фlow, Alow) may result in time series with simpler structure than those observed in vivo. For example, a latent signal may drive both Ahigh and Фlow, and in this way establish nonlinear relationships between the two observables Ahigh and Фlow. We note that, if this were the case, the latent signal could also be incorporated in the statistical modeling framework (Widge et al., 2017).”

8) Regarding point 8.3 in the rebuttal: we find the proposed fourth concern far too theoretical to be of practical use. The readership really should be informed about the interpretational confounds of CFC metrics, as mentioned in an original comment. We are not convinced that 'appropriate filtering of the data into high and low frequency components' is fundamentally possible, but we'd like to stand corrected with a convincing argumentation. In other words, the authors are requested to be very explicit in the interpretational limitations of any CFC measure, which is independent of the signal processing that is applied to the data before the measure is computed. This is important to be very clear about since 'non-methods' people may use the method without too much thinking about the potential shortcomings.

To address this concern we have updated the Discussion section to focus more specifically on the impact of non-sinusoidal signals on CFC analysis:

“​Fourth, we note that the proposed modeling framework assumes ​the data contain approximately sinusoidal signals, which have been appropriately isolated for analysis. In general, CFC measures are sensitive to non-sinusoidal signals, which may confound interpretation of cross-frequency analyses (Aru et al., 2015; Cohen and Devachi, 2017; Kramer and Eden, 2013).​ While the modeling framework proposed here does not directly account for​ the confounds introduced by non-sinusoidal signals, ​the inclusion of ​additional predictors (e.g., detections of sharp changes in the unfiltered data) in the model may help mitigate these effects.​”

9) The authors are asked to edit a sentence in their Introduction about cellular mechanisms being ‘well-understood’ for gamma and theta – this was viewed as somewhat arguable for theta. Tort et al. is cited for theta which does not seem to be an appropriate reference for cellular perspectives. A recent paper for cellular mechanisms for theta could be used (Ferguson et al., 2018).

As recommended, we have updated the manuscript to read:

“​Although the cellular mechanisms giving rise to some neural rhythms are relatively well understood (e.g. gamma: Likhtik et al., 2013; Wagner et al., 2015; Weiss et al., 2015), the neuronal substrate of CFC itself remains obscure.​ “

10) Please also consider the following: a) Some of the figures/panels could be placed as figure supplements to avoid distracting the reader away from the main points of the manuscript.

Although we appreciate this recommendation, we would prefer to keep all material in the main manuscript.

b) The figures could be polished a bit more, notably:

- Use labels such as 'π/2' for phase variables.

- Merge single lines panels when possible (e.g. Figure 12B,C).

- Improve the readability of the 3D figures (e.g. using meshgrids; perhaps plotting only the full model surface in Figure 5).

- Adjust font size.

In the revised manuscript, we have included appropriate labels for phase variables, merged single line panels when possible, included meshgrids for 3D figures, and increased the font size.

- Adjust axis limit and panel size for better readability (for some panels it's hard to see anything beyond noise, e.g. Figure 4, Figure 9B,C, Figure 10B).

In the revised manuscript, we have adjusted the axis limit and panel size to better visualize effects beyond noise.

- Figure 5G,K and O are difficult to read.

We have widened these subfigures to be more readable.

- Figure 11B: are these stacked bars or is RPAC always larger than RAAC? In the former case, it makes RPAC difficult to read: plot unstacked bars or curves instead.

In the revised manuscript, we have eliminated this subplot; please see our response to (6) above.

- Figure 11F: use different scales for Ahigh and Alow to make Ahigh visible (or normalize signals).

In the revised manuscript, we now normalize the Ahigh and Alow signals in Figure 11E.

c) Names of the models: why name them “ɸlow” and “Alow” rather than PAC and AAC models?

We use ɸlow and Alow to indicate which signals are modulating Ahigh in the respective models. We decided against calling these the PAC and AAC models because RPAC utilizes the Alow and Alow,ɸlow models (formerly the AAC and CFC models), but not the ɸlow model (formerly the PAC model). Similarly, RAAC uses the ɸlow and Alow,ɸlow models (formerly the PAC and CFC models), but not the Alow model (formerly the AAC model). Naming the models PAC and AAC could therefore confuse readers.

To further clarify the model names, we have added the following text after the definitions of RPAC and RAAC:

RPAC = max[abs[1 − SAlow / SAlow,ɸlow]],

“i.e. we measure the distance between the Alow​ and the Alow​,ɸlow​ models.”

RAAC = max[abs[1 − Sɸlow / SAlow,ɸlow]],

“i.e. we measure the distance between the ɸlow​ and the A​low​,ɸ​low​ models.”

d) Isn't the constant offset β0 missing from equation 1 and 3?

We now include the following sentence in the manuscript for clarification:

“The functions {f​1​ … f​n​} correspond to spline basis functions, with n control points equally spaced between 0 and 2 π, used to approximate ɸlow​.​ We note that the spline functions sum to 1, and therefore we omit a constant offset term.”

e) Please use semi-colons instead of commas to separate possible values of x (it took me a while to understand this).

In the revised manuscript, we have updated this sentence to read:

“Given a vector of estimated coefficients 𝛽​x​ for ​x = {Alow​; ɸlow​; or Alow​,ɸ​low​},​ we use its estimated…”

f) In the spiking model, specify that the slow oscillation is imposed externally, not generated by the network.

As recommended, we have updated the text to read:

“In this stochastic model, we generate a spike train (Vhigh) in which an externally imposed signal Vlow modulates the probability of spiking as a function of Alow and ɸlow.”

g) "at the segment indicated by the asterisk in Figure 11B (…)": there is no asterisk in Figure 11B.

This subfigure is no longer included in the revised manuscript.

h) Figure 11F: "Ahigh increases with Alow over time"-> not clear. Ahigh increases over time but it seems that it is higher for intermediate than for high value of Alow.

In the revised manuscript, we analyze a time segment with a clear relationship between Alow and Ahigh. Please see our response to question (6).

i) A concern was raised about whether the frequency band corresponding to the low signal in the human data is described in the text.

Thank you, we have updated the Methods section to read:

“For these data, we analyze the 100-140 Hz ​and 4-7 Hz ​frequency bands…”

Associated Data


    Supplementary Materials

    Transparent reporting form
    DOI: 10.7554/eLife.44287.015

    Data Availability Statement

    In vivo human data available at https://github.com/Eden-Kramer-Lab/GLM-CFC (copy archived at https://github.com/elifesciences-publications/GLM-CFC). In vivo rat data available at https://github.com/tne-lab/cl-example-data (copy archived at https://github.com/elifesciences-publications/cl-example-data).

