Author manuscript; available in PMC 2022 Dec 2. Published in final edited form as: Psychophysiology. 2022 May;59(5):e14052. doi: 10.1111/psyp.14052

Recommendations and publication guidelines for studies using frequency domain and time-frequency domain analyses of neural time series

Andreas Keil 1, Edward M Bernat 2, Michael X Cohen 3, Mingzhou Ding 4, Monica Fabiani 5,6, Gabriele Gratton 5,6, Emily S Kappenman 7, Eric Maris 8, Kyle E Mathewson 9, Richard T Ward 1, Nathan Weisz 10,11
PMCID: PMC9717489  NIHMSID: NIHMS1850625  PMID: 35398913

Abstract

Since its beginnings in the early 20th century, the psychophysiological study of human brain function has included research into the spectral properties of electrical and magnetic brain signals. Now, dramatic advances in digital signal processing, biophysics, and computer science have enabled increasingly sophisticated methodology for neural time series analysis. Innovations in hardware and recording techniques have further expanded the range of tools available to researchers interested in measuring, quantifying, modeling, and altering the spectral properties of neural time series. These tools are increasingly used in the field, by a growing number of researchers who vary in their training, background, and research interests. Implementation and reporting standards also vary greatly in the published literature, causing challenges for authors, readers, reviewers, and editors alike. The present report addresses this issue by providing recommendations for the use of these methods, with a focus on foundational aspects of frequency domain and time-frequency analyses. It also provides publication guidelines, which aim to (1) foster replication and scientific rigor, (2) assist new researchers who wish to enter the field of brain oscillations, and (3) facilitate communication among authors, reviewers, and editors.

Keywords: EEG, electrophysiology, frequency domain analysis, MEG, time-frequency analysis

1 |. INTRODUCTION, DEFINITIONS, AND BACKGROUND

Rhythmic patterns are ubiquitous in electrophysiological recordings from the human brain. Often referred to as brain oscillations, these patterns have been examined in a rapidly growing literature, using increasingly sophisticated algorithms. Growing attention has also been captured by other, non-oscillatory properties of brain activity, which likewise may be measured using an evolving set of spectral analysis tools (Donoghue et al., 2020; Freeman & Zhai, 2009; Lin et al., 2016). However, with these advancements arrive new challenges to overcome. Scientists, acting both as authors and reviewers, may struggle to keep up with the wide spectrum of available methods. Communication among authors, reviewers, and readers may suffer from the lack of a unifying approach that includes shared terminology, accepted best practice methodology, and effective ways of reporting relevant information.

Here, we present a set of recommendations and guidelines for reporting on studies using frequency domain and time-frequency domain analyses, with the aim of facilitating communication within the scientific community by identifying common standards. Section 1 introduces definitions, terminology, and foundational aspects of these analyses. It may be used as a tutorial overview and introduction, providing references to relevant introductory materials as well as a glossary. Section 2 provides recommendations on study planning and discusses different conceptualizations of frequency domain analyses. Section 3 covers guidelines for reporting on different analytical techniques. Finally, Section 4 provides recommendations for statistical analyses and data presentation through figures.

1.1 |. Definitions and taxonomy

Different aspects of neural activity can be extracted from scalp-recorded electromagnetic time series, using electroencephalography (EEG) and magnetoencephalography (MEG). If time anchoring events are present, then event-related brain responses can be obtained from the EEG/MEG time series by stimulus- or response-locked averaging of time-varying signals across trials (for recent reviews, see Kappenman & Luck, 2012; Luck, 2005). These event-related potentials (ERPs) and event-related fields (ERFs) are often referred to as transient responses. These signals tend to unfold as a sequence of deflections varying in duration, each showing distinctive timing relative to the anchoring event. ERPs and ERFs are represented in the time domain, graphically illustrated by showing voltage or field strength on the y axis and time on the x axis. Time domain analyses are also used by researchers interested in brain oscillations, as discussed in Sections 1.2, 2.2, and 2.3 (for further discussion, see Schaworonkow & Nikulin, 2019). An example of time domain and frequency domain representations is shown in Figure 1.

FIGURE 1. Alpha oscillation (~12 Hz) represented in the time domain (left panel), and in the frequency domain (right panel). Note the units on the x and y axes.

By contrast, frequency domain analyses decompose neural time series into a weighted sum of a set of elementary cyclic waves differing in their temporal rate. These elementary waves are often called basis functions. Basis functions consist of cycles in which a temporal pattern is repeated at a given rate. Each temporal rate is measured in cycles per second or Hertz (Hz). Higher temporal rates have shorter cycle durations, which are also called wavelengths or periods. Thus, a given wavelength (cycle duration) is the inverse of frequency. Most readers will be familiar with sine and cosine waves, which serve as basis functions in Fourier analysis, the most widely used algorithm for converting between the time domain and the frequency domain. The set of weights given to each wave (i.e., each basis function) is called an amplitude spectrum; if the square of the amplitude weights is used, it is referred to as a power spectrum (see Sections 1.2 and 1.3 for a discussion of these concepts). In this document, we refer to the power spectrum for brevity, and to distinguish it from the phase spectrum, which describes the temporal relation of the signal relative to the basis functions at each frequency. Figure 2 illustrates some of the fundamental properties of oscillatory time series as well as the elements of frequency domain analyses: frequency, power, and phase. For example, the oscillation depicted in orange may have a higher frequency (Figure 2a), greater power (Figure 2b), or a different phase (Figure 2c) than the signal shown in blue.
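To make these quantities concrete, the following minimal sketch (Python with NumPy; the 12 Hz signal, the sampling rate, and the scaling convention are illustrative assumptions, not part of the original article) computes the amplitude, power, and phase spectra of a simulated oscillation:

```python
import numpy as np

fs = 500                                   # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)                # 2 s of data
signal = np.sin(2 * np.pi * 12 * t)        # simulated 12 Hz oscillation

coefs = np.fft.rfft(signal)                # complex Fourier coefficients
freqs = np.fft.rfftfreq(signal.size, 1 / fs)

amplitude = np.abs(coefs) * 2 / signal.size   # amplitude spectrum
power = amplitude ** 2                        # power spectrum
phase = np.angle(coefs)                       # phase spectrum (radians)

print(freqs[np.argmax(power)])             # -> 12.0, the dominant frequency
```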

FIGURE 2. Illustration of different aspects of oscillatory activity. Relative to a 3 Hz sine wave that completes three cycles in each second (blue line), the orange dashed line differs in terms of (a) frequency; (b) power; (c) phase.

Mathematical transformations that produce a spectrum (i.e., the representation of features as a function of frequency) are referred to as spectral analyses. A power spectrum is graphically illustrated with frequency on the x axis and power on the y axis (see Figure 1, right panel). Finally, various combinations of event-related and frequency domain analyses allow researchers to study changes in the amplitude or power spectrum over time, referred to as an evolutionary spectrum or spectrogram. The spectrogram is determined using methods referred to as time-frequency analysis (TFA). Sometimes, the term event-related spectral perturbations (ERSPs) is used (Makeig et al., 2004) to indicate a focus on changes in spectral properties over time, rather than on their absolute values.
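As an illustration of an evolutionary spectrum, the following sketch computes a spectrogram with SciPy; the simulated signal, window length, and overlap are illustrative choices, not prescriptions:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 500
t = np.arange(0, 4, 1 / fs)
# Simulated 10 Hz oscillation whose amplitude doubles after t = 2 s
x = np.sin(2 * np.pi * 10 * t) * np.where(t < 2, 1.0, 2.0)

# 1-s windows (1 Hz resolution) shifted in 0.2-s steps
f, times, Sxx = spectrogram(x, fs=fs, nperseg=500, noverlap=400)
# Sxx has shape (len(f), len(times)): power as a function of frequency and time
```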

Time domain averaging methods are typically used when the aim is to study transient activity that arises in response to (or in preparation for) anchoring events, such as the onset of a stimulus or the initiation of a motor response (e.g., ERPs and ERFs). By contrast, frequency domain analyses are typically used to quantify recurrent phenomena, referred to as brain rhythms or oscillations. Although many definitions of these terms exist, both “brain oscillations” and “brain rhythms” are most frequently used to denote electrophysiological patterns which recur more or less regularly (i.e., they repeat at least several times). However, as we will see later, nonrecurrent (or transient) phenomena are also represented in the spectra obtained with frequency domain analyses. Thus, spectral analyses represent a widely employed approach for quantifying not just brain oscillations but also transient or other non-oscillatory phenomena (e.g., Harper et al., 2014).

One widely used taxonomy of the brain’s oscillatory activity is the classification introduced by Robert Galambos (1992). Galambos distinguished (i) spontaneous oscillations, which are not related to external stimuli, (ii) evoked oscillations, which are elicited by and precisely time-locked to the onset of an external stimulus, (iii) emitted oscillations, which are time-locked to a stimulus that was expected but then did not occur, and (iv) induced oscillations, which are prompted by a stimulus but are not time- and phase-locked to its onset. Figure 3 illustrates each of these four classes.

FIGURE 3. Example waveforms illustrating Galambos’s taxonomy. Evoked oscillations (a) occur across trials in a phase-locked and time-locked manner in response to a stimulus, whereas induced oscillations (b) are neither time- nor phase-locked to stimulus onset. Emitted oscillations (c) are similar to evoked oscillations but occur in trials where a stimulus was expected but did not occur. Spontaneous oscillations (d) occur in continuous recordings and are not driven by or systematically linked to anchoring events.

A further classification used in the literature is based on the separation between intrinsic oscillations, or the emergent dynamics of the brain itself, versus driven oscillations, which occur in response to periodic stimulation, such as a response to regularly flickering light or to an amplitude-modulated tone (Norcia et al., 2015; Picton et al., 2003). Multiple taxonomies are in use, and a substantial body of research has suggested that distinctions among different types of oscillations, as well as between oscillations and non-oscillations, are graded rather than categorical in nature (Moratti et al., 2007; Truccolo et al., 2002). Thus, authors may prefer to abstain from taxonomic labels (e.g., “evoked”) and instead quantitatively characterize the oscillatory properties of interest based on their similarity or the degree of phase locking across trials (Aviyente et al., 2011; Eidelman-Rothman et al., 2019), using methods described in Sections 3.3 and 3.4. Table 1 provides an overview of key definitions and concepts related to spectral analyses, used in this document.

TABLE 1.

Key terms and definitions

Aliasing: The misrepresentation of frequencies that are not appropriately captured by the digital sampling. For example, frequencies above the Nyquist frequency (see below), if not removed prior to digitization of the neural signal, will appear as spurious lower-frequency phenomena in the resulting spectrum
Autoregressive (AR) model: This approach decomposes a time series into frequency components by means of linearly regressing past time points onto future time points. The beta values of this regression serve as estimates of power at different frequencies. AR modeling is often used for spectral analysis and in Granger causality analyses
Basis functions: Sets of models used for the decomposition of a time series into the frequency domain. For example, sine and cosine functions serve as the basis functions in Fourier analysis
Complex number: A numerical representation in which a value consists of two components, a real and an imaginary part. Often used to represent the frequency domain components (see Fourier components) corresponding to the cosine and sine basis functions, respectively
Cross-frequency coupling (CFC): A term for analyses that examine the interactions between oscillatory processes at different frequencies, such as the systematic co-variation of power changes at one frequency (e.g., 40 Hz) with changes in phase at another frequency (e.g., 6 Hz). Other examples include covariation of power changes at two different frequencies, and interactions between the phase at two different frequencies
Edge artifacts: Distortions in spectral representations caused by variations in values at the beginning and/or end of the empirical input time series
Event-related spectral perturbations (ERSPs): A term used to denote power changes in the evolutionary spectrum (i.e., changes in the time-frequency domain). A Fourier spectrogram or wavelet analysis may be used to quantify ERSPs
Fourier analysis: A method for decomposing time series into frequency-specific components, modeled by sine and cosine waves. The result is a complex spectrum in which each frequency is represented by a pair of real and imaginary numbers, joined together as one complex number per frequency. From these Fourier components, power and phase may be extracted
Fourier components: The weights of the sine and cosine basis functions in a Fourier analysis, typically referred to as imaginary (i.e., sine) and real (i.e., cosine) components
Fourier uncertainty principle: The notion that the detail contained in a spectrum varies inversely with the duration of the input time domain signal. As such, longer time domain segments result in greater resolution in the frequency domain
Frequency domain: A representation in which properties of a signal are analyzed as a function of frequency, instead of time or space. Typically, this is shown as a figure with frequency in Hertz on the x axis
Hertz (Hz): A unit for the temporal rate (i.e., frequency) of repeating events, measured in full cycles per second. For example, an oscillation that repeats five times per second has a frequency of 5 Hz
Nyquist frequency: The frequency that is ½ the rate at which a time series was digitized (sampled). For example, when sampling at 500 Hz, the Nyquist frequency is 250 Hz. The Nyquist frequency defines the width of the range of contiguous frequencies that can be represented without aliasing
Phase-locking: A measure of the similarity of phase values, or phase differences, across observations such as repeated trials, channels, or time windows
Pink, or 1/f, noise: A collection of nonperiodic processes in which power at lower frequencies is relatively larger in amplitude, resulting in a spectrum that takes the shape of a power function f(x) = x^(-1)
Power spectrum: A frequency domain representation in which the magnitude (e.g., y axis) of activity present in a series of data points is calculated for different frequencies (e.g., x axis)
Sample or sampling rate: The temporal rate (i.e., frequency) at which continuous, analog data are converted to numerical values to be digitally stored
Spectrogram: An analysis that quantifies changes in spectral properties as they develop over time. Sometimes called the evolutionary spectrum, spectrogram analyses are often associated with shifting Fourier windows across a time series and measuring the spectrum during subsequent time points
Stationarity: Often referred to as covariance stationarity, it indicates that the low-order statistical properties of the frequency domain signal (e.g., mean power and phase) do not change during the interval considered for the analysis
Temporal integration window: The time window over which specific values of power and phase for a particular frequency are computed. Since only one value of power and phase is computed for each temporal integration window, it is linked with the Fourier uncertainty principle and the concept of stationarity
Time domain: Representation of a signal as a function of time. For example, event-related potentials (i.e., ERPs), event-related fields (i.e., ERFs), and raw EEG are time domain data
Time-frequency plot: A graphical representation illustrating changes in spectral properties as they develop over time (see spectrogram). Graphically, time and frequency are typically shown as two orthogonal (e.g., x and y) axes, and the spectral feature of interest (e.g., power) is shown on a z axis in three-dimensional plots, or color coded in two-dimensional plots
Time series: A sequence of temporal observations ordered along a time axis
Wavelength: The inverse of frequency. This metric describes the duration of a full cycle of an oscillation
Wavelet transform: A method for extracting time and frequency information from time domain signals
White noise: Nonperiodic signals in which the spectral energy is evenly distributed across frequencies, often associated with stochastic, nonbiological processes
Zero-padding: A technique for increasing the length (i.e., duration) of a time domain signal by adding zeros at the beginning and/or end. It is often used with the intention to heighten the frequency resolution of a spectrum (see Fourier uncertainty principle) by adding time points without adding spectral information

Note: This table summarizes definitions for some of the key terms, with a focus on application in human electrophysiology.

1.2 |. Conceptual foundations: What is measured in frequency domain analyses?

Time domain (e.g., ERPs and ERFs), frequency domain, and time-frequency analyses reflect different ways of representing or summarizing the same underlying neural time series. Averaging in the time domain (commonly used to derive ERP waveforms) is designed to quantify the central tendency of the observed voltage or field strength values relative to an anchoring event. Thus, in time domain averaging, the variability around this most representative time course is considered a form of error, or noise. When averaged waveforms are computed on a sufficient number of trials, neural phenomena that share a common time course remain visible. These signals are often referred to as phase-locked and time-locked to the event, because they reflect the central tendency (i.e., the mean) of the time course that unfolds in each trial, relative to the anchoring event. They can be measured at the signal’s native sampling rate.

By contrast, frequency domain analyses are designed to decompose the variance (or, more precisely, the sum-of-squares) of the neural time series. Thus, temporal fluctuations of voltage or magnetic fields in a given recording epoch are not considered as noise but are quantified across a range of frequencies. In this decomposition, any source of variance of the time series is represented in the resulting frequency spectrum. Thus, the power spectrum based on frequency decomposition methods includes both transient (non-periodic) and oscillatory activities that occur during the time interval of interest. Because time information (i.e., a temporal integration window) is used to estimate variance, the temporal precision of the resulting variable is lower than that of time domain analyses, such as ERPs. This property of frequency domain and time-frequency domain analyses will be discussed in Sections 1.3 and 2.3.

Importantly, the power assigned to each frequency cannot assume negative values and will therefore not cancel out, even when averaging across multiple power spectra from different epochs. Consequently, the average power spectrum over epochs will reflect both oscillatory and transient activity. Transient activities in response to, or in preparation for, unidentified internal or external events coexist with oscillatory phenomena. Their wavelengths are likely to vary, primarily reflecting the wide variety of underlying generation mechanisms. When researchers are interested in focusing on oscillatory activity, they may consider the contribution of non-oscillatory activity to the power spectrum as “noise” because it extends across a wide set of frequencies (Barry & Blasio, 2021; Donoghue et al., 2020). Techniques for addressing this problem are discussed in Section 3.1.
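The following sketch demonstrates the non-cancellation point made above (the simulated 10 Hz signal, trial count, and sampling rate are illustrative assumptions): oscillations with a random phase on every trial average to near zero in the time domain, whereas their average power spectrum retains a clear peak:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_trials = 500, 50
t = np.arange(0, 2, 1 / fs)                       # 2-s epochs

# 10 Hz oscillations with a random phase on every trial
trials = np.array([np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
                   for _ in range(n_trials)])

erp = trials.mean(axis=0)                         # time domain average
avg_power = (np.abs(np.fft.rfft(trials, axis=1)) ** 2).mean(axis=0)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(np.abs(erp).max())                          # small: phases cancel
print(freqs[np.argmax(avg_power)])                # -> 10.0: power does not
```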

Two major types of broadband noise have been identified. The first is noise that displays no dominant frequency, reflecting factors such as stochastic phenomena from outside the brain, but also noise intrinsic to the recording and digitization of brain signals, such as the approximations made during analog-to-digital conversion (e.g., Oken, 1986). The spectrum of this type of noise is uniform, and is referred to as “white noise” (Barry & Blasio, 2021).

By contrast, nonperiodic brain signals, like most biological systems (Szendro et al., 2001), tend to show a stronger relative contribution of lower-frequency activity than of higher-frequency activity to the spectrum (He, 2014). As a consequence, scalp-recorded broadband noise often shows an inverse relationship between power and frequency. This second form of noise is often labeled “pink noise”, 1/f noise, or aperiodic activity (Donoghue et al., 2020; Freeman & Zhai, 2009; Lin et al., 2016), meaning that its power declines with increasing frequency, following a power function. The term aperiodic is used to indicate that pink noise is not rhythmic, that is, the underlying signal does not repeat itself in a regular fashion. In the published literature, pink noise is typically considered physiological in origin and is ubiquitous, although varying in intensity. Broadband activity may be more pronounced at some frequencies than others, reflecting non-oscillatory contributions at different wavelengths. This may create a complex spectral shape for the “pink noise”, which may be best described by a power function with an exponent other than −1. Figure 4 shows an example of a power spectrum derived from 80 trials of EEG in a young healthy participant, along with the best-fitting pink noise defined by a 1.5/f function.
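As a rough illustration of how the broadband (pink noise) component may be summarized, a linear fit in log-log coordinates estimates the exponent of the power function; this is a minimal sketch, and dedicated spectral parameterization tools (e.g., Donoghue et al., 2020) are preferable in practice. The function name and fitting range are illustrative assumptions:

```python
import numpy as np

def fit_broadband_exponent(freqs, power, fmin=2.0, fmax=40.0):
    """Linear fit of log10(power) on log10(frequency); hypothetical helper.

    Returns the slope (the broadband exponent; negative for 1/f-like
    spectra) and the offset of the fit.
    """
    mask = (freqs >= fmin) & (freqs <= fmax)
    slope, offset = np.polyfit(np.log10(freqs[mask]),
                               np.log10(power[mask]), 1)
    return slope, offset
```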

FIGURE 4. Spectral analysis of electrophysiological data. Blue line: Example power spectrum derived from 80 segments of resting EEG through the discrete Fourier transform, recorded from sensor Oz in one participant. Orange line: Best-fitting 1/f function (i.e., 1.5/f) illustrating the pink noise portion of the power spectrum. Note the deviation from 1/f at ~10 Hz, consistent with occipital alpha-band activity.

Given these nonlinear properties of the non-oscillatory noise, the extent to which a given neural or behavioral time series should be regarded as an oscillation has been a matter of debate (Donoghue et al., 2020; Gyurkovics et al., 2021). Recent work has increasingly separated “true” near-periodic oscillations at a specific temporal frequency, or in a frequency band, from aperiodic fluctuations in the signal such as pink noise or white noise (Donoghue et al., 2020; He, 2014; Hughes et al., 2012). Because oscillatory activity is expected to occur at regular intervals, whereas aperiodic activity occurs at relatively random intervals, it has also been proposed to quantify the rhythmicity of a signal by the degree to which the phase spectrum is preserved over time (Fransen et al., 2015). Another widely used criterion for identifying “true” oscillations considers the degree of power concentrated within a specific frequency range relative to power in other frequency ranges (Keil et al., 2014), as shown for the alpha-band oscillation in Figure 4. In the final section of the introduction to this document, we discuss the computational foundations of algorithms used to quantify spectral phenomena.

1.3 |. Basic computational principles of frequency domain analyses

A comprehensive introduction to the mathematical concepts related to frequency domain analyses is outside the scope of this report, and readers are referred to widely used textbooks and tutorials on the topic (Cohen, 2014; Gable et al., 2022; Handy, 2004). To facilitate reading, and to highlight concepts of relevance for communicating psychophysiological research, this section gives a short introduction to the fundamental principles that are shared among most of the analytical techniques discussed in this paper, using the Fourier transformation as an example. As mentioned above, spectral power at a particular frequency reflects the amount of variance (fluctuation around the mean) that is accounted for by the corresponding basis function, integrated across the time interval entering the analysis. Because the power spectrum represents an integral over time, the same total power at a given frequency can be obtained from a single large deflection or from a series of smaller, regularly occurring oscillations covering the entire analysis interval. Thus, time information is lost, and power at a particular frequency cannot, per se, be interpreted as demonstrating the existence of oscillatory activity at that frequency (see e.g., Donoghue et al., 2022). Other techniques are needed to establish the presence of an oscillation, and examples of these techniques are discussed in Section 3.1.
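The following sketch illustrates this ambiguity with simulated data (all amplitudes and durations are illustrative assumptions): one large transient deflection and a sustained train of small 10 Hz cycles yield comparable power in the 10 Hz bin, so the spectrum alone cannot distinguish the two scenarios:

```python
import numpy as np

fs = 500
t = np.arange(0, 2, 1 / fs)                     # 2-s epoch, 0.5 Hz bins
freqs = np.fft.rfftfreq(t.size, 1 / fs)
bin10 = np.argmin(np.abs(freqs - 10))           # index of the 10 Hz bin

sustained = 0.25 * np.sin(2 * np.pi * 10 * t)   # small but continuous
transient = np.zeros_like(t)
transient[450:500] = 10 * np.hanning(50)        # one large 0.1-s deflection

for x in (sustained, transient):
    print(np.abs(np.fft.rfft(x)[bin10]) ** 2)   # comparable 10 Hz power
```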

1.3.1 |. Power, phase, and complex spectra

To be valid, the decomposition of time series into basis function weights must take into account the relative timing, or phase, of the oscillatory waveforms in the basis functions relative to the observed time series (see Figure 2). To meet this requirement, frequency domain analyses include not just one, but two basis functions for each frequency, so that their joint information covers all possible phase differences. Typically, orthogonal pairs of basis functions are used, such as the sine and cosine functions, or other function pairs in which one is a derivative of the other. Note that applying this analytic approach does not assume that these basis functions actually operate in the biological system being analyzed; it assumes only that they can represent the actual biological system with high fidelity. When using basis function pairs in which each function has a mean of 0, and their cross-product is also 0, the combined sum-of-squares reflects the power of the empirical time series at that particular frequency. Because of these functional definitions, spectral data are readily illustrated in a two-dimensional Cartesian space, spanned by the two orthogonal basis functions (Figure 5).

FIGURE 5. Illustration of the polar representation of a time series in the frequency domain. Three example waveforms are decomposed into real (cosine basis function) and imaginary (sine basis function) parts, and plotted as vectors in a Cartesian space, where the length of the vector represents the amplitude at a given frequency (i.e., the joint contribution of both basis functions to the time series) and the angle represents the phase of the signal (i.e., the temporal position relative to the basis functions).

In general, the value on each axis represents the independent contribution of each of the two basis functions to the observed waveform during the interval that is examined. The joint ability of the two basis waveforms to account for the temporal variance in the time series is expressed by the length of the vector joining the point identified in the Cartesian space with the origin. This length is called the amplitude (and its squared value is the power). Note that shifts in the timing of the observed waveform relative to the two basis functions will change their relative contributions to the observed waveform, but will not change their cumulative contribution, in a manner analogous to an orthogonal rotation in two-dimensional space. Hence, these graphical representations illustrate another fundamental aspect of spectral analysis: orthogonal basis function pairs also allow for the computation of the phase spectrum, which contains the phase difference between the empirical signal and the best-fitting basis functions. The phase at a given frequency corresponds to the angle of this vector within the space spanned by the two basis functions: the tangent of that angle is equal to the ratio between the cross-products of the basis functions (e.g., the sine and cosine at 10 Hz) with the observed time series. The arctangent function is used to find the angle (see Figure 5).
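A minimal sketch of this computation (Python/NumPy; the 10 Hz test signal and its phase are illustrative assumptions) shows that the cross-products with the cosine and sine basis functions recover the same amplitude and phase as the fast Fourier transform; note that the sign of the sine cross-product follows the Fourier convention exp(−i2πft):

```python
import numpy as np

fs = 500
t = np.arange(0, 1, 1 / fs)              # 1-s epoch -> 1 Hz resolution
x = np.cos(2 * np.pi * 10 * t + 0.8)     # 10 Hz signal with 0.8 rad phase

real = np.sum(x * np.cos(2 * np.pi * 10 * t))    # cosine cross-product
imag = -np.sum(x * np.sin(2 * np.pi * 10 * t))   # sine cross-product, signed
                                                 # per the exp(-i2*pi*f*t) convention
phase = np.arctan2(imag, real)                   # arctangent gives the angle
amplitude = np.hypot(real, imag) * 2 / t.size    # vector length, scaled

coef = np.fft.rfft(x)[10]                        # 10 Hz bin of the FFT
assert np.isclose(phase, np.angle(coef))
assert np.isclose(amplitude, np.abs(coef) * 2 / t.size)
print(amplitude, phase)                          # -> ~1.0 and ~0.8
```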

Mathematically, the pair of orthogonal functions is often represented as two components in a so-called complex number, in which the two paired components, called the real and imaginary part, are combined. In Fourier analysis, by convention, the sinusoidal contribution is reflected in the imaginary part and the cosinusoidal contribution in the real part of the complex number. Together, the two orthogonal components span the Cartesian space shown in Figure 5. Thus, this representation is called the Fourier component representation, or trigonometric representation. However, in the majority of analysis suites and packages available to EEG/MEG researchers, Euler’s equation is used to describe the complex spectrum in terms of an exponential equation. This equation states that the component formulation shown in Figure 5 can be rewritten for any real number x as:

cos(x) + i sin(x) = e^(ix) (1.1)

with i being the imaginary unit (the square root of −1) and e the base of the natural logarithm. This formulation is convenient because it fully describes the complex spectrum in the intuitive terms of power and phase: in the exponential (polar) form, the magnitude of the complex number represents the amplitude (whose square is the power), and the exponent, that is, the angle x, represents the phase. In both cases, the published literature refers to an “imaginary part” and a “real part”, but depending on whether the component formulation or Euler’s formula is used, these terms refer to different aspects of the spectrum. Replication and communication are aided by clearly stating which formulation is used in a given algorithm or published work.

1.3.2 |. The Fourier spectrum and its frequency resolution

As noted above, the power and phase spectra of a digitally sampled time series contain the complete information available in the original time series, if full spectra are calculated. This means that a full spectrum of phase and power values can be converted back into its original time series. A full spectrum contains the same number of points as the decomposed time series, but usually only the first half of the Fourier coefficients are shown because the second half contains the same information. This is a result of the mathematical properties of the Fourier transform. As already noted, this decomposition method uses basis function pairs for each frequency that have a mean of 0 and are orthogonal, with their cross-product being equal to 0. For infinitely repeating functions, such as sinusoidal and cosinusoidal waves, these two requirements are not met when the basis function time series are truncated without completing a full cycle. As a consequence, only certain sets of frequencies can be analyzed in this fashion: frequencies that are integer multiples of the fundamental frequency of the time series, calculated as the inverse of its duration. Thus, for an analysis interval of length T seconds, the frequencies in the spectrum will be 1/T Hz, 2/T Hz, 3/T Hz, etc. The step size between these frequencies is called the frequency resolution of the spectrum. That is, the resolution of the output in the frequency domain is a function of the duration of the input time series. Therefore, inputting longer time segments produces higher resolution in the frequency domain. For example, a Fourier spectrum based on 2000 ms of EEG data will contain power values at intervals spaced at 0.5 Hz (1/2 s), and a spectrum based on 5000-ms segments will have steps spaced at 0.2 Hz (1/5 s). Section 2.3 provides a more detailed and practical discussion of this topic.
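The following sketch (NumPy; the 500 Hz sampling rate is an illustrative assumption) reproduces the resolution example given above:

```python
import numpy as np

fs = 500
for dur_s in (2.0, 5.0):
    freqs = np.fft.rfftfreq(int(dur_s * fs), 1 / fs)
    print(dur_s, freqs[1])   # bin spacing: 0.5 Hz for 2 s, 0.2 Hz for 5 s
```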

In principle, the choice of the basis functions is arbitrary, in that a variety of basis function pairs can represent signals. The use of sine and cosine as the basis functions is most common, but other basis functions are in wide use. If the full range of frequency spectra in a data set is to be built, it is important that the basic temporal shape of the basis functions can be scaled for all sets of frequencies that are going to be used. This potential limitation should be considered when analyzing high frequencies, in which the wavelength (and therefore the number of sampling points) available to reproduce the basic shape of the basis function may be limited. Therefore, it is important to identify frequencies of interest, particularly the highest frequency, and the time segments needed to analyze these frequencies. The vast majority of available algorithms for spectral analysis rely on sine and cosine waves and their variants.

Finally, neural time series used in the research context are digitized sequences of discrete samples of the continuous voltage or field data, and as such are subject to the Nyquist sampling theorem. This theorem implies that the spectrum of a time series may only correctly reflect frequencies from 0 Hz up to half of the sampling rate. This rate, ½ of the sampling or digitization frequency used during recording, is also referred to as the Nyquist frequency and represents an upper boundary for the frequency domain representation of a time series. For example, a sampling rate of 500 Hz would result in a Nyquist frequency of 250 Hz, with this frequency serving as the upper boundary in the frequency domain. To prevent misrepresentation of signals in the frequency spectrum, it is mandatory to filter out any signals exceeding the Nyquist frequency prior to analog-to-digital conversion. In the absence of robust hardware filtering at or below the Nyquist frequency, these under-sampled signals will result in so-called aliasing, the misrepresentation of above-Nyquist frequencies as lower-frequency phenomena. A detailed discussion and tutorial of digital sampling, filtering, and aliasing is provided by Cook and Miller (1992).
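A minimal sketch of aliasing (NumPy; the 300 Hz test signal is an illustrative assumption) shows an above-Nyquist signal re-appearing at a spurious lower frequency:

```python
import numpy as np

fs = 500                                  # Nyquist frequency: 250 Hz
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 300 * t)           # 300 Hz exceeds the Nyquist limit

freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2
print(freqs[np.argmax(power)])            # -> 200.0 Hz, a spurious alias
```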

2 |. STUDY PLANNING AND DATA PREPROCESSING STEPS

Planning a study for spectral analyses involves decisions regarding a set of general topics, shared across many different analytical approaches. These include the conceptualization and definition of the dependent variables, the experimental design, and the analysis interval, as well as decisions regarding the settings for recording and preprocessing. In this section, we discuss several of these issues and suggest ways in which authors may address them.

2.1 |. Conceptualizing spectral representations of neural data

As described above, any spectral representation of neural data may reflect unknown proportions of broadband activity and frequency-specific oscillatory phenomena which, while more narrow-band in nature, may also extend over a range of frequencies. Thus, an observed change in the power spectrum may reflect a change in activity in a specific frequency range or may reflect a change in the offset and exponent of the 1/f pink noise, or a combination of both. Several methods exist to identify these different contributions (e.g., Donoghue et al., 2020; He, 2014; Hughes et al., 2012). At the conceptual level, these methods rest on different assumptions regarding how the frequency spectrum is generated. There are two broad conceptualizations, and it is helpful to consider them explicitly. First, a power spectrum may be considered as resulting from a set of non-overlapping narrowband activities plus stochastic error (narrowband model, see Model 1 below). In contrast, the second conceptualization proposes that a power spectrum may reflect the sum of a set of narrowband activities added to a background formed by broadband phenomena (narrowband+broadband model, see Models 2a and 2b below) plus stochastic error. Formally, these two models correspond to the following equations describing activity at a frequency f:

For the narrowband model:
Model 1: Power(f) = Narrowband(f) + error
For the narrowband+broadband model:
Model 2a: Power(f) = Narrowband(f) + Broadband(f) + error
Or, more specifically:
Model 2b: Power(f) = Narrowband(f) + 1/f(f) + error

Although both models are mathematically viable, the model chosen to represent the power spectrum leads to fundamental differences in the estimation of the parameters entered in the statistical analyses and is therefore critical for the practical and theoretical inferences that are made. Traditionally, analyses in the frequency domain were conducted implicitly assuming the narrowband model (e.g., Lehmann et al., 1987). However, it should be noted that some contribution of non-oscillatory broadband (1/f) phenomena is likely present in most data sets and, therefore, Model 2 is typically more realistic. There are several different methods for conducting data analyses under the narrowband + broadband model that are discussed later in this document.

Another conceptual distinction refers to the extent to which differences in spectral power are thought to reflect the multiplicative modulation of narrowband activity, whereby a frequency band only reflects a single type of activity that can change over time, versus additive mechanisms, in which changes in power reflect the summation of different types of activity. Considering multiplicative versus additive mechanisms is important because this consideration impacts how the spectrum is quantified: The narrowband model (see Model 1) readily accommodates both multiplicative and additive mechanisms, since only one parameter, the intensity of the narrowband effect, is estimated for each frequency. Many traditional studies in so-called quantitative EEG research adopt this perspective (Nuwer, 1997; Pivik et al., 1993). Therefore, nonlinear transformations of the observed power, such as log or decibel transformations, which are consistent with the multiplicative model, are mathematically appropriate. However, if the narrowband+broadband model is adopted, two parameters exist for each frequency: the respective contributions of narrowband and of broadband components to the observed power. Therefore, nonlinear transformations should not be applied to the raw observed power before separating the contributions due to each component, because doing so would lead to incorrect estimation of these two parameters. For example, one of the parameters could be systematically over- or under-estimated depending on the value of the other (Gyurkovics et al., 2021). Several procedures are available to achieve this separation if one is interested in applying nonlinear transformations under the narrowband+broadband model (Clements et al., 2021; Donoghue et al., 2020; He, 2014; Hughes et al., 2012).

In summary, when quantifying frequency domain data, results may be strongly influenced by the underlying model that guides the interpretation process. When oscillatory activity is the focus of analysis in the context of the narrowband+broadband model, it is critical to take into account concurrent non-oscillatory activity, such as 1/f noise. Importantly, the model adopted, whether explicitly or implicitly, affects the outcome and interpretation of the data, such as when differences in spectral power are interpreted as only due to narrowband activity or when using nonlinear transformations. It is therefore recommended that the conceptualization of the spectral composition be made explicit and justified when making inferences from frequency domain representations in articles and reports.

2.2 |. Defining and selecting frequency bands

Paralleling the plethora of methods available for extracting dependent variables from time domain data, such as ERPs, many different approaches are used for measuring frequency domain or time-frequency phenomena. For decades, researchers have relied on averaging spectral power across frequencies within so-called canonical frequency bands to obtain indices thought to relate to certain behavioral and cognitive processes. Traditional demarcations of canonical frequency bands have typically defined the delta (<3 Hz), theta (4–7 Hz), alpha (8–12 Hz), beta (13–30 Hz), and gamma (>30 Hz) bands. As discussed in Section 2.1, raw band power derived from a spectrum will reflect a mixture of oscillatory and non-oscillatory processes. It is thus recommended to consider these two sources of power and specify the assumptions regarding contributing processes.
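Where band averages are nevertheless reported, a sketch such as the following makes the chosen band edges explicit; the edges below mirror the traditional demarcations listed above and are assumptions to be justified for the population and paradigm at hand:

```python
import numpy as np

# Band edges following the traditional demarcations (illustrative values)
BANDS = {"delta": (0.5, 3.0), "theta": (4.0, 7.0), "alpha": (8.0, 12.0),
         "beta": (13.0, 30.0), "gamma": (30.0, 100.0)}

def band_power(freqs, power, band):
    """Mean power within a named band; a hypothetical helper function."""
    lo, hi = BANDS[band]
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()
```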

In addition, the literature increasingly converges on the conclusion that many canonical frequency bands listed in textbooks and recent guideline articles are poorly replicable across different populations, and across various tasks and paradigms. For example, the frequency of the occipital alpha signal (around 10 Hz in young adults, see Figure 4) changes substantially over the lifespan (Hashemi et al., 2016; Polich, 1997). Furthermore, experimental and individual difference effects that have traditionally been linked to specific canonical frequency bands have commonly been observed at frequencies outside these canonical bands (Newson & Thiagarajan, 2019; Shapiro et al., 2017). As such, forming and testing hypotheses regarding effects in canonical bands without establishing the specificity and sensitivity of the dependent variable may yield misleading or ungeneralizable results. With the advent of advanced statistical techniques (see Section 4.1), it is possible to apply mass univariate techniques with appropriate corrections for multiple comparisons to examine multiple frequencies (Groppe et al., 2011; Maris, 2012), aiding in linking specific frequency domain phenomena to the manipulation or comparison of interest.

2.3 |. The trade-off between temporal resolution and frequency resolution

As discussed in Section 1.3, quantifying the power and phase of a time series in the frequency domain requires integrating information across a period of time. Frequency domain analyses are subject to the Fourier uncertainty principle, which holds that the number of available frequency bins (e.g., ticks on the x axis, maximally extending between 0 Hz and the Nyquist frequency) increases with the temporal duration of the time segment used for the spectral analysis (temporal integration window). Thus, spectra computed from longer time series have greater frequency detail than do spectra computed from shorter time series. As a result, higher frequency resolution comes at the cost of lower time resolution. Consideration of this tradeoff is particularly important because most EEG/MEG signals are not stationary for long. This trade-off between temporal and frequency specificity is inherent in the majority of methods discussed in this document.

The Fourier uncertainty principle impacts study designs in which a researcher may wish to include longer versus shorter inter-trial intervals, or consider shorter versus longer trial durations to ensure sensitivity to a time range of interest, while also ensuring robust estimation of the spectrum. Depending on the aims of the study, researchers may want to emphasize time resolution (e.g., using shorter analysis intervals), frequency resolution (e.g., using longer analysis intervals), or select a specific trade-off between them, accomplished by methods such as wavelet transforms or multitaper analyses. For example, researchers interested in short bursts of high-frequency broadband signals at frequencies above 40 Hz may not be concerned with specific frequencies, but may wish to characterize the timing of these neural events in sufficient detail. By contrast, researchers interested in changes in alpha peak frequency over the life span may wish to emphasize frequency resolution, by ensuring sufficient duration of the analytical intervals examined. Many methods for time-frequency analysis, as discussed in Section 3.2 below, also involve trade-offs between time and frequency resolution for different frequency ranges (Tallon-Baudry & Bertrand, 1999). To enable reproduction of these algorithms within and across labs, it is recommended that authors report the duration of the analytical time interval used for the frequency domain analyses. It is also recommended that they report the resulting time and frequency resolution of the spectrum or of the time-frequency representations at the frequencies of interest.
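As one illustration of such a trade-off, the following sketch implements a complex Morlet wavelet transform from NumPy primitives (in the spirit of Cohen, 2014); the function name, the normalization, and the default of seven cycles are illustrative assumptions, with the number of cycles setting the balance between time and frequency resolution:

```python
import numpy as np

def morlet_power(signal, fs, freq, n_cycles=7):
    """Time-varying power at `freq` via convolution with a complex Morlet.

    A larger `n_cycles` lengthens the wavelet: finer frequency resolution,
    coarser time resolution (and vice versa).
    """
    sd = n_cycles / (2 * np.pi * freq)            # Gaussian SD in seconds
    t = np.arange(-4 * sd, 4 * sd, 1 / fs)        # wavelet support (+/- 4 SD)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sd**2))
    wavelet /= np.abs(wavelet).sum()              # simple amplitude scaling
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2
```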

Many algorithms and widely used pipelines include an option to increase the frequency detail of a spectral representation by adding zeros to the time series entered in the analysis. This practice is referred to as zero-padding. Zero-padding may be helpful in situations where a given frequency resolution is desirable but cannot be attained with the interval duration available from the time-segmented data. Such situations occur in cases where researchers wish to quantify the power at a driving frequency evoked by oscillatory stimulation of a sensory system. For example, researchers conducting a study with auditory steady-state responses may be interested in the 41.6 Hz auditory response to 1-s sound stimuli that are amplitude-modulated at that exact frequency. A frequency analysis of the 1-s stimulation intervals would result in a spectrum with 1 Hz resolution, failing to include a frequency bin for the frequency of interest, 41.6 Hz. This is because frequency bins would increase in constant steps of 1 Hz, eventually yielding bins of 41 Hz and 42 Hz, which do not fully capture the 41.6 Hz frequency of interest. Thus, the researchers may opt to add zeros at the beginning and end of each epoch to be analyzed, to attain the desired frequency resolution. In this case, they may add 750 ms of zeros at the beginning and end of each 1-s data segment. The resulting epoch duration of 2.5 s results in a frequency resolution of 1/2.5 = 0.4 Hz. Starting at 0 Hz and extending in even steps of 0.4 Hz, the spectrum will now include a frequency bin corresponding to a basis function at 41.6 Hz, allowing for clearer quantification of the auditory steady-state response. It should be noted, however, that zero-padding does not increase the true underlying frequency resolution, as no new information is added. Instead, it is a form of interpolation using the existing data. In cases where zero-padding is used, it should be fully reported in the manuscript, including the number and location of the added zeros (i.e., before, after, or before and after) relative to the empirical time series.
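The following sketch (NumPy) reproduces the 41.6 Hz example above; the signal and padding durations follow the numbers given in the text:

```python
import numpy as np

fs = 500
t = np.arange(0, 1, 1 / fs)                       # 1-s stimulation epoch
x = np.sin(2 * np.pi * 41.6 * t)                  # 41.6 Hz steady-state signal

pad = np.zeros(int(0.75 * fs))                    # 750 ms of zeros
padded = np.concatenate([pad, x, pad])            # 2.5-s epoch

freqs = np.fft.rfftfreq(padded.size, 1 / fs)
print(freqs[1])                                   # -> 0.4 Hz bin spacing
print(freqs[np.argmax(np.abs(np.fft.rfft(padded)))])  # -> 41.6 Hz
```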

2.4 |. Stationarity of the signal

Stationarity, often conceptualized as covariance stationarity, indicates that low-order statistical properties of the time domain signal (e.g., the mean and variance; in the case of sinusoidal data, this includes frequency, amplitude, and phase) do not change over time. This is relevant because most spectral transformations, such as the Fourier transform, veridically represent all aspects of the time series in the complex spectrum. These aspects include transient and nonstationary signals in addition to oscillatory processes, which are more likely to be stationary. Thus, interpretation of a given frequency spectrum partly depends on the extent to which the underlying processes were stationary and extended throughout the time interval entering the analysis.

Stationarity is also an assumption of many non-Fourier algorithms for spectral analysis, such as half-wave analysis and autoregression (see Section 3.1.4), both of which will yield misleading results if conducted on nonstationary signals. One useful approach to addressing this problem is to quantify stationarity, using suitable statistical tests such as the augmented Dickey-Fuller test (Elliott et al., 1996) or the Kwiatkowski-Phillips-Schmidt-Shin test (KPSS; Kwiatkowski et al., 1992). It is recommended that authors detail the extent to which their data were stationary and the tests used to confirm that this was so, along with any transforms, such as differencing or filtering, aimed at achieving stationarity.
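A minimal screening sketch using the statsmodels package (assuming it is installed; the helper function is hypothetical) might combine the two tests, noting that the ADF test takes nonstationarity (a unit root) as its null hypothesis, whereas KPSS takes stationarity as its null:

```python
from statsmodels.tsa.stattools import adfuller, kpss

def screen_stationarity(x, alpha=0.05):
    """Hypothetical helper combining ADF and KPSS screening."""
    adf_p = adfuller(x)[1]                  # H0: unit root (nonstationary)
    kpss_p = kpss(x, regression="c")[1]     # H0: level stationarity
    return (adf_p < alpha) and (kpss_p > alpha)
```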

2.5 |. Artifacts and artifact control

Neurophysiological time series are prone to a variety of artifacts, defined as signals that do not reflect the neural processes targeted by the analysis. The detection, control, and correction of these artifacts is a rich topic and discussed elsewhere more broadly, including recommendations for the implementation of these methods (Keil et al., 2014). This section is focused on aspects of artifact detection and control that are particularly pertinent for studies using frequency domain and time-frequency domain techniques.

As discussed in Sections 1.2, 1.3, and 2.1, spectral representations contain all aspects of the original time series, including non-oscillatory, transient events such as ERPs or broadband phenomena (e.g., blinks), but also oscillatory events that may have non-cerebral origin such as electromyographic (EMG) signals from facial and bodily muscles. As such, spectral representations will also fully reflect nonphysiological artifacts such as voltage jumps caused by loose electrodes, 50/60 Hz line noise, and rapid jumps in voltage created by movement of the participant or equipment. Thus, carefully examining the time series and the spectrum before and after artifact rejection and artifact control is recommended to ensure the validity of the dependent variable of interest. In addition to visual inspection and semi-automatic artifact control, automated pipelines are increasingly used to accomplish these steps. In all these cases the pipeline usage, settings, and parameters should be fully documented in a published manuscript. The following paragraphs describe major physiological artifacts that may threaten the validity of frequency domain analyses, along with approaches for controlling them. Figure 6 illustrates how retaining epochs with common artifacts affects spectral and time-frequency analyses in a data set with 20 artifact-free trials, shown in Figure 6a.

FIGURE 6. Typical artifacts affecting frequency domain and time-frequency domain analyses of neural time series data. The left column shows one of 20 EEG trials, segmented relative to the onset of a visual working memory task, either free of artifact (a) or affected by three frequent artifact types (b: sharp transient; c: drift; d: EMG). The middle column shows the average (across 20 trials) power spectrum of the time period between 0 and 6 s, with one trial (left column) affected by different artifact types (red lines) compared to the average of 20 artifact-free spectra (a, middle panel). The right column shows results of a wavelet transform of the same data, with (b through d) and without (a) the contaminated trial included. Note that the presence of one trial with a strong artifact is sufficient to induce pronounced changes in both the frequency domain and time-frequency representations. See text for artifact descriptions.

2.5.1 |. Ocular artifacts

A variety of artifacts arise from eye-related activity. These include eye movements, in which the corneo-retinal dipole, extending between the negatively charged retina and the positively charged cornea, creates transient changes in voltage and field gradients across the head as the eyes move. Eye blinks (i.e., complete or partial eye lid closures) cause similar, abrupt changes in the electromagnetic field, maximal near frontal sensors (see Figure 6b). Such sharp, transient changes in the time series tend to be represented as strong broadband signals in spectral analyses in that they extend across a wide range of frequencies and may thus be mistaken for heightened power in a specific band (Figure 6b, right panel), especially if broadband contributions are not separately considered in the analysis (see Section 2.1). Another source of artifact includes microsaccades, also referred to as fixational saccades, which are associated with spike potentials in neural time series (Plöchl et al., 2012). The voltage changes caused by microsaccades also tend to appear as broadband signals in spectral analyses, often in the higher frequency ranges, and thus may be misinterpreted in a fashion similar to that of transient eye movements. Under certain conditions, these ocular artifacts have been shown to greatly affect spectral analyses of neural time series, including time-frequency analyses (Yuval-Greenberg et al., 2008). EMG signals arising from ocular muscles, such as the musculus orbicularis oculi, may introduce oscillatory as well as non-oscillatory artifacts, predominantly at frontal sensors. EMG artifacts are discussed in greater detail in Section 2.5.4 below. It is recommended that artifact correction in studies of oscillatory brain activity consider the unique challenges discussed above, beyond what has been recommended for electromagnetic time series analyses more broadly (Keil et al., 2014; Pernet et al., 2020; Picton et al., 2000). Specifically, authors may wish to report the exact types of ocular artifacts removed from the data, instead of referring to artifact removal only broadly, by including specific information on the extent to which, and how, blinks, saccades, and oculomotor EMG were controlled. Because most ocular artifacts have a characteristic topography, visible in most spatial representations, it may be useful to include a topographical illustration that includes frontal sensors, allowing readers to assess the presence of any residual ocular activity in the signal of interest. Finally, it may be necessary to conduct analyses that quantify the relationship between the occurrence of a given ocular artifact, such as microsaccades, and variations in the dependent variable, to rule out that the outcome measure is driven by or confounded with ocular artifacts. Many analysis pipelines contain algorithms for detecting and controlling ocular artifacts. Their usage, parameter settings, and the numbers of affected trials and channels should be reported in the manuscript.

2.5.2 |. Cardiac and respiratory artifacts

Cardiac artifacts include the direct interference of voltage gradients or magnetic fields generated by the cardiac cycle at cranial sensor locations (Sun et al., 2016), as well as artifacts related to associated cardiovascular (blood flow) processes, often referred to as pulse artifacts (Tamburro et al., 2019). These two types of cardiac artifacts differ in their temporal profile, with vascular artifacts showing a slower, smoother time course and electrical artifacts reflecting the cardiac cycle, thus including a sharp transient deflection corresponding to the R-wave of the electrocardiogram. These artifacts may introduce non-cerebral signals at a variety of frequencies, ranging from below 1 Hz to broadband signals introduced by the sharp transient caused by the R-wave. The prominence of these artifacts can be reduced by the choice of an appropriate recording reference (see Keil et al., 2014).

Respiratory activity is likewise associated with two types of artifacts. The first (Figure 6c) is related to the slow and rhythmic movements of the body, affecting sensor position relative to the head (MEG) or influencing electrode impedance through motion of the head or electrode leads (EEG). The second type of artifact linked to respiration is produced by more abrupt changes in body position co-occurring with inhaling and exhaling, again prompting changes of head position and/or slight movement of scalp sensors, reflected in peaks in the recorded time series.

As discussed in Sections 1.2 and 2.1, both cardiac and respiratory artifacts will be represented in spectral analyses and may not be readily identifiable as artifactual after being included in the spectrum (see Figure 6c, middle and right panels, for an illustration of low-frequency artifacts induced by slow drift). Thus, examining the time domain signals used for frequency domain analyses is particularly important. Researchers may assume that averaging across multiple trials will attenuate the contribution of these signals, as long as the artifacts are not systematically related to the interval timing used for averaging and a sufficient number of trials is available. Both conditions are often not met. For example, heart rate may systematically vary across the analytical time segment in studies of emotional reactivity or attention, when an attended or alerting stimulus is presented, and in studies where the experimental paradigm does not involve trial averaging, such as in studies of resting states or sleep. Several methods for removing cardiac and respiratory artifacts exist, some of which rely on multivariate analysis of the data through principal component analysis (PCA) or independent component analysis (ICA). To facilitate replication, these methods should be fully described with appropriate citations, and user settings and interactive choices reported, including the component selection criteria, the number of components selected for each participant, and the algorithm used, if possible with a reference to the original manuscript guiding the choice.

2.5.3 |. Electrodermal (sweating) artifacts

Artifacts produced by sweat gland activity share several properties with cardiac and respiratory artifacts in that the associated changes in impedance prompt slow changes, typically in the range well below 1 Hz (see Figure 6c). Paralleling cardiac and respiratory artifacts, perspiration-related artifacts may also be misinterpreted as slow EEG activity and may be identified and controlled for with the same methods discussed in Section 2.5.2. In addition to these slow artifacts, in EEG recordings the influx of sweat may also cause rapid changes in electrode impedance as well as short-circuiting sensors, often reflected in brief voltage spikes (Kappenman & Luck, 2010). This has been a concern particularly when using dry electrode systems in which the humidity of the skin serves as the electrolyte facilitating conductance. The identification and control of such sweat spikes in voltage time series parallels those of other rapid transient artifacts, including faulty electrodes, which are identifiable by their specific topography. They are typically controlled by removal or interpolation of affected channels, or by removal of the affected time segment. Experimenters should work to control the environmental conditions to minimize these artifacts when possible.

2.5.4 |. Non-ocular (facial, neck) EMG and other motor artifacts

Movement of the neck, extremities, and facial muscles, as well as talking, shivering, sniffling, hiccupping, and glossokinetic (tongue) movements introduce artifacts in EEG and MEG recordings. Some of the EMG phenomena caused by these processes are oscillatory in nature, in that they prompt rhythmic field changes at specific, typically higher, frequencies. These artifacts threaten validity especially in studies focusing on higher-frequency oscillations, which may overlap with the EMG spectrum and its substantial power at frequencies above 20 Hz (see Figure 6d, middle and right panels). It is recommended that these artifacts be identified through their topography, which is expected to be at its maximum near the generating muscle groups, as well as through inspection of the time series. In addition, multivariate approaches (e.g., PCA and ICA), as discussed below in Section 4.1, may be suitable to detect and remove variance related to these motor artifacts.

2.6 |. Referencing and spatial transformations

A substantial number of EEG and MEG studies aim to quantify spatial dependencies, across sensors or across brain regions. Often, the overarching goal of these analyses is to characterize neural connectivity across brain regions. Various algorithms exist to measure spatial dependencies, including the methods described in Section 3.4 below (Ding et al., 2011; Nolte et al., 2004; Nunez, 1996; Stam et al., 2007). Both volume conduction effects (i.e., spreading of voltage within the brain and across the scalp) and dipolar fields may lead to spurious positive results, suggesting oscillatory interactions among different locations where none exist (Nunez et al., 1997). Thus, many of the available metrics benefit from—and some require—spatial transformations of EEG and MEG data. For example, measures of inter-site dependence (e.g., inter-site phase-locking, magnitude-squared coherence) are more readily interpretable if applied to Laplacian or current source density (CSD) transformations of EEG data (Nunez et al., 1997). The CSD is based on the second spatial derivative of the EEG scalp potential, thus reducing the impact of constant voltage shifts as produced by volume conduction. As is evident from these examples, spatial transformations are often used as preprocessing steps, applied to single-trial data, or to averaged data. The order in which these steps are applied is crucial for the pipeline to yield interpretable results.

For example, performing spectral analyses on absolute source strength values rendered by a distributed source model projection is incorrect, because the phase information of the underlying signal is no longer present in absolute source strength values. Instead, spectral transformations must be performed on source representations that still possess phase information (Hauk et al., 2002). In a similar vein, performing source estimation on power spectra or on time-varying power is also incorrect in most cases, because the phase/polarity information needed is no longer present and the data are not in the unit (e.g., voltage, magnetic field strength) that the source projection algorithm expects. Instead, spatial transformations, such as CSD or source estimation, may be applied to the real and imaginary parts of the complex spectrum that are output by a Fourier analysis, or to corresponding complex elements from other analyses that are still endowed with phase information. The flowchart in Figure 7 summarizes these issues.

FIGURE 7

Combining source estimation (including similar spatial transformations such as CSD) and spectral analyses in a sequential analysis pipeline. Pipelines with inappropriate ordering of analytical steps (shown in red) may yield non-interpretable results, or results that do not reflect what the user intends. For example, most source estimation algorithms assume that the original polarity in EEG/MEG recordings is present and thus yield uninterpretable results when applied to power spectra. Authors may wish to ensure that the sequence of processing steps as applied in their pipeline is appropriate for their data type.

In summary, it is strongly recommended that authors report the reference used during EEG recording, along with any subsequent spatial preprocessing steps and transformations, and the order in which they are applied. Additional methods for heightening the validity of inter-site analyses are discussed in Section 3.4.

3 |. RECOMMENDATIONS FOR REPORTING ON SPECIFIC ANALYTICAL TECHNIQUES

This section provides specific recommendations for widely used methods. Brief explanations are included, some of which expand on concepts introduced above. Summary recommendations appear at the end of each sub-section, and readers are also invited to use the corresponding checklists at the end of this document.

3.1 |. Spectral analyses

As discussed in Section 1.1, the spectrum of a neural time series is a representation in which the x axis shows frequency in Hz, and the y axis shows spectral amplitude, power, or phase at each of the frequencies plotted on the x axis (see Figure 1, right panel). The most widely used form of spectral analysis in neuroscience is a variant of Fourier analysis, the discrete Fourier transform (DFT). The output of the DFT is a complex spectrum, which contains two values for each frequency, the real (i.e., cosine) and imaginary (i.e., sine) components (see Figure 5 and Section 1.3). From these components, the amplitude (i.e., magnitude, computed as the modulus of the complex value, with power as its square) and phase (i.e., relative position in the oscillatory cycle, computed as the arctangent of the imaginary over the real component) can be determined, after taking into account two properties of the raw DFT spectrum: First, it is symmetrical, mirrored at the Nyquist frequency (i.e., half of the sampling rate), and the portion above Nyquist is not interpretable; second, because the DFT is mathematically a sum across time, raw power increases with the duration of the input segment.
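
To make these steps concrete, the following minimal sketch (Python/NumPy) extracts the complex spectrum, amplitude, power, and phase from a single simulated epoch; all signal parameters are illustrative rather than prescriptive:

```python
import numpy as np

fs = 500.0                               # sampling rate in Hz (illustrative)
t = np.arange(0, 2.0, 1.0 / fs)          # one 2-s epoch
x = np.sin(2 * np.pi * 10 * t)           # simulated 10-Hz oscillation

# rfft returns only the valid (non-mirrored) half of the complex spectrum,
# from 0 Hz up to the Nyquist frequency
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)

amplitude = np.abs(X)                    # modulus of real and imaginary parts
power = amplitude ** 2                   # raw, un-normalized power
phase = np.angle(X)                      # arctangent of imaginary over real
```

Note that the raw power values in this sketch still scale with the duration of the input segment; normalization is addressed next.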

3.1.1 |. Normalization of the spectrum

To facilitate interpretability of power values across studies using different interval lengths, many available implementations for neural time series analysis contain normalizations for the length of the analytical segment used to calculate the spectrum, often by dividing the power by the number of bins in the spectrum. Normalization by the length of time often results in a density measure with a unit of power per frequency (e.g., μV2/Hz). Further normalization steps involve discarding the invalid portion of the spectrum above the Nyquist frequency and multiplying the remaining, valid lower half of the power spectrum by 2 (except at the DC and Nyquist bins), to correct for the allocation of power to the discarded portion of the spectrum. Reporting on any normalization steps involved in the spectral analysis is strongly encouraged, because it enables the interpretation of published spectral power values and fosters replicability and reproducibility of findings. See https://github.com/kylemath/MathewsonMatlabTools/blob/master/EEG_analysis/kyle_fft.m for an example implementation in MATLAB code.
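
In addition to the MATLAB example linked above, the following Python sketch illustrates one possible normalization scheme (density scaling of a one-sided power spectrum); conventions differ across toolboxes, so this should be read as an illustration of the steps described above, not a canonical recipe. An even segment length is assumed, so that the last bin is the Nyquist bin:

```python
import numpy as np

def one_sided_psd(x, fs):
    """Return frequencies and a one-sided power spectral density (unit^2/Hz)."""
    n = x.size                            # assumed even, so psd[-1] is Nyquist
    X = np.fft.rfft(x)                    # keep only the valid lower half
    psd = (np.abs(X) ** 2) / (fs * n)     # density scaling by rate and length
    psd[1:-1] *= 2.0                      # reassign power from the discarded
                                          # half (all bins except DC/Nyquist)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd
```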

3.1.2 |. Measuring band power from the spectrum

As discussed in Sections 2.1 and 2.2, specificity of effects in a frequency band of interest depends on a range of assumptions regarding the composition of the spectrum. Regardless of how these assumptions were addressed, specificity of effects in a frequency band may be tested by entering other control band power values from the same spectrum in the analysis and using appropriate statistical models to examine specificity (see Section 4.1 for examples and guidelines). When using band power, it is generally recommended to report the full spectrum from which the band was extracted, along with the way band power was measured (e.g., mean, median, peak), and how 1/f effects or other spectral shape effects were addressed.

Another widely used approach has been the computation of relative power, where the relative contribution of a given frequency band of interest to the total spectral power is expressed as a ratio, dividing the power in each frequency band, including the band of interest, by the total power across all frequency bands. This method reduces biased estimates that arise from differences in spectral offset and expresses power as a percentage, or another proportion metric, of the total power. However, it is important to note that low and high frequencies from the same Fourier spectrum are based on the same window length, which makes comparisons of power at widely separated frequencies problematic. For example, if 3000 ms of data are used, power at 1 Hz is estimated based on three cycles, whereas power at 100 Hz is estimated based on 300 cycles. Wavelet techniques in which the number of cycles changes as a function of frequency (see Section 3.2.4) can be used to reduce this bias. Relative power estimation may also introduce new biases reflecting the 1/f shape of the spectrum, as discussed in recent reports (Barry & Blasio, 2021; Donoghue et al., 2020), and is therefore not generally recommended; if it is used, the full power spectrum should be reported (Pivik et al., 1993).
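
For illustration, the sketch below computes absolute and relative band power from a power spectral density; the helper function, the frequency bands, and the toy 1/f-shaped spectrum are illustrative assumptions:

```python
import numpy as np

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD between lo and hi (in Hz)."""
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

freqs = np.linspace(0, 125, 251)                    # 0.5-Hz resolution (toy)
psd = 1.0 / np.maximum(freqs, 0.5)                  # toy 1/f-shaped spectrum
psd += np.exp(-0.5 * ((freqs - 10) / 1.5) ** 2)     # add an alpha peak

alpha = band_power(freqs, psd, 8, 12)               # absolute band power
relative_alpha = alpha / band_power(freqs, psd, 1, 40)  # proportion of total
```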

Researchers may also have a priori hypotheses regarding specific ratios between frequency band power values extracted from the same spectrum, such as the ratio of power values in the traditional alpha (8–12 Hz) and theta (4–7 Hz) frequency bands. It should be noted that these relatively simple indices, although traditionally used, have received substantial recent criticism for being confounded with the overall spectral offset and with the shape of the spectrum from which they are calculated (see Sections 2.1 and 2.2). They may thus be replaced by more sophisticated analyses mentioned in Section 2.2, which are already available to researchers (see Section 4.1 below; Clements et al., 2021; Donoghue et al., 2020). Relative spectral metrics may also be informed by neurophysiological theories of brain function (Haegens et al., 2022; Lisman & Jensen, 2013) and may increase the external validity of the measurement under certain circumstances. Where such metrics are used, it is recommended that the spectral analysis underlying the calculation of the relative power measures be detailed as described in this section.

3.1.3 |. Edge artifacts and window functions

Because spectral representations reflect all existing variance in the time series, large variations in values at the beginning and end of the empirical input time series, caused by the abrupt onset and offset of the segment, lead to spectral distortions known as “edge artifacts”. Edge artifacts are present in many situations where temporally constrained intervals are analyzed, as is often done in time-frequency analysis, but also in many studies measuring spectral power. To minimize these effects, researchers often apply window or taper functions, which ramp up from zero to one and back down to zero. Weighting the data series by such a function forces the ends of the data vector to zero. Different taper window functions are defined by the way in which they ramp up to one and down to zero. Common window functions include Hann(ing), Hamming, Kaiser, Bartlett, Tukey, Blackman, and cosine-square functions. Many window functions do not allow or require the definition of a ramp-up/ramp-down duration because they ramp up over the first half of the segment and down over the second half, so that only the midpoint remains at its original value (i.e., is multiplied by 1). In these cases, as well as in cases where the ramp-up and ramp-down periods are set by the researcher, these parameters should be reported along with the duration and type of the window function. Choices in this regard may be guided by computational principles (Harris, 1978) but also by the aim of replicating common methods (e.g., most studies use a Hann or Hamming window).
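
The sketch below illustrates the application of a taper before the DFT; the Hann window is one common choice among those listed above, and the signal is a placeholder:

```python
import numpy as np
from scipy.signal import get_window

fs = 500.0
rng = np.random.default_rng(0)
x = rng.standard_normal(int(2 * fs))     # 2-s placeholder segment

w = get_window("hann", x.size)           # ramps from 0 up to 1 and back to 0
X = np.fft.rfft(x * w)                   # taper forces segment edges to zero
```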

Averaging the spectral estimates of multiple overlapping windows within a segment is often used in spectral analysis to increase the signal-to-noise ratio of the spectrum (illustrated in the sketch below). However, because of the Fourier uncertainty principle described above (see Section 1.3.2), estimating spectra for multiple, shorter sub-segments also decreases the frequency resolution of the spectrum. Thus, replicability of the analysis is only achieved by fully reporting the type, number, and overlap of any window functions used in the estimation of the spectrum. A further reason for fully describing window functions lies in the fact that applying a window function often changes the amount of spectral leakage, that is, the spurious shifting of power to other parts of the spectrum, adding to the power, if any, at those frequencies. Leakage is often observed in spectra computed from relatively short analytical intervals. Under certain conditions, some applications benefit from non-windowed spectral analysis, especially those in which the specific frequency of interest is exactly known a priori, such as in studies of brain stimulation or steady-state potentials. In these cases, researchers may wish to avoid window functions and instead ensure that full cycles of the frequency of interest are present in the analytical interval (i.e., select an interval that is an integer multiple of the cycle length) and that trends and offsets from baseline are removed as needed.
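
A minimal example of windowed, overlapping-segment averaging (Welch's method) follows; the segment length, overlap, and window type shown here are arbitrary choices of exactly the kind that should be reported:

```python
import numpy as np
from scipy.signal import welch

fs = 500.0
rng = np.random.default_rng(0)
x = rng.standard_normal(int(10 * fs))    # 10 s of placeholder data

freqs, psd = welch(
    x,
    fs=fs,
    window="hann",         # taper applied to each sub-segment
    nperseg=int(2 * fs),   # 2-s sub-segments -> 0.5-Hz frequency resolution
    noverlap=int(fs),      # 50% overlap between successive sub-segments
)
```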

3.1.4 |. Non-Fourier methods

As discussed above (see Sections 1.3 and 2.1), the basis functions used for quantifying the spectral power at each frequency are critical for the correct interpretation of power spectra. In addition to the many flavors of Fourier analysis, all of which use sine and cosine basis functions for spectral estimation, several other approaches are frequently used. These methods include autoregression, in which oscillatory patterns in the data are quantified based on linear prediction of future data points by past data points, and a range of methods in which basis functions are estimated empirically, from the data themselves. This section briefly discusses recommendations for methods based on autoregression and on empirical basis functions.

Parametric spectral analysis

In contrast to the non-parametric, Fourier-based approaches discussed above, parametric spectral analysis starts with the assumption that the measured data are realizations of an underlying stochastic process that can be well characterized by an autoregressive (AR) model (Ding & Rangarajan, 2013). An AR model predicts future points from past points of the same time series. The extent to which its assumptions are met should be addressed in the manuscript, which may include tests of statistical stationarity of the time series (see Section 2.4). The parameters of the AR model, including the model order and the model coefficients, are estimated from the data and become the basis for obtaining spectral quantities such as the power spectrum. The advantages of the parametric method include the ability to resolve spectral quantities at arbitrarily high resolution in the frequency domain, the ability to obtain smooth spectral estimates, reduced vulnerability to short data segments, and the ability to generate Granger causality spectra. Disadvantages include the potential difficulty of identifying an optimal model that fits the data well. It is recommended that the model order be reported, in addition to the criterion by which it was determined (e.g., the Bayesian or Akaike information criterion; Akaike, 1974). In addition, the exact implementation of the autoregressive algorithm should be given, with specific references or in mathematical form.
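
A compact Yule-Walker sketch of AR spectral estimation is given below; the model order p is assumed rather than selected by an information criterion, a choice that a full analysis would need to justify and report:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_spectrum(x, p, fs, nfreq=256):
    """Estimate a two-sided AR(p) power spectral density via Yule-Walker."""
    x = x - x.mean()
    n = x.size
    # biased autocovariance estimates r[0..p]
    r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(p + 1)])
    a = solve_toeplitz(r[:p], r[1 : p + 1])     # AR coefficients a[1..p]
    sigma2 = r[0] - np.dot(a, r[1 : p + 1])     # innovation variance
    freqs = np.linspace(0, fs / 2, nfreq)       # arbitrarily fine grid
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, p + 1)) / fs)
    return freqs, (sigma2 / fs) / np.abs(1 - z @ a) ** 2
```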

Data-based analyses

Methods for spectral analysis based on empirical features have existed for a long time and have recently seen revived interest (Loza, 2019; Melkonian et al., 2003). For example, a range of spectral analysis algorithms, so-called half-wave analyses, aim to identify peaks or zero-crossings in the data, which are taken as indexing the completion of one half-cycle of the oscillation of interest (Oken, 1986; Pooja et al., 2021). Several variants of so-called matching pursuit algorithms are also increasingly used (e.g., Loza & Principe, 2016). These computationally demanding methods quantify the overlap between a user-defined set (i.e., dictionary) of oscillations of interest (i.e., atoms) and the empirical data. If these methods are used, reproducing the approach is aided by reporting the specific algorithm used and providing a mathematical formulation and links to example data and working code. Papers proposing new analytical methods and algorithms are expected to provide the code needed for running the analyses, enabling reviewers and readers to test and use the method. Often, preprocessing steps are crucial for reproducing the analyses; providing details about these steps facilitates communication as well. For instance, knowing the exact type of band-pass filter used for zero-centering a signal prior to half-cycle analysis is required for replicating the analysis.

3.1.5 |. Summary: Reporting spectral analyses

In summary, it is recommended that studies using frequency domain analyses provide an explicit conceptualization of the spectral phenomenon of interest (see Section 2.1), along with a rationale for how the dependent variable was measured. In addition, the duration of the data segment that was transformed into the frequency domain should be given, accompanied by the frequency resolution of the spectrum and details regarding taper windows or other ways in which edge artifacts were addressed. The way in which data epochs were combined within and across recording segments (e.g., through overlapping windows), and how the resulting spectrum was normalized, should be detailed as well. Finally, it is strongly recommended that figures be included showing the spectral shape at representative sensor locations, instead of showing only reduced data such as bar graphs or scatter plots of mean band power (see Section 4.2 for recommendations about data figures).

3.2 |. Time-frequency analysis

Several methods are available to investigators for analyzing the event-related changes in oscillatory activity as they evolve over a given period. Because most time-frequency analyses are extensions of the frequency domain approaches discussed above, the same reporting guidelines apply regarding describing the nature of the input data, the exact steps taken by the algorithm used, and any transformation/normalization steps performed. In the following we discuss additional aspects of time-frequency domain analysis for widely used methods.

3.2.1 |. Reporting inputs of time-frequency analysis

The temporal duration of the segments entering the time-frequency analysis is a main determinant of the frequency information contained in the resulting representations and should therefore be reported. In addition, the length and shape of any window functions, and how they were applied, will affect the interpretability of time segments at the beginning and end of the temporal segment. Notably, the duration and temporal position of a segment used as a pre-stimulus (baseline) period can introduce data from the post-stimulus interval through smearing in the time domain. Baselining is therefore discussed in detail in Section 3.2.6 below, along with guidelines for reporting the time and frequency resolution at the frequencies of interest (see Section 3.2.4 below).

It is crucial that authors communicate the processing stage at which the time-frequency analysis is applied: Applying time-frequency analysis to single trials, followed by hypothesis testing or additional averaging, emphasizes different aspects of the oscillatory activity (e.g., spontaneous or induced in Galambos’ taxonomy, above) compared to applying the time-frequency analysis after trial averaging (emphasizing “evoked” oscillations in Galambos’ taxonomy).

Some published work in the field subtracts the averaged potential (i.e., the ERP) from each single trial prior to time-frequency analysis on single trials, aiming to emphasize oscillations that are not time- and phase-locked to the anchoring event. If this technique is used, replication depends on this step being prominently mentioned in the manuscript and on the averaged potential being shown in the time and frequency domains. Subtracting cross-trial average waveforms from single-trial waveforms is not generally recommended, because it assumes additive, linear relations between single trials and the average, which may not hold (e.g., Moratti et al., 2007). As such, subtraction techniques may introduce spurious power, for example by reflecting the variable latency of time-locked potentials across single trials (Li et al., 2009; Xu et al., 2009). This is particularly problematic in cases where the evoked response is driven mainly by phase locking rather than by changes in signal amplitude. These problems may be addressed by quantitatively assessing the amount of phase similarity across trials. Available techniques (see Section 3.3) allow researchers to quantify the amount of phase locking across trials, rather than assuming linearity in the interaction of induced and evoked activity.

3.2.2 |. Time-frequency methods based on the Fourier transform: Spectrogram, moving DFTs, complex demodulation, and multitapers

One obvious means of measuring changes in oscillatory activity over time is to apply any of the spectral domain methods described above to shifted time segments of data. Versions of this approach are commonly used with Fourier spectra and are referred to as spectrograms, or moving-window DFT/FFT analyses. For example, researchers may calculate a DFT for a window comprising the first 400 ms of the analytic segment, and then shift this window by one or more sample points until it reaches the end of the analytic segment. When applying this approach, it is recommended to report the step size and window length, along with any within-window averaging done by algorithms such as the Welch periodogram method. Paralleling the recommendations for Fourier spectra discussed above, time domain data are typically multiplied by a taper window function prior to the DFT, and reporting the type of the taper window function used along with its temporal properties is crucial for replication. This is particularly true for multitaper analysis, in which multiple orthogonal window functions are applied prior to the moving-window DFT to obtain several quasi-independent spectral estimates, and the resulting time-varying spectra are then combined to optimize the trade-off between resolution in the time and frequency domains. If multitapers are used, it is recommended that authors report the number of tapering windows used, the frequency smoothing they imply, and the algorithm used for generating their shapes (e.g., Slepian sequences).
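
As an illustration, the following sketch computes a moving-window (Hann-tapered) spectrogram; the 400-ms window and single-sample step mirror the example above and are not prescriptive:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 500.0
rng = np.random.default_rng(0)
x = rng.standard_normal(int(10 * fs))    # placeholder time series

freqs, times, Sxx = spectrogram(
    x,
    fs=fs,
    window="hann",
    nperseg=int(0.4 * fs),               # 400-ms analysis window
    noverlap=int(0.4 * fs) - 1,          # shift the window by one sample
)
```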

Complex demodulation is a technique in which sine and cosine functions tuned to a frequency of interest are multiplied by the data in the time domain, followed by low-pass filtering to isolate the envelope of the time-varying power at the frequency of interest. This process may be repeated at different frequencies of interest, resulting in a time-by-frequency representation. It is recommended that usage of complex demodulation be accompanied by a report of the frequencies examined and a detailed description of the low-pass filter employed, including filter type, filter order, and how the cutoff frequency was defined (e.g., as the 3 dB power or amplitude point).
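
The sketch below illustrates complex demodulation at a single frequency of interest; the fourth-order Butterworth low-pass and its 2-Hz cutoff are illustrative choices of exactly the kind the recommendation above asks authors to report:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, f0 = 500.0, 10.0                     # sampling rate, frequency of interest
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * f0 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.2 * t))

demod = x * np.exp(-2j * np.pi * f0 * t)      # shift the f0 component to 0 Hz
b, a = butter(4, 2.0 / (fs / 2))              # low-pass, 2-Hz (-3 dB) cutoff
low = filtfilt(b, a, demod.real) + 1j * filtfilt(b, a, demod.imag)
envelope = 2 * np.abs(low)                    # time-varying amplitude at f0
```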

3.2.3 |. Time-frequency methods based on time domain filtering

If a specific frequency of interest is known a priori, authors may opt to use time domain filtering, in combination with other techniques, to isolate the time-varying power at a given frequency. One widely used group of methods using this approach are the Filter-Hilbert methods. These methods are based on the idea that oscillatory activity at a given frequency can be quantified by a combination of band-pass filtering and subsequent estimation of the instantaneous (moment-by-moment) amplitude and phase by means of the Hilbert transform, which produces a phase-shifted (quadrature) version of the empirical signal. Together, the empirical signal (real part) and its phase-shifted version (imaginary part) form the complex-valued analytic signal. The modulus of the analytic signal (the square root of the sum of the squares of the two parts) yields an estimate of time-varying amplitude at the frequency of interest; the arctangent of the ratio of the imaginary over the real part yields the time-varying phase. If the Filter-Hilbert, or a similar approach, is used, it is recommended that the implementation (software and version number) of the Hilbert transform and the details of the band-pass filtering process, including filter types used, filter order, and how the cutoff frequencies are defined (i.e., the half-power or half-amplitude point), be reported. Because the Filter-Hilbert method is based on estimating time-varying phase, it is critical for correct application that the filter be narrow-band, focusing on one frequency. Broadband phase is mathematically undefined and empirically meaningless, and Hilbert transforms applied to broadband data yield meaningless indices. Thus, it is strongly recommended that the description of the filter allow readers to assess the extent to which the resulting time domain data were narrow-band as opposed to broadband in nature.
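
A minimal Filter-Hilbert sketch follows; the fourth-order Butterworth band-pass with 8- and 12-Hz half-power cutoffs is an illustrative narrow-band filter, not a recommended default:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="bandpass")
xf = filtfilt(b, a, x)                   # narrow-band (8-12 Hz) signal

analytic = hilbert(xf)                   # complex-valued analytic signal
amp = np.abs(analytic)                   # time-varying amplitude envelope
phase = np.angle(analytic)               # instantaneous phase in radians
```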

3.2.4 |. Time-frequency methods based on wavelets

Wavelet analysis is a widely used method for estimating the time-varying oscillatory properties of a neural time series. So-called wavelet families are groups of finite time series that are tuned to different frequencies and are convolved with the empirical signal. Wavelet analysis has been widely used because of its favorable properties: Differing from standard spectrograms, which are defined by fixed temporal smoothing across frequencies and fixed frequency smearing across time points, wavelets have variable time and frequency smoothing, in which lower frequencies are more precisely represented in the frequency domain, whereas higher frequencies are more precisely represented in the time domain. Readers interested in the application of wavelet analysis may want to peruse the seminal review by Tallon-Baudry and Bertrand (1999) or read recent textbooks covering this topic (Cohen, 2014). Morlet wavelets are the most commonly used wavelets in neuroscience. In the time domain, they represent segments of sine and cosine functions at the frequencies of interest, multiplied by a Gaussian envelope. The width of the Gaussian envelope determines the trade-off between temporal smoothing and frequency smoothing and is in turn under the control of the Morlet parameter m, which is typically between 5 and 10 and often equated with the number of cycles present in each wavelet of the family. Smoothing (or smearing) is a consequence of the Fourier uncertainty principle and represents an uncertainty in the temporal or frequency position of the signal. Smoothing that introduces artifacts or spurious effects from outside the time-frequency range of interest undermines validity. We discuss strategies for managing smoothing later in this section.

Some implementations (e.g., wavelet analysis in EEGLAB; Delorme & Makeig, 2004, https://sccn.ucsd.edu/eeglab/index.php) increase the amount of smoothing across the range of frequencies (e.g., higher frequencies experience greater smoothing), and this should be noted in the manuscript. Precise reporting of the settings used in defining a wavelet family is crucial for exact replication and reproduction of empirical findings. It is particularly helpful for readers if authors report the maximal temporal and frequency smoothing associated with a given wavelet family. For example, reporting the temporal smoothing at the highest and lowest frequencies of interest, and the frequency smoothing at the lowest and highest frequencies of interest, enables readers to interpret differences in latency or differences in frequency.

In an example experiment, researchers decide to conduct a wavelet analysis. They report the following to describe the wavelet family chosen:

A family of complex Morlet wavelets (Bertrand et al., 1994) was used to compute time-by-frequency representations of each artifact-free trial. A Morlet constant m = 7 was chosen because it ensured an acceptable trade-off between time and frequency smoothing in the frequency range between 8 and 120 Hz (Tallon-Baudry & Bertrand, 1999). The Morlet constant m defines the ratio of each analysis frequency f0 and the standard deviation σf of the wavelet in the frequency domain, which corresponds to the smoothing in the frequency domain.

σf = f0/m. (1)

The corresponding smearing in the time domain is given as

σt = 1/(2π·σf). (2)

Thus, given a segment length of 1600 ms (600 ms baseline and 1000 ms post-onset), wavelets were spaced at the native frequency resolution of 1/1.6 = 0.625 Hz. Wavelets with a center frequency between 8.75 Hz and 12.5 Hz were used to quantify alpha-band changes. Because the width of wavelets in the frequency and time domains changes as a function of m (7 here), frequency smoothing (σf) was 1.25 Hz (8.75 Hz/7, Equation 1) for the wavelet centered at 8.75 Hz and 1.79 Hz (12.5 Hz/7, Equation 1) for the wavelet centered at 12.5 Hz. Applying Equation (2), temporal smoothing (σt) at these frequencies was 1/(2π·1.25) = 0.127 s and 1/(2π·1.79) = 0.089 s, respectively. In the high-frequency band of interest, σt at 30 Hz was ….

Thus, in the case of Morlet wavelets, the standard deviation (smoothing) in both the time (σt) and frequency (σf) domains can be obtained using Equations (1) and (2). Smoothing changes with the frequency in both the time and frequency domain, because of Equation (1).

For methods other than wavelet analysis, different ways exist to identify the temporal and frequency smoothing at frequencies of interest. If unsure how to find these metrics for their specific method, researchers may empirically measure the smoothing by applying their algorithm to a pulse signal. A pulse signal is a vector of zeros having the duration of the empirical data to be analyzed, with a singular unit value (i.e., the number one) at its center. The full width at half maximum (FWHM) of the pulse in the filtered data is a metric corresponding to σt and σf and is often used to measure uncertainty or spread in time series analyses. An extensive tutorial and discussion of FWHM in time-frequency analysis with Morlet wavelets is given in Cohen (2019). Importantly, knowing and reporting the temporal and frequency smoothing is also crucial for any baselining procedures, discussed below.
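
The sketch below applies this logic to a Morlet wavelet built from Equations (1) and (2); the m = 7 constant, the 10-Hz center frequency, and the pulse duration are illustrative assumptions:

```python
import numpy as np

fs, f0, m = 500.0, 10.0, 7.0
sigma_f = f0 / m                          # Equation (1)
sigma_t = 1 / (2 * np.pi * sigma_f)       # Equation (2)

tw = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / fs)
wavelet = np.exp(2j * np.pi * f0 * tw) * np.exp(-(tw ** 2) / (2 * sigma_t ** 2))

pulse = np.zeros(int(4 * fs))
pulse[pulse.size // 2] = 1.0              # unit value at the center

env = np.abs(np.convolve(pulse, wavelet, mode="same"))
above = np.flatnonzero(env >= env.max() / 2)
fwhm = (above[-1] - above[0]) / fs        # temporal FWHM in seconds
# for a Gaussian envelope, FWHM is approximately 2.355 * sigma_t
```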

3.2.5 |. Time-frequency methods based on Cohen’s class reduced interference distributions

The reduced interference distribution (RID) from Cohen’s class of time-frequency transforms offers a kernel-based approach to computing time-frequency transforms. Here, a kernel is a two-dimensional weighting function applied during the transform to suppress the cross-term interference inherent in this class of distributions. For a description of the RID, see Cohen (1995), and for additional discussion of differences with wavelets, see related EEG/ERP applications (Aviyente et al., 2011; Bernat et al., 2005). Perhaps the most relevant features of RIDs are represented in the nonlinear transforms they produce. RID time-frequency transforms have uniform time-frequency resolution, with accurate instantaneous power, and include local and global features. This means that they minimize the smoothing in time at low frequencies and the smoothing in frequency at high frequencies that is observed with wavelets (see Section 3.2.4). This is most relevant for event-related applications, where the high time resolution of the EEG/MEG is used to infer the timescale of brain signals. Another key property of RIDs is the preservation of power in the time-frequency representation of the signal (generally referred to as satisfying the marginals, i.e., the sums across the time-frequency rows or columns). There is currently no evidence demonstrating that satisfying the marginals is relevant for EEG/ERP work, although when comparing time domain and time-frequency domain results it is helpful to have the same accurate preservation of the signal power across domains. As stated above, RIDs are nonlinear relative to wavelets, and thus can be more difficult to interpret (e.g., signal reconstruction is more complicated). Finally, the RID characterizes global features (e.g., harmonics), whereas wavelets index only local activity. Several other approaches exist that leverage kernels describing time-frequency distributions.

3.2.6 |. Baseline adjustment

The output of most time-frequency analyses consists of high-dimensional matrices of complex numbers, often containing values for sensors, time points, and frequencies. In addition, different indices may be computed. Time-varying spectral power is the most frequently used index, representing the magnitude of the oscillatory activity at a given sensor, time, and frequency. Power at one sensor or in one region of interest may be illustrated as a two-dimensional map, which shows the time-varying power over time relative to the event of interest at different frequencies. This allows comparison of spectral power before and after an event of interest, as well as at different temporal distances from this event. However, as discussed in Section 2.1, the interpretation of changes from baseline rests on assumptions regarding the underlying processes contributing to the spectrum. Furthermore, because power is a measure of variance, it is often distributed in a skewed manner across observations (e.g., trials, participants), which may complicate statistical analyses. To address this challenge, investigators may use transformations, such as a log transform, with the purpose of making the observed distributions more Gaussian. However, transformations such as the log transform imply a multiplicative model of the time-by-frequency matrix, as discussed in Section 2.1. Paralleling the recommendations for spectral analysis, the model and assumptions underlying the composition of the time-frequency plane, and their implications for data reduction, should be described in the manuscript.

Researchers often perform a baselining procedure, in which the time-varying power is expressed as change in power relative to a suitable pre-stimulus time segment. Selection of the baseline segment should take into account the Fourier uncertainty principle mentioned above: Although a spectral estimate may be available for each sample point, the data in a time-frequency representation contain information that is smeared/smoothed in both the temporal and frequency dimensions. Thus, researchers may wish to consider the following requirements for a suitable baseline segment: (1) The baseline segment should not be contaminated by edge artifacts and should not include time segments attenuated by taper windows; (2) it should be of sufficient length to render a robust estimate of the baseline level; (3) it should be of sufficient distance from stimulus onset to exclude activity evoked by the stimulus. Such a segment should be temporally removed from the ramp of the taper window, or from the onset of the trial if no window was used, by at least one standard deviation (σt) at the lowest frequency considered in the analysis. In the same vein, the end of the baseline segment should be removed from event onset by at least one standard deviation. Finally, it is common to use a baseline duration that accommodates several cycles of the lowest frequency of interest, ensuring that the baseline segment contains a robust estimate of the oscillatory process under consideration. Following these suggestions prevents the baseline from being contaminated by oscillatory activity following the stimulus, or by edge and window artifacts from the beginning of the epoch. When contrasting conditions, it is important to ensure that no power differences exist during the baseline interval that would confound post-stimulus differences when baseline normalization is performed. In general, when performing statistical contrasts between conditions, baseline normalization may often be unnecessary, and authors may wish to cross-validate analyses with and without baseline adjustment, as well as examine any baseline differences between conditions.
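
For illustration, decibel baseline normalization of a frequency-by-time power matrix might look as follows; the matrix dimensions, event timing, and the -700 to -300 ms baseline window are illustrative assumptions chosen to stay clear of edges and event onset:

```python
import numpy as np

fs = 500.0
n_freqs, n_times = 30, int(2 * fs)            # 2-s epochs, event at 1 s
rng = np.random.default_rng(0)
tfr = rng.random((n_freqs, n_times)) + 1.0    # placeholder power values

times = np.arange(n_times) / fs - 1.0         # seconds relative to event onset
base = (times >= -0.7) & (times <= -0.3)      # baseline window (assumed)

baseline = tfr[:, base].mean(axis=1, keepdims=True)
tfr_db = 10 * np.log10(tfr / baseline)        # change from baseline in dB
```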

3.2.7 |. Summary: Reporting on time-frequency analysis

All time-frequency analyses are strongly affected by the nature of the input data: It is thus recommended that authors detail the duration of the time range entering the analysis, including the duration of time ranges before and after any anchoring events. Paralleling requirements for ERP studies, the number of segments entering the analyses in each experimental condition and/or group should be reported, as it affects the signal-to-noise ratio of the resulting time-frequency representations. Furthermore, describing the implementation of the algorithm in sufficient detail to allow reproduction, even in other software, is recommended. The temporal and frequency smoothing inherent in time-frequency analyses should be reported in detail sufficient for readers to interpret the extent to which different events and phenomena in the time-frequency plane are to be considered overlapping or independent. Smearing information at the frequencies of interest also allows readers to understand the authors’ choice of any baseline segments used in the published work. Finally, if a nonlinear transformation or baseline removal was applied prior to statistical analysis, including a rationale for how these choices were made is recommended.

3.3 |. Phase-based analyses

Time domain averaging is a staple of electrophysiology, in which segments from repeated trials are time-locked to the event of interest and averaged together to minimize what is considered noise (e.g., processes that do not have similar time courses in each trial). This procedure enables calculating and visualizing waveforms that represent the mean response, or “evoked” response in Galambos’ taxonomy. Going beyond this approach, which is most prominently used in event-related potential (ERP) research, researchers may use frequency domain or time-frequency domain approaches to quantify the temporal similarity of a given oscillation across repeated trials. Methods toward this end are often referred to as phase-locking, phase-similarity, or phase clustering analyses (e.g., Lachaux et al., 1999). For example, researchers may wish to examine the amount of phase-locking of μ-oscillations as participants prepare for self-paced manual responses, or as participants listen to syllables varying in duration or intensity. For reviews of phase-similarity measures, readers are directed to extant tutorial and review papers (Aviyente et al., 2011; Lachaux et al., 1999; Roach & Mathalon, 2008; Tallon-Baudry & Bertrand, 1999).

3.3.1 |. Reporting on inputs of phase-based analyses

Because phase is typically extracted by one of the algorithms for extracting power as described above, the description of the inputs will likely contain the information discussed in Sections 3.1 and 3.2. Several authors have found that phase analyses are particularly sensitive to filtering, in that filtering at high filter orders and/or in narrow frequency bands may result in spuriously high phase similarity across trials, participants, or sensors (Kolev & Yordanova, 1997; Kramer et al., 2008). Thus, a detailed description of filters is particularly relevant when researchers are interested in phase-based analyses. In a similar vein, there is an ongoing discussion regarding the benefits versus disadvantages of multivariate processing steps, such as independent component analysis, often used for artifact removal (e.g., Castellanos & Makarov, 2006). To the extent that these methods remove a linear combination of channel readings from the data, they may alter the observed phase. However, analytical and empirical studies suggest that these changes do not affect the phase similarity or phase locking across trials or channels. To heighten the robustness of reported results, authors may want to compare results with and without advanced artifact removal techniques. There appears to be a need for systematic analyses of the effects of processing pipelines on observed results, and such work is increasingly seen in the literature.

The mathematical nature of phase as a metric of location in a cycle raises another pertinent issue regarding the inputs of phase-based analyses, including analyses in the spatial domain as discussed below: Phase is not defined for broadband signals. Although it is possible to instruct an analytical tool, such as a MATLAB toolbox or Python library, to determine the phase for a broadband signal, the resulting values are meaningless. Phase is a circular index (e.g., degrees) of location within the cycle (e.g., at the peak going down; above zero-crossing going up, etc.). If multiple frequencies exist concurrently, then multiple cycles with conflicting locations can be found at any given time point, rendering the notion of phase meaningless. Thus, when reporting on the input of phase-based analyses, authors may want to specify the frequency specificity of the algorithm itself (e.g., convolution with a family of wavelets) or the filters used (e.g., the Filter-Hilbert method).

Phase-based analyses are highly sensitive to the number of trials entering the analysis. For example, the phase-locking value, often also referred to as the phase-clustering or phase-synchrony index, computed as 1 minus the circular variance of phase values over trials (i.e., the length of the mean resultant phase vector), tends to decrease with increasing numbers of trials. This makes it difficult to compare experimental conditions, groups, or participants when trial counts differ. As discussed below (Section 3.4), several algorithms are available for addressing this issue (e.g., Stam et al., 2007), but in general it is considered good practice to compare phase-based indices between conditions after ensuring that the trial count for each condition is matched, for example by randomly dropping trials in the condition with the greater trial count, typically separately for each participant.
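
A minimal sketch of the phase-locking value at a single sensor and frequency follows; the trial-by-time phase matrix is assumed to come from one of the narrow-band methods described above (here it is filled with random phases, for which the PLV is expected to be near zero):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 60, 500
phases = rng.uniform(-np.pi, np.pi, (n_trials, n_times))  # placeholder phases

# modulus of the mean of unit-length phase vectors across trials,
# i.e., 1 minus the circular variance; one value per time point
plv = np.abs(np.mean(np.exp(1j * phases), axis=0))
```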

3.3.2 |. Summary: Reporting on phase-based analyses

The guidelines discussed in Sections 3.1 and 3.2 largely apply to phase-based analyses. Reporting the number of trials entering the analysis per condition and participant is particularly relevant for phase-based analyses. Authors are also encouraged to ensure that phase is not estimated from broadband signals (e.g., the phase of a 4–8 Hz band-passed signal, which still contains a mixture of frequencies, is not well defined). Because phase-clustering indices tend to be bounded between 0 and 1, authors may wish to take this non-normality into account when conducting statistical analyses (Maris & Oostenveld, 2007; Tallon et al., 1995), because many implementations of the general linear model assume normality.

3.4 |. Analyses of spatial dependence (“connectivity analysis”)

Although temporal sensitivity and specificity are often seen as the primary strength of neural time series analysis, these analyses may also provide unique ways to test hypotheses regarding interdependencies across space, specifically between different sensor or source locations. From the perspective of understanding brain function, these dependencies may provide a means to quantify large-scale interactions or connectivity between brain regions. Since most researchers reading these guidelines are likely working with noninvasive methods, we will first address the volume conduction issue. Then, we will provide a brief overview of commonly used methods and close with some comments on reporting.

3.4.1 |. The volume conduction problem

Volume conduction, or “field spread”, describes the phenomenon that neural activity in one area is captured not only by an electrode in its vicinity, but also by electrodes at more distant locations. This leads to two relevant problems when studying neural interactions, especially with EEG or MEG: (1) Signals from adjacent sensors will be highly correlated without providing evidence for actual interactions between separate phenomena; (2) the signal at one sensor is a mixture of several underlying sources that are concurrently active. For these two main reasons, interpreting cross-area interactions from data recorded at the scalp is highly problematic. Therefore, it is advised to apply some form of spatial filtering to “unmix” the signal and, when using approaches for source reconstruction, ideally to obtain anatomically interpretable signals. If a priori regions of interest are available, researchers may decide to perform source modeling using coordinates determined from a separate localizer run. Although this step will mitigate the volume conduction issue somewhat, depending on the inverse modeling approach used (Schoffelen & Gross, 2009), the issue will not be entirely eliminated, which has to be considered when interpreting and reporting the results.

3.4.2 |. Common oscillation-related connectivity measures

Brain oscillations have been proposed to play an important role in enabling inter-area communication (Singer, 1999). For example, optimal alignment of oscillatory phases, reflecting different excitability states, has been proposed to enable or block communication between the respective neural ensembles (Fries, 2015). In this regard, spectral coherence (Nunez et al., 1997) and phase synchrony (Lachaux et al., 1999) are the most common measures for quantifying the consistency of phase differences between two recording sites. Another popular approach is to quantify correlations between the amplitude envelopes of band-pass filtered signals (e.g., Hipp et al., 2011), even though it is less clear how these slower processes support interareal communication compared to the aforementioned phase-based approaches. Amplitude correlation as well as coherence/phase-synchrony measures are heavily influenced by volume conduction. Variations of methods mitigating this issue have been proposed (e.g., orthogonalization of envelope time series, imaginary coherence, weighted phase-lag index; Nolte et al., 2004; Stam et al., 2007). Together, these measures capture aspects of linear dependencies between two signals, without providing information about their directionality. This may be insufficient in some cases, and methods that operationalize causality in terms of temporal precedence may be desirable. Granger causality estimated directly from Fourier-transformed data (Dhamala et al., 2008) is gaining popularity because it does not require the user to determine a model order, as is required in autoregressive modeling (see Section “Parametric spectral analysis”).
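
As an illustration, magnitude-squared coherence and the imaginary part of coherency (which discounts zero-lag, volume-conducted contributions; Nolte et al., 2004) can be estimated from Welch cross- and auto-spectra; all signal and segment parameters below are illustrative:

```python
import numpy as np
from scipy.signal import csd, welch

fs = 500.0
rng = np.random.default_rng(0)
x = rng.standard_normal(int(20 * fs))
y = np.roll(x, 10) + rng.standard_normal(x.size)   # lagged, noisy copy of x

nseg = int(2 * fs)
freqs, Sxy = csd(x, y, fs=fs, nperseg=nseg)        # complex cross-spectrum
_, Sxx = welch(x, fs=fs, nperseg=nseg)
_, Syy = welch(y, fs=fs, nperseg=nseg)

coherency = Sxy / np.sqrt(Sxx * Syy)
msc = np.abs(coherency) ** 2               # magnitude-squared coherence
icoh = np.imag(coherency)                  # imaginary part of coherency
```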

Whereas linear relations are in general more easily understood and modeled, some interdependencies may be nonlinear. Undirected (e.g., mutual information) and directed (e.g., transfer entropy) measures have been applied recently to capture interdependencies, within and across frequency bands, in a generalized manner (e.g., Giordano et al., 2017). Finally, all aforementioned measures are data-driven (i.e., they do not involve an explicit model of how the data are generated). When a generative neuronal model exists, along with a clear and circumscribed region of interest, dynamic causal modeling (DCM) also offers an interesting option to quantify interactions, along with other parameters of the neuronal model (Friston et al., 2012).

3.4.3 |. Reporting on outputs of interareal dependence analysis

As the preceding sections illustrate, there is no single best measure to quantify “connectivity” based on EEG/MEG signals. In practice, the decision about which measures to report will most commonly depend on the time scale of the putative interaction (e.g., slow: envelope correlations; fast: phase-based measures), and on whether the directionality of the interaction is a relevant piece of information with regard to the research question. A crucial need is to explicitly describe how the volume conduction issue is addressed. Contrasting conditions does mitigate this issue. However, especially for measures such as coherence or phase synchrony, which are appealing due to their presumed mechanistic relevance, problems remain when condition differences in power are present in an overlapping frequency range. Stratifying trials within conditions with respect to power, or resorting to measures less affected by volume conduction, would provide alternative ways of addressing this issue. Independent of the choice of connectivity metric, applying approaches to unmix the signals is also helpful. Researchers should precisely describe the source-modeling approach used (e.g., sparse or distributed set of sources). In addition to mitigating volume conduction issues, these approaches make the results more interpretable by referring to an anatomical region rather than to an arbitrary electrode or sensor. When clear regions of interest exist, results may be depicted using surface or volume plots in which connectivity strength (or differences between conditions) is shown with reference to the seed region. Such depictions are not possible when full (i.e., all-to-all) connectivity effects need to be visualized, unless graph-theoretical summary measures are applied (Bullmore & Sporns, 2009). For this purpose, circular connectograms or Sankey plots (Schmidt, 2008) may be an option. Although all-to-all connectivity analyses may sound appealing, they dramatically increase the multiple-comparison issue, even more so when time and frequency are included as additional dimensions. Therefore, having at least one clear region of interest facilitates not only computation, but also the reporting of effects.

3.4.4 |. Summary: Reporting on spatial dependence analyses

In addition to reporting on the methods for generating the spectral representation used for assessing the dependence, the metric and algorithm used (e.g., inter-site phase locking, Granger causality, DCM) should be reported with references that facilitate replication; citing the software version and manufacturer alone does not suffice in this regard. The same applies when graph theory is used to describe connectivity matrices. Likewise, the method for addressing volume conduction or spatial smearing should be detailed and the algorithm provided. Finally, it should be made clear to what extent the spatial nodes examined were hypothesized a priori or discovered ad hoc, because the multiple comparison problem tends to be particularly severe in studies with dense sensor arrays and full site-to-site connectivity.

3.5 |. Testing hypotheses regarding interactions between oscillations at different frequencies and interactions between oscillations and behavior (coupling analyses)

In recent decades, driven by computational and animal-model work, interest has grown in interactions between brain oscillations at different frequencies. Researchers have developed methods to characterize different types of cross-frequency interactions, which are often categorized by what is being measured for each of the oscillations of interest. Furthermore, similar techniques are widely used to assess coupling between neural and autonomic (Mueller et al., 2010) or neural and behavioral data (Vanrullen & Dubois, 2011). In this section, we provide recommendations for reporting on the usage of these approaches.

3.5.1 |. Principles of cross-frequency coupling analyses

Many studies interested in cross-frequency interactions use an approach akin to cross-tabulation in statistical dependence analysis. For example, phase-to-amplitude coupling methods quantify the extent to which the phase at one frequency is systematically related to the amplitude at another frequency (Canolty & Knight, 2010; Kramer et al., 2008; Voytek et al., 2013). In a similar vein, phase-to-phase and amplitude-to-amplitude coupling analyses aim to quantify statistical dependencies between phases or amplitudes measured at different frequencies. Specifically, with respect to phase-to-amplitude measurement, various techniques follow the same logic: Take a time series and transform it into a spectrogram, divide the phase values of a lower carrier frequency into bins, for each bin find all the time points with that phase, and measure the power of higher frequencies during these time points. Next, observe the distribution of power in high frequencies as a function of low-frequency phase and compare this to a null distribution with parametric (χ2) or nonparametric tests. Recent reviews of various techniques recommend the Modulation Index (MI) as a robust estimate for characterizing the coupling between phase and power (Tort et al., 2010), but many alternative algorithms have been used (Hülsemann et al., 2019). These algorithms are often applied across a range of higher frequencies in order to identify the frequencies whose amplitude is most strongly coupled to the low-frequency carrier. Analyses that focus on coupling between power at different frequencies, or between phase angles at different frequencies, tend to follow the same principle of identifying statistical dependence, often using cross-histograms.
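
A compact sketch of the Modulation Index computation follows; the 18 phase bins and the simulated coupled signals are illustrative assumptions, and empty phase bins (possible with few samples) are not handled here:

```python
import numpy as np

def modulation_index(phase, amp, n_bins=18):
    """Tort et al. (2010) MI from instantaneous phase and amplitude vectors."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    mean_amp = np.array([amp[idx == k].mean() for k in range(n_bins)])
    p = mean_amp / mean_amp.sum()          # amplitude distribution over bins
    kl = np.sum(p * np.log(p * n_bins))    # KL divergence from uniformity
    return kl / np.log(n_bins)             # normalized to the range [0, 1]

rng = np.random.default_rng(0)
ph = rng.uniform(-np.pi, np.pi, 10_000)                  # low-frequency phase
am = 1 + 0.5 * np.cos(ph) + 0.1 * rng.standard_normal(ph.size)  # coupled amp
print(f"MI: {modulation_index(ph, am):.3f}")
```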

3.5.2 |. Principles of brain-behavior coupling analyses

Rooted in the notion that brain oscillations represent cycles of excitability of neural populations, there is a long history of research into the relationship between the phase of ongoing oscillatory activity and behavior or other physiological phenomena (Klimesch, 2018). The relationship between phase and behavior can be tested with a variety of methods, many of which were recently compared in a systematic review (Wolpert & Tallon-Baudry, 2021). These methods generally follow the same logic as those described for cross-frequency coupling: The oscillatory phase or amplitude is divided into bins, and the distribution of the behavioral variable across these neural bins is analyzed.

3.5.3 |. Reporting on the implementation of coupling analyses

When reporting on cross-frequency coupling analyses, we recommend a clear indication of the algorithm used, including all preprocessing steps necessary to separate the frequency bands of interest. As discussed in Section 3.3, estimation of phase requires narrow-band signals, highlighting the benefit of detailing the underlying frequency or time-frequency analysis. If binning is used as described above, authors should indicate how many bins were used. It is also recommended to report the number of trials entering each condition and to ensure equal numbers of trials across conditions; if trial counts differ, they can be equated through resampling of trials.

3.5.4 |. Reporting the output of coupling analyses

Many different statistical indices of coupling are used in the field. If parametric statistical tests are used, it is recommended that the variables be approximately normally distributed or that normalizing transformations be applied. Care should also be taken to ensure that other assumptions of the statistical model are met. Nonparametric tests are also widely used, including approaches using permutation, randomization, and resampling techniques (Groppe et al., 2011; Maris & Oostenveld, 2007). Authors should report the specific algorithm used, provide a link to the code used, and indicate what was randomized/permuted, if applicable. Finally, we recommend that authors clearly indicate whether coupling analyses were done within or across subjects and show the whole range of the distribution or histogram for each of the variables entering the analysis.

4 |. CONSIDERATIONS FOR STATISTICAL ANALYSIS AND DATA FIGURES

The previous sections highlighted the abundance of dependent variables and the richness of information that may be obtained in studies of oscillatory activity. The number of potential variables (e.g., metrics of power, phase, phase-locking, inter-area, and inter-frequency interactions) as well as their high-dimensional nature (i.e., time, location, frequency) pose specific demands on statistical procedures. In the following sections, we focus on statistical approaches that are particularly relevant when dealing with high-dimensional data, and on methods for addressing other challenges specific to the measures of oscillatory activity discussed above. Readers with a broader interest in the foundations of measurement and statistical analysis of neural data are directed to available guidelines and review papers (Keil et al., 2014; Luck, 2005; Luck & Gaspelin, 2017; Maris, 2012).

4.1 |. Statistical analysis with spectral outcome variables

Almost always, the main interest in electrophysiological studies pertains to the difference between two or more experimental conditions and/or groups. A reliable difference between conditions is therefore a necessary, but not sufficient, requirement for an informative empirical result. In practice, the significance of this difference is almost always evaluated by means of a statistical test. Theory and application of statistical tests are well established, but only for the case of univariate/scalar observations (e.g., power in a given channel and frequency band). Care must be taken, however, because power values are non-normally distributed; normalizing transformations or nonparametric statistical procedures are therefore preferred. Between-condition comparisons of whole spatio-spectral matrices (multivariate observations) require specialized statistical methods, two of which are discussed in the following (see Sections 4.1.1 and 4.1.2). Both methods effectively deal with the so-called multiple comparison problem: the inflation of the Type I error (false alarm) rate that may occur if univariate statistical tests are applied to multivariate observations.
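As a minimal illustration of these two options, the following sketch compares log-transformed power values with a paired t test and, alternatively, the raw values with a Wilcoxon signed-rank test. The right-skewed data are simulated, and the specific transformation should be justified for the data at hand.

```python
import numpy as np
from scipy import stats

# Simulated, right-skewed alpha power for 24 participants in two conditions
rng = np.random.default_rng(2)
power_a = rng.lognormal(mean=1.0, sigma=0.5, size=24)
power_b = rng.lognormal(mean=1.2, sigma=0.5, size=24)

# Option 1: normalize by log transform, then apply a parametric paired test
t, p = stats.ttest_rel(np.log(power_a), np.log(power_b))

# Option 2: nonparametric test on the raw values
w, p_w = stats.wilcoxon(power_a, power_b)
print(p, p_w)
```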

4.1.1 |. Methods based on regions of interest

A region of interest (ROI) comprises a set of channel-frequency or area-frequency pairs, or channel/area-time-frequency triplets, at which a between-conditions difference is expected to occur. This ROI must be chosen before the data are known. If the power values (or any other measure) are averaged over these channel-frequency pairs, then the original multivariate problem reduces to a univariate problem, and standard statistical tests (t test, F test) may be applied. There are three ways of determining an ROI: (1) based on published results and/or hypotheses, (2) based on an anatomical atlas in an estimated source space, and (3) based on a localizer (Maris, 2012). In cases where published results and/or a priori hypotheses are used for determining an ROI, preregistration of this ROI-based analysis is recommended. It is possible to use multiple ROIs. In that case, Bonferroni correction (usually too conservative, as it assumes that the tests are uncorrelated) or some other correction method must be used to prevent Type I error inflation resulting from the multiple tests (one per ROI).
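The sketch below illustrates the reduction to a univariate test for a hypothetical a priori ROI, followed by a Bonferroni-adjusted alpha level for the multiple-ROI case; the channel indices and frequency bins are placeholders, not recommendations.

```python
import numpy as np
from scipy import stats

# Hypothetical power array: (n_subjects, n_conditions, n_channels, n_freqs)
rng = np.random.default_rng(3)
power = rng.gamma(2.0, 1.0, size=(20, 2, 64, 40))

# An a priori ROI: a set of posterior channels and alpha-band frequency bins
roi_channels = [28, 29, 30, 31]   # placeholder channel indices
roi_freqs = slice(16, 25)         # placeholder alpha-band bins in this grid

# Averaging over the ROI yields one scalar per subject and condition,
# so the multivariate problem reduces to a standard paired t test
roi_power = power[:, :, roi_channels, roi_freqs].mean(axis=(2, 3))
t, p = stats.ttest_rel(roi_power[:, 0], roi_power[:, 1])

# With multiple ROIs, the per-ROI alpha level must be corrected (Bonferroni here)
n_rois = 3
print(p, p < 0.05 / n_rois)
```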

4.1.2 |. Mass univariate techniques

In its simplest form, the mass univariate technique is a generalization of the ROI-based method, with one ROI per channel-frequency pair. Unfortunately, the typical Bonferroni correction drastically reduces the sensitivity of this approach; its overly conservative nature stems from the fact that many of the statistical comparisons are not independent of each other. To increase sensitivity, methods have been proposed that are based on permutation test statistics that depend on all channel-frequency pairs jointly (Maris & Oostenveld, 2007). These include selecting the maximum (or minimum) test statistic from each permutation (Karniski et al., 1994) and forming a tmax or Fmax distribution that is used in place of the parametric reference distribution (Student's t or F distribution). The most popular of these approaches is the so-called cluster-based test, which starts from the univariate test statistics for all channel-frequency pairs and then combines these test statistics in a way that reflects the spatio-spectral clustering observed with genuine physiological effects. Using the permutation distribution as a reference distribution, these cluster-based tests control the Type I error rate under the null hypothesis of identical probability distributions for the raw spatiotemporal data in the two conditions, and therefore also for the derived spatio-spectral data (Maris & Oostenveld, 2007).
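A minimal sketch of the tmax approach for a one-sample (condition-difference) design follows, using sign-flipping permutations; the array dimensions, embedded effect, and number of permutations are illustrative.

```python
import numpy as np
from scipy import stats

# Condition-difference scores: (n_subjects, n_channels, n_freqs)
rng = np.random.default_rng(4)
diff = rng.standard_normal((20, 32, 30))
diff[:, 10:14, 8:12] += 0.8  # embed an "effect" for illustration

t_obs = stats.ttest_1samp(diff, 0.0, axis=0).statistic

# tmax permutation: flip the sign of each subject's difference map at random
# and record the largest absolute t value over all channel-frequency pairs
n_perm = 1000
t_max = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=diff.shape[0])[:, None, None]
    t_max[i] = np.abs(stats.ttest_1samp(signs * diff, 0.0, axis=0).statistic).max()

# A pair is deemed significant if its |t| exceeds the 95th percentile of the
# tmax distribution, which controls the familywise Type I error rate
threshold = np.percentile(t_max, 95)
print((np.abs(t_obs) > threshold).sum(), "significant channel-frequency pairs")
```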

When reporting results from cluster-based permutation tests, it is important to be aware that the null hypothesis pertains to the whole raw spatiotemporal data matrix. Therefore, it is not permissible to make spatially and/or spectrally specific inferences such as, “There was an effect over area A in frequency band X, but not over area B in frequency band Y.” This point has been made in several publications (Maris, 2012; Maris & Oostenveld, 2007; Piai et al., 2015). For a tutorial and recommendations regarding appropriate language, readers are referred to the helpful discussion by Sassenhagen and Draschkow (2019).

4.1.3 |. Usage of principal component analysis for data reduction

PCA is often applied to spatiotemporal electrophysiological data to identify linear combinations of sensors (i.e., components) that explain the most variance. Decomposition methods related to PCA have also been proposed for the analysis of coherence patterns across electrode locations, which are routinely obtained in the form of cross-spectral density or coherence matrices for a range of frequencies (van der Meij et al., 2015). These methods do not produce components that explain the most variance, but components that explain the data using the most parsimonious three-way tensor decomposition. Although these methods produce physiologically plausible components (see van der Meij et al., 2016), such components do not necessarily correspond to existing physiological sources: In most cases, a large number of plausible arrangements of the variance across coherence patterns can account for the data equally well, so the solutions are not unique.

4.1.4 |. Bayesian statistics and machine learning approaches

In addition to the multivariate and mass univariate methods for traditional null hypothesis testing described above, the field of neuroscience has seen a steady increase in the usage of Bayesian approaches for modeling and statistical testing. Bayesian approaches share the common goal of quantifying the extent to which prior knowledge is updated by new data (van de Schoot et al., 2021). In the context of neural time series analysis, Bayesian approaches have been used for combining information obtained from different measurement modalities (Kook et al., 2021), as well as for hierarchically modeling different sources of variance that contribute to a neural time series (Gorrostieta et al., 2013; Zhang et al., 2016). An increasingly popular Bayesian index is the Bayes factor, which has some similarity in use and interpretation to traditional null hypothesis test statistics (e.g., p values). However, its usage remains a matter of debate in the literature, and researchers are encouraged to weigh potential limitations, such as the dependence on the precise shape of the prior distributions compared by means of the Bayes factor, against potential strengths (Keysers et al., 2020). In hypothesis testing, Bayes factors are frequently used to express the amount of support for a given hypothesis over another (e.g., the null hypothesis vs. the alternative hypothesis). For example, the extent to which differences in EEG signals across experimental conditions are in agreement with one of several a priori probability distributions is readily expressed as a Bayes factor (Kopp et al., 2016). Bayes factors can also be used to transitively compare different models to each other (see Thigpen et al., 2019, for an example). Because Bayes factors do not involve rejecting a null hypothesis based on the probability of the observed data under that hypothesis, they are not as strongly affected by the multiple-comparison problems that must be considered in traditional frequentist null hypothesis testing (e.g., Gelman et al., 2012). As such, they can be used for scalp mapping (e.g., Stegmann et al., 2020) as well as point-wise time series analyses (Antov et al., 2020). A final consideration is that Bayes factors may be used to quantify evidence of absence, that is, support for the null hypothesis, which is not readily accomplished in a null hypothesis testing framework (Keysers et al., 2020).

An extensive discussion of Bayesian statistical techniques is outside the scope of the present report. A general set of guidelines is given in van de Schoot et al. (2021). Authors using Bayesian approaches in electrophysiological work are directed to recent recommendations for reporting and preregistration of ERP studies (Paul et al., 2021), which contain suggested language and information. Notably, replication of Bayesian analyses requires a precise description of the priors along with the models included in the analysis. If Bayes factors are used, it is recommended to include a rationale for the interpretation of different levels of Bayes factors, as well as the exact software implementation used for their estimation.
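For illustration only, the sketch below computes an approximate Bayes factor for a condition difference using the BIC approximation described by Wagenmakers (2007); this is one estimator among many, and dedicated packages implement default-prior Bayes factors that will generally be preferable.

```python
import numpy as np

def bf10_bic(diff):
    """Approximate Bayes factor (alternative over null) for one-sample
    difference scores via the BIC approximation (Wagenmakers, 2007)."""
    n = diff.size
    sse_null = np.sum(diff ** 2)                 # null model: mean fixed at zero
    sse_alt = np.sum((diff - diff.mean()) ** 2)  # alternative: mean estimated
    bic_null = n * np.log(sse_null / n)              # no extra mean parameter
    bic_alt = n * np.log(sse_alt / n) + np.log(n)    # one extra mean parameter
    return np.exp((bic_null - bic_alt) / 2)

rng = np.random.default_rng(5)
print(bf10_bic(rng.standard_normal(30) + 0.5))  # data containing a true effect
print(bf10_bic(rng.standard_normal(30)))        # data without a true effect
```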

Finally, machine learning approaches have been increasingly used, notably in decoding analyses that apply classification algorithms such as logistic regression, discriminant analysis, and support vector machines (e.g., Bae & Luck, 2019). For example, researchers have used these techniques to examine the extent to which time-varying alpha power topographies contain decodable information regarding an observer's visuo-spatial attention focus (Bae & Luck, 2018) or responses to conditioned stimuli (Riels et al., 2022). When using these methods, it is recommended to report the specific algorithm used, including the software implementation, along with the exact method for cross-validation and model evaluation. Decoding (i.e., classification) accuracy should be reported as a proportion, and confusion matrices should be provided where possible. If resampling (e.g., permutation) methods are used to address multiple comparisons, the reporting guidelines in Section 4.1 apply. In a similar vein, inverted encoding models have been increasingly used to examine the tuning of neural variables to specific feature dimensions, such as orientation, location, or facial expression (e.g., Garcia et al., 2013). When using such models, it is recommended to describe how model fit was evaluated and how noise was addressed (Liu et al., 2018). Similarly, when model weights are interpreted and reported, a discussion of how weights were extracted from the model and how noise contributions were addressed should be included (Haufe et al., 2014).
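A minimal decoding sketch along these lines is shown below, using scikit-learn with stratified five-fold cross-validation on simulated alpha-power topographies; the classifier, scaling, and cross-validation scheme are example choices that should be reported in full.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated single-trial alpha-power topographies (n_trials x n_channels),
# labeled by attended hemifield (0 = left, 1 = right)
rng = np.random.default_rng(6)
X = rng.standard_normal((200, 64))
y = rng.integers(0, 2, 200)
X[y == 1, :8] += 0.4  # inject a weak class difference into a few channels

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")

# Report accuracy as a proportion along with the cross-validation scheme
print(f"Mean 5-fold decoding accuracy: {scores.mean():.2f}")
```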

4.2 |. Recommendations for data figures

Many of the analytical strategies, methods, and algorithms discussed above make use of high-dimensional aspects of neurophysiological time series, often reflecting a combination of spatial, temporal, and/or frequency information. The resulting data figures are therefore often high-dimensional (e.g., connectivity matrices, cross-frequency interaction maps) and difficult to present in two-dimensional journal space. Color coding a third dimension and using multi-panel figures are widely accepted ways to address this issue. Because most publication outlets provide options for supplemental online materials, authors may also wish to document complex, high-dimensional data using suitable digital representations, which may include data shown in figures, movies, or code. In a similar vein, sharing code and data through widely accessible portals such as GitHub (https://github.com), the Open Science Framework (https://osf.io), OpenNeuro (https://openneuro.org), or Databrary (https://nyu.databrary.org) enables readers to illustrate the data in ways that are intuitive to them, thus facilitating communication, reproducibility, and replicability.

4.2.1 |. Recommendations for illustrating distributions of the dependent variable

Reduced data, pooled across frequencies, sensors, time points, etc., are often used as dependent variables for hypothesis testing. A general discussion of how to illustrate such low-dimensional data is outside the scope of the present paper. Many journals have encouraged authors to join recent discipline-wide trends toward providing distribution-based figures instead of, or in addition to, bar plots showing measures of central tendency. One goal of this trend is to clearly illustrate inter-participant variability, aiding in the communication of robustness and effect size. Such figure types include scatter plots, so-called violin plots, pirate plots, histograms, and the smoothed distribution plots popular in the context of Bayesian approaches. They are useful for illustrating dependent variables after substantial data reduction and allow readers to assess the consistency of effects, as well as providing a visual impression of effect size (Rousselet et al., 2016). Many widely used statistics packages include methods for producing such distribution-based plots (e.g., Kampstra, 2008). Often, distribution-based figures will be accompanied by data figures illustrating spatial and temporal aspects of the data, discussed next.
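As an example of such a figure, the following matplotlib sketch draws violin plots with individual participant points overlaid; the data are simulated, and the jitter and styling are arbitrary choices.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
alpha_power = [rng.lognormal(1.0, 0.4, 25), rng.lognormal(1.3, 0.4, 25)]

fig, ax = plt.subplots()
ax.violinplot(alpha_power, showmedians=True)      # full distribution per condition
for i, vals in enumerate(alpha_power, start=1):   # overlay individual participants
    jitter = rng.uniform(-0.05, 0.05, vals.size)
    ax.scatter(np.full(vals.size, i) + jitter, vals, s=10, color="k", alpha=0.5)
ax.set_xticks([1, 2])
ax.set_xticklabels(["Condition A", "Condition B"])
ax.set_ylabel("Alpha power (µV²)")
plt.show()
```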

4.2.2 |. Recommendations for line graphs

Line graphs showing mean power across time points, frequencies, or sensors often serve to illustrate time course data, similar to figures in ERP studies. Spectral power is likewise often illustrated as a line or bar graph with two dimensions, frequency and power. Recommendations for such line graphs largely parallel published recommendations for ERP work (Keil et al., 2014; Picton et al., 2000). These include clearly labeling the x axis and y axis, with reference time points (e.g., stimulus onset) indicated by appropriate markers such as a vertical line. The physical unit should be prominently labeled near the appropriate axis, which will often be the y axis. Furthermore, clearly visible tick marks at appropriate spacing assist readers in correctly identifying time ranges of interest.

In addition, shaded error areas around line plots (Figure 8) have become increasingly common and are recommended because they allow readers to recognize time ranges with higher versus lower variability. The type of variability index that is most helpful will depend on the study questions and the hypotheses being tested. Metrics of between-subjects variability (e.g., the standard deviation or standard error of the mean) are most informative in studies with between-subjects designs, such as group comparisons or correlational studies of inter-individual differences. They are also widely used to illustrate variability in within-subjects designs. As an alternative, these latter studies may also consider displaying suitable estimates of within-subjects variability across conditions, which are often more pertinent for illustrating the robustness of condition differences (Cousineau, 2005, 2017).
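A minimal sketch of a line plot with a shaded standard-error band, of the kind shown in Figure 8, follows; the data are simulated, the between-subjects SEM is an example choice, and within-subjects error bands would require the adjustments cited above.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated time-varying alpha power (percent change from baseline):
# (n_subjects, n_times), with stimulus onset at time zero
rng = np.random.default_rng(8)
times = np.linspace(-0.5, 2.0, 250)
power = rng.standard_normal((20, times.size)) * 5 - 30 * (times > 0.2)

mean = power.mean(axis=0)
sem = power.std(axis=0, ddof=1) / np.sqrt(power.shape[0])

fig, ax = plt.subplots()
ax.plot(times, mean, color="k")
ax.fill_between(times, mean - sem, mean + sem, alpha=0.3)  # shaded SEM band
ax.axvline(0, linestyle="--", color="gray")                # stimulus onset
ax.set_xlabel("Time (s)")
ax.set_ylabel("Alpha power (% change from baseline)")
plt.show()
```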

FIGURE 8.

Example of a line figure with corresponding topography, showing time-varying alpha power changes, expressed as percent change from baseline, as indicated on the y axis. Data from electrode location Oz are shown, and Oz is highlighted in the topography shown in the inset. The baseline segment used for percent conversion is clearly marked, and a time axis showing stimulus onset at time zero is provided. Shaded error bars (here: standard error of the mean) illustrate the variability of the time-varying power change across participants. The averaged topography across a time window (red line segment) is shown as the inset, highlighting the electrode from which the data are taken. It is accompanied by a color bar, which is labeled with the unit used (here: percent change).

4.2.3 |. Recommendations for higher-dimensional figures such as time-by-frequency plots

Displaying changes in power across a time-by-frequency plane is often accomplished by color coding power, phase-locking, or another frequency domain index as a third dimension, resulting in a figure like Figure 9. Given the concerns discussed above regarding broadband phenomena being misinterpreted as band-specific, it is highly recommended that time-frequency plots include a sufficient number of frequencies to illustrate the extent to which any effects are specific to a given frequency band. Often, this will involve including low frequencies, which assists in identifying transient responses masquerading as high-frequency oscillatory bursts. Conveying the information of interest is facilitated by selecting a color or grayscale scheme that appropriately translates distance in data space into distance in color space. For example, traditionally used color schemes ranging from blue to red often distort the representation of the numerical range and make small differences in the upper range of the distribution appear larger than they are (Karim et al., 2019). Furthermore, the scientific community has increasingly prioritized accommodating readers with color vision deficiencies such as red-green or blue-yellow color blindness. Many colormaps accomplish a veridical representation of the numerical range while also allowing people with color vision deficiencies to glean the appropriate information from the figure (Nuñez et al., 2018); examples include "viridis", "magma", and "cividis", available in most programming environments. Authors are encouraged to consider the range of the data and select an appropriate colormap, ensuring a fair and complete representation of the data range across the color range. For example, in raw power plots without baseline removal or normalization, power values are non-negative, and thus a unipolar map is preferred.
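The following sketch renders a simulated time-frequency matrix with the perceptually uniform "viridis" colormap, a labeled color bar, and a frequency axis extending well beyond the band showing the effect; all data and ranges are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated time-frequency power (percent change), frequencies x time points;
# the frequency axis deliberately extends well beyond the 8-12 Hz band
times = np.linspace(-0.5, 2.0, 250)
freqs = np.arange(2, 51)
tf = np.zeros((freqs.size, times.size))
tf[(freqs >= 8) & (freqs <= 12)] = 40 * (times > 0.2)  # band-limited increase

fig, ax = plt.subplots()
mesh = ax.pcolormesh(times, freqs, tf, cmap="viridis", shading="auto")
ax.axvline(0, color="w", linestyle="--")  # stimulus onset
ax.set_xlabel("Time (s)")
ax.set_ylabel("Frequency (Hz)")
fig.colorbar(mesh, ax=ax, label="Power (% change from baseline)")
plt.show()
```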

FIGURE 9.

Example of a time-frequency plot, showing the time-varying power at a range of different frequencies. The frequency axis is clearly labeled with the frequencies depicted in each row of the plot. Data from electrode location Oz are shown, as indicated in the top left corner. The baseline segment used for percent change conversion is clearly marked, and a time axis showing stimulus onset at time zero is provided. The plot is accompanied by a color bar, which is labeled with the unit used (here: percent change from baseline). Note that calculating percent change implicitly assumes a multiplicative model of change in oscillatory power. Authors may wish to make this assumption explicit and provide the underlying rationale in the manuscript.

As discussed above (see Section 3.2), temporal smearing is a challenge for the interpretation of time-frequency information. Thus, sufficient time before and after events of interest should be included in the figure, allowing readers to assess the variability in the information provided and to understand the time course. In the case of spectrograms or similar analyses with fixed window length, it is also helpful if the figure includes a representation of the window length used to compute the time-frequency representation. Authors may opt to discuss key metrics of temporal smearing, such as the full width at half maximum (FWHM) or the standard deviation in the time or frequency domain, in the figure caption to facilitate reading.

4.2.4 |. Recommendations for figures with topographical information

Many methods exist for topographically mapping physical or statistical indices of frequency domain activity onto spatial representations of the head or brain. Often, these will involve interpolation of values into the inter-electrode spaces on a scalp volume or brain volume. In these cases, specifying the interpolation method (e.g., linear interpolation, spline interpolation, machine learning-based approaches) is critical for reproduction and communication of findings, because some interpolation techniques contain underlying assumptions regarding the nature of the interpolated data (Brunet et al., 2011; Perrin et al., 1987), and different interpolation methods may yield drastically different results at certain locations of the brain or head volume.
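As an illustration of how strongly this choice enters the pipeline, the sketch below interpolates simulated channel values onto a two-dimensional grid with scipy's griddata; switching method between "linear" and "cubic" changes the resulting map, which is one reason the method must be reported. Electrode positions and values are simulated.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# Simulated 2-D projected electrode positions and per-channel alpha power
rng = np.random.default_rng(9)
pos = rng.uniform(-1, 1, size=(48, 2))
pos = pos[np.linalg.norm(pos, axis=1) < 1]  # keep sites within the head circle
power = np.exp(-3 * np.sum((pos - [0.0, -0.6]) ** 2, axis=1))  # posterior max

# Interpolate onto a regular grid; the method ("linear" vs. "cubic") changes
# the resulting map and is therefore part of the analysis to be reported
grid_x, grid_y = np.mgrid[-1:1:200j, -1:1:200j]
topo = griddata(pos, power, (grid_x, grid_y), method="cubic")

fig, ax = plt.subplots()
im = ax.imshow(topo.T, extent=(-1, 1, -1, 1), origin="lower", cmap="viridis")
ax.scatter(pos[:, 0], pos[:, 1], s=8, color="k")  # electrode positions
fig.colorbar(im, ax=ax, label="Alpha power (a.u.)")
plt.show()
```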

4.2.5 |. Recommendations for figures showing spatial relations

Authors may be interested in examining spatial relations between sensors or brain regions. A wide variety of methods is available, as discussed above (see Section 3.4), often resulting in high-dimensional dependence information, sometimes including directional information. Major figure types include color-coded matrices illustrating pairwise dependence information, such as inter-site phase-locking or Granger causality, and graphs illustrating connectivity/dependency as lines or arrows between spatial nodes. For color-coded dependence maps, the recommendations for time-frequency plots above apply: Clearly labeled axes and a clearly visible color bar mapping colors to numerical values are recommended. Connectivity graphs with nodes likewise benefit from clearly labeled nodes and require figure legends that define the meaning of graphical elements such as line thickness, arrow direction, line style, shading, or any other graphical indicator of inter-node dependence. Any thresholding used to limit the lines shown should also be made explicit in the methods and figure caption.

5 |. CHECKLISTS

In order to facilitate communication concerning terminology, best practices in methodology, assessment, transparency, and replication, and to provide a general guideline for studies of oscillatory brain activity, we provide a set of detailed checklists that authors are encouraged to address in manuscripts submitted for publication. Table 2 covers principal methodological elements of spectral analyses. Given that all forms of oscillatory measures (e.g., time-frequency analyses, phase-based analyses, etc.) share these fundamental properties, researchers conducting any form of spectral analysis should provide the information in Table 2. Table 3 expands upon the spectral domain guidelines by including information pertinent to time-frequency analyses, specifying which elements should be reported for each time-frequency method used. Additional tables provide recommendations for phase-based analyses (Table 4), connectivity analyses (Table 5), and coupling analyses (Table 6). Guidelines for reporting figures can be found in Table 7. Although not directly covered here, authors are also encouraged to include additional details highlighted in previous reports (Keil et al., 2014; Picton et al., 2000) pertaining to general EEG/MEG methods, such as equipment recording characteristics, preprocessing steps, and stimulus timing parameters.

TABLE 2.

Checklist for spectral analyses

# Information to be included in the manuscript Sections Completed?
1. Specifying the inputs and outputs of all algorithms used in the processing pipeline 1.3 Y/N
2. A discussion of how oscillatory activity was conceptualized relative to 1/f noise and/or other broadband phenomena (underlying model) 1.2, 2.1 Y/N
3. A rationale for the choice of measurement of power in a specific frequency band, including how nonperiodic (1/f) contributions to the spectrum were addressed 3.1 Y/N
4. A statement describing the specific type of Fourier- or non-Fourier-based algorithm used for transformation from the time domain to the frequency domain.
For non-Fourier analyses: (1) If using parametric spectral analysis, the extent to which the assumption that the observed data reflect stochastic processes identified through autoregressive models was verified; tests of statistical stationarity of the time series can be used to address this assumption. (2) If using data-driven methods (e.g., half-wave analyses, matching pursuit algorithms, etc.), a description of the specific algorithm used, including code and example data, and preprocessing steps
1.1, 1.3, 2.4, & 3.1 Y/N
5. The exact duration of the time segments (e.g., duration of segmented trials and the duration of the temporal integration windows) used for transformation into the frequency domain for each condition of interest. In addition, the total number of segments (e.g., trials per condition) entering an averaged spectrum, along with how data epochs were combined within and across recordings (e.g., overlapping windows) 1.3.2, 2.3, 3.1, 3.2, & 3.1.5 Y/N
6. The type, total number of, overlap between, and duration of any taper window functions, along with their ramp-on and ramp-off duration. If alternative and/or additional steps were taken to address edge artifacts, these should be stated. If applicable, the choice of taper window function should be specified as being guided by computational principles and/or by aiming to replicate current methods (e.g., Hann or Hamming window) 3.1.3 & 3.1.5 Y/N
7. If zero-padding is applied, the number and location of added zeros (e.g., before the time series, after the time series, or both before and after the time series) 2.3 Y/N
8. All normalization steps (e.g., by length of time, multiplication of the lower half of the spectrum, or by the complex conjugate, etc.) applied to the spectral power or power density calculation 3.1.1 & 3.1.5 Y/N
9. The native frequency resolution of the spectrum (i.e., 1/(epoch duration in seconds)). In addition, the number of frequency bins extracted for a specific band of interest, and the range of these bins (e.g., 7.98 Hz to 11.97 Hz) 1.3.2, 2.3, & 3.1.5 Y/N
10. Whether analyses were conducted using single trials or the average across trials 3.2.1 Y/N
11. How band power was measured from a spectrum. The following recommendations are provided: (1) If measuring raw band power, report the full spectrum from which the band was extracted, how the band was measured (e.g., mean, median, or peak), and how 1/f effects or other spectral shape effects were addressed. (2) If measuring relative band power, the full power spectrum should be reported, along with how 1/f effects or other spectral shape effects were addressed (note that this method is not recommended). (3) If measuring band power ratios between specific frequencies, describe the full power spectrum, including the calculation of specific frequency band power (note that this method is not recommended) 3.1.2 Y/N

TABLE 3.

Checklist for time-frequency analyses

# Information to be included in the manuscript Sections Completed?
1. The specific stage of processing in which time-frequency analysis was applied (e.g., single-trials, after trial averaging, etc.). This clarifies which aspect(s) of oscillatory activity (e.g., spontaneous and/or induced, evoked, etc.) are being observed. If averaged potentials of each trial were subtracted prior to conducting time-frequency analyses on single trials, this step should be stated along with figures depicting the averaged potential in both time and frequency domains 3.2.1 Y/N
2. For authors using Fourier-based time-frequency analyses (spectrograms), the following recommendations are provided for each specific approach: (1) If using spectrograms, or moving-window DFT/FFT analyses, report the specific window size and step size. Additional within-window averaging achieved via algorithms (e.g., Welch periodogram method) should also be reported. (2) If using multitaper analyses, the type of tapering windows used, total number used, their center frequencies, whether any smoothing factors are applied, and the specific algorithms used to form their shapes should be reported. (3) If using complex demodulation, the frequencies examined, and the specific properties of the low-pass filter used (i.e., filter type, order, and cutoff frequency) should be reported 3.2.2 Y/N
3. For authors conducting time-frequency analyses based on time domain filtering methods (i.e., Filter-Hilbert or similar approaches), the software and version number of the Hilbert transform implementation used to compute the phase-shifted version of the empirical signal. In addition, authors should state the specific properties of the band-pass filters (i.e., filter types, order, and cutoff frequencies) 3.2.3 Y/N
4. If using wavelet-based methods for time-frequency analyses, include the smoothing/smearing for the minimum and maximum frequency of interest and indicate the maximal temporal and frequency smoothing for a specific wavelet family. In addition, if using Morlet wavelets, include the Morlet parameter (m) indicating the trade-off between time and frequency smoothing and smoothing values in the time (σt) and frequency (σf) domains 3.2.4 Y/N
5. As for frequency domain analyses, specify the duration of analytical time segments used, with pre- and post-event onset durations. In addition, include the number of time segments for each condition/group 3.2.6 Y/N
6. Descriptions of any nonlinear transformations and/or baseline adjustment that were used prior to statistical analyses, accompanied by a rationale for these decisions. Specifically, include the duration used as a baseline and the type of algorithm (e.g., division, subtraction, etc.) used for this adjustment 3.2.6 Y/N

TABLE 4.

Checklist for phase-based analyses

# Information to be included in the manuscript Sections Completed?
1. Properties of filters used (i.e., filter type, order, and cutoff frequency). A high level of detail is needed especially if using high filter orders and/or filtering in narrow frequency bands 3.3.1 Y/N
2. If applying advanced artifact removal techniques (e.g., ICA), authors are encouraged to consider reporting results with and without the use of these methods. Specific preprocessing pipeline steps should also be described 3.3.1 Y/N
3. The input of phase-based analyses, including the frequency specificity of the algorithm and/or filters used. Furthermore, authors should not estimate the phase of broadband signals. Non-normality of indices (e.g., being bounded between 0 and 1) should be addressed appropriately in averaging and statistical testing 3.3.1 Y/N
4. The number of trials used in the analysis per condition or group, reported along with steps taken to address unequal trial counts, for example, a description of how equal trial numbers for each condition were achieved by randomly dropping trials, or of how the algorithm used addresses unequal trial counts 3.3.1 Y/N

TABLE 5.

Checklist for connectivity analyses

# Steps to be addressed in the manuscript Sections Completed?
1. The specific source-modeling approach is described, including the metrics and algorithms used, with references to these methods. This should also be done if using graph theory to assess connectivity matrices 3.4.3 Y/N
2. Methods and algorithms used to address volume conduction and/or spatial smearing are detailed, providing specific references or mathematical formulations 3.4.4 Y/N
3. State the extent to which spatial nodes were examined ad hoc or specified a priori 3.4.4 Y/N

TABLE 6.

Checklist for coupling analyses

# Steps to be addressed in the manuscript Sections Completed?
1. The specific preprocessing steps and algorithm used to perform cross-frequency coupling analyses. A statement is included regarding the number of trials per condition and the steps taken to ensure equal trial counts across conditions. If binning is used, state the number of bins used 3.5.3 Y/N
2. The extent to which variables are normally distributed is described along with appropriate parametric or nonparametric statistical tests. This often will include showing the range and distribution of variables used in analyses. If data are randomized/permuted to determine statistical significance, the specific algorithms and code used should be reported 3.5.4 Y/N
3. It is stated whether coupling analyses and randomization/permutation were conducted within participants, between participants, or in a mixed design 3.5.4 Y/N

TABLE 7.

Checklist for data figures

# Steps to be addressed Sections Completed?
1. Including distributions in figures is encouraged where possible. Distribution-based figures are preferred, showing inter-participant variability, such as scatterplots, violin plots, pirate plots, histograms, smoothed distribution plots, and/or bar/line plots including individual subject data points.
Often, within-participant variability will be of greater interest, and thus authors may wish to consider connected line plots displaying within-participant effects for each participant
4.2.1 Y/N
2. If using line graphs to represent power values across time, frequency, or sensor(s)/sources, the x- and y-axes are clearly labeled with reference points associated with specific markers (e.g., time points with a vertical line indicating stimulus onset). The unit of measurement is labeled near each respective axis, with clear tick marks indicating x and y ranges of interest.
Authors may also consider applying shaded areas along line plots to indicate ranges with lower versus higher variability
4.2.2 Y/N
3. If displaying changes in frequency power across time, authors may use color coding to illustrate the third dimension (e.g., power). In addition to the requirements described for line plots (i.e., clearly labeled x- and y-axes with respective units), time-frequency plots should contain sufficient frequencies above and below the range where an effect is observed to demonstrate frequency specificity (e.g., to demonstrate specificity of changes at 8–12 Hz, a y range that extends beyond this region is needed, such as from 2 to 50 Hz). Similarly, the time axis should extend sufficiently before and after events of interest to allow for assessment of any temporal smearing effects. Furthermore, authors should use color schemes that facilitate veridical representations of value ranges and accommodate those with color vision deficits.
If using a spectrogram or other moving-window approach, authors are encouraged to include a figure element depicting the window length used.
4.2.3 Y/N
4. If depicting the topography of frequency domain activity across the scalp, include the interpolation method used to calculate values in the inter-electrode space across the scalp 4.2.4 Y/N
5. If reporting spatial relationships between sensors/sources, figures using color-coded matrices to depict pairwise dependence information and/or graphs showing connectivity lines/arrows between spatial nodes may be used. Include clearly labeled x- and y-axes and color bars. For connectivity graphs, nodes should be clearly labeled 4.2.5 Y/N

ACKNOWLEDGMENTS

This work was supported by National Institute of Mental Health grants R01MH112558 and R01MH125615 to A. Keil and M. Ding and NIA grant RF1AG062666 to G. Gratton and M. Fabiani. The authors would like to thank the many researchers who commented on previous versions of this manuscript. The authors are particularly grateful to the following individuals for their input at various stages of manuscript preparation: Martin Antov, Felix Bartsch, Maeve Boylan, Margaret M. Bradley, Jessica Sanches Braga Figueira, W. Matt Friedl, Peter J. Lang, Gregory A. Miller, Emily Martinez, Andrew Maurer, Christian Panitz, Jourdan Pouliot, Harold Rocha, Kaia Sargent, Lisa S. Scott, Anna-Lena Tebbe, and Cindy Yee-Bradbury.

Footnotes

1. Note that intracranial recordings such as ECoG do not completely eliminate issues of volume conduction, especially when using a unipolar reference; e.g., Mercier et al. (2017).

REFERENCES

  1. Akaike H (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716–723. [Google Scholar]
  2. Antov MI, Plog E, Bierwirth P, Keil A, & Stockhorst U (2020). Visuocortical tuning to a threat-related feature persists after extinction and consolidation of conditioned fear. Scientific Reports, 10(1), 1–15. 10.1038/s41598-020-60597-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Aviyente S, Bernat EM, Evans WS, & Sponheim SR (2011). A phase synchrony measure for quantifying dynamic functional integration in the brain. Human Brain Mapping, 32(1), 80–93. 10.1002/hbm.21000 [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Bae G-Y, & Luck SJ (2018). Dissociable decoding of spatial attention and working memory from EEG oscillations and sustained potentials. Journal of Neuroscience, 38(2), 409–422. 10.1523/JNEUROSCI.2860-17.2017 [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Bae G-Y, & Luck SJ (2019). Reactivation of previous experiences in a working memory task. Psychological Science, 30(4), 587–595. 10.1177/0956797619830398 [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Barry RJ, & Blasio FMD (2021). Characterizing pink and white noise in the human electroencephalogram. Journal of Neural Engineering, 18(3), 034001. 10.1088/1741-2552/abe399 [DOI] [PubMed] [Google Scholar]
  7. Bernat EM, Williams WJ, & Gehring WJ (2005). Decomposing ERP time–frequency energy using PCA. Clinical Neurophysiology, 116(6), 1314–1334. 10.1016/j.clinph.2005.01.019 [DOI] [PubMed] [Google Scholar]
  8. Bertrand O, Bohorquez J, & Pernier J (1994). Time-frequency digital filtering based on an invertible wavelet transform: An application to evoked potentials. IEEE Transactions on Biomedical Engineering, 41(1), 77–88. [DOI] [PubMed] [Google Scholar]
  9. Brunet D, Murray MM, & Michel CM (2011). Spatiotemporal analysis of multichannel EEG: CARTOOL. Computational Intelligence and Neuroscience, 2011, 813870. 10.1155/2011/813870 [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Bullmore E, & Sporns O (2009). Complex brain networks: Graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3), 186–198. 10.1038/nrn2575 [DOI] [PubMed] [Google Scholar]
  11. Canolty RT, & Knight RT (2010). The functional role of cross-frequency coupling. Trends in Cognitive Sciences, 14(11), 506–515. 10.1016/j.tics.2010.09.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Castellanos NP, & Makarov VA (2006). Recovering EEG brain signals: Artifact suppression with wavelet enhanced independent component analysis. Journal of Neuroscience Methods, 158(2), 300–312. 10.1016/j.jneumeth.2006.05.033 [DOI] [PubMed] [Google Scholar]
  13. Clements GM, Bowie DC, Gyurkovics M, Low KA, Fabiani M, & Gratton G (2021). Spontaneous alpha and theta oscillations are related to complementary aspects of cognitive control in younger and older adults. Frontiers in Human Neuroscience, 15, 621620. 10.3389/fnhum.2021.621620 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Cohen L (1995). Time-frequency analysis. Prentice Hall PTR. [Google Scholar]
  15. Cohen MX (2014). Analyzing neural time series data: Theory and practice. MIT Press. [Google Scholar]
  16. Cohen MX (2019). A better way to define and describe Morlet wavelets for time-frequency analysis. NeuroImage, 199, 81–86. 10.1016/j.neuroimage.2019.05.048 [DOI] [PubMed] [Google Scholar]
  17. Cook EW 3rd, & Miller GA (1992). Digital filtering: Background and tutorial for psychophysiologists. Psychophysiology, 29(3), 350–367. [DOI] [PubMed] [Google Scholar]
  18. Cousineau D (2005). Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson’s method. Tutorials in quantitative methods for Psychology, 1(1), 42–45. doi: 10.20982/tqmp.01.1.p042 [DOI] [Google Scholar]
  19. Cousineau D (2017). Varieties of confidence intervals. Advances in Cognitive Psychology, 13(2), 140–155. 10.5709/acp-0214-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Delorme A, & Makeig S (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134(1), 9–21. 10.1016/J.Jneumeth.2003.10.009 [DOI] [PubMed] [Google Scholar]
  21. Dhamala M, Rangarajan G, & Ding M (2008). Estimating granger causality from fourier and wavelet transforms of time series data. Physical Review Letters, 100(1), 018701. [DOI] [PubMed] [Google Scholar]
  22. Ding M, Mo J, Schroeder CE, & Wen X (2011). Analyzing coherent brain networks with granger causality. Conference Proceedings: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2011, 5916–5918. 10.1109/IEMBS.2011.6091463 [DOI] [PubMed] [Google Scholar]
  23. Ding M, & Rangarajan G (2013). Parametric spectral analysis. In Jaeger D & Jung R (Eds.), Encyclopedia of computational neuroscience (pp. 1–9). Springer. 10.1007/978-1-4614-7320-6_416-1 [DOI] [Google Scholar]
  24. Donoghue T, Haller M, Peterson EJ, Varma P, Sebastian P, Gao R, Noto T, Lara AH, Wallis JD, Knight RT, Shestyuk A, & Voytek B (2020). Parameterizing neural power spectra into periodic and aperiodic components. Nature Neuroscience, 23(12), 1655–1665. 10.1038/s41593-020-00744-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Donoghue T, Schaworonkow N, & Voytek B (2022). Methodological considerations for studying neural oscillations. European Journal of Neuroscience. 10.1111/ejn.15361 [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Eidelman-Rothman M, Ben-Simon E, Freche D, Keil A, Hendler T, & Levit-Binnun N (2019). Sleepless and desynchronized: Impaired inter trial phase coherence of steady-state potentials following sleep deprivation. NeuroImage, 202, 116055. 10.1016/j.neuroimage.2019.116055 [DOI] [PubMed] [Google Scholar]
  27. Elliott G, Rothenberg TJ, & Stock J (1996). Efficient tests for an autoregressive unit root. Econometrica, 64(4), 813–836. [Google Scholar]
  28. Fransen AMM, van Ede F, & Maris E (2015). Identifying neuronal oscillations using rhythmicity. NeuroImage, 118, 256–267. 10.1016/j.neuroimage.2015.06.003 [DOI] [PubMed] [Google Scholar]
  29. Freeman WJ, & Zhai J (2009). Simulated power spectral density (PSD) of background electrocorticogram (ECoG). Cognitive Neurodynamics, 3(1), 97–103. 10.1007/s11571-008-9064-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Fries P (2015). Rhythms for cognition: Communication through coherence. Neuron, 88(1), 220–235. 10.1016/j.neuron.2015.09.034 [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Friston KJ, Bastos A, Litvak V, Stephan KE, Fries P, & Moran RJ (2012). DCM for complex-valued data: Cross-spectra, coherence and phase-delays. NeuroImage, 59(1), 439–455. 10.1016/j.neuroimage.2011.07.048 [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Gable PA, Miller MW, & Bernat EM (2022). The Oxford handbook of EEG frequency analyses. Oxford University Press. [Google Scholar]
  33. Galambos R (1992). A comparison of certain gamma-band (40 Hz) brain rhythms in cat and man. In Basar E & Bullock T (Eds.), Induced rhythms in the brain (pp. 103–122). Springer. [Google Scholar]
  34. Garcia JO, Srinivasan R, & Serences JT (2013). Near-real-time feature-selective modulations in human cortex. Current Biology, 23(6), 515–522. 10.1016/j.cub.2013.02.013 [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Gelman A, Hill J, & Yajima M (2012). Why we (usually) don’t have to worry about multiple comparisons. Journal of Research on Educational Effectiveness, 5(2), 189–211. 10.1080/19345747.2011.618213 [DOI] [Google Scholar]
  36. Giordano BL, Ince RAA, Gross J, Schyns PG, Panzeri S, & Kayser C (2017). Contributions of local speech encoding and functional connectivity to audio-visual speech perception. eLife, 6, e24763. 10.7554/eLife.24763 [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Gorrostieta C, Fiecas M, Ombao H, Burke E, & Cramer S (2013). Hierarchical vector auto-regressive models and their applications to multi-subject effective connectivity. Frontiers in Computational Neuroscience, 7, 159. 10.3389/fncom.2013.00159 [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Groppe DM, Urbach TP, & Kutas M (2011). Mass univariate analysis of event-related brain potentials/fields I: A critical tutorial review. Psychophysiology, 48(12), 1711–1725. 10.1111/j.1469-8986.2011.01273.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Gyurkovics M, Clements GM, Low KA, Fabiani M, & Gratton G (2021). The impact of 1/f activity and baseline correction on the results and interpretation of time-frequency analyses of EEG/MEG data: A cautionary tale. NeuroImage, 237, 118192. 10.1016/j.neuroimage.2021.118192 [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Haegens S, Pathak YJ, Smith EH, Mikell CB, Banks GP, Yates M, Bijanki KR, Schevon CA, McKhann GM, Schroeder CE, & Sheth SA (2022). Alpha and broadband high-frequency activity track task dynamics and predict performance in controlled decision-making. Psychophysiology, 59(5), e13901. 10.1111/psyp.13901 [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Handy TC (2004). Event-related potentials: A methods handbook. MIT Press. [Google Scholar]
  42. Harper J, Malone SM, & Bernat EM (2014). Theta and delta band activity explain N2 and P3 ERP component activity in a go/no-go task. Clinical Neurophysiology, 125(1), 124–132. 10.1016/j.clinph.2013.06.025 [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Harris FJ (1978). On the use of windows for harmonic analysis with the discrete Fourier transform. Proceedings of the IEEE, 66(1), 51–83. 10.1109/PROC.1978.10837 [DOI] [Google Scholar]
  44. Hashemi A, Pino LJ, Moffat G, Mathewson KJ, Aimone C, Bennett PJ, Schmidt LA, & Sekuler AB (2016). Characterizing population EEG dynamics throughout adulthood. eNeuro, 3(6), 1–13. 10.1523/ENEURO.0275-16.2016 [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Haufe S, Meinecke F, Görgen K, Dähne S, Haynes J-D, Blankertz B, & Bießmann F (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87, 96–110. 10.1016/j.neuroimage.2013.10.067 [DOI] [PubMed] [Google Scholar]
  46. Hauk O, Keil A, Elbert T, & Muller MM (2002). Comparison of data transformation procedures to enhance topographical accuracy in time-series analysis of the human EEG. Journal of Neuroscience Methods, 113(2), 111–122. [DOI] [PubMed] [Google Scholar]
  47. He BJ (2014). Scale-free brain activity: Past, present, and future. Trends in Cognitive Sciences, 18(9), 480–487. 10.1016/j.tics.2014.04.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Hipp JF, Engel AK, & Siegel M (2011). Oscillatory synchronization in large-scale cortical networks predicts perception. Neuron, 69(2), 387–396. 10.1016/j.neuron.2010.12.027 [DOI] [PubMed] [Google Scholar]
  49. Hughes AM, Whitten TA, Caplan JB, & Dickson CT (2012). BOSC: A better oscillation detection method, extracts both sustained and transient rhythms from rat hippocampal recordings. Hippocampus, 22(6), 1417–1428. 10.1002/hipo.20979 [DOI] [PubMed] [Google Scholar]
  50. Hülsemann MJ, Naumann E, & Rasch B (2019). Quantification of phase-amplitude coupling in neuronal oscillations: Comparison of phase-locking value, mean vector length, modulation index, and generalized-linear-modeling-cross-frequency-coupling. Frontiers in Neuroscience, 13, 573. 10.3389/fnins.2019.00573 [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Kampstra P (2008). Beanplot: A boxplot alternative for visual comparison of distributions. Journal of Statistical Software, 28(1), 1–9. 10.18637/jss.v028.c0127774042 [DOI] [Google Scholar]
  52. Kappenman ES, & Luck SJ (2010). The effects of electrode impedance on data quality and statistical significance in ERP recordings. Psychophysiology, 47(5), 888–904. 10.1111/j.1469-8986.2010.01009.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Kappenman ES, & Luck SJ (2012). ERP components: The ups and downs of brainwave recordings. In Luck SJ & Kappenman ES (Eds.), Oxford handbook of ERP components. Oxford University Press. [Google Scholar]
  54. Karim RM, Kwon O-H, Park C, & Lee K (2019). A study of colormaps in network visualization. Applied Sciences, 9(20), 4228. 10.3390/app9204228 [DOI] [Google Scholar]
  55. Karniski W, Blair RC, & Snider AD (1994). An exact statistical method for comparing topographic maps, with any number of subjects and electrodes. Brain Topography, 6(3), 203–210. [DOI] [PubMed] [Google Scholar]
  56. Keil A, Debener S, Gratton G, Junghofer M, Kappenman ES, Luck SJ, Luu P, Miller GA, & Yee CM (2014). Committee report: Publication guidelines and recommendations for studies using electroencephalography and magnetoencephalography. Psychophysiology, 51(1), 1–21. 10.1111/psyp.12147 [DOI] [PubMed] [Google Scholar]
  57. Keysers C, Gazzola V, & Wagenmakers E-J (2020). Using bayes factor hypothesis testing in neuroscience to establish evidence of absence. Nature Neuroscience, 23(7), 788–799. 10.1038/s41593-020-0660-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Klimesch W (2018). The frequency architecture of brain and brain body oscillations: An analysis. European Journal of Neuroscience, 48(7), 2431–2453. 10.1111/ejn.14192 [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Kolev V, & Yordanova J (1997). Analysis of phase-locking is informative for studying event-related EEG activity. Biological Cybernetics, 76(3), 229–235. 10.1007/s004220050335 [DOI] [PubMed] [Google Scholar]
  60. Kook JH, Vaughn KA, DeMaster DM, Ewing-Cobbs L, & Vannucci M (2021). BVAR-connect: A variational bayes approach to multi-subject vector autoregressive models for inference on brain connectivity networks. Neuroinformatics, 19(1), 39–56. 10.1007/s12021-020-09472-w [DOI] [PubMed] [Google Scholar]
  61. Kopp B, Seer C, Lange F, Kluytmans A, Kolossa A, Fingscheidt T, & Hoijtink H (2016). P300 amplitude variations, prior probabilities, and likelihoods: A Bayesian ERP study. Cognitive, Affective, & Behavioral Neuroscience, 16(5), 911–928. 10.3758/s13415-016-0442-3 [DOI] [PubMed] [Google Scholar]
  62. Kramer MA, Tort ABL, & Kopell NJ (2008). Sharp edge artifacts and spurious coupling in EEG frequency comodulation measures. Journal of Neuroscience Methods, 170(2), 352–357. 10.1016/j.jneumeth.2008.01.020 [DOI] [PubMed] [Google Scholar]
  63. Kwaitkowski D, Phillips PC, Schmidt P, & Shin Y (1992). Testing the null hypothesis of stationarity against the alternative of a unit root. Journal of Econometrics, 54(1), 159–178. [Google Scholar]
  64. Lachaux JP, Rodriguez E, Martinerie J, & Varela FJ (1999). Measuring phase synchrony in brain signals. Human Brain Mapping, 8(4), 194–208. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Lehmann D, Ozaki H, & Pal I (1987). EEG alpha map series: Brain micro-states by space-oriented adaptive segmentation. Electroencephalography and Clinical Neurophysiology, 67(3), 271–288. 10.1016/0013-4694(87)90025-3 [DOI] [PubMed] [Google Scholar]
  66. Li R, Keil A, & Principe JC (2009). Single-trial P300 estimation with a spatiotemporal filtering method. Journal of Neuroscience Methods, 177(2), 488–496. 10.1016/j.jneumeth.2008.10.035 [DOI] [PubMed] [Google Scholar]
  67. Lin A, Maniscalco B, & He BJ (2016). Scale-free neural and physiological dynamics in naturalistic stimuli processing. eNeuro, 3(5), 1–13. 10.1523/ENEURO.0191-16.2016 [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Lisman JE, & Jensen O (2013). The theta-gamma neural code. Neuron, 77(6), 1002–1016. 10.1016/j.neuron.2013.03.007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Liu T, Cable D, & Gardner JL (2018). Inverted encoding models of human population response conflate noise and neural tuning width. Journal of Neuroscience, 38(2), 398–408. 10.1523/JNEUROSCI.2453-17.2017 [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Loza CA (2019). RobOMP: Robust variants of orthogonal matching pursuit for sparse representations. PeerJ Computer Science, 5, e192. 10.7717/peerj-cs.192 [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Loza CA, & Principe JC (2016). Transient model of EEG using Gini index-based matching pursuit. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 724–728). IEEE. 10.1109/ICASSP.2016.7471770 [DOI] [Google Scholar]
  72. Luck SJ (2005). An introduction to the event-related potential technique. MIT Press. [Google Scholar]
  73. Luck SJ, & Gaspelin N (2017). How to get statistically significant effects in any ERP experiment (and why you shouldn’t). Psychophysiology, 54(1), 146–157. 10.1111/psyp.12639 [DOI] [PMC free article] [PubMed] [Google Scholar]
  74. Makeig S, Debener S, Onton J, & Delorme A (2004). Mining event-related brain dynamics. Trends in Cognitive Sciences, 8(5), 204–210. 10.1016/j.tics.2004.03.008 [DOI] [PubMed] [Google Scholar]
  75. Maris E (2012). Statistical testing in electrophysiological studies. Psychophysiology, 49(4), 549–565. 10.1111/j.1469-8986.2011.01320.x [DOI] [PubMed] [Google Scholar]
  76. Maris E, & Oostenveld R (2007). Nonparametric statistical testing of EEG- and MEG-data. Journal of Neuroscience Methods, 164(1), 177–190. 10.1016/j.jneumeth.2007.03.024 [DOI] [PubMed] [Google Scholar]
  77. Melkonian D, Blumenthal TD, & Meares R (2003). High-resolution fragmentary decomposition—A model-based method of non-stationary electrophysiological signal analysis. Journal of Neuroscience Methods, 131(1), 149–159. 10.1016/j.jneumeth.2003.08.005 [DOI] [PubMed] [Google Scholar]
  78. Moratti S, Clementz BA, Gao Y, Ortiz T, & Keil A (2007). Neural mechanisms of evoked oscillations: Stability and interaction with transient events. Human Brain Mapping, 28(12), 1318–1333. 10.1002/hbm.20342 [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Mueller EM, Stemmler G, & Wacker J (2010). Single-trial electroencephalogram predicts cardiac acceleration: A time-lagged P-correlation approach for studying neurovisceral connectivity. Neuroscience, 166(2), 491–500. 10.1016/j.neuroscience.2009.12.051 [DOI] [PubMed] [Google Scholar]
  80. Newson JJ, & Thiagarajan TC (2019). EEG frequency bands in psychiatric disorders: A review of resting state studies. Frontiers in Human Neuroscience, 12, 521. 10.3389/fnhum.2018.00521 [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Nolte G, Bai O, Wheaton L, Mari Z, Vorbach S, & Hallett M (2004). Identifying true brain interaction from EEG data using the imaginary part of coherency. Clinical Neurophysiology, 115(10), 2292–2307. 10.1016/j.clinph.2004.04.029 [DOI] [PubMed] [Google Scholar]
  82. Norcia AM, Appelbaum LG, Ales JM, Cottereau BR, & Rossion B (2015). The steady-state visual evoked potential in vision research: A review. Journal of Vision, 15(6), 4. 10.1167/15.6.4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  83. Nuñez JR, Anderton CR, & Renslow RS (2018). Optimizing colormaps with consideration for color vision deficiency to enable accurate interpretation of scientific data. PLoS One, 13(7), e0199239. 10.1371/journal.pone.0199239 [DOI] [PMC free article] [PubMed] [Google Scholar]
  84. Nunez PL (1996). Spatial analysis of EEG. Electroencephalography and Clinical Neurophysiology, 45, 37–38. [PubMed] [Google Scholar]
  85. Nunez PL, Srinivasan R, Westdorp AF, Wijesinghe RS, Tucker DM, Silberstein RB, & Cadusch PJ (1997). EEG coherency. I: Statistics, reference electrode, volume conduction, Laplacians, cortical imaging, and interpretation at multiple scales. Electroencephalography and Clinical Neurophysiology, 103(5), 499–515. [DOI] [PubMed] [Google Scholar]
  86. Nuwer M (1997). Assessment of digital EEG, quantitative EEG, and EEG brain mapping: Report of the American Academy of Neurology and the American clinical neurophysiology society* [RETIRED]. Neurology, 49(1), 277–292. 10.1212/WNL.49.1.277 [DOI] [PubMed] [Google Scholar]
  87. Oken BS (1986). Filtering and aliasing of muscle activity in EEG frequency analysis. Electroencephalography and Clinical Neurophysiology, 64(1), 77–80. 10.1016/0013-4694(86)90045-3 [DOI] [PubMed] [Google Scholar]
  88. Paul M, Govaart GH, & Schettino A (2021). Making ERP research more transparent: Guidelines for preregistration. International Journal of Psychophysiology, 164, 52–63. 10.1016/j.ijpsycho.2021.02.016 [DOI] [PubMed] [Google Scholar]
  89. Pernet C, Garrido MI, Gramfort A, Maurits N, Michel CM, Pang E, Salmelin R, Schoffelen JM, Valdes-Sosa PA, & Puce A (2020). Issues and recommendations from the OHBM COBIDAS MEEG committee for reproducible EEG and MEG research. Nature Neuroscience, 23(12), 1473–1483. 10.1038/s41593-020-00709-0 [DOI] [PubMed] [Google Scholar]
  90. Perrin F, Pernier J, Bertrand O, Giard MH, & Echallier JF (1987). Mapping of scalp potentials by surface spline interpolation. Electroencephalography and Clinical Neurophysiology, 66(1), 75–81. [DOI] [PubMed] [Google Scholar]
  91. Piai V, Dahlslätt K, & Maris E (2015). Statistically comparing EEG/MEG waveforms through successive significant univariate tests: How bad can it be? Psychophysiology, 52(3), 440–443. 10.1111/psyp.12335 [DOI] [PubMed] [Google Scholar]
  92. Picton TW, Bentin S, Berg P, Donchin E, Hillyard SA, Johnson R Jr., Miller GA, Ritter W, Ruchkin DS, Rugg MD, & Taylor MJ (2000). Guidelines for using human event-related potentials to study cognition: Recording standards and publication criteria. Psychophysiology, 37(2), 127–152. [PubMed] [Google Scholar]
  93. Picton TW, John MS, Dimitrijevic A, & Purcell D (2003). Human auditory steady-state responses: Respuestas auditivas de estado estable en humanos. International Journal of Audiology, 42(4), 177–219. 10.3109/14992020309101316 [DOI] [PubMed] [Google Scholar]
94. Pivik RT, Broughton RJ, Coppola R, Davidson RJ, Fox N, & Nuwer MR (1993). Guidelines for the recording and quantitative analysis of electroencephalographic activity in research contexts. Psychophysiology, 30(6), 547–558.
95. Plöchl M, Ossandón JP, & König P (2012). Combining EEG and eye tracking: Identification, characterization, and correction of eye movement artifacts in electroencephalographic data. Frontiers in Human Neuroscience, 6, 1–23. 10.3389/fnhum.2012.00278
96. Polich J (1997). EEG and ERP assessment of normal aging. Electroencephalography and Clinical Neurophysiology/Evoked Potentials Section, 104(3), 244–256. 10.1016/S0168-5597(97)96139-6
97. Pooja, Pahuja SK, & Veer K (2021). Recent approaches on classification and feature extraction of EEG signal: A review. Robotica, 40(1), 77–101. 10.1017/S0263574721000382
98. Riels K, Ramos Campagnoli R, Thigpen N, & Keil A (2022). Oscillatory brain activity links experience to expectancy during associative learning. Psychophysiology, 59(5), e13946. 10.1111/psyp.13946
99. Roach BJ, & Mathalon DH (2008). Event-related EEG time-frequency analysis: An overview of measures and an analysis of early gamma band phase locking in schizophrenia. Schizophrenia Bulletin, 34(5), 907–926. 10.1093/schbul/sbn093
100. Rousselet GA, Foxe JJ, & Bolam JP (2016). A few simple steps to improve the description of group results in neuroscience. European Journal of Neuroscience, 44(9), 2647–2651. 10.1111/ejn.13400
101. Sassenhagen J, & Draschkow D (2019). Cluster-based permutation tests of MEG/EEG data do not establish significance of effect latency or location. Psychophysiology, 56(6), e13335. 10.1111/psyp.13335
102. Schaworonkow N, & Nikulin VV (2019). Spatial neuronal synchronization and the waveform of oscillations: Implications for EEG and MEG. PLoS Computational Biology, 15(5), e1007055. 10.1371/journal.pcbi.1007055
103. Schmidt M (2008). The Sankey diagram in energy and material flow management. Journal of Industrial Ecology, 12(2), 173–185. 10.1111/j.1530-9290.2008.00015.x
104. Schoffelen J-M, & Gross J (2009). Source connectivity analysis with MEG and EEG. Human Brain Mapping, 30(6), 1857–1865. 10.1002/hbm.20745
105. Shapiro KL, Hanslmayr S, Enns JT, & Lleras A (2017). Alpha, beta: The rhythm of the attentional blink. Psychonomic Bulletin & Review, 24(6), 1862–1869. 10.3758/s13423-017-1257-0
106. Singer W (1999). Neuronal synchrony: A versatile code for the definition of relations? Neuron, 24(1), 49–65. 10.1016/S0896-6273(00)80821-1
107. Stam CJ, Nolte G, & Daffertshofer A (2007). Phase lag index: Assessment of functional connectivity from multi channel EEG and MEG with diminished bias from common sources. Human Brain Mapping, 28(11), 1178–1193. 10.1002/hbm.20346
108. Stegmann Y, Ahrens L, Pauli P, Keil A, & Wieser MJ (2020). Social aversive generalization learning sharpens the tuning of visuocortical neurons to facial identity cues. eLife, 9, e55204. 10.7554/eLife.55204
109. Sun L, Ahlfors SP, & Hinrichs H (2016). Removing cardiac artefacts in magnetoencephalography with resampled moving average subtraction. Brain Topography, 29(6), 783–790. 10.1007/s10548-016-0513-3
110. Szendro P, Vincze G, & Szasz A (2001). Pink-noise behaviour of biosystems. European Biophysics Journal, 30(3), 227–231. 10.1007/s002490100143
111. Tallon C, Bertrand O, Bouchet P, & Pernier J (1995). Gamma-range activity evoked by coherent visual stimuli in humans. European Journal of Neuroscience, 7, 1285–1291.
112. Tallon-Baudry C, & Bertrand O (1999). Oscillatory gamma activity in humans and its role in object representation. Trends in Cognitive Sciences, 3(4), 151–162.
113. Tamburro G, Stone DB, & Comani S (2019). Automatic removal of cardiac interference (ARCI): A new approach for EEG data. Frontiers in Neuroscience, 13, 1–17. 10.3389/fnins.2019.00441
114. Thigpen N, Petro NM, Oschwald J, Oberauer K, & Keil A (2019). Selection of visual objects in perception and working memory one at a time. Psychological Science, 30(9), 1259–1272. 10.1177/0956797619854067
115. Tort ABL, Komorowski R, Eichenbaum H, & Kopell N (2010). Measuring phase-amplitude coupling between neuronal oscillations of different frequencies. Journal of Neurophysiology, 104(2), 1195–1210. 10.1152/jn.00106.2010
116. Truccolo WA, Ding M, Knuth KH, Nakamura R, & Bressler SL (2002). Trial-to-trial variability of cortical evoked responses: Implications for the analysis of functional connectivity. Clinical Neurophysiology, 113(2), 206–226.
117. van de Schoot R, Depaoli S, King R, Kramer B, Märtens K, Tadesse MG, Vannucci M, Gelman A, Veen D, Willemsen J, & Yau C (2021). Bayesian statistics and modelling. Nature Reviews Methods Primers, 1(1), 1–26. 10.1038/s43586-020-00001-2
118. van der Meij R, Jacobs J, & Maris E (2015). Uncovering phase-coupled oscillatory networks in electrophysiological data. Human Brain Mapping, 36(7), 2655–2680. 10.1002/hbm.22798
119. van der Meij R, van Ede F, & Maris E (2016). Rhythmic components in extracranial brain signals reveal multifaceted task modulation of overlapping neuronal activity. PLoS One, 11(6), e0154881. 10.1371/journal.pone.0154881
120. VanRullen R, & Dubois J (2011). The psychophysics of brain rhythms. Frontiers in Psychology, 2, 203. 10.3389/fpsyg.2011.00203
121. Voytek B, D'Esposito M, Crone N, & Knight RT (2013). A method for event-related phase/amplitude coupling. NeuroImage, 64, 416–424. 10.1016/j.neuroimage.2012.09.023
122. Wolpert N, & Tallon-Baudry C (2021). Coupling between the phase of a neural oscillation or bodily rhythm with behavior: Evaluation of different statistical procedures. NeuroImage, 236, 118050. 10.1016/j.neuroimage.2021.118050
123. Xu L, Stoica P, Li J, Bressler SL, Shao X, & Ding M (2009). ASEO: A method for the simultaneous estimation of single-trial event-related potentials and ongoing brain activities. IEEE Transactions on Biomedical Engineering, 56(1), 111–121. 10.1109/TBME.2008.2008166
124. Yuval-Greenberg S, Tomer O, Keren AS, Nelken I, & Deouell LY (2008). Transient induced gamma-band response in EEG as a manifestation of miniature saccades. Neuron, 58(3), 429–441.
125. Zhang L, Guindani M, Versace F, Engelmann JM, & Vannucci M (2016). A spatiotemporal nonparametric Bayesian model of multi-subject fMRI data. The Annals of Applied Statistics, 10(2), 638–666. 10.1214/16-AOAS926
