Skip to main content
Frontiers in Physiology logoLink to Frontiers in Physiology
. 2012 Nov 15;3:417. doi: 10.3389/fphys.2012.00417

Pitfalls in Fractal Time Series Analysis: fMRI BOLD as an Exemplary Case

Andras Eke 1,2,*, Peter Herman 2, Basavaraju G Sanganahalli 2, Fahmeed Hyder 2,3, Peter Mukli 1, Zoltan Nagy 1
PMCID: PMC3513686  PMID: 23227008

Abstract

This article will be positioned on our previous work demonstrating the importance of adhering to a carefully selected set of criteria when choosing the suitable method from those available ensuring its adequate performance when applied to real temporal signals, such as fMRI BOLD, to evaluate one important facet of their behavior, fractality. Earlier, we have reviewed on a range of monofractal tools and evaluated their performance. Given the advance in the fractal field, in this article we will discuss the most widely used implementations of multifractal analyses, too. Our recommended flowchart for the fractal characterization of spontaneous, low frequency fluctuations in fMRI BOLD will be used as the framework for this article to make certain that it will provide a hands-on experience for the reader in handling the perplexed issues of fractal analysis. The reason why this particular signal modality and its fractal analysis has been chosen was due to its high impact on today’s neuroscience given it had powerfully emerged as a new way of interpreting the complex functioning of the brain (see “intrinsic activity”). The reader will first be presented with the basic concepts of mono and multifractal time series analyses, followed by some of the most relevant implementations, characterization by numerical approaches. The notion of the dichotomy of fractional Gaussian noise and fractional Brownian motion signal classes and their impact on fractal time series analyses will be thoroughly discussed as the central theme of our application strategy. Sources of pitfalls and way how to avoid them will be identified followed by a demonstration on fractal studies of fMRI BOLD taken from the literature and that of our own in an attempt to consolidate the best practice in fractal analysis of empirical fMRI BOLD signals mapped throughout the brain as an exemplary case of potentially wide interest.

Keywords: fractals, monofractals, multifractals, time series analysis, numerical testing, fMRI BOLD, brain

Introduction

Fractality (Mandelbrot, 1967, 1980, 1985; Bassingthwaighte et al., 1994; Gouyet, 1996; Eke et al., 2002), – in addition to deterministic chaos, modularity, self-organized criticality, “small word” network-connectivity – by now has established itself as one of the fundaments of complexity science (Phelan, 2001) impacting many areas including the analysis of brain imaging data such as fMRI BOLD (Zarahn et al., 1997; Thurner et al., 2003; Maxim et al., 2005; Raichle and Mintun, 2006; Fox et al., 2007; Razavi et al., 2008; Wink et al., 2008; Bullmore et al., 2009; Herman et al., 2009, 2011; Ciuciu et al., 2012).

The interest in fractal analysis accelerated the development of the new paradigm beyond a rate when the new – essentially mathematical or physical (i.e., statistical mechanics) – knowledge could be consolidated, their tools thoroughly evaluated and tested before being put to wide-spread use in various fields of science; typically beyond the frontiers of mathematics. The lack of an in-depth understanding of the implications of the methods when applied to empirical data, often generated conflicting results, but also prompted efforts at making up for this deficiency. Early, with the migration of the fractal concept from mathematics to various fields of science like physiology, the groups of Bassingthwaighte (Bassingthwaighte, 1988; Bassingthwaighte et al., 1994) and Eke et al. (1997) realized the need to adopt a systematic approach in developing needed analytical and testing frameworks to characterize and evaluate various monofractal time series methods (Bassingthwaighte and Raymond, 1994, 1995; Caccia et al., 1997; Eke et al., 2000, 2002). Eke and coworkers demonstrated that conscious and precise monofractal time series analysis could only be done when one has an a priori concept of the nature of the observed signals. They introduced the dichotomous fractional Gaussian noise (fGn)/fractional Brownian motion (fBm) model of Mandelbrot and Ness (1968) as the basis of monofractal time series analysis (Eke et al., 2000, 2002) and offered a strategy for choosing tools according to a proven selection criteria (Eke et al., 2000). Given the continuing advance in the fractal field and in sync with the increasing awareness to avoid potential pitfalls and misinterpretation of results in various forms of fractal analyses (Delignieres et al., 2005; Gao et al., 2007; Delignieres and Torre, 2009; Marmelat and Delignieres, 2011; Ciuciu et al., 2012), in this article we apply our evaluation strategy to multifractal tools, and characterize their most widely used implementations. Our motivation in doing so stems from the potentials of fMRI BOLD multifractal analysis in revealing the physiological underpinnings of activation-related change in scaling properties in the brain (Shimizu et al., 2004).

fMRI BOLD (Ogawa et al., 1990, 1993b; Kwong et al., 1992; Bandettini, 1993) has been selected as an exemplary empirical signal in our demonstrations, because its impact on contemporary neuroscience (Fox and Raichle, 2007). The human brain represents the most complex form of the matter (Cramer, 1993) whose inner workings can only be revealed if signals reflecting on neuronal activities are recorded at high spatio-temporal resolution. One of the most powerful methods, which can record spatially registered temporal signals from the brain, is magnetic resonance imaging (MRI; Lauterbur, 1973). The MRI scanner can non-invasively record a paramagnetic signal (referred to as blood oxygen level dependent, BOLD; Ogawa et al., 1990, 1993a) that can be interpreted as the signature of the functioning brain via its metabolic activity continuously modulating the blood content, blood flow, and oxygen level of the blood within the scanned tissue elements (voxels). Recently, a rapidly increasing volume of experimental data has demonstrated that BOLD is a complex signal, whose fractality – if properly evaluated – can reveal fundamental properties of the brain among them the so called “intrinsic or default mode” of operation that appears complementing the stimulus-response paradigm in the understanding the brain in a powerful way (Raichle et al., 2001). We hope, our paper could contribute to this major effort from the angle of consolidating some relevant issues concerning fractal analysis of fMRI BOLD.

Concept of Fractal Time Series Analyses

Monofractals

All fractals are self-similar structures (mathematical fractals in an exact, natural fractals in a statistical sense), with their fractal dimension falling between the Euclidian and topologic dimensions (Mandelbrot, 1983; Eke et al., 2002). When self-similarity is anisotropic, the structure is referred to as self-affine; a feature, which applies to fractal time series (Mandelbrot, 1985; Barabási and Vicsek, 1991; Eke et al., 2002), too. Statistical fractals cannot be described comprehensively by descriptive statistical measures, as mean and variance, because these do depend on the scale of observation in a power law fashion:

μ2μ1=s2s1ε, (1)

where μ1, μ2 are descriptive statistical measures, and s1, s2 are scales within the scaling range where self-affinity is present, and ε is the power law scaling exponent. From this definition a universal scale-free measure of fractals can be derived:

D=-lims0inflogNslogs. (2)

D is called capacity dimension (Barnsley, 1988; Liebovitch and Tóth, 1989; Bassingthwaighte et al., 1994), which is related but not identical to the Hausdorff dimension (Hausdorff, 1918; Mandelbrot, 1967), s is scale and N(s) is the minimum number of circles with size s needed to cover the fractal object to quantify its capacity on the embedding dimensional space (it corresponds to μ in Eq. 1). For fractal time series, the power law scaling exponent ε is typically calculated in the time domain as the Hurst exponent (H), or in the frequency domain as the spectral index (β). H and D relate (Bassingthwaighte et al., 1994) as:

H=2-D. (3)

Further, β can also be obtained from H as (H − 1)/2 for fGn and (H + 1)/2 for fBm processes (Eke et al., 2000).

Multifractals

While D does not vary along a monofractal time series, it is heterogeneously distributed along the length of a multifractal signal.

This phenomenon gave rise to the term “singular behavior,” as self-affinity can be expressed by differing power law scaling along a multifractal time series, Xi as:

Xi+Δi-XiΔih(i), (4)

where h is the Hölder exponent defining the degree of singularity at time point, i. Calculating the fractal dimension for each subsets of Xi of the same h, one obtains the singularity spectrum, D(h) (Mandelbrot spectrum), which describes the distribution of singularities (Frisch and Parisi, 1985; Falconer, 1990; Turiel et al., 2006).

D(h)=log(ρ(h)ρ(hmax))logsmin, (5)

where hmax is the Hölder exponent corresponding to maximal fractal dimension, smin is the finest scale corresponding to Hölder trajectory, and ρ(h) is the distribution of singularities.

The singular behavior of a multifractal is a local property. Separation of the singularities can be difficult, given the finite sampling frequency of the signal of interest (Mallat, 1999). Thus, in contrast with monofractality, a direct evaluation of multifractality is a demanding task in terms of the amount of data and the computational efforts needed, which can still not guarantee precise results under all circumstance.

With the aid of different moments of appropriate measure, μ, a set of equations can be established to obtain the singularity spectrum, which is a common framework exploited by multifractal analysis methods referred to as multifractal formalism (Frisch and Parisi, 1985; Mandelbrot, 1986; Barabási and Vicsek, 1991; Muzy et al., 1993). Using a set of different moment orders, one can determine the scaling behavior of μq, yielding the generalized Hurst exponent, H(q) (Barunik and Kristoufek, 2010; See Figure 1):

μq(s)sqH(q). (6)

Figure 1.

Figure 1

Monofractal and multifractal temporal scaling. Three kinds of fractals are shown to demonstrate scale-free property of these structures: a stationary monofractal (fractional Gaussian noise), a non-stationary monofractal (fractional Brownian motion), and a multifractal (Devil’s staircase with weight factors p1 = p3 = 0.2, p2 = 0.6). Every fractal is self-similar: fGn and fBm in a statistical sense (as in empirical structures and processes where fractality is manifested in equal distributions, only) and Devil’s staircase in an exact manner (as self-similar structuring in mathematical, i.e., ideal fractals is exact). For fractals, descriptive statistical measures [for example mean, variance, fluctuation (Fq) etc.] depend on the corresponding scale in a power law fashion. Thus as a scale-free descriptor, the extended Hurst exponent (H′) is calculated as a slope of regression line between the logarithms of the scale (s) and Fq (For an explanation of H′, see main text). The obtained slopes for different magnifications of the time series [here with the order of q = (1, 2, 3), which is the order of moment of the used measure] are the same for monofractals and different for multifractals, demonstrating that power law scaling behavior is a global property of monofractals, while it is a local property of multifractals. Accordingly, note that slopes in the bottom left and middle panel are the same, while in the right panel they indeed differ. For further details, see main text.

On the right side of Eq. 4 Δi corresponds to scale, s, on the right side of Eq. 6. Using the partition function – introduced in context of Wavelet Transform Modulus Maxima (WTMM) method – singularities are analyzed globally for estimating the (multi)scaling exponent (Mallat, 1999):

Z(s,q)=k=1N(s)μiq(s)(7)τ(q)=lims0inflogZ(s,q)logs,(8)

where τ(q) can be also expressed from H(q) (Kantelhardt et al., 2002) as:

τ(q)=qH(q)-DT, (9)

where DT is the topological dimension, which equals 1 for time series.

The generalized fractal dimension can also describe the scale-free features of a multifractal time series:

D(q)=τ(q)q-1=qh(q)-1q-1. (10)

The singularity spectrum, D(h), can be derived from τ(q) with Legendre transform (Figure 2), via taking

h=τ(q), (11)

the slope of the tangent line taken at q for τ(q), and yielding

D(h)=infq(qh-τ(q)), (12)

that when evaluated gives the negative of the intercept at q = 0 for the tangent line (See Figure 2).

Figure 2.

Figure 2

Legendre transform. It is known that singularity spectrum, D(h), has a concave shape, and provided that τ(q) is also a concave function, they can be explicitly transformed into each other via the Legendre transform (Bacry et al., 1993). Legendre transform takes a function, in our case τ(q) and produces a function of a different variable, D(h). The Legendre transform is its own inverse and uses minimization as the basis of the transformation process according to Eq. 12. If minimization cannot be achieved, the transformation would fail. On the left a real (concave), on the right a non-concave case for τ(q) is shown. A simple concave function, f(x) = −x2 + 5x + 4 (shown in blue) is used for modeling τ(q). If f(x) is differentiable, hence a tangent line (shown in red) can be taken at point of P0 (q0, τ0) with a slope τ′(q), then g*(q0) is the y-intercept, (0, g*), and −g* is the value of the Legendre transform (See Eq. 11). Maximization at (q0, τ0) is valid since for any other point on the blue curve, a line drawn through that point with the same slope as the red line will yield a τ0-intercept below the point (0, g*), showing that g* is indeed obtained as a boundary value (maximum), thus the transformation for D(h) would also yield a single boundary value (minimum) on the green curve as D(h) = −g* = τ′(q)q−τ(q). Steps of the transformation process are shown (1) select q, (2) read τ(q), (3) take a tangent line at (q, τ) and determine its slope, h = τ′(q), (4) select h, (5) determine D(h) using the above equation; repeat for the set. On the right side, a non-concave function is shown (blue) for demonstrating a case, when due to the non-concave shape of τ(q) the shape of the transformed function, D(h), does not yield a realistic singularity spectrum given that in this case the transform by failing on minimization is poorly behaved yielding ambiguous values.

Natural signals have a singularity spectrum over a bounded set of Hölder exponents, whose width is defined by [h−∞, h+∞] (Figure 3).

Figure 3.

Figure 3

Approaches to multifractal analyses. Direct approach of multifractal analysis means exploiting the local power law scaling behavior to obtain local Hölder exponents (Eq. 4), from which the Mandelbrot spectrum is calculated with histogram method (Falconer, 1990; Eq. 5). Indirect approaches shown here (MF-DFA, multifractal detrended fluctuation analysis; MF-DMA, multifractal detrended moving average; WTMM, Wavelet Transform Modulus Maxima) estimates the scaling exponent, τ as a function of q. It is worth to note, that this is carried out differently for MF-DFA, MF-DMA (Eq. 9), and for WTMM (Eq. 8). From τ(q), the Mandelbrot spectrum can be obtained with the application of the Legendre transform, while its relation to generalized fractal dimension D(q) is given by Eq. 10. Singularity spectrum, D(h), is an important endpoint of the analysis. The spectrum is concave and has a nearly parabolic shape with a maximum identified by the capacity dimension at q = 0 (Mallat, 1999; Shimizu et al., 2004; Ihlen, 2012). Please note that some of its measures (FWHM, Dmax, W + , W−) can be used to calculate meaningful combined parameters (such as Pc, and W in Eqs 13 and 14, respectively) with potential in correlating with key features of fMRI BOLD time series.

A combination parameter, Pc, can be calculated (definitions on Figure 3) to facilitate the separation of time series characteristics (Shimizu et al., 2004), which can aid the exploration of the physiological underpinnings, too.

Pc=hmaxDmaxFWHM. (13)

A similar parameter is W (Wink et al., 2008) calculated as

W=W+W-. (14)

Implementation of Fractal Time Series Analyses

Implementation of concepts in reliable algorithms is a critical task, as stationary and non-stationary signals require different methods when analyzed for their fractality. For a stationary signal the probability distribution of signal segments is independent of the (temporal) position of the segment and segment length, which translates into constant descriptive statistical measures such as mean, variance, correlation structure etc. over time (Eke et al., 2000, 2002).

Accordingly, signals can be seen as realizations of one of two temporal processes: fBm, and fGn (Eke et al., 2000). The fBm signal is non-stationary with stationary increments. An fBm signal, Xi, is self-similar in that its sampled segment Xi,n of length n is equal in distribution with a longer segment Xi,sn of length sn when the latter is rescaled (multiplied) by s-H. This means that every statistical measure, mn, of an fBm time series of length n is proportional to nH

Xi,ns-HXi,sn,(15)mnpnH,whichyieldslogmnlogp+Hlogn,(16)

where H is the Hurst exponent. H ranges between 0 and 1. Increments Yi = Xi − Xi−1 of a non-stationary fBm signal yield a stationary fGn signal and vice versa, cumulative summation of an fGn signal results in an fBm signal. Note that most methods listed below that have been developed to analyze statistical fractal processes share the philosophy of Eq. 15 in that in their own ways all attempt to capture the power law scaling in the various statistical measures of the evaluated time series (Eke et al., 2002).

Monofractal methods

Here we focus on widely used monofractal methods selected from those in the literature.

Time domain methods

Detrended fluctuation analysis

The method of Peng et al. (1994) begins with the signal summed and the mean subtracted

Yj=i=1jXi-X. (17)

Then the local trend Yj,n is estimated in non-overlapping windows of equal length n, using least-square fit on the data. For a given window size n the fluctuation is determined as the variance upon the local trend:

Fn=1Nj=1N(Yj-Yj,n)2, (18)

For fBm processes of length N with non-overlapping windows of size n the fluctuation depends on the window size n in a power law fashion:

Fnpnα,and (19)
α=limn0logFnlogn.(20)

If Xi is an fGn signal then Yj will be an fBm signal. Fn then is equivalent to mn of Eq. 16 yielding FnpnH therefore in this case α = H. If Xi is an fBm signal then Yj will be a summed fBm signal. Then FnpnH + 1, where α = H + 1 (Peng et al., 1994).

Signal summation conversion method

This method was first introduced by Eke et al. (2000) for enhancing signal classification as a variant of the scaled windowed variance (SWV) analysis of Mandelbrot (1985) as further developed by Peng et al. (1994).

Fluctuations of a parameter over time can be characterized by calculating the standard deviation

SDn=1N-1i=1NXi-X2. (21)

For fBm processes of length N when divided into non-overlapping windows of size n as Eq. 21 predicts the standard deviation within the window, sn, depends on the window size n in a power law fashion:

SDnpnH, (22)

and

H=limn0logSDnlogn. (23)

In practice SDn’s calculated for each segment of length n of the time series are averaged for the signal at each window size. The standard method applies no trend correction. Trend in the signal seen within a given window can be corrected either by subtracting a linearly estimated trend (line detrended version) or the values of a line bridging the first and last values of the signal (bridge detrended version; Cannon et al., 1997). This method can only be applied to fBm signals or cumulatively summed fGn signals.

The signal summation conversion (SSC) method was first used for enhanced signal classification according to the dichotomous fGn/fBm model (Eke et al., 2000). There are two steps: (1) calculate from Xi its cumulative sum (this converts an fGn to an fBm or converts an fBm to its cumulant), and (2) use the bdSWV method to calculate from the cumulant series Ĥ. The interpretation of Ĥ is that when 0<Ĥ1, then Xi is an fGn with Ĥ. Alternatively, when Ĥ>1, then the cumulant series is identified as an fBm signal of Ĥ=Ĥ-1. As seen, in order to keep Ĥ scaled within the [0,1] range, in the original version of the method in the fBm case 1 was subtracted from the estimate of H. Given that the SSC method handles fGn and fBm signals alike, we eliminate this step and report values as 0<Ĥ<1 for fGn and 1<Ĥ<2 for fBm signals referring Ĥ as the “extended” Hurst exponent. This way, the mere value of the Hurst exponent would reflect on signal class, the focus of fractal time series analysis strategy. Also the use of Ĥ would greatly facilitate reviewing the results of numerical performance analyses.

Real-time implementations of SSC and Detrended Fluctuation Analysis (DFA) methods have been recently reported (Hartmann et al., 2012).

Frequency domain method

Fractal analysis can also be done in the frequency domain using methods such as the power spectral density (PSD) analysis (Fougere, 1985; Weitkunat, 1991; Eke et al., 2000).

Power spectral density analysis (lowPSDw,e)

A time series can be represented as a sum of cosine wave components of different frequencies:

Xi=n=0N2Ancosωnti+φn=n=0N2Ancos2πnNi+φn, (24)

where An is the amplitude and Φn is the phase of the cosine-component with ωn angular frequency. The commonly used sample frequency is fn = ωn/2π. The An(fn), Φn(fn), and An2(fn) functions are termed amplitude, phase, and power spectrum of the signal, respectively. These spectra can be determined by an effective computational technique, the fast Fourier transform (FFT). The power spectrum (periodogram, PSD) of a fractal process is a power law relationship

An2pωn-β,orA(f)21fβwhich yieldsβ=limn0logAn2logfn,(25)

where β is termed spectral index. The power law relationship expresses the idea that as one doubles the frequency the power changes by the same fraction (2−β) regardless of the chosen frequency, i.e., the ratio is independent of where one is on the frequency scale.

The signal has to be preprocessed before applying the FFT (subtraction of mean, windowing, and endmatching, i.e., bridge detrending). Discarding the high power frequency estimates improves the precision of the estimates of β (Fougere, 1985; Eke et al., 2000). Eke et al. (2000) introduced this version denoted as lowPSD w,e as a fractal analytical tool.

Time-frequency domain method

Fractal wavelet analysis uses a waveform of limited duration with an average value of zero for variable-sized windowing allowing an equally precise characterization of low and high frequency dynamics in the signal. The wavelet analysis breaks up a signal into shifted and stretched versions of the original wavelet. In other words, instead of a time-frequency domain it rather uses a time-scale domain, which is extremely useful not only in monofractal but multifractal analysis, too. One such way to estimate H is by the averaged wavelet coefficient (AWC) method (Simonsen and Hansen, 1998). The most commonly used analyzing wavelet is the second derivative of a standard normalized Gaussian function, which is:

ψ(t)=d2dt2e-t22. (26)

The scaled and translated version of the analyzing wavelet is given by

ψa;b(t)=ψt-ba, (27)

where the scale parameter is a, and the translation parameter b.

The wavelet transformation is essentially a convolution operation in the time domain:

Wψ[X](a,b)=1a-+X(t)ψa;bdt. (28)

From Eq. 16, one can easily derive how the self-affinity of an fBm signal X(t) determines its continuous wavelet transform (CWT) coefficients:

W[X](sa,sb)=ds12+HW[X](a,b). (29)

The AWC method is based on Eq. 29 (Simonsen and Hansen, 1998) and can be applied to fBm signals or to cumulatively summed fGn signals.

Multifractal methods

Three analysis methods are described here; all use different statistical moments (termed q-th order) of the selected measure to evaluate the signal’s multifractality. Despite of certain inherent drawbacks, these methods are widely used in the literature, and can obtain reliable results if their use is proper with limitations considered.

Time domain methods

Below, the Multifractal DFA (MF-DFA; Kantelhardt et al., 2002) and the recently published Multifractal Detrended Moving Average (MF-DMA; Gu and Zhou, 2010) will be reviewed. We will focus on MF-DMA, but since it is similar to MF-DFA, their differences will be pointed out, too. They rely on a measure of fluctuation, F, as in their monofractal variant (Peng et al., 1994), and differ in calculating the q-th order moments of the fluctuation function.

  • Step 1 – calculating signal profile, Yj, by cumulative summation. It is essentially the same as in Eq. 17, however note that in DFA methods, the mean of the whole signal is subtracted before summation, while in DMA methods this is carried out locally in step 3.

  • Step 2 – calculating the moving average function,j.
    j=1nk=-[(n-1)θ][(n-1)(1-θ)]yt-k (30)
    For further details, see Figure 4.
  • Step 3 – detrending by moving average: By subtracting t a residual signal, εt, is obtained:
    εt=Yt-t, (31)
    where n−[(n−1) · θ] ≤ t ≤ N−[(n−1)· θ].

    This fundamental step of the DMA methods is essentially different from the detrending step of DFA methods (See Figure 4).

  • Step 4 – calculation of fluctuation measure. The signal is split into Nn = [N/n − 1] number of windows (See Figure 4), ε(v), where v refers to the index of a given window. The fluctuating process is characterized by Fv(n), which is given as a function of window size, n:
    Fv2(n)=1nt=1nεt2(v). (32)
  • Step 5 – calculation of q-th order moments of the fluctuation function.

Figure 4.

Figure 4

Detrending scheme and fluctuation analysis for MF-DFA and MF-DMA methods. The detrending strategy for MF-DFA (A) is that the signal is divided into a set of non-overlapping windows of different sizes, and a local low-order polynomial (typically linear) fit (shown in green) is removed from each window’s data. In contrast, MF-DMA (B) removes the moving average point-by-point calculated in different window sizes around the processed point with a position given by θ. This parameter describes the delay between the moving average function and the original signal. Its value is taken from [0, 1] interval, 0 meaning only from signal values on the left (“backward,” past), in contrast with 1 meaning that only signal values to the right (“forward,” future) are used for calculating j. The centrally positioned sliding window corresponds to the case of θ = 0.5 balancing contributions from the past and the future to the reference point. The approaches of MF-DFA and MF-DMA thus ought to yield different detrended signals, whose calculated moments (C,D) and Eqs 33 and 34 obtained by the analysis should also be somewhat different.

Fq(n)=1Nnv=1NnFvq(n)1q. (33)

For q = 2, the algorithm reduces to the monofractal DMA method. For the special case q = 0, Fq(n) can be obtained as a limit value that can be expressed in a closed form:

log[F0(n)]=1Nnv=1Nnlog[Fv(n)]. (34)

Relation of the q-th order moment of the fluctuation measure and H(q) follows a power law:

Fq(n)nH(q). (35)

Thus H(q) can be estimated as the slope of the least-square fitted regression line between log n and log [Fq(n)]. Finally, Mandelbrot spectrum is obtained with subsequent application of multifractal formalism equations (Eqs 912) yielding multifractal features τ(q), D(h).

Time-frequency domain methods

Wavelet analysis methods can be used to estimate the singularity spectrum of a multifractal signal by exploiting the multifractal formalism (Muzy et al., 1991, 1993, 1994; Mallat and Hwang, 1992; Bacry et al., 1993; Arneodo et al., 1995, 1998; Mallat, 1999; Figure 5). Wavelet transform modulus maxima (WTMM) has strong theoretical basis and has been widely used in natural sciences to assess multifractality.

Figure 5.

Figure 5

Relations of Continuous Wavelet Transform operation, Wavelet Transform Modulus Maxima method, and multifractal formalism to obtain singularity spectrum of an ideal multifractal. Devil’s staircase with weight factors p1 = p3 = 0.2, p2 = 0.6 was used to model an ideal multifractal time series (A). The wavelet coefficient matrix (B) is obtained by continuous wavelet transform in the time-scale space. Modulus maxima map (C) containing the maxima lines across the scales defined by CWT. We call modulus maximum of the wavelet transform |Wψ[X](t, s0)|; any point (t0, s0), which corresponds to a local maximum of the modulus of |Wψ[X](t, s0)| is considered as a function of t. For a given scale, it means that |Wψ[X](t0, s0)| > |Wψ[X](t, s0)| for all t in the neighborhood right of t0, and |Wψ[X](t0, s0)| ≥ |Wψ[X](t, s0)| for all t in the neighborhood left of t0. Local maxima are chained, and in the subsequent calculations only maxima chains propagating to the finest scales are used (Mallat, 1999). Chaining local maxima is important, because it is proven that their distribution along multiple scales identifies and measures local singularities, which is tightly linked to the singularity spectrum. The moment-based partition function (D) separates singularities of various strength as coded in (B,C) as follows. Z is obtained for the range [smin, s] as the sum of moments of the wavelet coefficients belonging to those along a set of maxima lines at s [shown as circles in (C)]. This definition corresponds to a “scale-adapted” partition with wavelets at different sizes. A moment-based set of Z are plotted in a log-log representation as shown in (D). Notice that these log Z(log s) functions are lines representing the power law behavior of the multifractal signal within the scaling range shown. Therefore when the slope of each and every log Z(log s) lines are plotted as a function of moment order, q, it yields τ(q) (E). From τ(q) via Legendre transform the singularity spectrum, D(h) (F), is obtained (See Chapter 2, Figure 3).

  • Step 1 – continuous wavelet transformation: This step is essentially the same as described previously in Eqs 2628 yielding a matrix of wavelet coefficients (Figure 5B):
    W[w(it,is)], (36)
    where w(it, is) = |Wψ[X](t, s)|, is is the scaling index, where s = smin, …, smax and it = 1, 2, …, N, where t is the sampling time of each successive data point.
  • Step 2 – chaining local maxima: The term modulus maxima describes any point (t0, s0) where |Wψ=[X](t, s)| is a local maximum at t = t0:
    WψXt0,s0t=0. (37)

    This local maximum is strict in terms of its relation to t0 in its immediate vicinity. These local maxima are to be chained by interconnection to form a local maxima line in the space-scale plane (t, s) (See Figure 5C).

  • Step 3 – calculating partition function. With the aid of partition function (Eq. 7, Figure 5D), singular behavior of the multifractal time series can be isolated. Wavelet coefficients along maxima chains are considered as μ measures.

    Z(s,q)=L(s)w(is,it)q. (38)

    Summation is executed along maxima chains (ℓ), the set of all maxima lines is marked by L(s).

  • Step 4 – calculating singularity spectra and parameters of multifractality. The following step is to determine the multiscaling exponent, τ(q) by H(q), and then using Eqs 1012 to give full quantification of the multifractal nature.

Characterization of Methods

Before the application of fractal analysis methods, their behavior should be thoroughly evaluated on a large set of signals with known scale-free structure and broad representation (Bassingthwaighte and Raymond, 1994, 1995; Caccia et al., 1997; Cannon et al., 1997; Eke et al., 2000, 2002; Turiel et al., 2006). Signal classification, estimating performance in terms of precision and limitations of the methods should be clarified during characterization. The capability of multifractal analysis to distinguish between mono- and multifractal processes should also be evaluated.

Stationarity of a signal is an important property for pairing with a compatible fractal analysis tool (see Table 2 in Eke et al., 2002). In addition, all methods have some degree of inherent bias and variance in their estimates of the scaling exponent bearing great importance due to their influence on the results, which can be misinterpreted as a consequence of this effect. The goal of performance analysis is therefore to characterize the reliability of selected fractal tools in estimating fractal parameters on synthesized time series. This should be carried out at least for a range of signal sizes and structures similar to the empirical dataset, so that the reliability of fractal estimates could be accurately determined.

Extensive results obtained with our monofractal framework have been reported elsewhere (Eke et al., 2000, 2002), but for the sake of comparison it will be briefly described. Our multifractal testing framework is aimed to demonstrate relevant features of MF-DFA and MF-DMA method, utilizing the equations described in Section “Implementation of Fractal Time Series Analyses.”

Testing framework for multifractal tools on monofractals

Monofractal signals of known autocorrelation (AC) structure can be synthesized based on their power law scaling. The method of Davies and Harte (1987) (DHM for short) produces an exact fGn signal using its special correlation structure, which is a consequence of the power law scaling of the related fBm signal in the time domain (Eq. 19). It is important, that different realizations can be generated with DHM at a given signal length and Hurst exponent, which consists of a statistical distribution of similarly structured and sized monofractals.

The next question is how to define meaningful end-points for the tests? For ideal monofractals with a given length and true H, Mean Square Error (MSE) is a good descriptor: it can be calculated for each set of series of known H and particular signal length, N (Eke et al., 2002). It carries a combined information about bias and variance, as MSE = bias2 + variance.

Interpreting the multifaceted results of numerical experiments is a complex task. It can be facilitated if they are plotted in a properly selected set of independent variable with impact shown in intensity-coded representations (Figure 6; Eke et al., 2002). Precision index is determined as the ratio of results falling in the interval of [Htrue – Hdev, Htrue + Hdev], where Hdev is an arbitrarily chosen value referring to the tolerable degree of deviation.

Figure 6.

Figure 6

Precision as a function of moment order, signal length, and Hurst exponent. Precision of MF-DFA [left side of (A–C)] and MF-DMA [right side of (A–C)] as a function of q, Htrue, N. fGn and fBm signals were generated by DHM with length of 28, 210, 212, 214, and Htrue increased from 0.1 to 1.9 in steps of 0.1, skipping Htrue = 1 (corresponding to 1/f boundary seen as the black horizontal line in the middle). Estimation of the generalized Hurst exponent should not depend on q, as monofractal’s H(q) is a theoretically constant function scattering around Htrue across different order of moments. The intensity-coded precision index is proportional to the number of estimates of H falling into the range of Htrue ± 0.1, with lighter areas indicating more precise estimation. Calculation of this measure is based on 20 realizations for each q, Htrue, N. (A) Performance of methods for q = ± 5. (B) Performance of methods for q = ± 2. (C) Performance of methods for q = ± 0.5. Besides the clear dependence of precision on Htrue and N, influence of moment order is also evident, given that the lightest areas corresponding to the most reliable estimates tend to increase in parallel with moment order approaching 0 [Note the trend from (A–C)]. The lower half of the plots indicates that MF-DFA is applicable for signals of both types, while MF-DMA is reliable only on fGn signals. This result is further supported by the paper of Gao et al. (2006), who demonstrated a saturation of DMA at 1 for H when the true extended Hurst exponent exceeds 1 (thus it is non-stationary)

In the monofractal testing framework, we used DHM-signals to evaluate the performance of MF-DMA (Gu and Zhou, 2010) and MF-DFA (Gu and Zhou, 2006), by the code obtained from http://rce.ecust.edu.cn/index.php/en/research/129-multifractalanalysis. It was implemented in Matlab, in accordance with Eqs 17 and 3035. As seen in Figure 6, precision of MF-DFA and MF-DMA depends on N, H, and the order of moment.

In order to compare the methods in distinguishing multifractality, end-points should be defined reflecting the narrow or wide distribution of Hölder exponents. We select a valid endpoint Δh proposed by Grech and Pamula (2012), which is the difference of Hölder exponents corresponding to q = −15 and q = + 15 (Figure 7).

Figure 7.

Figure 7

Separating monofractals from multifractals. Δh values obtained by MF-DFA (as difference of Hölder exponents at q = + 15 and q = −15) are shown for monofractals with length of 210 (blue), 212 (green), 214 (red). It is clearly shown that longer signals are characterized by lower Δh, and its value below 0.2 means that true multifractality is unlikely present (Grech and Pamula, 2012). Signals were created by DHM at extended Hurst exponents of 0–1.9 with a step of 0.1.

Testing approaches for multifractal tools on multifractals

Extending the dichotomous model of fGn/fBm signals (introduced in context of monofractals; Mandelbrot and Ness, 1968; Eke et al., 2000) toward multifractal time series is reasonable as it can account for essential features of natural processes exhibiting local power law scaling. Description of an algorithm creating multifractional Brownian motion (mBm) and multifractional Gaussian noise (mGn) can be found here (Hosking, 1984), while implementation of such code can be found on the net (URL1: http://fraclab.saclay.inria.fr/, URL2: www.ntnu.edu/inm/geri/software). Given that these algorithms require Hölder trajectories as inputs, multifractality cannot be defined exactly on a finite set, which is a common problem of such synthesis methods. Selecting a set of meaningful trajectories is a challenging task: it should resemble those of empirical processes and meet the analytical criteria of the selected algorithms (such criteria are mentioned in Concept of Fractal Time Series Analyses).

On the contrary, iterative cascades defined with analytic functions are not influenced by the perplexity of definitions associated with multifractality outlined in the previous paragraph, given that their value at every real point of the theoretical singularity spectrum is known. Due to their simplicity, binomial cascades (Kantelhardt et al., 2002; Makowiec et al., 2012) and Devil’s staircases (Mandelbrot, 1983; Faghfouri and Kinsner, 2005) are common examples of theoretical multifractals used for testing purposes. A major drawback of this approach is that these mathematical objects do not account for features in empirical datasets, but can still be useful in comparing reported results.

The most extensive test of multifractal algorithms which used a testing framework of signals synthesized according to the model introduced by Benzi et al. (1993) was reported by Turiel et al. (2006). Briefly, it is a wavelet-based method for constructing a signal with predefined properties of multifractal structuring with explicit relation to its singularity spectrum. Since the latter can be manipulated, the features of the resulting multifractal signal could be better controlled. The philosophy of this approach is very similar to that of Davies and Harte (1987) in that a family of multifractal signals of identical singularity spectra can be generated by incorporating predefined distributions (log-Poisson or log-Normal) giving rise to controlled variability of realizations. Additionally, using log-Poisson distribution would yield multifractals with a bounded set of Hölder exponents in that being similar to those of empirical multifractals. To conclude, this testing framework should merit further investigation.

Analytical Strategy

In this article we expand our previously published monofractal analytical strategy to incorporate some fundamental issues associated with multifractal analyses keeping how these can be applied to BOLD time series in focus. Progress along the steps of the perplexed fractal analysis should be guided by a consolidated – preferably model-based– view on the issues involved (See Figure 8).

Figure 8.

Figure 8

Analytical strategy for fractal time series analysis. Toward obtaining a reliable (multi)fractal parameter, which is the purpose of the analysis, the first step to take is to collect a high definition dataset representing the temporal signal, X(t), ensuring adequate definition. Provided that quality-controlled, adequate length of signal, Xi, was acquired at a sufficient frequency sampling X(t) (Eke et al., 2002), scale-free processes can be characterized in terms of either a single global or a distribution of many local scaling exponents, the former pertinent to a monofractal, the latter to a multifractal signal, respectively (Figure 1). A detailed flowchart of our monofractal analytical strategy has been reported earlier (Eke et al., 2000, 2002), hence only some of its introductory elements are incorporated here. The signal-to-noise ratio – as part of signal definition – is a source of concern in preprocessing the signal. Ensuring the domination of the underlying physiological processes over inherent noise is a critical issue, which – if not dealt with properly – will have a detrimental effect on the correlation structure of the signal. Endogenous filtering algorithms of the manufacturers of MRI scanners could be operating in potentially relevant frequency ranges of fractal analysis aimed at trend or noise removal (Jezzard and Song, 1996). In case of BOLD signals, this problem may prove hard to track as the system noise may cause a temporally (i.e., serially) correlated error in the measurement (Zarahn et al., 1997). This may alter the autocorrelation structure of the signal with embedded physiological content (Herman et al., 2011). Various aspects of temporal smoothing have been discussed in Friston et al. (2000). To conclude, scale-free properties of the signal must be preserved during steps carried out before fractal analysis, otherwise the physiologically relevant internal structuring of the BOLD signal cannot possibly be revealed (Herman et al., 2011). Once a multifractal has been isolated by a class-independent method, such as MF-DFA, we can only assume that the multifractal structuring of the signal is due to serial correlation. As autocorrelation structure of the signal can reflect a broad probability distribution, surrogate analysis is needed on a shuffled signal – which destroys this correlation – to ensure that the origin of the scale-invariance is due to genuine autocorrelation in the signal (Kantelhardt:2002]). The null-hypothesis (the signal is not multifractal) is rejected if multifractal measures determined for the raw and surrogate sets are different. This procedure is similar to verifying the presence of deterministic chaos (Herman:2006]). Attention should be given to select the scaling range properly: involving the finest and coarsest scales in calculating H(q) would greatly impair its estimate. The range of moments should be selected such that sufficient range of singularity spectrum is revealed, allowing for the calculation of scalar multifractal descriptors such as Pc. Next, one has to decide as to which path of the detailed multifractal analysis to choose (indirect vs. direct or time vs. time-frequency domain)? Each of these paths would have advantageous and disadvantageous contributions to the final results to consider. The methods of analysis must be selected compatible to the path taken. Once methods have been chosen, their performance (precision) ought to be evaluated. 
With adequate performance verified, the multifractal analyses can then be followed by attempts to find physiological correlates for the estimates of (multi)fractal parameters.

A fundamental question should be answered whether it is worthy at all to take on the demanding task of fractal analysis? This can only be answered if one characterizes the signal in details according to the guideline shown in Figure 8 using tools of descriptive statistics and careful testing; first for the presence of monofractal and later that of multifractal scale-free features. At this end, we present here a new tool for an instantaneous and easy-to-do performance analysis (called “performance vignette”), which can facilitate this process and does not require special knowledge needed to carry out detailed numerical experiments on synthesized signals (Figure 9). The latter, however, cannot be omitted when full documentation of any particular fractal tool’s performance is needed. In that the vignette has been designed for prompt selection, overview, and comparison of various methods; not for their detailed analysis.

Figure 9.

Figure 9

Fractal tool performance vignette. It provides a quick assessment of any fractal time series tool’s performance. As such can be useful as a method of standardization and/or comparison of various algorithms. Technically, a vignette is created as any given fractal time series method evaluates a volume of synthesized time series for a particular fractal parameter. The results are converted to extended H′ as H′ = HfGn, H′ = HfBm + 1 using a conversion table between H and other fractal parameters (Eke et al., 2002). The signals are generated for a range of length, L [Lmin, Lmax] in increments of ΔL, and for the full range of the fGn/fBm dichotomy at β or H′ at given increments of the exponent, ΔH’ by the DHM method (Davies and Harte, 1987; Eke et al., 2000). The volume is created from these signals arranged in a square raster, which will correspond to one of four identical quadrants of the vignette. Once the analysis by a fractal tool has been carried out the results are plotted in a square array as shown in (A) such a way that fGn signals occupy a square created by the four identical quadrants. The 1/f boundary separating the fGn from the fBm range can be easily identified as plotted with a midscale color. Warmer colors indicate over-, cooler colors underestimation of the scaling exponent at the particular signal length or degree of correlation. When applied to class-independent or dependent methods (B), like PSD, SSC (B, upper half) or Disp (dispersional analysis) and bdSWV (bridge detrended Scaled Window Variance) (B, lower half), respectively, an immediate conclusion on signal performance can be drawn: PSD and SSC can be used for fGn and fBm signals alike (except in the vicinity of the 1/f boundary) and SSC is more precise. Disp (Bassingthwaighte and Raymond, 1995; Eke et al., 2000, 2002) and bdSWV (Eke et al., 2000, 2002), two class-dependent methods of excellent performance (note the midscale colored area in the fGn and fBm domains, respectively) do show up accordingly. The vignette is applicable to indicate the performance of multifractal methods, too. The monofractal H can be determined in two ways: in case of q = 2 from τ(q), and in case of q = 0 from hmax in the singularity spectrum.

We sustain our recommendation that proper class-dependent or class-independent methods should be chosen.

We feel, that calculating global measures of multifractal scaling, such as Pc (Shimizu et al., 2004) or W (Wink et al., 2008), can help consolidating experimental findings in large fMRI BOLD volumes across many subjects and experimental paradigms. Based on our tests, we conclude that straightforward recommendations for multifractal analysis for the purpose of fMRI BOLD time series analysis needs further investigations.

Pitfalls

Sources of error

Problems emerging from inadequate signal definition (measurement sensitivity, length, sampling frequency)

Measurement sensitivity

The precondition of a reliable fMRI time series analysis is that the BOLD signal has adequate definition in terms of being a true-to-life representation of the underlying biology it samples. In particular, the fMRI BOLD measurement is aimed at detecting the contrast around blood filled compartments in magnetic susceptibility of blood and the surrounding medium in a uniform high field (Ogawa and Lee, 1990). A contrast develops from tissue water relaxation rate being affected by the paramagnetic vs. diamagnetic state of hemoglobin. The contrast increases with decreasing oxygenation of blood, a feature that renders the technique capable of detecting the combined effect of neuronal metabolism coupled via hemodynamics throughout the brain (Smith et al., 2002). As Ogawa and Lee (1990) demonstrated, the BOLD contrast increases with the strength of the main magnetic field, B0 (i.e., due to the sensitivity of the relaxation rate).

In his early paper (Lauterbur, 1973), Lauterbur gave clear evidence of the fact that resolution of magnetic resonance signals will strongly depend on B0. Newer generations of scanners with continuously improved performance were constructed utilizing this relation by incorporating magnets of increased strength (in case of human scanners from, i.e., 1.5–7T, in small animal scanners due to the smaller brain size with strength in the 4–17.2T range). Bullmore et al. (2001) showed indeed, that the performance of some statistical method and their results depended on the magnetic field used (1.5 vs. 3T); calling for caution and continuous reevaluation the methods in the given MRI settings.

In order to confirm the impact of B0 on the sensitivity on the definition of the BOLD signal fluctuations, we have compared the spectral index (ß) of resting-state BOLD fluctuations in vivo to those post mortem and in a phantom in 4, 9.4, and 11.7T in anesthetized rats (Figure 10). What we have learned from this study was that in contrast with amplitude-wise optical measurements of cerebral oxygenation and hemodynamics such as near infrared spectroscopy (Eke et al., 2006), due to the contrast-detecting foundations of fMRI, signal definition cannot be characterized by comparing fluctuation ranges in vivo vs. post mortem. After death deoxyhemoglobin molecules are still present in the MRI voxels post-sacrifice and thus generate susceptibility-induced magnetic field gradients that would impact diffusion of tissue water molecules (Herman et al., 2011), a process that can generate fluctuating BOLD contrast without ongoing physiology. What matters is that in vivo the blood gets oxygenated and via the combined impact of neuronal metabolism, blood flow, and blood volume, the internal structuring of the BOLD contrast signal will change from close to random to a more correlated level as indicated by β, which is in vivo significantly higher than post mortem. Increasing field strength enhances this effect and yields a more articulated topology of β throughout the brain. Conversely, low field measurements favor the dominance of instrument noise in addition to being less sensitive in detecting the BOLD contrast. The inference of these preliminary data is that, given the BOLD contrast (and presumably even the spatial resolution) of our animal imaging, a 1.5T human scanner may not be of sufficient sensitivity to detect BOLD fluctuations at adequate definition for a reliable monofractal analysis, not to mention multifractal analysis known to require a much higher signal definition for an optimal performance that can be achieved in higher field scanners (Ciuciu et al., 2012).

Figure 10.

Figure 10

Definition of spontaneous BOLD fluctuations critically depends on main field strength. Exemplary coronal scans are shown obtained in anesthetized rat in MR scanner applying 4, 9.4, and 11.7T main external field. All fMRI data were collected at 5 Hz in length of 4096 (212) images with gradient echo planar imaging (EPI) sequence using 1H surface coils (Hyder et al., 1995). (A) shows in vivo and post mortem maps of spectral index, β. β was calculated from the spectra of the voxel-wise BOLD time series by the PSD method for a restricted range of fluctuation frequency (0.02–0.3 Hz) found to exhibit inverse power law relationship [fractality; indicated by vertical dashed lines on the PSD plots in (B)]. In order to achieve a suitable contrast for the topology, β are color coded within the fGn range (from 0 to 1). Hence voxel data with β >  1 indicating the presence of fBm type fluctuations are displayed saturated (in red). β maps for water phantoms placed in the isocenter are also shown for comparison. Note, that the fractal pattern of internal structuring of the spontaneous BOLD signal cannot be captured at adequate definition at 4T as opposed to 11.7T, where the rate of scale-free rise of power toward low frequencies are thus the highest at about the same region of interest (ROI) located in the brain cortex. This dependence translates into an articulate in vivo topology with increasing B0. Also note that in vivo 4T cannot yield a clear topology of β when compared to post mortem, and that the well defined topology achieved at higher fields vanished post mortem indicating the link between β and the underlying physiology.

While the use of fMRI is typically qualitative where the baseline is conveniently differenced away to reveal focal area(s) of interest (Shulman et al., 2007), this practice would not interfere with fractal time series analysis, given that scaling exponent is invariant to mean subtraction.

Length and sampling frequency

A signal is a sampled presentation of the underlying process, which generates it. Hence the sampling frequency must influence the extent the signal captures the true dynamics of the process, which is in the focus of fractal analysis irrespective if its analyzed in the time (in form of fluctuations) or in the frequency domain (in form of power distribution across the frequency scale). The sampling frequency should preferably be selected at least a magnitude higher than the highest frequency of the observed dynamics we would aim to capture.

The relationship between length and frequency can best be overviewed in the frequency domain along with the frequency components and aliasing artifact of the spectrum as seen in Figure 12 of Eke et al. (2002). Note, that the dynamics of interest can be best captured hence analyzed if the signal length is long; the sampling frequency is high, because it will provide a spectrum of many components with a weak artifactual impact of aliasing. Herman et al. (2011) have recently demonstrated this relationship on resting-state BOLD time series and concluded that lower frequency dynamics are better sampled by longer BOLD signals, whereas a high sampling rate is needed to capture dynamics in a wide bandwidth signal (See Figure 3 in Herman et al., 2011). In other words, inadequately low frequency is more detrimental to the result of fractal analysis than somewhat truncated signal.

Due to the discrete representation within the bounded temporal resolution of the signal, the precision of its fractal analysis increases with its length as demonstrated on simulated signals of known (true) fractal measures by the bias and variance of its estimates. The minimum length at which reasonable results can be expected depends not only on signal length but on the method of analysis and the degree of long-range correlation in the signal (as characterized by its H); an issue that has been explored in details for monofractal time series by the groups of Bassingthwaighte and Raymond (1994, 1995); Eke et al. (2000, 2002); Delignieres et al. (2006), and for multifractal methods by Turiel et al. (2006).

Multifractal analysis can be considered as an extension of monofractal analysis, which is explicitly true for moment-based methods: while in case of monofractals a scale-free measure is obtained at q = 2, the procedure for multifractals uses a set of different q-order moments. Think of q as a magnifier glass: different details of the investigated scale-free structure can be revealed at different magnification. However, if signal definition is poor due to short length or small sampling frequency, estimates of D(h) will become imprecise at large ±q (Figure 6). Since the order of q needed to obtain characteristic points of the singularity spectrum usually falls beyond q = ± 2, a longer time series is required to guarantee the needed resolution in this range. Hence, dependence of precision on signal length in case of multifractals is a more complicated issue, where the effect of spectral characteristics interacts with that of signal length (Turiel et al., 2006).

A reasonable conclusion is that the recommended minimum length for a reliable multifractal analysis ought to be longer than that found earlier for monofractal series (Eke et al., 2002; Delignieres et al., 2006).

Problem of signal class (fGn vs. fBm)

In fractal analysis, signal classification is a central issue (Eke et al., 2000) and should be regarded as a mandatory step when a tool is to be chosen from the class-dependent group. The relative convenience of using a class-independent method does not render signal classification unnecessary, given that proper interpretation of the findings is greatly enhanced by knowing the signal class.

Recently, Herman et al. (2011), using monofractal analysis (PSD), found in the rat brain that a significant population of fMRI BOLD signals fell into the non-stationary range of β. These non-stationary signals potentially interfere with resting-state connectivity studies using spatio-temporal volumes of fMRI BOLD. This is even more so if SSC is used for signal classification (Figure 11) and analysis (Figure 12), which shifts the histogram of H′ to the right.
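A hedged sketch of how β estimates can be turned into a class label and Hurst exponents under the fGn/fBm dichotomy, using the conversions quoted in this article [H = (β + 1)/2 for fGn, H = (β − 1)/2 for fBm, with the extended exponent H′ = H + 1 for fBm]; the single β = 1 boundary used here is a simplification of the published classification criteria:

def classify_and_convert(beta):
    # Map a spectral index to a signal class, Hurst exponent H, and extended exponent H'.
    if beta < 1.0:                      # fGn-like (stationary) range, simplified boundary
        H = (beta + 1.0) / 2.0
        return "fGn", H, H              # H' equals H for fGn
    else:                               # fBm-like (non-stationary) range
        H = (beta - 1.0) / 2.0
        return "fBm", H, H + 1.0        # H' = H + 1 for fBm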

Figure 11.

Classifying rat fMRI BOLD data. Signal classification was performed on the 11.7T BOLD dataset shown in Figure 10 by the PSD and SSC methods (A), previously tested in this capacity by Eke et al. (2002); misclassification rates for PSD and SSC are shown in the plots of the lower panel (B). Because SSC is a much better classification tool than PSD, the classification topology is drastically different for these two methods. The ROIs corresponding to voxel-wise signals identified by SSC as non-stationary indeed clearly delineate the anatomical boundaries of the brain cortex, while those identified by PSD delineate only the spots of highest β.

Figure 12.

Fractal analyses of rat fMRI BOLD data. The 11.7T BOLD dataset shown in Figure 10 was analyzed monofractally (A) in the frequency domain by PSD and in the time domain by SSC, and multifractally (B) in the time domain by the MF-DFA and MF-DMA methods. Estimates of the spectral index were converted to the extended Hurst exponent, H′. Our tool performance vignette is displayed next to the methods. Histograms of H′ computed from the fractal image data by SSC are shown. The vignette data reconfirm that SSC is superior to PSD as a monofractal tool. Due to the downward bias of PSD in the anticorrelated fGn range, its H′ estimates are significantly underestimated. Because SSC's estimates are unbiased, the SSC topology should be considered realistic, which translates into a right shift of the SSC H′ histogram relative to that of PSD. Based on the vignette pattern, among the multifractal tools MF-DFA works quite well on fGn and fBm signals alike, while MF-DMA shows only fair performance in the fGn range closer to the 1/f boundary and fails on the fBm signals of the set. For the reasons mentioned above, the estimates of SSC should be taken as precise. Given that most values in the fGn range fall into the range of complete uncertainty of MF-DMA (see Figure 6 at q = 2) and that MF-DMA cannot handle fBm signals, all of its estimates end up being 1.0. Differencing the signals (including those of the vignette) changed the situation dramatically. As seen on the vignette, the originally fBm signals are mapped into the fGn range, which MF-DMA handles very well; actually better than the original fGn signals, where a slight overestimation is seen. This kind of behavior of MF-DMA may have a bearing on the findings of Gao et al. (2006). Also note that the double-differenced fGn signals end up being overestimated. These effects are worth investigating in order to characterize the impact of the fGn/fBm dichotomy on the performance of these time domain multifractal tools when signals are converted between the two classes. Pc – as a global multifractal measure – captures a topology similar to the monofractal estimates. The corresponding singularity spectra do separate, with the likelihood that the underlying multifractalities indeed differ.

For multifractals the problem and the proposed solution are generally the same, but the impact of the fGn/fBm dichotomy on the multifractal measures is not a trivial issue. Our preliminary results reported here (Figure 12) are steps in this direction, but this issue calls for continuing efforts in the future. It seems that at least stationarity vs. non-stationarity is a valuable piece of information for selecting a concise model of multifractals.

Distinguishing monofractals from multifractals

Multifractal analysis of an exact monofractal rendered at ideal resolution (i.e., of infinite length, sampled at infinite frequency, with infinite sensitivity of detection) would yield a constant H(q), a linear dependence of τ on q, and a point-like Mandelbrot spectrum with its Hölder exponent (hmax) equal to its Hurst exponent.

Due to the finite and discrete nature of the signal, the singular behavior of a suspected scale-free process cannot be quantified perfectly. As a consequence, the homogeneity of a monofractal's singularities cannot be captured by a multifractal analysis: numerical background noise (Grech and Pamula, 2012) – resulting from the factors mentioned above – always smears the point-like singularity spectrum into one mimicking that of a multifractal. This is confirmed by the apparent uncertainties associated with the estimates of H(q) obtained at various moments in our simulations. All in all, multifractal analyses have been conceived in a manner that tends to view a monofractal as a multifractal.

In order to avoid false interpretation of the data, time series should be produced at the highest possible definition to ameliorate this effect, and criteria should also be set up to distinguish the two entities in the signal to be analyzed. Numerical simulation has been demonstrated to be a useful tool for working out a parameter that can substantiate a monofractal/multifractal classification (Grech and Pamula, 2012; Figure 6).
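One possible way to operationalize such a criterion – a sketch only, not the parameter of Grech and Pamula (2012) – is to compare the apparent width of H(q) of the measured series with that of randomly shuffled surrogates, in which any genuine multifractality of correlation origin has been destroyed:

import numpy as np

def width_excess(x, hurst_fn, n_surr=20, seed=0):
    # Compare the apparent H(q) width of x with that of shuffled surrogates;
    # hurst_fn should return a dict {q: H(q)}, e.g. the qorder_hurst sketch above.
    rng = np.random.default_rng(seed)
    def width(series):
        Hq = hurst_fn(series)
        return max(Hq.values()) - min(Hq.values())
    w_data = width(np.asarray(x))
    w_surr = [width(rng.permutation(x)) for _ in range(n_surr)]
    return w_data, float(np.mean(w_surr)), float(np.std(w_surr))

A series whose width does not exceed the surrogate widths by a clear margin is better treated as a (noisy) monofractal under this simple criterion.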

Trends and noises

Empirical time series are typically non-linear and non-stationary and can be contaminated by noise and other signal components foreign to the fractal analysis of the system under observation. A trend is deterministic in character and typically of low frequency, in contrast with noise, which has a completely random structure in a higher frequency range. Monofractal analysis methods are quite robust with respect to noise, and thus monofractal analysis does not require preprocessing (Bassingthwaighte and Raymond, 1995). When uncorrelated noise is added to a multifractal process, the shape of its singularity spectrum is also preserved (Figliola et al., 2010). However, with correlated noise present – known to impact fMRI BOLD time series – preprocessing should be considered (Friston et al., 2000), and if carried out, it should be done with an appropriate adaptive filter (Gao et al., 2010, 2011; Tung et al., 2011).

In the case of wavelet-based methods, a polynomial trend can be removed based on the analyzing wavelet's properties. However, if the trend has a different character (e.g., trigonometric or exponential), or its polynomial order exceeds the number of vanishing moments of the analyzing wavelet, the estimation of the singularity spectrum will be impaired (see theorem 6.10 in Mallat's book; Mallat, 1999).

Various detrending schemes have been developed to enhance the performance of fluctuation analysis (FA) on detrended signals, and these have been compared (Bashan et al., 2008). The most common trend removal is based on fitting a low-degree polynomial to local segments of the signal, such as employed in DFA (Figure 4). In particular, DFA's trend removal is credited with being very effective; however – as recently reported (Bryce and Sprague, 2012) – it can become inadequate if the trend ends up having a character different from the one coded in the algorithm, a scenario that cannot be excluded. A further problem is that the signal is arbitrarily divided into analyzing windows of different sizes, within which trend removal is carried out based on an a priori assumption (e.g., polynomial). Partitioning the signal into a set of non-overlapping windows and performing detrending in a window-based manner does not guarantee that the trend in each and every window is identical to the assumed one. This is especially true for small windows, where the trend tends to deviate from that in larger windows. Contrary to expectations, this critical finite size effect is always present, thus this pitfall can only be avoided if explicit detrending is applied by using adaptive methods (Gao et al., 2011).

To conclude, the recently reported uncontrollable bias in the results of DFA (Bryce and Sprague, 2012) raises major concern as to the reliability of FA with this detrending scheme. Thus, if DFA is to be used, special care should be taken, or more adaptive detrending approaches should be applied instead.

Finally, empirical mode decomposition (EMD) is a promising adaptive approach, one of whose features is the ability to estimate the trend explicitly. It also creates an opportunity to combine EMD with other fractal analysis methods, like those based on FA, to achieve a more reliable scale-free method (Qian et al., 2011).

Problems of moment-based methods

Using moment-based methods to estimate the Mandelbrot spectrum is a common approach with some drawbacks. Due to the discretized nature of the signal under analysis, small fluctuations cannot be resolved perfectly and therefore the Hölder exponents become biased in the range of large negative moments (corresponding to the right tail of the singularity spectrum; Turiel et al., 2006). All moment-based methods are influenced by the linearization of the right tail, thus yielding biased estimates of the negative statistical moments of the measure, μ (Turiel et al., 2006). This type of error cannot be eliminated by increasing the signal's length (Turiel et al., 2006). In the case of large fluctuations in the signal, numerical limitations become problematic when calculating large positive moments.

Problems associated with moment-based methods can be summarized as follows. Firstly, a carefully selected set of statistical moments of μ of different orders (q) should be calculated. Selecting too large negative and positive moments leads to imprecise generalized Hurst exponents [H(q)] or multiscaling exponents (Figure 6; Ihlen, 2012). A sufficient range of q is needed, however, in order to characterize the global singular behavior of the studied time series. This is especially important in the evaluation of the spectrum, but from a practical point of view the spectrum width at half maximum is sufficient to obtain Pc, or W+/W−, which are frequently used lumped parameters in describing multifractal fMRI BOLD signals, too (Shimizu et al., 2004; Wink et al., 2008). In summary, precise estimation of the singularity strength is needed at characteristic points of the spectrum: around its maximum (i.e., at q ≈ 0) and at its half maximum a dense definition is recommended. Thus, the optimal selection depends on the signal character, and the signal needs to be analyzed with several sets of q. In general, estimating the spectrum between q = −5 and q = 5 is sufficient in biomedical applications, as proposed by Lashermes and Abry (2004). Secondly, methods implementing direct estimation of singularity spectra can be applied (Figure 3). One typical example is the gradient modulus wavelet projection (GMWP) method, which turned out to be superior to all other tested methods (WTMM included) in terms of precision, as reported by Turiel et al. (2006). It was shown that direct approaches can give quite good results in spite of the numerical challenges imposed by calculating the Hölder exponents (h) locally, without the need for statistical moments and the Legendre transform (Turiel et al., 2006). Strategies including the latter two approaches are widely used and can be considered reasonably, but not exclusively, reliable in terms of handling the numerical difficulties associated with multifractal analysis.
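For orientation, a minimal numerical sketch of the moment-based route from H(q) to a singularity spectrum is given below; it uses the commonly quoted relation τ(q) = qH(q) − 1 and a discrete Legendre transform, the validity of which is debated (see Dualism in multifractal formalism below), so it illustrates the procedure rather than endorsing it:

import numpy as np

def singularity_spectrum(qs, Hq):
    # tau(q) = q*H(q) - 1, h = d tau / d q (Hölder exponents), D(h) = q*h - tau(q).
    qs = np.asarray(qs, dtype=float)
    tau = qs * np.asarray(Hq, dtype=float) - 1.0
    h = np.gradient(tau, qs)            # numerical derivative of tau with respect to q
    D = qs * h - tau                    # Legendre transform: the Mandelbrot (singularity) spectrum
    return h, D

With qs spanning, e.g., −5 to 5 in steps of 0.5 (the range suggested above), the width and the position of the maximum of D(h) can then be read off the returned arrays.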

Problems of wavelet transform modulus maxima methods

In case of monofractals, the average wavelet coefficient method is the most effective and the easiest to implement (Simonsen and Hansen, 1998; Eke et al., 2002). It can be used for fBm and cumulatively summed fGn signals.

There are other issues related to this method, whose nature can be numerical on the one hand and theoretical on the other. For example, the first and last points of the signal exhibit artifactual scaling, and improperly selected scales impair the results considerably. A well-selected analyzing wavelet also ensures reliable results, which is also proven for certain indirectly calculated partition functions (via Boltzmann weights; Kestener and Arneodo, 2003). The effect of the modifications addressing these issues is discussed in Faghfouri and Kinsner (2005), and a detailed test of WTMM is reported by Turiel et al. (2006).

Due to the difficulties in the reliable application of WTMM, other methods have been developed in the field, the most promising one being the wavelet leader method (Lashermes et al., 2005; Serrano and Figliola, 2009), which has recently been applied to human fMRI BOLD signals (Ciuciu et al., 2012). As a refinement of WTMM, the wavelet leader method is beyond the scope of this review; the reader is referred to the cited references.

Identifying the spectral extent of monofractality within a signal

Verifying the presence of self-similarity, one of the fundamental properties of monofractals, is a key element of the analytical strategy of fractal time series analysis (Eke et al., 2000; Figure 8). It should be present within a sufficiently wide scaling range. In the case of exact (mathematical) fractals the scaling range is unbounded. In natural fractal time series, however, it is typically restricted to a set of continuous temporal scales, as demonstrated by Eke et al. (2006) for fluctuating cerebral blood volume in humans and by Herman et al. (2011) for resting-state fMRI BOLD signals in the rat. As shown in the frequency domain by spectral analysis, in both species scale-free structuring of the signal was present across a range of frequencies well below the Nyquist frequency (half of the sampling frequency). It was characterized by a systematically and self-similarly increasing power toward lower frequencies that could be modeled by Eq. 25, yielding a spectral index of β > 0, which is an indication of serial correlation between the temporal events (long-term memory). Above this range, the fluctuations were found to be random, with β ≈ 0, meaning that subsequent temporal events were not correlated. The separation of these ranges is therefore crucial, because failing to do so would bias the estimate of β.

For fractal time series analysis a proper scaling range should be selected where fluctuations are scale-invariant. Optimization of the sampling process, as well as of the regression analysis on log-log representations of measures vs. scales yielding the scaling exponent, is essential (Eke et al., 2002). In the case of time domain methods such as DFA, DMA, and the AFA introduced by Gao et al. (2011), optimizing the goodness-of-fit of the regression analysis is an example. Detailed recommendations on how to deal with this problem can be found elsewhere (Peng et al., 1994; Cannon et al., 1997; Eke et al., 2002; Gao et al., 2006). When a signal's spectrum contains components other than monofractal ones, it may prove difficult to select a monofractal scaling range even by isolating local scaling ranges and fitting local slopes for the spectral index. This procedure should be carried out carefully, given that local ranges may end up containing too few spectral estimates for a reliable fitting of the trendline. When the aim is to assess the topology of the measure, this criterion can be relaxed (Herman et al., 2011).

Faghfouri and Kinsner (2005) reported that improper selection of the scaling range has a detrimental effect on the results of WTMM. Different scales correspond to different window sizes in the MF-DFA and MF-DMA methods, and discarding the smallest and largest window sizes was suggested by Peng et al. (1994) even for the original DFA. Cannon et al. (1997) and Gao et al. (2006) suggested an optimization for the appropriate range of analyzing window sizes (i.e., scales). While this can be regarded as best practice in carrying out MF-DFA, some degree of bias is still introduced into the results, arising mainly from the smallest window sizes (Bryce and Sprague, 2012).
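One simple way to implement such a goodness-of-fit driven selection – a hypothetical sketch, cruder than the recommendations of Cannon et al. (1997) and Gao et al. (2006) – is to scan all contiguous candidate ranges above a minimum width and keep the one whose log-log regression yields the highest R²:

import numpy as np

def best_scaling_range(log_scale, log_measure, min_points=8):
    # Scan all contiguous candidate ranges and keep the one with the highest R^2.
    log_scale = np.asarray(log_scale, dtype=float)
    log_measure = np.asarray(log_measure, dtype=float)
    best = (None, -np.inf, None)
    n = len(log_scale)
    for i in range(n - min_points + 1):
        for j in range(i + min_points, n + 1):
            slope, intercept = np.polyfit(log_scale[i:j], log_measure[i:j], 1)
            resid = log_measure[i:j] - (slope * log_scale[i:j] + intercept)
            r2 = 1.0 - np.sum(resid ** 2) / np.sum((log_measure[i:j] - log_measure[i:j].mean()) ** 2)
            if r2 > best[1]:
                best = ((i, j), r2, slope)
    return best   # ((start, stop) indices, R^2, fitted scaling exponent)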

Dualism in multifractal formalism

Amongst the indirect, moment-based methods, WTMM uses a different approach to obtain the singularity spectrum than MF-DFA and MF-DMA. Convergence of this dualism is very unlikely, as the relationship of the exponents in MF-DFA to the multifractal formalism is reported to be valid only in special cases (Yu and Qi, 2011). The seminal MF-DFA paper of Kantelhardt et al. (2002) established a relationship between the generalized Hurst exponent and the multiscaling exponent. This equation was reported to be valid only if H = 1 (Yu and Qi, 2011), and thus another derivation of τ(q) was proposed. In addition, singularity spectra reported with MF-DFA – as follows from the Legendre transform of τ(q) (Eq. 9) – always reach their maxima at 1, while this does not hold for the wavelet methods. In our opinion, a revision of results obtained with MF-DFA may be necessary, along with consolidating the multifractal formalism published in the field, using the original papers as the starting point of reinvestigation (Frisch and Parisi, 1985; Mandelbrot, 1986; Barabási and Vicsek, 1991; Muzy et al., 1993; Arneodo et al., 1998).

Demonstration

Scrutinizing relevant data in selected previous works – recognized as having proven or potential impact on the development of the field – will demonstrate some typical pitfalls.

Significance of system noise in the interpretation of fMRI BOLD fluctuations

Zarahn et al. (1997) demonstrated early on, in a careful analysis of spatially unsmoothed empirical human fMRI BOLD data (collected under null-hypothesis conditions), that the examined datasets showed a disproportionate power at lower frequencies resembling 1/f type noise. In spite of the very detailed analysis, these authors treated the 1/f character as a semi-quantitative feature of fMRI noise and accepted its validity over a decaying exponential model as the frequency domain description of the observed intrinsic serial correlation (autocorrelation). The spectral index, β, however, was not reported, but it can be reconstructed from the power slope by converting the semilog plot of power vs. frequency in their Figure 3D panel to a log-log plot compatible with the |A(f)|^2 ∝ 1/f^β model. This yields a β value of ∼3.3, which is far higher than the values of 0.6 < β < 1.2 reported recently for an extensive 3T dataset by He (2011). This precludes the possibility that the collected resting-state 1.5T BOLD dataset was of physiological origin. Our recently reported results for the rat brain, with −0.5 < β < 1.5, reconfirm this assertion (Herman et al., 2011). In fact, Zarahn et al. (1997) wished to determine whether the 1/f component of the noise observed in human subjects was necessarily due to a physiological cause, but had to reject this hypothesis because they found no supporting evidence. Zarahn et al. (1997) felt the AC structure (in the time domain, which is equivalent to the inverse power law relationship in the frequency domain) may not be the same for datasets acquired in different magnets, not to mention the impact of using various fMRI scanning schemes (Zarahn et al., 1997). Accordingly, and in light of our rat data for 4, 9.4, and 11.7T magnets, a less than optimal field strength could have led to a signal definition inadequate to capture the 1/f^β type structuring of the BOLD signal of biological origin that must have been embedded in the human datasets of Zarahn et al. (1997) but was overridden by system noise. Most recently, Herman et al. (2011) and He (2011) referred to the early study of Zarahn et al. (1997) as one demonstrating the impact of system noise on fMRI data, while Fox et al. (2007) and Fox and Raichle (2007) referred to it as the first demonstration of 1/f type BOLD noise, with the implication that the 1/f pattern implied fluctuations of biological origin.

Significance of the general 1/fβ vs. the strict 1/f model in the interpretation of fMRI BOLD noise data

Fox et al. (2007) reported on the impact of intrinsic BOLD fluctuations within cortical systems on inter-trial variability in human behavior (response time). In connection with the notion that the variability of human behavior often displays a specific 1/f frequency distribution with greater power at lower frequencies, they remark: “This observation is interesting given that spontaneous BOLD fluctuations also show 1/f power spectrum (Figure S4). While the 1/f nature of BOLD fluctuations has been noted previously (Zarahn et al., 1997), we show that the slope is significantly between −0.5 and −1.5 (i.e., 1/f ) and that this is significantly different from the frequency distribution of BOLD fluctuations observed in a water phantom,” and in their Figure S4 conclude that “the slope of the best fit regression line (red) is −0.74, close to the −1 slope characteristic of 1/f signals.” This interpretation of the findings implies that the spontaneous BOLD fluctuations can be adequately described by the “strict” 1/f model, where the spectral index, β, in 1/f^β – known as the “general” inverse power law model – is treated as a constant of 1, not as a variable carrying information on the underlying physiology. Incidentally, studies of Gilden and coworkers (using a non-fMRI approach) have indeed demonstrated (Gilden et al., 1995; Gilden, 2001; Gilden and Hancock, 2007) that response time exhibits variations that cannot be modeled by a strict 1/f spectrum but only by one incorporating a varying scaling exponent (Gilden, 2009).

Scrutinizing the data of Figure S4 offers an alternative interpretation as follows. In terms of the hardware, the use of a 3T magnet must have ensured adequate signal definition for the study. In their Figure S4, spectral slopes were reported in a lumped manner, in that the power at each and every frequency was first averaged for the 17 human subjects (thus creating frequency groups), and then mean slopes along with their statistical variation were plotted for the frequency groups. The mean slope of −0.74 (of the lumped spectrum) was obtained by regression analysis. This treatment of the data implies that the |A(f)|^2 ∝ 1/f^β model (Mandelbrot and Ness, 1968; Eke et al., 2000, 2002) was a priori rejected; otherwise the slope should have been determined first for each and every subject in the group, across the range of observed frequencies and their associated power estimates (of the true spectrum), followed by the statistical analysis of the mean and variance within the group of 17 subjects, for the following reasons. The spectral index is found by fitting the linear model of |A(f)|^2 ∝ 1/f^β across spectral estimates for a range of frequencies. In our opinion, when it comes to providing the mean spectral index, it is indeed reasonable (Gilden and Hancock, 2007; Gilden, 2009) to compute statistics on the fractal estimates of a group of time series by first obtaining the estimates proper. Averaging spectral estimates at any particular frequency and assembling an average spectrum from them tends to abolish the fractal correlation structure of any particular time series and to develop one for which the underlying time series is missing. Because the transformation between the two treatments is not linear, the true mean slope of the scale-free analysis cannot be readily reconstructed from the reported slope of the means. Nevertheless, if we regard its value as an approximation and convert it to β, which being less than 1 warrants the use of H′ = (βfGn + 1)/2, one would obtain values of β = 0.77 and H′ = 0.87, respectively.
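The difference between the two orders of operation can be made explicit in a short sketch (assuming spectra is a hypothetical list of per-subject (frequency, power) arrays on a common frequency grid); only the first route yields per-subject fractal estimates that can then be averaged:

import numpy as np

def betas_then_mean(spectra, f_lo=0.02, f_hi=0.3):
    # Fit beta subject-by-subject first, then average the estimates (recommended order).
    betas = []
    for f, pxx in spectra:
        band = (f >= f_lo) & (f <= f_hi)
        betas.append(-np.polyfit(np.log(f[band]), np.log(pxx[band]), 1)[0])
    return float(np.mean(betas)), float(np.std(betas))

def mean_spectrum_then_beta(spectra, f_lo=0.02, f_hi=0.3):
    # Average power at each frequency first, then fit one slope (the lumped treatment criticized above).
    f = spectra[0][0]
    p_mean = np.mean([pxx for _, pxx in spectra], axis=0)
    band = (f >= f_lo) & (f <= f_hi)
    return -np.polyfit(np.log(f[band]), np.log(p_mean[band]), 1)[0]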

A recent review by Fox and Raichle (2007) offers an impressive overview of, and insight into, how to delineate cooperative areas (or systems) in the brain based on functional connectivity that emerges from spatial cross-correlation maps of regional fluctuating BOLD signals in the resting brain (Biswal et al., 1995). These authors place the spontaneous activity of the brain, as captured in BOLD fluctuations in spatio-temporal domains of fMRI data, at the focus of the review, emphasizing that it is a fingerprint of a newly recognized mode of functional operation of the brain referred to as the default or intrinsic mode (Fox and Raichle, 2007). They argue that the ongoing investigation of this novel aspect of the brain's mode of operation using fractal analysis of resting-state fMRI BOLD may lead to a deeper and better understanding of the way the brain – at the expense of a very high baseline energy production and consumption by glucose and oxidative metabolism – maintains a mode capable of selecting and mobilizing these systems in order to respond to a task adequately (Hyder et al., 2006). One has to add that the default or intrinsic mode of operation has been demonstrated and investigated overwhelmingly by connectivity analyses based on cross-correlating voxel-wise BOLD signals, as opposed to the AC of single voxel-wise BOLD time series.

Fox and Raichle (2007) emphasize that “spontaneous BOLD follows a 1/f distribution, meaning that there is an increasing power in the low frequencies.” In elaborating on the nature of this 1/f type distribution they refer to the studies of Zarahn et al. (1997) and Fox et al. (2007) in the context described above (Fox et al., 2007), reaching the same conclusion, namely that the characteristic model of human spontaneous BOLD is the 1/f (meaning the “strict”) model. We would like to suggest that the notion of a 1/f distribution having a regression slope close to −1 on the log-log PSD plot is somewhat misleading.

In an attempt to consolidate this issue, we suggest that the data be fitted to a model of the form 1/f^β, where β is a variable (Eke et al., 2000, 2002) responding to states of physiology (Thurner et al., 2003; He, 2011) and displaying a characteristic topology (Thurner et al., 2003; Herman et al., 2011) in the brain, not a constant of 1. A potential advantage of this model is that, by regarding β as a scaling exponent, the distribution can then be described as scale-free (or fractal).

Significance of the 1/fβ model and the dichotomous fGn/fBm analytical strategy in analyzing scaling laws and persistence in human brain activity

As seen above, from the modeling point of view the issue of a reliable description of the autoregressive signal structuring of spontaneous BOLD is fundamental and critical in the resting state. If it is done properly, it can lend a solid basis for assessing changes in the scaling properties in response to changing activity of the brain. The study of Thurner et al. (2003) was probably the first to demonstrate that spontaneous BOLD in the brain is scale-free and that the scaling exponents of inactive and active voxels during sensory stimulation differ. At the time of the publication of their study, the monofractal analytical strategy of Eke et al. (2000, 2002) based on the dichotomous fGn/fBm model of Mandelbrot and Ness (1968) had not yet reached the fMRI BOLD community, hence Thurner et al. (2003) did not rely on it, either. In this section we demonstrate the implications of this circumstance for the validity and conclusions of their study. We do so in a detailed, didactic manner so that the reader gains hands-on experience with the perplexing nature of the issue.

Subtracting the mean from the raw fMRI signal, Īx(t), precedes the analysis proper, yielding Ix(t) in Eq. 39,

\( I_x(t) = \bar{I}_x(t) - \langle \bar{I}_x(t) \rangle_t \),   (39)

which step is compatible with (D)FA (Eke et al., 2000).

Subsequently, in Eq. 40, the temporal correlation function, Cx(τ), is calculated

\( C_x(\tau) = \langle I_x(t)\, I_x(t+\tau) \rangle = \frac{1}{N-\tau} \sum_{t=1}^{N-\tau} I_x(t)\, I_x(t+\tau) \).   (40)

In fact, in this step of the analysis the covariance was calculated, given that a division by the variance was missing. Hence, it is slightly misleading to regard Eq. 40 as the temporal (or auto) correlation function (see Eke et al., 2000, Eq. 2). Only if the signal is assumed to be an fGn, whose variance is known to be constant over time, can the covariance function be taken as equivalent to the AC function. Because the authors did not test and prove that the signal's class was indeed fGn (Eke et al., 2000), there is no basis for the validity of this assertion.

In Eq. 41, the signal is summed, yielding Xnx, in order to eliminate problems in calculating the AC function due to noise, non-stationary trends, etc.

\( X_n^x = \sum_{t=1}^{n} I_x(t) \)   (41)

This form of the signal is further referred to as “voxel-profile.”

Note that the signal remains in this summed form for the rest of the analysis (i.e., it is analyzed as an fBm). As a consequence, the spectral analysis later in the study was applied to a summed – hence processed – signal, and the results were thus reported for this and not the raw fMRI signal, a circumstance that prevented reaching a clear conclusion.

Furthermore, the authors indicated that the temporal correlation function would characterize persistence. It seems the two terms (correlation vs. persistence) are used as synonyms, whereas they are not interchangeable: persistence is a property of an fBm, while correlation is that of an fGn signal (Eke et al., 2000). Please note that, as the raw signal has been summed, the covariance here characterized a persistence that was not present in the raw fMRI signal.

In the next step (Eq. 42), the AC function is approximated by a power law function with γ as its exponent

\( C_x(\tau) \sim \tau^{-\gamma}, \quad 0 < \gamma < 1 \).   (42)

Based on the equation of the AC function using the Hurst exponent, H, γ must be proportional to 2H (Eke et al., 2000, Eq. 15).

Subsequently, as part of an FA by the authors (cited in their Reference 19 as unpublished results of their own), the statistic Fx(τ) (a standard deviation) was calculated for the AC function in Eq. 43

\( F_x(\tau) = \left\langle \left( X_{n+\tau}^x - X_n^x \right)^2 \right\rangle_n^{1/2} \).   (43)

On the left side of Eq. 44, a general power law was applied to the fluctuation from Eq. 43 as Fx(τ) ∼ τ^α

\( F_x(\tau) \sim \tau^{\alpha}, \quad \alpha = 1 - \gamma/2 \).   (44)

(Note, as the fluctuations have not been detrended, this method is not the DFA of Peng et al., 1994 but strongly related to it).

Consider the scaling exponent, α, on the left side of Eq. 44. According to Peng et al. (1994) and Eke et al. (2002), α = H only if the raw signal, Ix(t), is an fGn. However, because at this point the summed raw signal, Xnx, is the object of the analysis, α and H should relate to each other as α = H + 1. Given that the signal was summed in Eq. 41 leading up to Eq. 43, and the values "outside the brain" were reported as α ≈ 0.5 and "inside the brain" as 0.5 < α < 1, α must have been improperly calculated, because α cannot possibly yield a value of 0.5 for a summed signal given that H scales between 0 and 1 and for an fBm series α = H + 1 holds. The reported value of 0.5 < α < 1 can be regarded as correct only for Ix(t), the raw fMRI signal, which therefore had to be an fGn process. On the other hand, the reported values of 2 < β < 3 are correct for the Xnx signal only (for reasons given later). Hence the reported α and β values, lacking an indication of their respective signal class, ended up being ambiguous.

Next, consider the right side of Eq. 44, which expresses α by using the γ introduced earlier. We have just pointed out that the raw fMRI signal must have been an fGn with α ≡ H. Consequently, H can be substituted for α in Eq. 44, giving H = 1 − γ/2, and γ can be expressed as

\( \gamma = 2 - 2H \).   (45)

The authors, referring to power law decays in the correlations, relate the spectral index, β, to γ as

\( \beta = 3 - \gamma \),   (46)

and further to α as

\( \beta = 2\alpha + 1 \).

Note that these relations between β, γ, and α in principle depend on the signal class, which was not reported.

Now, let us substitute γ as expressed in Eq. 45 into Eq. 46

\( \beta = 3 - (2 - 2H) = 1 + 2H \),

then express H

\( H = \frac{\beta - 1}{2} \).   (47)

As shown by Eke et al. (2000), Figure 2, and Eke et al. (2002), Table 1, based on the dichotomous fGn/fBm model, Eq. 47 would have unequivocally identified the case of an fBm signal. As pointed out earlier, the raw fMRI signal was summed before the actual fractal analysis. Consequently, the relationship β = 3 − γ holds only if the raw fMRI signal was an fGn process. This is therefore the second piece of evidence suggesting that the class of the raw fMRI signal must have been fGn. Nevertheless, the relationship β = 2α + 1 could not hold concomitantly, for reasons that follow. In an earlier paper of the group (Thurner et al., 2003), the authors stated: “The relationship is ambiguous, however, since some authors use the formula α = 2H + 1 for all values of α, while others use α = 2H−1 for α < 1 to restrict H to range (0,1). In this paper, we avoid this confusion by considering α directly instead of H.” The fGn/fBm model (Eke et al., 2002) helps resolve this issue, as neither of these relationships between α and H holds: if α is calculated with the signal class recognized and determined, the relationship between α and H is unequivocally α = HfGn and α = HfBm + 1. Based on the fGn/fBm model, the relationship between β and α given after Eq. 46 as β = 2α + 1 needs to be revised, too, to its correct form of β = 2α − 1 (see Table 1 in Eke et al., 2002).

Thurner et al. (2003) concluded: “Outside the brain and in non-active brain regions voxel-profile activity is well described by classical Brownian motion (random walk model, α ∼ 0.5 and β ∼ 2).” Recall, the “voxel-profile” is not the raw fMRI signal (intensity signal, Ix(t), most probably an fGn), but its summed form, Xnx(τ), an fBm.

Our conclusion on the above analysis by Thurner et al. (2003) is as follows: (i) α was improperly calculated by the authors' FA method, because α ∼ 0.5 cannot possibly be valid for an fBm signal given that αfBm > 1 (Peng et al., 1994); (ii) β ∼ 2 is only formally valid, given that it was calculated based on Eq. 46 from an improperly calculated α and by using an arbitrary relationship between α and β; the subsequent and opposite effects of these errors nevertheless rendered the value of β as β ∼ 2.

When the results of Thurner et al. (2003) are interpreted according to the analytical strategy of Eke et al. (2000) based on the dichotomous fGn/fBm model of Mandelbrot and Ness (1968), the reported values of Thurner et al. can be converted for their fMRI “voxel-profile” data Xnx to αfBm ∼ 1.5, βfBm ∼ 2, HfBm ∼ 0.5 or for the raw fMRI intensity signal Ix(t) to αfGn ∼ 0.5, βfGn ∼ 0, HfGn ∼ 0.5. This interpretation of the data reported for humans by Thurner et al. (2003) is fully compatible with the current findings by He (2011) on the human and by Herman et al. (2009, 2011) on the rat brain.
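For completeness, the conversion chain behind these values can be written out explicitly, assuming the standard fGn/fBm relations (Eke et al., 2002, Table 1) and that cumulative summation raises β by 2:

\begin{align*}
X_n^x\ (\mathrm{fBm,\ voxel\text{-}profile}):\quad & \beta_{\mathrm{fBm}} \sim 2,\quad H_{\mathrm{fBm}} = \tfrac{\beta_{\mathrm{fBm}} - 1}{2} \sim 0.5,\quad \alpha_{\mathrm{fBm}} = H_{\mathrm{fBm}} + 1 \sim 1.5,\\
I_x(t)\ (\mathrm{fGn,\ raw\ signal}):\quad & \beta_{\mathrm{fGn}} = \beta_{\mathrm{fBm}} - 2 \sim 0,\quad H_{\mathrm{fGn}} = \tfrac{\beta_{\mathrm{fGn}} + 1}{2} \sim 0.5,\quad \alpha_{\mathrm{fGn}} = H_{\mathrm{fGn}} \sim 0.5.
\end{align*}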

Multifractal analyses on rat fMRI BOLD data

An exemplary analysis of empirical BOLD data is presented for the 11.7T coronal scan shown in Figure 10 to demonstrate the inner workings of these methods when applied to empirical data, and to point to potential artifacts, too (see Figure 12). For monofractal analysis, we recommend using SSC, for it gives unbiased estimates across the full range of the fGn/fBm dichotomy. For this reason, the topology is well defined and not as noisy as on the PSD maps. MF-DFA, due to its inferior performance in the strongly correlated fGn range (see Figure 6 at q = 2), failed with this particular BOLD dataset. Also note that the histograms obtained for the same datasets evaluated by these different methods do differ, indicating that the methods' performances were different. Proper interpretation of the data therefore assumes an in-depth understanding of the implications of a method's performance for the analysis. Pc, and most certainly W, seems a promising parameter to map from the BOLD temporal datasets. Their proper statistical analyses, along with those of the singularity spectra for different anatomical locations in the brain, should be a direction of future research.

Physiological Correlates of Fractal Measures of fMRI BOLD Time Series

Eke and colleagues suggested and demonstrated that β should be regarded as a variable responding to physiology (Eke et al., 1997, 2000, 2002, 2006; Eke and Herman, 1999; Herman and Eke, 2006; Herman et al., 2009, 2011).

Soon after, Bullmore et al. (2001) suggested treating 1/f type fMRI BOLD time series as realizations of fBm processes for the purpose of facilitating their statistical analysis using pre-whitening strategies. For this reason, signal classification did not emerge as an issue to address. Then Thurner et al. (2003) demonstrated that human resting-state fMRI BOLD is not only a scale-free signal, but also responds to stimulation of the brain. Their analysis yielded this conclusion in a somewhat arbitrary manner, in that the importance of the fGn/fBm dichotomy was not recognized at the time, which led to flaws in the calculation of the scaling exponent, as demonstrated above. Hu et al. (2008) and Lee et al. (2008) also reported that H obtained by DFA can discriminate activation from noise in the fMRI BOLD signal.

In later studies dealing with the complexity of resting-state and task-related fluctuations of fMRI BOLD, the issue of signal class has gradually shifted into the focus (Maxim et al., 2005; Wink et al., 2008; Bullmore et al., 2009; He, 2011; Ciuciu et al., 2012).

As noted above, Herman et al. (2011) recently found in the rat brain using PSD that a significant population of fMRI BOLD signals fell into the non-stationary range of β. The inference of this finding is the potential interference of non-stationary signals with resting-state connectivity studies using spatio-temporal volumes of fMRI BOLD; even more so if SSC is used for signal classification (Figure 11) and analysis (Figure 12), shifting the population histogram of H′ to the right.

The β value converted from the reported human spectral slopes of Fox et al. (2007) (see above) fits very well within the range of human data reported most recently by He (2011) for the same instrument (3T Siemens Allegra MR scanner). He (2011), adopting the dichotomous monofractal analytical strategy of Eke et al. (2002), demonstrated that the β of spontaneous BOLD obtained for multiple regions of the human brain correlates with brain glucose metabolism, a fundamental functional parameter, offering grounds for the assertion that β itself is a functional parameter. Herman et al. (2011), using the same analytical strategy (Eke et al., 2000, 2002) on resting-state rat BOLD datasets, showed that β maps capture a gray vs. white matter topology, arguing for a correlation between β and the functional activity of brain regions, β being higher in the gray than in the white matter.

With near infrared spectroscopy – recommended by Fox and Raichle (2007) as a cost-effective, mobile measurement alternative to fMRI for capturing resting-state hemodynamic fluctuations in the brain – a 1/f^β temporal distribution of cerebral blood volume (one of the determinants of BOLD) was found in humans, with an age and gender dependence of β (Eke et al., 2006). Furthermore, β determined from heart rate variability time series was found to differ between healthy and unhealthy individuals (Makikallio et al., 2001).

The above physiological correlates seem to have opened a new perspective in basic and clinical neurosciences (Hausdorff et al., 1997) by recognizing β as an experimental variable and applying adequate tools for its reliable assessment (Pilgram and Kaplan, 1998; Eke et al., 2000, 2002; Bullmore et al., 2009; He, 2011), with multifractal analyses offering a dynamically expanding perspective (Ciuciu et al., 2012; Ihlen, 2012), too.

We propose that the inter-regional spatial cross-correlation (connectivity) as a means of revealing spatial organization in the brain be supplemented by a temporal AC analysis of extended BOLD signal time series by mapping β as an index of temporal organization of the brain’s spontaneous activity.
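A minimal sketch of what such voxel-wise β mapping could look like computationally (array shapes, the mask, and the band limits are hypothetical; NumPy and SciPy assumed), along the lines of the PSD sketch given earlier:

import numpy as np
from scipy.signal import periodogram

def beta_map(bold_4d, mask, fs=5.0, f_lo=0.02, f_hi=0.3):
    # Voxel-wise map of the spectral index over a hypothetical (x, y, z, t) BOLD volume.
    out = np.full(mask.shape, np.nan)
    for idx in zip(*np.nonzero(mask)):                 # iterate over in-mask voxels only
        ts = bold_4d[idx] - bold_4d[idx].mean()
        f, pxx = periodogram(ts, fs=fs, detrend=False)
        band = (f >= f_lo) & (f <= f_hi) & (pxx > 0)
        out[idx] = -np.polyfit(np.log(f[band]), np.log(pxx[band]), 1)[0]
    return out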

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors thank the technicians, scientists, and engineers at MRRC (mrrc.yale.edu), and QNMR (qnmr.yale.edu). This work was supported by grants from the National Institutes of Health (R01 MH-067528 and P30 NS-52519 to Fahmeed Hyder) and from the Hungarian Scientific Research Fund (OTKA grants I/3 2040, T 016953, T 034122, NIH Grants TW00442, and RR1243 to Andras Eke). Dr. Peter Mukli has been supported by the Semmelweis University Magister Project (TÁMOP-4.2.2/B-10/1-2010–0013).

References

  1. Arneodo A., Audit B., Bacry E., Manneville S., Muzy J. F., Roux S. G. (1998). Thermodynamics of fractal signals based on wavelet analysis: application to fully developed turbulence data and DNA sequences. Physica A 254, 24–45 10.1016/S0378-4371(98)00002-8 [DOI] [Google Scholar]
  2. Arneodo A., Bacry E., Muzy J. F. (1995). The thermodynamics of fractals revisited with wavelets. Physica A 213, 232–275 10.1016/0378-4371(94)00163-N [DOI] [Google Scholar]
  3. Bacry E., Muzy J. F., Arneodo A. (1993). Singularity spectrum of fractal signals from wavelet analysis – exact results. J. Stat. Phys. 70, 635–674 10.1007/BF01053588 [DOI] [Google Scholar]
  4. Bandettini P. A. (1993). “MRI studies of brain activation: temporal characteristic,” in Proceedings of the First Annual Meeting of the International Society of Magnetic Resonance in Medicine (Dallas: Society of Magnetic Resonance in Medicine), 143–151 [Google Scholar]
  5. Barabási A. L., Vicsek T. (1991). Multifractality of self-affine fractals. Phys. Rev. A 44, 2730–2733 10.1103/PhysRevA.44.2730 [DOI] [PubMed] [Google Scholar]
  6. Barnsley M. F. (1988). Fractals Everywhere. Boston: Academic Press [Google Scholar]
  7. Barunik J., Kristoufek L. (2010). On Hurst exponent estimation under heavy-tailed distributions. Physica A 389, 3844–3855 10.1016/j.physa.2010.05.025 [DOI] [Google Scholar]
  8. Bashan A., Bartsch R., Kantelhardt J. W., Havlin S. (2008). Comparison of detrending methods for fluctuation analysis. Physica A 387, 5080–5090 10.1016/j.physa.2008.04.023 [DOI] [Google Scholar]
  9. Bassingthwaighte J. B. (1988). Physiological heterogeneity: fractals link determinism and randomness in structures and functions. News Physiol. Sci. 3, 5–10 [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Bassingthwaighte J. B., Liebovitch L. S., West B. J. (1994). Fractal Physiology. New York: Published for the American Physiological Society by Oxford University Press [Google Scholar]
  11. Bassingthwaighte J. B., Raymond G. M. (1994). Evaluating rescaled range analysis for time series. Ann. Biomed. Eng. 22, 432–444 10.1007/BF02368250 [DOI] [PubMed] [Google Scholar]
  12. Bassingthwaighte J. B., Raymond G. M. (1995). Evaluation of the dispersional analysis method for fractal time series. Ann. Biomed. Eng. 23, 491–505 10.1007/BF02584449 [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Benzi R., Biferale L., Crisanti A., Paladin G., Vergassola M., Vulpiani A. (1993). A random process for the construction of multiaffine fields. Physica D 65, 352–358 10.1016/0167-2789(93)90060-E [DOI] [Google Scholar]
  14. Biswal B., Yetkin F. Z., Haughton V. M., Hyde J. S. (1995). Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magn. Reson. Med. 34, 537–541 10.1002/mrm.1910340409 [DOI] [PubMed] [Google Scholar]
  15. Bryce R. M., Sprague K. B. (2012). Revisiting detrended fluctuation analysis. Sci. Rep. 2, 1–6 10.1038/srep00315 [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Bullmore E., Barnes A., Bassett D. S., Fornito A., Kitzbichler M., Meunier D., et al. (2009). Generic aspects of complexity in brain imaging data and other biological systems. Neuroimage 47, 1125–1134 10.1016/j.neuroimage.2009.05.032 [DOI] [PubMed] [Google Scholar]
  17. Bullmore E., Long C., Suckling J., Fadili J., Calvert G., Zelaya F., et al. (2001). Colored noise and computational inference in neurophysiological (fMRI) time series analysis: resampling methods in time and wavelet domains. Hum. Brain Mapp. 12, 61–78 [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Caccia D. C., Percival D., Cannon M. J., Raymond G., Bassingthwaighte J. B. (1997). Analyzing exact fractal time series: evaluating dispersional analysis and rescaled range methods. Physica A 246, 609–632 10.1016/S0378-4371(97)00363-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Cannon M. J., Percival D. B., Caccia D. C., Raymond G. M., Bassingthwaighte J. B. (1997). Evaluating scaled windowed variance methods for estimating the Hurst coefficient of time series. Physica A 241, 606–626 10.1016/S0378-4371(97)00252-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Ciuciu P., Varoquaux G., Abry P., Sadaghiani S., Kleinschmidt A. (2012). Scale-free and multifractal time dynamics of fMRI signals during rest and task. Front. Physiol. 3:186. 10.3389/fphys.2012.00186 [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Cramer F. (1993). Chaos and Order. New York: VCH Verlagsgesellschaft [Google Scholar]
  22. Davies R. B., Harte D. S. (1987). Test for Hurst effect. Biometrika 74, 95–101 10.1093/biomet/74.1.33 [DOI] [Google Scholar]
  23. Delignieres D., Ramdani S., Lemoine L., Torre K., Fortes M., Ninot G. (2006). Fractal analyses for ‘short’ time series: a re-assessment of classical methods. J. Math. Psychol. 50, 525–544 10.1016/j.jmp.2006.07.004 [DOI] [Google Scholar]
  24. Delignieres D., Torre K. (2009). Fractal dynamics of human gait: a reassessment of the 1996 data of Hausdorff et al. J. Appl. Physiol. 106, 1272–1279 10.1152/japplphysiol.90757.2008 [DOI] [PubMed] [Google Scholar]
  25. Delignieres D., Torre K., Lemoine L. (2005). Methodological issues in the application of monofractal analyses in psychological and behavioral research. Nonlinear Dynamics Psychol. Life Sci. 9, 435–461 [PubMed] [Google Scholar]
  26. Eke A., Herman P. (1999). Fractal analysis of spontaneous fluctuations in human cerebral hemoglobin content and its oxygenation level recorded by NIRS. Adv. Exp. Med. Biol. 471, 49–55 10.1007/978-1-4615-4717-4_7 [DOI] [PubMed] [Google Scholar]
  27. Eke A., Herman P., Bassingthwaighte J. B., Raymond G. M., Balla I., Ikrenyi C. (1997). Temporal fluctuations in regional red blood cell flux in the rat brain cortex is a fractal process. Adv. Exp. Med. Biol. 428, 703–709 10.1007/978-1-4615-5399-1_98 [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Eke A., Herman P., Bassingthwaighte J. B., Raymond G. M., Percival D. B., Cannon M., et al. (2000). Physiological time series: distinguishing fractal noises from motions. Pflugers Arch. 439, 403–415 10.1007/s004240050957 [DOI] [PubMed] [Google Scholar]
  29. Eke A., Herman P., Hajnal M. (2006). Fractal and noisy CBV dynamics in humans: influence of age and gender. J. Cereb. Blood Flow Metab. 26, 891–898 10.1038/sj.jcbfm.9600243 [DOI] [PubMed] [Google Scholar]
  30. Eke A., Herman P., Kocsis L., Kozak L. R. (2002). Fractal characterization of complexity in temporal physiological signals. Physiol. Meas. 23, R1–R38 10.1088/0967-3334/23/1/301 [DOI] [PubMed] [Google Scholar]
  31. Faghfouri A., Kinsner W. (2005). “Local and global analysis of multifractal singularity spectrum through wavelets,” in Canadian Conference on Electrical and Computer Engineering 2005, Saskatoon, 2163–2169 [Google Scholar]
  32. Falconer K. J. (1990). Fractal Geometry: Mathematical Foundations and Applications. Chichester: Wiley [Google Scholar]
  33. Figliola A., Serrano E., Paccosi G., Rosenblatt M. (2010). About the effectiveness of different methods for the estimation of the multifractal spectrum of natural series. Int. J. Bifurcat. Chaos 20, 331–339 10.1142/S0218127410025788 [DOI] [Google Scholar]
  34. Fougere P. F. (1985). On the accuracy of spectrum analysis of red noise processes using maximum entropy and periodogram methods: simulation studies and application to geographical data. J. Geogr. Res. 90, 4355–4366 [Google Scholar]
  35. Fox M. D., Raichle M. E. (2007). Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nat. Rev. Neurosci. 8, 700–711 10.1038/nrn2201 [DOI] [PubMed] [Google Scholar]
  36. Fox M. D., Snyder A. Z., Vincent J. L., Raichle M. E. (2007). Intrinsic fluctuations within cortical systems account for inter-trial variability in human behavior. Neuron 56, 171–184 10.1016/j.neuron.2007.08.023 [DOI] [PubMed] [Google Scholar]
  37. Frisch U., Parisi G. (1985). “Turbulence and predictability in geophysical fluid dynamics and climate dynamics,” in Fully Developed Turbulence and Intermittency Appendix: On the Singularity Structure of Fully Developed Structure, eds Ghil M., Benzi R., Parisi G. (Amsterdam: North-Holland; ), 823 [Google Scholar]
  38. Friston K. J., Josephs O., Zarahn E., Holmes A. P., Rouquette S., Poline J. B. (2000). To smooth or not to smooth? Bias and efficiency in fMRI time-series analysis. Neuroimage 12, 196–208 10.1006/nimg.2000.0609 [DOI] [PubMed] [Google Scholar]
  39. Gao J., Hu J., Tung W.-W. (2011). Facilitating joint chaos and fractal analysis of biosignals through nonlinear adaptive filtering. PLoS ONE 6, e24331. 10.1371/journal.pone.0024331 [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Gao J., Hu J., Tung W. W., Cao Y., Sarshar N., Roychowdhury V. P. (2006). Assessment of long-range correlation in time series: how to avoid pitfalls. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 73, 016117. 10.1103/PhysRevE.73.036602 [DOI] [PubMed] [Google Scholar]
  41. Gao J., Sultan H., Hu J., Tung W. W. (2010). Denoising nonlinear time series by adaptive filtering and wavelet shrinkage: a comparison. IEEE Signal Process. Lett. 17, 3. 10.1109/LSP.2010.2050174 [DOI] [Google Scholar]
  42. Gao J. B., Cao Y., Tung W.-W., Hu J. (2007). Multiscale Analysis of Complex Time Series – Integration of Chaos and Random Fractal Theory, and Beyond. Hoboken, NJ: Wiley Interscience [Google Scholar]
  43. Gilden D. L. (2001). Cognitive emissions of 1/f noise. Psychol. Rev. 108, 33–56 10.1037/0033-295X.108.1.33 [DOI] [PubMed] [Google Scholar]
  44. Gilden D. L. (2009). Global model analysis of cognitive variability. Cogn. Sci. 33, 1441–1467 10.1111/j.1551-6709.2009.01060.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Gilden D. L., Hancock H. (2007). Response variability in attention-deficit disorders. Psychol. Sci. 18, 796–802 10.1111/j.1467-9280.2007.01982.x [DOI] [PubMed] [Google Scholar]
  46. Gilden D. L., Thornton T., Mallon M. W. (1995). 1/f Noise in human cognition. Science 267, 1837–1839 10.1126/science.7892611 [DOI] [PubMed] [Google Scholar]
  47. Gouyet J. F. (1996). Physics and Fractal Structure. Paris: Masson [Google Scholar]
  48. Grech D., Pamula G. (2012). Multifractal background noise of monofractal signals. Acta Phys. Pol. A 121, B34–B39 [Google Scholar]
  49. Gu G. F., Zhou W. X. (2006). Detrended fluctuation analysis for fractals and multifractals in higher dimensions. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 74, 1–7 10.1103/PhysRevE.74.061104 [DOI] [PubMed] [Google Scholar]
  50. Gu G.-F., Zhou W.-X. (2010). Detrending moving average algorithm for multifractals. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 82, 1–12 10.1103/PhysRevE.82.011136 [DOI] [PubMed] [Google Scholar]
  51. Hartmann A., Mukli P., Nagy Z., Kocsis L., Herman P., Eke A. (2012). Real-time fractal signal processing in the time domain. Physica A 392, 89–102 10.1016/j.physa.2012.08.002 [DOI] [Google Scholar]
  52. Hausdorff F. (1918). Dimension und äuß eres Maß. Math. Ann. 79, 157–179 10.1007/BF01457179 [DOI] [Google Scholar]
  53. Hausdorff J. M., Mitchell S. L., Firtion R. E., Peng C. K., Cudkowicz M. E., Wei J. Y., et al. (1997). Altered fractal dynamics of gait: reduced stride-interval correlations with aging and Huntington’s disease. J. Appl. Physiol. 82, 262–269 [DOI] [PubMed] [Google Scholar]
  54. He B. J. (2011). Scale-free properties of the functional magnetic resonance imaging signal during rest and task. J. Neurosci. 31, 13786–13795 10.1523/JNEUROSCI.2111-11.2011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Herman P., Eke A. (2006). Nonlinear analysis of blood cell flux fluctuations in the rat brain cortex during stepwise hypotension challenge. J. Cereb. Blood Flow Metab. 26, 1189–1197 10.1038/sj.jcbfm.9600165 [DOI] [PubMed] [Google Scholar]
  56. Herman P., Kocsis L., Eke A. (2009). Fractal characterization of complexity in dynamic signals: application to cerebral hemodynamics. Methods Mol. Biol. 489, 23–40 10.1007/978-1-59745-543-5_2 [DOI] [PubMed] [Google Scholar]
  57. Herman P., Sanganahalli B. G., Hyder F., Eke A. (2011). Fractal analysis of spontaneous fluctuations of the BOLD signal in rat brain. Neuroimage 58, 1060–1069 10.1016/j.neuroimage.2011.06.082 [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Hosking J. R. M. (1984). Modeling persistence in hydrological time series using fractional differencing. Water Resour. Res. 20, 1898–1908 10.1029/WR020i012p01898 [DOI] [Google Scholar]
  59. Hu J., Lee J. M., Gao J., White K. D., Crosson B. (2008). Assessing a signal model and identifying brain activity from fMRI data by a detrending-based fractal analysis. Brain Struct. Funct. 212, 417–426 10.1007/s00429-007-0166-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Hyder F., Patel A. B., Gjedde A., Rothman D. L., Behar K. L., Shulman R. G. (2006). Neuronal-glial glucose oxidation and glutamatergic-GABAergic function. J. Cereb. Blood Flow Metab. 26, 865–877 10.1038/sj.jcbfm.9600263 [DOI] [PubMed] [Google Scholar]
  61. Hyder F., Rothman D. L., Blamire A. M. (1995). Image reconstruction of sequentially sampled echo-planar data. Magn. Reson. Imaging 13, 97–103 10.1016/0730-725X(94)00068-E [DOI] [PubMed] [Google Scholar]
  62. Ihlen E. A. F. (2012). Introduction to multifractal detrended fluctuation analysis in matlab. Front. Physiol. 3:141. 10.3389/fphys.2012.00141 [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Jezzard P., Song A. W. (1996). Technical foundations and pitfalls of clinical fMRI. Neuroimage 4, S63–S75 10.1006/nimg.1996.0056 [DOI] [PubMed] [Google Scholar]
  64. Kantelhardt J. W., Zschiegner S. A., Koscielny-Bunde E., Havlin S., Bunde A., Stanley H. E. (2002). Multifractal detrended fluctuation analysis of nonstationary time series. Physica A 316, 87–114 10.1016/S0378-4371(02)01383-3 [DOI] [Google Scholar]
  65. Kestener P., Arneodo A. (2003). Three-dimensional wavelet-based multifractal method: the need for revisiting the multifractal description of turbulence dissipation data. Phys. Rev. Lett. 91, 194501. 10.1103/PhysRevLett.91.194501 [DOI] [PubMed] [Google Scholar]
  66. Kwong K. K., Belliveau J. W., Chesler D. A., Goldberg I. E., Weisskoff R. M., Poncelet B. P., et al. (1992). Dynamic magnetic resonance imaging of human brain activity during primary sensory stimulation. Proc. Natl. Acad. Sci. U.S.A. 89, 5675–5679 10.1073/pnas.89.12.5675 [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Lashermes B., Abry P. (2004). New insight in the estimation of scaling exponents. Int. J. Wavelets Multi. 2, 497–523 10.1142/S0219691304000597 [DOI] [Google Scholar]
  68. Lashermes B., Jaffard S., Abry P. (2005). “Wavelet leader based multifractal analysis,” in IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP ′05. IV (Philadelphia: Institute of Electrical and Electronics Engineers, IEEE), 161–164 [Google Scholar]
  69. Lauterbur P. C. (1973). Image formation by induced local interactions: examples employing nuclear magnetic resonance. Nature 242, 190–191 10.1038/242190a0 [DOI] [PubMed] [Google Scholar]
  70. Lee J. M., Hu J., Gao J., Crosson B., Peck K. K., Wierenga C. E., et al. (2008). Discriminating brain activity from task-related artifacts in functional MRI: fractal scaling analysis simulation and application. Neuroimage 40, 197–212 10.1016/j.neuroimage.2007.11.019 [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Liebovitch L. S., Tóth T. (1989). A fast algorithm to determine fractal dimensions by box counting. Phys. Lett. A 141, 386–390 10.1016/0375-9601(89)90854-2 [DOI] [Google Scholar]
  72. Makikallio T. H., Huikuri H. V., Hintze U., Videbaek J., Mitrani R. D., Castellanos A., et al. (2001). Fractal analysis and time- and frequency-domain measures of heart rate variability as predictors of mortality in patients with heart failure. Am. J. Cardiol. 87, 178–182 10.1016/S0002-9149(00)01312-6 [DOI] [PubMed] [Google Scholar]
  73. Makowiec D., Rynkiewicz A., Wdowczyk-Szulc J., Zarczynska-Buchowiecka M. (2012). On reading multifractal spectra. Multifractal age for healthy aging humans by analysis of cardiac interbeat time intervals. Acta Phys. Pol. B Proc. Suppl. 5, 159–170 10.5506/APhysPolBSupp.5.159 [DOI] [Google Scholar]
  74. Mallat S. (1999). A Wavelet Tour of Signal Processing. San Diego: Academic Press [Google Scholar]
  75. Mallat S., Hwang W. L. (1992). Singularity detection and processing with wavelets. IEEE Trans. Inf. Theory 38, 617–643 10.1109/18.119727 [DOI] [Google Scholar]
  76. Mandelbrot B. B. (1967). How long is the coast of Britain? Statistical self-similarity and fractional dimension. Science 156, 636–638 10.1126/science.156.3775.636 [DOI] [PubMed] [Google Scholar]
  77. Mandelbrot B. B. (1980). Fractal aspects of the iteration of z → λz(1 − z) for complex λ and z. Ann. N. Y. Acad. Sci. 357, 249–259 10.1111/j.1749-6632.1980.tb29690.x [DOI] [Google Scholar]
  78. Mandelbrot B. B. (1983). The Fractal Geometry of Nature. New York: W. H. Freeman and Co [Google Scholar]
  79. Mandelbrot B. B. (1985). Self-affine fractals and fractal dimension. Phys. Scripta 32, 257–260 10.1088/0031-8949/32/4/001 [DOI] [Google Scholar]
  80. Mandelbrot B. B. (1986). “Fractals and multifractals: noise, turbulence and non-fractal patterns in physics,” in On Growth and Form: Fractal and Non-Fractal Patterns in Physics, eds Stanley H. E., Ostrowsky N. (Dordrecht: Nijhoff), 279 [Google Scholar]
  81. Mandelbrot B. B., Ness J. W. V. (1968). Fractional Brownian motion, fractional noises and applications. SIAM Rev. Soc. Ind. Appl. Math. 10, 422–437 [Google Scholar]
  82. Marmelat V., Delignieres D. (2011). Complexity, coordination, and health: avoiding pitfalls and erroneous interpretations in fractal analyses. Medicina (Kaunas) 47, 393–398 [PubMed] [Google Scholar]
  83. Maxim V., Sendur L., Fadili J., Suckling J., Gould R., Howard R., et al. (2005). Fractional Gaussian noise, functional MRI and Alzheimer’s disease. Neuroimage 25, 141–158 10.1016/j.neuroimage.2004.10.044 [DOI] [PubMed] [Google Scholar]
  84. Muzy J. F., Bacry E., Arneodo A. (1991). Wavelets and multifractal formalism for singular signals: application to turbulence data. Phys. Rev. Lett. 67, 3515–3518 10.1103/PhysRevLett.67.3515 [DOI] [PubMed] [Google Scholar]
  85. Muzy J. F., Bacry E., Arneodo A. (1993). Multifractal formalism for fractal signals – the structure-function approach versus the wavelet-transform modulus-maxima method. Phys. Rev. E Stat. Phys. Plasmas Fluids Relat. Interdiscip. Topics 47, 875–884 10.1103/PhysRevE.47.875 [DOI] [PubMed] [Google Scholar]
  86. Muzy J. F., Bacry E., Arneodo A. (1994). The multifractal formalism revisited with wavelets. Int. J. Bifurcat. Chaos 4, 245–302 10.1142/S0218127494000204 [DOI] [Google Scholar]
  87. Ogawa S., Lee T. M. (1990). Magnetic resonance imaging of blood vessels at high fields. Magn. Reson. Med. 16, 9–18 10.1002/mrm.1910160103 [DOI] [PubMed] [Google Scholar]
  88. Ogawa S., Lee T. M., Kay A. R., Tank D. W. (1990). Brain magnetic resonance imaging with contrast dependent on blood oxygenation. Proc. Natl. Acad. Sci. U.S.A. 87, 9868–9872 10.1073/pnas.87.24.9868 [DOI] [PMC free article] [PubMed] [Google Scholar]
  89. Ogawa S., Menon R. S., Tank D. W., Kim S., Merkle H., Ellermann J. M., et al. (1993a). Functional brain mapping by blood oxygenation level-dependent contrast magnetic resonance imaging – a comparison of signal characteristics with a biophysical model. Biophys. J. 64, 803–812 10.1016/S0006-3495(93)81441-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Ogawa S., Lee T. M., Barrere B. (1993b). The sensitivity of magnetic resonance image signals of a rat brain to changes in the cerebral venous blood oxygenation. Magn. Reson. Med. 29, 205–210 10.1002/mrm.1910290208 [DOI] [PubMed] [Google Scholar]
  91. Peng C.-K., Buldyrev S. V., Havlin S., Simons M., Stanley H. E., Goldberger A. L. (1994). Mosaic organization of DNA nucleotides. Phys. Rev. E Stat. Phys. Plasmas Fluids Relat. Interdiscip. Topics 49, 1685–1689 10.1103/PhysRevE.49.1685 [DOI] [PubMed] [Google Scholar]
  92. Phelan S. E. (2001). What is complexity science, really? Emergence 3, 120–136 10.1207/S15327000EM0301_08 [DOI] [Google Scholar]
  93. Pilgram B., Kaplan D. T. (1998). A comparison of estimators for 1/f noise. Physica D 114, 108–122 10.1016/S0167-2789(97)00188-7 [DOI] [Google Scholar]
  94. Qian X. Y., Gu G. F., Zhou W. X. (2011). Modified detrended fluctuation analysis based on empirical mode decomposition for the characterization of anti-persistent processes. Physica A 390, 4388–4395 10.1016/j.physa.2011.07.008 [DOI] [Google Scholar]
  95. Raichle M. E., MacLeod A. M., Snyder A. Z., Powers W. J., Gusnard D. A., Shulman G. L. (2001). A default mode of brain function. Proc. Natl. Acad. Sci. U.S.A. 98, 676–682 10.1073/pnas.98.2.676 [DOI] [PMC free article] [PubMed] [Google Scholar]
  96. Raichle M. E., Mintun M. A. (2006). Brain work and brain imaging. Annu. Rev. Neurosci. 29, 449–476 10.1146/annurev.neuro.29.051605.112819 [DOI] [PubMed] [Google Scholar]
  97. Razavi M., Eaton B., Paradiso S., Mina M., Hudetz A. G., Bolinger L. (2008). Source of low-frequency fluctuations in functional MRI signal. J. Magn. Reson. Imaging 27, 891–897 10.1002/jmri.21283 [DOI] [PubMed] [Google Scholar]
  98. Serrano E., Figliola A. (2009). Wavelet leaders: a new method to estimate the multifractal singularity spectra. Physica A 388, 2793–2805 10.1016/j.physa.2009.03.043 [DOI] [Google Scholar]
  99. Shimizu Y., Barth M., Windischberger C., Moser E., Thurner S. (2004). Wavelet-based multifractal analysis of fMRI time series. Neuroimage 22, 1195–1202 10.1016/j.neuroimage.2004.03.007 [DOI] [PubMed] [Google Scholar]
  100. Shulman R. G., Rothman D. L., Hyder F. (2007). A BOLD search for baseline. Neuroimage 36, 277–281 10.1016/j.neuroimage.2006.11.035 [DOI] [PMC free article] [PubMed] [Google Scholar]
  101. Simonsen I., Hansen A. (1998). Determination of the Hurst exponent by use of wavelet transforms. Phys. Rev. E Stat. Phys. Plasmas Fluids Relat. Interdiscip. Topics 58, 2779–2787 10.1103/PhysRevE.58.2779 [DOI] [Google Scholar]
  102. Smith A. J., Blumenfeld H., Behar K. L., Rothman D. L., Shulman R. G., Hyder F. (2002). Cerebral energetics and spiking frequency: the neurophysiological basis of fMRI. Proc. Natl. Acad. Sci. U.S.A. 99, 10765–10770 10.1073/pnas.122612899 [DOI] [PMC free article] [PubMed] [Google Scholar]
  103. Thurner S., Windischberger C., Moser E., Walla P., Barth M. (2003). Scaling laws and persistence in human brain activity. Physica A 326, 511–521 10.1016/S0378-4371(03)00279-6 [DOI] [Google Scholar]
  104. Tung W., Gao J., Hu J., Yang L. (2011). Detecting chaos in heavy-noise environments. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 83, 046210. 10.1103/PhysRevE.83.046210 [DOI] [PubMed] [Google Scholar]
  105. Turiel A., Pérez-Vicente C. J., Grazzini J. (2006). Numerical methods for the estimation of multifractal singularity spectra on sampled data: a comparative study. J. Comput. Phys. 216, 362–390 10.1016/j.jcp.2005.12.004 [DOI] [Google Scholar]
  106. Weitkunat R. (1991). Digital Biosignal Processing. New York: Elsevier Science Inc [Google Scholar]
  107. Wink A., Bullmore E., Barnes A., Bernard F., Suckling J. (2008). Monofractal and multifractal dynamics of low frequency endogenous brain oscillations in functional MRI. Hum. Brain Mapp. 29, 791–801 10.1002/hbm.20593 [DOI] [PMC free article] [PubMed] [Google Scholar]
  108. Yu L., Qi D. W. (2011). Applying multifractal spectrum combined with fractal discrete Brownian motion model to wood defects recognition. Wood Sci. Technol. 45, 511–519 10.1007/s00226-010-0341-7 [DOI] [Google Scholar]
  109. Zarahn E., Aguirre G. K., D’Esposito M. (1997). Empirical analyses of BOLD fMRI statistics. I. Spatially unsmoothed data collected under null-hypothesis conditions. Neuroimage 5, 179–197 10.1006/nimg.1997.0263 [DOI] [PubMed] [Google Scholar]
