Abstract
In this paper, we analyse the convergence, accuracy and stability of the intrinsic frequency (IF) method. The IF method is a descendant of the sparse time–frequency representation methods, which are designed for analysing nonlinear and non-stationary signals. Specifically, the IF method was created to analyse the cardiovascular system, which is by nature a nonlinear and non-stationary dynamical system. The IF method is capable of handling specific nonlinear and non-stationary signals with less mathematical regularity. In previous works, we showed the clinical importance of the IF method and demonstrated that it can be used to evaluate cardiovascular performance. In this article, we present further details of the mathematical background of the IF method by discussing the convergence and the accuracy of the method with and without noise. It will be shown that the waveform fit extracted from the signal is accurate even in the presence of noise.
Keywords: intrinsic frequency, instantaneous frequency, pulse wave analysis, cardiovascular disease, ventricular/arterial coupling
1. Introduction
1.1. An introduction to the origins of the intrinsic frequency method
Accessing and using the information hidden in signals requires methods for processing and analysing them. Such methods must be able to denoise and analyse the signal properly in order to process the data. Mathematically, the easiest way to construct a signal processing method is to project the recorded signal on a predetermined algebraic basis or dictionary. A classical method for this task is the Fourier transform (FT) and a more recent one is the wavelet transform (WT) [1,2]. The FT method is based on the strong assumption of periodicity and lacks time–frequency localization. The WT method, on the other hand, was proposed as a method that incorporates a time–frequency analysis of the signal by constructing a large dictionary of orthonormal functions.
The FT and WT methods share one common property: decomposition is performed on a predefined basis, which is troublesome if the signal is not stationary. Recently, Huang et al. proposed empirical mode decomposition (EMD), a new method of adaptive signal processing [3–5] in which the basis of the projection is adaptive. EMD, which uses multiscale data-driven decompositions called intrinsic mode functions (IMFs), is a step forward in data analysis. It has eliminated most of the issues present in the FT and WT methods [3–5].
In particular, EMD can produce a faithful extraction even if the signal is not periodic, and it provides a sparser time–frequency analysis of the data. Projection onto a fixed basis is not the ultimate goal in many recently developed signal processing methods; instead, researchers have tried to use projections that are as sparse as possible. In other words, it is important to represent the signal in a basis by keeping only a few coefficients containing the pertinent information. In these methods, one projects the observed signal on a large overcomplete basis (a dictionary [6–8]).
Because the IMFs are extracted adaptively from the data in EMD, the final decomposition is in general sparser than those obtained with the FT or WT methods. If the data have a certain frequency scale-separation property, then the extracted IMFs convey certain physical properties of the signal. Unfortunately, the empirical nature of EMD makes it hard to analyse the results rigorously [9–11]. In order to eliminate this problem, Hou and Shi proposed a rigorous mathematical framework as a counterpart of the EMD method [9–11]. This method is called the sparse time–frequency representation (STFR) method.
All STFR methods are based on the assumption that a relatively large subclass of oscillatory signals consists of signals of the form
f(t) = a(t) cos(θ(t)),    (1.1)
with only one extremum between consecutive zeros of the signal, in which the envelope is strictly positive, a(t) > 0, and the phase function θ(t) is a one-to-one, strictly increasing map between the time coordinate, t, and the phase coordinate, θ. The time derivative of this phase function is called the instantaneous frequency. Physically speaking, θ(t) carries information about the rate of change of the signal in time. With some abuse of notation, we can say that the STFR methods deal with signals that have both amplitude modulation and frequency modulation. The type of signal in equation (1.1) is called an IMF in STFR terminology. A finite linear combination of a collection of IMFs is called an intrinsic signal. The goal of the STFR method is to decompose a signal into the sparsest set of IMFs.
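As a concrete illustration of an IMF of the form (1.1), the minimal sketch below (our own toy example, not part of any STFR software; the envelope, phase and all numerical values are arbitrary choices) builds an amplitude- and frequency-modulated signal a(t) cos(θ(t)) and evaluates its instantaneous frequency θ′(t).

```python
import numpy as np

# Toy IMF of the form (1.1): f(t) = a(t) * cos(theta(t)), with a(t) > 0 and a
# strictly increasing phase theta(t).  All numerical choices are arbitrary.
t = np.linspace(0.0, 1.0, 2000)
a = 1.0 + 0.3 * np.sin(2 * np.pi * t)                  # envelope a(t), always positive
theta = 16 * np.pi * t + 0.5 * np.sin(2 * np.pi * t)   # one-to-one, strictly increasing
f = a * np.cos(theta)                                  # the IMF itself

# The instantaneous frequency is the time derivative of the phase:
# here 16*pi + pi*cos(2*pi*t), i.e. it oscillates around 16*pi rad s^-1.
inst_freq = np.gradient(theta, t)
print(inst_freq.min(), inst_freq.max())
```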
A number of methods can extract each IMF from a combination of many IMFs, with different levels of accuracy. Methods that perform such extraction well include, but are not limited to, EMD [4], ensemble EMD [12], optimization-based EMD [13], wavelets [2], STFR [9–11] and synchrosqueezed wavelet transforms [14]. However, when it comes to signals with strong frequency modulation, these methods have difficulty extracting a unique IMF, especially when the data are polluted with noise [9–11,15,16]. Among these methods, the STFR method provides a better physical and mathematical understanding [15].
The intrinsic frequency (IF) method is, in fact, a modified version of the STFR methods [17] that is specialized to analyse certain physiological signals; in this paper, the cardiovascular pulse pressure.
1.2. Intrinsic frequency and its clinical importance
When examining the cardiovascular system and the pressure waveforms output by the heart during the cardiac cycle, we see a characteristic signal which is generated by the contraction of the left ventricle, the closure of the aortic valve (dicrotic notch), and the dynamics of the aorta and its branches. In this regard, the phase of the cardiac cycle during left ventricular contraction prior to the dicrotic notch is referred to as systole and the remainder as diastole. By applying the periodic STFR method to arterial pressure waveforms, we observed that the instantaneous frequency changes its range of oscillation before and after the closure of the aortic valve (i.e. the dicrotic notch); see fig. 6 in [17]. This behaviour was consistently observed across a number of different cardiovascular conditions, such as changes in heart rate and aortic rigidity [17,18]. Based on this observation, we proposed a modified version of the STFR to find the dominant frequencies during each respective phase of the cardiac cycle, systole and diastole. We have called this modified version the IF method [17,18].
The IF methodology encompasses the non-stationary dynamics of the cardiovascular system that has been ignored in Windkessel models [19]. Westerhof et al. mentioned that ‘Windkessel model is a lumped model of the arterial system or part thereof. Wave transmission and wave travel cannot be studied. Blood flow distribution and changes in the distribution cannot be represented. Effects of local vascular changes, e.g., change in aortic compliance while other vessels are not affected, cannot be studied’ [19]. Therefore, the Windkessel models are extremely limited by their own nature. The IF method, on the other hand, does not make any simplifications or assumptions about the underlying cardiovascular system [17,18]. In this sense, the IF approach and Windkessel approach are fundamentally different. The IF method is constructed based on a more physical model encompassing wave dynamics1 and all other dynamical aspects of the cardiovascular system [17,18].
The IF method assumes that, because of the dynamics of the heart and aorta, there are two constant dominant frequencies before and after the dicrotic notch. These dominant frequencies have been called IFs. The clinical relevance of the IF method for evaluating cardiovascular performance and detecting cardiovascular disease has been established [17]; for example, that work explicitly showed how ω2 (see §2) decreases with age (decreasing compliance). Furthermore, the IF method is capable of approximating the left ventricular ejection fraction via non-invasive measurements [26].
Ignoring the effect of Mayer waves, we can assume that the pressure waveform at the entrance of aorta is almost periodic. Using STFR terminology here, we are trying to extract a single IMF from the pressure wave signal. We have shown that this IMF conveys the dynamic characteristics of the heart–aorta system [17,18]. However, the IMF of the pressure wave has a sharp edge at the location of the dicrotic notch (a sudden drop in pressure that occurs at the instant of aortic valve closure). Hence, the definition of an IMF is slightly abused; see [9–11] for a rigorous mathematical definition of an IMF. However, we still call it an IMF. Attempting to extract this IMF using EMD or STFR may fail in some specific cases or produce a blurred extraction, primarily because the change from one frequency regime before the dicrotic notch into another after the closure of the heart valve is accompanied by an abrupt change in frequency of the whole cardiovascular system or by a discontinuity in the first time derivative of the pressure waveform at the dicrotic notch. In the best case, using STFR methods would just capture a vague picture of the instantaneous frequency response of the system, which is good solely for qualitative interpretation and an initial guess for the possible IFs. As a result, we use a modified version of the STFR method that has less mathematical regularity and focuses on the IF rather than on the instantaneous frequency.
This paper illustrates the algorithmic and mathematical properties of the IF algorithm and serves as an extension of previously published work which to date has been presented in a purely physiological context [17,18]. Herein, we demonstrate the convergence of our method with and without noise. Using examples, we express the accuracy of this method both numerically and analytically.
2. Intrinsic frequency formulation
It is assumed that before and after the dicrotic notch, we have the following simple waveforms for the general IMF of the aortic pressure wave at time t:
Si(t) = ai cos(ωi t) + bi sin(ωi t) + p̄,    i = 1, 2.    (2.1)
This assumption has shown its credibility as an index to characterize the heart and cardiovascular diseases [17,18]. In this formula, i = 1 corresponds to the behaviour of the IMF before the valve closure, and i = 2 to the behaviour of the IMF after that. Here, ai, bi are constants and correspond to the envelopes of the IMF. The constants ω1, ω2 correspond to the IFs of the IMF. p̄ is the mean pressure during the heart beat period. As the IMF is composed of two different sinusoids, continuous at the dicrotic notch, we can write (2.1) in a more compact form. Take [0, T] to be the whole period of the pressure wave and T0 as the time of the dicrotic notch: 0 < T0 < T. Also, define the indicator function of an interval I ⊂ [0, T] as
χI(t) = 1 if t ∈ I, and χI(t) = 0 otherwise.
Now, the main IMF of the pressure waveform can be expressed as
S(t) = χ[0,T0)(t) [a1 cos(ω1 t) + b1 sin(ω1 t)] + χ[T0,T](t) [a2 cos(ω2 t) + b2 sin(ω2 t)] + p̄.    (2.2)
If 0 ≤ t < T0, then one would get the part of the IMF corresponding to the action of the heart and aorta before the valve closure, i.e. a1 cos(ω1 t) + b1 sin(ω1 t) + p̄. On the other hand, if T0 ≤ t ≤ T, then the part of the IMF that reflects the behaviour of the aorta after the valve closure is given by a2 cos(ω2 t) + b2 sin(ω2 t) + p̄. In general, we are interested in extracting the IMF (2.2) and the corresponding IFs, ω1 and ω2.
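For concreteness, the sketch below evaluates the piecewise-sinusoidal trend (2.2); the function name if_trend, the symbol pbar for the mean pressure and the example envelope values are our own placeholder choices, not taken from the authors' implementation.

```python
import numpy as np

def if_trend(t, a1, b1, a2, b2, pbar, w1, w2, T0):
    """Evaluate the trend (2.2) at times t in [0, T).

    Before the dicrotic notch (t < T0) the trend oscillates at omega_1; after it
    (t >= T0) at omega_2.  pbar is the mean pressure over the heart beat period.
    In the optimization (2.3)/(2.4) the coefficients are additionally constrained
    so that the trend is continuous at T0 and periodic on [0, T].
    """
    t = np.asarray(t, dtype=float)
    before = t < T0                                  # indicator of [0, T0)
    s = np.where(before,
                 a1 * np.cos(w1 * t) + b1 * np.sin(w1 * t),
                 a2 * np.cos(w2 * t) + b2 * np.sin(w2 * t))
    return s + pbar

# Example with the period, notch time and frequencies used later in example 5.1;
# the envelope values and the mean are placeholders only.
T, T0 = 0.898, 0.3293
t = np.linspace(0.0, T, 500, endpoint=False)
p = if_trend(t, a1=10.0, b1=5.0, a2=8.0, b2=3.0, pbar=90.0, w1=9.5, w2=5.42, T0=T0)
```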
At this stage, the goal is to extract the IMF carrying most of the energy and consequently the IFs, ω1,ω2, from the observed aortic pressure waveform f(t). Taking t as a continuous variable, one can use least-squares minimization to find the unknowns:
minimize ∫0T |f(t) − S(t)|² dt over a1, b1, a2, b2, p̄, ω1, ω2,
subject to a1 cos(ω1 T0) + b1 sin(ω1 T0) = a2 cos(ω2 T0) + b2 sin(ω2 T0)
and a2 cos(ω2 T) + b2 sin(ω2 T) = a1.    (2.3)
In this optimization problem, the minimization is carried out over the parameter vector x = (a1, b1, a2, b2, p̄)T and the frequencies (ω1, ω2). The first linear condition in this optimization enforces the continuity of the extracted IMF at the dicrotic notch. The second one imposes the periodicity. In practice, as the data are sampled at discrete temporal points, one must solve the discrete version of (2.3). Assume that the data are sampled at time instances 0 = t1 < t2 < ⋯ < tn = T; then one can convert problem (2.3) into a discrete least-squares problem of the form
minimize Σj=1,…,n |f(tj) − S(tj)|² over a1, b1, a2, b2, p̄, ω1, ω2,
subject to the same two linear constraints as in (2.3).    (2.4)
Note that problem (2.4) is not convex, because the frequencies ω1 and ω2 enter the model nonlinearly. Therefore, we use a brute-force algorithm to solve it.
3. Algorithm
In algorithm 1, we break down the problem into a convex part and a global domain search [27]. The domain search part is the brute-force part of the algorithm. For this algorithm, the frequency domain is
Ω = {(ω1, ω2) : 0 < ω1 ≤ C, 0 < ω2 ≤ C},    (3.1)
which is characterized by a constant parameter, C, that depends on the physics of the problem. The domain Ω is discretized on a uniform mesh with resolution Δω, and the mesh points are indexed by pairs (l, m). The basic idea behind algorithm 1 is to freeze (ω1, ω2) at a mesh point of Ω, solve (2.4) for the remaining unknowns, and find the minimum of the function
P(ω1, ω2) = Σj=1,…,n |f(tj) − S(tj; a1*, b1*, a2*, b2*, p̄*, ω1, ω2)|²,
where a1*, b1*, a2*, b2*, p̄* are the values at which (2.4) is minimized for the fixed vector (ω1, ω2). We collect all values of P(ω1, ω2) and find their minimum over (ω1, ω2) ∈ Ω. The minimizer of P over (ω1, ω2) then gives the IFs that solve the optimization problem.
The second step of algorithm 1 is just the solution of a linearly constrained least-squares problem in ai, bi and p̄. This brute-force algorithm can also be parallelized, because step (ii) can be solved independently for different (l, m) pairs.
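The following is a minimal sketch of this two-step procedure (a brute-force scan over a uniform (ω1, ω2) grid, with the inner equality-constrained least-squares problem solved through its KKT system). It is our own illustrative implementation, not the authors' code; the unknown ordering x = (a1, b1, a2, b2, p̄), the grid bounds w_max and the resolution dw are placeholder assumptions.

```python
import numpy as np

def design_matrices(t, T, T0, w1, w2):
    """Build the data matrix A and constraint matrix C for frozen (w1, w2).

    Unknowns are ordered x = (a1, b1, a2, b2, pbar); rows of A evaluate the
    trend (2.2) at the sample times, C encodes continuity at T0 and periodicity.
    """
    before = t < T0
    A = np.zeros((t.size, 5))
    A[before, 0] = np.cos(w1 * t[before])
    A[before, 1] = np.sin(w1 * t[before])
    A[~before, 2] = np.cos(w2 * t[~before])
    A[~before, 3] = np.sin(w2 * t[~before])
    A[:, 4] = 1.0                                     # mean-pressure column
    C = np.array([
        # continuity at the dicrotic notch:
        # a1 cos(w1 T0) + b1 sin(w1 T0) = a2 cos(w2 T0) + b2 sin(w2 T0)
        [np.cos(w1 * T0), np.sin(w1 * T0), -np.cos(w2 * T0), -np.sin(w2 * T0), 0.0],
        # periodicity: a2 cos(w2 T) + b2 sin(w2 T) = a1
        [-1.0, 0.0, np.cos(w2 * T), np.sin(w2 * T), 0.0],
    ])
    return A, C

def constrained_lsq(A, b, C, d):
    """Solve min ||A x - b||_2 subject to C x = d via the KKT system."""
    n = A.shape[1]
    K = np.block([[A.T @ A, C.T], [C, np.zeros((C.shape[0], C.shape[0]))]])
    rhs = np.concatenate([A.T @ b, d])
    return np.linalg.lstsq(K, rhs, rcond=None)[0][:n]

def intrinsic_frequencies(t, f, T, T0, w_max=15.0, dw=0.1):
    """Brute-force minimization of P(w1, w2) over a uniform grid."""
    grid = np.arange(dw, w_max, dw)
    best_P, best_w, best_x = np.inf, None, None
    d = np.zeros(2)
    for w1 in grid:                  # each (w1, w2) pair is handled independently,
        for w2 in grid:              # cf. step (ii), so the scan parallelizes easily
            A, C = design_matrices(t, T, T0, w1, w2)
            x = constrained_lsq(A, f, C, d)
            P = np.sum((A @ x - f) ** 2)
            if P < best_P:
                best_P, best_w, best_x = P, (w1, w2), x
    return best_w, best_x
```

For a sampled beat f on [0, T), intrinsic_frequencies(t, f, T, T0) returns the grid minimizer (ω1*, ω2*) together with the corresponding (a1, b1, a2, b2, p̄); because each frozen pair is handled independently, the double loop can be distributed across workers.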
4. Convergence analysis
In this section, we analyse the convergence properties of algorithm 1. In order to discuss the algorithm’s convergence and accuracy, we need the following lemma [28] and theorem.
Lemma 4.1 —
The minimum of a function can first be found over a few variables and then over the remaining ones:
minx,y g(x, y) = miny ( minx g(x, y) ).
Lemma 4.1 allows us to carry out the minimization (2.4) first over a1, b1, a2, b2, p̄ and then over ω1, ω2. Further, we need to make sure that the second step of the algorithm has a unique minimizer, which is guaranteed by the following theorem [29].
Theorem 4.2 —
If A ∈ ℝm×n, C ∈ ℝp×n, b ∈ ℝm and d ∈ ℝp, where p ≤ n, n ≤ m + p and the stacked matrix [A; C] is of full column rank, then the optimization problem
minx ∥Ax − b∥2 subject to Cx = d
has a unique solution.
As the algorithm freezes the frequency parameters (ω1, ω2) and then solves a least-squares problem on the remaining variables, we can form a notation similar to that used in theorem 4.2. Take the matrix A to be the n × 5 matrix whose jth row is
( cos(ω1 tj)  sin(ω1 tj)  0  0  1 )  for j = 1, …, n0,
( 0  0  cos(ω2 tj)  sin(ω2 tj)  1 )  for j = n0 + 1, …, n,
and the matrix C and vector x to be
C = ( cos(ω1 T0)  sin(ω1 T0)  −cos(ω2 T0)  −sin(ω2 T0)  0
      −1          0           cos(ω2 T)    sin(ω2 T)    0 ),
x = (a1, b1, a2, b2, p̄)T.
Sample points t1, …, tn0 correspond to the trend before the dicrotic notch and tn0+1, …, tn to the points after it. The vector b is the observed sampled signal, {b}j = f(tj), j = 1, 2, …, n. Finally, d = 0 enforces the periodicity and continuity of the waveform when imposed on the right-hand side of Cx = d. These matrices and vectors satisfy the conditions of theorem 4.2 (there are n data rows, five unknowns and two constraints). Hence, the second step of the algorithm always has a unique solution. This fact, combined with lemma 4.1, guarantees that algorithm 1 always has at least one solution of the IF minimization problem (2.4).
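As a quick numerical sanity check of the full-column-rank hypothesis of theorem 4.2, one can assemble A and C for a typical beat and inspect the rank of the stacked matrix; the sketch below uses the same column ordering (a1, b1, a2, b2, p̄) as above, with purely illustrative values for T, T0 and the frozen frequencies.

```python
import numpy as np

# Illustrative check of the theorem 4.2 hypothesis: the stacked matrix [A; C]
# should have full column rank (here, rank 5), so the second step of the
# algorithm has a unique solution.  T, T0, w1, w2 below are arbitrary values.
T, T0, w1, w2 = 0.9, 0.33, 9.5, 5.4
t = np.linspace(0.0, T, 200, endpoint=False)
before = t < T0

A = np.zeros((t.size, 5))                 # columns: a1, b1, a2, b2, pbar
A[before, 0] = np.cos(w1 * t[before])
A[before, 1] = np.sin(w1 * t[before])
A[~before, 2] = np.cos(w2 * t[~before])
A[~before, 3] = np.sin(w2 * t[~before])
A[:, 4] = 1.0

C = np.array([[np.cos(w1 * T0), np.sin(w1 * T0), -np.cos(w2 * T0), -np.sin(w2 * T0), 0.0],
              [-1.0, 0.0, np.cos(w2 * T), np.sin(w2 * T), 0.0]])

print(np.linalg.matrix_rank(np.vstack([A, C])))   # prints 5 -> unique minimizer
```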
4.1. Noise-free condition
4.1.1. Existence
Assume there is no noise in the observation and the observed signal is exactly of type (2.2). As a result, the signal can be expressed as f(tj) = S⁰(tj), where S⁰ is a trend of the form (2.2) with parameters x⁰ = (a1⁰, b1⁰, a2⁰, b2⁰, p̄⁰) and frequencies (ω1⁰, ω2⁰). If there is a well resolved Ω, then (ω1⁰, ω2⁰) coincides with a mesh point of Ω, indexed by some l, m. At these specific frozen frequencies, the solution of the second step of the algorithm is nothing but x⁰, based on theorem 4.2. More specifically, the problem that is being solved at this step is
minx ∥Ax − Ax⁰∥2 subject to Cx = 0,
whose unique solution is x⁰, with zero residual.
Therefore, considering the definition of P(ω1,ω2) and lemma 4.1, it is guaranteed that there exists at least one minimizer.
4.1.2. Uniqueness
Furthermore, this minimizer is unique. In fact, if there exists another set of x and (ω1, ω2) solving problem (2.4), namely x̃ and (ω̃1, ω̃2), then the two trends, S⁰ and S̃, arising from these parameters must be equal. For a finely sampled observation f, equality of these trends implies the equality of the parameters, x̃ = x⁰ and (ω̃1, ω̃2) = (ω1⁰, ω2⁰). In short, we can state the following theorem.
Theorem 4.3 —
In the absence of noise, if the observed signal is of the form (2.2), then for a well resolved Ω algorithm 1 finds the unique minimizer of (2.4).
4.2. Noisy measurements
Assume here that the IMF (2.2) is polluted with noise that is independent of the IMF itself. In other words, taking the noise to be ε, f(tj) = S⁰(tj) + ε(tj). Here, the algorithm will not extract the exact values of x⁰ and (ω1⁰, ω2⁰), but it is possible to find an error bound on the distance between the extracted and the true IMF. If x* and (ω1*, ω2*) are the values extracted by the algorithm, with corresponding trend S*, and S denotes the trend built from any feasible x (with Cx = 0) and (ω1, ω2) ∈ Ω, then one can write
∥f − S*∥2 ≤ ∥f − S∥2 ≤ ∥ε∥2.
The first inequality comes from the fact that any set of x and (ω1, ω2), where Cx = 0, is a feasible point; consequently, the second inequality follows if x and (ω1, ω2) are assigned the values of x⁰ and (ω1⁰, ω2⁰). Now, using the triangle inequality one can show
∥S* − S⁰∥2 ≤ ∥S* − f∥2 + ∥f − S⁰∥2 ≤ 2∥ε∥2.
Using this, the following theorem can be proposed.
Theorem 4.4 —
In the presence of a trend-independent noise in (2.2), for a well resolved Ω, algorithm 1 finds the minimizer of (2.4) with an error of at most the same order as the noise.
Remember that in this theorem a well-resolved Ω is a discretized domain in which the distance between two adjacent mesh points is sufficiently small. Strictly speaking, the theorem treats Ω as a continuum; in practice, we have found that a maximum distance of 0.001 between mesh points constructs a well-resolved Ω.
How much the solution of the noisy problem differs from the real solution depends on the noise level. If the noise level ∥ε∥ is sufficiently small, then the distance between x*, (ω1*, ω2*) and x⁰, (ω1⁰, ω2⁰) is also of O(∥ε∥); see [30]. In practice, because the 2-norm of the trend is large compared with the noise level, the relative error in finding the trend is extremely low. In mathematical terms, we have
∥S* − S⁰∥2 / ∥S⁰∥2 ≤ 2∥ε∥2 / ∥S⁰∥2 ≪ 1.
So, in general, algorithm 1 enables us to extract the IFs with a bounded error, even in the presence of noise perturbation. In real data, where a lot of reflected waves are superposed with the heart–aorta wave system, the signal is, in general, a combination of multiple IMFs. Usually, these waves have higher frequencies compared with the main IMF. This point will be made clearer in the next subsection.
4.3. Higher-order intrinsic mode functions
For the sake of simplicity, we assume that the added IMFs are of high frequency and that time is a continuous variable, i.e. the signal is not sampled at discrete points. Take the recorded signal to be
f(t) = S⁰(t) + DM(t),    (4.1)
where S⁰(t) is the IMF of form (2.2), and DM(t) is a combination of IMFs with higher frequencies compared with S⁰(t). Without loss of generality, take DM(t) to have a Fourier series of the form
DM(t) = Σ|k|≥M ck e^(2πikt/T).    (4.2)
Implicitly, we have assumed that the added IMFs are of high-frequency nature. Having this terminology in mind, we can state the following theorem.
Theorem 4.5 —
The optimum curve S*(t), which is the solution of the minimization problem
S* = argminS ∥f − S∥²L2[0,T],    (4.3)
where the minimum is taken over all trends S of the form (2.2) satisfying the continuity and periodicity constraints of (2.3), satisfies
∥S* − S⁰∥L2[0,T] ≤ C / M^(m−1/2),    (4.4)
provided that DM(t) ∈ Cm and m ≥ 2.² Here C is a constant depending only on DM and m.
Proof. —
As S⁰(t) is a feasible point, the minimizer S*(t) of (4.3) must satisfy
∥f − S*∥L2[0,T] ≤ ∥f − S⁰∥L2[0,T] = ∥DM∥L2[0,T].    (4.5)
Define the Fourier series of S*(t) − S⁰(t) as
S*(t) − S⁰(t) = Σk dk e^(2πikt/T).
Hence, we have
∥f − S*∥²L2[0,T] = ∥DM − (S* − S⁰)∥²L2[0,T].
Using Parseval's identity³ and (4.5) gives
Σk |dk|² ≤ 2 Re Σ|k|≥M c̄k dk.
Simplifying and using the triangle inequality result in
Σk |dk|² ≤ 2 Σ|k|≥M |ck| |dk|.    (4.6)
Because S*(t) − S⁰(t) is continuous, it belongs to L2[0,T] and Σk |dk|² = ∥S* − S⁰∥²L2[0,T] / T < ∞. Using the Cauchy–Schwartz inequality gives Σ|k|≥M |ck||dk| ≤ (Σk |dk|²)^(1/2) (Σ|k|≥M |ck|²)^(1/2). On the other hand, as DM(t) ∈ Cm, we get |ck| ≤ CD |k|^(−m) for a constant CD depending on DM. Using these estimates and bounding the tail sum Σ|k|≥M |k|^(−2m) by the integral ∫M−1∞ x^(−2m) dx, (4.6) will result in
∥S* − S⁰∥L2[0,T] ≤ C / M^(m−1/2).
□
Remark 4.6 —
The bound (4.4) shows that as the minimum frequency M of the added IMFs increases, the optimum curve S*(t) and the true curve S⁰(t) get closer. This bound is in fact a measure of the ‘scale separation condition’ mentioned in [9–11]. In simple words, if the added IMFs are orthogonal to the original IMF, S⁰(t), then the extracted optimum curve S*(t) is almost the true IMF S⁰(t). Hence, the frequencies found in S*(t) are close to the true IF values. Note that, in deriving this bound, we have not used the structure of the main IMF. Therefore, this bound is in general an orthogonality condition. In practice, the errors are even smaller than (4.4) suggests; that is, the algorithm performs better than the error estimate.
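A small numerical illustration of this scale-separation effect (a toy experiment of our own; the low-frequency trend and the single high-frequency harmonic are arbitrary choices, not clinical data): the L2[0, T] inner product between the trend and a harmonic of index M shrinks as M grows, which is what makes high-frequency added IMFs nearly invisible to the fit.

```python
import numpy as np

# Toy check of the scale-separation idea behind remark 4.6: the projection of a
# high-frequency harmonic onto a low-frequency trend decays as its frequency
# index M increases.
T = 0.898
t = np.linspace(0.0, T, 20000, endpoint=False)
dt = t[1] - t[0]

trend = 10.0 * np.cos(9.5 * t) + 5.0 * np.sin(9.5 * t) + 90.0   # low-frequency part

for M in (5, 20, 80, 320):                 # harmonic index of the added component
    DM = np.cos(2 * np.pi * M * t / T)     # one harmonic of D_M
    inner = np.sum(trend * DM) * dt        # approximates the L2[0, T] inner product
    print(M, abs(inner))                   # magnitude decays roughly like 1/M
```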
5. Results
Here, we show how the proposed algorithm 1 performs using synthetic and clinical examples.4
Example 5.1 —
Assume that the signal has intrinsic frequencies of ω1 = 9.5 rad s−1 and ω2 = 5.42 rad s−1. The envelope of the first part of the trend is prescribed, and the envelope of the second part is chosen so that the trend matches a signal of period 0.898 s with a dicrotic notch at T0 = 0.3293 s; the mean value of the signal is also fixed. Ω is defined in such a way that the resolution of the frequency domain, Δω, is 0.08 rad s−1. For this well-resolved domain, the extraction is accurate up to machine precision: the curves are indistinguishable (figure 1). The IFs are found with no error.
Figure 1. Synthetic data, without noise and with a well-resolved domain.
Example 5.2 —
To test the algorithm for a case in which Ω is not well resolved, we use the signal from example 5.1 and define Ω so that the resolution of the frequency domain is 0.1 rad s−1. This results in a faithful extraction of the curve, with less than 0.4% relative error5 (figure 2). The extracted IFs have less than 5% relative error (less than 0.47 rad s−1 in absolute terms).
Figure 2. Synthetic data, without noise and without a well-resolved domain.
In example 5.3, we investigate the effect of noise on the extracted IFs. Here, Ω is well resolved.
Example 5.3 —
To show the stability of the algorithm and to test the effect of noise on a signal with a well-resolved Ω, we use the same signal and define Ω so that the resolution of the frequency domain is 0.027 rad s−1. The signal is polluted with Gaussian noise of mean zero and variance one (figure 3), and the relative error in the extraction is less than 0.7% (figure 4). The extracted IFs have relative errors of less than 6%. This example shows the stability of the algorithm.
Figure 3. Original noisy data.
Figure 4. Extracted curve versus the original curve in a noisy environment.
Example 5.4 —
(a) To synthetically examine the effect of adding an IMF to the main IMF, a simple sine wave is added to the signal used in the previous examples. The extracted IFs are again accurate. In fact, if Ω has a resolution of 0.027 rad s−1, the error in the extracted IFs is zero up to three digits of accuracy. With a resolution of 0.25 rad s−1, the relative error is at most 8%. These examples show that the algorithm works better than the bound provided by (4.4).
(b) To test whether a noisy observation with an added low-frequency IMF is still a tractable problem for the algorithm, we take the IMF from example 5.1 and add a low-frequency IMF together with white Gaussian noise of mean zero and variance one (figure 5). For a resolution of 0.08 rad s−1, the maximum relative error in the extracted IFs is less than 14%. If a higher resolution is used, the results are much better: for a resolution of 0.027 rad s−1, the maximum relative error in the extracted IFs is just 7% (figure 6). The curve extraction has a relative error of 2%.
Figure 5. Synthetic trend plus IMF and noise.
Figure 6. Extracted trend for the IMF and noise case.
Example 5.5 —
We have tested the performance of the IF algorithm on clinical data [17,18]. Here, we present the details of some of the cases from that work in figure 7. The data were collected from the ascending aorta of subjects using a catheter; the original recorded signal is shown in blue and the extracted IMFs in red. In all cases, the dicrotic notch was identified by visual inspection by an expert in the field. One can observe that the IMF is faithfully extracted from the data in most cases. The extraction in case 2 of figure 7 needs more attention: as can be seen, when the rising portion in systole is shorter in time than the falling part (an effect of the second IMF), the extracted IMF is not as good as in the other cases. We believe this issue can be addressed by some modifications of the current IF method, and we will address these special cases in a future work.
Figure 7. Recorded clinical data. The data were collected from the ascending aorta of subjects using a catheter (blue curves). The data were then analysed using the IF algorithm (red curves).
Example 5.6 —
In this example, we observe experimentally the sensitivity of the algorithm in the presence of noise, as a preliminary study (figure 8). The data were created using a deterministic form like (2.1); noise was then added to the signal based on the norm of the oscillatory part of the signal. From figure 8, one can observe that the data are strongly affected by the noise. The dicrotic notches in these cases were predefined. The results of the IF algorithm are shown in figure 9, which demonstrates the accuracy of the algorithm in the presence of noise. Table 1 lists the extracted IF values for these cases. A more rigorous statistical analysis of the sensitivity of our method, along the lines of this example, will be carried out in our future work.
Figure 8. Synthetic data with noise. The data were created using deterministic functions with added noise.
Figure 9. Synthetic data for IMF extraction. The original signal without the noise is shown in black. The extracted IMF using the IF algorithm is shown in red.
Table 1.
The results of the IF algorithm: the original frequencies, in units of rad s−1, are shown as ω1 and ω2. The extracted values, ω1* and ω2*, and the absolute errors, |ω1 − ω1*| and |ω2 − ω2*|, are also given.
| case | ω1 | ω1* | error in ω1 | ω2 | ω2* | error in ω2 |
|---|---|---|---|---|---|---|
| case 1 | 9.5 | 9.6105 | 0.1105 | 3.42 | 3.5881 | 0.1681 |
| case 2 | 9.5 | 9.6105 | 0.1105 | 3.42 | 3.4412 | 0.0212 |
| case 3 | 9.5 | 9.4636 | 0.0364 | 3.42 | 3.4412 | 0.0212 |
| case 4 | 9.5 | 8.8761 | 0.6239 | 3.42 | 2.7068 | 0.7132 |
| case 5 | 9.5 | 8.1417 | 1.3583 | 3.42 | 2.8537 | 0.5663 |
| case 6 | 9.5 | 9.4636 | 0.0364 | 3.42 | 3.4412 | 0.0212 |
| case 7 | 8.5 | 8.4354 | 0.0646 | 3.42 | 2.8537 | 0.5663 |
| case 8 | 7.5 | 6.8197 | 0.6803 | 3.42 | 2.8537 | 0.5663 |
| case 9 | 7 | 7.1134 | 0.1134 | 3.42 | 3.2944 | 0.1256 |
| case 10 | 9.5 | 9.4636 | 0.0364 | 5.42 | 5.2039 | 0.2161 |
| case 11 | 8.5 | 8.5823 | 0.0823 | 7.42 | 7.4072 | 0.0128 |
| case 12 | 10.5 | 10.4919 | 0.0081 | 8.42 | 9.4636 | 1.0436 |
| case 13 | 12.5 | 12.5483 | 0.0483 | 6 | 5.7915 | 0.2085 |
| case 14 | 11 | 10.7856 | 0.2144 | 4 | 4.3226 | 0.3226 |
6. Discussion and conclusion
In this article, we sketched the algorithmic and mathematical properties of the IF algorithm. We demonstrated the convergence of our method with and without noise and, using examples, showed that its accuracy is in accord with the mathematical error bounds presented in the paper. The convergence and stability of the algorithm, combined with the physiological intuition of the heart–aorta model [17,18], make it a suitable method for rigorous pulse pressure analysis.
In the clinical data examples, we showed that the IF method can capture the behaviour of the pulse pressure waveform with good accuracy. As these examples show, the characteristics of an aortic waveform (figure 7) can be expressed with only a few parameters, (a1, b1, a2, b2, p̄, ω1, ω2). In other words, the IF method can work as a dimensionality reduction method that accurately captures the complex physics of such waveforms using a limited set of parameters. Furthermore, the clinical data have confirmed that the IFs, ω1 and ω2, are the most physiologically relevant parameters [17,18]; the other five parameters are auxiliary parts of the mathematical construction of the method.
The IF method is superior to the FT representation because the IF can localize the frequency behaviour of the waveform. For example, the FT cannot identify the time instant at which a harmonic occurs, whereas the IF does not suffer from this issue. In fact, the IF is constructed from a nonlinear and non-stationary point of view on dynamical systems [17,18], whereas the FT assumes that the system is both linear and stationary. From a physiological point of view, there are two main non-stationary effects in the heart and vascular system: the first is heart rate variability; the second is the closure of the aortic valve, which depends on the dynamics of both the heart and the vascular system. The localization behaviour of the IF allows us to clearly separate these non-stationary effects. Physiologically speaking, this means the IF can distinguish between waveform abnormalities originating from the heart and those originating from the aortic and arterial system [17,18]. Last but not least, the FT represents the pulse pressure with at least five linear harmonics, distributing the energy among them; this distribution of energy loses critical information. By contrast, the IF represents the pulse pressure with only two nonlinear harmonics (ω1 and ω2). Hence, the IF is a simpler, more meaningful concept that is capable of accurately quantifying the complex system dynamics of the heart and aorta.
Acknowledgements
The authors are grateful to Sean Brady and Frank Becking for their constructive comments regarding the composition and presentation of the manuscript. The authors are also grateful to anonymous reviewers as their comments and questions definitely improved the quality of the manuscript.
Footnotes
2. This bound can be made sharper if DM(t) ∈ Cm and additional p-norm bounds on DM are available. Here the p-norm is defined as ∥g∥p = (∫0T |g(t)|^p dt)^(1/p) for p ≥ 1.
3. In an inner-product vector space with an orthonormal basis B, for any vector x in this space, we have ∥x∥² = Σe∈B |⟨x, e⟩|².
4. The datasets supporting this article have been uploaded as part of the electronic supplementary material [31].
5. Relative error for the signal is defined as ∥S* − S⁰∥2 / ∥S⁰∥2, i.e. the 2-norm of the difference between the extracted and the true curve divided by the 2-norm of the true curve.
Data accessibility
Data used in this study can be downloaded from http://dx.doi.org/10.5061/dryad.21q9m.
Authors’ contributions
P.T. conceived of the study, carried out the modelling, programming and mathematical analysis of the method, carried out part of the design of the numerical examples and drafted the manuscript; T.Y.H. coordinated the mathematical part of the study, helped draft the manuscript and supervised the mathematical analysis of the method; D.G.R. helped draft the manuscript, and revised the manuscript critically for important intellectual content; N.M.P. carried out part of the design of the numerical examples, interpreted the physiological meaning of the method, correlated the main parameters of the method with physiological parameters, collected relevant clinical data, coordinated the general scope of the study and helped draft the manuscript. All authors gave final approval for publication.
Competing interests
We declare we have no competing interests.
Funding
This work was supported by NSF grant no. DMS-1318377.
References
- 1. Blatter C. 2002. Wavelets: a primer. New York, NY: Taylor & Francis.
- 2. Mallat SG. 1999. A wavelet tour of signal processing. San Diego, CA: Academic Press.
- 3. Huang NE, Shen SS. 2005. Hilbert–Huang transform and its applications, vol. 5. Singapore: World Scientific.
- 4. Huang NE, Shen Z, Long SR, Wu MC, Shih HH, Zheng Q, Yen NC, Tung CC, Liu HH. 1998. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. R. Soc. Lond. A 454, 903–995. (doi:10.1098/rspa.1998.0193)
- 5. Huang NE, Wu Z, Long SR, Arnold KC, Chen X, Blank K. 2009. On instantaneous frequency. Adv. Adapt. Data Anal. 1, 177–229. (doi:10.1142/S1793536909000096)
- 6. Candes EJ, Tao T. 2006. Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory 52, 5406–5425. (doi:10.1109/TIT.2006.885507)
- 7. Chen SS, Donoho DL, Saunders MA. 1998. Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20, 33–61. (doi:10.1137/S1064827596304010)
- 8. Mallat SG, Zhang Z. 1993. Matching pursuits with time–frequency dictionaries. IEEE Trans. Signal Process. 41, 3397–3415. (doi:10.1109/78.258082)
- 9. Hou TY, Shi Z, Tavallali P. 2014. Convergence of a data-driven time–frequency analysis method. Appl. Comput. Harmon. Anal. 37, 235–270. (doi:10.1016/j.acha.2013.12.004)
- 10. Hou TY, Shi Z. 2011. Adaptive data analysis via sparse time–frequency representation. Adv. Adapt. Data Anal. 3, 1–28. (doi:10.1142/S1793536911000647)
- 11. Hou TY, Shi Z. 2013. Data-driven time–frequency analysis. Appl. Comput. Harmon. Anal. 35, 284–308. (doi:10.1016/j.acha.2012.10.001)
- 12. Wu Z, Huang NE. 2009. Ensemble empirical mode decomposition: a noise-assisted data analysis method. Adv. Adapt. Data Anal. 1, 1–41. (doi:10.1142/S1793536909000047)
- 13. Huang B, Kunoth A. 2012. An optimization based empirical mode decomposition scheme. J. Comput. Appl. Math. 240, 174–183. (doi:10.1016/j.cam.2012.07.012)
- 14. Daubechies I, Lu J, Wu H. 2011. Synchrosqueezed wavelet transforms: an empirical mode decomposition-like tool. Appl. Comput. Harmon. Anal. 30, 243–261. (doi:10.1016/j.acha.2010.08.002)
- 15. Hou TY, Shi Z, Tavallali P. 2013. Sparse time frequency representations and dynamical systems. (http://arxiv.org/abs/1312.0202)
- 16. Tavallali P, Hou TY, Shi Z. 2014. Extraction of intrawave signals using the sparse time–frequency representation method. SIAM Multiscale Model. Simul. 12, 1458–1493. (doi:10.1137/140957767)
- 17. Pahlevan NM, Tavallali P, Rinderknecht DG, Petrasek D, Matthews RV, Hou TY, Gharib M. 2014. Intrinsic frequency for a systems approach to haemodynamic waveform analysis with clinical applications. J. R. Soc. Interface 11, 20140617. (doi:10.1098/rsif.2014.0617)
- 18. Pahlevan N, Rinderknecht D, Tavallali P, Petrasek D, Matthews R, Gharib M. 2014. Intrinsic frequency method for noninvasive diagnosis of left ventricular systolic dysfunction. In Proc. 67th Annual Meeting of the APS Division of Fluid Dynamics, San Francisco, CA, 23–25 November 2014, vol. 59, abstract BAPS.2014.DFD.D7.6.
- 19. Westerhof N, Lankhaar J-W, Westerhof BE. 2009. The arterial windkessel. Med. Biol. Eng. Comput. 47, 131–141. (doi:10.1007/s11517-008-0359-2)
- 20. Cooper LL, Rong J, Benjamin EJ, Larson MG, Levy D, Vita JA, Hamburg NM, Vasan RS, Mitchell GF. 2015. Components of hemodynamic load and cardiovascular events. Circulation 131, 354–361. (doi:10.1161/CIRCULATIONAHA.114.011357)
- 21. Pahlevan NM, Gharib M. 2011. Low pulse pressure with high pulsatile external left ventricular power: influence of aortic waves. J. Biomech. 44, 2083–2089. (doi:10.1016/j.jbiomech.2011.05.016)
- 22. Pahlevan NM, Gharib M. 2013. In vitro investigation of a potential wave pumping effect in human aorta. J. Biomech. 46, 2122–2129. (doi:10.1016/j.jbiomech.2013.07.006)
- 23. Parragh S, Hametner B, Bachler M, Kellermair J, Eber B, Wassertheurer S, Weber T. 2015. Determinants and covariates of central pressures and wave reflections in systolic heart failure. Int. J. Cardiol. 190, 308–314. (doi:10.1016/j.ijcard.2015.04.183)
- 24. Torjesen AA, Wang N, Larson MG, Hamburg NM, Vita JA, Levy D, Benjamin EJ, Vasan RS, Mitchell GF. 2014. Forward and backward wave morphology and central pressure augmentation in men and women in the Framingham heart study. Hypertension 64, 259–265. (doi:10.1161/HYPERTENSIONAHA.114.03371)
- 25. Tsao CW, Pencina KM, Massaro JM, Benjamin EJ, Levy D, Vasan RS, Hoffmann U, O'Donnell CJ, Mitchell GF. 2014. Cross-sectional relations of arterial stiffness, pressure pulsatility, wave reflection, and arterial calcification. Arterioscler. Thromb. Vasc. Biol. 34, 2495–2500. (doi:10.1161/ATVBAHA.114.303916)
- 26. Pahlevan NM, Tran TT, Tavallali P, Rinderknecht DG, Csete M, Gharib MM. 2015. New intrinsic frequency measures of cardiac function vs. cardiac MRI as a gold standard. In ISMRM 23rd Annu. Meeting and Exhibition, Toronto, Canada, 30 May–5 June 2015.
- 27. Kolda TG, Lewis RM, Torczon V. 2003. Optimization by direct search: new perspectives on some classical and modern methods. SIAM Rev. 45, 385–482. (doi:10.1137/S003614450242889)
- 28. Boyd S, Vandenberghe L. 2004. Convex optimization. Cambridge, UK: Cambridge University Press.
- 29. Demmel JW. 1997. Applied numerical linear algebra. Philadelphia, PA: Society for Industrial and Applied Mathematics.
- 30. Bertsekas DP. 1999. Nonlinear programming. Belmont, MA: Athena Scientific.
- 31. Tavallali P, Hou TY, Rinderknecht DG, Pahlevan NM. Data from: On the convergence and accuracy of the cardiovascular intrinsic frequency method. Dryad Digital Repository. (http://dx.doi.org/10.5061/dryad.21q9m)