Published in final edited form as: IEEE Trans Med Imaging. 2019 Mar 20;38(11):2582–2595. doi: 10.1109/TMI.2019.2906600

Noninvasive Reconstruction of Transmural Transmembrane Potential With Simultaneous Estimation of Prior Model Error

Sandesh Ghimire 1, John L Sapp 2, B Milan Horáček 3, Linwei Wang 4

Abstract

To reconstruct electrical activity in the heart from body-surface electrocardiograms (ECGs) is an ill-posed inverse problem. Electrophysiological models have been found effective in regularizing these inverse problems by incorporating a priori knowledge about how the electrical potential in the heart propagates over time. However, these models suffer from model errors arising from, for example, parameters associated with tissue properties and the earliest sites of excitation. We present a Bayesian approach to simultaneously estimate transmembrane potential (TMP) signals and prior model errors, exploiting the sparsity of the error in the gradient domain through a novel sparse prior based on a variational lower bound of the generalized Gaussian distribution. In synthetic and real-data experiments, we demonstrate the improvement in the accuracy of TMP reconstruction brought by simultaneous model error estimation. We further provide theoretical and empirical justifications for the changes in performance of the presented method in the presence of different model errors.

Index Terms: Model error, sparsity, inverse problem, ECG, Bayesian inference, Fenchel duality, graphical model

I. Introduction

NONINVASIVE electrophysiological (EP) imaging aims at a mathematical reconstruction of cardiac electrical sources from high-density electrocardiogram (ECG) signals. It requires solving an inverse problem that is ill-posed because, by the principles of electromagnetics, there exist many possible transmural cardiac source configurations that give rise to the same ECG data [1] on the body surface. This is further exacerbated by the availability of a small number of ECG measurements compared to the number of unknowns to be solved for, especially when the unknown of interest is distributed transmurally throughout the myocardium.

The success of noninvasive EP imaging, therefore, largely relies on an effective incorporation of prior knowledge about the solutions via regularization techniques. Representative constraints include the smoothness of the electrical potential in space and/or time at different orders of derivatives, enforced through techniques such as Tikhonov regularization [2], truncated SVD [3], and spatio-temporal regularization [4]. Other constraints exploit the sparsity of the cardiac signal in a certain domain, such as the gradient domain, by utilizing the L1 norm [5] or total variation [6] as the regularization cost. Similar constraints on smoothness and sparsity can be incorporated within a probabilistic formulation, where they enter the equation as the prior distribution on the source signal. For example, a Gaussian prior [7] is used for smoothness and a total-variation prior for sparsity [8], while a generalized Gaussian prior [9] adapts between smoothness and sparsity.

Alternatively, model based regularization has been used to encode a priori physiological knowledge about the electrical propagation inside the heart. Examples include step jump functions [10] and logistic functions [11] to describe the activation of action potential, and parameterized curves modeling the wavefront velocity as trigonometric functions and the potential as a step response of a second order linear system [12]. When estimating transmural sources throughout the myocardium, 3D EP simulation models of the spatiotemporal propagation of action potential have been used to provide dynamic constraints of the inverse problem [13], [14], [15].

While the use of these models is effective in regularizing the ill-posed problem, there is an unresolved challenge: parameters of these models, often associated with tissue properties and excitation points, are unknown a priori. For example, parameters controlling the shape of the transmembrane potential (TMP) will vary across the heart depending on whether the underlying tissue is healthy or diseased. In the absence of prior knowledge about these parameters, common practice is to assume values commonly used in the literature, introducing errors in the models. Such model inaccuracies and their effect on the inverse solution have been studied. Utilizing a convex relaxation of the original non-convex inverse problem, Erem et al. [16] studied how the solution differs if the model assumption of a uniform TMP amplitude throughout the heart is violated due to the presence of infarction. Similarly, Xu et al. [8] investigated uncertainty in the inverse solution due to model errors, and showed the importance of considering the resulting solution uncertainty in addition to a point estimate. While these works have highlighted the importance of acknowledging prior model errors in EP imaging, addressing these errors remains a challenge.

In this paper, we present a probabilistic framework that allows estimation of the prior model error while reconstructing transmural TMP under the constraint of a 3D EP simulation model. To overcome the challenge of estimating high-dimensional model errors, we exploit the low-dimensional nature of cardiac wavefront propagation to formulate a sparse representation for the model error. We then present a Bayesian inference method to estimate the posterior distribution of transmural TMP and the sparse error of the prior EP model. Building upon our previous work [17], we provide a rigorous treatment of the inference by explicitly introducing an error random variable and jointly inferring its posterior distribution. This enables proper estimation of model uncertainty as a combination of the uncertainty propagated from the previous time instant and the uncertainty estimated at the present time.

We evaluate the performance of the presented method on simulated and real data on its ability to detect and correct model errors resulting from the presence of myocardial infarction and unknown excitation points. We compare its performance with the previously-described model-constrained approach to TMP reconstruction [14] that does not consider errors in the a priori model. The main contributions of this paper include:

  1. We present a new probabilistic approach to EP imaging that is able to estimate the error in the prior model by leveraging the sparsity of model errors.

  2. We show that the presented method can detect and correct errors in prior model predictions, improving the accuracy of the estimated TMP signals in the presence of unknown infarction and excitation locations.

  3. We provide theoretical and experimental analysis relating the performance of the presented method to the interplay between ECG data and the singular value decomposition of the forward matrix.

  4. We relate the presented method to algorithms in the machine learning community, such as relevance vector machines (RVM) and Empirical Bayes, to provide further insights into the nature of the solution.

II. Background

ECG data and transmural TMP are related by a quasi-static approximation of Maxwell's equations for the electromagnetic field between the heart and the torso [14], [18]. Solving these equations numerically on subject-specific heart-torso models provides a linear measurement model: $y_k = Hu_k$, where $H$ is termed the forward matrix, and $y_k$ and $u_k$ respectively denote the vector of ECG measurements and the TMP inside the heart at time instant $k$. The inverse problem involves the estimation of $u_k$ given the measurements $y_k$ for all time instants $k$, which is ill-posed.

To compensate for the lack of information in yk, we use knowledge of the temporal propagation of TMP, uk, through a cardiac EP model. Considering the balance between model plausibility and computational feasibility, we choose the Aliev-Panfilov model [19] described by two differential equations. These equations can be numerically solved over the discrete mesh of the ventricles as described in [14] to arrive at:

$$\frac{\partial u}{\partial t} = -M^{-1}Ku + g_1(u, v), \qquad \frac{\partial v}{\partial t} = g_2(u, v) \tag{1}$$

where v is the vector of recovery current. Matrices M and K encode the 3-D myocardial structure and its conductive anisotropy. Solving eq. (1) numerically over time provides:

$$u_k = f(u_{k-1}) \tag{2}$$

where f denotes the routine for numerically solving eq.(1) and does not necessarily have a closed form.
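For concreteness, a minimal sketch of one possible routine $f$ is given below, assuming an explicit Euler discretization of eq.(1) with the standard Aliev-Panfilov reaction terms and commonly used literature parameter values; it is an illustrative stand-in, not the exact implementation of [14].

```python
import numpy as np

# Assumed Aliev-Panfilov reaction parameters (standard literature values).
K_AP, A_AP, E0, MU1, MU2 = 8.0, 0.15, 0.002, 0.2, 0.3

def f(u_prev, v_prev, Minv_K, dt=0.01, n_sub=10):
    """Advance TMP u and recovery current v by one outer time step,
    using n_sub explicit Euler sub-steps of the discretized eq.(1).

    u_prev, v_prev : (N,) nodal TMP and recovery current
    Minv_K         : (N, N) matrix encoding the diffusion term M^{-1}K
    """
    u, v = u_prev.copy(), v_prev.copy()
    for _ in range(n_sub):
        eps = E0 + MU1 * v / (u + MU2)                       # restitution coefficient
        du = -Minv_K @ u + K_AP * u * (u - A_AP) * (1.0 - u) - u * v
        dv = eps * (-v - K_AP * u * (u - A_AP - 1.0))
        u, v = u + dt * du, v + dt * dv
    return u, v
```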

III. Methodology

A. Probabilistic Formulation for EP Imaging

We represent the generation of ECG sequence by a probabilistic graphical model as shown in Fig. 1(a), where TMP uk is the latent random variable generating ECG data through the linear measurement model, and the hidden state uk is related to the previous state by the prior EP model (eq.2). Furthermore, to account for modeling errors in the prior model given by eq.(2), we introduce a prediction error ηk through:

$$u_k = f(u_{k-1}) + \eta_k \tag{3}$$

Fig. 1. Probabilistic graphical models of (a) ECG sequence and (b) ECG at a time instant.

While existing works often model ηk as a known Gaussian noise with a pre-defined variance [14], we assume ηk to be unknown with a prior distribution parameterized by θk.

The joint posterior distribution of $u_k$ for all time instants is analytically intractable because of the lack of a closed-form solution of eq.(2). Therefore, we sequentially solve for the marginal probability density function (pdf) of $u_k$ for each time instant given the ECG data up to the present time, $y_{1,\dots,k}$. Since $u_k$ depends on previous ECG data $y_{1,\dots,k-1}$ through $u_{k-1}$ (see Fig. 1(a)), given the posterior distribution of $u_{k-1}$, $u_k$ is independent of $y_{1,\dots,k-1}$. This brings us to the graphical model in Fig. 1(b) for the generation of ECG data at time instant $k$. Components of this graphical model are detailed below:

1). Likelihood:

ECG data $y_k$ is generated from TMP $u_k$ through the linear measurement model described earlier, considering zero-mean Gaussian noise with variance $\beta_k^{-1}$:

$$p(y_k|u_k,\beta_k) = \mathcal{N}(y_k \mid Hu_k,\ \beta_k^{-1}I) \tag{4}$$

where βk is modeled with the conjugate Gamma prior:

$$p(\beta_k|c,d) = \frac{d^c}{\Gamma(c)}\,\beta_k^{c-1}\exp(-d\beta_k) \tag{5}$$

2). Conditional Prior of uk:

We model the prior of the action potential $u_k$ conditioned on previous ECG data as well as the prediction error $\eta_k$. Given the posterior distribution of the action potential $u_{k-1}$ at the previous time instant, $p(u_{k-1}|\beta_{1\dots k-1}, y_{1\dots k-1}, \theta_{1\dots k-1})$, we have:

$$p(u_k|\eta_k,\beta_{1\dots k-1},y_{1\dots k-1},\theta_{1\dots k-1}) = \int p(u_k|u_{k-1},\eta_k)\,p(u_{k-1}|\beta_{1\dots k-1},y_{1\dots k-1},\theta_{1\dots k-1})\,du_{k-1} \tag{6}$$

where $p(u_k|u_{k-1},\eta_k)$ can be defined as $\mathcal{N}(u_k \mid f(u_{k-1})+\eta_k,\ 0)$ based on the prior EP model in eq.(3). Because $f$ is not in a closed form, the integral in eq.(6) cannot be solved analytically but has to be approximated numerically. To do so, we draw samples from the posterior distribution of $u_{k-1}$ and pass them through the physiological model $f$. The mean $u_d$ and covariance $\Sigma_d$ of $f(u_{k-1})$ are then approximated from the output samples. Letting $\omega_k$ be the Gaussian approximation of $f(u_{k-1})$, i.e., $\omega_k \sim \mathcal{N}(\omega_k \mid u_d, \Sigma_d)$, we have:

$$p(u_k|\eta_k,\beta_{1\dots k-1},y_{1\dots k-1},\theta_{1\dots k-1}) = \int \mathcal{N}(u_k \mid f(u_{k-1})+\eta_k,\ 0)\,p(u_{k-1}|\beta_{1\dots k-1},y_{1\dots k-1},\theta_{1\dots k-1})\,du_{k-1} = \int \mathcal{N}(u_k \mid \omega_k+\eta_k,\ 0)\,\mathcal{N}(\omega_k \mid u_d,\Sigma_d)\,d\omega_k \tag{7}$$
$$= \mathcal{N}(u_k \mid u_d + \eta_k,\ \Sigma_d) \tag{8}$$

where eq.(7) uses the transformation of random variables: $\int g(f(u))p(u)\,du = \int g(\omega)p(\omega)\,d\omega$ if $\omega = f(u)$.
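A minimal sketch of this sampling-based prediction step is shown below; it assumes `f` is the model-stepping routine of eq.(2) applied to the TMP alone (handling of the recovery variable is omitted), and that the posterior of $u_{k-1}$ is summarized by its mean and covariance.

```python
import numpy as np

def predict_prior(f, u_mean_prev, u_cov_prev, n_samples=100, rng=None):
    """Approximate the mean u_d and covariance Sigma_d of f(u_{k-1})
    by pushing samples of the previous posterior through the EP model f."""
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.multivariate_normal(u_mean_prev, u_cov_prev, size=n_samples)
    pushed = np.array([f(s) for s in samples])       # propagate each sample through the model
    u_d = pushed.mean(axis=0)
    Sigma_d = np.cov(pushed, rowvar=False)           # empirical covariance of the predictions
    return u_d, Sigma_d
```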

3). Error Model:

Finally, we model the prediction error ηk with a prior distribution p(ηk|θk). Because we model ηk independently for each time instant, below we drop k from the formulation for the sake of simplicity.

To model $\eta$, we exploit its low-dimensional structure by considering the physiological phenomenon that the TMP wavefront (which can be thought of as the spatial gradient of TMP) is spatially localized. It is therefore reasonable to assume the spatial gradient vector of $u_k$ to be sparse, with many zero elements, as illustrated in the examples in Fig. 2. At any time instant, the difference between the gradient of the true TMP and that predicted by an erroneous model would capture the difference in their wavefronts, which is localized in space and can be reasonably approximated by a sparse representation. This is demonstrated in Fig. 2, where the actual wavefront (left column) is delayed by the annotated infarct region when moving from the apex towards the base of the ventricles. In comparison, the propagation produced by a prediction model unaware of the infarct (middle column) does not exhibit this delay. The difference between these two wavefronts, computed as the difference of the TMP gradients, is also sparse (spatially localized) at any time instant, as illustrated in the last column of Fig. 2.

Fig. 2. Spatial gradient of true and predicted TMP and their difference.

One common practice to enforce sparsity is to use an L1 penalty and, correspondingly, a Laplacian prior distribution. More recently, the $L_p$ norm ($0 \le p < 1$)¹ has been used to generate sparse solutions [20], [21] in compressed sensing. Both of these cases can be incorporated within the single framework of the generalized normal distribution with the $L_p$ norm in the exponent:

$$p_{gn}(\eta|\alpha) = \left(\frac{p}{2\alpha\Gamma(1/p)}\right)^{n}\exp\left(-\left(\frac{\|D\eta\|_p}{\alpha}\right)^{p}\right) \tag{9}$$

where $\alpha$ is the hyperparameter and $D$ is the 3D spatial gradient operator. As we decrease $p$ from 2 towards 0, the tail of this distribution gets heavier, encouraging sparser solutions. One key difficulty in calculating the posterior distribution using the generalized normal prior is the presence of the $L_p$ norm in the exponent of eq.(9). Hence, to perform principled inference, we derive a variational lower bound of eq.(9) below.

Theorem 1: Let $x = (x_1, x_2, \dots, x_n)$ be a vector with independent components, each following a generalized normal distribution with the same parameters $\alpha$ and $p$, with joint pdf $p(x|\alpha) = \left(\frac{p}{2\alpha\Gamma(1/p)}\right)^{n}\exp\left(-\left(\frac{\|x\|_p}{\alpha}\right)^{p}\right)$.

Then,

$$p(x|\alpha) = \sup_{\lambda>0} C\exp\left(-\frac{x^T\Lambda x}{2} - \frac{2-p}{2}\left(\frac{\alpha^2}{p}\right)^{\frac{p}{p-2}}\sum_i \lambda_i^{\frac{p}{p-2}}\right)$$

where $C = \left(\frac{p}{2\alpha\Gamma(1/p)}\right)^{n}$ and $\Lambda = \operatorname{diag}(\lambda)$.

The proof of Theorem 1 is provided in the Appendix. It makes use of Fenchel-Legendre duality and the conjugate of a convex function to derive a variational lower bound. Fig. 3 illustrates the crux of Theorem 1: the red curve represents a function with the negative $L_p$ norm, raised to the $p$th power, in the exponent; the generalized normal distribution is composed of such functions in each component. This function is lower-bounded by functions with a negative quadratic term (like a Gaussian) in the exponent. So, essentially, we have replaced a complicated function with a family of simpler lower-bounding functions. Naturally, this comes with an additional set of variational parameters, each corresponding to one Gaussian-like function. The advantage of this formulation, however, is that, conditioned on a fixed variational parameter, the function is Gaussian (multiplied by a constant). This brings forth the tractability of Gaussian distributions and the consequent computational advantage during the development of the inference algorithm, which will be elaborated in Section III-B.

Fig. 3. At each point $x$, and fixed $p$, there exists a lower-bounding Gaussian-like function that tangentially touches $\exp(-|x|^p)$.
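The bound can be checked numerically for a single component with $\alpha = 1$: for each $x$, maximizing the Gaussian-like lower bound over the variational parameter $\lambda$ should recover $\exp(-|x|^p)$. The following sketch is a verification under that assumption.

```python
import numpy as np

def lower_bound(x, lam, p):
    """Gaussian-like lower bound of exp(-|x|^p) for one component (alpha = 1)."""
    return np.exp(-lam * x**2 / 2.0
                  - (2.0 - p) / 2.0 * (lam / p) ** (p / (p - 2.0)))

p = 0.8
lams = np.logspace(-4, 4, 20000)                     # dense grid of variational parameters
for x in [0.1, 0.5, 1.0, 2.0]:
    sup = lower_bound(x, lams, p).max()              # numerical supremum over lambda
    assert abs(sup - np.exp(-abs(x) ** p)) < 1e-3    # touches exp(-|x|^p) at the optimum
```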

Setting x = Dη in Theorem 1, we obtain,

$$p_{gn}(\eta|\alpha) = \sup_{\lambda>0} C\exp\left(-\frac{\eta^T D^T\Lambda D\eta}{2} - \frac{2-p}{2}\left(\frac{\alpha^2}{p}\right)^{\frac{p}{p-2}}\sum_i \lambda_i^{\frac{p}{p-2}}\right) \tag{10}$$

Dropping the supremum in eq.(10) gives us a lower bound for the generalized normal distribution for any λ. This lower bound is used as the prior distribution of η, p(η|θ), treating λ as a variational parameter to be optimized during the inference.

$$p(\eta|\theta) = C\exp\left(-\frac{\eta^T D^T\Lambda D\eta}{2} - \frac{2-p}{2}\left(\frac{\alpha^2}{p}\right)^{\frac{p}{p-2}}\sum_i \lambda_i^{\frac{p}{p-2}}\right) \tag{11}$$

where θ = {α, λ}.

By definition, the gradient operator $D$ has a null space spanned by the vector of all ones (say $\mathbf{1}$). Using this $D$ thus fails to correct a constant bias in TMP. To address this issue, we augment $D$ with one more row of all ones, i.e., $D = (D^T, \mathbf{1})^T$.

B. Joint Inference of Transmural TMP and Prediction Errors

As the inference is iteratively carried out for each time instant, we drop k from equations for simplicity. Given the probabilistic formulation described in the previous section, we have the following joint pdf of interest:

$$p(y,u,\eta,\beta|u_d,\Sigma_d,\theta,c,d) = p(y|u,\beta)\,p(u|\eta,u_d,\Sigma_d)\,p(\eta|\theta)\,p(\beta|c,d) \tag{12}$$

To keep the notation uncluttered, we write this distribution as $p(y, u, \eta, \beta|\theta)$, where $u_d, \Sigma_d, c, d$ are understood as given. We are interested in jointly estimating the random variables $u, \eta, \beta$ as well as the parameter $\theta = \{\lambda, \alpha\}$. We propose to do this in two steps. First, we estimate the parameter $\theta$ as the maximum likelihood estimate by integrating out the variables $u, \eta, \beta$, i.e.,

$$\hat{\theta} = \arg\max_{\theta}\ p(y|\theta), \tag{13}$$

where

$$p(y|\theta) = \iiint p(y,u,\eta,\beta|\theta)\,du\,d\eta\,d\beta$$

Once we obtain the optimum $\hat{\theta}$, we then compute the posterior distribution $p(u,\eta,\beta|y,\hat{\theta})$. However, eq.(13) is difficult to solve due to the need to integrate out the random variables $u$, $\eta$ and $\beta$. We therefore present an iterative procedure that yields both the optimum $\hat{\theta}$ and $p(u,\eta,\beta|y,\hat{\theta})$.

We decompose log p(y|θ) as:

$$\log p(y|\theta) = \mathcal{L}(q,\theta) + KL(q\|p) \tag{14}$$

where

$$\mathcal{L}(q,\theta) = \int q(u,\eta,\beta)\log\left(\frac{p(y,u,\eta,\beta|\theta)}{q(u,\eta,\beta)}\right) \tag{15}$$
$$KL(q\|p) = \int q(u,\eta,\beta)\log\left(\frac{q(u,\eta,\beta)}{p(u,\eta,\beta|y,\theta)}\right) \tag{16}$$

Since the Kullback-Leibler divergence $KL(q\|p)$ between $q$ and $p(u,\eta,\beta|y,\theta)$ is non-negative, $\mathcal{L}$ is a lower bound of $\log p(y|\theta)$, with the gap given by $KL(q\|p)$. To maximize $\log p(y|\theta)$, we can thus minimize the KL divergence gap $KL(q\|p)$ and maximize the lower bound $\mathcal{L}$. We achieve this by two alternating optimization steps: i) posterior approximation, where we fix $\theta$ and minimize the KL divergence by making $q$ as close to the true posterior $p(u,\eta,\beta|y,\theta)$ as possible via Variational Bayes, and ii) parameter optimization, where we fix $q$ and maximize the lower bound $\mathcal{L}$ with respect to $\theta$. This style of alternately estimating the parameter and the posterior distribution of the hidden variables is known as expectation maximization (EM).
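Conceptually, the resulting loop for one time instant alternates the two steps until convergence; the sketch below only fixes this control flow, with the Variational Bayes step and the parameter step supplied by the caller (their closed forms are given in the next two subsections).

```python
def infer_time_instant(y, u_d, Sigma_d, H, theta_init, update_q, update_theta,
                       n_iters=50):
    """EM-style alternation: update_q approximates the posterior q(u, eta, beta)
    given theta (minimizing KL), update_theta maximizes the lower bound given q."""
    theta, q = theta_init, None
    for _ in range(n_iters):
        q = update_q(y, u_d, Sigma_d, H, theta)      # i) posterior approximation
        theta = update_theta(q)                      # ii) parameter optimization
    return q, theta
```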

1). Posterior Approximation of u,η, β:

Given the estimate from the previous iteration, $\theta_{old} = \{\lambda_{old}, \alpha_{old}\}$, the true posterior distribution is:

$$p(u,\eta,\beta|y,\theta_{old}) \propto p(y|\beta,u)\,p(u|\eta)\,p(\eta|\lambda_{old},\alpha_{old})\,p(\beta) \tag{17}$$

Note that, through the variational lower bound $p(\eta|\theta)$ derived in Theorem 1, $p(\eta|\lambda_{old}, \alpha_{old})$ becomes Gaussian when conditioned on known values of $\lambda_{old}$ and $\alpha_{old}$. In other words, by combining Theorem 1 and the EM algorithm, we are able to replace a complex distribution ($L_p$ norm in the exponent) with a Gaussian distribution and greatly simplify the calculation of the posterior distribution (and its approximation).

The approximated joint distribution $q(u, \eta, \beta)$ is obtained using Variational Bayes with a mean-field approximation: $q(u, \eta, \beta) = q(u, \eta)q(\beta)$. Note that we only assume independence between $\beta$ and $(\eta, u)$, not between $\eta$ and $u$, since the action potential and the model error are closely related. From eq.(17), Variational Bayes yields:

$$\log q(u,\eta) = \mathbb{E}_{q(\beta)}\left[\log\left[p(y|\beta,u)\,p(u|\eta)\,p(\eta|\lambda_{old},\alpha_{old})\right]\right] + c = -\frac{1}{2}\left[\mathbb{E}_{q(\beta)}[\beta]\,(y-Hu)^T(y-Hu) + \eta^T D^T\Lambda_{old}D\eta + (u-u_d-\eta)^T\Sigma_d^{-1}(u-u_d-\eta)\right] + c \tag{18}$$
$$q(u,\eta) = C\exp\left(-\frac{1}{2}\left[\bar{\beta}(y-Hu)^T(y-Hu) + (u-u_d-\eta)^T\Sigma_d^{-1}(u-u_d-\eta) + \eta^T D^T\Lambda_{old}D\eta\right]\right) \tag{19}$$

where $\bar{\beta} = \mathbb{E}_{q(\beta)}[\beta]$ and $q(u, \eta)$ is jointly Gaussian. The marginal distributions $q(u)$ and $q(\eta)$ can then be derived analytically:

$$q(u) = \mathcal{N}(u \mid \bar{u}, \Sigma_u)$$

where $\Sigma_u^{-1} = \bar{\beta}H^TH + \left(\Sigma_d + (D^T\Lambda_{old}D)^{-1}\right)^{-1}$ and $\bar{u} = \Sigma_u\left(\bar{\beta}H^Ty + \left(\Sigma_d + (D^T\Lambda_{old}D)^{-1}\right)^{-1}u_d\right)$;

$$q(\eta) = \mathcal{N}(\eta \mid \bar{\eta}, \Sigma_\eta)$$

where $\Sigma_\eta^{-1} = D^T\Lambda_{old}D + H^T\left(\bar{\beta}^{-1}I + H\Sigma_dH^T\right)^{-1}H$ and $\bar{\eta} = \Sigma_\eta H^T\left(\bar{\beta}^{-1}I + H\Sigma_dH^T\right)^{-1}(y - Hu_d)$.

Using Variational Bayes, q(β) can be calculated as,

$$\log q(\beta) = \mathbb{E}_{q(u,\eta)}\left[\log\left[p(y|\beta,u)\,p(\beta)\right]\right] + c = -\beta\left[d + \frac{1}{2}\mathbb{E}_{q(u,\eta)}\left[(y-Hu)^T(y-Hu)\right]\right] + \left(c - 1 + \frac{m}{2}\right)\log\beta + c \tag{20}$$
$$q(\beta) = \operatorname{Gamma}\left(c + \frac{m}{2},\ d + \frac{1}{2}\mathbb{E}_{q(u)}\left[(y-Hu)^T(y-Hu)\right]\right) \tag{21}$$
$$\bar{\beta} = \frac{m + 2c}{2d + \|y - H\bar{u}\|^2 + \operatorname{tr}(\Sigma_u H^TH)} \tag{22}$$

where $m$ is the dimension of $y$ (the number of ECG leads).
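A direct (unoptimized) sketch of these closed-form updates is given below, taking $\Lambda$ as a vector of precisions $\lambda_i$, $D$ as the augmented gradient operator, and the Gamma hyperparameters $c, d$ as small constants; it mirrors the equations above rather than the exact implementation of Algorithm 1.

```python
import numpy as np

def vb_update(y, H, u_d, Sigma_d, lam, D, beta, c=1e-3, d=1e-3):
    """One pass of the mean-field updates for q(u), q(eta) and q(beta)."""
    m, n = H.shape
    DtLD = D.T @ (lam[:, None] * D)                      # D^T Lambda D
    P = np.linalg.inv(Sigma_d + np.linalg.inv(DtLD))     # precision of the model prediction

    # q(u) = N(u_bar, Sigma_u)
    Sigma_u = np.linalg.inv(beta * H.T @ H + P)
    u_bar = Sigma_u @ (beta * H.T @ y + P @ u_d)

    # q(eta) = N(eta_bar, Sigma_eta)
    G = np.linalg.inv(np.eye(m) / beta + H @ Sigma_d @ H.T)
    Sigma_eta = np.linalg.inv(DtLD + H.T @ G @ H)
    eta_bar = Sigma_eta @ H.T @ G @ (y - H @ u_d)

    # q(beta): Gamma posterior; its mean follows eq.(22)
    resid = y - H @ u_bar
    beta_bar = (m + 2 * c) / (2 * d + resid @ resid + np.trace(Sigma_u @ H.T @ H))
    return u_bar, Sigma_u, eta_bar, Sigma_eta, beta_bar
```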

2). Parameter Optimization:

Fixing the posterior approximation $q$ obtained from the previous step, maximization of $\mathcal{L}$ in eq.(15) is equivalent to maximization of $\mathbb{E}_{q(u,\eta)}[\log p(y, u, \eta, \beta|\theta)]$ with respect to $\theta$. In $\log p(y, u, \eta, \beta|\theta)$, the only term depending on $\theta$ is $\log p(\eta|\theta)$. In taking the expectation of this term, we can marginalize out $u$ and $\beta$, leaving the optimization of $\mathbb{E}_{q(\eta)}[\log p(\eta|\theta)]$, which is achieved by equating its first derivative to zero:

$$\frac{\partial}{\partial\theta}\left(\mathbb{E}_{q(\eta)}\left[\log p(\eta|\theta)\right]\right) = 0 \tag{23}$$

The details of the derivation are in Appendix B. The complete algorithm is summarized in Algorithm 1.

a). Limiting case of p:

Choosing $p$ close to zero makes our prior sparser, which we expect to work better. Fortunately, we can derive the limiting-case expression for $p \to 0$ in Algorithm 1:

$$\text{If } p \to 0 \text{ then } s \to 1,\quad \lambda_i \to \frac{1}{\operatorname{tr}\left(\left[\bar{\eta}\bar{\eta}^T + \Sigma_\eta\right]d_i d_i^T\right)} \tag{24}$$

We report results using Algorithm 1 with this limiting case in all experiments unless stated otherwise. The effect of different values of p is investigated in section IV-B.
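Under the conventions of the sketch above, the limiting-case update of eq.(24) reduces to one line per row $d_i$ of $D$, since $\operatorname{tr}([\bar{\eta}\bar{\eta}^T + \Sigma_\eta]d_id_i^T) = d_i^T(\bar{\eta}\bar{\eta}^T + \Sigma_\eta)d_i$:

```python
import numpy as np

def update_lambda_p0(eta_bar, Sigma_eta, D):
    """Limiting-case (p -> 0) update of the precisions lambda_i (eq.(24))."""
    E = np.outer(eta_bar, eta_bar) + Sigma_eta
    return 1.0 / np.einsum('ij,jk,ik->i', D, E, D)   # 1 / (d_i^T E d_i) for every row d_i
```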

3). Prediction:

Upon convergence, q(u) provides the posterior distribution of TMP at the current time instant, k. It will then be used to predict the prior distribution of TMP at the next time instant, k + 1, as described in the previous section.

C. Reducing Computational Cost

A main computational cost of the presented method comes from the inversion of the matrices listed in steps 8–10 of Algorithm 1. Specifically, let $H$ be $M \times N$ where $M \sim 120$ and $N \sim 2000$; steps 8–10 require three inversions of matrices of size $N \times N$. To reduce this cost, we rearrange the equations in those steps and equivalently invert $M \times M$ matrices instead of $N \times N$:

$$\Sigma_u = \left(\beta H^TH + \Sigma_p^{-1}\right)^{-1} = \Sigma_p - \Sigma_p H^T\left(H\Sigma_p H^T + \beta^{-1}I\right)^{-1}H\Sigma_p \tag{25}$$
$$\bar{u} = \Sigma_u\left(\beta H^Ty + \Sigma_p^{-1}u_d\right) = \Sigma_p H^T\left(H\Sigma_p H^T + \beta^{-1}I\right)^{-1}y + \left(\beta\Sigma_p H^TH + I\right)^{-1}u_d \tag{26}$$

where $\Sigma_p = \Sigma_d + (D^T\Lambda D)^{-1}$.

Proof: See Appendix C. □

Using this reformulation, we reduced the computational time by approximately 30% on a Tesla K20m GPU (5 GB) with a 2.2 GHz processor, using MATLAB built-in functions with GPU support. As described earlier, the cost is reduced by decreasing the number of inversions of large matrices. Therefore, in a setup where matrix inversion is made very efficient by a GPU or an alternative parallel architecture and/or a low-level programming language, a smaller gain may be expected from this rearrangement.
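As a sanity check of eqs.(25)-(26), the two routes can be compared numerically; the sketch below uses small random matrices in place of the actual $H$ and $\Sigma_p$ and verifies that the reformulated expressions match the direct ones.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 12, 200                                    # small stand-ins for ~120 leads, ~2000 nodes
H = rng.standard_normal((M, N))
A = rng.standard_normal((N, N))
Sigma_p = A @ A.T + N * np.eye(N)                 # any symmetric positive-definite Sigma_p
y, u_d, beta = rng.standard_normal(M), rng.standard_normal(N), 2.0

# Direct route: N x N inversions
Sigma_u_direct = np.linalg.inv(beta * H.T @ H + np.linalg.inv(Sigma_p))
u_direct = Sigma_u_direct @ (beta * H.T @ y + np.linalg.inv(Sigma_p) @ u_d)

# Reformulated route per eqs.(25)-(26): the data-dependent inverse is only M x M
G = np.linalg.inv(H @ Sigma_p @ H.T + np.eye(M) / beta)
Sigma_u_fast = Sigma_p - Sigma_p @ H.T @ G @ H @ Sigma_p
u_fast = Sigma_p @ H.T @ G @ y + np.linalg.inv(beta * Sigma_p @ H.T @ H + np.eye(N)) @ u_d

assert np.allclose(Sigma_u_direct, Sigma_u_fast, rtol=1e-5, atol=1e-6)
assert np.allclose(u_direct, u_fast, rtol=1e-5, atol=1e-6)
```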

D. Connection With Sparse Bayesian Learning

We note that the prior distribution on the model error (eq.(11)) is a variational distribution with a quadratic term in the exponent. This is reminiscent of works in sparse Bayesian learning (SBL), where a zero mean Gaussian prior with unknown variance is used to enforce sparsity [22], [23]. If we rearrange the presented error prior in eq.(11) in the form of SBL, we will obtain:

$$p(\eta|\theta) = p_N(\eta|\theta)\,p_{sbl}(\theta) \tag{27}$$
$$= \mathcal{N}\left(0,\ (D^T\Lambda D)^{-1}\right)\exp\left(-\Psi(\theta)\right) \tag{28}$$

where,

$$\Psi(\theta) = \log(\alpha^n Z) + \frac{1}{2}\log\left|D^T\Lambda D\right| + \frac{2-p}{2}\left(\frac{\alpha^2}{p}\right)^{\frac{p}{p-2}}\sum_i \lambda_i^{\frac{p}{p-2}} \tag{29}$$

where $Z$ is a constant independent of $\theta$. In this form, the presented prior is similar to the SBL variant presented in [24], where the prior covariance is represented as a linear combination of basis matrices with unknown weights modeled with a hyperprior. Here, the precision matrix, instead of the covariance matrix, is expressed as a linear combination of basis matrices, with the basis matrices being the outer products of the columns $d_i$ of the gradient matrix $D^T$, i.e., the precision matrix is given by $\sum_{i=1}^{n+1}\lambda_i d_i d_i^T$. This results naturally from assuming the gradient of TMP (the wavefront) to be sparse.

1). Parameter Estimation for the Additional Row of D:

As described earlier, we add one more row of ones to the matrix $D$, i.e., $d_{n+1} = \mathbf{1}$. Our inference procedure alternately estimates parameters and random variables. The parameters $\lambda_i$ are estimated from the ECG through $\eta$ (see line 16 of Algorithm 1). Then $\eta$ and $u$ are updated according to $\lambda_i$ (see lines 8 through 13 of Algorithm 1). The whole precision matrix, given by $\sum_{i=1}^{n+1}\lambda_i d_i d_i^T$, affects how much the inverse TMP estimate ($u$) should adjust the prediction of the dynamic model ($u_d$) according to ECG data ($y$) (see line 10 of Algorithm 1). Intuitively, when the value of $\lambda_i d_i d_i^T$ is high, less correction will occur for the $i$-th element of $Du$ (i.e., $d_i^Tu$), and the estimated value will be determined more heavily by the model prediction. However, since the vector of ones, $\mathbf{1}$, lies in the null space of the forward matrix $H$, the last $\lambda_{n+1}$ – corresponding to the row $\mathbf{1}$ added to matrix $D$ – cannot be estimated from the ECG during our inference. Therefore, we heuristically set this $\lambda_{n+1}$ high, such that when $u$ is estimated, the bias $\mathbf{1}^Tu$ is only minimally corrected with respect to the prediction from the previous time instant. This is based on the assumption that the initial $u$ we start from has an approximately correct bias ($\mathbf{1}^Tu$), and we maintain the bias in the same range throughout. This is a reasonable assumption because 1) we are focusing on the error in the gradient of $u$, and 2) we do not have any other source of information from which to learn this bias. Note that we want $\lambda_{n+1}$ to be sufficiently high, but not so high as to put a heavy constraint on the inverse estimate. To maintain a high value of $\lambda_{n+1}$, we always keep it above a threshold $\lambda_{threshold}$. Above this threshold, we set $\lambda_{n+1}$ to $\max(\lambda_{1:n})$. This gradually increases $\lambda_{n+1}$ over the iterations as the other values of $\lambda$ increase, eventually reaching values much higher than $\lambda_{threshold}$.
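The heuristic for the extra precision $\lambda_{n+1}$ can be written compactly; `lam_threshold` below is an assumed tuning constant, not a value prescribed by the paper.

```python
import numpy as np

def set_bias_precision(lam, lam_threshold=1e4):
    """Keep lambda_{n+1} (the precision on the bias direction 1) high:
    at least lam_threshold, and tracking max(lambda_1..n) as the others grow."""
    lam = np.asarray(lam, dtype=float).copy()
    lam[-1] = max(lam_threshold, lam[:-1].max())
    return lam
```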

IV. Results

A. Synthetic Experiment 1: Errors in Model Parameters

We first evaluate the ability of the presented method to detect and correct model errors arising from model parameters that represent tissue properties. Specifically, we consider the presence of local myocardial infarcts unknown to the prior physiological model. Experiments were carried out on three image-derived heart-torso models, including 34 settings of myocardial infarcts of various sizes and locations in the ventricles. Specifically, we divided each left ventricle into 17 segments according to the American Heart Association (AHA) recommendations [25], and set each infarct to two of the 17 segments. 120-lead ECG data was then simulated and corrupted with 20 dB noise for inverse reconstruction of 3D transmural TMP signals. The inverse reconstruction utilized a prior EP model without knowledge of the presence of the infarct. From the reconstructed TMP signals, the activation time was calculated as the time of the steepest upstroke, and the region of infarct was extracted as the region where the TMP signals have a duration below 50% of the normal value. The quantitative accuracy of the solutions was evaluated using two metrics: the correlation coefficient between the true and reconstructed activation times, and the Dice coefficient between the true and estimated regions of infarct. Using these metrics, we also compared the performance of the presented method against model-constrained EP reconstruction without correcting model errors as described in [14].
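For reference, a minimal sketch of how such metrics can be computed from a reconstructed TMP array is given below (node-wise steepest upstroke, duration thresholding, and the Dice coefficient); the array layout and thresholds are illustrative assumptions.

```python
import numpy as np

def activation_time(tmp):
    """tmp: (N_nodes, T) TMP signals; activation = time of the steepest upstroke."""
    return np.argmax(np.diff(tmp, axis=1), axis=1)

def infarct_region(tmp, rest=0.0, frac=0.5):
    """Flag nodes whose TMP duration (time spent above half amplitude)
    falls below 50% of the median duration across nodes."""
    half = rest + 0.5 * (tmp.max(axis=1) - rest)
    duration = (tmp > half[:, None]).sum(axis=1)
    return duration < frac * np.median(duration)

def dice(a, b):
    """Dice coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```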

Fig. 4 shows examples of the simulated and reconstructed TMP sequences. As shown in the ground truth (bottom row), the TMP propagation was blocked at the region of an infarct located in the basal infero-lateral region of the heart. Without model error correction (top row), the reconstructed TMP sequence was not able to reflect this conduction block until after the depolarization stage. In comparison, the presented method (middle row) was able to detect and correct the prior model error at an early stage of the depolarization, capturing the conduction block at the correct location of the heart.

Fig. 4. Comparison of TMP propagation sequences between simulated ground truth and reconstructions with and without model error correction. Scar region has been delineated with black contour.

To better understand how model uncertainty helps in correcting the posterior estimate of TMP, we note that, in line 10 of Algorithm 1, the prediction from the previous time instant, $u_d$, is multiplied by the precision matrix $P_u$, i.e., the inverse of the covariance matrix $\Sigma_d + (D^T\Lambda D)^{-1}$. This covariance matrix is the sum of the uncertainty $\Sigma_d$ propagated from the previous step and the model error $(D^T\Lambda D)^{-1}$ estimated at this time step, capturing the true uncertainty in the model-predicted TMP. We plot the variances (diagonal elements of the covariance matrix) in Fig. 5. As shown, the presented method detected high uncertainty (variance) in the predicted TMPs at the infarcted region, but low uncertainty at the healthy region. Also note that the variance was particularly high at the boundary of the infarct, a natural result of modeling the prediction error to be sparse in the spatial gradient domain.

Fig. 5. Examples of variance plots. Left: spatial plot at one time instant. Right: temporal plot at selected locations.

Fig. 6 shows additional examples of activation time maps derived from the reconstructed TMP sequences, with and without model error correction, in comparison to the simulated ground truth. As shown, in the presence of infarcts at different locations of the heart, the presented method was able to more closely reconstruct the conduction block despite the absence of knowledge about these infarcts in the prior EP model.

Fig. 6. Comparison of activation times reconstructed with and without model error correction at different scar settings.

Fig. 7 summarizes the quantitative comparison between the results obtained with and without model error correction. As shown in Fig. 7(a), the accuracy of the presented method – in both activation time and infarct detection – is significantly higher than that without error correction. Noting the high standard deviation of the presented results in Fig. 7(a), we further compared the performance (Dice coefficient) of the two methods according to whether the infarcts were septal. As shown in Fig. 7(b), 1) in both methods, the performance was poor when the infarct was septal, and 2) the correction of model error brought a significant improvement in accuracy when the infarct was non-septal. This suggests that the ability to reconstruct septal information in the heart may be fundamentally limited by its observability in surface ECG data, while for cases where this observability is not an issue, the presented method performs well. We analyze the sensitivity of the algorithm to infarct settings in greater detail in Section V.

Fig. 7. (a) Quantitative comparisons of reconstructions obtained with and without error correction, in the presence of infarcts unknown to the prior model. (b) Quantitative comparisons when reconstructing septal and non-septal infarcts. Left: without model error correction; right: with model error correction.

B. Sensitivity Analysis

1). Sensitivity to the Value of p:

The generalized normal distribution enforces sparsity for values $0 \le p \le 1$. We performed an analysis on a single geometry to understand the sensitivity of the presented algorithm to different values of $p$ ranging between 0 and 2. As shown in Fig. 8, with any value of $p$ within the range $0 \le p \le 1$, the presented method performed better than the method that did not consider model error correction. However, contrary to expectation, the performance of the presented method did not improve as we decreased $p$ from 1 to 0. In fact, the presented method performed better at $p = 1$ than at $0 < p < 1$, although the best performance was obtained with the limiting case $p \to 0$ derived above. We report results using this limiting case of the algorithm throughout this paper.

Fig. 8. Sensitivity of the presented algorithm with respect to different values of $p$ in the generalized Gaussian prior.

2). Sensitivity to the Added Vector in D:

As described in subsection III-D.1, to preserve a bias term that is applicable to the prior electrophysiological model, we added one row of ones to the gradient operator $D$ and heuristically updated the corresponding $\lambda_{n+1}$. The vector of ones and the updating heuristic for $\lambda_{n+1}$ were empirically found to work well. To understand the effect of this strategy on the presented algorithm, we investigated two scenarios. First, we replaced the vector of ones with different weighting factors while keeping the value of $\lambda_{n+1}$ fixed. The performance of the presented algorithm is summarized in Fig. 9(a). As shown, the performance dropped when the weighting factor was either higher or lower than one, although the drop was much more significant when the weighting factor was less than one. This is because a high weighting factor imposes too strong a bias constraint and does not allow much change to the TMP in accordance with the ECG, while a low weighting factor imposes a weak bias constraint that may allow the TMP solution to wander beyond a feasible range. The latter causes a much bigger problem because, once the value of TMP goes beyond the range feasible for the prior electrophysiological model, the model prediction becomes unstable and may even crash.

Fig. 9. Sensitivity of the presented algorithm with respect to different weighting factors of the vector of ones added to the gradient matrix $D$.

We then tested the second scenario where we adjusted λn+1 accordingly when multiplying the row of ones with a weighting factor. As summarized in Fig. 9(b), with simultaneous adjustment of λn+1 following the strategy adopted in the presented algorithm, the performance remained more or less unchanged over the range of weighting factors tested.

C. Synthetic Experiments 2: Errors in Initial Conditions

We then evaluate the ability of the presented method to detect and correct model errors arising from the initial condition of the prior EP model – locations of the earliest excitation points in the ventricles. In each of the three patient-specific geometrical models, we considered the following error settings: 1) the prior EP model missed one excitation point from the ground truth, 2) the prior EP model included an extra excitation point not in the ground truth, and 3) the excitation point in the prior EP model was at a different location from the ground truth. In all cases, 120-lead ECG data was simulated and corrupted with 20dB noise for TMP reconstruction. Quantitative accuracy of the reconstructed TMP sequence in comparison to the ground truth was measured by two metrics: 1) normalized mean square error, and 2) correlation coefficient.

Fig. 10 shows an example of the reconstructed and simulated TMP sequence where the simulated TMP started from two excitation points while the TMP reconstruction was constrained by a prior EP model starting with only one of the excitation points. While the reconstructed TMP was unable to capture the missing excitation point without model error correction, the presented method was able to quickly correct that error 20ms into the depolarization. In comparison, as shown in Fig. 11, we found that it was more difficult for both methods to correct an extra excitation point that was not in the ground truth. Quantitative comparison between the two methods is summarized in Fig. 12, showing a statistically significant improvement brought by the presented method.

Fig. 10. Reconstructed versus true TMP propagation when the prior model missed one of the two excitation points.

Fig. 11. Reconstructed versus true TMP propagation when the prior model included an extra excitation point absent in the ground truth.

Fig. 12. Quantitative comparisons of reconstructions obtained with and without model error correction, in the presence of model errors in excitation points.

D. Real Data Experiments

We performed real data experiments on two patients who underwent catheter ablation due to post-infarction ventricular arrhythmia [26]. For each patient, patient-specific heart-torso geometry was extracted from CT images, on which transmural TMP signals were reconstructed from 120-lead ECG data acquired during sinus rhythm. From the reconstructed TMP signals, the region of infarct was identified as where the duration of TMP falls below 50% of the normal value. The obtained region of infarct was compared with in-vivo bipolar voltage maps, and reconstructions obtained with and without model error correction were compared.

These results are visually summarized in Fig. 13 and quantitatively summarized in Table I in terms of the Dice coefficient between the detected infarct region and the low-voltage region (≤ 1.5 mV) in bipolar voltage maps. In case 1, the infarct reconstructed considering model error (third column) is visually closer to the bipolar voltage map registered to the CT-derived mesh. In case 2, both reconstructions were visually less consistent with the bipolar voltage map. The Dice coefficients in Table I suggest that both methods performed less satisfactorily in case 2, although the presented method was able to bring an evident improvement in both cases.

Fig. 13. Regions of infarcts extracted from reconstructed TMP sequences in reference to in-vivo bipolar voltage maps.

TABLE I.

Dice Coefficients Between Infarcted Regions Extracted From Reconstructed TMP and Bipolar Voltage Maps

Method                             Case 1    Case 2
Without Model Error Correction     0.2406    0.1053
With Model Error Correction        0.3053    0.2237

We noted that the improvement in inverse reconstructions brought by the presented method was not as significant in real data as it was in simulated data. This might be attributed to several reasons. First, the forward matrix $H$ was treated as known in the simulated experiments while, in real-data experiments, the accuracy of inverse reconstruction is directly affected by errors in the forward matrix itself. Second, if the error between the true TMP propagation and that of the prior EP model is too large, the assumption of sparse model error might not hold. This error can come from multiple sources. For example, realistic infarcts may have much more complex spatial distributions than the simple-shaped infarcts used in the simulated experiments. The number and location of excitation points are also less predictable in real-data experiments than in simulated settings. Third, the correspondence of bipolar voltage to the reconstructed TMP sequence is not straightforward. As a result, we resorted to a secondary comparison in which we identified infarcts from both sets of data. These intermediate steps might be another source of error. Finally, the registration of the bipolar voltage map to the CT-derived mesh may introduce additional errors that further compound the validation process.

V. Discussion

A. Algorithm Performance vs. Error Observability

As observed in Section IV-A, the performance of the presented method changes with the location of the infarct. If we decompose the observed ECG for each case into the following two components: $y_k = H(u_{prediction} + \eta) = y_{prediction} + y_\eta$, it is clear that – given the same dynamic prediction model unaware of any infarct settings – the difference in ECG data from different infarct settings is introduced by the model error $\eta$. Therefore, here we attempt to rationalize how the observation of the error $\eta$ on the ECG might be related to the quality of the estimation results. To do so, we revisit the approach for maximum-likelihood estimation of the parameter $\theta$ derived in eq.(13), and reformulate it to focus on the error observation in the ECG, $y_\eta$ (rather than the overall ECG observation $y$ used in eq.(13)).

Considering that $\eta$ is observed on the surface ECG data as the data error $y_\eta = H\eta$, we have $p(y_\eta|\eta,\beta) = \mathcal{N}(y_\eta \mid H\eta,\ \beta^{-1}I)$, where the prior density of $\eta$ is characterized by the hyperparameters $\lambda$ and $\alpha$ as defined in eq.(28) and eq.(29). As mentioned in Section III-B, our optimization scheme first obtains the parameter that maximizes the likelihood of $y$ after marginalizing over the intermediate random variables. Following the same line of derivation, we marginalize over $\eta$ to obtain $y_\eta$ as a Gaussian distribution characterized by $\lambda$ and $\alpha$, $p(y_\eta|\Lambda,\beta) = \mathcal{N}(y_\eta \mid 0, \Sigma_{y_\eta})$, where:

$$\Sigma_{y_\eta} = \beta^{-1}I + H(D^T\Lambda D)^{-1}H^T \tag{30}$$

Note that marginalization over $\eta$ is now possible because, unlike before where we also had the latent variable $u$, we now have only $\eta$ as a latent variable. We obtain the parameter estimation equation as

$$\hat{\lambda},\hat{\alpha} = \arg\max_{\lambda,\alpha}\ \log\left[p(y_\eta|\Lambda,\beta)\,p(\lambda,\alpha)\right] = \arg\min_{\lambda,\alpha}\left[y_\eta^T\Sigma_{y_\eta}^{-1}y_\eta + \log\left|\Sigma_{y_\eta}\right| + 2\Psi(\lambda,\alpha)\right] \tag{31}$$

Now, we want to understand how this optimization leads to better performance in certain error cases than in others. Note that we want to analyze the difference in performance with respect to the ECG error $y_\eta$. In estimating the optimal $\hat{\lambda}$, the only term that constrains $\lambda$ to fit the ECG data error is the first term, $y_\eta^T\Sigma_{y_\eta}^{-1}y_\eta$, which we call the data-fitting constraint. We decompose it into terms that do and do not depend on $\lambda$ in the following result.

Result 1: If H = USVT is the singular value decomposition, z = UT yη, and 〈., .〉 denotes inner product, then,

$$y_\eta^T\Sigma_{y_\eta}^{-1}y_\eta = \beta y_\eta^Ty_\eta - \beta\left\langle zz^T,\ S\left(\beta^{-1}V^TD^T\Lambda DV + S^TS\right)^{-1}S^T\right\rangle$$

The proof is in Appendix D.

Our argument here is that if there are multiple minima, then it would be difficult for the algorithm to find the true minimum. Suppose $\lambda^*, \alpha^*$ minimize eq.(31), and let $y_\eta^T\Sigma_{y_\eta}^{-1}y_\eta = C^*$ at this minimum. Because only the second term in Result 1 depends on $\lambda$, the data-fitting constraint $y_\eta^T\Sigma_{y_\eta}^{-1}y_\eta = C^*$ is satisfied by all $\lambda$ for which the inner product $\langle zz^T, S(\beta^{-1}V^TD^T\Lambda DV + S^TS)^{-1}S^T\rangle$ remains unchanged. Note that, in this inner product, zero elements in $z$ will mask the matrix $S(\beta^{-1}V^TD^T\Lambda DV + S^TS)^{-1}S^T$ that contains $\lambda$. Therefore, if $z$ is highly sparse, there will be a large number of $\lambda$ values that satisfy the data-fitting constraint and are therefore minimizers of eq.(31), increasing the feasible solution space. We use the $L_0$ norm of $z$, denoted by $|z|_0$, to quantify how dense the vector is. The above analysis suggests that a lower value of $|z|_0 = |U^Ty_\eta|_0$ – a small number of left singular vectors of $H$ present in $y_\eta$ – may correspond to a higher difficulty in finding the true optimum. Note that a lower value of $|z|_0$ is only sufficient to ensure multiple minima, but not necessary, because even if $z$ is not sparse at all, the inner product might be small depending on the alignment of the two matrices.

To test this hypothesis, we carried out experiments in the various settings of infarcts considered in Section IV. In each case, we set the vector $\eta$ to be one in the infarct and zero elsewhere. We calculated $y_\eta = H\eta$ and then plotted $|z|_0 = |U^Ty_\eta|_0$ against the Dice coefficient of the solution obtained earlier. As shown in Fig. 14, we observed that whenever $|z|_0$ is low – for example, below the threshold annotated in the figure – the Dice coefficient of the obtained result is also low. This is in agreement with our hypothesis. Additionally, we found that most of the settings with septal infarcts had a low value of $|z|_0$, explaining the difficulty of reconstruction in these cases. For the cases where $|z|_0$ is high, however, the Dice coefficient is mixed. This result of mixed Dice coefficients for high $|z|_0$ is also consistent with our argument above that a low value of $|z|_0$ is sufficient but not necessary. This is because, from Result 1, a higher value of $|z|_0$ may also lead to a small inner product, depending on the two matrices. Thus, the experimental results support our theory.
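A sketch of this observability measure, under the same simplifications used in our experiments (binary error vector $\eta$, and a numerical threshold standing in for exact zeros since $y_\eta$ is never exactly sparse in floating point), is shown below.

```python
import numpy as np

def error_observability(H, infarct_mask, rel_tol=1e-3):
    """|z|_0 = |U^T y_eta|_0 for a binary error vector (1 in the infarct, 0 elsewhere);
    entries below rel_tol * max(|z|) are treated as zero."""
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    y_eta = H @ infarct_mask.astype(float)
    z = U.T @ y_eta
    return int(np.sum(np.abs(z) > rel_tol * np.abs(z).max()))
```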

Fig. 14. Values of $|z|_0 = |U^Ty_\eta|_0$ versus Dice coefficient in three geometrical models, where $U$ is the matrix of left singular vectors of $H$.

We do caution that the above tests were conducted with the following simplifications: 1) we assumed that the model error is either zero (in the healthy region) or one (in the infarcted region), 2) we considered only one time instant, and 3) we assumed that the Dice coefficient is a direct measure of the reconstruction accuracy. More rigorous experimental testing may be devised in the future for the presented theoretical analysis.

B. Relation to Relevance Determination

We now examine the presented method from the perspective of relevance determination, and lay out its similarities with and differences from relevance vector machines [22]. As the matrix $D^T\Lambda D$ in eq.(30) is approximately block diagonal, we re-express the covariance matrix of the data error $y_\eta$ as $\Sigma_{y_\eta} = \beta^{-1}I + \sum_k H_kA_k^{-1}H_k^T$, where $A_k$ is the $k$-th block of $D^T\Lambda D$. Following the reasoning in [22], if the data error $y_\eta$ is generated by a Gaussian distribution, the empirical covariance $y_\eta y_\eta^T$ must be approximately equal to the covariance $\Sigma_{y_\eta}$. Therefore, if $A_k$ were a $1 \times 1$ block (say $a_k$), we would be estimating $a_k$ such that $\beta^{-1}I + \sum_k h_ka_k^{-1}h_k^T$ matches $y_\eta y_\eta^T$. This would be the same as relevance vector machines [22] and automatic relevance determination [27], which work by selecting the relevant columns $h_k$ that are most closely aligned with the data vector while driving the remaining $a_k^{-1}$ towards 0. Here, given the block matrices $A_k$ that couple the columns of $H$, we speculate that, instead of a single column, the presented method is forced to choose a set of columns such that the covariance is close to the data covariance. Consequently, due to this block selection, only a portion of the columns in the solution will be closely aligned with the data vector.

To experimentally examine this mechanism of block selection, we carried out experiments in a setting similar to that described in the previous subsection. In each experiment, we computed the angle between each column of $H$ and $y_\eta$, focusing particularly on those columns with small angles to $y_\eta$ (i.e., relevant vectors). Fig. 15 shows the percentage of small-angle columns out of all columns in the reconstructed infarct, plotted against the Dice coefficient. We note that having a higher percentage of relevant vectors in the solution was related to higher Dice coefficients, suggesting the relevance-determination nature of the presented method. At the same time, we also note that the percentage of relevant vectors remains moderate even when the Dice coefficient is high, supporting our speculation of block selection.
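A sketch of this angle-based analysis follows; the "small angle" cutoff is an illustrative assumption rather than the exact value used in our experiments.

```python
import numpy as np

def fraction_relevant_columns(H, y_eta, solution_mask, angle_deg=30.0):
    """Fraction of columns of H inside the reconstructed infarct whose angle
    with the ECG error vector y_eta falls below the chosen cutoff."""
    cols = H[:, solution_mask]
    cos = np.abs(cols.T @ y_eta) / (np.linalg.norm(cols, axis=0) * np.linalg.norm(y_eta))
    angles = np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))
    return float(np.mean(angles < angle_deg))
```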

Fig. 15. Percentage of relevant vectors (columns of $H$ that have a small angle with the ECG error vector) in the reconstructed region of infarct.

C. Limitations and Future Work

We observed limited performance of the presented method in real data experiments. Compared to synthetic experiments where the prior model error was controlled to one source, model errors in real data experiments can arise from multiple sources, such as the error in the prior dynamic model, and the error in the forward measurement model that relates the TMP in the heart to ECG data on the body surface. Investigation of methods that can detect and correct errors in the forward measurement model is an interesting direction of future work, such as those presented in [28].

We may need to consider additional prior knowledge about the error, or model the error better, for example by considering temporal correlation. Future work may also consider an alternative approach to incorporating prior physiological knowledge, for example through a data-learnt generative model that extracts knowledge from physiological models but has latent factors that can be more easily adapted to ECG data while retaining complex relationships [29].

With an interest in understanding why the presented method performs differently in different cases, we presented mathematical justifications and initial empirical support that the performance of the presented method is related to how the model error is observed on ECG data. We hope that this result will encourage researchers in inverse electrophysiological imaging to look closely into the relatively unexplored area of how and why a new reconstruction method performs differently in different pathological conditions. This also raises an open question: can we devise reconstruction methods for electrophysiological imaging that are less sensitive (in terms of performance) to the particular type of clinical application of interest?

Finally, the presented method performs inference sequentially using only past ECG data. This is largely limited by the nature of the prior EP model, as it is not possible to reverse the model in time. This further suggests alternative means of extracting knowledge from EP models without explicitly utilizing these models within the inference.

VI. Conclusions

We presented a Bayesian framework to jointly infer from ECG data the posterior distribution of TMP signals and the error in the prior EP model, exploiting the sparse nature of the error in the gradient domain. We have shown that, by considering and correcting the error in the prior model, we can improve TMP reconstruction. Future work will focus on alternative means of incorporating prior physiological knowledge such that the model elements to be estimated from ECG data are more expressive in generating the TMP sequence.

Acknowledgments

This work was supported in part by the National Science Foundation CAREER Award ACI-1350374, and in part by the National Institutes of Health under Award no. R01HL145590.

Appendix A. Derivation of Variational Lower Bound

Lemma 1:

$$-|x|^p = \sup_{\gamma>0}\left(-\frac{x^2}{2\gamma} - \frac{2-p}{2}\left(\frac{1}{p\gamma}\right)^{\frac{p}{p-2}}\right)$$

Proof: We use Fenchel-Legendre duality for convex functions to prove this theorem. Let us define a variable $y = x^2$ and a function $f$ as:

$$f(y) = -|x|^p = -y^{p/2}, \quad y \ge 0 \tag{32}$$

which is a convex function of $y$ in the domain $y > 0$. Hence, Fenchel-Legendre duality is used to obtain

$$f(y) = \sup_{\lambda}\left(\lambda y - f^*(\lambda)\right) \tag{33}$$

where the conjugate function $f^*(\lambda)$ is given by

$$f^*(\lambda) = \sup_{y>0}\left(\lambda y - f(y)\right) = \sup_{y>0}\left(\lambda y + y^{p/2}\right) \tag{34}$$

The supremum in eq.(34) is obtained at $\hat{y} = \left[\frac{2(-\lambda)}{p}\right]^{\frac{2}{p-2}}$. Substituting $\hat{y}$ back into eq.(34), we get

$$f^*(\lambda) = (-\lambda)^{\frac{p}{p-2}}\left(\frac{2-p}{2}\right)\left(\frac{2}{p}\right)^{\frac{p}{p-2}} = (-\lambda)^{\frac{p}{p-2}}z(p) \tag{35}$$

where $z(p) = \left(\frac{2-p}{2}\right)\left(\frac{2}{p}\right)^{\frac{p}{p-2}}$. Note that $y$ is always positive in its domain, and thus $\lambda$ is always negative. Substituting the value of $f^*(\lambda)$ back into eq.(33),

$$f(y) = \sup_{\lambda<0}\left(\lambda y - (-\lambda)^{\frac{p}{p-2}}z(p)\right) \tag{36}$$

Putting $\lambda = -1/(2\gamma)$, we have

$$f(y) = \sup_{\gamma>0}\left(-\frac{y}{2\gamma} - \left(\frac{1}{2\gamma}\right)^{\frac{p}{p-2}}z(p)\right) = \sup_{\gamma>0}\left(-\frac{y}{2\gamma} - \frac{2-p}{2}\left(\frac{1}{p\gamma}\right)^{\frac{p}{p-2}}\right) \tag{37}$$

Setting $y = x^2$ completes the proof. □

Lemma 2:

$$\exp\left(-\frac{|x|^p}{\zeta}\right) = \sup_{\tau>0}\ \exp\left(-\frac{x^2}{2\tau}\right)\exp\left(-\frac{2-p}{2}\left(\frac{1}{p\tau}\right)^{\frac{p}{p-2}}\zeta^{\frac{2}{p-2}}\right)$$

Proof: Multiplying both sides of Lemma 1 by 1/ζ yields

$$\frac{f(y)}{\zeta} = \sup_{\gamma>0}\left(-\frac{y}{2\gamma\zeta} - \frac{2-p}{2\zeta}\left(\frac{1}{p\gamma}\right)^{\frac{p}{p-2}}\right).$$

Setting τ = ζγ, we have,

$$-\frac{|x|^p}{\zeta} = \sup_{\tau>0}\left(-\frac{y}{2\tau} - \frac{2-p}{2\zeta}\left(\frac{\zeta}{p\tau}\right)^{\frac{p}{p-2}}\right) = \sup_{\tau>0}\left(-\frac{y}{2\tau} - \frac{2-p}{2}\left(\frac{1}{p\tau}\right)^{\frac{p}{p-2}}\zeta^{\frac{2}{p-2}}\right) \tag{38}$$

Taking the exponential and replacing $y = x^2$ completes the proof. □

Proof of Theorem 1:

Proof: Using Lemma 2 in $p(x|\alpha) = C\exp\left(-\sum_i\frac{|x_i|^p}{\alpha^p}\right)$ yields

$$p(x|\alpha) = \sup_{\tau>0} C\exp\left(-\sum_i\frac{x_i^2}{2\tau_i} - \sum_i\frac{2-p}{2}\left(\frac{\alpha^2}{p\tau_i}\right)^{\frac{p}{p-2}}\right) = \sup_{\lambda>0} C\exp\left(-\frac{x^T\Lambda x}{2} - \frac{2-p}{2}\left(\frac{\alpha^2}{p}\right)^{\frac{p}{p-2}}\sum_i\lambda_i^{\frac{p}{p-2}}\right)$$

where $\lambda_i = 1/\tau_i$ and $\Lambda = \operatorname{diag}(\lambda)$. □

Appendix B. Calculation of λ

$$\frac{\partial}{\partial\lambda_i}\left(\mathbb{E}_{q(u,\eta)}\left[\log p(\eta|\alpha,\lambda)\right]\right) = 0$$
$$\frac{\partial}{\partial\lambda_i}\left(\mathbb{E}_{q(u,\eta)}\left[-\frac{\eta^TD^T\Lambda D\eta}{2} - \frac{2-p}{2}\left(\frac{\alpha^2}{p}\right)^{\frac{p}{p-2}}\sum_i\lambda_i^{\frac{p}{p-2}}\right]\right) = 0$$
$$\operatorname{tr}\left(\mathbb{E}_{q(\eta)}\left[\eta\eta^T\right]d_id_i^T\right) = \left(\frac{\lambda_i\alpha^p}{p}\right)^{\frac{2}{p-2}}$$
$$\lambda_i = \frac{p}{\alpha^p}\left(\frac{1}{\operatorname{tr}\left(\left[\bar{\eta}\bar{\eta}^T+\Sigma_\eta\right]d_id_i^T\right)}\right)^{\frac{2-p}{2}} \tag{39}$$

The variational parameter $\lambda$ depends on another parameter, $\alpha$. To obtain the optimum value of $\alpha$, we repeat the same process but take the derivative with respect to $\alpha$.

$$\frac{\partial}{\partial\alpha}\left(\mathbb{E}_{q(u,\eta)}\left[-N\log\alpha - \frac{2-p}{2}\left(\frac{\alpha^2}{p}\right)^{\frac{p}{p-2}}\sum_i\lambda_i^{\frac{p}{p-2}}\right]\right) = 0$$
$$\frac{p}{\alpha^p} = \left(\frac{N}{\sum_i\lambda_i^{\frac{p}{p-2}}}\right)^{\frac{2-p}{2}} \tag{40}$$

Substituting eq.(40) into eq.(39), we finally obtain

$$\lambda_i = \left(\frac{s}{\operatorname{tr}\left(\left[\bar{\eta}\bar{\eta}^T+\Sigma_\eta\right]d_id_i^T\right)}\right)^{\frac{2-p}{2}}$$

where

$$s = \frac{N}{\sum_i\lambda_i^{\frac{p}{p-2}}}$$

Appendix C. Reducing Computational Cost

$$\Sigma_u = \left(\beta H^TH + \Sigma_p^{-1}\right)^{-1} = \Sigma_p - \Sigma_p H^T\left(H\Sigma_p H^T + \beta^{-1}I\right)^{-1}H\Sigma_p \tag{41}$$
$$\bar{u} = \Sigma_u\left(\beta H^Ty + \Sigma_p^{-1}u_d\right) = \Sigma_p H^T\left(H\Sigma_p H^T + \beta^{-1}I\right)^{-1}y + \left(\beta\Sigma_p H^TH + I\right)^{-1}u_d \tag{42}$$

where $\Sigma_p = \Sigma_d + (D^T\Lambda D)^{-1}$.

Proof: Eq.(41) readily follows from the Woodbury inverse identity. To prove eq.(42), we prove each term. For the first term, we multiply both sides of $\Sigma_u^{-1} = \beta H^TH + \Sigma_p^{-1}$ on the right by $\Sigma_p H^T$ to obtain eq.(43), and then multiply that by $\Sigma_u$ on the left and $\left(H\Sigma_p H^T + \beta^{-1}I\right)^{-1}y$ on the right to obtain eq.(44):

$$\Sigma_u^{-1}\Sigma_p H^T = \beta H^T\left(H\Sigma_p H^T + \beta^{-1}I\right) \tag{43}$$
$$\Sigma_p H^T\left(H\Sigma_p H^T + \beta^{-1}I\right)^{-1}y = \beta\Sigma_u H^Ty \tag{44}$$

To prove the second term, we start with the definition:

$$\Sigma_u^{-1} = \Sigma_p^{-1}\left(\beta\Sigma_p H^TH + I\right) \tag{45}$$

and multiply it by $\Sigma_u$ on the left and $\Sigma_p^{-1}u_d$ on the right:

$$\left(\beta\Sigma_p H^TH + I\right)^{-1}u_d = \Sigma_u\Sigma_p^{-1}u_d \tag{46}$$

Appendix D. Proof of Result 1

Proof: $\Sigma_{y_\eta} = \beta^{-1}I + HA^{-1}H^T$, where $A = D^T\Lambda D$. Using the Woodbury identity,

$$\Sigma_{y_\eta}^{-1} = \beta I - \beta^2 H\left(A + \beta H^TH\right)^{-1}H^T = \beta I - \beta US\left(\beta^{-1}V^TAV + S^TS\right)^{-1}S^TU^T$$
$$y_\eta^T\Sigma_{y_\eta}^{-1}y_\eta = \beta y_\eta^Ty_\eta - \beta\operatorname{tr}\left(zz^TS\left(\beta^{-1}V^TAV + S^TS\right)^{-1}S^T\right) = \beta y_\eta^Ty_\eta - \beta\left\langle zz^T,\ S\left(\beta^{-1}V^TAV + S^TS\right)^{-1}S^T\right\rangle$$

where $H = USV^T$ is the singular value decomposition and $z = U^Ty_\eta$. Finally, replacing $A = D^T\Lambda D$, we obtain

$$y_\eta^T\Sigma_{y_\eta}^{-1}y_\eta = \beta y_\eta^Ty_\eta - \beta\left\langle zz^T,\ S\left(\beta^{-1}V^TD^T\Lambda DV + S^TS\right)^{-1}S^T\right\rangle$$

Footnotes

1

The $L_p$ norm is not a norm for $0 \le p < 1$ in the strict sense because it does not satisfy the triangle inequality, which is easy to verify by noting the non-convexity of the unit ball in $L_p$ space. Here, we refer to it as a norm for the sake of convenience.

Contributor Information

Sandesh Ghimire, B. Thomas Golisano College of Computing and Information Sciences, Rochester Institute of Technology, Rochester, NY 14623 USA.

John L. Sapp, QEII Health Sciences Centre, Dalhousie University, Halifax, NS B3H 4R2, Canada, and also with the Department of Medicine, Dalhousie University, Halifax, NS B3H 4R2, Canada.

B. Milan Horáček, School of Biomedical Engineering, Dalhousie University, Halifax, NS B3H 4R2, Canada.

Linwei Wang, B. Thomas Golisano College of Computing and Information Sciences, Rochester Institute of Technology, Rochester, NY 14623 USA.

References

  • [1] Burger M, Mardal K-A, and Nielsen BF, "Stability analysis of the inverse transmembrane potential problem in electrocardiography," Inverse Problems, vol. 26, no. 10, 2010, Art. no. 105012.
  • [2] Rudy Y and Oster HS, "The electrocardiographic inverse problem," Crit. Rev. Biomed. Eng., vol. 20, nos. 1–2, pp. 25–45, 1992.
  • [3] Okamoto Y, Teramachi Y, and Musha T, "Limitation of the inverse problem in body surface potential mapping," IEEE Trans. Biomed. Eng., vol. BME-30, no. 11, pp. 749–754, November 1983.
  • [4] Brooks DH, Ahmad GF, MacLeod RS, and Maratos GM, "Inverse electrocardiography by simultaneous imposition of multiple constraints," IEEE Trans. Biomed. Eng., vol. 46, no. 1, pp. 3–18, January 1999.
  • [5] Ghosh S and Rudy Y, "Application of L1-norm regularization to epicardial potential solution of the inverse electrocardiography problem," Ann. Biomed. Eng., vol. 37, no. 5, pp. 902–912, 2009.
  • [6] Xu J, Dehaghani AR, Gao F, and Wang L, "Noninvasive transmural electrophysiological imaging based on minimization of total-variation functional," IEEE Trans. Med. Imag., vol. 33, no. 9, pp. 1860–1874, September 2014.
  • [7] Serinagaoglu Y, Brooks DH, and MacLeod RS, "Bayesian solutions and performance analysis in bioelectric inverse problems," IEEE Trans. Biomed. Eng., vol. 52, no. 6, pp. 1009–1020, June 2005.
  • [8] Xu J, Sapp JL, Dehaghani AR, Gao F, and Wang L, "Variational Bayesian electrophysiological imaging of myocardial infarction," in Proc. Int. Conf. MICCAI, Cham, Switzerland: Springer, 2014, pp. 529–537.
  • [9] Rahimi A, Sapp J, Xu J, Bajorski P, Horacek M, and Wang L, "Examining the impact of prior models in transmural electrophysiological imaging: A hierarchical multiple-model Bayesian approach," IEEE Trans. Med. Imag., vol. 35, no. 1, pp. 229–243, January 2016.
  • [10] Pullan AJ, Cheng LK, Nash MP, Bradley CP, and Paterson DJ, "Noninvasive electrical imaging of the heart: Theory and model development," Ann. Biomed. Eng., vol. 29, no. 10, pp. 817–836, 2001.
  • [11] Van Dam PM, Oostendorp TF, Linnenbank AC, and van Oosterom A, "Non-invasive imaging of cardiac activation and recovery," Ann. Biomed. Eng., vol. 37, no. 9, pp. 1739–1756, 2009.
  • [12] Ghodrati A, Brooks DH, Tadmor G, and MacLeod RS, "Wavefront-based models for inverse electrocardiography," IEEE Trans. Biomed. Eng., vol. 53, no. 9, pp. 1821–1831, September 2006.
  • [13] Nielsen BF, Lysaker M, and Grøttum P, "Computing ischemic regions in the heart with the bidomain model—First steps towards validation," IEEE Trans. Med. Imag., vol. 32, no. 6, pp. 1085–1096, June 2013.
  • [14] Wang L, Zhang H, Wong KC, Liu H, and Shi P, "Physiological-model-constrained noninvasive reconstruction of volumetric myocardial transmembrane potentials," IEEE Trans. Biomed. Eng., vol. 57, no. 2, pp. 296–315, February 2010.
  • [15] He B, Li G, and Zhang X, "Noninvasive imaging of cardiac transmembrane potentials within three-dimensional myocardium by means of a realistic geometry anisotropic heart model," IEEE Trans. Biomed. Eng., vol. 50, no. 10, pp. 1190–1202, October 2003.
  • [16] Erem B, van Dam P, and Brooks DH, "Identifying model inaccuracies and solution uncertainties in noninvasive activation-based imaging of cardiac excitation using convex relaxation," IEEE Trans. Med. Imag., vol. 33, no. 4, pp. 902–912, April 2014.
  • [17] Ghimire S, Sapp JL, Horacek M, and Wang L, "A variational approach to sparse model error estimation in cardiac electrophysiological imaging," in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent., Cham, Switzerland: Springer, 2017, pp. 745–753.
  • [18] Wang L et al., "Transmural imaging of ventricular action potentials and post-infarction scars in swine hearts," IEEE Trans. Med. Imag., vol. 32, no. 4, pp. 731–747, April 2013.
  • [19] Aliev RR and Panfilov AV, "A simple two-variable model of cardiac excitation," Chaos, Solitons Fractals, vol. 7, no. 3, pp. 293–301, March 1996.
  • [20] Chartrand R and Yin W, "Iteratively reweighted algorithms for compressive sensing," in Proc. IEEE ICASSP, Mar.–Apr. 2008, pp. 3869–3872.
  • [21] Daubechies I, DeVore R, Fornasier M, and Güntürk CS, "Iteratively reweighted least squares minimization for sparse recovery," Commun. Pure Appl. Math., vol. 63, no. 1, pp. 1–38, 2010.
  • [22] Tipping ME, "Sparse Bayesian learning and the relevance vector machine," J. Mach. Learn. Res., vol. 1, pp. 211–244, September 2001.
  • [23] Wipf DP, Rao BD, and Nagarajan S, "Latent variable Bayesian models for promoting sparsity," IEEE Trans. Inf. Theory, vol. 57, no. 9, pp. 6236–6255, September 2011.
  • [24] Wipf D and Nagarajan S, "A unified Bayesian framework for MEG/EEG source imaging," NeuroImage, vol. 44, no. 3, pp. 947–966, 2009.
  • [25] American Heart Association Writing Group on Myocardial Segmentation and Registration for Cardiac Imaging et al., "Standardized myocardial segmentation and nomenclature for tomographic imaging of the heart: A statement for healthcare professionals from the cardiac imaging committee of the council on clinical cardiology of the American Heart Association," Circulation, vol. 105, no. 4, pp. 539–542, 2002.
  • [26] Sapp JL, Dawoud F, Clements JC, and Horáček BM, "Inverse solution mapping of epicardial potentials: Quantitative comparison with epicardial contact mapping," Circulat., Arrhythmia Electrophysiol., vol. 5, no. 5, pp. 1001–1009, 2012.
  • [27] MacKay DJC, "Bayesian interpolation," Neural Comput., vol. 4, no. 3, pp. 415–447, 1992.
  • [28] Erem B, Coll-Font J, Orellana RM, Štovíček P, and Brooks DH, "Using transmural regularization and dynamic modeling for noninvasive cardiac potential imaging of endocardial pacing with imprecise thoracic geometry," IEEE Trans. Med. Imag., vol. 33, no. 3, pp. 726–738, March 2014.
  • [29] Ghimire S, Dhamala J, Gyawali PK, Sapp JL, Horacek M, and Wang L, "Generative modeling and inverse imaging of cardiac transmembrane potential," in Proc. Int. Conf. MICCAI, Cham, Switzerland: Springer, 2018, pp. 508–516.
