Author manuscript; available in PMC: 2017 Jan 1.
Published in final edited form as: IEEE Trans Med Imaging. 2015 Aug 4;35(1):229–243. doi: 10.1109/TMI.2015.2464315

Examining the Impact of Prior Models in Transmural Electrophysiological Imaging: A Hierarchical Multiple-Model Bayesian Approach

Azar Rahimi, John Sapp, Jingjia Xu, Peter Bajorski, Milan Horacek, Linwei Wang
PMCID: PMC4703535  NIHMSID: NIHMS723158  PMID: 26259018

Abstract

Noninvasive cardiac electrophysiological (EP) imaging aims to mathematically reconstruct the spatiotemporal dynamics of cardiac sources from body-surface electrocardiographic (ECG) data. This ill-posed problem is often regularized by a fixed constraining model. However, a fixed-model approach forces the source distribution to follow a pre-assumed structure that does not always match the varying spatiotemporal distribution of actual sources. To understand the model-data relation and examine the impact of prior models, we present a multiple-model approach for volumetric cardiac EP imaging where multiple prior models are included and automatically picked by the available ECG data. Multiple models are incorporated as an Lp-norm prior for sources, where p is an unknown hyperparameter with a uniform prior distribution. To examine how different combinations of models may be favored by different measurement data, the posterior distribution of cardiac sources and hyperparameter p is calculated using a Markov Chain Monte Carlo (MCMC) technique. The importance of the multiple-model prior was assessed in two sets of synthetic and real-data experiments, compared to fixed-model priors (using Laplace and Gaussian priors). The results showed that the posterior combination of models (the posterior distribution of p) as determined by the ECG data differed substantially when reconstructing sources with different sizes and structures. While the use of fixed models is best suited to situations where the prior assumption fits the actual source structures, the use of an automatically adaptive set of models may have the ability to better address model-data mismatch and to provide consistent performance in reconstructing sources with different properties.

Keywords: Transmural electrophysiological imaging, Lp-norm regularization, multiple-model prior, Bayesian inference

I. Introduction

ADVANCES in medical imaging technologies and image processing techniques have substantially improved our ability to assess the structure [1], kinematics (such as the deformation) [2], and mechanics (such as the strain distribution) [3] of the heart. Nevertheless, the heart is an electromechanically coupled organ, i.e., a coordinated electrical propagation throughout the heart muscle is required for an efficient contraction of the heart. Compared to advances in imaging cardiac structure and mechanics, there is a considerable inadequacy in our ability to observe the electrical activity of the heart. Current clinical approaches to assessing individual cardiac electrophysiology are mainly restricted to either remote body-surface electrocardiograms (ECG) or invasive catheter mapping on the heart surface with limited spatial resolution.

To address the limitations of current clinical techniques, computational cardiac EP imaging has emerged to computationally reconstruct subject-specific cardiac source dynamics from noninvasive ECG or magnetocardiography (MCG) data. It has shown promise in the diagnosis of cardiac dysfunctions such as ischemia [4], infarction [5], atrial flutter [6], atrial fibrillation [7], and ventricular arrhythmias [8], [9]. In general, EP imaging solves an inverse problem that is both underdetermined and physically ill-posed [10]. The former is caused by the limited number of ECG/MCG measurements compared to the large number of degrees of freedom among the unknowns in the heart. The latter is due to the underlying biophysics: in a quasi-static electromagnetic field, different configurations of 3D sources may produce the same surface potential distributions [11]. Therefore, if the solution is sought transmurally, this inverse problem lacks a unique solution in its most unconstrained form, and proper assumptions about the solution must be made in order to obtain a unique transmural reconstruction.

Over the past three decades, many cardiac EP imaging approaches have been developed with a focus on finding proper prior models to overcome the ill-posedness of the problem. First, different equivalent source models are used as an implicit constraint on the solution. These models can in general be classified into two groups: 1) surface-based source models in the form of electrical potential on the epicardium and/or endocardium [12], [13], [6], [14], or activation time on the ventricular surface [15], [16], [17]; and 2) volumetric source models in the form of action potential [18], [19], [20], [21], [22], [23], or current density/activation front [24] throughout the myocardial wall. The surface-based source models implicitly overcome the lack of physically-unique solutions by constraining the solution to the heart surface, while the transmural source models rely on further assumptions to overcome this physical ill-posedness. Second, assumptions on different properties of the equivalent source models have been used, in either deterministic or probabilistic frameworks, to further regularize the ill-posed inverse problem. In a deterministic setting, spatial and/or temporal smoothness of the source models has been commonly enforced through different numerical techniques such as Tikhonov regularization (zero-order, first-order, and second-order) [25], [22] and truncated SVD [26], [27], [24]. Recently, L1-norm based sparsity models have also been used to enforce low-dimensional features of the solutions in space [28], [29], [30], [31], [18]. In a probabilistic setting, similarly, Gaussian prior distributions have been used to enforce the smoothness of the epicardial source distribution [32], [33], [34], [35], [36], [37], [38], while a total-variation prior has recently been adopted to preserve the structural sparsity of the transmural source distribution [39].
In addition, physiological prior knowledge generated from computational models of 3D electrical excitation has also been used to constrain transmural EP imaging [21], [23].

While these pre-defined models are beneficial for circumventing the ill-posedness of the problem, they also rely on fixed assumptions that may not generalize to the varying spatiotemporal property of EP excitation being measured. Cardiac sources undergo a complex spatiotemporal process in each cardiac cycle. Initially, cardiac sources are only present in a few sparsely distributed focal points. Because of the fast depolarization properties of action potential [40], active cardiac sources then form an excitation wavefront during the depolarization phase that resembles a sharp gradient separating excited myocardium cells from resting cells. This is followed by a repolarization wavefront that is more extended compared to the excitation wavefront due to relatively slower repolarization properties of action potential [40]. Therefore, a sparsity model is intuitively ideal when the sources exhibit compact spatial properties, while a smoothness-based model may better suit extended distributions of cardiac sources. Furthermore, in a pathologic heart with increased heterogeneity in tissue properties, the sparsity or smoothness of sources becomes more difficult to predict a priori. Therefore, a fixed model has its limitations when used for constraining the complex spatiotemporal property of cardiac electrical sources; instead, the model may need to adapt to the spatiotemporal changes in cardiac sources in various conditions. Our previous work empirically examined this effect of fixed-model regularization using an Lp-norm constraint where the value of p is predefined. Our earlier experiments demonstrated that, when different source distributions are involved, different values of p (i.e., different prior models) are needed for satisfactory reconstruction accuracy [41].

To systematically investigate how a prior model may impact EP imaging solutions, and whether allowing the measurement data to pick a favorable set of models may reduce this impact, in this paper we present a multiple-model approach to volumetric cardiac EP imaging. Specifically, we present a hierarchical Bayesian approach that employs a continuous combination of prior models, each reflecting a specific spatial property of transmural sources. These models are incorporated as an Lp-norm prior (with p ranging between 1 and 2) for cardiac sources, and the hierarchical structure includes p as an unknown hyperparameter with a uniform prior distribution: in this way, a continuous set of models is included and distributed according to the posterior distribution of p. The posterior distribution of cardiac sources and hyperparameter p is calculated using a Markov Chain Monte Carlo (MCMC) technique. The posterior distribution of p describes how the 3D source estimation relies on different prior models, providing a quantitative illustration of how different combinations of prior models may be preferred by different measurement data. In addition, the hierarchical Bayesian structure allows us to automatically infer other hyperparameters from the ECG data, including the variances of the measurement noise and the source prior. This alleviates the challenge of finding optimal parameters that often arises in deterministic regularization [29], [30], [42].

The presented method is conceptually similar to the multiple-constraint method proposed by Brooks et al. [42], which adopts a pre-defined finite set of models (constraints) with different orders of the L2-norm in a deterministic regularization scheme for imaging equivalent source distributions on the heart surface. The proposed multiple-model Bayesian approach can be considered a generalization of their method in two aspects. First, it includes both L1-norm and L2-norm equivalent priors in the regularization. Second, it incorporates a continuous rather than discrete set of prior models through the use of the hyperparameter p, and the model combination is automated in a Bayesian framework. The presented approach, in essence, can also be interpreted as multiple model adaptive estimation (MMAE), which is commonly used for motion analysis and target tracking [43], [44], [45]. As in MMAE, a set of models is considered, where a weighted combination of all solutions determines the final solution. Nevertheless, there are two main differences that distinguish our work from MMAE. First, our approach considers a continuous distribution of models rather than a finite set, so there is no need to pre-define a finite number of models to sufficiently cover the model space. Second, MMAE is often used with a minimum mean square error estimator, such as the Kalman filter, to provide a point estimate of the output, while we are interested in a full Bayesian analysis of the output distribution.

In two sets of synthetic and real-data experiments, the proposed multiple-model approach was applied to reconstruct 3D source distributions with different structures and sizes. To investigate the effect of fixing the prior model, solutions were also obtained by pre-defining the value of p in the Lp-norm to 1 and 2, respectively, in the proposed method. The results showed that the posterior combination of models (the posterior distribution of p), as determined by the ECG data, differed substantially among different experimental settings. Furthermore, high variance and complex distributions of p could be observed when complex pathological conditions were involved. This underscores that: 1) it is important to adapt prior models to different underlying source structures; and 2) it may be preferable to utilize a combination of models rather than one single model for a better match with complex source properties. The use of an adaptable set of models, as demonstrated by the experiments, improved the robustness of imaging sources with different properties in comparison to fixed-model approaches.

While targeted at the application of noninvasive EP imaging in this paper, the underlying concept of the presented work can be generalized to a broader variety of inverse problems that face the challenge of model-data mismatch.

II. Methodology

A. Forward Measurement Model

Cardiac EP imaging essentially aims to reconstruct cardiac bioelectrical sources from measurements of the bioelectrical field in the torso volume conductor, in particular on the body surface. The quasi-static electromagnetism [11] explains the relation between the field measurements within the torso volume and cardiac bioelectrical sources as:

∇·((Dint + Dext)∇ϕe(r)) = ∇·(Dint∇u(r)),  r ∈ Ωh, (1)
∇·(Di∇ϕi(r)) = 0,  r ∈ Ωi,  Ωi = Ωt \ Ωh, (2)

where r stands for the 3D spatial coordinate.

Equation (1) describes a bidomain heart model [46] explaining how the extracellular potential ϕe within the heart volume Ωh originates from the action potential u. Dint and Dext are the effective intracellular and extracellular conductivity tensors. The sum of the intracellular and extracellular conductivity tensors, Dk = Dint + Dext, is termed the bulk conductivity tensor. Equation (2) describes how the potential ϕi distributes within the volume conductor Ωi external to the heart with conductivity tensor Di, assuming that no other active electrical source exists within the torso. The construction of a numerical forward model often involves several different assumptions based on equations (1) and (2), as described below.

The adoption of different source models plays a first role in different treatments of equations (1) and (2). When surface-based equivalent source models are used, the forward model is formulated on the domain between the heart surface and body surface by considering only the Laplace equation (2). The source models are then commonly defined in the form of electrical potential on the epicardium and/or endocardium [12], [13], [6], [14], or activation time on the ventricular surface [15], [16], [17]. When volumetric source models are used, both the Poisson equation (1) and the Laplace equation (2) are considered. The corresponding source models are commonly in the form of action potential [18], [19], [20], [21], [22], [23], or current density/activation front [24]. In this paper, the equivalent source model we seek is the spatial gradient of action potential (∇u), which is briefly referred to as the source (v) throughout the paper.

Within the torso volume (2), many studies have examined how assumptions on tissue conductivities affect the forward modeling [47], [48]. Based on these findings, common practice currently ranges from homogeneous torso models [49], [50], [28] to the inclusion of different levels of inhomogeneity [51], [52], [53], [54]. In this paper, homogeneous torso conductivity is assumed, converting the tensor Di to a scalar σt [47]. The torso conductivity σt is assumed to be 0.2 S/m [55].

Within the myocardium (1), common assumptions on the conductivity tensors include anisotropy in both bulk and intracellular conductivities [55], isotropy in both [56], [57], and anisotropy in intracellular conductivity but isotropy in bulk conductivity [58], [59]. Since the anisotropic ratio of Dk is an order of magnitude smaller than that of Dint [58], in this paper we follow the third assumption: we retain only the intracellular anisotropy and assume isotropic bulk conductivity, which converts the tensor Dk to a scalar σblk. This is similar to an oblique dipole layer model, where the anisotropy of the primary current is considered whereas that of the passive/secondary current is neglected [58]. The intracellular conductivity tensor Dint is obtained by mapping a 3D experimentally-derived mathematical fiber model to the personalized ventricular geometry of the subject [60], [61]. Its conductivity is assumed to be 0.24 S/m in the longitudinal direction and 0.024 S/m in the transversal direction [55]. The isotropic bulk conductivity σblk is calculated as an intermediate value between the longitudinal and transversal bulk conductivities (0.48 S/m and 0.12 S/m, respectively) [55].

Based on these assumptions, the forward relationship between cardiac source v (the spatial gradient of action potential) and body-surface potential data ϕ can be described as:

σblk∇²ϕe(r) = ∇·(Dint v(r)),  r ∈ Ωh, (3)
σt∇²ϕ(r) = 0,  r ∈ Ωt \ Ωh. (4)

These equations are solved numerically on a subject-specific heart-torso model by coupling mesh-free and boundary element methods, as detailed in our previous work [21], [62]. This gives us a linear biophysical model:

b=Hv+n, (5)

where b represents the m×1 body-surface measurement vector at one time instant, v is the n×1 vector of cardiac sources, H denotes the m×n transfer matrix, and n is the m×1 measurement noise.
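As a concrete illustration, the discretized forward model b = Hv + n can be simulated in a few lines of NumPy. Note that the transfer matrix H below is a random stand-in (in the actual method it results from the coupled mesh-free/BEM discretization), and the dimensions and 20 dB noise level are illustrative choices mirroring the experiments later in the paper:

```python
import numpy as np

# Sketch of the forward model b = H v + n (equation (5)).
# H is a random placeholder for the m-by-n transfer matrix; in practice it is
# derived from the coupled mesh-free / boundary-element discretization.
rng = np.random.default_rng(0)
m, n = 370, 2000                      # 370 torso leads, ~2000 myocardial nodes (illustrative)
H = rng.standard_normal((m, n))       # hypothetical transfer matrix
v = np.zeros(n)
v[:100] = 1.0                         # a compact region of active sources

b_clean = H @ v
# Corrupt the clean measurements with Gaussian noise at 20 dB SNR.
snr_db = 20.0
noise_var = np.mean(b_clean**2) / 10**(snr_db / 10)
b = b_clean + rng.normal(0.0, np.sqrt(noise_var), size=m)
```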

B. Bayesian Inference

Bayesian inference refers to the general procedure of constructing a probability distribution for unknown parameters from given measurements, assuming both unknown parameters and known measurements to be random variables [63], [64].

Starting from the Bayes' rule of probability, the joint probability distribution of unknown parameters and given measurements, P (x, y), is represented as:

P(x, y) = P(x|y)P(y), (6)

where x is the unknown parameters of interest and y is the given measurements. P(x|y) is the conditional probability distribution of x given y. P(y) is the marginalized probability distribution of y: P(y) = ∫P(x, y)dx.

Therefore, the posterior probability of x given y is constructed as:

P(x|y) = P(x, y)/P(y) = P(y|x)P(x)/P(y). (7)

Since probability P(y) is independent of x, it is considered as a normalizing constant that can be omitted:

P(x|y) ∝ P(y|x)P(x). (8)

Different techniques can be used to analyze this posterior distribution. For example, the mode of the posterior distribution can be obtained through a maximum a posteriori (MAP) estimator, providing a point estimate that maximizes the posterior probability. Alternatively, if the full posterior distribution is of interest but not analytically tractable, it can be characterized by simulation-based methods such as Markov Chain Monte Carlo (MCMC) or approximation methods such as Variational Bayes (VB) [63], [64].

In this paper, we focus on characterizing the full posterior distribution because we are interested in the posterior distribution of p to analyze different combinations of final models as determined by the ECG data.

C. Hierarchical Multiple-Model Bayesian Inference

From a Bayesian perspective, assuming the source v and the measurement data b as random variables, the posterior probability of the unknown cardiac source P(v | b), according to the Bayes' theory, is related to the prior distribution of the sources P(v) and the likelihood distribution of measurement data given sources P(b | v):

P(v|b) ∝ P(b|v)P(v). (9)

1) Multiple-Model Prior Term

Fixed-model approaches of EP imaging consider a specific prior distribution for the source v. For example, considering Gaussian distribution for the sources imposes smoothness on the solution, while assuming Laplace prior distribution for the sources induces sparsity on the solution.

Here, we consider an Lp-norm prior for statistically independent sources v, ‖v‖p = (Σi |vi|^p)^(1/p), represented using a zero-mean generalized Gaussian distribution:

P(v|p, δ) = (w(p)^n / δ^n) exp(−c(p) Σi (|vi|/δ)^p), (10)
w(p) = p Γ(3/p)^(1/2) / (2 Γ(1/p)^(3/2)),  c(p) = [Γ(3/p)/Γ(1/p)]^(p/2),

where n denotes the number of sources and p represents the order of the Lp-norm prior. Fixing the p value to 1 or 2 converts the source prior distribution (10) to a Laplace distribution or a Gaussian distribution, respectively. To incorporate a continuous combination of multiple models, we let p be an unknown random variable (hyperparameter). To give no prior preference over the value of p, we assume it to follow a uniform distribution between 1 and 2:

P(p) = 1 for p ∈ [1, 2], and 0 otherwise. (11)

Parameter δ2 denotes the variance (inverse of precision) of the source prior, assumed to be equal for all sources. It plays a similar role to the regularization parameter in a deterministic setting, controlling the contribution of the source prior to the regularization. To avoid the challenge of finding an optimal value for δ through empirical studies, we also assume it to be an unknown hyperparameter with a uniform distribution.
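For a numerical sanity check, the log-density of the generalized Gaussian prior (10) can be evaluated directly from the definitions of w(p) and c(p). The sketch below (using SciPy's `gammaln` for the log-gamma terms; the function name is ours, not the authors') reduces exactly to the Gaussian log-density when p = 2:

```python
import numpy as np
from scipy.special import gammaln

def log_gg_prior(v, p, delta):
    """Log-density of the zero-mean generalized Gaussian prior in (10).

    p = 2 recovers a Gaussian prior (smoothness), p = 1 a Laplace prior
    (sparsity); intermediate p values interpolate between the two.
    """
    # log w(p) = log p + 0.5*log Gamma(3/p) - log 2 - 1.5*log Gamma(1/p)
    log_w = np.log(p) + 0.5 * gammaln(3.0 / p) - np.log(2.0) - 1.5 * gammaln(1.0 / p)
    # c(p) = [Gamma(3/p) / Gamma(1/p)]^(p/2), computed in log-space for stability
    c = np.exp((p / 2.0) * (gammaln(3.0 / p) - gammaln(1.0 / p)))
    n = v.size
    return n * log_w - n * np.log(delta) - c * np.sum((np.abs(v) / delta) ** p)
```

At p = 2, w(2) = 1/√(2π) and c(2) = 1/2, so the expression collapses to the log of a product of N(0, δ²) densities.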

2) Data Likelihood Term

Assuming the measurement noise in equation (5) to follow a zero-mean normal distribution and measurements b to be statistically independent (in space), the likelihood term follows a Gaussian distribution as:

P(b|v, η) = (1 / ((2π)^(m/2) η^m)) exp(−(1/(2η²)) (b − Hv)ᵀ(b − Hv)), (12)

where m denotes the number of body-surface measurements and η2 represents the noise variance, assumed to be equal for all measurements. The noise variance η2 controls the contribution of the data-fitting term to the regularization. It is also considered to be an unknown hyperparameter with a uniform distribution.

Substituting the likelihood and prior terms into equation (9), the joint posterior distribution can be rewritten as:

P(v, Θ|b) ∝ P(b|v, η)P(η)P(v|p, δ)P(p)P(δ), (13)

where Θ = {δ, p, η} denotes a vector of size 3 including the three hyperparameters.
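Combining the likelihood (12) with the priors, the unnormalized log of the joint posterior (13) can be sketched as below; the uniform hyperpriors enter only as support constraints. The function name and interface are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.special import gammaln

def log_joint_posterior(v, p, eta, delta, b, H):
    """Unnormalized log of the joint posterior (13), up to an additive constant.

    The uniform hyperpriors on p (over [1, 2]), eta, and delta contribute only
    support constraints, so outside their support the log-density is -inf.
    """
    if not (1.0 <= p <= 2.0) or eta <= 0.0 or delta <= 0.0:
        return -np.inf
    # Gaussian likelihood (12), dropping the constant involving 2*pi
    r = b - H @ v
    log_lik = -b.size * np.log(eta) - 0.5 * (r @ r) / eta**2
    # Generalized Gaussian prior (10)
    log_w = np.log(p) + 0.5 * gammaln(3.0 / p) - np.log(2.0) - 1.5 * gammaln(1.0 / p)
    c = np.exp((p / 2.0) * (gammaln(3.0 / p) - gammaln(1.0 / p)))
    log_prior = v.size * (log_w - np.log(delta)) - c * np.sum((np.abs(v) / delta) ** p)
    return log_lik + log_prior
```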

3) Sampling the Posterior using MCMC

A full Bayesian analysis of this problem is obtained by sampling the joint posterior distribution (13) using a MCMC technique called slice sampling [65]. Slice sampling generates samples of a random variable by uniformly sampling from under the curve of its density function. Unlike Gibbs sampling [66] that requires conditional distributions of unknown random variables, or Metropolis-Hastings scheme [64] that requires an accurate selection of the proposal distribution for an efficient random walk, slice sampling enables us to directly sample the joint posterior distribution with minimum tuning required.

Slice sampling for a one-dimensional random variable z with probability density function proportional to f(z) can be summarized into 4 steps according to [65]:

  • (a)

    Take an initial sample z0.

  • (b)

    Draw a real value g uniformly from interval [0, f(z0)], defining a horizontal slice S = {z : g < f(z)}.

  • (c)

Find an interval I = (L, R) around z0 that contains all or most of the slice S: an interval of width w is randomly positioned around z0, and then expanded in steps of size w until both ends are outside the slice. These two ends determine the interval I.

  • (d)

    Draw a new sample z1 uniformly from the interval I, such that it belongs to the slice S. If it does not belong to slice S, use it to shrink the interval I and repeat step (d). If it belongs to slice S, accept it as a new sample and start over from step (b) using the new sample z1.

Different approaches can be utilized to select the two ends of the interval I in step (c). Here, we use the stepping-out and shrinkage procedures, as illustrated in Fig. 1. For details of other interval-selection methods, refer to [65]. A multivariate distribution can be sampled by repeated use of univariate slice sampling to sample each variable in turn. To determine whether the samples are representative of the source distribution (convergence), several chains with different starting points are generated. Convergence is then identified when the chains can no longer be distinguished by their variances. In addition, early iterations are discarded to avoid the influence of the starting points.
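A minimal univariate slice sampler implementing steps (a)-(d) with stepping-out and shrinkage might look as follows; this is a sketch after Neal's algorithm, not the authors' code:

```python
import numpy as np

def slice_sample(f, z0, w=1.0, n_samples=1000, rng=None):
    """Univariate slice sampling with stepping-out and shrinkage (Neal, 2003).

    f is an (unnormalized) density and w the initial interval width; the
    steps (a)-(d) from the text are marked in the comments.
    """
    rng = np.random.default_rng() if rng is None else rng
    samples = np.empty(n_samples)
    z = z0                                   # (a) initial sample
    for i in range(n_samples):
        g = rng.uniform(0.0, f(z))           # (b) vertical level defining slice S
        L = z - rng.uniform(0.0, w)          # (c) randomly position an interval,
        R = L + w
        while f(L) > g:                      #     then step out until both ends
            L -= w                           #     lie outside the slice
        while f(R) > g:
            R += w
        while True:                          # (d) draw uniformly from (L, R);
            z1 = rng.uniform(L, R)
            if f(z1) > g:                    #     accept if inside the slice,
                z = z1
                break
            if z1 < z:                       #     otherwise shrink the interval
                L = z1
            else:
                R = z1
        samples[i] = z
    return samples
```

Sampling an unnormalized standard normal density f(z) = exp(−z²/2) with this routine yields draws whose mean and standard deviation are close to 0 and 1, respectively.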

Figure 1. Single variable slice sampling with stepping-out and shrinkage procedures for selecting and updating an interval: (a) assume an initial sample z0; (b) draw a vertical level, g, uniformly from [0, f(z0)] to define a horizontal slice S; (c) position an interval of width w randomly around z0, and then expand it in steps of size w until both ends are outside the slice; (d) find a new sample, z1, by selecting uniformly from the interval until a sample inside the slice is found. Samples outside the slice are used to shrink the interval.

4) Posterior Distribution of Sources

Given samples from the joint posterior distribution (13), the posterior distribution of sources P (v|b) is calculated as an integral over hyperparameters Θ:

P(v|b) = ∫ P(v, Θ|b) dΘ. (14)

In implementation, this integration (14) can be converted to a summation over sampled hyperparameters as:

P(v|b) = Σp Ση Σδ P(v, p, η, δ|b). (15)

As shown in equation (15), a combination of multiple models contributes to the final posterior distribution of cardiac sources P(v|b), where each model is weighted/modulated by the posterior probability of its occurrence (the posterior distribution of p) as determined by the data. For the purpose of evaluating the accuracy of the source reconstruction, the posterior mean is calculated from P(v|b) as E[v|b] = Σ_{j=1..k} vj P(vj|b), where k is the number of samples.
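In code, because each retained MCMC sample already occurs with frequency proportional to its posterior probability, the posterior mean reduces to a plain average over the source samples after burn-in. A minimal sketch, where the (k, n) array layout is our assumption:

```python
import numpy as np

def posterior_mean(v_samples, burn_in=3000):
    """Posterior mean of the sources from MCMC output.

    v_samples: (k, n) array with one n-dimensional source sample per row;
    the first `burn_in` rows are discarded, as in the experiments.
    """
    return v_samples[burn_in:].mean(axis=0)
```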

III. Experiments and Results

Synthetic and real-data experiments were conducted to understand model-data dependency, using the proposed multiple-model approach in comparison to fixed-model approaches. Both approaches were implemented within the same Bayesian framework described in section II: for the former, p was treated as a hyperparameter as detailed earlier; for the latter, the value of p was fixed at 1 and 2 to represent, respectively, a Laplace (sparsity) and a Gaussian (smoothness) density prior. For all experiments, slice sampling was executed with 3 chains, each with a different initial value and 20000 samples. Hyperparameter p was initialized to a uniformly selected value between 1 and 2. Similarly, the initial value for the source variance was selected from the interval [0, 3], where the bounds were defined large enough to cover all possible values of the source variance. Likewise, the initial value of the noise variance was uniformly picked from the interval [10⁻⁵, 0.1] to cover different noise levels. The first 3000 samples of each chain were dropped from the sampling results to remove the impact of initial values. From the samples that represent the posterior distribution of interest (equation (13)), we focus on: 1) analysis of the posterior distribution of p as a description of the distribution of prior models preferred by the data; and 2) evaluation of the accuracy of source estimation in comparison to fixed-model approaches using the posterior mean of cardiac sources.

A. Synthetic Experiments

To test the impact of the prior model on the accuracy of source reconstruction, we conducted three different sets of synthetic experiments on image-derived human heart-torso models. The torso surface was represented by 370 nodes [67]. The heart volumes were represented by evenly-distributed mesh-free nodes with resolution ranging from 4 mm to 7 mm. The accuracy of 3D source reconstruction was measured in terms of the correlation coefficient (CC) between the true sources and the posterior mean of the estimated sources, defined as:

CC = ((ve − v̄e) · (vs − v̄s)) / (‖ve − v̄e‖ ‖vs − v̄s‖), (16)

where vs and ve are the true sources and the posterior mean of the estimated sources, respectively, ‖·‖ represents the magnitude (Euclidean norm) of a vector, and v̄ denotes the mean of a vector.
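Equation (16) is a normalized inner product of the mean-centered source vectors, i.e. a Pearson-style correlation; a small sketch for reference (function name ours):

```python
import numpy as np

def correlation_coefficient(v_e, v_s):
    """CC of equation (16): cosine similarity of the centered vectors,
    i.e. the Pearson correlation between estimated and true sources."""
    a = v_e - v_e.mean()
    b = v_s - v_s.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```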

In the following experiments, to distinguish from the notation of p in the Lp-norm, we use the notation P-value for significance tests in all statistical studies.

1) Imaging Cardiac Sources with Various Sizes

In this set of experiments, we investigated the effect of prior models in reconstructing cardiac sources with different sizes, ranging from 5% to 40% of the ventricles. In total, 40 different settings were considered, where the region of active sources was placed at different locations in the ventricular myocardium. Cardiac sources within the active region were assigned value 1, while the rest were assigned value 0. It is noteworthy that these settings are not physiologically realistic. They were meant to mimic the change between sparse and diffuse structures of cardiac sources in a simplified manner, with a focus on revealing how these changes may affect the need for different prior models. For each setting, 370-lead ECG data were simulated (located at the 370 nodes of the torso surface) and corrupted with 20 dB Gaussian noise as input to the source reconstruction.

Fig. 2 presents three examples, where the active source region was centered at the apex of the left ventricle (LV) and covered 6%, 17%, and 40% of the LV (Fig. 2a, 2b, and 2c, respectively). The use of a sparse Laplace prior was able to detect the focal source (CC= 0.69), but its performance decreased when the region of active sources expanded (CC= 0.60 and 0.40). In contrast, using a smooth Gaussian prior provided an overly-smooth solution for the focal source region (CC= 0.52), but it obtained better estimation of larger source regions (CC= 0.65 and 0.68). By using a multiple-model prior, the posterior mean was similar in accuracy to that obtained with a Laplace prior for the focal region (CC= 0.67, Fig. 2a), and it outperformed both fixed-model approaches in larger source regions with CC= 0.70 and 0.74 (Fig. 2b and 2c). Interestingly, distributions of hyperparameter p for these 3 cases (Fig. 2, right) suggested that higher weights were assigned by data to the value of p closer to 1 for the focal source. This weight distribution for p shifted to higher values closer to 2 when the source region expanded.

Figure 2. Three examples of the posterior mean of reconstructed 3D sources (the spatial gradient of action potential) using a multiple-model prior versus Laplace and Gaussian priors. Active sources were centered at the apex of the LV and covered 6%, 17%, and 40% of the myocardium in a, b, and c, respectively. Posterior distributions of hyperparameter p for the three cases are also shown in the rightmost panel, showing a mode shifted to larger values as the extent of cardiac sources increased from a to c.

As illustrated in Fig. 3, in these 40 different settings of source distributions, the use of multiple-model prior delivered consistent results for sources with various sizes, while the use of fixed-model priors tended to capture only specific types of source distributions. The accuracy of the multiple-model approach (CC= 0.52±0.08) was significantly higher than that of the Laplace-based (CC= 0.50 ± 0.11, P < 0.005, paired-t test) and Gaussian-based (CC= 0.50 ± 0.11, P < 0.005, paired-t test) approaches.

Figure 3. Comparison of using a multiple-model prior versus Laplace or Gaussian priors in reconstructing active source regions with different sizes. Source size is defined in terms of percentage of the ventricular myocardium.

This set of experiments demonstrated that different models were required to successfully estimate sources with different sizes. While fixed-model approaches were better suited to specific source sizes that satisfy their prior assumptions, automatically changing the distribution of p (model combinations) according to the data was able to adapt to source regions with different sizes.

2) Imaging Cardiac Sources along the Infarct Border

In the second set of synthetic experiments, we examined the impact of prior models on estimating the source distribution along the infarct border during the ECG ST segment. During the ST segment of an ECG cycle, there is minimal current flow in a healthy heart. In an infarcted heart, in comparison, only the viable myocardium would exhibit coherent high action potential, while the necrotic tissue in the scar core exhibits low potential. These two regions are separated by localized active cardiac sources, as illustrated in Fig. 4. In total, we considered 116 cases of infarcts at different locations and with different border sizes ranging from 5% to 55% of the LV. In these experiments, action potential at the core of infarct regions was set to be 0, while it was set to be 1 at healthy regions. The true cardiac source distribution along the infarct border was then calculated as the spatial gradient of action potential. Finally, 370 body-surface ECG measurements (located at the 370 nodes of the torso surface) associated with each setting were simulated and corrupted with 20 dB (SNR) Gaussian noise as the input for the source reconstruction.

Figure 4.

Illustration of the spatial structure of action potential (left) and its spatial gradient (active cardiac sources, right) in an infarcted heart during the ECG ST segment.

Fig. 5 presents two examples of the posterior mean of the estimated 3D source distribution along the infarct border. In the top row, the infarct core was located at the middle to apical region of the anterior wall of the LV; in the bottom row, the infarct was centered at the apex of the LV (Fig. 5a). As shown, the use of a Laplace prior detected a few sparse patches of active sources without providing information about the center or structure of the infarct region (Fig. 5b, CC= 0.25 top row, and CC= 0.37 bottom row). The use of a Gaussian prior better identified the infarct region, though its estimations were smeared such that the estimated sources extended into the infarct center (Fig. 5c, CC= 0.43 top row, and CC= 0.44 bottom row). The source distributions along the infarct were significantly better estimated using the multiple-model prior, which outlined both the location and shape of the infarct (Fig. 5d, CC= 0.67 top row, and CC= 0.63 bottom row). Furthermore, as shown in the estimated posterior distribution of p in Fig. 5d, the optimal ranges of p differed between cases, indicating that different source structures preferred different sets of prior models. In the example in the top row, the p distribution was centered around 1.2 and stretched equally to both sides, implying a region of small to medium size (37% of the LV). In the example in the bottom row, the value of p was mainly distributed between 1 and 1.2, indicating a relatively small source region (25% of the LV).

Figure 5.

Two examples of the posterior mean estimate of the source distribution (the spatial gradient of action potential) along the infarct border during the ECG ST segment, where the infarct was centered at the middle to apical region at the anterior wall of the LV (top row) and at the apex of the LV (bottom row). The posterior distribution of p obtained by our method is also shown at the rightmost panel.

Fig. 6 summarizes the accuracy of the multiple-model versus fixed-model approaches in this set of experiments. While the Laplace and Gaussian approaches were each better suited to specific types of source distributions, the multiple-model approach obtained consistent results for all sources. Its overall accuracy (CC= 0.55 ± 0.09) was significantly better than that of the Laplace and Gaussian approaches (CC= 0.45 ± 0.12 and CC= 0.46 ± 0.08, respectively) based on paired Student's t-tests (P < 0.005).

Figure 6.

Comparison of using a multiple-model prior versus Laplace or Gaussian priors in reconstructing active source distribution along the infarct border. Source size represents the size of active sources along the infarct border and is defined in terms of percentage of the left ventricle.

These experiments illustrated the advantage of using multiple-model prior in pathological hearts. Since the infarct border size/structure was not known a priori, fixed-model approaches ran the risk of imposing constraints that were not consistent with the underlying source structures. In contrast, multiple-model approach automatically adapted to the source structure through the weights estimated for the models (p distribution).
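The accuracy comparisons reported throughout these experiments (per-case CC values compared with a paired t-test) can be sketched as follows; the CC arrays below are illustrative placeholders, not the paper's actual per-case results:

```python
import numpy as np
from scipy import stats

def corr_coeff(v_est, v_true):
    """Pearson correlation coefficient (CC) between estimated and true source maps."""
    return np.corrcoef(v_est.ravel(), v_true.ravel())[0, 1]

# Paired t-test over per-case CC values: the same cases are solved with both
# approaches, so a paired (rather than independent-sample) test is appropriate.
cc_multi = np.array([0.55, 0.60, 0.52, 0.58, 0.54, 0.57, 0.53, 0.56])
cc_fixed = np.array([0.45, 0.50, 0.41, 0.49, 0.44, 0.47, 0.42, 0.46])
t_stat, p_val = stats.ttest_rel(cc_multi, cc_fixed)
```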

3) Imaging Excitation/Repolarization Wavefront

In this set of experiments, we considered source structures generated by simulation of Aliev-Panfilov model [68] describing spatiotemporal propagation of action potentials in the ventricles. Four different heart geometries were used in these experiments. These heart models were coupled with identical 370-nodes torso. For each heart model, 5 different random pacing locations were considered. For each pacing location, action potential propagation was simulated and the spatial gradient of action potential was calculated to obtain true cardiac source distribution representing excitation/repolarization wavefront. The corresponding 370-lead body-surface measurements (located at the 370 nodes of the torso surface) were simulated and corrupted with 20 dB noise. The performance of multiple-versus fixed-model approaches was evaluated in reconstructing the propagation of cardiac sources at 20 different time instants during a cardiac cycle.
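As a minimal illustration of this kind of simulation, the sketch below integrates the dimensionless Aliev-Panfilov reaction-diffusion equations on a 1-D cable with a stimulus at one end. The parameter values are typical literature choices and the 1-D geometry is a toy stand-in; the paper's 3-D ventricular simulations are not reproduced here:

```python
import numpy as np

# Dimensionless Aliev-Panfilov parameters (illustrative literature values).
k, a = 8.0, 0.15
eps0, mu1, mu2 = 0.002, 0.2, 0.3
D, dx, dt = 0.2, 1.0, 0.05

n = 100
u = np.zeros(n)          # normalized action potential
z = np.zeros(n)          # recovery variable
u[:5] = 1.0              # "pacing" stimulus at one end of the cable
u_max = u.copy()

for _ in range(3000):    # explicit Euler integration to T = 150 time units
    lap = np.zeros(n)
    lap[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2
    lap[0] = (u[1] - u[0]) / dx**2            # no-flux boundaries
    lap[-1] = (u[-2] - u[-1]) / dx**2
    du = k * u * (u - a) * (1 - u) - u * z + D * lap
    dz = (eps0 + mu1 * z / (u + mu2)) * (-z - k * u * (u - a - 1))
    u, z = u + dt * du, z + dt * dz
    u_max = np.maximum(u_max, u)

# Cells reached by the excitation wavefront (beyond the 5 stimulated cells).
excited = int(np.sum(u_max > 0.5))
```

The spatial gradient of `u` at each time step would then play the role of the true cardiac source in the text above.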

Fig. 7 presents an example of excitation wavefront propagation at 4 time instants during the depolarization phase of an ECG cycle. In this example, the pacing location was at the base of the right ventricle (Fig. 7a, top row). The use of a Laplace prior produced relatively sparse solutions (Fig. 7b) that accurately detected the source distribution at the very beginning of the pacing (t1, CC= 0.73). Its performance, however, decreased as the source region became larger, such that only part of the active region was identified (t2: CC= 0.42, t3: CC= 0.37, and t4: CC= 0.26). Its solution also involved false source detections at two time instants (t2 and t4). Solutions with a Gaussian prior (Fig. 7c) were more consistent with the original source propagation, though they included a false-positive source region at t4 and a relatively diffuse over-estimation of the focal source at t1 (t1: CC= 0.69, t2: CC= 0.47, t3: CC= 0.41, and t4: CC= 0.35). The use of the multiple-model prior obtained source estimations with the highest CC throughout the different time instants and with fewer false positives (Fig. 7d, t1: CC= 0.71, t2: CC= 0.59, t3: CC= 0.53, and t4: CC= 0.64).

Figure 7.

An example of imaging excitation wavefront propagation in cardiac pacing at 4 time instants during the depolarization phase of an ECG cycle. The pacing point was located at the base of the right ventricle. Active source region represents the spatial gradient of action potential.

To examine the temporal stability of the source reconstruction in more detail, Fig. 8 provides an example of the CC obtained on one case by the multiple-model versus fixed-model approaches. The CC is shown at 5 consecutive time instants for each of the depolarization and repolarization phases. The lower standard deviation (STD) of the multiple-model approach (±0.03) showed that it performed more consistently over time than the fixed-model approaches (Laplace prior STD: ±0.08, Gaussian prior STD: ±0.06). Fig. 9 summarizes the performance of the three approaches in three stages of a cardiac cycle. In the early excitation phase, where sources were sparse, the performance of the multiple-model approach was comparable to that of the Laplace approach. In the other phases of an ECG cycle, the multiple-model approach outperformed both fixed-model approaches. Statistical tests confirmed that the multiple-model approach (CC= 0.53 ± 0.05) obtained significantly higher accuracy than the Laplace-based (CC= 0.44 ± 0.07, P < 0.005, paired t-test) and Gaussian-based (CC= 0.42 ± 0.11, P < 0.005, paired t-test) approaches.

Figure 8.

An example of the temporal stability of multiple-model approach in comparison to Laplace and Gaussian approaches in one cardiac cycle.

Figure 9.

Overall comparison of multiple-model approach versus Laplace and Gaussian approaches in reconstructing excitation and repolarization wavefronts in three stages of an ECG cycle.

The posterior distribution of p was also consistent with the size of the different sources (Fig. 7d, the rightmost panel). At t1, when the source region was very focal, the p distribution favored values close to 1. When the source region expanded at the subsequent time instants, the p distribution shifted toward 2, such that the largest source region (t3) favored p values very close to 2. The fact that the posterior distribution of p varied with the spatiotemporal change of the propagating source again demonstrated the need to automatically adapt the prior models when imaging spatiotemporally varying cardiac sources. Furthermore, it is noteworthy that the posterior distribution of p exhibited a larger variance and a more complex shape as the spatial structure of the cardiac sources became more complex in the second (Fig. 5) and third (Fig. 7) sets of synthetic experiments, indicating the need to combine multiple models instead of identifying one single best model to constrain the imaging.

4) Relation and Comparison with Multiple-Constraint Regularization Method by Brooks et al. [42]

As mentioned earlier, the proposed multiple-model approach is conceptually similar to the multiple-constraint regularization method proposed by Brooks et al. for estimating the epicardial potential distribution from body-surface measurements [42]. This deterministic approach imposes a finite set of prior models with different orders of the L2-norm on the spatial properties of the equivalent cardiac sources on the epicardium. A special case of this regularization method, with zero-order and first-order Tikhonov constraints, was tested in [42] and is formulated as:

$\min_{v} \|b - Hv\|^2 + \lambda_1 \|v\|^2 + \lambda_2 \|Rv\|^2, \qquad (17)$

where R is the gradient matrix, λ1 and λ2 are regularization parameters, and v denotes the equivalent cardiac sources on the epicardium. For further details of this method, refer to [42].
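Because (17) is quadratic in v, it admits a closed-form solution through its normal equations, (HᵀH + λ1 I + λ2 RᵀR) v = Hᵀb. A minimal sketch, with a toy lead-field H, a 1-D first-difference gradient matrix R, and illustrative λ values (the paper selects them with the L-surface method, which is not reproduced here):

```python
import numpy as np

def multi_constraint_tikhonov(H, b, R, lam1, lam2):
    """Zero- plus first-order Tikhonov solution of Eq. (17):
    v = argmin ||b - Hv||^2 + lam1 ||v||^2 + lam2 ||Rv||^2,
    solved via the normal equations (H^T H + lam1 I + lam2 R^T R) v = H^T b."""
    n = H.shape[1]
    A = H.T @ H + lam1 * np.eye(n) + lam2 * (R.T @ R)
    return np.linalg.solve(A, H.T @ b)

rng = np.random.default_rng(1)
n_src, n_leads = 50, 30
H = rng.normal(size=(n_leads, n_src))                # hypothetical lead field
R = np.eye(n_src, k=1)[:-1] - np.eye(n_src)[:-1]     # first differences, (n_src-1) x n_src

# Smooth "source" bump, noisy measurements, regularized reconstruction.
v_true = np.exp(-0.5 * ((np.arange(n_src) - 25) / 4.0) ** 2)
b = H @ v_true + 0.05 * rng.normal(size=n_leads)
v_hat = multi_constraint_tikhonov(H, b, R, lam1=0.1, lam2=1.0)
```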

Here, we implemented and compared this method with the presented multiple-model Bayesian approach for imaging volumetric cardiac sources. Regularization parameters were determined using the L-surface method, as explained in [42], as an extension of the L-curve method. We considered 17 different settings, where active sources were located at different segments of the LV, defined according to the standard AHA 17-segment model of the LV [69]. The active source segment was set to be 1, while the rest was set to be 0. For each setting, the 370-lead body-surface measurements were simulated and corrupted with 20 dB Gaussian noise for source reconstruction.

Fig. 10 shows the results obtained by the presented multiple-model Bayesian approach compared to the multiple-constraint regularization of Brooks et al. While both approaches showed a similar performance trend across these 17 settings, the presented multiple-model Bayes provided better overall source estimation (CC= 0.52 ± 0.14) than the Brooks et al. multiple-constraint regularization (CC= 0.49 ± 0.13, P = 0.005, paired t-test). A larger set of experiments is required for a more conclusive comparison. It must also be noted that the Brooks et al. method originally aimed to estimate epicardial potential distributions, whereas its implementation in this work targeted the estimation of volumetric sources in the myocardium.

Figure 10.

Comparison of the presented multiple-model Bayesian approach with Brooks et al. multiple-constraint regularization [42] in imaging volumetric sources located at different segments of the LV, defined according to the standard 17-segment model of LV.

5) Posterior Mean versus MAP Estimators

In this paper, the accuracy of the cardiac source estimation was evaluated on the posterior mean of the source distribution. This choice was made based on the assumption that the distributions of measurements and sources followed Gaussian or generalized Gaussian distributions. As a result, the posterior distribution of the sources was expected to follow a generalized Gaussian distribution, which is unimodal and justifies the posterior mean as a valid point estimator for the sources. This unimodal distribution also indicates that the MAP and posterior-mean estimators would provide close results. To verify this, Fig. 11b lists the MAP versus posterior-mean estimations of the source distribution in the second set of synthetic experiments. Examples of two source distributions obtained using slice sampling are also shown in Fig. 11a, where their unimodal distribution explains the comparable results between the MAP and posterior-mean estimators. The slightly lower accuracy of the MAP estimator can be explained by its sensitivity to sampling errors, whereas the posterior mean was more robust to these errors by averaging over a set of solutions.
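Given MCMC draws, both point estimators can be computed directly; a sketch on a toy unimodal (Gaussian) posterior, where the two estimators should nearly coincide, as argued above:

```python
import numpy as np

def point_estimates(samples, log_post):
    """Posterior-mean and sample-based MAP estimators from MCMC draws.
    `samples` is (n_draws, n_dims); `log_post` evaluates the unnormalized
    log-posterior of one draw."""
    post_mean = samples.mean(axis=0)
    lp = np.array([log_post(s) for s in samples])
    map_est = samples[np.argmax(lp)]        # draw with the highest posterior density
    return post_mean, map_est

# Toy unimodal posterior: a 2-D Gaussian centered at mu.
rng = np.random.default_rng(2)
mu = np.array([1.0, -2.0])
samples = rng.normal(mu, 0.3, size=(5000, 2))
log_post = lambda s: -0.5 * np.sum(((s - mu) / 0.3) ** 2)
pm, mp = point_estimates(samples, log_post)
```

For a multimodal posterior the two estimators could disagree sharply, which is why the unimodality argument above matters.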

Figure 11.

(a) Examples of posterior distribution of cardiac sources obtained using slice sampling. (b) Comparison of MAP versus posterior mean estimators for multiple-model and fixed-model approaches for the second set of synthetic experiments.

6) Impact of Hyperparameter Initialization

In the presented Bayesian approach, the source variance δ² and the noise variance η² were treated as unknown hyperparameters and were estimated in addition to the source distribution. These parameters affect the accuracy of the Bayesian inference by controlling the relative contributions of the prior and data-fitting terms. To verify the robustness of our approach to the initial values of these parameters, we conducted synthetic experiments with different measurement noise levels (SNR ranging from 50 to 10 dB). For all experiments, δ and η were initialized with identical values. As shown in Fig. 12, the CC values changed only slightly when different measurement noises were present in the input ECG data.

Figure 12.

Correlation coefficient obtained by the presented multiple-model approach in the presence of white Gaussian noises with different SNR levels.

B. Real-Data Experiments

Real-data experiments were conducted on two different groups of post-infarction patients. As explained earlier, in an infarcted heart, cardiac source activity is expected to be seen along the border of the infarct core during the ECG ST segment.

1) Imaging Infarct Region using MRI as Reference

The first four patients underwent cardiac MRI with 3D infarct enhancement. The datasets included MR images of the heart-torso geometry and 120-lead body-surface potential (BSP) recordings for each subject, made available to this study by the 2007 PhysioNet / Computers in Cardiology Challenge [70]. The MRI data of each subject's heart included 10 slices from base to apex with 8 mm inter-slice spacing and 1.33 mm pixel spacing. The torso surface was described by triangular elements with 370 vertices. Patient-specific heart-torso models were constructed from the MR images of individual patients. BSP measurements were recorded according to Dalhousie University standards [67] at 120 known anatomical sites and interpolated to the 370 nodes of the Dalhousie torso model [71]; each BSP recording consisted of a single averaged PQRST complex sampled at 2 kHz. Gold standards of the infarct were provided in terms of the location and size of the infarct using the 17-segment division of the LV according to American Heart Association (AHA) standards (Fig. 13, Ref) [69].

Figure 13.

Estimation of cardiac source activity (non-blue color) along the infarct region border during the ECG ST segment for 4 human subjects. Active source region represents the spatial gradient of action potential. The infarct center is outlined with black contour.

As shown in Fig. 13a, the center of the infarct region (black contour) in case 1 was located at the mid-anteroseptal region of the heart. The sparse solution obtained with a Laplace prior only detected part of the infarct region border. The overly diffuse estimation obtained with a Gaussian prior did not reveal any information about the location or structure of the infarct region. In comparison, the multiple-model approach was able to outline the distribution of sources along the infarct region (non-blue regions). Similar performance was observed in case 2, where the infarct region extended from the basal inferior and inferoseptal to the mid-inferior and inferoseptal regions of the LV. In case 3, the infarct region was centered at the mid-inferior region of the LV and extended to the septal and inferior LV. The multiple-model approach outperformed those using Laplace and Gaussian priors, as both of the latter included a false-positive infarct region at the right ventricle (RV). Case 4 had two infarct regions, centered at the basal anterior and apical inferior LV. Estimation with a Laplace prior behaved similarly to the other cases. Estimation with a Gaussian prior could only detect one of the infarct regions, which was falsely extended to the RV. The multiple-model approach precisely detected the infarct region located at the apical inferior LV, although it only showed some source activity around the basal anterior LV without providing further information about the infarct structure/size. Interestingly, as shown in Fig. 13b, in cases 1 and 2, which had relatively compact infarct regions (30%), the posterior density of p exhibited a relatively compact distribution centered around 1.2. For case 3, which had a large infarct region (52%), the distribution of p shifted toward higher values (~ 1.5) with a larger tail on one side. In case 4, although the infarct region was small (14%), the source distribution was expected to be more diffuse because of the presence of two separate infarct regions. Accordingly, the posterior density of p was more evenly distributed from 1.4 to 1.6 with a larger variance. These results further demonstrated the capability and necessity of a multiple-model approach to automatically adjust the contributions of different models to the source structure.

For quantitative evaluation, we thresholded the solution to obtain the region bordering the infarct core. The threshold value was determined by repeated application of k-means clustering, where k was first set to a large value and then decreased to find an appropriate number of clusters. The difference between cluster values determined the threshold. This approach is similar to a region-growing technique: a large k in k-means clustering results in a large number of small regions, and as k decreases, the regions grow until an appropriate region size is reached. We then quantified the infarct region in terms of location (using the 17-segment model of the LV) and size (ratio of the infarct region volume to the total LV volume).
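This decreasing-k thresholding scheme can be sketched as follows. The cluster-separation criterion (`min_gap`) is an illustrative stand-in for the paper's stopping rule, which is not fully specified in the text, and the data are toy source magnitudes:

```python
import numpy as np

def kmeans_1d(x, k, iters=50, seed=0):
    """Plain 1-D k-means (Lloyd's algorithm) on source magnitudes."""
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(x, size=k, replace=False))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
        centers = np.sort(centers)
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    return centers, labels

def border_threshold(v, k_start=6, k_min=2, min_gap=0.2):
    """Decrease k until all cluster centers are well separated, then place
    the threshold between the two highest centers."""
    centers = None
    for k in range(k_start, k_min - 1, -1):
        centers, _ = kmeans_1d(v, k)
        if np.all(np.diff(centers) > min_gap * (v.max() - v.min())):
            break
    return 0.5 * (centers[-2] + centers[-1])

# Toy source map: 80 near-zero background values, 20 active values near 1.
rng = np.random.default_rng(3)
v = np.concatenate([rng.normal(0.05, 0.01, 80), rng.normal(0.95, 0.01, 20)])
thr = border_threshold(v)
active = v > thr
```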

Table 1 compares the accuracy of infarct quantification for cases 3 and 4 with the results obtained on the same data by five other methods. In brief, Xu et al. solved the problem by estimating the volumetric action potential using a total-variation prior [31]. Wang et al. obtained the volumetric action potential using a physiological-model-based maximum a posteriori estimation [72]. Dawoud et al. reconstructed epicardial potentials from body-surface measurements using Tikhonov regularization, followed by a morphology classification to calculate infarct size and location [71]. Farina et al. deterministically optimized the center and radius of a spherical infarct model inside a detailed 3D cardiac excitation model [73]. Mneimneh et al. directly analyzed the ECG without solving any inverse EP imaging problem [74]. As shown, in both cases, the results obtained by the multiple-model approach were comparable to those of the volumetric action potential imaging methods [72], [31] with fewer false positives, and outperformed the other three approaches [71], [73], [74].

Table 1.

Comparison of infarct quantification using multiple-model approach versus other existing methods

Case 3 — reference: size 52%, center 10/11, segments 3, 4, 5, 9, 10, 11, 12, 15, 16
  Our method: size 45%, center 10, segments 3, 4, 9, 10, 11, 16
  Xu [31]: size N/A, center 10/15, segments 3, 8, 9, 10, 11, 12, 14, 15, 17
  Wang [72]: size 47%, center 10, segments 2, 3, 4, 5, 8, 11, 12, 16, 17
  Mneimneh [74]: size 27%, center 10/11, segments N/A
  Dawoud [71]: size 35%, center 4, segments 3, 4, 5, 10, 11
  Farina [73]: size 10%, center 12, segments N/A

Case 4 — reference: size 14%, center 15, segments 1, 9, 10, 11, 15, 17
  Our method: size 12%, center 10/15, segments 9, 10, 11, 15, 17
  Xu [31]: size N/A, center 15, segments 1, 7, 9, 10, 11, 15, 17
  Wang [72]: size 11%, center 9, segments 1, 4, 6, 7, 9, 14
  Mneimneh [74]: size 12%, center N/A, segments N/A
  Dawoud [71]: size 40%, center 4, segments 3, 4, 5, 6, 9, 10, 11
  Farina [73]: size 0.2%, center 11, segments N/A

2) Imaging Infarct Region using Invasive Voltage Mapping as Reference

The other two patients in this study had ventricular tachycardia due to previous infarction and underwent catheter voltage mapping of the scar substrates. The datasets included CT images of the heart-torso geometry as well as 120-lead BSP measurements for each subject, acquired at the Cardiac Electrophysiology Laboratory of the QEII Health Sciences Center, Halifax, Canada [49]. Axial CT scans of these two cases included slices with 0.8–3 mm inter-slice spacing and 0.86 mm pixel spacing. The torso surface for these two cases was described by triangular elements with 120 vertices. Patient-specific heart-torso models were constructed from the CT scans of individual patients. BSP measurements were recorded according to Dalhousie University standards [67] at 120 known anatomical sites; each BSP recording consisted of a single averaged PQRST complex sampled at 2 kHz. For these two patients, the reference infarct data were obtained through invasive bipolar voltage recordings collected on the epicardium of the heart using the CARTO system [49]. The infarct core was identified as a region with peak-to-peak bipolar voltage < 0.5 mV, and the scar border was identified as the region with voltage between 0.5 and 1.5 mV. CARTO data were registered to CT first through a rigid alignment based on visual inspection using Amira 4.1 software (Mercury Computer Systems, Chelmsford, MA), followed by the coherent point drift method for non-rigid registration [5], [75]. The infarct region was identified after thresholding as described earlier, and its quantitative accuracy was evaluated in terms of the source overlap (SO) with the infarct region delineated from the CARTO voltage map. SO was defined as the ratio of the intersection between the estimated and reference source regions over their union. Note that SO is the Jaccard index, which relates to the Dice coefficient D commonly used in segmentation as SO = D / (2 − D).
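The SO metric defined above can be sketched as follows (the boolean masks are toy examples, not the paper's data):

```python
import numpy as np

def source_overlap(est, ref):
    """SO: intersection over union (Jaccard index) of estimated and
    reference source regions, given as boolean masks."""
    inter = np.logical_and(est, ref).sum()
    union = np.logical_or(est, ref).sum()
    return inter / union

def dice_coefficient(est, ref):
    """Dice coefficient, for comparison with SO."""
    inter = np.logical_and(est, ref).sum()
    return 2.0 * inter / (est.sum() + ref.sum())

# Toy masks: one voxel overlaps out of three voxels in the union.
est = np.array([True, True, False, False])
ref = np.array([False, True, True, False])
so = source_overlap(est, ref)        # 1/3
d = dice_coefficient(est, ref)       # 1/2
```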

As shown in Fig. 14, the use of a Laplace prior could partially identify source activity along the infarct border without providing detailed information about the structure of the infarct core (case 5: SO= 0.34, case 6: SO= 0.31). The use of a Gaussian prior gave a better estimation of the source distribution along the infarct border in case 5 (SO= 0.48), while its estimation in case 6 was smeared and extended to the infarct center (SO= 0.36). In comparison, the multiple-model approach showed improved performance in delineating the infarct core in both cases (case 5: SO= 0.51, case 6: SO= 0.53). Specifically, in case 6, the multiple-model approach not only identified the infarct region but also detected the source activity at the heterogeneous region close to the apex. Similar to previous observations, in these real-data experiments with complex infarct structures, the posterior density of p exhibited a widespread and complex distribution with larger variances.

Figure 14.

(a) Original CARTO voltage map (left) and its spatial gradient (non-blue region) projected on CT-derived heart models (right). (b–d) Estimation of cardiac source activity (non-blue color) along the border of the infarct region. Active source region represents the spatial gradient of action potential. The infarct center is outlined with black contour.

IV. Discussion

1) Deterministic Formulation of the Proposed Method

The posterior distribution (13) being solved in this paper can be formulated in a deterministic form by taking the negative logarithm of both sides of (13):

$-\log P(v, \Theta \mid b) \propto -\log P(b \mid v, \eta) - \log P(v \mid p, \delta) + c, \qquad (18)$

where the constant c results from taking the logarithm of the uniform distribution of the hyperparameters. Dropping the constant c and expanding the logarithms of the likelihood and the source prior terms converts (18) into:

$-\log P(v, \Theta \mid b) \propto \|b - Hv\|_2^2 + \lambda \|v\|_p^p, \qquad (19)$

where the regularization parameter λ is determined by the ratio between the noise variance η² and the source variance δ². Using the posterior mean as a point estimator for the cardiac sources is then equivalent to taking the expectation of (19).

Solving (18) with a Laplace prior is equivalent to setting p = 1 in functional (19), i.e., an L1-norm on the sources v; solving (18) with a Gaussian prior is equivalent to setting p = 2, i.e., an L2-norm on v. Note that when (19) is solved in a Bayesian framework, the regularization parameter λ is automatically estimated along with the source distribution. It is noteworthy that the posterior distribution (13, 18, 19), rather than a MAP estimate, is sought in this work because of our interest in analyzing the full posterior distribution of the hyperparameter p, with the intention of examining the model combination as determined by the ECG data.
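The deterministic functional (19) can be written down directly; a minimal sketch, where `lam` stands in for the variance ratio η²/δ² and the matrices are toy values:

```python
import numpy as np

def neg_log_posterior(v, b, H, lam, p):
    """Functional (19): squared data-fit plus an Lp-norm prior term.
    p = 1 recovers the Laplace (L1) prior, p = 2 the Gaussian (L2) prior."""
    return np.sum((b - H @ v) ** 2) + lam * np.sum(np.abs(v) ** p)

# Toy 2-D example: identical data-fit term, different Lp penalties.
H = np.eye(2)
b = np.array([1.0, 0.0])
v = np.array([0.5, 0.0])
j1 = neg_log_posterior(v, b, H, lam=1.0, p=1)   # 0.25 + 0.5  = 0.75
j2 = neg_log_posterior(v, b, H, lam=1.0, p=2)   # 0.25 + 0.25 = 0.5
```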

2) Computational Cost

The proposed approach obtains the full posterior distribution of sources and hyperparameters by relying on the MCMC slice-sampling technique to draw samples from a high-dimensional solution space (on the order of 1000 dimensions). It took an average of 1 minute to converge on an iMac with a 2.9 GHz quad-core Intel Core i5 and 16 GB DDR3 SDRAM. The computational cost of the Laplace and Gaussian priors implemented in this work (by fixing p to 1 and 2, respectively) was similar, because they also relied on sampling the same high-dimensional solution space with one less hyperparameter. In comparison, regularization methods took an average of 20 seconds to obtain a point estimate (rather than the full distribution) on the same machine.
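For readers unfamiliar with the sampler, a minimal univariate slice sampler with stepping-out and shrinkage (Neal's scheme) is sketched below; the paper samples a roughly 1000-dimensional posterior, whereas this sketch illustrates the mechanics on a single coordinate with a standard-normal target:

```python
import numpy as np

def slice_sample_1d(log_f, x0, n, w=1.0, seed=0):
    """Univariate slice sampler with stepping-out and shrinkage."""
    rng = np.random.default_rng(seed)
    x, draws = x0, []
    for _ in range(n):
        log_y = log_f(x) - rng.exponential()   # auxiliary slice level under log_f(x)
        L = x - w * rng.uniform()              # randomly position an initial bracket
        R = L + w
        while log_f(L) > log_y:                # step out until both ends leave the slice
            L -= w
        while log_f(R) > log_y:
            R += w
        while True:                            # sample on [L, R], shrinking on rejects
            x_new = rng.uniform(L, R)
            if log_f(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                L = x_new
            else:
                R = x_new
        draws.append(x)
    return np.array(draws)

# Sampling a standard normal: the sample mean/variance should approach 0/1.
draws = slice_sample_1d(lambda t: -0.5 * t * t, x0=0.0, n=5000)
```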

3) Additional Remarks

Because the clinical data in this paper were collected from different centers at different times, there was a disparity in the ECG data used in this study. In terms of spatial processing, the ECG data in the first set of real-data experiments (section III.B.1) were collected from 120 leads and interpolated to 370 vertices for the inverse solution. In the second set of real-data experiments (section III.B.2), the ECG data collected from 120 leads were used directly for the inverse solution. We experimented with both types of inputs in our past work, and no substantial difference was observed. This may be because the interpolated data are linearly dependent on the original recordings and thus do not provide extra information. In terms of temporal processing, ECG beats were averaged over time as inputs for source reconstruction. Beat-to-beat variability of the ECG signals was observed in our data and is a topic of our future study.

The forward modeling in this study involves several assumptions, which in turn may impact the inverse solutions. The anisotropy of the intracellular conductivity tensor depends on the fiber structure, which can be obtained experimentally or modeled mathematically. The former is not yet readily available for in-vivo human subjects, while the latter has been commonly used in the literature [61]. In this work, the latter approach is adopted, where an approximation of the patient-specific 3D fiber structure is obtained by mapping an experimentally derived mathematical fiber model [60] to the personalized ventricular geometry of the subject. Obtaining a detailed inhomogeneous model with exact subject-specific conductivity values is another challenging problem [76]. In this work, the common assumption of a homogeneous torso model [47] and literature parameter values [55] are used. Assessing the impact of these model simplifications on the inverse solutions is a challenging yet important question that needs to be addressed in future work.

The CC measure of source reconstruction accuracy reported in this paper (on average 0.50) is lower than those reported in [77], [12], [28] (between 0.70 and 0.90). This is likely due to the use of transmural source models instead of surface-based source models. It may also have to do with the fact that the type of transmural source model being used (i.e., the spatial gradient of action potential) tends to exhibit complex structures. Additional error measures, such as the relative difference measure (RDM) and magnification error (MAG), could be used to further evaluate the presented method.

The presented work focuses on automatic adaptation of spatial prior models to source spatial properties. Therefore, comparison with temporal-constraint based approaches such as those presented in [72], [78] was not considered. In our future work, we plan to integrate the proposed spatial multiple-model approach with a temporal model of cardiac source propagation to guide the source evolution over time, following a similar hierarchical Bayes framework as presented in [79].

In this study, the infarct model was simplified to focus on the spatial gradient of action potential between viable and infarcted tissue. It did not consider heterogeneous infarct regions. In our future work, infarct border zone can be included in our simulation setting. We will also consider extending the presented work to imaging ischemic regions using previously reported cellular models of ischemia [80], [81].

In this work, we focused on source distributions with resolutions around 4–7 mm. In future work, we will study how the resolution of source distribution impacts the performance of multiple-model approach, the findings of which will enable us to optimize the computational cost of MCMC sampling by minimizing the number of sources (unknown parameters) to be estimated.

V. Conclusions

Noninvasive cardiac EP imaging involves solving an ill-posed problem, which is often regularized by imposing fixed prior models on the spatial distribution of sources. Although fixed-model techniques are able to accurately estimate source distributions with specific structures, they may not generalize well to time-varying or complex source structures that are not known a priori. In this paper, we proposed a multiple-model approach to investigate the impact of different prior models and to reduce model-data mismatch. Experimental results demonstrated that different combinations of prior models may be preferred for estimating complex source structures, and that an adaptive multiple-model approach may help reduce the mismatch between actual and assumed source structures.

Acknowledgments

This work is supported by the National Science Foundation under CAREER Award No. ACI-1350374, the National Institute of Health, Lung and Blood Institute of the National Institutes of Health under Award No. R21HL125998, and the Advance RIT through the National Science Foundation Award No. HRD-1209115.

REFERENCES

  • [1].Frangi AF, Niessen WJ, Viergever MA. Three-dimensional modeling for functional analysis of cardiac images: A review. IEEE Trans. Med. Imag. 2001;20(1):2–25. doi: 10.1109/42.906421. [DOI] [PubMed] [Google Scholar]
  • [2].Wong CL, Zhang HY, Liu HF, Shi PC. Physiome-model-based state-space framework for cardiac deformation recovery. Acad. Radiol. 2007;14:1341–1349. doi: 10.1016/j.acra.2007.07.026. [DOI] [PubMed] [Google Scholar]
  • [3].Sermesant M, Moireau P, Camara O, Sainte-Marie J, Andriantsimiavona R, Cimrman R, Hill DL, Chapelle D, Razavi R. Cardiac function estimation from mri using a heart model and data assimilation: advances and difficulties. Med. Image Anal. 2006;10:642–656. doi: 10.1016/j.media.2006.04.002. [DOI] [PubMed] [Google Scholar]
  • [4].MacLeod RS, Gardner M, Miller RM, Horacek BM. Application of an electrocardiographic inverse solution to localize ischemia during coronary angioplasty. J. Cardiovasc. Electrophysiol. 1995;68(1):2–18. doi: 10.1111/j.1540-8167.1995.tb00752.x. [DOI] [PubMed] [Google Scholar]
  • [5]. Wang L, Dawoud F, Yeung S, Shi P, Wong KCL, Liu H, Lardo AC. Transmural imaging of ventricular action potentials and post-infarction scars in swine hearts. IEEE Trans. Med. Imag. 2013 Apr;32:731–747. doi: 10.1109/TMI.2012.2236567.
  • [6]. Ramanathan C, Ghanem RN, Jia P, Ryu K, Rudy Y. Noninvasive electrocardiographic imaging for cardiac electrophysiology and arrhythmia. Nat. Med. 2004;10:422–428. doi: 10.1038/nm1011.
  • [7]. Cuculich PS, Wang Y, Lindsay BD, Faddis MN, Schuessler RB, Damiano RJ, Li L, Rudy Y. Noninvasive characterization of epicardial activation in humans with diverse atrial fibrillation patterns. Circ. 2010;122(14):1364–1372. doi: 10.1161/CIRCULATIONAHA.110.945709.
  • [8]. Fenici R, Melillo G. Magnetocardiography: ventricular arrhythmias. Eur. Heart J. 1993;14:53–60. doi: 10.1093/eurheartj/14.suppl_e.53.
  • [9]. Muller H, Godde P, Czerski K, Agrawal R, Feilcke G, Reither K, Wolf K, Oeff M. Localization of a ventricular tachycardia-focus with multichannel magnetocardiography and three-dimensional current density reconstruction. J. Med. Eng. Tech. 1999;23(3):108–115. doi: 10.1080/030919099294258.
  • [10]. Gorodnitsky IF, Rao BD. Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm. IEEE Trans. Signal Process. 1997;45:600–616.
  • [11]. Plonsey R. Bioelectric Phenomena. McGraw Hill; New York: 1969.
  • [12]. Rudy Y, Messinger-Rapport B. The inverse problem of electrocardiography: solutions in terms of epicardial potentials. Crit. Rev. Biomed. Eng. 1988;16:215–268.
  • [13]. Pullan AJ, Cheng LK, Nash MP, Bradley CP, Paterson DJ. Noninvasive electrical imaging of the heart: Theory and model development. Ann. Biomed. Eng. 2001;29:817–836. doi: 10.1114/1.1408921.
  • [14]. Erem B, Coll-Font J, Orellana RM, Stovicek P, Brooks DH. Using transmural regularization and dynamic modeling for noninvasive cardiac potential imaging of endocardial pacing with imprecise thoracic geometry. IEEE Trans. Med. Imag. 2014 Mar;33:726–738. doi: 10.1109/TMI.2013.2295220.
  • [15]. Huiskamp G, Greensite F. A new method for myocardial activation imaging. IEEE Trans. Biomed. Eng. 1997;44(6):433–446. doi: 10.1109/10.581930.
  • [16]. van Dam PM, Oostendorp TF, Linnenbank AC, van Oosterom A. Noninvasive imaging of cardiac activation and recovery. Ann. Biomed. Eng. 2009;37(9):1739–1756. doi: 10.1007/s10439-009-9747-5.
  • [17]. Serinagaoglu Y, Brooks DH, MacLeod RS. Improved performance of Bayesian solutions for inverse electrocardiography using multiple information sources. IEEE Trans. Biomed. Eng. 2006;53(10):2024–2034. doi: 10.1109/TBME.2006.881776.
  • [18]. He B, Wu D. Imaging and visualization of 3-D cardiac electric activity. IEEE Trans. Inf. Tech. Biomed. 2001;5(3):181–186. doi: 10.1109/4233.945288.
  • [19]. Nielsen BF, Lysaker M, Grottum P. Computing ischemic regions in the heart with the bidomain model: First steps towards validation. IEEE Trans. Med. Imag. 2013;32(6):1085–1096. doi: 10.1109/TMI.2013.2254123.
  • [20]. Wang D, Kirby RM, MacLeod RS, Johnson CR. Identifying myocardial ischemia by inversely computing transmembrane potentials from body-surface potential maps. Int. Conf. Bioelectromagn. (NFSI ICBEM); 2011. pp. 121–125.
  • [21]. Wang L, Zhang H, Wong K, Liu H, Shi P. Physiological-model-constrained noninvasive reconstruction of volumetric myocardial transmembrane potentials. IEEE Trans. Biomed. Eng. 2010;57(2):296–315. doi: 10.1109/TBME.2009.2024531.
  • [22]. He B, Li G, Zhang X. Noninvasive imaging of cardiac transmembrane potentials within three-dimensional myocardium by means of a realistic geometry anisotropic heart model. IEEE Trans. Biomed. Eng. 2003;50:1190–1202. doi: 10.1109/TBME.2003.817637.
  • [23]. Ohyu S, Okamoto Y, Kuriki S. Use of the ventricular propagated excitation model in the magnetocardiographic inverse problem for reconstruction of electrophysiological properties. IEEE Trans. Biomed. Eng. 2002 Jun;49:509–519. doi: 10.1109/TBME.2002.1001964.
  • [24]. Liu Z, Liu C, He B. Noninvasive reconstruction of three-dimensional ventricular activation sequence from the inverse solution of distributed equivalent current density. IEEE Trans. Med. Imag. 2006;25(10):1307–1318. doi: 10.1109/tmi.2006.882140.
  • [25]. Rudy Y, Oster H. The electrocardiographic inverse problem. Crit. Rev. Biomed. Eng. 1992;20(1–2):25–45.
  • [26]. Okamoto Y, Teramachi Y, Musha T. Limitation of the inverse problem in body surface potential mapping. IEEE Trans. Biomed. Eng. 1983;30(11):749–754. doi: 10.1109/tbme.1983.325190.
  • [27]. Messnarz B, Tilg B, Modre R, Fischer G, Hanser F. A new spatiotemporal regularization approach for reconstruction of cardiac transmembrane potential patterns. IEEE Trans. Biomed. Eng. 2004;51(2):273–281. doi: 10.1109/TBME.2003.820394.
  • [28]. Ghosh S, Rudy Y. Application of L1-norm regularization to epicardial potential solutions of the inverse electrocardiography problem. Ann. Biomed. Eng. 2009;37(5):692–699. doi: 10.1007/s10439-009-9665-6.
  • [29]. Shou G, Xia L, Liu F, Jiang M, Crozier S. On epicardial potential reconstruction using regularization schemes with the L1-norm data term. Phys. Med. Biol. 2011;56(1):57–72. doi: 10.1088/0031-9155/56/1/004.
  • [30]. Wang L, Qin J, Wong TT, Heng PA. Application of L1-norm regularization to epicardial potential reconstruction based on gradient projection. Phys. Med. Biol. 2011;56(19):6291–6310. doi: 10.1088/0031-9155/56/19/009.
  • [31]. Xu J, Dehaghani AR, Gao F, Wang L. Noninvasive transmural electrophysiological imaging based on minimization of total-variation functional. IEEE Trans. Med. Imag. 2014;33(9):1860–1874. doi: 10.1109/TMI.2014.2324900.
  • [32]. Martin RO, Pilkington TC, Morrow MN. Statistically constrained inverse electrocardiography. IEEE Trans. Biomed. Eng. 1975 Nov;22:487–492. doi: 10.1109/tbme.1975.324470.
  • [33]. Serinagaoglu Y, Brooks DH, MacLeod RS. Bayesian solutions and performance analysis in bioelectric inverse problems. IEEE Trans. Biomed. Eng. 2005 Jun;52:1009–1020. doi: 10.1109/TBME.2005.846725.
  • [34]. Serinagaoglu Y, Brooks DH, MacLeod RS. Improved performance of Bayesian solutions for inverse electrocardiography using multiple information sources. IEEE Trans. Biomed. Eng. 2006 Oct;53:2024–2034. doi: 10.1109/TBME.2006.881776.
  • [35]. Greensite F. A new treatment of the inverse problem of multivariate analysis. Inverse Problems. 2002;18(2):363–379.
  • [36]. Barr RC, Spach MS. Inverse calculation of QRS-T epicardial potentials from body surface potential distributions for normal and ectopic beats in the intact dog. Circ. Res. 1978;42:661–675. doi: 10.1161/01.res.42.5.661.
  • [37]. van Oosterom A. The use of the spatial covariance in computing pericardial potentials. IEEE Trans. Biomed. Eng. 1999;46(7):778–787. doi: 10.1109/10.771187.
  • [38]. Zhang Y, Ghodrati A, Brooks DH. Analysis of spatial-temporal regularization methods for linear inverse problems from a common statistical framework. IEEE Int. Symp. Biomed. Imag.: Nano to Macro. 2004 Apr;:772–775.
  • [39]. Xu J, Sapp JL, Rahimi A, Gao F, Wang L. Variational Bayesian electrophysiological imaging of myocardial infarction. Proc. Med. Image Comput. Computer Assisted Intervention (MICCAI '14). 2014;8674:529–537. doi: 10.1007/978-3-319-10470-6_66.
  • [40]. Clayton RH, Panfilov AV. A guide to modelling cardiac electrical activity in anatomically detailed ventricles. Prog. Biophys. Mol. Biol. 2007;96(1–3):19–43. doi: 10.1016/j.pbiomolbio.2007.07.004.
  • [41]. Rahimi A, Xu J, Wang L. Lp-norm regularization in volumetric imaging of cardiac current sources. Comput. Math. Methods Med. 2013. doi: 10.1155/2013/276478.
  • [42]. Brooks DH, Ahmad IF, Macleod RS, Maratos GM. Inverse electrocardiography by simultaneous imposition of multiple constraints. IEEE Trans. Biomed. Eng. 1999;46:3–17. doi: 10.1109/10.736746.
  • [43]. Schiller GJ, Maybeck PS. Control of a large space structure using MMAE/MMAC techniques. IEEE Trans. Aerospace Electron. Syst. 1997;33(4):1122–1131.
  • [44]. Lippiello V, Siciliano B, Villani L. Adaptive extended Kalman filtering for visual motion estimation of 3-D objects. Control Eng. Practice. 2007;15(1):123–134.
  • [45]. Ficocelli M, Janabi-Sharifi F. Adaptive filtering for pose estimation in visual servoing. IEEE Int. Conf. Intell. Robot. Syst. 2001;1:19–24.
  • [46]. Pullan AJ, Cheng LK, Buist ML. Mathematically Modelling the Electrical Activity of the Heart: From Cell to Body Surface and Back Again. World Scientific; 2005.
  • [47]. Ramanathan C, Rudy Y. Electrocardiographic imaging: II. Effect of torso inhomogeneities on noninvasive reconstruction of epicardial potentials, electrograms, and isochrones. J. Cardiovasc. Electrophysiol. 2001;12(2):241–252. doi: 10.1046/j.1540-8167.2001.00241.x.
  • [48]. Klepfer RN, Johnson CR, MacLeod RS. The effects of inhomogeneities and anisotropies on electrocardiographic fields: a three-dimensional finite element study. IEEE Conf. Eng. Med. Biol. Soc. 1995 Sep;1:233–234. doi: 10.1109/10.605427.
  • [49]. Sapp JL, Dawoud F, Clements JC, Horacek BM. Inverse solution mapping of epicardial potentials: quantitative comparison with epicardial contact mapping. Circ. Arrhythm. Electrophysiol. 2012:1001–1009. doi: 10.1161/CIRCEP.111.970160.
  • [50]. Milanic M, Jazbinsek V, MacLeod RS, Brooks DH, Hren R. Assessment of regularization techniques for electrocardiographic imaging. J. Electrocardiol. 2014;47(1):20–28. doi: 10.1016/j.jelectrocard.2013.10.004.
  • [51]. MacLeod RS, Johnson CR, Ershler PR. Construction of an inhomogeneous model of the human torso for use in computational electrocardiography. IEEE Eng. Med. Biol. Soc. 1991:688–689.
  • [52]. Huiskamp G, van Oosterom A. Tailored versus realistic geometry in the inverse problem of electrocardiography. IEEE Trans. Biomed. Eng. 1989;36(8):827–835. doi: 10.1109/10.30808.
  • [53]. van Oosterom A, Huiskamp GH. A realistic torso model for magnetocardiography. Int. J. Card. Imag. 1991;7(3–4):169–176. doi: 10.1007/BF01797749.
  • [54]. Keller DUJ, Weber FM, Seemann G, Dossel O. Ranking the influence of tissue conductivities on forward-calculated ECGs. IEEE Trans. Biomed. Eng. 2010;57(7):1568–1576. doi: 10.1109/TBME.2010.2046485.
  • [55]. Fischer G, Tilg B, Modre R, Huiskamp GJM, Fetzer J, Rucker W, Wach P. A bidomain model based BEM-FEM coupling formulation for anisotropic cardiac tissue. Ann. Biomed. Eng. 2000;28:1229–1243. doi: 10.1114/1.1318927.
  • [56]. Greensite F. The mathematical basis for imaging cardiac electrical function. Crit. Rev. Biomed. Eng. 1994;22(5–6):347–399.
  • [57]. Czapski P, Ramon C, Huntsman LL, Bardy GH, Kim Y. On the contribution of volume currents to the total magnetic field resulting from the heart excitation process: a simulation study. IEEE Trans. Biomed. Eng. 1996;43(1):95–104. doi: 10.1109/10.477705.
  • [58]. Colli Franzone P, Guerri L, Viganotti C, Macchi E, Baruffi S, Spaggiari S, Taccardi B. Potential fields generated by oblique dipole layers modeling excitation wavefronts in the anisotropic myocardium. Comparison with potential fields elicited by paced dog hearts in a volume conductor. Circ. Res. 1982;51:330–346. doi: 10.1161/01.res.51.3.330.
  • [59]. Hren R, Nenonen J, Horacek BM. Simulated epicardial potential maps during paced activation reflect myocardial fibrous structure. Ann. Biomed. Eng. 1998;26(6):1022–1035. doi: 10.1114/1.73.
  • [60]. Nash M. Mechanics and Material Properties of the Heart using an Anatomically Accurate Mathematical Model. PhD thesis, Univ. of Auckland, New Zealand; 1998.
  • [61]. Ringenberg J, Deo M, Filgueiras-Rama D, Pizarro G, Ibanez B, Peinado R, Merino JL, Berenfeld O, Devabhaktuni V. Effects of fibrosis morphology on reentrant ventricular tachycardia inducibility and simulation fidelity in patient-derived models. Clinical Medicine Insights: Cardiology. 2014;8:1–13. doi: 10.4137/CMC.S15712.
  • [62]. Wang L, Zhang H, Wong KC, Shi P. Coupled meshfree-BEM platform for electrocardiographic simulation: Modeling and validations. Medical Imaging and Augmented Reality, vol. 5128 of Lecture Notes in Computer Science. 2008. pp. 98–107.
  • [63]. Gelman A, Carlin JB, Stern HS, Rubin DB. Bayesian Data Analysis. Chapman and Hall; 2003.
  • [64]. Kaipio J, Somersalo E. Statistical and Computational Inverse Problems. Vol. 160, Applied Mathematical Sciences. Springer; 2005.
  • [65]. Neal RM. Slice sampling. Ann. Statist. 2003;31:705–767.
  • [66]. Gilks WR, Richardson S, Spiegelhalter DJ. Markov Chain Monte Carlo in Practice. Chapman and Hall; London: 1996.
  • [67]. Title LM, Iles SE, Gardner MJ, Penney CJ, Clements JC, Horacek BM. Quantitative assessment of myocardial ischemia by electrocardiographic and scintigraphic imaging. J. Electrocardiol. 2003;36(Suppl):17–26. doi: 10.1016/j.jelectrocard.2003.09.004.
  • [68]. Aliev RR, Panfilov AV. A simple two-variable model of cardiac excitation. Chaos, Solitons & Fractals. 1996;7(3):293–301.
  • [69]. Cerqueira MD, Weissman NJ, Dilsizian V, et al. Standardized myocardial segmentation and nomenclature for tomographic imaging of the heart. Circ. 2002;105:539–542. doi: 10.1161/hc0402.102975.
  • [70]. Goldberger AL, et al. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiological signals. Circ. 2000;101:215–220. doi: 10.1161/01.cir.101.23.e215.
  • [71]. Dawoud FD. Using inverse electrocardiography to image myocardial infarction. Proc. Computers in Cardiology. 2007 Sep;:177–180.
  • [72]. Wang L, Wong K, Zhang H, Liu H, Shi P. Noninvasive computational imaging of cardiac electrophysiology for 3D infarct quantitation. IEEE Trans. Biomed. Eng. 2011;(4):1033–1034. doi: 10.1109/TBME.2010.2099226.
  • [73]. Farina D. Model-based approach to the localization of infarction. Proc. Computers in Cardiology.
  • [74]. Mneimneh MA, Povinelli RJ. RPS/GMM approach toward the localization of myocardial infarction. Proc. Computers in Cardiology.
  • [75]. Myronenko A, Song X. Point set registration: Coherent point drift. IEEE Trans. Pattern Anal. Mach. Intell. 2010 Dec;32:2262–2275. doi: 10.1109/TPAMI.2010.46.
  • [76]. Relan J, Chinchapatnam P, Sermesant M, Rhode K, Ginks M, Delingette H, Rinaldi C, Razavi R, Ayache N. Coupled personalization of cardiac electrophysiology models for prediction of ischaemic ventricular tachycardia. Interface Focus. 2011;1(3):396–407. doi: 10.1098/rsfs.2010.0041.
  • [77]. Cheng LK, Bodley JM, Pullan AJ. Comparison of potential- and activation-based formulations for the inverse problem of electrocardiology. IEEE Trans. Biomed. Eng. 2003 Jan;50:11–22. doi: 10.1109/TBME.2002.807326.
  • [78]. Schulze WHW, Henar FE, Potyagaylo D, Loewe A, Stenroos M, Dossel O. Kalman filter with augmented measurement model: An ECG imaging simulation study. 2013;7945:200–207.
  • [79]. Xu J, Sapp JL, Rahimi A, Gao F, Wang L. Bayesian integration of physiological dynamics and spatial sparse priors for robust 3-D cardiac electrophysiological imaging. Med. Image Comput. Computer Assisted Intervention (MICCAI '15), accepted; 2015.
  • [80]. Wilhelms M, Dossel O, Seemann G. In silico investigation of electrically silent acute cardiac ischemia in the human ventricles. IEEE Trans. Biomed. Eng. 2011 Oct;58:2961–2964. doi: 10.1109/TBME.2011.2159381.
  • [81]. Wilhelms M, Dossel O, Seemann G. Comparing simulated electrocardiograms of different stages of acute cardiac ischemia. Functional Imaging and Modeling of the Heart. 2011;6666:11–19.
