Abstract
We develop a primal-dual algorithm that allows for one-step inversion of spectral CT transmission photon counts data to a basis map decomposition. The algorithm allows for image constraints to be enforced on the basis maps during the inversion. The derivation of the algorithm makes use of a local upper bounding quadratic approximation to generate descent steps for non-convex spectral CT data discrepancy terms, combined with a new convex-concave optimization algorithm. Convergence of the algorithm is demonstrated on simulated spectral CT data. Simulations with noise and anthropomorphic phantoms show examples of how to employ the constrained one-step algorithm for spectral CT data.
I. Introduction
The recent research activity in photon-counting detectors has motivated a resurgence in the investigation of spectral computed tomography (CT). Photon-counting detectors detect individual X-ray quanta and the electronic pulse signal generated by these quanta has a peak amplitude proportional to the photon energy [1]. Thresholding these amplitudes allows for coarse energy resolution of the X-ray photons, and the transmitted flux of X-ray photons can be measured simultaneously in a number of energy windows. Theoretically, the energy-windowed transmission measurements can be exploited to reconstruct quantitatively the X-ray attenuation map of the subject being scanned [2]. The potential benefits are reduction of beam-hardening artifacts, and improved contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), and quantitative imaging [1], [3], [4], [5]. For photon-counting detectors where the number of energy windows can be three or greater, the new advantage with respect to quantitative imaging is the ability to image contrast agents that possess a K-edge in the diagnostic X-ray energy range [6], [7], [8], [9], [10], [11].
Use of energy information in X-ray CT has been proposed almost since the conception of CT itself [12]. Dual-energy CT acquires transmission intensity at either two energy windows or for two different X-ray source spectra. Despite the extremely coarse energy-resolution, the technique is effective because for many materials only two physical processes, photo-electric effect and Compton scattering, dominate X-ray attenuation in the diagnostic energy range [2]. Within the context of dual-energy, the processing methods of energy-windowed intensity data have been classified in two broad categories: pre-reconstruction and post-reconstruction [13]. The majority of processing methods for multi-window data also fall into these categories.
In pre-reconstruction processing of the multi-energy data, the X-ray attenuation map is expressed as a sum of terms based on physical processes or basis materials [2]. The multi-energy data are converted to sinograms of the basis maps, then any image reconstruction technique can be employed to invert these sinograms. The basis maps can subsequently be combined to obtain images of other desired quantities: estimated X-ray attenuation maps at a single energy, material maps, atomic number, or electron density maps. The main advantage of pre-reconstruction processing is that beam-hardening artifacts can be avoided, because consistent sinograms of the basis maps are estimated prior to image reconstruction. Two major challenges for pre-reconstruction methods are the need to calibrate the spectral transmission model and to acquire registered projections. Photon-counting detectors ease the implementation of projection registration, because multiple energy-thresholding circuits can operate on the same detection element signal. Accounting for detection physics and spectral calibration by data pre-processing or incorporation directly in the image reconstruction algorithm remains a challenge for photon-counting detectors [1].
For post-reconstruction processing, the energy-windowed transmission data are processed by the standard negative logarithm to obtain approximate sinograms of a weighted energy-averaged attenuation map followed by standard image reconstruction. The resulting images can be combined to obtain approximate estimates of images of the same physical quantities as mentioned for the pre-reconstruction processing [14]. The advantage of post-reconstruction processing is that it is relatively simple, because it is only a small modification of how standard CT data are processed and there is no requirement of projection registration. The downside, however, is that the images corresponding to each energy window are susceptible to beam-hardening artifacts because the negative logarithm processed data will, in general, not be consistent with the projection of any object.
A third option for the processing of spectral CT data, however, does exist, which due to difficulties arising from the nonlinearity of the attenuation of polychromatic X-rays when passing through an object, is much less common than either pre- or post-reconstruction methods: direct estimation of basis maps from energy-windowed transmission data. This approach has the advantages that the spectral transmission model is treated exactly, there is no need for registered projections, and constraints on the basis maps can be incorporated together with the fitting of the spectral CT data. The main difficulty of the one-step approach is that it necessitates an iterative algorithm because the corresponding transmission data model is too complex for analytic solution, at present. Iterative image reconstruction (IIR) has been applied to spectral CT in order to address the added complexity of the data model [15], [16], [17], [18], [19], [20], [21], [22].
In this work, we develop a framework that addresses one-step image reconstruction in spectral CT allowing for non-smooth convex constraints to be applied to the basis maps. We demonstrate the algorithm with the use of total variation (TV) constraints, but the framework allows for other constraints such as non-negativity, upper bounds, and sum bounds applied to either the basis maps or to a composite image such as an estimated mono-chromatic attenuation map.
We draw upon recent developments in large-scale first-order algorithms and adapt them to incorporate the non-linear model for spectral CT to optimize the data-fidelity of the estimated image by minimizing the discrepancy between the observed and estimated data. We present an algorithm framework for constrained optimization, deriving algorithms for minimizing the data discrepancy based on least-squares fitting and on a transmission Poisson likelihood model. As previously mentioned, the framework admits many convex constraints that can be exploited to stabilize image reconstruction from spectral CT data. Section II presents the constrained optimization for one-step spectral CT image reconstruction; Sec. III presents a convex-concave primal-dual algorithm that addresses the non-convex data discrepancy term arising from the non-linear spectral CT data model; and Sec. IV demonstrates the proposed algorithm with simulated spectral CT transmission data.
II. One-step image reconstruction for spectral CT
A. Spectral CT data model
For the present work, we employ a basic spectral model for the energy-windowed transmitted X-ray intensity along a ray ℓ, where the transmitted X-ray intensity in the energy window w for ray ℓ is given by

Iwℓ = ∫E Swℓ(E) exp(−∫t∈ℓ μ(E, r⃗) dt) dE.

Here ∫t∈ℓ denotes that we are integrating along the ray ℓ while ∫E integrates over the range of energy; Swℓ(E) is the product of the X-ray beam spectrum intensity and detector sensitivity for the energy window w and transmission ray ℓ at energy E; and μ(E, r⃗) is the linear X-ray attenuation coefficient for energy E at the spatial location r⃗. Let I⁰wℓ be the transmitted intensity in the setting where no object is present between the X-ray beam and the detector (i.e. attenuation is set to zero), given by

I⁰wℓ = ∫E Swℓ(E) dE.

Then we can write

Iwℓ = I⁰wℓ ∫E swℓ(E) exp(−∫t∈ℓ μ(E, r⃗) dt) dE,    (1)

where swℓ(E) = Swℓ(E)/I⁰wℓ represents the normalized energy distribution of X-ray intensity and detector sensitivity. Image reconstruction for spectral CT aims to recover the complete energy-dependent linear attenuation map μ(E, r⃗) from intensity measurements Iwℓ in all windows w and rays ℓ comprising the X-ray projection data set.
Throughout the article we use the convention that Nx is the dimension of the discrete index x. For example, the spectral CT data set consists of Nw energy windows and Nℓ transmission rays.
This inverse problem is simplified by exploiting the fact that the energy-dependence of the X-ray attenuation coefficient can be represented efficiently by a low-dimensional expansion. For the present work, we employ the basis material expansion

μ(E, r⃗) = Σm (μm(E)/ρm) ρm fm(r⃗) = Σm μm(E) fm(r⃗),    (2)

where ρm is the density of material m; the X-ray mass attenuation coefficients μm(E)/ρm are available from the national institute of standards and technology (NIST) report by Hubbell and Seltzer [23]; and fm(r⃗) is the fractional density map of material m at location r⃗. For the present spectral CT image reconstruction problem, we aim to recover fm(r⃗), which we refer to as the material maps.
Proceeding with the spectral CT model, we discretize the material maps fm(r⃗) by use of an expansion set

fm(r⃗) ≈ Σk fkm ϕk(r⃗),

where ϕk(r⃗) are the representation functions for the material maps. For the 2D/3D image representation standard pixels/voxels are employed, that is, k indexes the pixels/voxels. With the spatial expansion set, the line integration over the material maps is represented by a matrix X with entry Xℓk measuring the length of the intersection between ray ℓ and pixel k:

∫t∈ℓ fm(r⃗) dt ≈ Σk Xℓk fkm,

where formally we can calculate

Xℓk = ∫t∈ℓ ϕk(r⃗) dt.

This integration results in the standard line-intersection method for the pixel/voxel basis.
The discretization of the integration over energy E in Eq. (1) is performed by use of a Riemann sum approximation,

Iwℓ ≈ I⁰wℓ Σi swℓi exp(−Σm μmi Σk Xℓk fkm),

where i indexes the discretized energy E, μmi = μm(Ei), and swℓi ∝ swℓ(Ei) ΔE. With the Riemann sum approximation we normalize the discrete window spectra,

Σi swℓi = 1.
Modeling photon-counting detection, we express X-ray incident and transmitted spectral fluence in terms of numbers of photons per ray ℓ (as before, the ray ℓ identifies the source detector-bin combinations) and energy window w:
ĉwℓ(f) = Nwℓ Σi swℓi exp(−Σm μmi Σk Xℓk fkm),    (3)
where Nwℓ is the incident spectral fluence and ĉwℓ is interpreted as a mean transmitted fluence. Note that in general the right hand side of Eq. (3) evaluates to a non-integer value and as a result the left hand side variable cannot be assigned to an integer as would be implied by reporting transmitted fluence in terms of numbers of photons. This inconsistency is rectified by interpreting the left hand side variable, ĉwℓ, as an expected value.
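To make the discrete model concrete, the following sketch evaluates the mean transmitted counts of Eq. (3) with numpy. The function name, array layouts, and shapes are our own illustrative choices, not notation from any released implementation.

    import numpy as np

    def transmitted_counts(f, X, mu, s, N):
        """Mean transmitted counts c_hat[w, l] per Eq. (3) -- a minimal sketch.

        f  : (Nk, Nm) material maps, pixel k by material m
        X  : (Nl, Nk) line-intersection system matrix
        mu : (Nm, Ni) linear attenuation of material m at energy E_i
        s  : (Nw, Nl, Ni) normalized window spectra (sum over i equals 1)
        N  : (Nw, Nl) incident fluence per window and ray
        """
        z = (X @ f) @ mu                  # (Nl, Ni): monochromatic sinograms Zf
        atten = np.exp(-z)                # per-energy transmission factors
        # Sum over energy, weighted by the normalized window spectra.
        return N * np.einsum('wli,li->wl', s, atten)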
B. Constrained optimization for one-step basis decomposition
For the purpose of developing spectral CT image reconstruction of the basis material maps from transmission counts data, we formulate a constrained optimization involving minimization of a non-convex data-discrepancy objective function subject to convex constraints. The optimization problem of interest takes the following form
f* = arg min over f of { D(c, ĉ(f)) + Σi δPi(f) },    (4)

where the measured counts data c are composed of individual measurements cwℓ, i.e. the measured counts in energy window w and ray ℓ; D(·,·) is a generic data discrepancy objective function; and the indicator functions δPi enforce the convex constraints f ∈ Pi, where the Pi are convex sets corresponding to the desired constraints (for instance, nonnegativity of the material maps). The indicator function is defined

δP(f) = 0 if f ∈ P, and δP(f) = ∞ if f ∉ P.    (5)
Use of constrained optimization with TV constraints is demonstrated in Sec. IV.
Data discrepancy functions
For the present work, we consider two data discrepancy functions: transmission Poisson likelihood (TPL) and least-squares (LSQ)
DTPL(c, ĉ(f)) = Σwℓ [ ĉwℓ(f) − cwℓ + cwℓ ln( cwℓ / ĉwℓ(f) ) ],    (6)

DLSQ(c, ĉ(f)) = (1/2) Σwℓ [ ln cwℓ − ln ĉwℓ(f) ]².    (7)
The TPL data discrepancy function is derived from the negative log likelihood of a stochastic model of the counts data,

cwℓ ∼ Poisson[ ĉwℓ(f) ];
that is, minimizing DTPL is equivalent to maximizing the likelihood. Note that in defining DTPL we have subtracted a term independent of f from the negative log likelihood so that DTPL is zero when c = ĉ(f), and positive otherwise. From a physics perspective, the important difference between these two data discrepancy functions is how they each weight the individual measurements; the LSQ function treats all measurements equally while the TPL function gives greater weight to higher count measurements. We point out this property to emphasize that the TPL data discrepancy can be useful even when there are data inconsistencies due to other physical factors besides the stochastic nature of the counts measurement. This alternate weighting is also achieved without introducing additional parameters as would be the case for a weighted quadratic data discrepancy. From a mathematics perspective, both data functions are convex functions of ĉwℓ(f), but they are non-convex functions of f. It is the non-convexity with respect to f that drives the main theoretical and algorithmic development of this work. Although we consider only these two data fidelities, the same methods can be applied to other functions.
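As a small illustration of the weighting difference, the two discrepancy functions of Eqs. (6) and (7) can be evaluated directly. The sketch below assumes strictly positive counts and takes the least-squares discrepancy in the log-count domain, consistent with the residual definitions in Sec. III; the function names are our own.

    import numpy as np

    def d_tpl(c, c_hat):
        # Transmission Poisson likelihood discrepancy, Eq. (6);
        # zero when c == c_hat, positive otherwise. The factor c on the
        # log term is what up-weights high-count measurements.
        return np.sum(c_hat - c + c * np.log(c / c_hat))

    def d_lsq(c, c_hat):
        # Least-squares discrepancy on log counts, Eq. (7); every
        # measurement is weighted equally.
        return 0.5 * np.sum((np.log(c) - np.log(c_hat)) ** 2)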
Convex constraints
The present algorithm framework allows for convex constraints that may improve reconstruction of the basis material maps. In Eq. (4) the constraints are coded with indicator functions, but here we express the constraints by the inequalities that define the convex set to which the material maps are restricted. When the basis materials are identical to the materials actually present in the subject, the basis maps can be highly constrained. Physically, the fractional densities represented by each material map must take on a value between zero and one, and the corresponding constraint is
0 ≤ fkm ≤ 1 for all pixels k and materials m.    (8)
Similarly, the sum of the fractional densities cannot be greater than one, leading to a constraint on the sum of material maps
Σm fkm ≤ 1 for all pixels k.    (9)
Care must be taken, however, in using these bound and sum constraints when the basis materials used for computation are not the same as the materials actually present in the scanned object. The bounds on the material maps and their sum will likely need to be loosened, and therefore they may not be as effective.
In medical imaging, where multiple soft tissues comprise the subject, it is standard to employ a spectral CT materials basis which does not include many of the tissue/density combinations present. The reason for this is that soft tissues such as muscle, fat, brain, blood, etc., all have attenuation curves similar to water, and recovering each of these soft tissues individually becomes an extremely ill-posed inverse problem. For spectral CT, it is common to employ a two-material expansion set, such as bone and water, and possibly a third material for representing contrast agent that has a K-edge in the diagnostic X-ray energy range. The displayed image can then be the basis material maps or the estimated X-ray attenuation map for a single energy E, also known as a monochromatic image
μmono,k(E) = Σm μm(E) fkm.    (10)
A non-negativity constraint can be applied to the monochromatic image,

μmono,k(E) ≥ 0 for all k,

at one or more energies. This constraint makes physical sense even when the basis materials are not the same as the materials in the subject.
Finally, we formulate ℓ1-norm constraints on the gradient magnitude images, also known as the total variation, in order to encourage gradient magnitude sparsity in either the basis material maps or the monochromatic image. In applying TV constraints to the basis material maps, we allow for different constraint values γm for each material,

‖ |∇fm| ‖1 ≤ γm, m = 1,…, Nm,

where ∇ represents the finite-differencing approximation to the gradient, and we use | · | to represent a spatial magnitude operator so that |∇fm| is the gradient magnitude image (GMI) of material map m. Similarly, a TV constraint can be formulated so that it applies to the monochromatic image at energy E,

‖ |∇ μmono(E)| ‖1 ≤ γmono,

where the constraint can be applied at one or more values of E.
The constraints involving TV of the material maps and the monochromatic image are specified in Sec. IV. Many other convex constraints can be incorporated into the presented framework such as constraints on a generalized TV computed from multiple monochromatic images [24]. The convex constraints not covered explicitly in this article can be incorporated by the methods described in Refs. [25], [26].
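The constraint quantities above are inexpensive to evaluate. The sketch below computes a monochromatic image, Eq. (10), and its TV for a 2D pixel grid; the forward-difference stencil and replicated-boundary handling are our assumptions, since the text specifies only a finite-differencing gradient.

    import numpy as np

    def mono_image(f, mu_E):
        # Eq. (10): monochromatic image, a fixed linear combination of the
        # material maps. f: (Nk, Nm) material maps, mu_E: (Nm,) attenuation
        # values of the basis materials at the chosen energy E.
        return f @ mu_E

    def total_variation(img):
        # l1 norm of the gradient-magnitude image (GMI) of a 2D array,
        # using forward differences with replicated boundary pixels.
        dx = np.diff(img, axis=0, append=img[-1:, :])
        dy = np.diff(img, axis=1, append=img[:, -1:])
        return np.sum(np.sqrt(dx ** 2 + dy ** 2))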
III. A First-Order Algorithm for Spectral CT Constrained Optimization
The proposed algorithm derives from the primal-dual algorithm of Chambolle and Pock (CP) [27], [28], [25]. Considering the general constrained optimization form in Eq. (4), the second term coding the convex constraints can be treated in the same way as shown in Refs. [26], [29]. The main algorithmic development, presented here, is the generalization and adaptation of CP's primal-dual algorithm to the minimization of the data discrepancy term, the first term of Eq. (4). We derive the data fidelity steps, focusing specifically on the derivations for DTPL and DLSQ.
Optimizing the spectral CT data fidelity
We first sketch the main developments of the algorithm for minimizing the non-convex data discrepancy terms, and then explain each step in detail. The overall design of the algorithm is comprised of two nested iteration loops. The outer iteration loop involves derivation of a convex quadratic upper bound to the local quadratic Taylor expansion about the current estimate for the material maps. The inner iteration loop takes descent steps for the current quadratic upper bound. Although the algorithm construction formally involves two nested iteration loops, in practice the number of inner loop iterations is set to one. Thus, effectively the algorithm consists only of a single iteration loop where a re-expansion of the data discrepancy term is performed at every iteration.
The local convex quadratic upper bound, used to generate descent steps for the non-convex data discrepancy terms, does not fit directly with the generic primal-dual optimization form used by CP. A convex-concave generalization to the CP primal-dual algorithm is needed. The resulting algorithm, called the mirrored convex-concave (MOCCA) algorithm, is presented in detail in Ref. [30]. For the spectral CT image reconstruction algorithm we present: the local convex quadratic upper bound, a short description of MOCCA and its application in the present context, preconditioning, and convergence checks for the spectral CT image reconstruction algorithm.
A. A local convex quadratic upper bound to the spectral CT data discrepancy terms
1) Quadratic expansion
We carry out the derivations on DLSQ and DTPL in parallel. The local quadratic expansion for each of these data discrepancy terms about the material maps f = f0 is

L(f) ≈ L(f0) + [∇L(f0)]⊤ (f − f0) + (1/2)(f − f0)⊤ [∇²L(f0)] (f − f0),    (11)

where L(f) stands for either data discrepancy viewed as a function of the material maps, LTPL(f) = DTPL(c, ĉ(f)) or LLSQ(f) = DLSQ(c, ĉ(f)).
To obtain the desired expansions, we need expressions for the gradient and Hessian of each data discrepancy. The gradient of LTPL(f) is derived explicitly in Appendix A; we do not show the details for the other derivations. The data discrepancy gradients are:
∇LTPL(f) = Z⊤ A(f)⊤ r,    (12)

∇LLSQ(f) = Z⊤ A(f)⊤ r(log),    (13)

where r and r(log) denote the residuals in terms of counts or log counts:

rwℓ = cwℓ − ĉwℓ(f),    (14)

r(log)wℓ = ln cwℓ − ln ĉwℓ(f).    (15)
Z represents the combined linear transform that accepts material maps, performs projection, and then combines the resulting sinograms to form monochromatic sinograms at energy Ei:

(Zf)iℓ = Σm μmi Σk Xℓk fkm,    (16)

and A(f) is a term that results from the gradient of the logarithm of the estimated counts log ĉ(f):

A(f)wℓ,iℓ′ = δℓℓ′ Nwℓ swℓi exp(−(Zf)iℓ) / ĉwℓ(f).    (17)
Using the same variable and linear transform definitions, the expressions for the two Hessians are

∇²LTPL(f) = Z⊤ [ A(f)⊤ diag(c) A(f) − diag(A(f)⊤ r) ] Z,    (18)

∇²LLSQ(f) = Z⊤ [ A(f)⊤ diag(1 + r(log)) A(f) − diag(A(f)⊤ r(log)) ] Z.    (19)
Substituting either Eq. (18) or (19) for the Hessian and either Eq. (12) or (13) for the gradient into the Taylor expansion in Eq. (11), yields the quadratic approximation to the data discrepancy terms of interest. This quadratic is in general non-convex because both Hessian expressions can have negative eigenvalues.
2) A local convex upper bound to L(f)
The key to deriving a local convex upper bound to the quadratic expansion of L(f) is to split the Hessian expressions into positive and negative components. Setting the negative components to zero and substituting this thresholded Hessian into the Taylor's expansion, yields a quadratic term with non-negative curvature. (As an aside, a tighter convex local quadratic upper bound would be attained by diagonalizing the Hessian and forming a positive semi-definite Hessian by keeping eigenvectors corresponding to only non-negative eigenvalues in the eigenvalue decomposition, but for realistic sized tomography configurations such an eigenvalue decomposition is impractical.) The Hessian can be split into the form
∇²L(f0) = ∇²+L(f0) − ∇²−L(f0),

where ∇²+L and ∇²−L are both positive semidefinite (see Appendix B for more details). The resulting split expressions are:
∇²+LTPL(f) = Z⊤ [ A(f)⊤ diag(c) A(f) + diag(A(f)⊤ r−) ] Z,    (20)

∇²−LTPL(f) = Z⊤ diag(A(f)⊤ r+) Z,    (21)

and

∇²+LLSQ(f) = Z⊤ [ A(f)⊤ diag((1 + r(log))+) A(f) + diag(A(f)⊤ r(log)−) ] Z,    (22)

∇²−LLSQ(f) = Z⊤ [ A(f)⊤ diag((1 + r(log))−) A(f) + diag(A(f)⊤ r(log)+) ] Z,    (23)

where

r+ = max(r, 0) and r− = max(−r, 0), so that r = r+ − r− with r+, r− ≥ 0,

and similarly

r(log)± and (1 + r(log))± denote the positive and negative parts of r(log) and 1 + r(log).
To summarize, the expression for the convex local upper bound to the quadratic approximation in Eq. (11) is

Q(c, f0; f) = L(f0) + [∇L(f0)]⊤ (f − f0) + (1/2)(f − f0)⊤ [∇²+L(f0)] (f − f0),    (24)

where ∇²+L(f0) is used instead of ∇²L(f0) in the quadratic term. Here Q depends parametrically on the counts data c, through the function L (see Eq. (11)), and the expansion center f0. The gradients of L at f0 are obtained from Eqs. (12) and (13), and the Hessian upper bounds are available from Eqs. (20) and (22). Note that the quadratic expression in Eq. (24) is not necessarily an upper bound of the data discrepancy functions, even locally, because we bound only the quadratic expansion. We employ the convex function Q(c, f0; f) combined with convex constraints to generate descent steps for the generic non-convex optimization problem specified in Eq. (4).
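The effect of discarding the negative component of a split Hessian can be checked numerically. The sketch below uses the eigenvalue-based split, which the text notes is impractical at realistic system sizes but is fine for a toy check; it confirms that replacing ∇²L by its positive component can only increase the quadratic term. All names here are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 6
    H = rng.normal(size=(n, n))
    H = 0.5 * (H + H.T)                      # symmetric, generally indefinite

    lam, U = np.linalg.eigh(H)
    H_plus = U @ np.diag(np.maximum(lam, 0)) @ U.T
    H_minus = U @ np.diag(np.maximum(-lam, 0)) @ U.T   # H = H_plus - H_minus

    d = rng.normal(size=n)
    # Dropping -H_minus can only increase the quadratic form:
    print(0.5 * d @ H_plus @ d >= 0.5 * d @ H @ d)     # always True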
B. The motivation and application of MOCCA
1) Summary of the Chambolle-Pock (CP) primal-dual framework
The generic convex optimization addressed in Ref. [27] is
min over x of { F(Kx) + G(x) },    (25)
where F and G are convex, possibly non-smooth, functions and K is a matrix multiplying the vector x. The ability to handle non-smooth convex functions is key for addressing the convex constraints of Eq. (4). In the primal-dual picture this minimization is embedded in a larger saddle point problem
min over x, max over y of { ⟨Kx, y⟩ + G(x) − F*(y) },    (26)
using the Legendre transform or convex conjugation
F*(y) = max over z of { ⟨y, z⟩ − F(z) },    (27)
and the fact that
F(z) = max over y of { ⟨y, z⟩ − F*(y) },    (28)
if F is a convex function. The CP primal-dual algorithm of interest solves Eq. (26) by iterating on the following steps
y(n+1) = arg min over y of { F*(y) + ‖y − y(n) − σK x̄(n)‖² / (2σ) },    (29)

x(n+1) = arg min over x of { G(x) + ‖x − x(n) + τK⊤ y(n+1)‖² / (2τ) },    (30)

x̄(n+1) = 2x(n+1) − x(n),    (31)
where n is the iteration index; σ > 0 and τ > 0 are the primal and dual step sizes, respectively, and these step sizes must satisfy the inequality

στ ‖K‖2² ≤ 1,

where ‖K‖2 is the largest singular value of K. Because this algorithm solves the saddle point problem, Eq. (26), one obtains the solution to the primal problem, Eq. (25), along with its Fenchel dual
max over y of { −F*(y) − G*(−K⊤y) }.    (32)
The fact that both Eqs. (25) and (32) are solved simultaneously provides a convergence check: the primal-dual gap, the difference between the objective functions of Eqs. (25) and (32), tends to zero as the iteration number increases.
In some settings, the requirement may be impractical or too conservative, and the CP algorithm can instead be implemented with diagonal matrices Σ and T in place of σ and τ [28], with the condition ‖Σ1/2KT1/2‖ < 1 and the revised steps
y(n+1) = arg min over y of { F*(y) + (1/2)‖y − y(n) − ΣK x̄(n)‖²Σ⁻¹ },    (33)

x(n+1) = arg min over x of { G(x) + (1/2)‖x − x(n) + TK⊤ y(n+1)‖²T⁻¹ },    (34)

x̄(n+1) = 2x(n+1) − x(n),    (35)
where for a positive semidefinite matrix A the norm ‖z‖A is defined as ‖z‖A = (z⊤ A z)1/2.
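For readers new to the CP algorithm, the following toy sketch applies the steps of Eqs. (29)-(31) to an unconstrained least-squares problem, min over x of (1/2)‖Kx − b‖², for which F(z) = (1/2)‖z − b‖², G = 0, and the dual proximal step has the closed form used below. The problem, its size, and all names are illustrative and unrelated to the spectral CT model.

    import numpy as np

    rng = np.random.default_rng(0)
    K = rng.normal(size=(30, 20))
    b = rng.normal(size=30)

    L = np.linalg.norm(K, 2)          # largest singular value of K
    sigma = tau = 0.99 / L            # so that sigma * tau * ||K||_2^2 < 1

    x = np.zeros(20)
    xbar = x.copy()
    y = np.zeros(30)
    for n in range(2000):
        # Dual step, Eq. (29): for F(z) = 0.5*||z - b||^2 the conjugate is
        # F*(y) = 0.5*||y||^2 + b^T y, whose proximal map is linear.
        y = (y + sigma * (K @ xbar - b)) / (1.0 + sigma)
        x_new = x - tau * (K.T @ y)   # primal step, Eq. (30), with G = 0
        xbar = 2.0 * x_new - x        # prediction step, Eq. (31)
        x = x_new

    x_ls = np.linalg.lstsq(K, b, rcond=None)[0]
    print(np.linalg.norm(x - x_ls))   # small: the iterates reach least squares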
2) The need to generalize the CP primal-dual framework
To apply the CP primal-dual algorithm to Q for fixed f0, we need to write Eq. (24) in the form of the objective function in Eq. (25). Manipulating the expression for Q and dropping all terms that are constant with respect to f, we obtain
Q(c, f0; f) = (1/2)(Kf − b)⊤ D (Kf − b) − (1/2)(Kf − b)⊤ E (Kf − b).    (36)

Here K, D, E, and b are composed of two matrix blocks each: K = (K1; K2) with K1 = A(f0)Z and K2 = Z, D = diag(D1, D2), E = diag(E1, E2), and b = (b1; b2).
The matrices D and E are nonnegative and depend on c and f0; b is a vector which also depends on c and f0; and K is a matrix that depends only on f0. Both terms of Q are functions of Kf and accordingly Q is identified with the function F in the objective function of Eq. (25)
FQ(z) = (1/2)(z − b)⊤ D (z − b) − (1/2)(z − b)⊤ E (z − b),  with  Q(c, f0; f) = FQ(Kf).    (37)
Because Q is a convex function of f, FQ is convex as a function of f. The function FQ, however, is not a convex function of z. Because D and E are non-negative matrices, FQ is a difference of convex functions of z,

FQ(z) = FQ+(z) − FQ−(z),  FQ+(z) = (1/2)(z − b)⊤ D (z − b),  FQ−(z) = (1/2)(z − b)⊤ E (z − b),

where FQ+(z) and FQ−(z) are convex functions of z. That FQ(z) is not convex implies that FQ cannot be written as the convex conjugate of its Legendre transform FQ*, and performing the maximization over y in Eq. (26) no longer yields Eq. (25).
3) Heuristic derivation of MOCCA
To generalize the CP algorithm to allow the case of interest, we consider the function F to be convex-concave,

F(z) = F+(z) − F−(z),
where F+ and F− are both convex. The heuristic strategy for MOCCA is to employ a convex approximation to F(z) in the neighborhood of a point z = z0
Fconvex(z0; z) = F+(z) − [∇F−(z0)]⊤ z    (38)
(again we drop terms that are constant with respect to z). We then execute an iteration of the CP algorithm on the convex function Fconvex(z0; z); and then modify the point of convex approximation z0 and repeat the iteration. The question then is how to choose z0, the center for the convex approximation, in light of the fact that the optimization of F in the CP algorithm happens in the dual space with F*, see Eq. (29).
A corresponding primal point to a point in the dual space can be determined by selecting the maximizer of the objective function in the definition of the Legendre transform. Taking the gradient of the objective function in Eq. (28) and setting it to zero, yields
z = ∇F*(y).    (39)
We use this relation to find the expansion point for the primal objective function that mirrors the current value of the dual variables.
Incorporating the convex approximation Fconvex(z0; z) about the mirrored expansion point z0 into the CP algorithm, yields the iteration steps for MOCCA
z0(n) = K x̄(n) + (y(n−1) − y(n))/σ,    (40)

y(n+1) = arg min over y of { F*convex(z0(n); y) + ‖y − y(n) − σK x̄(n)‖² / (2σ) },    (41)

x(n+1) = arg min over x of { G(x) + ‖x − x(n) + τK⊤ y(n+1)‖² / (2τ) },    (42)

x̄(n+1) = 2x(n+1) − x(n),    (43)
where F*convex(z0; y) is the convex conjugate of Fconvex(z0; z) with respect to the second argument; the first line obtains the mirror expansion point using Eq. (39), and the right hand side expression is found by setting to zero the gradient of the objective function in Eq. (41); the second line makes use of the convex approximation Fconvex in the form of its convex conjugate; and the remaining two lines are the same as those of the CP algorithm. For the simulations in this article, all variables are initialized to zero. Convergence of MOCCA, the algorithm specified by Eqs. (40)-(43), is investigated in an accompanying paper [30], which also develops the algorithm for a more general setting.
4) Application of MOCCA to optimization of the spectral CT data fidelity
The MOCCA algorithm handles a fixed convex-concave function F, convex function G, and linear transform K. In order to apply it to the spectral CT data fidelity, we propose to apply MOCCA to the local quadratic expansion in Eq. (36), re-expand the spectral CT data discrepancy at the current estimate of the material maps, and iterate this procedure until convergence. We refer to iterations of the core MOCCA algorithm as “inner” iterations, and the process of iteratively re-expanding the data discrepancy and applying MOCCA constitutes the “outer” iterations. Because MOCCA allows for non-smooth terms, the convex constraints described in Sec. II-B can be incorporated and the inner iterations aim at solving the intermediate problem
min over f of { Q(c, f0; f) + Σi δPi(f) }.    (44)
For the remainder of this section, for brevity, we drop the constraints and write the update steps for the spectral CT data fidelity only. The full algorithm with the convex constraints discussed in Sec. II-B can be derived using the methods described in [25], and an algorithm instance with TV constraints on the material maps is covered in Appendix C.
In applying MOCCA to Q (cwℓ, f0; f), we use the convex and concave components from FQ in Eq. (37) to form the local convex quadratic expansion needed in MOCCA, see Eq. (38),
Fconvex(z0; z) = (1/2)(z − b)⊤ D (z − b) − [E(z0 − b)]⊤ z.    (45)
The corresponding dual function
F*convex(z0; y) = (1/2)[y + E(z0 − b)]⊤ D⁻¹ [y + E(z0 − b)] + b⊤ y    (46)
is needed to derive the MOCCA dual update step at Eq. (41). We note that the material maps f enter Q(c, f0; f) only through the linear transformation Kf; comparing with the generic optimization problem in Eq. (25), we therefore have G(f) = 0 for the present case where we only consider minimization of the data discrepancy.
In using an inner/outer iteration, a basic question is how accurately does the inner problem need to be solved. It turns out that it is sufficient to employ a single inner iteration, so that effectively the proposed algorithm no longer consists of nested iteration loops. Instead, the proposed algorithm performs re-expansion at every iteration:
f0 = f(n),    (47)

Σ(n) = λ / (|K1(f0)| 1),  T(n) = 1 / (λ |K1(f0)|⊤ 1),    (48)

z0(n) = K1(f0) f̄(n) + (y(n−1) − y(n)) / Σ(n),    (49)

y(n+1) = (I + Σ(n)/D1)⁻¹ [ y(n) + Σ(n) ( K1(f0) f̄(n) − b1 − E1 (z0(n) − b1)/D1 ) ],    (50)

f(n+1) = f(n) − T(n) K1(f0)⊤ y(n+1),    (51)

f̄(n+1) = 2 f(n+1) − f(n),    (52)

where products and divisions involving the vector step sizes Σ(n) and T(n) and the diagonal matrices D1 and E1 are performed element-wise,
where f(0), f(−1), f̄(0), y(0), and y(−1) are initialized to zero vectors.
Before explaining each line of the spectral CT algorithm specified by Eqs. (47)-(52), we point out important features of the use of re-expansion at every iteration: (1) There are no nested loops. (2) The size of the system of equations is significantly reduced; note that only the first matrix block of K, D, E, and b (see Eq. (36) for their definition) appears in the steps of the algorithm. By re-expanding at every iteration the set of update steps for the second matrix block becomes trivial. (3) Re-expanding at every step is not guaranteed to converge, and an algorithm control parameter λ is introduced that balances algorithm convergence rate against possible unstable iteration, see Sec. IV for a demonstration on how λ impacts convergence. A similar strategy was used together with the CP algorithm in the use of non-convex image regularity norms, see [26].
The first line of the algorithm, Eq. (47), explicitly assigns the current material maps estimate to the new expansion point. In this way it is clear in the following steps whether f̄(n) enters the equations through the re-expansion center or through the steps of MOCCA. For the spectral CT algorithm it is convenient to use the vector step-sizes Σ(n) and T(n), defined in Eq. (48), from the pre-conditioned form of the CP algorithm [28], because the linear transform K1(f0) is changing at each iteration as the expansion center changes. Computation of the vector step-sizes only involves single matrix-vector products of |K1(f0)| and |K1(f0)|⊤ with a vector of ones, 1, where the operator | · | applied to a matrix is element-wise absolute value. Computationally this is much cheaper than performing the power method on K1(f0) to find the scalar step-sizes σ and τ, which would render re-expansion at every iteration impractical. In Eq. (48), the parameter λ enters in such a way that the product ‖Σ1/2KT1/2‖ remains constant. For the preconditioned CP algorithm, λ defined in this way will not violate the step-size condition. The dual and primal steps in Eqs. (50) and (51), respectively, are obtained by analytic computation of the minimizations in Eqs. (41) and (42) using Eq. (45) and G(f) = 0, respectively. The primal step at Eq.(51) and the primal variable prediction step at Eq. (52) are identical to the corresponding CP algorithm steps at Eqs. (30) and (31), respectively. The presented algorithm accounts only for the spectral CT data fidelity optimization. For the full algorithm incorporating TV constraints used in the results section, see the pseudocode in Appendix C.
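The vector step-size computation of Eq. (48) costs only two matrix-vector products, as the following sketch shows. The function name is our own, and the eps guard against empty rows or columns is our addition; lam rescales the dual steps up and the primal steps down so that the product ‖Σ1/2KT1/2‖ is unchanged, as stated in the text.

    import numpy as np

    def vector_step_sizes(K1, lam, eps=1e-12):
        # Diagonal preconditioning in the spirit of Eq. (48): row sums of
        # |K1| set the dual steps, column sums set the primal steps.
        absK = np.abs(K1)
        row_sums = absK @ np.ones(K1.shape[1])
        col_sums = absK.T @ np.ones(K1.shape[0])
        Sigma = lam / np.maximum(row_sums, eps)        # one per measurement
        T = 1.0 / (lam * np.maximum(col_sums, eps))    # one per unknown
        return Sigma, T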
C. One-step algorithm μ-preconditioning
One of the main challenges of spectral CT image reconstruction is the similar dependence of the linear X-ray attenuation curves on energy for different tissues/materials. This causes rows of the attenuation matrix μmi to be nearly linearly dependent, or equivalently its condition number is large. There are two effects of the poor conditioning of μmi: (1) the ability to separate the material maps is highly sensitive to inconsistency in the spectral CT transmission data, and (2) the poor conditioning of μmi contributes to the overall poor conditioning of spectral CT image reconstruction, negatively impacting algorithm efficiency. To address the latter issue, we introduce a simple preconditioning step that orthogonalizes the attenuation curves. We call this step “μ”-preconditioning to differentiate it from the preconditioning of the CP algorithm. To perform μ-preconditioning, we form the matrix
Mmm′ = Σi μmi μm′i,  i.e.  M = μ μ⊤,    (53)
and perform the eigenvalue decomposition

M = V S V⊤,  S = diag(s1, s2, …, sNm),

where the eigenvalues are ordered s1 ≥ s2 ≥ ⋯ ≥ sNm. The singular values of μ are given by the √si's and its condition number is √(s1/sNm). The preconditioning matrix for μ is given by
P = S1/2 V⊤.    (54)
Implementation of μ-preconditioning consists of the following steps:
-
Transformation of material maps and attenuation matrix - The appropriate transformation is arrived at through inserting the identity matrix in the form of P⁻¹P into the exponent of the intensity counts data model in Eq. (3):

ĉwℓ(f′) = Nwℓ Σi swℓi exp(−Σm μ′mi Σk Xℓk f′km),    (55)

where

μ′ = (P⁻¹)⊤ μ    (56)

and

f′ = P f.    (57)
-
Substitution into the spectral CT algorithm - Substitution of the transformed material maps and attenuation matrix into the spectral CT algorithm given by Eqs. (47)-(52) is fairly straightforward. All occurrences of f are replaced by f′, and the linear transform K1 is replaced by K1′, which is obtained from K1 by replacing μ with μ′ in the definitions of Z and A(f), Eqs. (16) and (17).
Using μ-preconditioning, care must be taken in computing the vector step sizes Σ′ and T′ in Eq. (48). Without μ-preconditioning, the absolute value symbols are superfluous, because K1 has non-negative matrix elements. With μ-preconditioning, the absolute value operation is necessary, because K1′ may have negative entries through its dependence on Z′ and in turn μ′.
-
Formulation of constraints - The previously discussed constraints are functions of the untransformed material maps. As a result, in using μ-preconditioning where we solve for the transformed material maps, the constraints should be formulated in terms of

f = P⁻¹ f′.    (58)

The explicit pseudocode for constrained data-discrepancy minimization using μ-preconditioning is given in Appendix C.
After applying the μ-preconditioned spectral CT algorithm the final material maps are arrived at through Eq. (58).
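The construction of P can be sketched directly. This assumes our reading of Eqs. (53)-(54), in which P = S1/2V⊤ so that the rows of μ′ = (P⁻¹)⊤μ come out orthonormal; it also assumes μ has full row rank, and the function name is our own.

    import numpy as np

    def mu_preconditioner(mu):
        # mu: (Nm, Ni) linear attenuation curves, one row per basis material.
        M = mu @ mu.T                        # Eq. (53), an Nm x Nm matrix
        s, V = np.linalg.eigh(M)             # eigenvalues in ascending order
        s, V = s[::-1], V[:, ::-1]           # reorder so s1 >= ... >= s_Nm
        P = np.diag(np.sqrt(s)) @ V.T        # Eq. (54)
        mu_prime = np.diag(1.0 / np.sqrt(s)) @ V.T @ mu   # (P^-1)^T mu
        return P, mu_prime

    # mu_prime @ mu_prime.T equals the identity up to round-off, so the
    # transformed attenuation curves are orthonormal and well conditioned.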
D. Convergence checks
Within the present primal-dual framework we employ the primal-dual gap for checking convergence. The primal-dual gap that we seek is the difference between the convex quadratic approximation using the first matrix block in Eq. (45), which is the objective function in the primal minimization
(1/2)(K1f − b1)⊤ D1 (K1f − b1) − [E1(z0 − b1)]⊤ K1f,    (59)
and the objective function in the Fenchel dual maximization problem
max over y of { −F*convex(z0; y) }  such that  K1⊤ y = 0.    (60)
These problems are derived from the general forms in Eqs. (25) and (32), and the constraint in the dual maximization comes from the fact that G(f) = 0 in the primal problem, see Sec. 3.1 in [25]. For a convergence check we inspect the difference between these two objective functions. Note that the constant term cancels in this subtraction and plays no role in the optimization algorithm, and could thus be left out. If the material maps f(n) attain a stable value, the dual constraint is necessarily satisfied, from inspection of Eq. (51). When other constraints are included, the estimates of the material maps should be checked against these constraints and the primal-dual gap is modified accordingly. Because the minimization problems of interest are non-convex, convergence checks indicate convergence to a local minimum.
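Continuing the toy least-squares example from Sec. III-B1, a conditional primal-dual gap can be monitored as follows; the dual constraint K⊤y = 0 (from G = 0) is reported separately, which is the sense in which the gap is "conditional." This is an illustration on a stand-in problem, not the paper's spectral CT gap.

    import numpy as np

    def conditional_pd_gap(K, b, x, y):
        # Primal objective for min_x 0.5*||Kx - b||^2 minus the dual
        # objective -F*(y), where F*(y) = 0.5*||y||^2 + b^T y; the dual
        # constraint K^T y = 0 is returned as a separate residual.
        primal = 0.5 * np.sum((K @ x - b) ** 2)
        dual = -(0.5 * np.sum(y ** 2) + b @ y)
        return primal - dual, np.linalg.norm(K.T @ y)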
IV. Results
We demonstrate use of the spectral CT algorithm on simulated transmission data modeling an ideal photon-counting detector. The X-ray spectrum, shown in Fig. 1, is assumed known. In modeling the ideal detector, the spectral response of an energy-windowed photon count measurement is taken to be the same as that of Fig. 1 between the bounding threshold energies of the window and zero outside. We conduct two studies. The first is focused on demonstrating convergence and application of the spectral CT algorithm with recovery of material maps for a two-material head phantom using the following minimization problems:

TPL-TV:  min over f of DTPL(c, ĉ(f))  such that  ‖ |∇fm| ‖1 ≤ γm, m = 1,…, Nm,
Fig. 1.
Normalized spectrum of a typical X-ray source for CT operating at a potential of 120 kV.
and

LSQ-TV:  min over f of DLSQ(c, ĉ(f))  such that  ‖ |∇fm| ‖1 ≤ γm, m = 1,…, Nm.
The pseudo-code for TPL-TV and LSQ-TV is given explicitly in Appendix C. The second study is more realistic, demonstrating application on an anthropomorphic chest phantom simulating multiple tissues/materials at multiple densities. For this study we demonstrate spectral CT image reconstruction of a mono-energetic image at energy E using

TPL-monoTV:  min over f of DTPL(c, ĉ(f))  such that  ‖ |∇ Σm μm(E) fm| ‖1 ≤ γmono.
Note that for monoenergetic image reconstruction, the TV constraint is placed on the monoenergetic image, but the optimization is performed over the individual material maps fm and the monoenergetic image is formed after the optimization using Eq. (10).
Aside from the system specification parameters, such as number of views, detector bins, and image dimensions, the algorithm parameters are the TV constraints γm for TPL-TV and LSQ-TV or γmono for TPL-monoTV and the primal-dual step size ratio λ. The TV constraints γm or γmono affect the image regularization, but λ is a tuning parameter which does not alter the solution of TPL-TV, LSQ-TV, or TPL-monoTV. It is used to optimize convergence speed of the spectral CT image reconstruction algorithm.
A. Head phantom studies with material map TV-constraints
For the present studies, we employ a two-material phantom derived from the FORBILD head phantom shown in Fig. 2. The spectral CT transmission counts are computed by use of the discrete-to-discrete model in Eq. (3). The true material maps fk,bone and fk,brain are the 256×256 pixel arrays shown in Fig. 2 and the corresponding linear X-ray attenuation coefficients μbone,i and μbrain,i are obtained from the NIST tables available in Ref. [23] for energies ranging from 20 to 120 keV in increments of 1 keV. By employing the same data model as that used in the image reconstruction algorithm, we can investigate the convergence properties of the spectral CT algorithm.
Fig. 2.
Bone and brain maps derived from the FORBILD head phantom. Both images are shown in the gray scale window [0.9, 1.1].
For the head phantom simulations, the scanning configuration is 2D fan-beam CT with a source to iso-center distance of 50 cm and source to detector distance of 100 cm. The physical size of the phantom pixel array is 20 × 20 cm². The number of projection views over a full 2π scan is 128 and the number of detector bins along a linear detector array is 512. This configuration is undersampled by a factor of 4 [31]. Two X-ray energy windows are simulated with a spectral response for each window given by the spectrum shown in Fig. 1 in the energy ranges [20 keV, 70 keV] and [70 keV, 120 keV] for the first and second energy windows, respectively.
Ideal data study
For ideal, noiseless data several image metrics are plotted in Fig. 3 for different values of λ, and it is observed that the conditional primal-dual (cPD) gap and data discrepancy tend to zero while the material map TVs converge to the designed values. For this problem the convergence metrics are the cPD and material map TVs; the data discrepancy only tends to zero here due to the use of ideal data and in general when data inconsistency is present the minimum data discrepancy will be greater than zero. The convergence metrics demonstrate convergence of the spectral CT algorithm for the particular problem under study. It is important, however, to inspect these metrics for each application of the algorithm, because there is no theoretical guarantee of convergence due to the re-expansion step in Eq. (47). From the present results it is clear that progress towards convergence depends on λ; thus it is important to perform a search over λ.
Fig. 3.
Convergence metrics for LSQ-TV and TPL-TV and for different values of λ with ideal, noiseless data. First, second, third, and fourth rows show the conditional primal-dual (cPD) gap, data discrepancy objective function, difference between the TV of estimated bone map and that of the phantom bone map, and same for the brain map TV. Note that the expressions for the gap and data discrepancy are different for TPL and LSQ; thus those quantities are not directly comparable.
To demonstrate convergence of the material map estimates to the corresponding phantom, we plot image root-mean-square-error (RMSE) in Fig. 4 as a function of iteration number and show the map differences at the last iteration performed in Fig. 5. The material map estimates are seen to converge to the corresponding phantom maps despite the projection view-angle under-sampling. Thus we note that the material map TV constraints are effective at combatting these under-sampling artifacts just as they are for standard CT [32], [33]. The image RMSE curves only give a summary metric for material map convergence, and it is clear from the difference images displayed in narrow gray scale window that convergence can be spatially non-uniform.
Fig. 4.
Convergence of the material map estimates to the phantom material maps for LSQ-TV and TPL-TV and for different values of λ with ideal, noiseless data.
Fig. 5.
Difference between estimated brain and bone maps after 5,000 iterations and the corresponding phantom map shown in a 1% gray scale window [-0.01, 0.01] for TPL-TV and a 0.1% window [-0.001, 0.001] for LSQ-TV and different values of λ with ideal, noiseless data. The difference images are displayed in a region of interest around the sinus bones.
For these idealized examples the pre-conditioned spectral CT algorithm appears to be more effective for LSQ-TV than TPL-TV, as the image RMSE attained for the former is significantly lower than that of the latter. In the Fig. 4 curves for LSQ-TV at λ = 1 × 10², the image RMSE plateaus at 10⁻⁵ due to the fact that the solution of LSQ-TV is achieved to the single precision accuracy of the computation.
Noisy data study
The noisy simulation parameters are identical to the previous noiseless study except that the spectral CT data are generated from the transmission Poisson model. The mean of the transmission measurements is arrived at by assuming 4 × 10⁶ total photons are incident at each detector pixel over the complete X-ray spectrum. As the simulated scan acquires only 128 views, the total X-ray exposure is equivalent to acquiring 512 views at 1 × 10⁶ photons per detector pixel.
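Generating the noisy data is then a one-line operation once the mean counts are available from the spectral forward model (compare the transmitted_counts sketch in Sec. II-A); the incident fluence Nwℓ in that model is what encodes the 4 × 10⁶ per-pixel photon budget. The seed and function name are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(2016)

    def noisy_counts(c_hat):
        # Draw counts from the transmission Poisson model; c_hat holds the
        # mean transmitted counts per window and ray from Eq. (3).
        return rng.poisson(c_hat)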
We obtain multiple material map reconstructions varying the TV constraints among values greater than or equal to the actual values of the known bone and brain maps. The results for TV-TPL are shown in Figs. 6 and 7, and those for TV-LSQ are shown in Figs. 8 and 9. In all images the bone and brain maps recover the main features of the true phantom maps, and the main difference in the images is the structure of the noise. The noise texture of the recovered brain maps appears to be patchy for lower TV and grainy for larger TV constraints. Also, in comparing the brain maps for TPL-TV in Fig. 7 and LSQ-TV in Fig. 9, streak artifacts are more apparent in the latter particularly for the larger TV constraint values.
Fig. 6.
Reconstructed bone map by use of TPL-TV from simulated noisy projection spectral CT transmission data. The material map TV constraints are varied according to fractions of the corresponding phantom material map TV.
Fig. 7.
Reconstructed brain map by use of TPL-TV from simulated noisy projection spectral CT transmission data. The material map TV constraints are varied according to fractions of the corresponding phantom material map TV.
Fig. 8.
Reconstructed bone map by use of LSQ-TV from simulated noisy projection spectral CT transmission data. The material map TV constraints are varied according to fractions of the corresponding phantom material map TV.
Fig. 9.
Reconstructed brain map by use of LSQ-TV from simulated noisy projection spectral CT transmission data. The material map TV constraints are varied according to fractions of the corresponding phantom material map TV.
It is instructive to examine the convergence metrics in Fig. 10 and image convergence in Fig. 11 for this noisy simulation. The presentation parallels the noiseless results in Figs. 3 and 4, respectively. The differences are that results are shown for a single λ and the image RMSE is shown for two different TV-constraint settings in the present noisy simulations. The cPD and TV plots all indicate convergence to a solution for TPL-TV and LSQ-TV. We remind the reader, however, that for non-convex problems cPD can only indicate convergence to a local minimizer. It is possible to test the robustness of the obtained solution by performing reconstruction with alternate initial material maps, but in general there is no guarantee that the obtained solution is a global minimizer.
Fig. 10.
Same as Fig. 3 except that only one value of λ is shown and the results are for noisy data and the TV constraints for the bone and brain maps are set to 1.1 × TVbone and 1.1 × TVbrain, respectively. The TV constraint settings correspond to the center images in Figs. 6-9.
Fig. 11.
Convergence of the material map estimates to the phantom material maps for LSQ-TV and TPL-TV and for noisy data with two different settings of the TV constraints. The TV factor applies to both the bone and brain maps, so that TV factors of 1.1 and 1.2 correspond to the center and bottom-left images of Figs. 6-9.
We note that the value of the data discrepancy objective function settles on a positive value as expected for inconsistent data. The data discrepancy, however, does not provide a check on convergence. It is true that if the data discrepancy changes with iteration we do not have convergence, but the converse is not necessarily true. It is also reassuring to observe that the convergence rates for the set values of λ are similar between the noiseless and noisy results. This similarity is also not affected by the fact that the TV constraints are set to different values in each of these simulations.
The RMSE comparison of the recovered material maps with the true phantom maps shown in Fig. 11 indicates an average error less than 1% for the bone map and just under 2% for the brain map (100 × the RMSE values can be interpreted as a percent error because the material maps have a value of 0 or 1). The main purpose of showing these plots is to see quantitatively the difference between the TPL and LSQ data discrepancy terms. We would expect to see lower values of image RMSE for TPL-TV, because the simulated noise is generated by a transmission Poisson model. Indeed the image RMSE is lower for TPL-TV and the gap between TPL-TV and LSQ-TV is larger for looser TV constraints. We do point out that image RMSE may not translate into better image quality, because image quality depends on the imaging task for which the images are used. Task-based assessment would take into account features of the observed signal, noise texture, and possibly background texture and observer perception [34].
One of the benefits of using the TV constraints instead of TV penalties is that the material maps reconstructed using the TPL and LSQ data discrepancy terms can be compared meaningfully. The TV constraint parameters will result in material maps with exactly the chosen TVs, while to achieve the same with the penalization approach the penalty parameters must be searched to achieve equivalent TVs. Also generating simulation results becomes more efficient, because we can directly make use of the known phantom TV values.
B. Chest phantom studies with a mono-energetic image TV constraint
For the final set of results we employ an anthropomorphic chest phantom created from segmentation of an actual CT chest image. Different tissue types and densities are labeled in the image totaling 24 material/density combinations, including various soft tissues, calcified/bony regions, and Gadolinium contrast agent. To demonstrate the spectral CT algorithm on this more realistic phantom model, we select the TPL-monoTV optimization problem for the material map reconstruction. The material basis is selected to be water, bone, and Gadolinium contrast agent. Using TPL-monoTV is simpler than TPL-TV in that only the energy for the mono-energetic image and a single TV constraint parameter are needed instead of three parameters – the TV for each of the material maps. There are potential advantages to constraining the TV of the material maps individually, but the purpose here is to demonstrate use of the spectral CT algorithm and accordingly we select the simpler optimization problem.
For the chest phantom simulations, the scanning configuration is again 2D fan-beam CT with a source to iso-center distance of 80 cm and source to detector distance of 160 cm. The physical size of the phantom pixel array is 29 × 29 cm². The number of projection views is 128 over a 2π scan, and the number of detector pixels is 512. Five X-ray energy windows are simulated in the energy ranges [20, 50], [50, 60], [60, 80], [80, 100], and [100, 120] keV. The lowest energy window is selected wider than the other four to avoid photon starvation. Noise is added in the same way as the previous simulation. The transmitted counts data follow a Poisson model with a total of 4 × 10⁶ photons per detector pixel. The monoenergetic image at 70 keV along with unregularized image reconstruction by TPL are shown in Fig. 12. The TPL mono-energetic image reconstruction demonstrates the impact of the simulated noise on the reconstructed image.
Fig. 12.
(Left) Chest phantom displayed at 70 keV in a gray scale window of [0, 0.5] cm⁻¹. (Right) Reconstruction by use of unregularized TPL. The estimated material maps are combined to form the shown monochromatic image estimate at 70 keV (gray scale [0, 1.0] cm⁻¹). For reference the TV values of the phantom and unconstrained reconstructed image are 2,587 and 7,686, respectively.
In Fig. 13 we show the resulting monoenergetic images from TPL-monoTV at three values of the TV constraint. The reconstructed images are shown globally in a wide gray scale and in an ROI focused on the right lung in a narrow gray scale window. The values of the TV constraint are selected based on visualization of the fine structures in the lung. For viewing these features, relatively low values of TV are selected. We note that in the global images the same TV values show the high-contrast structures with few artifacts. We point out that the spectral CT algorithm yields three basis material maps, shown in Fig. 14, and the mono-energetic images are formed by use of Eq. (10).
Fig. 13.
Estimated monochromatic images by use of TPL-monoTV. The left column shows the complete image in a gray scale window of [0, 0.5] cm⁻¹. The right column magnifies a region of interest (ROI) in the right lung, and the gray scale is narrowed to [0, 0.1] cm⁻¹ in order to see the soft tissue detail. The top set of images corresponds to the phantom. The location of the ROI is indicated in the left phantom image inset by use of the narrow [0, 0.1] cm⁻¹ gray scale. The second, third, and fourth rows correspond to images obtained by different TV constraints of the monoenergetic image at 70 keV.
Fig. 14.
Basis material maps: water (left), bone (middle), and Gadolinium contrast agent solution (right), corresponding to the monoenergetic image with TV of 1000 shown in Fig. 13. The basis material maps are shown in a gray scale window of [-0.2, 1.2]. The resulting reconstructed material maps agree well with the phantom maps in terms of structure, but interestingly the reconstructed maps show a larger noise level than the corresponding monochromatic image in Fig. 13, which is a linear combination of the shown material maps.
The selected optimization problems and simulation parameters are chosen to demonstrate possible applications of the proposed image reconstruction algorithm for spectral CT. Comparison of the TPL and LSQ data discrepancy in Figs. 7 and 9 does show fewer artifacts for TPL-TV, where the simulated noise model matches the TPL likelihood. In practice, we may not see the same relative performance on real data – the simulations ignore some important physical factors of spectral CT, and image quality evaluation depends on the task for which the images are used.
V. Conclusion
We have developed a constrained minimization algorithm for inverting spectral CT transmission data directly to basis material maps. The algorithm addresses the associated non-convex data discrepancy terms by employing a local convex quadratic upper bound to derive the descent step. While we have derived the algorithm for TPL and LSQ data discrepancy terms, the same strategy can be applied to derive an image reconstruction algorithm for other data fidelities. The spectral CT algorithm derives from the convex-concave optimization algorithm, MOCCA, which we have developed for addressing an intermediate problem arising from use of the local convex quadratic approximation. The simulations demonstrate the spectral CT algorithm for TV-constrained data discrepancy minimization, where the TV constraints can be applied to the individual basis maps or to an estimated monochromatic X-ray attenuation map.
Future work will investigate robustness of the algorithm to data inconsistency due to spectral miscalibration error, X-ray scatter, and various physical processes involved in photon-counting detection. The spectral CT algorithm's ability to incorporate basis map constraints in the inversion process should provide a means to control artifacts due to such inconsistencies. We are also pursuing a generalization to the present algorithm to allow for auto-calibration of the spectral response of the CT system.
Acknowledgments
This work was supported by NIH Grants R21EB015094, CA158446, CA182264, and EB018102. The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health.
Appendix A
Gradient of LTPL
We derive the gradient in Eq. (12), motivating the definition of the linear transform A. Recall Eqs. (3) and (16):

ĉwℓ(f) = Nwℓ Σi swℓi exp(−(Zf)iℓ),  (Zf)iℓ = Σm μmi Σk Xℓk fkm.
The gradient of LTPL is

∂LTPL/∂fkm = Σwℓ (1 − cwℓ/ĉwℓ(f)) ∂ĉwℓ(f)/∂fkm = Σwℓ (cwℓ − ĉwℓ(f)) Σi [Nwℓ swℓi exp(−(Zf)iℓ)/ĉwℓ(f)] μmi Xℓk.
Continuing the algebraic manipulation, we insert the ray identity Iℓℓ′ = δℓℓ′:

∂LTPL/∂fkm = Σwℓ Σi Σℓ′ rwℓ [δℓℓ′ Nwℓ swℓi exp(−(Zf)iℓ′)/ĉwℓ(f)] μmi Xℓ′k = (Z⊤ A(f)⊤ r)km,

which is Eq. (12).
The other necessary gradient and Hessian computations follow from similar manipulations.
Appendix B
Positive semidefiniteness of ∇²+L and ∇²−L
Recall Eq. (17),

A(f)wℓ,iℓ′ = δℓℓ′ Nwℓ swℓi exp(−(Zf)iℓ) / ĉwℓ(f).

For ease of presentation, we collapse the double indices of A, writing A(f)s,t with s = (w, ℓ) and t = (i, ℓ′). We show that, for any vector b with non-negative components,

A(f)⊤ diag(b) A(f) ⪯ diag(A(f)⊤ b).    (61)
This inequality can be used to prove that the Hessian components in Eqs. (20), (21), (22), and (23) are positive semidefinite by setting b equal to r−, r+, r(log)−, and r(log)+, respectively.
To prove Eq. (61), we expand b in unit vectors,

b = Σs bs ês,
and show for any vector u that

u⊤ A(f)⊤ diag(ês) A(f) u ≤ u⊤ diag(A(f)⊤ ês) u.    (62)
Fixing s, we define the vector v with components vt = A(f)s,t, or using the unit vector ês,

v = A(f)⊤ ês.

From the definition of A, we note that vt ≥ 0 for all t ∈ {1,…, Nt} and Σt vt = 1. By the definition of v, the left-hand side of Eq. (62), lhs, is

lhs = (v⊤u)² = (Σt vt ut)²,

and by the Cauchy-Schwarz inequality,

(Σt vt ut)² = (Σt √vt · √vt ut)² ≤ (Σt vt)(Σt vt ut²) = Σt vt ut² = u⊤ diag(A(f)⊤ ês) u.

This proves the inequality in Eq. (62).
Using Eq. (62), we prove the inequality in Eq. (61):

u⊤ A(f)⊤ diag(b) A(f) u = Σs bs [u⊤ A(f)⊤ diag(ês) A(f) u] ≤ Σs bs [u⊤ diag(A(f)⊤ ês) u] = u⊤ diag(A(f)⊤ b) u,    (63)

where the inequality holds because bs ≥ 0 by assumption, so the sum is a linear combination of the per-s inequalities with non-negative coefficients.
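A quick numerical spot-check of the inequality in Eq. (61) can be run with a random matrix having the row-normalization property of A(f) and a random non-negative b; the sizes and seed are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.random((5, 7))
    A /= A.sum(axis=1, keepdims=True)   # rows sum to one, as for A(f)
    b = rng.random(5)                    # non-negative vector

    gap = np.diag(A.T @ b) - A.T @ np.diag(b) @ A
    print(np.linalg.eigvalsh(gap).min() >= -1e-12)   # True: Eq. (61) holds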
Appendix C
Derivation of spectral CT algorithm for TPL-TV and LSQ-TV with μ-preconditioning
TV-constrained optimization
To derive the algorithm used in the Results section, we write down the intermediate convex optimization problem that involves the first block of the local quadratic upper bound to DTPL or DLSQ:

min over f of { (1/2)(K1f − b1)⊤ D1 (K1f − b1) − [E1(z0 − b1)]⊤ K1f }  such that  ‖ |∇fm| ‖1 ≤ γm, m = 1,…, Nm.    (64)
That only the first block of the full quadratic expression appears is explained in Sec. III-B4, and the form of D1, E1 and b1, given in Sec. III-B2, determines whether we are addressing TPL-TV or LSQ-TV. The data discrepancy term of this optimization problem is the same as the objective function of Eq. (45), but it differs from Eq. (45) in that we have added the convex constraints on the material map TV values. We write Eq. (64) using indicator functions (see Eq. (5)) to code the TV constraints and we introduce the μ-preconditioning transformation described in Sec. III-C:
min over f′ of { (1/2)(K1′f′ − b1)⊤ D1 (K1′f′ − b1) − [E1(z0 − b1)]⊤ K1′f′ + Σm δ( ‖ |∇(P⁻¹f′)m| ‖1 ≤ γm ) },    (65)
where f′ = Pf are the transformed (μ-preconditioned) material maps from Sec. III-C. Note that the TV constraints apply to the untransformed material maps f = P−1 f′.
Writing constrained TV optimization in the general form F(Kx) + G(x)
To derive the CP primal-dual algorithm, we write Eq. (65) in the form of Eq. (25). We note that all the terms involve a linear transform of f′, and accordingly we make the following assignments:

K = (K1′; ∇P⁻¹),  F(Kf′) = F1(K1′f′) + F2(∇P⁻¹f′),  G(f′) = 0,

where

F1(z) = (1/2)(z − b1)⊤ D1 (z − b1) − [E1(z0 − b1)]⊤ z  and  F2(z) = Σm δ( ‖ |zm| ‖1 ≤ γm ).

Note that we use the short-hand that the gradient operator, ∇, applies to each of the material maps in the composite material map vector, P⁻¹f′. The Legendre transform in Eq. (27) provides the necessary dual functions F1*, F2*, and G*. By direct computation,
G*(w) = 0 if w = 0, and G*(w) = ∞ otherwise.    (66)
From Sec. 3.1 of Ref. [25],

F1*(y) = (1/2)[y + E1(z0 − b1)]⊤ D1⁻¹ [y + E1(z0 − b1)] + b1⊤ y.    (67)
Convex conjugate of F2
We sketch the derivation of F2*, and for this derivation we drop the “grad” subscript.
The Legendre transform maximization over the variable y′, dual to the material map gradients, is reduced to a maximization over the spatial magnitude g′ = |y′|, because the indicator function is independent of the spatial direction of y′ and the term y⊤y′ is maximized when the spatial direction of y′ lines up with y; hence the term y⊤y′ is replaced by g⊤g′, which we explicitly write as a sum over the material index m. The maximization and summation order can be switched, because each of the terms in the summation is independent of the others. Evaluation of the maximization over g′m can be seen in the diagram shown in Fig. 15. Accordingly we find
Fig. 15.
Schematic illustrating the solution of the maximization over g′m. The input vector gm and the maximizing vector g′m are indicated on a 2D schematic, but the argument applies for the full Nk-D space of gm. Because g′m is a vector of magnitudes, each component is non-negative, (g′m)k ≥ 0. The indicator function confines g′m below the line (hyper-plane) Σk (g′m)k = γm. The combination of these constraints confines g′m to the schematic, shaded triangle. The maximizer is the vector that maximizes the dot product g′m⊤gm (or equivalently the projection of g′m onto gm, as indicated by the dashed line from the head of g′m to the arrow indicating gm). Maximization of this dot product is achieved by choosing g′m such that it is aligned along the unit vector corresponding to the largest component of gm. The largest component of gm is also known as the “infinity-norm”, ‖gm‖∞. Thus we have (γm êk−max)⊤ gm = γm ‖gm‖∞.
(68)
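The result instantiates a standard support-function identity: because g_m has non-negative components (it is a vector of magnitudes), the maximization over the simplex-like region of Fig. 15 gives

$$\max_{g'_m \ge 0,\; \|g'_m\|_1 \le \gamma_m} \; (g'_m)^{\top} g_m \;=\; \gamma_m \, \|g_m\|_{\infty},$$

which produces the scaled infinity norms entering Eq. (68).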
Dual maximization of Eq. (65)
Using Eqs. (32), (66), (67), and (68), we obtain the maximization problem dual to Eq. (64)
(69)
The objective functions of the primal and dual problems, in Eqs. (64) and (69) respectively, are needed to generate the conditional primal-dual gap plots in Fig. 3; their difference, the gap, is non-negative and shrinks toward zero as the iterates approach a primal-dual solution, providing a convergence check.
The material map TV proximity step
In order to derive the TPL-TV and LSQ-TV algorithms, we need to evaluate the proximity minimization in Eq. (41)
The proximity problem splits into “sino” and “grad” sub-problems, and the “sino” sub-problem results in Eq. (50). We solve here the “grad” proximity optimization to obtain the pseudocode for TPL-TV and LSQ-TV
Dropping the “grad” subscript on y′, we employ the Moreau identity, which relates the proximity optimizations of a function and its dual
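In its standard form, for a convex function F and step size σ > 0, the identity reads

$$\operatorname{prox}_{\sigma F^{*}}(y) \;=\; y - \sigma \operatorname{prox}_{\sigma^{-1} F}\big(y/\sigma\big).$$

Because F_grad is the indicator of the TV constraint set, its proximity operator is the projection onto that set, so the dual “grad” update reduces to evaluating y − σ Proj(y/σ; C), with C the constraint set.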
The dual “grad” update separates into the individual material map m components
To simplify the proximity minimization we set
The proximity minimization is then a projection of g onto a weighted ℓ1-ball.
If g is inside the weighted ℓ1-ball, i.e. ‖g/w‖1 ≤ γ, the projection Proj(g; {g′ : ‖g′/w‖1 ≤ γ}) returns g itself. If g is outside the weighted ℓ1-ball, i.e. ‖g/w‖1 > γ, there exists an α0 such that
The parameter α0 is defined implicitly by
and it can be determined by any standard root finding technique applied to
where the search interval is α ∈ [0, ‖g/w‖∞].
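As a concrete illustration, the following minimal NumPy sketch performs this projection with bisection as the root finder. It is not the paper's implementation: the function name is ours, and it assumes the ball {x : ‖x/w‖1 ≤ γ} together with the soft-thresholding form that the corresponding Lagrangian condition yields; the bracket endpoint below is the one matching that convention (the interval quoted above corresponds to the paper's own weighting convention).

```python
import numpy as np

def proj_weighted_l1_ball(g, w, gamma, tol=1e-10, max_iter=200):
    """Project g onto {x : sum_k |x_k| / w_k <= gamma}; w > 0 elementwise.

    Solves min_x 0.5*||x - g||^2 subject to the ball constraint. The KKT
    conditions give the soft-thresholded form
        x_k = sign(g_k) * max(|g_k| - alpha0 / w_k, 0),
    with alpha0 >= 0 fixed by requiring the constraint to hold with equality.
    """
    if np.sum(np.abs(g) / w) <= gamma:
        return g.copy()                      # already inside: projection is g

    def h(alpha):
        # Residual of the implicit equation; monotonically non-increasing
        # in alpha, with h(alpha0) = 0.
        x_mag = np.maximum(np.abs(g) - alpha / w, 0.0)
        return np.sum(x_mag / w) - gamma

    # Bracket the root: h(0) > 0 since g is outside the ball, and every
    # component is thresholded to zero once alpha >= max_k w_k * |g_k|.
    lo, hi = 0.0, np.max(w * np.abs(g))
    for _ in range(max_iter):                # plain bisection
        mid = 0.5 * (lo + hi)
        if h(mid) > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo <= tol * (1.0 + hi):
            break
    alpha0 = 0.5 * (lo + hi)
    return np.sign(g) * np.maximum(np.abs(g) - alpha0 / w, 0.0)
```

Bisection suffices here because the residual is continuous and monotonically non-increasing in α; any standard root finder (Newton, secant) could be swapped in.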
The pseudocode for TPL-TV and LSQ-TV
Having derived the TV constraint proximity step, we are in a position to write the complete pseudocode for the spectral CT algorithm, including the TV constraints. We do employ the μ-preconditioning that orthogonalizes the linear attenuation coefficients, but we drop the prime notation on f and K; a schematic outline of the loop follows.
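As a hedged sketch of how the derived steps assemble, the Python outline below follows the generic Chambolle-Pock iteration (Algorithm 1 of [27]). The callables A/At (forward projector and adjoint), grad/grad_adj (spatial gradient and adjoint), prox_sino_dual (standing in for the “sino” update of Eq. (50)), and proj_tv_ball (wrapping the weighted ℓ1-ball projection above, with w, γ, and the magnitude reduction folded in) are placeholders; the paper's specific step sizes and preconditioning are left abstract.

```python
import numpy as np

def cp_spectral_tv(A, At, grad, grad_adj, prox_sino_dual, proj_tv_ball,
                   sigma, tau, n_iters, f0):
    """Generic Chambolle-Pock primal-dual loop (Algorithm 1 of [27]).

    Illustrative only: two dual blocks as in the text, a 'sino' block
    handled by prox_sino_dual and a 'grad' block handled via the Moreau
    identity, i.e. a weighted l1-ball projection. The primal proximity
    step is the identity here, since all constraints sit in the dual blocks.
    """
    f = f0.copy()
    f_bar = f0.copy()
    y_sino = np.zeros_like(A(f0))            # dual variables start at zero
    y_grad = np.zeros_like(grad(f0))
    for _ in range(n_iters):
        # Dual ascent on both blocks: y <- prox_{sigma F*}(y + sigma K f_bar).
        y_sino = prox_sino_dual(y_sino + sigma * A(f_bar))
        z = y_grad + sigma * grad(f_bar)
        y_grad = z - sigma * proj_tv_ball(z / sigma)   # Moreau identity
        # Primal descent: f <- f - tau * K^T y, then over-relaxation.
        f_new = f - tau * (At(y_sino) + grad_adj(y_grad))
        f_bar = 2.0 * f_new - f
        f = f_new
    return f
```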
The final material maps after N iterations are obtained by applying the inverse preconditioner, P⁻¹, to the last iterate (cf. f = P⁻¹f′ above).
For all the results presented in the article, all variables are initialized to zero.
Contributor Information
Rina Foygel Barber, Department of Statistics, The University of Chicago, 5734 S. University Ave., Chicago, IL 60637, USA.
Emil Y. Sidky, Department of Radiology, The University of Chicago, 5841 S. Maryland Ave., Chicago, IL 60637, USA.
Taly Gilat Schmidt, Department of Biomedical Engineering, Marquette University, Milwaukee, WI 53233, USA.
Xiaochuan Pan, Department of Radiology, The University of Chicago, 5841 S. Maryland Ave., Chicago, IL 60637, USA.
References
1. Taguchi K, Iwanczyk JS. Vision 20/20: Single photon counting x-ray detectors in medical imaging. Med Phys. 2013;40:100901. doi: 10.1118/1.4820371.
2. Alvarez RE, Macovski A. Energy-selective reconstructions in X-ray computerised tomography. Phys Med Biol. 1976;21:733–744. doi: 10.1088/0031-9155/21/5/002.
3. Shikhaliev PM. Energy-resolved computed tomography: first experimental results. Phys Med Biol. 2008;53:5595–5613. doi: 10.1088/0031-9155/53/20/002.
4. Schmidt TG. Optimal “image-based” weighting for energy-resolved CT. Med Phys. 2009;36:3018–3027. doi: 10.1118/1.3148535.
5. Alessio AM, MacDonald LR. Quantitative material characterization from multi-energy photon counting CT. Med Phys. 2013;40:031108. doi: 10.1118/1.4790692.
6. Roessl E, Proksa R. K-edge imaging in x-ray computed tomography using multi-bin photon counting detectors. Phys Med Biol. 2007;52:4679–4696. doi: 10.1088/0031-9155/52/15/020.
7. Schlomka JP, Roessl E, Dorscheid R, Dill S, Martens G, Istel T, Bäumer C, Herrmann C, Steadman R, Zeitler G, Livne A, Proksa R. Experimental feasibility of multi-energy photon-counting K-edge imaging in pre-clinical computed tomography. Phys Med Biol. 2008;53:4031–4047. doi: 10.1088/0031-9155/53/15/002.
8. Roessl E, Cormode D, Brendel B, Engel KJ, Martens G, Thran A, Fayad Z, Proksa R. Preclinical spectral computed tomography of gold nano-particles. Nucl Inst Meth A. 2011;648:S259–S264.
9. Cormode DP, Roessl E, Thran A, Skajaa T, Gordon RE, Schlomka JP, Fuster V, Fisher EA, Mulder WJM, Proksa R, Fayad ZA. Atherosclerotic plaque composition: Analysis with multicolor CT and targeted gold nanoparticles. Radiology. 2010;256:774–782. doi: 10.1148/radiol.10092473.
10. Roessl E, Brendel B, Engel KJ, Schlomka JP, Thran A, Proksa R. Sensitivity of photon-counting based K-edge imaging in X-ray computed tomography. IEEE Trans Med Imaging. 2011;30:1678–1690. doi: 10.1109/TMI.2011.2142188.
11. Schirra CO, Roessl E, Koehler T, Brendel B, Thran A, Pan D, Anastasio MA, Proksa R. Statistical reconstruction of material decomposed data in spectral CT. IEEE Trans Med Imaging. 2013;32:1249–1257. doi: 10.1109/TMI.2013.2250991.
12. Hounsfield GN. Computerized transverse axial scanning (tomography): Part 1. Description of system. Brit J Radiol. 1973;46:1016–1022. doi: 10.1259/0007-1285-46-552-1016.
13. Maaß C, Baer M, Kachelrieß M. Image-based dual energy CT using optimized precorrection functions: A practical new approach of material decomposition in image domain. Med Phys. 2009;36:3818–3829. doi: 10.1118/1.3157235.
14. Brooks RA. A quantitative theory of the Hounsfield unit and its application to dual energy scanning. J Comp Assist Tomography. 1977;1:487–493. doi: 10.1097/00004728-197710000-00016.
15. Fessler JA, Elbakri IA, Sukovic P, Clinthorne NH. Maximum-likelihood dual-energy tomographic image reconstruction. Proc SPIE. 2002;4684:38–49.
16. Elbakri IA, Fessler JA. Statistical image reconstruction for polyenergetic X-ray computed tomography. IEEE Trans Med Imaging. 2002;21:89–99. doi: 10.1109/42.993128.
17. Chung J, Nagy JG, Sechopoulos I. Numerical algorithms for polyenergetic digital breast tomosynthesis reconstruction. SIAM J Imaging Sci. 2010;3(1):133–152.
18. Cai C, Rodet T, Legoupil S, Mohammad-Djafari A. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography. Med Phys. 2013;40:111916. doi: 10.1118/1.4820478.
19. Zhang R, Thibault JB, Bouman CA, Sauer KD, Hsieh J. Model-based iterative reconstruction for dual-energy X-ray CT using a joint quadratic likelihood model. IEEE Trans Med Imaging. 2014;33:117–134. doi: 10.1109/TMI.2013.2282370.
20. Long Y, Fessler JA. Multi-material decomposition using statistical image reconstruction for spectral CT. IEEE Trans Med Imaging. 2014;33:1614–1626. doi: 10.1109/TMI.2014.2320284.
21. Sawatzky A, Xu Q, Schirra CO, Anastasio MA. Proximal ADMM for multi-channel image reconstruction in spectral X-ray CT. IEEE Trans Med Imaging. 2014;33:1657–1668. doi: 10.1109/TMI.2014.2321098.
22. Nakada K, Taguchi K, Fung GSK, Amaya K. Joint estimation of tissue types and linear attenuation coefficients for photon counting CT. Med Phys. 2015;42:5329–5341. doi: 10.1118/1.4927261.
23. Hubbell JH, Seltzer SM. Tables of X-ray mass attenuation coefficients and mass energy-absorption coefficients 1 keV to 20 MeV for elements Z = 1 to 92 and 48 additional substances of dosimetric interest. Tech Rep, Ionizing Radiation Division, National Institute of Standards and Technology; Gaithersburg, MD: 1995.
24. Rigie DS, La Rivière PJ. Joint reconstruction of multi-channel, spectral CT data via constrained total nuclear variation minimization. Phys Med Biol. 2015;60:1741–1762. doi: 10.1088/0031-9155/60/5/1741.
25. Sidky EY, Jørgensen JH, Pan X. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle–Pock algorithm. Phys Med Biol. 2012;57:3065–3091. doi: 10.1088/0031-9155/57/10/3065.
26. Sidky EY, Chartrand R, Boone JM, Pan X. Constrained TpV minimization for enhanced exploitation of gradient sparsity: Application to CT image reconstruction. IEEE J Transl Eng Health Med. 2014;2:1800418. doi: 10.1109/JTEHM.2014.2300862.
27. Chambolle A, Pock T. A first-order primal-dual algorithm for convex problems with applications to imaging. J Math Imag Vis. 2011;40:120–145.
28. Pock T, Chambolle A. Diagonal preconditioning for first order primal-dual algorithms in convex optimization. In: International Conference on Computer Vision (ICCV 2011). Barcelona, Spain: IEEE; 2011. pp. 1762–1769.
29. Jørgensen JS, Sidky EY. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray CT. Phil Trans Royal Soc A. 2015;373:20140387. doi: 10.1098/rsta.2014.0387.
30. Barber RF, Sidky EY. MOCCA: mirrored convex/concave optimization for nonconvex composite functions. 2015. arXiv:1510.08842, http://arxiv.org/abs/1510.08842.
31. Jørgensen JS, Sidky EY, Pan X. Quantifying admissible undersampling for sparsity-exploiting iterative image reconstruction in X-ray CT. IEEE Trans Med Imaging. 2013;32:460–473. doi: 10.1109/TMI.2012.2230185.
32. Sidky EY, Kao CM, Pan X. Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT. J X-ray Sci Tech. 2006;14:119–139.
33. Sidky EY, Pan X. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Phys Med Biol. 2008;53:4777–4807. doi: 10.1088/0031-9155/53/17/021.
34. Barrett HH, Myers KJ. Foundations of Image Science. Hoboken, NJ: John Wiley & Sons; 2004.