Abstract
We present a method for obtaining accurate image reconstruction from highly sparse data in diffraction tomography (DT). A practical need exists for reconstruction from few-view and limited-angle data, as this can greatly reduce required scan times in DT. Our method does this by minimizing the total variation (TV) of the estimated image, subject to the constraint that the Fourier transform of the estimated image matches the measured Fourier data samples. Using simulation studies, we show that the TV-minimization algorithm allows accurate reconstruction in a variety of few-view and limited-angle situations in DT. Accurate image reconstruction is obtained from far fewer data samples than are required by common algorithms such as the filtered-backpropagation algorithm. Overall our results indicate that the TV-minimization algorithm can be successfully applied to DT image reconstruction under a variety of scan configurations and data conditions of practical significance.
1. INTRODUCTION
In diffraction tomography (DT), properties of an object are reconstructed from the object’s diffracted scattered field [1]. This modality has been investigated for its potential applications to breast imaging, geophysical imaging, and weather prediction, among others [2–4]. Under the assumption of weak scattering one can use the Born or Rytov approximations to derive the so-called Fourier diffraction projection theorem (FDPT) [1,5], which relates the 2D Fourier transform of the object to the measured data. This serves as a basis for the most widely used image reconstruction algorithms in DT, such as the filtered-backpropagation (FBPP) algorithm [6,7]. The FBPP algorithm and other methods based on the FDPT are not applicable in few-view situations; rather, they require a large set of interrogating waves, tightly packed over the unit sphere. Few-view applications of DT are of practical significance because they can greatly reduce required scan times and associated costs. There has therefore been recent effort to develop reconstruction algorithms for few-view applications, such as the inverse-scattering algorithm of [8], which is not based on the FDPT.
In this work, we investigate and apply a new algorithm for image reconstruction from few-view and limited-angle Fourier data that is compatible with the assumptions of the FDPT. This algorithm iteratively minimizes the total variation (TV) of the estimated image subject to the constraint that the reconstructed image match the measured object data [9]. This method should be effective for image reconstruction of objects that have sparse gradients, and has been applied successfully to CT images reconstructed from sparse data [10]. Here we investigate and apply this algorithm to reconstructing images in sparse data problems that arise due to few-view and limited-angle scanning.
In Section 2, we describe the DT data model and present the details of the TV-minimization algorithm. In Section 3, we apply the algorithm to reconstructing DT images from few-view data and limited-angle scan data. We also demonstrate how algorithm performance is affected by varying numbers of views and data samples per view. Finally we assess the effects of some simplifying assumptions made in our analysis. Our conclusions are summarized in Section 4.
2. METHOD
In this section we present the TV-minimization algorithm following a brief description of the 2D classical DT data model. The results of this section are readily generalizable to 3D problems and other DT data models.
A. Data Model
In transmission DT imaging we seek to reconstruct an object from its measured scattered field. In practice, one reconstructs a discrete image of the continuous object function f(r⃗). Unless otherwise specified, the term “image” used in this study refers to the discrete reconstruction of the underlying object function. Here we specifically study ultrasound diffraction tomography, in which the refractive index function n(r⃗) is the quantity of interest. The object function f(r⃗), sometimes called the differential acoustical refractive index, is related to n(r⃗) as f(r⃗) = n²(r⃗) − 1.
Under the classical DT model we assume that the object is illuminated with monochromatic plane-wave radiation of frequency ν0 and that the transmitted data are measured along a straight line perpendicular to the direction of propagation of the incident wave. The quantity measured is the total amplitude of the wave field, which may be complex and, while it is generally referred to as the “scattered” field, includes both diffracted and undiffracted components. These data are obtained at various angles φ, each representing one “view” of the object.
A schematic representation of the imaging configuration is shown in Fig. 1. The object–detector configuration is arbitrary, the only requirement being that the line along which the wavefield amplitude is measured not overlap with the object support. A rotated coordinate system (ξ, η) is defined at angle φ to x and y for each view; ξ and η are related to polar coordinates (r, θ) as ξ = r cos(φ−θ) and η = r sin(φ−θ). The incident wavefield is denoted ui(ξ, φ) and propagates along the η axis, while the scattered wavefield is denoted us(ξ, φ) and is measured along the line η=ℓ. We take Us(νm, φ) to be the one-dimensional FT of us(ξ, φ) with respect to ξ:
Us(νm, φ) = ∫ us(ξ, φ) exp(−2πiνmξ) dξ.   (1)
Fig. 1.
(a) Classical DT scan configuration, with incident radiation along η axis and scattered field measured along η=ℓ. (b) From the FDPT, the 1D FT of the scattered data along η=ℓ equals the 2D FT of the object function along semicircle AOB with radius ν0.
Under the Born approximation of weak scattering, in which |us| ≪ |ui|, one can derive a relationship [1,5,11–14] between Us(ν, φ) and the object f(r⃗) as
Us(ν, φ) = (iπU0/ν′) exp(2πiν′ℓ) ∫ f(r⃗) exp[−2πi(νξ + (ν′ − ν0)η)] dr⃗,   (2)
where U0 is the amplitude of the incident plane wave, ν′ = √(ν0² − ν²), and the integral is taken over all of object space. The quantities in front of the integral in Eq. (2) are either known or measurable. We therefore focus only on the integral in Eq. (2), which we extract and denote M(ν, φ):
M(ν, φ) = ∫ f(r⃗) exp[−2πi(νξ + (ν′ − ν0)η)] dr⃗.   (3)
Closer inspection reveals that M(ν, φ) is actually the 2D Fourier transform of the object function f(r⃗) measured along a semicircle of radius ν0 oriented at angle φ, as depicted by semicircular arc AOB in the right-hand panel of Fig. 1. The combination of Eqs. (1)–(3) therefore gives us an analytic relationship between a measurable function us and the object function we wish to recover, f(r⃗), through their Fourier transforms. This is a statement of the FDPT: Under the Born approximation, the 1D FT of the scattered field us is simply related to the 2D FT of f(r⃗) measured along a semicircle of radius ν0. Equally spaced measurements along η = ℓ ultimately correspond to unequally spaced measurements of M(ν, φ), since points on the line represented by ν in Fourier space project perpendicularly onto semicircular arc AOB in Fig. 1 [6].
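To make the sampling geometry concrete, the following sketch (our own illustration; the function name and rotation convention are not from any reference implementation) computes the Fourier-plane coordinates of arc AOB for one view at angle φ, using ν′ = √(ν0² − ν²):

```python
import numpy as np

def arc_sample_points(nu0, phi, n_samples):
    """Fourier-plane coordinates of semicircular arc AOB for one view.

    For each detector frequency nu in [-nu0, nu0], the FDPT places the
    sample at (nu, nu' - nu0) in the view's rotated frame, where
    nu' = sqrt(nu0**2 - nu**2); the arc passes through the origin and
    has radius nu0. The view-frame point is then rotated by phi.
    """
    nu = np.linspace(-nu0, nu0, n_samples)
    nu_prime = np.sqrt(nu0**2 - nu**2)
    x = nu * np.cos(phi) - (nu_prime - nu0) * np.sin(phi)
    y = nu * np.sin(phi) + (nu_prime - nu0) * np.cos(phi)
    return x, y
```

At ν = 0 the arc passes through the origin of Fourier space, and every sample lies at distance ν0 from the arc's center, consistent with the semicircle described above.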
In practice the Fourier data are sampled at finite intervals, and we therefore adopt a discrete notation that we follow in the remainder of the paper. Discrete Fourier data samples are denoted
gk,l = M(νk, φl),   (4)
where the k and l subscripts indicate Fourier space values. Similarly, the reconstructed image of underlying object f(r⃗) is in a discrete form, which is denoted as f⃗ with elements fi,j, where the i and j subscripts indicate pixel values in the discrete image array. We take Eq. (4), which describes the discrete, unequally spaced measurements of M(νk, φl) along semicircular arc AOB, as our data model in the analysis below.
B. Image Reconstruction through TV Minimization
Methods have been investigated for reconstructing images in situations where the measured data are not theoretically sufficient to exactly reconstruct the image of an interrogated object. We focus here on methods that seek to minimize the ℓ1-norm of some sparse representation of the discrete trial image (e.g., [15]). A wide class of medical images, for instance, contain extended distributions that are relatively uniform over most regions and rapidly varying only in confined regions, such as the edges of organs. While such images are not sparse themselves, their gradient-magnitude images (GMIs) are approximately sparse.
The GMI of a discrete image is the ℓ2 norm of its discrete gradient:
|∇f⃗|i,j = √[(fi,j − fi−1,j)² + (fi,j − fi,j−1)²].   (5)
The objective function to be minimized is the ℓ1-norm of the GMI, also known as the total variation (TV) of the reconstructed image:
||f⃗||TV = ∑i,j |∇f⃗|i,j = ∑i,j √[(fi,j − fi−1,j)² + (fi,j − fi,j−1)²],   (6)
where i and j denote row and column indices.
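As a concrete illustration of Eqs. (5) and (6), the sketch below (our own code) evaluates the GMI and TV with backward differences, simply omitting difference terms that would reach outside the array (a boundary convention we adopt for illustration):

```python
import numpy as np

def gmi(f):
    """Gradient-magnitude image: the l2 norm of the discrete
    (backward-difference) gradient at each pixel, Eq. (5)."""
    di = np.zeros_like(f)
    dj = np.zeros_like(f)
    di[1:, :] = f[1:, :] - f[:-1, :]   # row (i) differences
    dj[:, 1:] = f[:, 1:] - f[:, :-1]   # column (j) differences
    return np.sqrt(di**2 + dj**2)

def tv(f):
    """Total variation: the l1 norm of the GMI, Eq. (6)."""
    return gmi(f).sum()
```

A piecewise-constant image has a sparse GMI: for a 4×4 image that steps from 0 to 1 at one column, only the four pixels on the step are nonzero in the GMI, and the TV equals 4.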
Minimizing the TV forms the foundation of our iterative method for image reconstruction from sparse data samples. This algorithm seeks to minimize the TV of the reconstructed image subject to the constraint that the FT of this image match the known FT data samples to within a specified ℓ2 distance [9,16].
Here we seek to reconstruct the discrete image f⃗ of the continuous object function f(r⃗), such that ||f⃗||TV is minimized while the discrete FT (DFT) of the reconstructed f⃗ matches the discretely measured object data—which is simply denoted as g⃗ with elements {gk,l}—to within a given threshold. For simplicity we assume an absorber-less object, meaning f⃗ is real. Additionally we assume indices of refraction n(r⃗) ≥ 1, a standard assumption in ultrasound. This corresponds to object function values f(r⃗) ≥ 0.
A detailed mathematical motivation for the TV-minimization algorithm and its optimization, characterizing it as a second-order cone problem that can be framed as the solution to a variational inequality, can be found in [17]. In short, the algorithm can be interpreted as an optimization problem which can be expressed, for the discrete data considered here, as
f⃗* = argmin ||f⃗||TV such that:
||DFT(f⃗) − g⃗||₂ ≤ ε,   f⃗ real,   fi,j ≥ 0 for all i, j,   (7)
where f⃗* denotes a solution and DFT is the discrete Fourier transform operator. The DFT operator yields the data estimates from the estimated image f⃗ for comparison with the relevant Fourier data samples g⃗. The inequality in the data constraint accounts for data inconsistency, and ε is a parameter that can be chosen, for example, according to the level of noise within the data and the total amount of Fourier data available. It represents the maximum allowable ℓ2 distance between the estimated and actual data. A solution to Eq. (7) is obtained by iteratively alternating two steps: The data consistency step, which enforces the three constraints in Eq. (7), and the gradient descent step, which reduces the TV of the trial reconstructed image. These steps are described in more detail below.
C. Algorithm
In the implementation of the TV-minimization algorithm, which has been applied successfully to the reconstruction problem in cone-beam CT [17], the gradient descent step distance is made to alternately overtake and fall behind the data consistency step distance. These distances are defined as the changes induced in the estimated image by the gradient descent and data consistency steps, respectively. Distances are calculated as the square root of the sum of differences between all pixels squared, before and after the relevant step. This process allows the data residual to gradually approach the maximum allowed ℓ2 distance ε while reducing the TV of the reconstructed image.
In fan-beam and cone-beam CT implementations, the algebraic reconstruction technique (ART) is used to project the current discrete estimate of the object f⃗ onto hyperplanes in the allowed solution space consistent with the measured data. This step, also known as projection onto convex sets (POCS), enforces consistency with the data. The POCS step distance is gradually reduced by scaling it with a steadily decreasing parameter β≤1; this limits the overall number of iterations and improves algorithm performance [17]. ART is used to enforce data consistency instead of the FT inversion method because divergent-beam CT has no central slice theorem to bring the projection data into the image’s Fourier space [9,10,17]. The FDPT, however, does bring the projection data into the Fourier space of the DT images, albeit along semicircles rather than straight lines. We therefore use the FT inversion technique for our data consistency step but with several key modifications to improve efficiency and better emulate acquisition of DT data.
The data consistency step consists of five substeps [9]: FT of the trial reconstructed image, copying of the measured samples into the estimated data to enforce data consistency, inverse FT to return to the reconstructed image space, enforcement of the condition that the reconstructed image must be real, and enforcement of the positivity condition. These five substeps enforce the constraints of Eq. (7), and together compose one POCS step in the terminology of [10] and [17]. For clarity we refer to it instead as the data-consistency step, but note that it includes enforcement of the positivity and real data conditions in addition to data consistency.
According to the FDPT, the data space in DT consists of discrete measurements of the FT of the object function along semicircular trajectories whose radii are determined by the frequency ν0 of the illuminating radiation [Eq. (3)]. However, Fourier inversion is most efficient when performed on a Cartesian grid, since fast Fourier transform (FFT) methods can then be used. We address the issue of interpolating data samples from semicircular trajectories onto a Cartesian grid by zero-padding the estimated image by, e.g., a factor of seven. The trial data in the algorithm therefore lie on a 2D Cartesian grid with 49 times as many elements as the final reconstructed image. We compensate for the increased number of unknowns by relaxing values outside the central image array after the inverse DFT; this is done with multiplication by a factor (1−β), where β is the same steadily decreasing parameter [17]. This parameter is also used in the data copy step; instead of directly copying the measured data into the estimated data, we copy in the quantity (1 − β)g̃k,l + βg̃k,l0, where g̃k,l is the current estimated data in each step, and g̃k,l0 is the measured data.
We then set any imaginary part of an image pixel to zero and enforce positivity by substituting zeros in pixels with negative values. We can then calculate the data consistency step distance by measuring the difference between the images before and after the data consistency step. Relaxation via the parameter β forces the data consistency step distance to decrease with increasing iterations. We find that this relaxation allows the algorithm to reach a solution rapidly.
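A simplified sketch of the five data-consistency substeps is given below (our own code; the zero-padding and the relaxation of values outside the central array are omitted for brevity, and the boolean `mask` marking measured Fourier-grid points is our simplification of the nearest-neighbor sampling pattern). The relaxed copy in substep 2 follows the (1 − β)g̃k,l + βg̃k,l0 rule described above:

```python
import numpy as np

def data_consistency_step(f, g_measured, mask, beta):
    """One data-consistency step enforcing the constraints of Eq. (7).

    `mask` flags the Fourier-grid points holding measured samples,
    `g_measured` holds those samples (in the same flattened order), and
    `beta` is the steadily decreasing relaxation parameter.
    """
    F = np.fft.fft2(f)                                    # 1. FT of trial image
    F[mask] = (1.0 - beta) * F[mask] + beta * g_measured  # 2. relaxed data copy
    f = np.fft.ifft2(F)                                   # 3. inverse FT
    f = f.real                                            # 4. realness constraint
    f[f < 0] = 0.0                                        # 5. positivity constraint
    return f
```

With a full sampling mask and β = 1 this step reproduces the underlying real, nonnegative image exactly, since all three constraints of Eq. (7) are then satisfied simultaneously.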
The gradient descent step is performed following the cone-beam CT treatment [17]. We calculate the derivative of the image TV with respect to each pixel, ∂||f⃗||TV/∂fi,j, and return it as an image. This image is then normalized, and a fraction of it is subtracted from the trial image to reduce TV. The gradient descent step distance is then calculated. The fraction subtracted is adaptively controlled so that the magnitude of the gradient descent step distance is comparable to the magnitude of the data consistency step distance, although they can act in different directions.
As described above, the data consistency step distance must decrease as the algorithm proceeds because β is progressively reduced. To achieve efficient algorithm performance, the fraction to be subtracted in the gradient descent step is reduced only when two conditions are met simultaneously: The gradient descent step distance is greater than some predetermined fraction of the data consistency step distance, and the data residual, defined as the difference between the measured and estimated data, is greater than ε from Eq. (7). The gradient descent step distance therefore alternately overtakes and falls behind the data consistency step distance, allowing the data residual to cross back and forth over the maximum allowed distance ε while reducing the TV of the reconstructed image.
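The gradient-descent step can be sketched as follows (our own code, assuming the backward-difference TV of Eq. (6) with a small smoothing constant to keep the derivative defined where the gradient vanishes; the control parameters `ratio` and `shrink` are illustrative values, as the predetermined fraction and its reduction schedule are not specified here):

```python
import numpy as np

def tv_gradient(f, eps=1e-8):
    """d||f||_TV / df_{i,j} for the backward-difference TV of Eq. (6)."""
    di = np.zeros_like(f); dj = np.zeros_like(f)
    di[1:, :] = f[1:, :] - f[:-1, :]
    dj[:, 1:] = f[:, 1:] - f[:, :-1]
    D = np.sqrt(di**2 + dj**2 + eps)
    g = (di + dj) / D                  # terms where f_{i,j} enters with + sign
    g[:-1, :] -= di[1:, :] / D[1:, :]  # contribution from pixel (i+1, j)
    g[:, :-1] -= dj[:, 1:] / D[:, 1:]  # contribution from pixel (i, j+1)
    return g

def gradient_descent_step(f, frac, dc_distance, residual, epsilon,
                          ratio=0.5, shrink=0.95):
    """One TV gradient-descent step with adaptive control of `frac`.

    The fraction is reduced only when the descent step distance exceeds
    `ratio` times the data-consistency step distance AND the data
    residual still exceeds epsilon, per the two conditions in the text.
    """
    grad = tv_gradient(f)
    norm = np.linalg.norm(grad)
    step = frac * grad / norm if norm > 0 else 0.0  # normalized descent step
    f_new = f - step
    gd_distance = np.linalg.norm(f_new - f)
    if gd_distance > ratio * dc_distance and residual > epsilon:
        frac *= shrink
    return f_new, frac
```

Because the gradient image is normalized, the descent step distance equals `frac`, which makes the alternation with the shrinking data-consistency step distance easy to monitor.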
3. NUMERICAL STUDIES
We have performed numerical studies to demonstrate and validate the TV algorithm. The true object in these studies, unless otherwise specified, is taken to be the Shepp–Logan phantom discretized on a 128×128 pixel grid. This phantom is shown in panel (a) of Fig. 2. Also included are vertical and horizontal profiles [panels (b) and (c), respectively] that are used to demonstrate results in the analysis below. The overlaid dotted and dashed lines in the phantom image of panel (a) correspond to these profiles.
Fig. 2.
(a) Original 128×128 high-contrast Shepp–Logan phantom. Grayscale range spans 0.15 to 0.7 in arbitrary intensity units. The dotted white vertical and dashed white horizontal lines indicate the slicing directions for profiles. (b) The dotted vertical profile corresponds to the dotted white line in panel (a). (c) The dashed horizontal profile corresponds to the dashed white line in panel (a).
The Shepp–Logan phantom itself is clearly not sparse. Its GMI, however, is sparse with only 1085 nonzero pixels, as illustrated in Fig. 3. This is only ~13% of the 8168 nonzero pixels of the discretized Shepp–Logan itself, and ~7% of the total number of pixels.
Fig. 3.
(a) Original 128×128 high-contrast Shepp–Logan phantom. Grayscale range spans 0.15 to 0.7 in arbitrary intensity units. (b) GMI of image in (a); the GMI is clearly sparse.
The DT data gk,l are generated by padding the phantom with zeros to a size of 896×896, then taking the discrete FT numerically to generate high-resolution data in Fourier space. These high-resolution data are then sampled discretely along semicircular trajectories. Nearest-neighbor interpolation is used to place the discrete data points on the semicircular sampling pattern. An analytic expression for the FT of a superposition of ellipses exists, and algorithm performance for analytically generated data is examined in Subsection 3.D.
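The data-generation procedure can be sketched as below (our own code; the fftshift convention, the centering of the phantom in the padded array, and the toy sizes in the test are our choices, with `nu0` expressed in frequency-grid units of the padded array):

```python
import numpy as np

def simulate_dt_data(phantom, nu0, angles, n_samples, pad_factor=7):
    """Simulated DT data: zero-pad the phantom by `pad_factor` (128 -> 896
    in the text), take the discrete FT to obtain high-resolution Fourier
    data, then sample along semicircular arcs of radius `nu0` (in
    frequency-grid units) by nearest-neighbor interpolation."""
    n = phantom.shape[0] * pad_factor
    padded = np.zeros((n, n))
    c = (n - phantom.shape[0]) // 2
    padded[c:c + phantom.shape[0], c:c + phantom.shape[1]] = phantom
    F = np.fft.fftshift(np.fft.fft2(padded))   # zero frequency at (n//2, n//2)
    samples = np.empty((len(angles), n_samples), dtype=complex)
    for a, phi in enumerate(angles):
        nu = np.linspace(-nu0, nu0, n_samples)       # detector frequencies
        nu_prime = np.sqrt(nu0**2 - nu**2)
        # Arc AOB coordinates for this view, rotated by phi
        kx = nu * np.cos(phi) - (nu_prime - nu0) * np.sin(phi)
        ky = nu * np.sin(phi) + (nu_prime - nu0) * np.cos(phi)
        # Nearest-neighbor interpolation onto the Cartesian Fourier grid
        ix = np.rint(kx).astype(int) + n // 2
        iy = np.rint(ky).astype(int) + n // 2
        samples[a] = F[iy, ix]
    return samples
```

The central sample of every view sits at the origin of Fourier space, so it always equals the zero-frequency component (the sum of the phantom's pixel values).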
In the analysis below we evaluate algorithm performance by varying three scan parameters: Number of views, number of samples per view, and angular range scanned. We apply the TV-minimization algorithm to both noiseless and noisy data, and also perform image reconstructions using the FBPP algorithm for comparison. In Subsection 3.A we fix the number of samples per view at 256 and use an angular range of a full 360°. The number of views is then varied (consequently varying the total amount of data), and image reconstruction is evaluated for several scenarios. This is done with both noiseless and noisy data, and FBPP is also used for comparison.
In Subsection 3.B the number of samples is again fixed at 256. The scan angle is varied and the number of views varies accordingly to maintain the same approximate density of views. Again, both noiseless and noisy data are used for the reconstruction, and FBPP is also used for comparison. In Subsection 3.C, all three scan parameters are varied to obtain accurate reconstructions from as little total data as possible using the TV-minimization algorithm. FBPP is also used for comparison.
A. Reconstruction from Few-View Data
Here we examine algorithm performance in the case of few-view data. The number of samples acquired by the receiving transducer is set to 256 for each view. The algorithm is then applied to image reconstruction for decreasing numbers of views. We emphasize that all examples presented in this subsection are for scans covering a full 360°.
The best possible algorithm performance can be assessed by considering image reconstruction from data with no added noise. To do so we use the sampling function shown in panel (a) of Fig. 4, which corresponds to 17 views and approximately 3900 unique data samples. In Fig. 5(a), we show the image reconstructed from the noiseless data by use of the TV-minimization algorithm; reconstruction from the FBPP algorithm is shown in Fig. 5(b). Figures 5(c) and 5(d) show vertical and horizontal profiles, respectively, in which the solid and dashed curves show the profiles reconstructed from the TV and FBPP algorithms. For comparison, we also display the true profiles (dotted curves). Both the reconstructed images and profiles in Fig. 5 show little observable difference, demonstrating that image reconstruction with the TV-minimization algorithm applied to few-view data is numerically exact when no noise is present.
Fig. 4.
(a) Data sampling function used for full 360° scan, showing 17 views. Scale is arbitrary for illustrative purposes. (b) Data sampling function used for 180° scan, 17 views.
Fig. 5.
Reconstruction results from noiseless few-view data using 17 views and 256 samples per view. (a) Image reconstructed using the TV algorithm. (b) Image reconstructed using the FBPP algorithm. (c) Vertical profile, where solid and dashed curves show reconstruction from TV and FBPP algorithms, respectively. For comparison, the true profile is displayed with a dotted curve. It is indistinguishable from the solid curve due to the near-exactness of the reconstruction. (d) Horizontal profile.
Although the FBPP algorithm can also yield visually reasonable reconstructions, its results are poorer than those of the TV-minimization algorithm; in fact, the FBPP reconstructed image has an RMS error of 0.06, a substantial fraction of the image amplitude.
While the evaluation described above is useful in assessing the best possible performance of the algorithm, data are inherently noisy, and the algorithm must be assessed in the presence of noise. We have therefore added Gaussian noise with zero mean and standard deviation equal to 0.05% of the magnitude of the central Fourier component to every data point in the finely sampled Fourier space. The reconstruction algorithm then proceeds as before.
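A sketch of the noise model follows (our own code; we assume the Fourier array is fftshifted so its central element is the central Fourier component, and we split the complex noise evenly between real and imaginary parts, a detail not specified above):

```python
import numpy as np

def add_fourier_noise(F, level=5e-4, seed=0):
    """Add zero-mean Gaussian noise, with standard deviation equal to
    `level` (0.05% by default) of the magnitude of the central Fourier
    component, to every sample of the finely sampled Fourier data."""
    rng = np.random.default_rng(seed)
    sigma = level * np.abs(F[F.shape[0] // 2, F.shape[1] // 2])
    noise = (rng.normal(0.0, sigma, F.shape)
             + 1j * rng.normal(0.0, sigma, F.shape))
    return F + noise
```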
In each row of Fig. 6 we show the reconstructed image (left), vertical profile (center, solid curve), and horizontal profile (right, solid curve) reconstructed with the TV-minimization algorithm from noisy data acquired at (a) 129, (b) 33, and (c) 17 views, evenly spaced over 360°. In these studies, 256 samples per view are considered unless otherwise specified. The dotted curves in Fig. 6 correspond to the image profiles obtained by simple inverse FT of the full noisy Fourier dataset, and are included for comparison.
Fig. 6.
(a) Reconstructed image (left), vertical profile (center, solid curve), and horizontal profile (right, solid curve) reconstructed with the TV algorithm from noisy data acquired at 129 views evenly spaced over 360°. The corresponding profile obtained from inverse FT of the full noisy Fourier dataset (dotted curve) is also shown. (b) Same for 33 views. (c) Same for 17 views.
The reconstructions from noisy data are not exact, but object features are still well recovered by the TV-minimization algorithm. The accuracy of the reconstructions is seen to deteriorate with the number of views. The total number of unique data samples is 27,000 for 129 views (3.3% of the available data), 7500 for 33 views (0.9% of the available data), and 3900 for 17 views (0.5% of the available data).
Closer inspection of the 129-view case shows that recovery of larger scale features, best demonstrated by the horizontal profile, is nearly exact and is much smoother than the underlying profile generated from inverse FFT of the full noisy Fourier dataset (dotted curves). This demonstrates the ability of the TV-minimization algorithm to reduce reconstruction artifacts. The smoother profile has lower TV; in fact the RMS noise level of the image reconstructed using the TV-minimization algorithm is 3×10−4, compared to an RMS noise level of 7×10−3 in the image reconstructed by inverse FT of the full noisy dataset. Object features are still recovered, however, because of the requirement that the estimated data match the measured data.
Smaller scale features, best demonstrated by the vertical profile, are also well recovered by the algorithm, although the profile shows that they are slightly over-smoothed in the 129-view case. This is likely due to our choice of ε in Eq. (7); a smaller ε would enforce stronger consistency with the data and less smoothing, although it generally leads to small-scale speckle artifacts that may be undesirable depending on the application. The choice of ε is therefore specific to the application, and in practice one could optimize a system by calculating the distance between a phantom object and its reconstruction for many choices of ε, choosing that which gave the least distance for a given system design and typical noise level. Such an analysis is beyond the scope of this paper, but is worthy of further investigation.
We are most interested in the case of very few views, better illustrated by the 33-view and 17-view reconstructions. Here the RMS noise of the reconstructed images is higher (7×10−3 and 8×10−3, respectively), owing to the much smaller amount of data available. However, all major features of the object are still recovered with no significant artifacts.
FBPP reconstructions for the same noise level considered above are shown in Fig. 7 for (a) 129, (b) 33, and (c) 17 views. Clearly the reconstructed images have a much higher level of artifacts with FBPP; in fact these reconstructions have RMS of 0.05, 0.1, and 0.2, an order of magnitude or more greater than the reconstructions with the TV-minimization algorithm. This demonstrates the substantial improvement in reconstructed image quality possible with the TV-minimization algorithm.
Fig. 7.
FBPP reconstructions with the same noise level as in Fig. 6 for (a) 129, (b) 33, and (c) 17 views.
B. Reconstruction from Limited-Angle Scans
As seen in panel (a) of Fig. 4, there is some redundancy in the sampling for a 360° scan, particularly near the center of the data space where many of the semicircles overlap. This redundancy can be used to perform image reconstruction from limited-angle data. It has been shown that exact image reconstruction is possible with the FBPP algorithm using a scan that covers 3π/2 radians [14]. However, the ability of the TV-minimization algorithm to handle few-view data may allow accurate reconstruction over an even smaller angular range.
We have added Gaussian noise to the data at the same level as in Subsection 3.A, and performed reconstructions for limited angle scans covering 270°, 180°, and 120° with 25 views, 17 views, and 11 views, respectively. This maintains an approximately constant view density consistent with the 33-view, 360° scan of Subsection 3.A. The sampling function for 17 views over 180° is shown in Fig. 4(b).
In each row of Fig. 8 we show the image reconstructed using the TV-minimization algorithm (left), corresponding horizontal profile (center, solid curve), and the FBPP reconstruction for the same number of views from noisy data acquired for (a) 25 views over 270°, (b) 17 views over 180°, and (c) 11 views over 120°. For comparison, we again display the corresponding profiles obtained from inverse FT of the full noisy Fourier dataset as dotted curves in the center panels.
Fig. 8.
(a) Reconstructed image (left) and horizontal profile (center, solid curve) reconstructed with the TV algorithm from noisy data acquired for 25 views over 270°. The corresponding profile obtained from inverse FT of the full noisy Fourier dataset (dotted curve) is also shown. At right is the FBPP reconstruction for the same noise level. (b) Same for 17 views over 180°. (c) Same for 11 views over 120°.
Comparison of Figs. 8 and 6 shows that reducing the scan angle to as little as 180° does not substantially affect the quality of the reconstruction, but has the great benefit of reducing the required scan time by as much as a factor of 2. However, for scan angles smaller than ~180° there is insufficient coverage in Fourier space to reconstruct the full object. Inspection of the FBPP reconstructed images in Fig. 8 shows that, as with the few-view case in Subsection 3.A, these images have a higher level of artifacts than the corresponding TV-minimization algorithm reconstructions, making small-scale structures difficult to distinguish.
C. Reconstruction from Highly Sparse Data
We have also performed tests using a series of different scan configurations, trading off between decreasing the number of views and increasing the number of samples per view, in an effort to demonstrate accurate image reconstruction from extremely sparse data. Results of some typical cases are shown in Fig. 9. In rows (a) and (b) of Fig. 9 we show the image reconstructed with the TV-minimization algorithm (left), the corresponding horizontal profile (center, solid curve), and the FBPP reconstruction (right) for the same number of views for noisy data acquired at (a) 30 views with 125 samples per view and (b) 20 views with 175 samples per view, evenly spaced over 360°. We use the same noise level as in Subsection 3.A. We also display the corresponding profiles obtained from inverse FT of the full noisy Fourier dataset in the center panels (dotted curves). The total number of unique samples used for (a) and (b) is 3300 and 3100, respectively.
Fig. 9.
(a) Reconstructed image (left) and horizontal profile (center, solid curve) reconstructed with the TV algorithm from noisy data acquired at 30 views with 125 samples per view, evenly spaced over 360°. The corresponding profile obtained from inverse FT of the full noisy Fourier dataset (dotted curve) is also shown. At right is the FBPP reconstruction for the same noise level. (b) Same for 20 views with 175 samples per view. (c) Same for 15 views and 175 samples per view over 180°.
Row (c) of Fig. 9 shows the image reconstructed with the TV-minimization algorithm (left), the corresponding horizontal profile (center, solid curve) and the FBPP reconstruction (right) for 15 views and 175 samples per view over 180° for the same noise level used above. The corresponding profile from the inverse FT of the full noisy Fourier dataset is also shown for comparison in the middle panel (dotted curves). The tradeoff between samples and views for such sparse data has limitations. For instance using 60 or more views entails using 75 or fewer samples to get approximately 3000 samples for the reconstruction. However, sampling becomes too sparse at large radii in the Fourier plane, leading to a high level of artifacts even when noiseless data are used. Similarly, using fewer than 20 views leads to undersampling of the Fourier components even for large numbers of samples. The result is oversmoothing of the final reconstructed image.
The contrast between the TV-minimization and FBPP algorithms is particularly evident in Fig. 9, as the FBPP algorithm generally requires dense sampling of the data space and sampling here is especially sparse. Comparison of image RMS error demonstrates the difference quantitatively: for 30 views with 125 samples, the RMS error is 0.005 for TV and 0.3 for FBPP; for 20 views with 175 samples, 0.03 for TV and 0.3 for FBPP; and for 15 views with 175 samples over 180°, 0.007 for TV and 0.2 for FBPP.
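The RMS figures quoted above can be computed as follows, assuming “RMS” denotes the root-mean-square difference between the reconstruction and the true phantom (our reading):

```python
import numpy as np

def rms_error(recon, truth):
    """Root-mean-square difference between a reconstruction and the truth."""
    diff = np.asarray(recon) - np.asarray(truth)
    return float(np.sqrt(np.mean(diff ** 2)))
```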
D. Additional Tests
Here we summarize several additional tests in our analysis. We first test the algorithm on an additional phantom to ensure that the Shepp–Logan does not have fortuitous properties to which the algorithm is uniquely suited. The main requirement of the algorithm is simply that the underlying object function have a reasonably sparse GMI, and this is satisfied by the random superposition of ellipses shown in Fig. 10(a). In Fig. 10(b), we show results of reconstruction using 30 views and 125 samples per view, corresponding to the minimum data scan of Subsection 3.C. The reconstruction is virtually exact using the TV-minimization algorithm.
Fig. 10.
(a) Additional phantom used in testing which is a random superposition of ellipses. Grayscale ranges from 0 to 1.2. (b) Reconstruction of the phantom in (a) from noiseless data.
We also note that the same noise model was used throughout our analysis, with Gaussian noise added to the underlying Fourier data in the same way each time. The robustness of the algorithm to noise can be tested by varying the noise properties of the data; for instance, we change the seed in the random noise generator to obtain a different noise realization and show the resulting reconstruction from 30 views, 125 samples in Fig. 11(a). Comparison of this panel with the top row of Fig. 9 demonstrates that the quality of the reconstructed image is not strongly sensitive to the detailed distribution of data noise.
Fig. 11.
(a) Reconstruction with 30 views, 125 samples using a different noise realization. (b) Reconstruction using data generated from the analytic FT of an ellipse rather than the discrete FT.
Finally, as previously noted, the simulated data used in this analysis are generated by padding the phantom with zeros and then taking the numerical discrete FT to generate high-resolution data in Fourier space. In practice, however, one samples the true Fourier transform of the object, not this discrete approximation. We therefore repeat the analysis for the case of 30 views and 125 samples by generating data samples from the analytic Fourier transform of a superposition of ellipses [6]. The resulting analytic data must be filtered to avoid ringing artifacts associated with the sharp sampling cutoff in Fourier space; we use a modified Hanning filter for this purpose.
The resulting reconstruction is shown in Fig. 11(b). Features of the object are slightly smoothed relative to the discrete reconstructions because the filtering attenuates data at the highest frequencies in Fourier space. However, object features are still recovered with few artifacts and a low noise level, particularly compared to FBPP. A detailed analysis with analytic data and finer tuning of the data filter are beyond the scope of this paper.
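The analytic 2D Fourier transform of a uniform ellipse is a scaled first-order Bessel function, so analytic data samples can be evaluated at arbitrary Fourier-plane points without any discrete FT. A sketch of that evaluation, together with a Hanning-type roll-off of the kind mentioned above (the exact filter used in the paper may differ), assuming an ellipse centered at the origin:

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def ellipse_ft(u, v, a=0.3, b=0.2, amp=1.0):
    """Analytic 2D FT of a uniform ellipse with semi-axes a, b and
    amplitude amp, centered at the origin, sampled at frequencies (u, v).
    The zero-frequency limit is the ellipse 'mass', amp * pi * a * b."""
    m = np.hypot(a * np.asarray(u), b * np.asarray(v))
    safe_m = np.where(m > 0, m, 1.0)  # avoid division by zero at the origin
    return np.where(m > 0, amp * a * b * j1(2 * np.pi * m) / safe_m,
                    amp * np.pi * a * b)

def hanning_window(k, k_max):
    """Hanning-type roll-off: unity at k = 0, zero at the cutoff k_max,
    and zero beyond it, suppressing the sharp sampling cutoff."""
    w = 0.5 * (1.0 + np.cos(np.pi * np.asarray(k) / k_max))
    return np.where(np.abs(k) <= k_max, w, 0.0)
```

Multiplying the analytic samples by such a window before reconstruction trades some high-frequency resolution for suppression of ringing, consistent with the mild smoothing seen in Fig. 11(b).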
4. DISCUSSION AND CONCLUSION
We have developed and applied an algorithm based on minimization of the reconstructed image TV for accurate image reconstruction in DT. The algorithm is effective for a range of few-view and limited-angle situations, with quality of the reconstructed images superior to that of the FBPP algorithm, particularly for very sparse data sampling. The few-view and limited-angle cases are of practical significance, as reducing the number of views along with the total scan angle can greatly reduce required scan times. Furthermore, the TV-minimization algorithm should be broadly applicable, as it requires only that the underlying object function have a sparse GMI. This is a reasonable assumption for many medical applications in particular, as medical images are often relatively uniform over most regions, such as organs, with rapid variation confined to the boundaries between such regions. The presence of noise means that GMI sparseness will hold only approximately and that some level of inconsistency will always be present in the data. However, the algorithm has been shown to be reasonably robust to the effects of noise. Future studies include extension of our analysis to 3D imaging, as both the Fourier transform and the gradient descent are well defined in 3D. Additional evaluation of the algorithm in 2D and comparison with additional DT reconstruction algorithms are clearly worthwhile but are beyond the scope of this paper.
Acknowledgments
The authors thank M. Anastasio for helpful discussions. E. Y. Sidky was supported by National Institutes of Health (NIH) grant K01 EB003913. This work was also supported in part by NIH grants R01 EB00225 and CA120540 and a UC-Argonne Research Grant.
References
1. Slaney M, Kak AC. Diffraction tomography. In: Devaney AJ, editor. Inverse Optics. Vol. 14. SPIE; 1983. pp. 2–19.
2. Andre MP, Martin PJ, Otto GP, Olson LK, Barrett TK, Spivey BA, Palmer DA. A new consideration of diffraction computed tomography for breast imaging: Studies in phantoms and patients. Acoust Imaging. 1995;21:379–390.
3. Devaney A. Geophysical diffraction tomography. IEEE Trans Geosci Remote Sens. 1984;22:3–13.
4. Kunitsyn VE, Andreeva ES, Tereschenko ED, Khudukon BZ, Nygren T. Investigations of the ionosphere by satellite radiotomography. Int J Imaging Syst Technol. 1994;5:112–127.
5. Mueller R, Kaveh M, Wade G. Reconstructive tomography and applications to ultrasonics. Proc IEEE. 1979;67:567–587.
6. Pan SX, Kak AC. A computational study of reconstruction algorithms for diffraction tomography: Interpolation versus filtered backpropagation. IEEE Trans Acoust, Speech, Signal Process. 1983;31:1262–1275.
7. Devaney A. A filtered backpropagation algorithm for diffraction tomography. Ultrason Imaging. 1982;4:336–350. doi: 10.1177/016173468200400404.
8. Guo P, Devaney AJ. Comparison of reconstruction algorithms for optical diffraction tomography. J Opt Soc Am A. 2005;22:2338–2347. doi: 10.1364/josaa.22.002338.
9. Candes E, Romberg J, Tao T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory. 2006;52:489–509.
10. Sidky EY, Kao C, Pan X. Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT. J X-Ray Sci Technol. 2006;14:1–21.
11. Wolf E. Three-dimensional structure determination of semi-transparent objects from holographic data. Opt Commun. 1969;1:153–156.
12. Pan X. Unified reconstruction theory for diffraction tomography, with consideration of noise control. J Opt Soc Am A. 1998;15:2312–2326. doi: 10.1364/josaa.15.002312.
13. Kak AC, Slaney M. Principles of Computerized Tomographic Imaging. SIAM; 2001.
14. Pan X, Anastasio MA. Minimal-scan filtered backpropagation algorithms for diffraction tomography. J Opt Soc Am A. 1999;16:2896–2903. doi: 10.1364/josaa.16.002896.
15. Li MH, Yang HQ, Kudo H. An accurate iterative reconstruction algorithm for sparse objects: Application to 3D blood vessel reconstruction from a limited number of projections. Phys Med Biol. 2002;47:2599–2609. doi: 10.1088/0031-9155/47/15/303.
16. Candes E, Tao T. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans Inf Theory. 2006;52:5406–5425.
17. Sidky EY, Pan X. "Image reconstruction in circular cone-beam computed tomography by total variation minimization" (manuscript in preparation). University of Chicago, Department of Radiology, 5841 S. Maryland Ave. MC2026, Chicago, Illinois 60637, USA.