Abstract
Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms.
Keywords: level-set method, X-ray computed tomography, image reconstruction, two-phase flow, ultrafast
1. Introduction
Computed tomography (CT) is a useful imaging tool in medicine, non-destructive testing and many scientific areas. The basic principle of collecting projection data from different views to reconstruct a non-superimposed cross-sectional image can be realized with different signal carriers, such as light, electric current or ionizing radiation. From the theory of tomographic image reconstruction, projections need to be captured in all directions within the imaging plane. However, this condition cannot be fulfilled in every application. Reasons for limited projection data include restricted accessibility of objects of investigation (e.g. large engine parts, extended planar objects like walls), technical limitations of the measurement system (e.g. large electrodes required for electrical tomography methods) or temporal limitations, if the object does not remain static during the measurement. For this variety of extended tomography problems, specially adapted image reconstruction methods are required, which are able to cope with the reduced projection data and still provide acceptable reconstruction results.
In the application considered here, an ultrafast X-ray CT technique [1] is applied to two-phase flow measurement. Although the measurement interval must be very short to capture the dynamic changes within the flow, the temporal aspect is not the primary reason for data limitation in this case. Instead, a fixed set-up consisting of a detector arc and an elongated target, on which a moving X-ray source is generated by a deflected electron beam, is the basis of the ultrafast CT technique. For a complete tomographic projection dataset, the detector arc and X-ray source path on the target must overlap in a certain angular range. Technical realization is thus possible in two ways: either the X-ray source path and detector arc are in different planes [2] or parts of the projection dataset are missing. This article deals with the latter case, in which the angular range of projections is limited, while the uncertainty in the axial direction is minimized. It is also relevant for extending the measurement principle to two-plane [3] and multi-plane (three-dimensional CT) [4] approaches, in which the axial offset between the detector arc and the X-ray source path is no longer acceptable due to the extended target geometry. Different approaches [5,6] have shown that it is difficult, if not impossible, to create a reconstruction algorithm that is generally applicable to such limited-angle problems, in the sense of suppressing the artefacts that appear when standard reconstruction methods are applied. Only a reduction of the artefacts by optimizing the geometry is feasible [7]. Instead, some kind of a priori information about the object of investigation has to be available to fill the gap in missing projection data. In [8], for example, the problem of reconstructing circular objects has been solved on the basis of the Hough transform.
For two-phase flows, such as gas–liquid [9] or gas–solid flows [10], knowledge about the two distinct phases—each with constant attenuation coefficient—can be very useful, because the problem is reduced to recovering binary distributions. However, the way of incorporating this a priori knowledge is not trivial, as it fundamentally changes the nature of the tomography problem. Here we propose a reconstruction technique based on the level-set method for solving this special kind of problem. The paper is organized as follows. In the succeeding sections, the problem is described in connection with the targeted application. Subsequently, the principle of the level-set method (§4) and the developed level-set reconstruction (LSR) algorithm (§5) are described. Finally, the performance of the algorithm is demonstrated using simulated and measured projection data.
2. Measurement technique and problem description
The ultrafast X-ray CT system considered here consists of a moving X-ray source based on electron beam deflection and a semicircular detector arc. As shown in figure 1, the electron beam generated in the electron gun is guided towards a linear heavy-metal target and focused onto the X-ray-emitting focal spot. By deflecting the electron beam along the target, a moving X-ray source spot is generated, which gives projections of the object of investigation from different angular views. The intensities of the transmitted X-ray radiation are measured by the static detector arc, which comprises 256 room-temperature semiconductor detector elements.
Figure 1.
(a) Principle and (b) geometry of limited-angle electron beam X-ray CT. Different projection angles are achieved within this static set-up by generating a moving X-ray source spot on the elongated target.
This kind of set-up, with the X-ray source path and the detector elements lying within one plane, provides the best possible axial resolution by keeping the axial uncertainty as small as possible. However, as the angular projection range is limited in this set-up, the resulting projection dataset is incomplete, which leads to typical artefacts in the reconstructed images, as shown in figure 2. The exact reconstruction of an object in the space domain requires measured samples of projections in the Radon domain for the complete angular range of 180° and all displacements covering the maximum diameter of the object of investigation (figure 2a). Missing values in the Radon domain, as in the current set-up, result in artefacts, as shown in figure 2b, which get worse with growing distance from the X-ray source. The available angular projection range for a certain point P=(x,y) within the object can be maximized by increasing the length of the target and by reducing the distance between this point and the X-ray target. Consequently, this measurement technique is best applicable to smaller-diameter objects.
Figure 2.

Influence of missing projections in the Radon domain on the reconstructed images. Comparison of complete projection dataset (a) with a limited-angle dataset (b).
3. Algebraic image reconstruction algorithms
Tomographic image reconstruction is based on recovering a parameter distribution μ={μj}, with j∈J and J being the set of pixel indices, from a set of projections p={pk}, with k∈K and K being the set of projection indices, gathered from different angular views. In our case, μ corresponds to the distribution of the linear X-ray attenuation coefficient and p contains the measured X-ray attenuation values. One possible way to describe this problem is the algebraic formulation, which assumes a constant parameter value in each pixel of the reconstructed image. This assumption leads to a linear equation system of the form
\[ A\,\mu = p \tag{3.1} \]
in which A is a matrix comprising the geometric mapping of projection lines and pixels. The inverse problem, i.e. the problem of reconstruction, consists of finding the inverse or pseudo-inverse matrix A−1 or A+, respectively, which is not trivial due to the dimension of A. In general, the algebraic reconstruction technique (ART) [11] is most favoured to solve the algebraic reconstruction problem. This iterative technique is based on updating μ according to the two-step formula
\[ \tilde{\mu}^{(i+1)} = \mu^{(i)} + \lambda\,\frac{p_{k(i)} - a_{k(i)}\cdot\mu^{(i)}}{a_{k(i)}\cdot a_{k(i)}}\;a_{k(i)}, \qquad \mu_j^{(i+1)} = \max\bigl(\tilde{\mu}_j^{(i+1)},\,0\bigr) \tag{3.2} \]
wherein λ is a relaxation factor with a value within [0, 2] that steers convergence. To achieve better convergence behaviour, the mapping of the projection indices k onto the minor iteration index i is usually chosen randomly, with the restriction that each k is used exactly once per major iteration. The restriction to positive values of μj ensures physically meaningful values of the linear attenuation coefficient and does not affect convergence.
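As a concrete illustration, one major sweep of the ART update in equation (3.2) can be sketched in a few lines of Python. This is a minimal sketch assuming a dense system matrix A stored as a NumPy array; the function name and the dense representation are our illustrative choices, not part of the original implementation:

```python
import numpy as np

def art_sweep(A, p, mu, lam=0.01, rng=None):
    """One major ART iteration (eq. (3.2)): every projection line is visited
    exactly once, in random order, with a Kaczmarz-type row update followed
    by the positivity constraint. A is the K x J system matrix, p the
    measured projections and mu the current image as a flat vector."""
    rng = np.random.default_rng() if rng is None else rng
    for k in rng.permutation(A.shape[0]):   # random ray order per major iteration
        a_k = A[k]
        norm = a_k @ a_k
        if norm == 0.0:                     # ray does not intersect the image
            continue
        mu = mu + lam * (p[k] - a_k @ mu) / norm * a_k
        mu = np.maximum(mu, 0.0)            # restrict to positive values
    return mu
```

For a consistent, well-determined system and λ close to 1, repeated sweeps approach the solution of equation (3.1); in the limited-angle case the system is under-determined and the iterate converges to one of many consistent images.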
For the problem of recovering a binary distribution, we derived an alternative reconstruction algorithm, which we refer to as binary ART. It adopts the basic principle of ART, with the difference that a binary version μ#(i) of μ(i) is used to calculate the forward projection ak⋅μ#(i) in the correction term of equation (3.2). The binary distribution μ#(i) comprises only the attenuation coefficient values of the two involved phases, while μ(i) can take any positive value. Thus, μ#(i) is derived from μ(i) by thresholding at the mean of the two attenuation coefficients. If one phase is a gas, the problem simplifies further in X-ray tomography, because the attenuation coefficient of the gas is negligible. Thus, only the attenuation coefficient of the liquid μL and zero are permitted values for the elements of μ#(i), and the threshold is set to μL/2. One iteration of this bimodal approach then consists of three steps:
\[ \mu_j^{\#(i)} = \begin{cases} \mu_\mathrm{L}, & \mu_j^{(i)} \ge \mu_\mathrm{L}/2 \\ 0, & \text{otherwise,} \end{cases} \qquad \tilde{\mu}^{(i+1)} = \mu^{(i)} + \lambda\,\frac{p_{k(i)} - a_{k(i)}\cdot\mu^{\#(i)}}{a_{k(i)}\cdot a_{k(i)}}\;a_{k(i)}, \qquad \mu_j^{(i+1)} = \max\bigl(\tilde{\mu}_j^{(i+1)},\,0\bigr) \tag{3.3} \]
The resulting binary distribution matches the measured projection data within the range of the measured noise and the degree of under-determination of the problem.
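The three steps above can be sketched analogously to the ART sweep. Again a hedged Python illustration with our own naming: only the forward projection is binarized, while the additive update acts on the continuous image:

```python
import numpy as np

def binary_art_sweep(A, p, mu, mu_L, lam=0.01, rng=None):
    """One major iteration of the binary ART variant (eq. (3.3)): the forward
    projection in the correction term uses the thresholded image mu#, while
    the additive update is applied to the continuous image mu."""
    rng = np.random.default_rng() if rng is None else rng
    for k in rng.permutation(A.shape[0]):
        a_k = A[k]
        norm = a_k @ a_k
        if norm == 0.0:
            continue
        mu_bin = np.where(mu >= mu_L / 2, mu_L, 0.0)        # step 1: threshold at mu_L/2
        mu = mu + lam * (p[k] - a_k @ mu_bin) / norm * a_k  # step 2: ART-type update
        mu = np.maximum(mu, 0.0)                            # step 3: positivity
    return np.where(mu >= mu_L / 2, mu_L, 0.0)              # binary image after the sweep
```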
4. Basic principle of the level-set method
The level-set method is a numerical method that was originally designed to describe interface motions [17] in order to simulate boundary movements in fluid mechanics and related fields. The main difference compared to Lagrangian methods lies in the implicit formulation in terms of Eulerian partial differential equations. The basic principle of the level-set method is that the interface is defined as the zero level S of an underlying level-set function φ(x,y,t), which is defined on a two-dimensional Cartesian grid (x,y) as a function of time t (figure 3). Besides the easier computation, this implicit formulation has the advantage of coping with arbitrary topologies, which are not known a priori. Even topology changes, such as splitting or merging of regions, are accomplished without further assumptions. By definition, regions with φ<0 belong to the ‘inside’ region and regions with φ>0 to the ‘outside’ region. Movements of the interface are only considered in the normal direction. The equation of motion is therefore defined as
\[ \frac{\partial \varphi}{\partial t} + F\,\lvert\nabla\varphi\rvert = 0 \tag{4.1} \]
wherein F is a force function that determines the type of motion of the interface. It can be arbitrarily defined for different kinds of applications [12,13].
Figure 3.

Definition of an interface and its motion (a) within the context of the level-set approach as the zero level of the underlying continuous function φ (b).
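As a minimal numerical illustration of this equation of motion, the following sketch evolves a circular interface under a constant normal speed F. The conventions assumed here are φ<0 inside, so F>0 expands the inside region; central differences via `np.gradient` suffice for this smooth toy case, whereas the real algorithm uses the upwind scheme of §5:

```python
import numpy as np

# Toy evolution of eq. (4.1) for a circle under a constant normal speed F.
# Assumed convention: phi < 0 inside, so F > 0 moves the interface outwards.
n, h, dt, F = 101, 0.02, 0.005, 1.0
x = (np.arange(n) - n // 2) * h
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.4        # signed distance to a circle of radius 0.4

for _ in range(40):
    gx, gy = np.gradient(phi, h)
    phi = phi - dt * F * np.sqrt(gx**2 + gy**2)   # phi_t = -F * |grad phi|

# After t = 40*dt = 0.2 the zero level sits near r = 0.4 + F*t = 0.6
```

A spatially varying F, as introduced in §5, steers the interface towards consistency with the projection data instead of moving it uniformly.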
5. The level-set reconstruction algorithm
(a). Basic algorithm
The ART is a standard algorithm that does not account for the limited-angle problem. The binary ART algorithm is quite straightforward in the attempt to force a binary distribution but still insufficiently considers the characteristics of the problem. In order to find a more sophisticated approach, we employ the level-set method to solve the limited-angle problem. Therein, the binary distribution μ#(i) is defined by the sign of the discrete representation of the level-set function φ as
\[ \mu_j^{\#(i)} = \begin{cases} \mu_\mathrm{L}, & \varphi_j^{(i)} < 0 \\ 0, & \varphi_j^{(i)} \ge 0 \end{cases} \tag{5.1} \]
The main task is to find a suitable definition of the force function to solve the tomographic reconstruction problem and to consider the main characteristics of the object to find an adequate solution.
To fulfil equation (3.1), we derived a discrete force function term F(i) similar to the ART iteration formula (equation (3.2)) as
\[ F_j^{(i)} = \sum_{k\in K} \frac{a_{kj}\,\bigl(p_k - a_k\cdot\mu^{\#(i)}\bigr)}{a_k\cdot a_k} \tag{5.2} \]
The main difference is that the deviations of all projection lines are summed, in contrast to the ray-by-ray approach of ART. A similar approach has been proposed by Elangovan & Whitaker [14], albeit without the ART-specific weighting of the projections.
Besides this, a second force term K(i) is introduced, which provides some degree of smoothing of the interface. This is reasonable for two-phase flow imaging, as the phase boundaries are of limited curvature. The force therefore acts against high values of the curvature κ in the form of
\[ K^{(i)} = -\varepsilon\,\kappa^{(i)} \tag{5.3} \]
The degree of smoothing is steered by the weighting parameter ε.
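Both force terms can be sketched as follows. This assumes the ART-weighted residual form for the data term (positive where the measurements call for more attenuating material) and K=−εκ for the smoothing term; all names are our own illustrative choices:

```python
import numpy as np

def data_force(A, p, mu_bin):
    """Projection-matching force term: ART-weighted projection residuals
    summed over all rays, assumed positive where the measurements demand
    more attenuating material."""
    res = p - A @ mu_bin                     # residual per projection line
    norms = np.einsum("kj,kj->k", A, A)      # a_k . a_k for every ray
    norms[norms == 0] = np.inf               # ignore rays that miss the image
    return A.T @ (res / norms)

def curvature_force(phi, h, eps):
    """Curvature smoothing term K = -eps*kappa; kappa is the level-set
    curvature computed from first and second central differences."""
    gx, gy = np.gradient(phi, h)
    gxx = np.gradient(gx, h, axis=0)
    gxy = np.gradient(gx, h, axis=1)
    gyy = np.gradient(gy, h, axis=1)
    denom = (gx**2 + gy**2) ** 1.5 + 1e-12   # avoid division by zero
    kappa = (gxx * gy**2 - 2 * gx * gy * gxy + gyy * gx**2) / denom
    return -eps * kappa
```

For a circle of radius r, the curvature of the zero level set is 1/r, so the smoothing force shrinks small, strongly curved features while leaving large smooth ones nearly untouched.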
(b). Numerical implementation
The level-set function φ(x,y,t) is transferred into a discrete function φ̄(i) with elements according to

\[ \bar{\varphi}_{m,n}^{(i)} = \varphi(m\,\Delta x,\; n\,\Delta y,\; i\,\Delta t) \tag{5.4} \]

wherein Δx, Δy and Δt are the discretization steps in x, y and t, respectively, and m∈M, n∈N and i∈I are the respective indices. The same discretization applies for the force functions F and K. The discretization in space is chosen according to the discretization of the projection data in the real measurement set-up as Δx=Δy=0.5 mm. The conversion to the vector representation of μ#(i) is achieved simply by re-sorting the grid indices (m,n) lexicographically into the single pixel index j.
For the numerical implementation of the differential operator ∂f/∂a of a given function fk=f(k⋅Δa), there are in general three options:
\[ \frac{\partial f}{\partial a} \approx \frac{f_{k+1} - f_k}{\Delta a}, \tag{5.5} \]
\[ \frac{\partial f}{\partial a} \approx \frac{f_k - f_{k-1}}{\Delta a} \tag{5.6} \]
\[ \frac{\partial f}{\partial a} \approx \frac{f_{k+1} - f_{k-1}}{2\,\Delta a}, \tag{5.7} \]
which are named forward, backward and central difference, respectively.
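In NumPy, the three stencils can be written compactly for periodic samples; a quick sketch (`np.roll` implements the index shifts k+1 and k−1 with wrap-around):

```python
import numpy as np

# Difference stencils (5.5)-(5.7) for periodic samples f_k = f(k * da).
def d_forward(f, da):
    return (np.roll(f, -1) - f) / da        # (f_{k+1} - f_k) / da

def d_backward(f, da):
    return (f - np.roll(f, 1)) / da         # (f_k - f_{k-1}) / da

def d_central(f, da):
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * da)
```

The one-sided stencils are first-order accurate, the central stencil second-order accurate, which is easily verified on a sampled sine wave.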
For the differential operator in time (∂φ/∂t), the forward difference
\[ \frac{\partial \varphi}{\partial t} \approx \frac{\bar{\varphi}^{(i+1)} - \bar{\varphi}^{(i)}}{\Delta t} \tag{5.8} \]
is applied, because the dependence is one-directional. In the spatial domain, one has to consider the direction of the information flow, which needs to be in accordance with numerics and physics. Therefore, the following scheme is applied for ∇φ in combination with the respective force function terms:
\[ \bar{\varphi}_{m,n}^{(i+1)} = \bar{\varphi}_{m,n}^{(i)} - \Delta t\,\Bigl[\max\bigl(F_{m,n}^{(i)} + K_{m,n}^{(i)},\,0\bigr)\,\nabla^{+} + \min\bigl(F_{m,n}^{(i)} + K_{m,n}^{(i)},\,0\bigr)\,\nabla^{-}\Bigr] \tag{5.9} \]
with
\[ \nabla^{+} = \Bigl[\max\bigl(D^{-x},0\bigr)^2 + \min\bigl(D^{+x},0\bigr)^2 + \max\bigl(D^{-y},0\bigr)^2 + \min\bigl(D^{+y},0\bigr)^2\Bigr]^{1/2} \tag{5.10} \]
and
\[ \nabla^{-} = \Bigl[\max\bigl(D^{+x},0\bigr)^2 + \min\bigl(D^{-x},0\bigr)^2 + \max\bigl(D^{+y},0\bigr)^2 + \min\bigl(D^{-y},0\bigr)^2\Bigr]^{1/2} \tag{5.11} \]

wherein D+x, D−x, D+y and D−y denote the forward and backward differences of φ̄(i) in x and y. This follows the Engquist–Osher scheme proposed in [15] to fulfil the entropy condition at discontinuous parts of the level-set curve.
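A compact sketch of one such upwind time step follows. This is the standard scheme with our own naming, not the authors' implementation; `np.roll` wraps at the boundary, which is acceptable away from it:

```python
import numpy as np

def upwind_step(phi, speed, h, dt):
    """One explicit time step of an upwind (Engquist-Osher style)
    discretization of phi_t + F*|grad phi| = 0: the one-sided gradient
    approximations nabla+ and nabla- are selected according to the sign
    of the local speed (F + K in the LSR context)."""
    dmx = (phi - np.roll(phi, 1, axis=0)) / h    # backward difference D^-x
    dpx = (np.roll(phi, -1, axis=0) - phi) / h   # forward difference  D^+x
    dmy = (phi - np.roll(phi, 1, axis=1)) / h    # backward difference D^-y
    dpy = (np.roll(phi, -1, axis=1) - phi) / h   # forward difference  D^+y
    grad_plus = np.sqrt(np.maximum(dmx, 0)**2 + np.minimum(dpx, 0)**2 +
                        np.maximum(dmy, 0)**2 + np.minimum(dpy, 0)**2)
    grad_minus = np.sqrt(np.maximum(dpx, 0)**2 + np.minimum(dmx, 0)**2 +
                         np.maximum(dpy, 0)**2 + np.minimum(dmy, 0)**2)
    return phi - dt * (np.maximum(speed, 0) * grad_plus +
                       np.minimum(speed, 0) * grad_minus)
```

The switching between the two one-sided gradients is what distinguishes this scheme from a naive central-difference update: information is always taken from the upwind side, which keeps sharp corners of the level-set curve stable.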
The chosen discretization is sufficient for mapping the expected two-phase flow structures onto the level-set function as well as for evolving the resulting interface. However, a simple binarization of φ̄(i) for the determination of μ#(i) causes inadequately high discretization errors during the forward projection, especially for smaller or irregular structures. A higher accuracy is achieved by allowing values in the interval [0, 1] for the normalized pixel values μj#(i)/μL near the interface. The procedure to determine μ#(i) from the current level-set function φ̄(i) is as follows. In the first step, sampling points of the zero level S are determined in each row as

\[ x_S = \left(m + \frac{\bar{\varphi}_{m,n}^{(i)}}{\bar{\varphi}_{m,n}^{(i)} - \bar{\varphi}_{m+1,n}^{(i)}}\right)\Delta x \tag{5.12} \]

at sign changes between neighbouring elements, and in each column as

\[ y_S = \left(n + \frac{\bar{\varphi}_{m,n}^{(i)}}{\bar{\varphi}_{m,n}^{(i)} - \bar{\varphi}_{m,n+1}^{(i)}}\right)\Delta y \tag{5.13} \]

by linear interpolation between the discrete elements of φ̄(i). In the second step, the sampling points are connected by a nearest-neighbour approach, which also considers the direction of the level-set curve with respect to the sign of the level-set function. The connections between the sampling points of S are modelled as straight lines. In the third step, the value of μj#(i) is determined as

\[ \mu_j^{\#(i)} = \mu_\mathrm{L}\,\alpha_j^{(i)} \tag{5.14} \]

with αj(i) being the area fraction of pixel j which lies within the zero level-set curve S(i).
(c). Initialization and re-initialization
The initial level-set function φ̄(0) is normally defined as a signed distance function of an initial level-set curve S(0), as the signed distance function provides a constant gradient distribution over the whole domain and thus homogeneous conditions for the evolution of the curve. It has been found that the shape of S(0) has no influence on the final reconstruction result. However, the number of iterations to reach this result can be reduced if the initial curve topography is similar to the final one. Therefore, the result of one iteration of the ART algorithm is used here to extract the initial curve.
During the evolution of the level-set function, the initial condition of a homogeneous gradient distribution is decreasingly satisfied. To prevent the evolution from getting stuck, different remedies are known. Besides the well-known re-initialization [15] of the level-set function, different approaches have been proposed, for example, in [16], which circumvent the additional computational effort of the re-initialization by supplementing certain conditioning terms in the level-set equation (equation (4.1)). As the extraction of the level-set curve is already part of the proposed algorithm, the re-initialization with a new signed distance function on the basis of the extracted curve is not much of a drawback and is therefore applied in this approach.
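A common way to implement such a re-initialization is via Euclidean distance transforms. The sketch below uses `scipy.ndimage.distance_transform_edt` and only approximates the curve-based procedure described above (it is accurate to about one grid cell, consistent with the small zero-level shift mentioned in the text):

```python
import numpy as np
from scipy import ndimage

def reinitialize(phi):
    """Re-initialization sketch: rebuild phi as a signed distance function
    of its current sign pattern. For each pixel, the distance to the
    nearest pixel of the opposite region is computed; the result is
    positive outside (phi > 0) and negative inside (phi < 0), with a
    near-unit gradient everywhere."""
    inside = phi < 0
    d_out = ndimage.distance_transform_edt(~inside)  # distance for outside pixels
    d_in = ndimage.distance_transform_edt(inside)    # distance for inside pixels
    return d_out - d_in                              # signed distance in pixel units
```

Multiplying the result by the grid spacing converts it to physical units if required.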
(d). Practical algorithm
Figure 4 shows the procedure of the LSR algorithm using the example of a generic material distribution. In the initialization step, one global iteration of the ART algorithm (equation (3.2)) with subsequent binarization at μL/2 is applied to gain μ#(0) from the projection data p, using a zero matrix as the initial distribution for μ. The initial level-set curve S(0) is extracted from μ#(0) using marker points along the interface, equivalently to the method described above. The initial level-set function φ̄(0) is calculated as the signed distance function of S(0). Each iteration step then includes the update of the level-set function φ̄(i) according to equation (5.9), the extraction of the level-set curve S(i+1) and the calculation of the new object distribution μ#(i+1) as described above (equation (5.14)).
Figure 4.

Illustration of the LSR algorithm. The result of one ART iteration (a) is used to define the initial zero level set S(0) (b), from which φ̄(0) is calculated as a signed distance function (c). Within each iteration, φ̄(i) is updated (c), S(i) is extracted (d) and μ#(i) with a continuous interface is determined (e).
The algorithm includes a number of free parameters, which can be used to steer the convergence behaviour of the reconstruction and, to some degree, the reconstruction results. These parameters are the weighting factor ε for the curvature-dependent smoothing, the time step Δt, the frequency of re-initialization frein and the number of iterations Nit. While ε influences the shape of the final interface and should therefore be chosen in accordance with the physical shape properties of the expected distribution, the other parameters do not alter the final distribution, as long as the algorithm is stable. They can be optimized with respect to computation time and stability based on the structure to be reconstructed and the quality of the available projection data.
6. Validation of the level-set reconstruction algorithm
For the validation of the algorithm, different defined distributions μ* have been created, two of which are presented here (figure 5). The first one (phantom 1) is a software phantom, which has been used to simulate projection datasets with and without superposition of noise. The second distribution (phantom 2) has been manufactured as a three-dimensional PMMA structure, from which a projection dataset has been measured by the ultrafast X-ray CT system (as described above) during a (nearly) free-fall experiment. From the time-resolved dataset of the whole structure, only one slice is considered here. The limited-angle measurements and the simulation thereof have been performed according to the geometrical properties in figure 1b, with the orientation of the phantoms being as plotted in this paper.
Figure 5.

Test distributions for the validation of the algorithm in the form of (a) a software phantom (phantom 1) and (b–d) a hardware phantom (phantom 2): (b) structure, (c) photograph of the structured slice, (d) photograph of the three-dimensional phantom. The hole marked with a cross in (c) is filled with a locating pin in the three-dimensional structure and does not appear as an air bubble in the measurements. (Online version in colour.)
In order to quantify the reconstruction result, a binary error measure is defined as
\[ e_\mathrm{bin}^{(i)} = \frac{1}{|J|}\sum_{j\in J} \frac{\bigl|\mu_j^{\#(i)} - \mu_j^{*}\bigr|}{\mu_\mathrm{L}} \tag{5.15} \]
It is applicable to the results of all three reconstruction algorithms considered here, provided that a threshold of μL/2 has been used subsequently to binarize the ART reconstructions.
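Since both the binarized reconstruction and the reference take only the values 0 and μL, the binary deviation amounts to the fraction of misclassified pixels. A small sketch under that assumption (naming is ours):

```python
import numpy as np

def binary_error(mu_rec, mu_true, mu_L):
    """Binary deviation: the fraction of pixels whose phase assignment
    differs from the reference distribution, after thresholding both
    images at mu_L / 2."""
    rec_bin = np.asarray(mu_rec) >= mu_L / 2
    true_bin = np.asarray(mu_true) >= mu_L / 2
    return float(np.mean(rec_bin != true_bin))
```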
In a first simulation experiment, a noise-free projection dataset of phantom 1 has been computed, which served as input for all three reconstruction algorithms. As their roles in the iterative algorithms are equivalent, both the time step in the LSR algorithm and the relaxation factor in the ART reconstruction have been set to Δt=λ=0.01 for better comparison. Re-initialization within the LSR algorithm has been performed every 10 iterations (frein=0.1). After each iteration, the binary deviation ebin(i) has been determined for all three reconstruction algorithms. The results of the experiment are presented in figure 6.
Figure 6.

Comparison of the reconstruction algorithms for a simulated noise-free dataset of phantom 1. Convergence behaviour during 500 iterations (a), ART reconstruction with Nit=100 (b), ART reconstruction with Nit=500 (c), binary ART reconstruction with Nit=100 (d), binary ART reconstruction with Nit=500 (e), LSR reconstruction with Nit=100 (f) and LSR reconstruction with Nit=500 (g).
The reconstructed images in figure 6b–g illustrate the different performance of the three algorithms. While the ART algorithm suffers from the missing projection data and shows the typical limited-angle artefacts in the images, the binary ART algorithm is able to recover the true structure in general, but generates speckle noise due to the bimodal approach. The LSR algorithm, however, is able to cope with both, which results in a nearly complete reconstruction of the original distribution without artefacts or artificial noise. The only drawback of LSR is the undefined structures appearing at the intermediate level (figure 6f), which might be misinterpreted if the reconstruction is stopped too early. The diagram in figure 6a confirms the higher remaining error of ART and binary ART after 500 iterations. It also shows that none of the algorithms tends to diverge at a certain point in this noise-free case. There are also some differences in the convergence behaviour of the three algorithms. Binary ART converges very rapidly compared with the others and shows some irregular fluctuations. LSR produces regular fluctuations in the binary deviation, which are associated with the re-initialization. The re-initialization itself produces a small error increase, as the position of the zero level set is shifted slightly during the procedure. However, owing to the equalization of the gradients, a large decrease in the binary error measure is achieved in the subsequent iteration. ART converges very smoothly, but ends up with the highest deviation.
In a second simulation experiment, a projection dataset of phantom 1 with Poisson-distributed noise has been subjected to reconstruction. Again, Δt=λ=0.01 and frein=0.1 have been applied, and the binary deviation ebin(i) has been determined after each iteration for all three reconstruction algorithms. The results of the experiment are presented in figure 7.
Figure 7.

Comparison of the reconstruction algorithms for a simulated dataset of phantom 1 with Poisson-distributed noise. Convergence behaviour during 500/800 iterations (a), ART reconstruction with Nit=22 (b), ART reconstruction with Nit=200 (c), binary ART reconstruction with Nit=3 (d), binary ART reconstruction with Nit=500 (e), LSR reconstruction with Nit=100 (f) and LSR reconstruction with Nit=800 (g).
As can be seen in the diagram in figure 7a, ART and binary ART converge towards an optimal reconstruction and diverge afterwards due to the noise within the projection dataset. Figure 7b,d shows the reconstructions at this optimal point. Unfortunately, this optimal iteration number for ART and binary ART, at which the binary deviation is lowest, depends on the unknown distribution and is therefore difficult to determine in practical applications. While the ART reconstruction becomes increasingly worse during further iterations, the binary deviation of binary ART remains constant at a level slightly above the optimum. Still, binary ART is no longer able to cope with the missing data in an adequate manner and shows some limited-angle artefacts in addition to the speckle noise. Only the LSR seems to be little affected by the noise level and reveals the true structure at a very low level of the binary deviation, although it requires somewhat more iterations.
The experiments with the PMMA structure (phantom 2) enable the validation of the LSR algorithm under realistic noise characteristics. Figure 8a shows the original structure of the phantom and figure 8b–d the reconstructions with the three different reconstruction algorithms at their respective optimal numbers of iterations and relaxation factors/time steps. Although limited-angle artefacts are less severe in this structure, the LSR algorithm is still superior to the other algorithms with respect to noise reduction and its preference for smooth boundaries. The binary deviation is nearly the same for ART and LSR and higher than in the simulations, which might be due to manufacturing and positioning tolerances.
Figure 8.

Comparison of the reconstruction algorithms for a measured dataset of phantom 2. Original distribution (a), ART reconstruction with Nit=5 and λ=0.1 (b), binary ART reconstruction with Nit=3 and λ=0.01 (c) and LSR reconstruction with Nit=40 and Δt=0.01 (d).
Finally, it has been investigated how the weighting parameter ε influences the reconstruction result for the differently sized structures in phantom 2. To this end, ε has been varied from 0 to 20 and the binary deviation has been determined. Figure 9 shows the progress of the binary deviation for all cases as well as the reconstructed images after 40 iterations at Δt=0.01. As can be seen from the diagram, there is an optimum between ε=5 and ε=10. A close look at the images reveals that, at ε=10, some structures are already missing that were still captured up to ε=5. Although there seems to be no qualitative difference in the images between ε=0 and ε=5, the quantitative results suggest that a regulated contribution of the curvature flow has a positive influence on the reconstruction result. As the value of ε correlates with the highest curvature value allowed in the image, it is feasible to determine a suitable value in practical applications.
Figure 9.

Influence of the curvature weighting parameter ε on the LSR of the measured dataset of phantom 2. Convergence behaviour during 40 iterations (a), original distribution (b), and LSR reconstruction with ε=0 (c), ε=2 (d), ε=5 (e), ε=10 (f) and ε=20 (g).
In order to demonstrate the application of the LSR to real two-phase flows, a water–air two-phase flow has been produced within an experimental flow loop with an inner diameter of 50 mm, and measurements with ultrafast X-ray CT have been performed. The reconstruction results of a sample slice and three-dimensional representations of the reconstructed dataset are depicted in figure 10. They clearly show that the main advantages mentioned in the context of the validation experiments also apply in the two-phase flow experiment. The LSR algorithm is able to suppress the limited-angle artefacts as well as the noise from the projection data and reveals a structure that most probably corresponds to the true one.
Figure 10.

Comparison of the reconstruction algorithms for a measured dataset of a water–air two-phase flow. ART reconstruction with Nit=5 and λ=0.1 (a), ART reconstruction with Nit=100 and λ=0.1 (b), binary ART reconstruction with Nit=10 and λ=0.01 (c), LSR reconstruction with Nit=40, Δt=0.01 and ε=0 (d), LSR reconstruction with Nit=40, Δt=0.01 and ε=10 (e), three-dimensional plot of 200 frames (80 ms) (f), and three-dimensional plot of 1000 frames (400 ms) (g). (Online version in colour.)
7. Conclusion
For the reconstruction of binary distributions from ultrafast limited-angle X-ray CT, a new reconstruction algorithm based on the level-set method has been proposed. In validation experiments with simulated and measured projection datasets of known structures, the performance and the advantages of this LSR in comparison to the well-known ART algorithm and a binary derivative of it have been demonstrated. The influence of the tunable parameters of the algorithm (the weighting factor ε for the curvature-dependent smoothing, the time step Δt, the frequency of re-initialization frein and the number of iterations Nit) has been analysed. It turned out that ε directly limits the maximum curvature that is allowed in the reconstructed images and can therefore be chosen adequately for the respective application. The other parameters influence the stability and convergence behaviour of LSR, although with relatively small sensitivity. Thus, assured convergence is achieved with a low Δt and a high Nit, but reasonable results are also obtained with a higher Δt and a lower Nit. As a positive effect arising from the convergence behaviour of LSR, it could be observed that no divergence takes place during the reconstruction. Thus, there is no need to find an optimal stopping criterion, as is the case for ART and binary ART. Instead, the reconstruction can be run until no significant changes occur any more. Re-initialization of the level-set function is useful to speed up convergence; a frequency of frein=0.1 turned out to be reasonable.
Altogether, one can conclude that LSR converges towards a binary distribution whose forward projections match the measured projection data. Besides the phase assignment being directly integrated into the reconstruction process, a further advantage of the LSR is its capability to provide smooth boundaries and thus to prevent noise in the projection data from excessively reducing the image quality. Most advantageous for this limited-angle application is the potential of this approach to effectively suppress typical limited-angle artefacts. The lack of information from the inaccessible viewing angles is compensated by the a priori information about the attenuation coefficients of the involved phases.
Data accessibility
Original reconstructed datasets are available in Zenodo at .
Funding statement
This work was supported by Deutsche Forschungsgemeinschaft (DFG) grant no. HA 3088/3-1.
References
- 1. Bieberle M, et al. 2007. Ultrafast limited-angle-type X-ray tomography. Appl. Phys. Lett. 91, 123516. (doi:10.1063/1.2787879)
- 2. Fischer F, Hampel U. 2010. Ultra fast electron beam X-ray computed tomography for two-phase flow measurement. Nucl. Eng. Des. 240, 2254–2259. (doi:10.1016/j.nucengdes.2009.11.016)
- 3. Bieberle M, Schleicher E, Fischer F, Koch D, Menz H-J, Mayer H-G, Hampel U. 2010. Dual-plane ultrafast limited-angle electron beam X-ray tomography. Flow Meas. Instrum. 21, 233–239. (doi:10.1016/j.flowmeasinst.2009.12.001)
- 4. Bieberle M, Barthel F, Menz H-J, Mayer H-G, Hampel U. 2011. Ultrafast three-dimensional X-ray computed tomography. Appl. Phys. Lett. 98, 034101. (doi:10.1063/1.3534806)
- 5. Prince JL, Willsky AS. 1990. Constrained sinogram restoration for limited-angle tomography. Opt. Eng. 29, 535–544. (doi:10.1117/12.55622)
- 6. Andersen AH. 1989. Algebraic reconstruction in CT from limited views. IEEE Trans. Med. Imaging 8, 50–55. (doi:10.1109/42.20361)
- 7. Bieberle M, Schleicher E, Hampel U. 2008. Simulation study on electron beam X-ray CT arrangements for two-phase flow measurements. Meas. Sci. Technol. 19, 094003. (doi:10.1088/0957-0233/19/9/094003)
- 8. Vega-Alvarado V, Cordova MS, Taboada B, Galindo E, Corkidi G. 2004. Online Sauter diameter measurement of air bubbles and oil droplets in stirred bioreactors by using Hough transform. In Image analysis and recognition. (doi:10.1007/978-3-540-30126-4_101)
- 9. Bieberle M, Fischer F, Schleicher E, Koch D, Menz H-J, Mayer H-G, Hampel U. 2009. Experimental two-phase flow measurement using ultra fast limited-angle-type electron beam X-ray computed tomography. Exp. Fluids 47, 369–378. (doi:10.1007/s00348-009-0617-6)
- 10. Bieberle M, Fischer F, Schleicher E, Menz H-J, Mayer H-G, Hampel U. 2010. Ultrafast cross-sectional imaging of gas-particle flow in a fluidized bed. AIChE J. 56, 2221–2225. (doi:10.1002/aic.12121)
- 11. Gordon R, Bender R, Herman GT. 1970. Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography. J. Theor. Biol. 29, 471–481. (doi:10.1016/0022-5193(70)90109-8)
- 12. Taylor SH, Garimella SV. 2014. Level-set shape reconstruction of binary permittivity distributions using near-field focusing capacitance measurements. Meas. Sci. Technol. 25, 105602. (doi:10.1088/0957-0233/25/10/105602)
- 13. Calderer A, Kang S, Sotiropoulos F. 2014. Level set immersed boundary method for coupled simulation of air/water interaction with complex floating structures. J. Comput. Phys. 277, 201–227. (doi:10.1016/j.jcp.2014.08.010)
- 14. Elangovan V, Whitaker RT. 2001. From sinograms to surfaces: a direct approach to the segmentation of tomographic data. In Medical image computing and computer-assisted intervention (MICCAI 2001). (doi:10.1007/3-540-45468-3_26)
- 15. Sethian JA. 1999. Level set methods and fast marching methods, 2nd edn. Cambridge, UK: Cambridge University Press.
- 16. Li C, Xu C. 2010. Distance regularized level set evolution and its application to image segmentation. IEEE Trans. Image Process. 19, 3243–3254. (doi:10.1109/TIP.2010.2069690)
- 17. Osher S, Sethian JA. 1988. Fronts propagating with curvature dependent speed: algorithms based on Hamilton–Jacobi formulations. J. Comput. Phys. 79, 12–49. (doi:10.1016/0021-9991(88)90002-2)
- 18. Dorn O, Miller EL, Rappaport C. 2000. A shape reconstruction method for electromagnetic tomography using adjoint fields and level sets. Inverse Probl. 16, 1119–1156. (doi:10.1088/0266-5611/16/5/303)