Abstract
Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM = 0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p < 0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE = 0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p < 0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is applicable to situations where conventional calibration is not feasible, such as complex non-circular CBCT orbits and systems with irreproducible source-detector trajectory.
Keywords: C-arm, cone-beam CT, geometric calibration, image-guided interventions, 3D-2D image registration, image quality, task-driven imaging
1. Introduction
To reconstruct a 3D image from its 2D projections in cone-beam computed tomography (CBCT), the geometric parameters relating 3D voxel coordinates to 2D pixel locations on the detector must be known accurately for each projection. The parameters characterizing the geometric relationship of the x-ray source and detector constitute a geometric calibration for the imaging system, and errors in calibration give rise to image artifacts such as blur, distortion, and streaks. Systems for CBCT in image-guided interventions (IGI) tend to include open gantries with fairly adaptable source-detector orbits (c.f. closed ring gantries in CT) and are often mobile and mechanically less rigid. While there are several means by which nominal circular orbits can be reliably calibrated for such systems (Navab et al 1998, Noo et al 2000, Cho et al 2005, Li et al 2010), it may be impractical to calibrate all anticipated orbits, and non-circular orbits may defy conventional calibration approaches. Moreover, geometric errors can arise from out-of-date calibration in which system geometry changes over time (Daly et al 2008) or from irreproducibility in the orbit—for example, vibration during C-arm motion (Dennerlein and Jerebko 2012). Some scenarios may also concern imaging configurations for which the geometry is simply unknown.
Robotic C-arms such as the Artis Zeego (Siemens Healthcare, Forchheim, Germany) are capable of fairly general orbits that can intentionally depart from a circular orbit. Such capability provides acquisition modes that increase field of view (Herbst et al 2014) and/or improve image quality (e.g. reduction of cone-beam artifacts as in Noo et al (1998), Pack et al (2004), and Pearson et al (2010)). Additionally, recent work points to ‘task-driven’ image acquisition approaches (Stayman and Siewerdsen 2013) that customize the source-detector orbit based on the individual patient anatomy and imaging task. Such approaches raise a challenge for geometric calibration due to the patient-specific nature of the orbit and the inability to anticipate all possible trajectories that the system might undertake.
The geometry of a point x-ray source and a flat, rigid detector can be defined by 9 degrees of freedom (DoF) that describe the source and detector position for each projection view, forming a projection matrix that maps the 3D image reconstruction voxels to the 2D projection image pixels (Rougée et al 1993). Various methods have been proposed to measure the geometric parameters associated with these DoF, largely separated into two categories: offline and online calibration. Offline methods perform a calibration of the system (before the CBCT scan is acquired) using various phantoms typically consisting of a known arrangement of radiopaque markers. Using the measured locations of the markers within the projection images and a knowledge of the marker configuration, a geometric calibration of the imaging system can be obtained (Navab et al 1998, Noo et al 2000, von Smekal et al 2004, Cho et al 2005, Yang et al 2006, Mennessier et al 2009, Li et al 2010, Ford et al 2011, Hu et al 2011, Li et al 2011). CBCT reconstruction proceeds under the assumption that the system geometry is precisely reproduced in subsequent scans and such methods are common for most CBCT imaging systems. However, these calibrations can become out-of-date (‘aging’ of the calibration as the system undergoes gradual mechanical change) and do not account for irreproducibility in the orbit. For CBCT systems in clinical use (e.g. in image-guided radiotherapy (Jaffray et al 2002), interventional radiology (Fahrig et al 2006), and surgery (Zhang et al 2009)), a fairly high degree of geometric reproducibility is required (and commonly achieved), and offline geometric calibration is the norm, with periodic quality assurance by updating calibrations as required through repeat calibration.
Online calibration methods, on the other hand, compute the system geometry from the scan projection data directly by exploiting knowledge of the object being imaged. Some online methods take advantage of data redundancy in 2D projection images (Panetta et al 2008, Patel et al 2009, Meng et al 2013) while others operate by enforcing desired characteristics within the 3D image reconstruction by iterative optimization—such as image entropy minimization or sharpness maximization (Kyriakou et al 2008, Vidal-Migallón et al 2008, Kingston et al 2011). Such methods have demonstrated the ability to solve the source-detector geometry accurately for uncalibrated systems, and there is ongoing research concerning performance of various objective functions (e.g. entropy, sharpness, and combinations thereof). Such iterative algorithms can involve fairly long computation time for 3D image reconstruction that may not be compatible with clinical workflow (e.g. in image-guided surgery).
In the work described below, we propose an online geometric calibration method that registers the 2D projection data to a previously acquired 3D image of the subject, providing a ‘self-calibration’ of the system. The 3D–2D registration process solves for the affine transformation representing the system geometry for each projection. The registration is rigid but incorporates a similarity metric that has been previously shown to be fairly robust against realistic deformation (Otake et al 2013) and includes means for masking of deformed regions. Initial target applications include cranial neurosurgery, neurovascular interventions, and orthopaedic trauma surgery, where rigid bony structures driving the registration are consistent with the rigid transformation model. This method allows calibration of arbitrary source-detector orbits, since it assumes a fairly general 9 DoF system geometry (alternatively a 6 DoF approximation, as investigated below), and it accommodates irreproducibility in the scan orbit, since it derives the system geometry from the projection data for each acquisition. The self-calibration algorithm does not require the use of fiducial markers, and by using dense image-based measurements for registration (c.f. a sparse array of fiducials) has potentially higher accuracy. The method is also less computationally intense in comparison to iterative image reconstruction methods.
The sections below detail the proposed method for self-calibration, assess registration performance, and evaluate the resulting CBCT image quality in comparison to conventional offline reference calibration. The method was tested on an experimental CBCT bench using a simple cylindrical phantom and an anthropomorphic head phantom. The algorithm was also applied to data acquired using a robotic C-arm (Artis Zeego) to validate performance on a clinically realistic system. Finally, application of the method to non-circular orbits was tested on the CBCT test bench. Clinical applications of the method are discussed, including the capability to improve reconstruction accuracy for presumably well-calibrated systems, provide a sentinel alert on degradation of geometric calibration, enable geometric calibration for non-circular orbits and ‘task-driven’ imaging scenarios, and provide a basis for patient motion correction.
2. Method for self-calibration
2.1. Overview of the method: self-calibration for CBCT
In IGI, a high quality CT image of the patient is commonly acquired prior to the procedure for diagnostic or planning purposes. Furthermore, during IGI, a series of CBCT images may be acquired—one at the beginning of the case, followed by CBCT acquisitions at particular milestones during, or at the conclusion of, the procedure. In these scenarios, the patient-specific 3D image can be registered to the 2D projection data acquired in subsequent CBCT acquisitions. Similar scenarios have been described for prior-image-based 3D image reconstruction to improve image quality and/or reduce radiation dose (Chen et al 2008, Stayman et al 2013, Dang et al 2014). For 3D–2D registration, a projection matrix (PM) characterizing the system geometry is required for the forward projection of the 3D volume to create a 2D digitally reconstructed radiograph (DRR) to be registered to a 2D projection. The PM can be decomposed in terms of the 9 DoF describing the source position (Ts) and detector position (Td) and rotation (Rd), where Ts = [Ts,x, Ts,y, Ts,z]^T, Td = [Td,x, Td,y, Td,z]^T and Rd = [Rd,x, Rd,y, Rd,z]^T as shown in figure 1(a). A simplifying assumption is that the source position, Ts, is fixed with respect to the detector, reducing the system geometry to 6 DoF. It is possible to determine the system geometry for each projection by solving for these 6 or 9 DoF using 3D–2D registration, and repeating the registration for all projections yields a geometric calibration of the system that can be used for 3D image reconstruction. Figure 2 provides a flowchart for the self-calibration method: for each projection, the registration is initialized, performed via 3D–2D registration, and the result is checked for outliers. Once a system geometry is found for all projections, a 3D volume is reconstructed—for example, by filtered backprojection (FBP) for simple circular orbits or by model-based image reconstruction (MBIR) for non-circular trajectories.
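As an illustration of the overall flow in figure 2, a minimal sketch of the self-calibration loop is given below (Python; the helper functions predict_next_pose, register_3d2d, and check_outlier are hypothetical placeholders for the steps detailed in sections 2.2–2.5):

# Sketch of the self-calibration loop (figure 2). The helper functions are
# placeholders for the initialization, prediction, 3D-2D registration, and
# outlier-check steps described in sections 2.2-2.5.
def self_calibrate(ct_volume, projections, pm_nominal):
    pms = []
    for i, proj in enumerate(projections):
        if i == 0:
            pm_init = pm_nominal                                  # coarse estimate from nominal geometry (section 2.2)
        elif i == 1:
            pm_init = pms[0]                                      # initialize from the first registered view
        else:
            pm_init = predict_next_pose(pms[i - 2], pms[i - 1])   # linear extrapolation (section 2.3)
        pm_i = register_3d2d(ct_volume, proj, pm_init)            # NGI similarity + CMA-ES search (section 2.4)
        pm_i = check_outlier(pm_i, pms, pm_nominal)               # magnification-based check (section 2.5)
        pms.append(pm_i)
    return pms   # per-view projection matrices for FBP or MBIR reconstruction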
Figure 1.
CBCT system geometry and coordinate frames. (a) The detector position relative to the CT volume coordinate system is described by 6 DoF in translation (Td) and rotation (Rd). The source position relative to the detector is positioned by 3 DoF in translation (Ts). (b) Initialization of the ith registration (for views i = 3, …, N) by linear extrapolation of the previous (i − 1) and (i − 2) registrations.
Figure 2.
Flowchart for the self-calibration process. The system geometry for each 2D projection in a CBCT acquisition is determined by registering it to a previously acquired 3D image using a robust 3D–2D registration algorithm. The i th registration is initialized by a simple predictor based on previous registrations. Outliers are detected in results that violate constraints on the smoothness of the orbit or other known characteristics of system geometry (e.g. abrupt change or spurious values of magnification). Registration of all projection views provides the geometric calibration required for 3D image reconstruction.
2.2. Initialization
A PM is required to initialize the registration of each projection, PMinit. For initialization of the first (i = 1) projection, we use a coarse estimation of the projection matrix based on nominal parameters of the system geometry. Specifically, Td,z and Ts,z are initialized according to the object-detector distance and detector-source distance, respectively. The orientation of the i = 1 projection with respect to the patient (e.g. posterior–anterior, anterior–posterior, left-lateral, or right-lateral) could be simply obtained from information available in the image header data on patient and image orientation. As a brute force check on the initial i = 1 orientation, we changed the initial rotational values in Rd by increments of 90° about the 3 cardinal axes to account for all possible orientations, registered each of the 24 permutations (called PMest in figure 2) and selected whichever yielded maximum similarity as PM1. The second (i = 2) view was initialized simply using PM1 from registration of the first projection. For projections i = 3, …, N, the registration is initialized as illustrated in figure 1(b) using a predicted projection matrix, PMpredict, computed using the geometries of the previous two views.
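For illustration, the 24 candidate orientations for the first view (90° increments about the three cardinal axes) can be enumerated as in the following sketch (Python with SciPy; the function name is illustrative):

import itertools
import numpy as np
from scipy.spatial.transform import Rotation as R

# Enumerate the 24 distinct orientations reachable by rotating the initial estimate
# in 90 degree increments about the three cardinal axes (section 2.2). Entries of
# each rotation matrix are exactly 0 or +/-1, so duplicates among the 4 x 4 x 4
# Euler-angle combinations can be removed by integer comparison.
def cardinal_orientations():
    unique_mats, rotations = [], []
    for ax, ay, az in itertools.product(range(4), repeat=3):
        r = R.from_euler('xyz', [90 * ax, 90 * ay, 90 * az], degrees=True)
        m = np.rint(r.as_matrix()).astype(int)
        if not any(np.array_equal(m, u) for u in unique_mats):
            unique_mats.append(m)
            rotations.append(r)
    return rotations

assert len(cardinal_orientations()) == 24   # the rotation group of the cube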
2.3. Predicting the next view (i > 2)
To initialize views i = 3, …, N, as illustrated in figure 1(b), a prediction estimates the position of the detector as it moves around the object and is used to compose PMpredict. This prediction is a linear extrapolation in the 6 DoF describing detector position and rotation, (Td, Rd). The three DoF describing the source position (Ts) are not extrapolated as it is not expected that the source should move significantly with respect to the detector. The prediction is formed based on the geometries of the previous two views by solving the transformation from (Td, Rd)i−2 to (Td, Rd)i−1, where i is the current view:
$T_{d_{i-2}}^{d_{i-1}} = T_{d_{i-1}} \left( T_{d_{i-2}} \right)^{-1}$  (1)
The transform $T_{d_i}$ indicates the homogeneous transformation from 3D volume coordinates to 3D detector coordinates for the ith view (composed of the rotation Rd,i and translation Td,i), and the transform $T_{d_{i-2}}^{d_{i-1}}$ indicates the homogeneous transformation from 3D detector coordinates for the (i − 2)th view to those of the (i − 1)th view. This transformation is then applied to (Td, Rd)i−1 to obtain a prediction for (Td, Rd)i:
$T_{d_i}^{\mathrm{predict}} = T_{d_{i-2}}^{d_{i-1}} \, T_{d_{i-1}}$  (2)
which is then taken as initialization for registering the ith view.
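A minimal sketch of this extrapolation is given below, with each detector pose represented as a 4 × 4 homogeneous matrix mapping volume coordinates to detector coordinates as in equations (1) and (2); the function name and matrix convention are assumptions for illustration:

import numpy as np

# Linear extrapolation of the detector pose (equations (1) and (2)): the homogeneous
# transform taking the (i-2)th detector frame to the (i-1)th frame is applied again
# to the (i-1)th pose to predict the ith pose. The source offset Ts is carried over
# unchanged.
def predict_next_pose(T_d_im2, T_d_im1):
    step = T_d_im1 @ np.linalg.inv(T_d_im2)   # equation (1)
    return step @ T_d_im1                     # equation (2)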
2.4. 3D–2D image registration
The 3D–2D registration method central to the self-calibration method is based on the work of Otake et al (2012, 2013), which incorporates normalized gradient information (NGI) as a robust similarity metric within the covariance matrix adaptation-evolution strategy (CMA-ES) optimizer. A linear forward projector implemented on GPU computes the DRR for a particular system pose. Similarity (NGI) is computed between the CT (by way of its DRR, taken as the moving image, IM) and the 2D projection (taken as the fixed image, IF) as:
$\mathrm{NGI}(I_F, I_M) = \frac{\mathrm{GI}(I_F, I_M)}{\mathrm{GI}(I_F, I_F)}$  (3)
where
$\mathrm{GI}(I_1, I_2) = \sum_{(u,v)} w(u,v) \, \min\left( \lvert \nabla I_1(u,v) \rvert, \lvert \nabla I_2(u,v) \rvert \right)$  (4)
$w(u,v) = \frac{\cos\left( 2\alpha(u,v) \right) + 1}{2}$  (5)
and
$\cos\alpha(u,v) = \frac{\nabla I_1(u,v) \cdot \nabla I_2(u,v)}{\lvert \nabla I_1(u,v) \rvert \, \lvert \nabla I_2(u,v) \rvert}$  (6)
with the sum taken over detector pixels (u, v) and α(u,v) the angle between the image gradients at pixel (u, v).
Previous work (e.g. Otake et al 2013) shows NGI to exhibit robustness against content mismatch arising from non-rigid anatomical deformation or the presence of surgical tools introduced in the radiograph.
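A minimal sketch of the NGI computation (equations (3)–(6)) using simple finite-difference gradients is shown below; in practice the gradients are evaluated on the (downsampled) DRR and projection at matched sampling, and the variable names are illustrative:

import numpy as np

# Gradient information (equation (4)) with the angular weighting of equation (5),
# where (cos 2a + 1)/2 = cos^2(a), and the gradient angle of equation (6).
def gradient_info(i1, i2, eps=1e-12):
    g1y, g1x = np.gradient(i1.astype(float))
    g2y, g2x = np.gradient(i2.astype(float))
    mag1 = np.hypot(g1x, g1y)
    mag2 = np.hypot(g2x, g2y)
    cos_a = (g1x * g2x + g1y * g2y) / (mag1 * mag2 + eps)   # equation (6)
    w = cos_a ** 2                                          # equation (5)
    return np.sum(w * np.minimum(mag1, mag2))               # equation (4)

# Normalized gradient information (equation (3)): fixed image i_f is the measured
# projection; moving image i_m is the DRR of the prior 3D image.
def ngi(i_f, i_m):
    return gradient_info(i_f, i_m) / gradient_info(i_f, i_f)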
The CMA-ES optimizer was used to solve for the transformation that maximizes NGI:
$(\hat{T}_s, \hat{T}_d, \hat{R}_d) = \underset{T_s, T_d, R_d}{\operatorname{arg\,max}} \; \mathrm{NGI}\left( I_F, \, I_M(T_s, T_d, R_d) \right)$  (7)
where $I_M(T_s, T_d, R_d)$ is the DRR computed for the candidate source-detector pose.
Parameter selection in the CMA-ES optimization followed that of Otake et al (2012) and Hansen (2006), with downsampling of both the DRR and the projection image by a factor of 3 and population size (Npop) set to 100. The stopping criterion was set to changes (Δ) in translation or rotation of less than 0.1 mm or 0.1° respectively, or a maximum of 10⁶ iterations (Nmax).
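The pose search can be sketched as follows using the open-source cma package; the objective wrapper ngi_for_pose (which forward-projects the CT at the candidate pose and evaluates NGI against the measured projection) and the initial step size sigma0 are assumptions for illustration, while the population size, tolerance, and iteration limit follow the values above:

import cma   # open-source CMA-ES implementation

# CMA-ES search for the pose that maximizes NGI (equation (7)). The pose vector
# holds the 6 DoF (or 9 DoF) parameters in mm and degrees; CMA-ES minimizes, so
# the negative NGI is passed as the objective value.
def register_view(pose_init, ngi_for_pose, sigma0=5.0):
    opts = {'popsize': 100, 'tolx': 0.1, 'maxiter': int(1e6), 'verbose': -9}
    es = cma.CMAEvolutionStrategy(list(pose_init), sigma0, opts)
    while not es.stop():
        candidates = es.ask()
        es.tell(candidates, [-ngi_for_pose(x) for x in candidates])
    return es.result.xbest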
From the resulting geometry estimate of the source and detector, a PM is formed as:
$\mathrm{PM} = \begin{bmatrix} T_{s,z} & 0 & T_{s,x} \\ 0 & T_{s,z} & T_{s,y} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_{3\times 3} & T_d \end{bmatrix}$  (8)
where R3×3 represents a 3D rotation matrix with center of rotation at the origin of the CT volume coordinate system. With respect to order of operations, the rotational operations are applied before translations. Whereas previous work solved such a registration for one view (Otake et al 2012, 2013) or a small number of views (Uneri et al 2014), the self-calibration method generates a PM for all projections acquired in a CBCT scan.
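As a minimal sketch (assuming the pinhole decomposition of equation (8) and omitting detector pixel spacing and the pixel-coordinate origin), the projection matrix can be composed from the estimated geometry as follows; the function name is illustrative:

import numpy as np

# Compose a 3x4 projection matrix from the estimated geometry (equation (8)):
# an intrinsic matrix built from the source position relative to the detector,
# multiplied by the extrinsic [R | Td] in which rotation is applied before
# translation. Pixel spacing and detector offsets are omitted for simplicity.
def compose_pm(Ts, Td, R3x3):
    K = np.array([[Ts[2], 0.0,   Ts[0]],
                  [0.0,   Ts[2], Ts[1]],
                  [0.0,   0.0,   1.0 ]])
    extrinsic = np.hstack([np.asarray(R3x3, float), np.asarray(Td, float).reshape(3, 1)])
    return K @ extrinsic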
2.5. Outlier detection
It is possible to identify outliers in pose estimation by detecting spurious values of the system parameters (or combinations of system parameters) resulting from image registration. We selected system magnification (the ratio of Ts,z and (Ts,z − Td,z)) as a simple metric for outlier detection, because the ratio allows fluctuations in scale that do not affect the forward- or backprojection of rays in 3D image reconstruction, but traps errors that would distort the projection matrix. Each registration result is checked as a possible outlier. For the i = 1 projection, the resulting magnification must be within 10% of that calculated from the known, nominal system magnification (Magnom, computed from the source-object distance (Ts,z − Td,z) and source-detector distance (Ts,z) provided for initialization of the first view). If the magnification is not within this range, then the algorithm must be reinitialized. For the i = 2 projection, the magnification must be within 1% of the magnification associated with the i = 1 projection for the algorithm to continue. If the magnification does not fall within this range, then registration for the i = 2 projection is restarted using the same initialization method as the i = 1 projection as detailed in section 2.2. For all subsequent (i ≥ 3) projections, the magnification must be within 1% of the magnification associated with the previous projection, and for any view implying magnification outside of this range, the registration is restarted using PMi−1 to initialize (instead of PMpredict). If the resulting magnification is again not within the 1% error range, then the registration is restarted with the same initialization method as the i = 1 projection. After this second repetition of the registration (Nrep = 2), the result is accepted as the correct geometry, and the self-calibration algorithm continues to the next projection. (In the current study, as detailed below, there were few, if any, outliers for the fairly smooth orbits considered.) If the registration result is not an outlier, the geometry estimate is used to compose PMi.
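The magnification check can be sketched as follows (a minimal sketch; function names are illustrative, and the 10%/1% tolerances follow the rules above):

# Magnification-based outlier check (section 2.5): 10% tolerance relative to the
# nominal magnification for the first view, 1% relative to the previous view
# thereafter.
def magnification(Ts_z, Td_z):
    return Ts_z / (Ts_z - Td_z)

def is_outlier(mag, mag_ref, first_view=False):
    tol = 0.10 if first_view else 0.01
    return abs(mag - mag_ref) > tol * mag_ref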
The outlier detection method was tested by running the self-calibration algorithm on CBCT data acquired in a circular orbit with 360 projections and a magnification of 1.5 using the experimental bench described below. The predicted pose for each view was purposely perturbed with Gaussian noise with σ = 20 mm and 20° to stress the registration.
3. Experimental methods
3.1. Imaging systems and phantoms
The proposed methodology was tested using the CBCT imaging bench and clinical robotic C-arm (Artis Zeego, Siemens Healthcare) shown in figures 3(a) and (b) respectively. The bench includes an x-ray tube (RAD13, Dunlee, Aurora IL), flat-panel detector (PaxScan 4030CB, Varian, Palo Alto CA), and computer-controlled motion system (Compumotor 6k8, Parker Hannifin, Rohnert Park CA) for acquisition of CBCT data in a variety of system configurations. For all studies involving the experimental bench, Td,z and Ts,z were fixed to the nominal values of the Zeego C-arm (40 and 120 cm, respectively). Other aspects of the bench are described in previous work (Zhao et al 2014), and the nominal scan technique involved 360 projections over 360° at 70 kVp and 227 mAs. For the Zeego system, Td,z and Ts,z were nominally fixed to 40 and 120 cm respectively, and acquisitions obtained 496 projections over 200° at 87.2 kVp and 229 mAs. The nominal geometric calibration for the bench system was formed using the method of Cho et al (2005) using a cylindrical phantom containing a pattern of steel ball bearings (BBs) from which the full 9 DoF geometry of the source and detector can be determined for each projection in a CBCT scan. Alternatively, the nominal calibration for the Zeego C-arm was obtained using the standard clinical calibration tool—a cylindrical phantom with a spiral BB pattern derived from the method of Navab et al (1998). In each case, the nominal geometric calibration is referred to below as the ‘Reference Calibration’.
Figure 3.
(a) CBCT imaging bench with an anthropomorphic phantom shown on the rotation stage. (b) The Artis Zeego with phantom and coordinate frames.
CBCT images from the bench and Zeego systems were reconstructed by FBP for cases of a nominally circular orbit (Experiments 1, 2, and 3, below). An MBIR method was used to reconstruct images for the case of a non-circular orbit considered in Experiment 4 (below). The MBIR utilized a forward model that accounted for a generalized (non-circular) orbit to solve for the reconstructed image by maximizing the consistency of the image with the projection data, while also accounting for the statistics of the measured data. A penalized-likelihood (PL) objective function was used for this maximization, and the reconstructed image was computed in 50 iterations of 20 subsets with regularization strength β = 10² (Wang et al 2014).
Two imaging phantoms were used to evaluate the performance of the ‘Self-Calibration’ in comparison to the ‘Reference Calibration’. The first (figure 4(a)) used the same cylindrical phantom as used in the Cho calibration (above) with the addition of a 0.13 mm diameter tungsten wire suspended along the central axis and a 2 mm diameter lead BB and 3 acrylic spheres (5, 6.5, and 10 mm diameter) attached to the surface of the cylinder. This configuration provided data in which the geometric calibration data (derived from the steel BBs) and the data for imaging performance assessment (derived from the tungsten wire, lead BB, and acrylic spheres) were identical, eliminating the question of orbit reproducibility. A second phantom (figure 4(b)) involved a natural human skull in tissue-equivalent plastic with the addition of a 0.13 mm diameter tungsten wire inserted in the oropharynx and a 2 mm diameter lead BB attached to the surface.
Figure 4.
Imaging phantoms. (a) Cylindrical phantom that combines the reference calibration of Cho (two circular patterns of steel BBs) with a tungsten wire, lead BB, and acrylic spheres to test geometric accuracy of the CBCT reconstruction. (b) Anthropomorphic head phantom with a tungsten wire and lead BB.
3.2. Experimental plan
Four experiments were conducted to test the performance of the self-calibration method, progressing systematically from simple geometries and objects (e.g. the bench and cylinder phantom) to more complicated scenarios (the Zeego and head phantom). In each case, the reference calibration was acquired using the Cho or spiral BB phantom as described in section 3.1. The 3D image input to the self-calibration method was a distinct scan in each case (i.e. not the same as the projection data acquired in the current CBCT scan)—formed either from a previous CBCT scan or a previous CT scan on a diagnostic CT scanner. In each case, the calculated system geometry and CBCT images reconstructed using the self-calibration method (for both 6 and 9 DoF characterization of the system geometry) were compared to those from the reference calibration.
3.2.1. Experiment 1: cylinder phantom on imaging bench
Experiment 1 involved the cylinder phantom imaged on the CBCT bench to test the feasibility of the self-calibration method and obtain quantitative analysis of basic performance. A circular orbit was used, with the nominal scan technique described in section 3.1. A previous CBCT scan of the phantom formed the 3D image input to the self-calibration method, with the previous scan acquired with an angular offset in projection views so that the projections used in 3D reconstruction were not identical to those in 3D–2D registration.
3.2.2. Experiment 2: anthropomorphic head phantom on imaging bench
Experiment 2 involved the anthropomorphic head phantom imaged on the CBCT bench to test the robustness of 3D–2D registration under more clinically/anatomically realistic conditions of x-ray scatter, image noise, and complexity of the subject. A previous scan of the head phantom on a diagnostic CT scanner (Siemens Somatom Definition, 120 kVp, 227 mAs, 0.46 × 0.46 × 0.40 mm³ voxels) formed the 3D image input to the self-calibration method.
3.2.3. Experiment 3: anthropomorphic head phantom on robotic C-arm
Experiment 3 involved the anthropomorphic head phantom imaged on the Artis Zeego to test the method in a clinically realistic system geometry and orbit. A previous CBCT scan of the head phantom acquired using the Zeego formed the 3D image input to the self-calibration method. To challenge the method further, we introduced a realistic, pronounced change in image content between the previous 3D image and the projection images acquired in the current CBCT scan—viz. a 2 mm diameter steel biopsy needle placed in the nasal sinuses and positioning the head with a strong (~30°) canthomeatal tilt to mimic a typical clinical setup. The reference calibration (using the spiral BB phantom mentioned above) was performed by the system service engineer as part of regular preventative maintenance within 6 months of the current scan in accordance with standard clinical practice.
3.2.4. Experiment 4: non-circular orbit
Experiment 4 tested the self-calibration algorithm on a non-circular orbit—specifically, a saddle-shaped orbit that could be used to extend the longitudinal field of view, reduce cone-beam artifacts, or improve image quality in the manner of task-driven imaging—all cases for which a conventional geometric calibration acquired prior to the scan may be irreproducible or infeasible. The scan was conducted on the CBCT bench using the anthropomorphic head phantom with the same nominal scan protocol as above, except that the source and detector were moved along the Y and Z axes (as defined in figure 1 and shown in figure 3) during the scan to produce the saddle trajectory illustrated in figure 5. The total deviation in both Td,y and Ts,y was ±25 mm to maintain approximately the same field of view as previous experiments within the constraints of the test bench system. Total deviation in Td,z was ±50 mm with Ts,z held constant for a range in magnification from 1.41 to 1.6. As in Experiment 2, a previous diagnostic CT scan of the head provided the 3D image input to the self-calibration method. A CBCT image was reconstructed using the MBIR method described above and the self-calibration result for system geometry. Since the reference calibration method (Cho et al 2005) strictly holds only for circular orbits, an image of the head phantom scanned in a circular orbit was used as a reference and basis of image quality comparison (using the same MBIR method for reconstruction).
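For illustration, the saddle trajectory can be parameterized as in the following sketch; the sinusoidal form of the excursions is an assumption for illustration, with only the amplitudes (±25 mm in Td,y and Ts,y, ±50 mm in Td,z) and nominal distances (Td,z = 400 mm, Ts,z = 1200 mm) taken from the description above:

import numpy as np

# Saddle orbit for Experiment 4: sinusoidal excursions superimposed on a
# 360-projection circular orbit. The magnification ranges over ~1.41 to ~1.6.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)   # gantry angle (rad)
td_y = 25.0 * np.sin(2 * theta)            # detector longitudinal excursion (mm)
ts_y = 25.0 * np.sin(2 * theta)            # source follows the detector (mm)
td_z = 400.0 + 50.0 * np.cos(2 * theta)    # object-detector distance (mm)
ts_z = np.full_like(theta, 1200.0)         # source-detector distance held fixed (mm)
mag = ts_z / (ts_z - td_z)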
Figure 5.
Illustration of the saddle orbit for Experiment 4. (a) Polar plot showing magnification for the saddle and circular orbits. (b) Td,y and Ts,y for the saddle and circular orbits.
3.3. Performance evaluation
Performance was evaluated in terms of three measures of image quality/geometric accuracy of the self-calibration method in comparison to conventional reference calibration. The first was the full-width at half-maximum (FWHM) of a point spread function (PSF) measured from the tungsten wire in each phantom. From CBCT images reconstructed with (0.05 × 0.05 × 0.05) mm³ isotropic voxels, line profiles through the center of the wire in 10 axial images were sampled radially over 360°. A Gaussian distribution was fit to each line profile, and the FWHM was averaged over all line profiles and slices.
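A minimal sketch of the FWHM measurement for a single radial profile is given below (Python with SciPy; variable and function names are illustrative):

import numpy as np
from scipy.optimize import curve_fit

# Fit a Gaussian to a line profile sampled across the wire and report the FWHM.
def gaussian(x, a, mu, sigma, b):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + b

def psf_fwhm(x_mm, profile):
    p0 = [profile.max() - profile.min(), x_mm[np.argmax(profile)], 0.3, profile.min()]
    popt, _ = curve_fit(gaussian, x_mm, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])   # FWHM = 2.355 * sigma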
The second performance measure was the reprojection error (RPE) associated with the position of (the centroid of) the lead BB placed on the surface of both phantoms. The BB centroid was localized in each 2D projection of the scan data using a Gaussian fit about the BB position. The centroid position was then transformed into 3D space using the PMi corresponding to each projection, and its location on the detector was connected to the calibrated 3D source location by a line segment. This process was repeated for all projections, and the closest point of intersection for line segments spaced 90° apart was computed, yielding a point cloud. The width of this point cloud was evaluated using principal component analysis (PCA) and averaging the lengths of the principal components:
$\mathrm{RPE} = \frac{1}{K} \sum_{k=1}^{K} \lVert V_k \rVert$  (9)
where Vk is a principal component of the 3D data and K ≤ 3. Analysis in terms of PCA is analogous to simply evaluating the width of the point cloud (e.g. by Gaussian fit) but better accommodates possible bias in the orientation of the point cloud.
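A minimal sketch of the RPE computation is given below; representing the length of each principal component as the standard deviation of the point cloud along that axis is an assumption for illustration, and function names are illustrative:

import numpy as np

# Closest-approach midpoint between two (non-parallel) rays p + t*d, used to build
# the point cloud from views spaced ~90 degrees apart.
def closest_point_between_rays(p1, d1, p2, d2):
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    n = np.cross(d1, d2)
    A = np.stack([d1, -d2, n], axis=1)
    t1, t2, _ = np.linalg.solve(A, p2 - p1)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# RPE (equation (9)): average width of the point cloud along its principal axes.
def rpe(points):
    pts = np.asarray(points, float)
    pts -= pts.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(pts.T))   # variances along principal axes
    return float(np.mean(np.sqrt(np.clip(eigvals, 0.0, None))))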
Finally, the performance of geometric calibration was assessed with respect to the quality of 3D image reconstructions themselves. Each was qualitatively evaluated in terms of blur, noise, and artifacts associated with geometric calibration errors—e.g. streak artifacts and distortion of high contrast details such as the temporal bone trabeculae.
4. Results
4.1. Outlier detection
In all of the experiments reported below, there were no outliers detected in the self-calibration data, indicating a suitable degree of robustness of the 3D–2D registration process. This includes the various forms of initialization for the i = 1 and i = 2 projections, the prediction method for initializing the i ≥ 3 projections, the similarity metric (NGI) even in the presence of image content mismatch (e.g. the biopsy needle in Experiment 3), and the CMA-ES optimization method. To stress test the outlier detection and recovery method, a study was conducted as described in section 2.5 in which the geometry estimates were purposely perturbed. Example results are shown in figure 6, where the magnification is plotted as a function of projection view angle before outlier detection (dashed black line) and after detection and recovery (solid black line). Following perturbation, 13 outliers were detected among the 360 projections, and all were recovered by the re-start method described in section 2.5 (re-starting and/or using the previous view for initialization).
Figure 6.
Outlier detection. The dashed black line shows the magnification of the registration before outlier detection using a perturbed initialization (σ = 20 mm, 20°). The solid black line shows the magnification after outlier detection and re-starting the registration using the previous view for initialization. The grey region represents the window for allowable magnification (10% for the i = 1 view, 1% for subsequent views).
4.2. Effect of geometric calibration on spatial resolution (FWHM of the PSF)
The PSF about the tungsten wire in Experiments 1–4 is shown in figure 7 for the reference calibration (top row) and the self-calibration using both 6 DoF (middle row) and 9 DoF (bottom row) representation of system geometry. We note overall improvement for self-calibration compared to reference calibration—both quantitatively (FWHM for each case) and qualitatively (apparent distortion and intensity of the PSF). For Experiment 1 (cylinder phantom on the imaging bench; figures 7(a), (e) and (i)), the PSFs are comparable, indicating that self-calibration performs as well as (simultaneous) reference calibration for a simple object on a near-perfect system (stable, high-precision imaging bench).
Figure 7.
Effect of geometric calibration on spatial resolution (FWHM of the PSF). Images show an axial slice through the tungsten wire in the cylinder or head phantom. (Top row, (a)–(d)) Images reconstructed using the reference calibration. (Middle row, (e)–(h)) Images reconstructed using self-calibration and 6 DoF characterization of system geometry. (Bottom row, (i)–(l)) Images reconstructed using self-calibration and 9 DoF characterization of system geometry. Each column represents one of the four experiments detailed in section 3.
Experiment 2 (head phantom on the imaging bench; figures 7(b), (f) and (j)) shows improvement in FWHM (0.66 mm for self-calibration, 0.86 mm for reference calibration, p < 0.001) as well as the general shape and intensity of the PSF. Note that the wire in the head phantom was located ~9 cm inferior to the central axial slice (whereas the wire in the cylinder phantom of Experiment 1 was analyzed around the central axial slice). The improvement compared to the reference calibration likely indicates that while the reference calibration is suitable near the central slice (figure 7(a)) it may include errors in detector angulation that become apparent farther from isocenter (figure 7(b)). An alternative explanation is that the scan geometry was slightly irreproducible between the reference calibration and the current scan (whereas Experiment 1 involved simultaneous imaging and calibration in the same phantom); however, this is less likely, since the imaging bench is rated to a fairly high degree of reproducibility (~0.001 mm) in positioning of the motion control system. Also, previous work showed that detector angulation is among the more difficult parameters to estimate in reference calibration (Bronnikov 1999, Noo et al 2000) and can have a large impact on the geometric accuracy of CBCT reconstructions (Daly et al 2008).
Experiment 3 (head phantom on the Zeego; figures 7(c), (g) and (k)) shows measurable improvement of the PSF using self-calibration compared to the standard clinical reference calibration. The two most likely explanations are similar to those noted above: (1) slight intrinsic errors in the reference calibration; and/or (2) slight differences between the reference calibration and current scan, owing to irreproducibility of the C-arm orbit and/or aging of the reference calibration over time.
Finally, figures 7(d), (h) and (l) show the results of Experiment 4 involving the head phantom on the imaging bench with a non-circular orbit. Note that the reference for comparison (figure 7(d)) is for a circular orbit (calibrated with the Cho phantom), and all images were reconstructed with MBIR using the same regularization and optimization parameters. The results demonstrate the feasibility of self-calibration for non-circular orbits, suggesting the same level of geometric accuracy in pose estimation as for circular orbits (Experiments 1 and 2) and compatibility of the resulting geometry estimates with MBIR.
Comparing the self-calibration results for 6 DoF (figures 7(e)–(h)) and 9 DoF (figures 7(i)–(l)) characterization of system geometry, we see no appreciable (or statistically significant) differences in the PSF or FWHM, implying relative insensitivity to the additional 3 DoF associated with variations in source position relative to the detector for the systems considered in this work. This is not a surprising result for the imaging bench (for which the source is rigidly fixed with respect to the detector) and suggests that possible variations in source position on the Zeego (e.g. due to C-arm flex under gravity) are minor with respect to the PSF of image reconstructions.
Figure 9.
Effect of geometric calibration on image quality. (a)–(c) Zoomed region of an axial slice of the head phantom in Experiments 2–4 reconstructed using reference calibration. (d)–(f) The same, reconstructed using the 6 DoF self-calibration and (g)–(i) the 9 DoF self-calibration. Image (j) shows the full axial field of view and zoom region.
4.3. Effect of geometric calibration on RPE
Figure 8 summarizes the results for the four experiments in terms of the RPE, echoing the results of figure 7. Figure 8(a) shows an example point cloud from which the RPE was determined as detailed in section 3.3, and figure 8(b) shows the improvement in RPE obtained by self-calibration in comparison to reference calibration. For Experiment 1, we see a statistically significant improvement in RPE (~0.69 mm for self-calibration) compared to reference calibration (0.83 mm) under ideal conditions (p < 0.001). This also shows RPE to be a more sensitive test of geometric calibration than PSF width (figures 7(a), (e) and (i)).
Figure 8.
Effect of geometric calibration on RPE. (a) Example point cloud distribution used to measure RPE, generated by backprojecting the centroid of a BB in each projection and finding the closest point of intersection between orthogonal views. (b) RPE resulting from 6 and 9 DoF self-calibration compared to conventional reference calibration. An asterisk indicates significant difference from the reference, an open circle indicates mean value, a horizontal line indicates median value, a closed box indicates interquartile range, and whiskers indicate full range of the data.
Experiment 2 demonstrates an additional characteristic of self-calibration: the 6 DoF self-calibration was significantly improved compared to reference calibration (RPE = 0.61 mm versus 0.84 mm, p < 0.001); in addition, the 6 DoF self-calibration was superior to the 9 DoF self-calibration (RPE = 0.61 mm versus 0.82 mm, p < 0.001). This result may seem counter-intuitive and points to an interesting characteristic of self-calibration: the 9 DoF method allows potentially unrealistic variations in source position with respect to the detector—e.g. excursions in Ts,z; while FBP reconstruction image quality (figures 7(f) and (j)) may be relatively insensitive to such excursions since backprojected rays are still along the correct lines (recognizing a fairly small effect associated with distance weighting), the difference is evident in the RPE among orthogonal rays. The 6 DoF self-calibration holds the position of the source fixed with respect to the detector, which appears to incur less error in geometry estimation, at least for the rigid geometry of the imaging bench.
For Experiment 3, the mean and median RPE are lower for the self-calibration methods than the reference calibration method, but the difference was not statistically significant (p = 0.08). The overall performance appears better (consistent with figures 7(c), (g) and (k)), but errors in finding the BB centroid in the projection images may have contributed to a reduction in reliability of the RPE estimates. Another factor is that the C-arm undergoes significant deviations from a circular orbit, which broadens the point cloud distributions. Experiment 4 is not shown, since RPE assumes a circular orbit.
4.4. Effect of geometric calibration on image quality
Figure 9 illustrates the effects quantified above in terms of qualitative visualization of high-contrast details in the anthropomorphic head phantom, including streaks (from a high-contrast biopsy needle) and distortion (wisps about cortical bone and temporal bone trabeculae). Images from Experiment 1 are not shown, because they were essentially identical: both reference calibration and self-calibration yielded qualitatively accurate reconstruction of the cylinder phantom without appreciable geometric artifacts. The same result is seen for Experiment 2 (figures 9(a), (d) and (g)), where both reference and self-calibration yield a qualitatively accurate reconstruction of the skull. Other sources of image quality degradation include x-ray scatter, beam hardening, etc, but not geometric calibration.
Experiment 3 demonstrates noticeable improvement in images reconstructed using self-calibration, evident as a reduction in streak artifacts arising from the high-contrast biopsy needle located at the anterior aspect of the axial slice in figures 9(b), (e) and (h). The reduction in streaks indicates that the artifacts are not solely attributable to the metal itself, but also arise in part from imprecision in geometric calibration that is accentuated in the reconstruction of high-contrast, high-frequency objects such as a needle. The self-calibration method yields a more accurate geometric calibration and is therefore more robust against such streak artifacts. This is analogous to the observation of De Man et al (1999) that patient motion causes substantially increased streak artifacts when metal is present in the image.
Experiment 4 shows MBIR images formed using reference and self-calibration methods, the former for a circular orbit and the latter for a saddle orbit. The results are qualitatively identical, with both methods yielding calibration suitable for MBIR. Overall, even in cases for which the difference between reference calibration and self-calibration is negligible, the results are positive findings: they demonstrate not only the feasibility to compute a geometric calibration using the proposed method, but also that the resulting calibration is comparable to well-established methods for reference calibration; moreover, the self-calibration method is extensible to non-circular orbits and imaging systems for which reference calibration is irreproducible or infeasible.
5. Discussion and conclusions
The self-calibration method presents a promising means to obtain accurate geometric calibration not only for standard circular orbits and presumably well calibrated systems, but also for more complicated non-circular orbits and/or systems for which system geometry is unknown/irreproducible. The study detailed above demonstrates that the self-calibration method yields system geometry sufficient to reconstruct images with comparable or improved image quality compared to reference calibration methods and is extensible to cases where conventional reference calibration may not be possible—e.g. non-circular orbits. It is interesting to note that while both 6 and 9 DoF self-calibration performed better overall than the reference calibrations, the 6 DoF self-calibration method slightly outperformed the 9 DoF self-calibration, specifically in Experiment 2. This may indicate that although the 9 DoF method yields a more complete system description, it may be subject to local minima in the larger search space. With the 6 DoF method, the 3 DoF describing the source position are held fixed and reduce the search space in a manner that appears to reduce susceptibility to such local minima and is consistent with the mechanical rigidity of the robotic C-arm used in this work. It is also possible that the 9 DoF optimization is more susceptible to image noise. The optimization was not strongly affected by propagation of error from previous views to the next, even though the algorithm is sequential in nature and uses previous views to initialize the next. In addition to trapping outliers as described in section 2.5, the registration for each view is computed de novo (i.e. with a new CMA-ES population and a search for the current pose that is largely independent of the previous pose) and demonstrates capture range that is more than sufficient to recover from small errors in PMinit resulting from previous views.
The primary objective of the current study was to assess the feasibility and geometric accuracy of the self-calibration method; accordingly, the run-time of the algorithm was not fully optimized. The algorithm was implemented in Matlab (The MathWorks, Inc., Natick MA) and yielded a run-time of approximately 3 s per registration for the 6 DoF method (or 5 s per registration for the 9 DoF method), excluding projections for which multiple initializations were used (which scale the registration time accordingly). A variety of ways to reduce the run-time for a complete scan could be developed in future work, such as parallelizing registrations by binning the projections into sub-groups and registering these groups in parallel (as opposed to registering all projections sequentially), or simultaneously registering more than one projection during the same optimization as in Uneri et al (2014).
Among the limitations of the algorithm is that the accuracy of registration is dependent on the quality of the 3D volume and the 2D images forming the basis of registration. However, as shown in Experiment 3 where a CBCT image acquired from the Zeego system was used as the 3D volume for registration, the registration algorithm is fairly robust to artifacts present in the 3D images (e.g. cone-beam artifacts, scatter, truncation, etc). A second limitation is its dependence on the initialization of the geometric parameters in the first view, and poor initialization could result in registration failure. Initialization is most important for the first projection, which requires knowledge of the nominal system parameters, since if the first projection fails to register correctly, the algorithm may be unable to proceed. Another limitation is that the registration between the 3D volume and 2D projections is limited to affine transformations that presume rigid patient anatomy. Although limited to affine transformations, the registration is still fairly robust against anatomical deformation, as described in previous work (Otake et al 2013), since the similarity metric incorporated in the registration process uses strong edges consistent in both images, which in CBCT most likely represent rigid, bony structures. Registration therefore aligns consistent bony structures in the images while tending to ignore soft tissue deformations. Such robustness to deformation was previously investigated in the context of spine surgery (Otake et al 2013), where it was found that the 3D–2D registration framework was able to register images with a median projection distance error of 0.025 mm even under conditions of strong deformation (e.g. preoperative CT with the patient oriented supine and the spine straight (or lordotic) registered to an intraoperative projection image in which the patient is oriented prone with the spine in kyphosis). Otake et al additionally incorporated a multi-resolution pyramid and a multi-start optimization method that could be incorporated in the self-calibration algorithm at the cost of computation time. This suggests the potential for application to imaging sites beyond purely rigid anatomical contexts such as the cranium, including the spine and pelvis, for example. Such application remains to be fully investigated, but initial results on robustness of the registration appear promising, particularly if the region of interest is small (e.g. a few vertebral levels), thereby better satisfying conditions of local rigidity that can be approximated by affine transformation.
The current study did not specifically investigate the effect of patient motion, and while careful patient immobilization is certainly good practice, it is worth noting that the self-calibration method described here can be extended to provide a means for correction of patient motion. Just as the 3D–2D registration of each view characterizes the source and detector pose with respect to the patient (i.e. to the CT image), (rigid) motion of the patient in each projection view is encoded into the projection matrices; patient motion is therefore effectively seen as virtual motion of the C-arm and is mitigated in the 3D image reconstruction. The patient motion must be affine (e.g. shifts or tilts of the head position during the scan), and the method would not be expected to work well for deformable motion (e.g. respiratory motion of the diaphragm). In a similar manner as described above with respect to robustness to anatomical deformation, the method may be applicable to contexts involving a small region of interest within which the motion can be approximated as affine. Such application of the self-calibration algorithm to motion correction is the subject of ongoing and future work.
In summary, the self-calibration method performed as well as a reliable (up-to-date) reference calibration on a highly stable CBCT imaging bench and performed better than the reference calibration (subject to periodic quality assurance updates) on a clinical robotic C-arm. This indicates that self-calibration could improve 3D image reconstruction even for presumably well calibrated systems and could offer a sentinel alert against ‘aging’ of the reference calibration. The algorithm demonstrated robustness to changes in the image between the 3D volume and the 2D projection data, such as changes in object positioning and/or the presence of strong extraneous gradients in the 2D projections (e.g. the presence of a metal biopsy needle). Furthermore, the self-calibration method could enable advanced 3D imaging methods that utilize non-circular orbits to expand the field of view, improve image quality (e.g. reduce cone-beam artifacts), and/or maximize local, task-dependent imaging performance, as in task-driven imaging. Future work includes extension of the algorithm to provide a basis for motion correction and evaluation in clinical image data.
Acknowledgments
Research supported by National Institutes of Health Grant No. R01-EB-017226 and an academic-industry partnership with Siemens Healthcare (AX Division, Forchheim, Germany). The authors thank Mr Ali Uneri (Department of Computer Science, Johns Hopkins University) and Dr Tharindu De Silva (Department of Biomedical Engineering, Johns Hopkins University) for assistance with the 3D–2D registration method. Thanks also to Dr Clifford Weiss and Ms Robin Belcher (Department of Radiology, Johns Hopkins University) as well as Mr Robert Meyer (Siemens Medical Solutions USA, Inc., Customer Solutions Group, Baltimore/Washington DC) for assistance with the Zeego imaging system.
References
- Bronnikov AV. Virtual alignment of x-ray cone-beam tomography system using two calibration aperture measurements. Opt Eng. 1999;38:381–6.
- Chen GH, Tang J, Leng S. Prior image constrained compressed sensing (PICCS): a method to accurately reconstruct dynamic CT images from highly undersampled projection data sets. Med Phys. 2008;35:660–3. doi: 10.1118/1.2836423.
- Cho Y, Moseley DJ, Siewerdsen JH, Jaffray DA. Accurate technique for complete geometric calibration of cone-beam computed tomography systems. Med Phys. 2005;32:968–83. doi: 10.1118/1.1869652.
- Daly MJ, Siewerdsen JH, Cho YB, Jaffray DA, Irish JC. Geometric calibration of a mobile C-arm for intraoperative cone-beam CT. Med Phys. 2008;35:2124–36. doi: 10.1118/1.2907563.
- Dang H, Wang AS, Sussman MS, Siewerdsen JH, Stayman JW. dPIRPLE: a joint estimation framework for deformable registration and penalized-likelihood CT image reconstruction using prior images. Phys Med Biol. 2014;59:4799–826. doi: 10.1088/0031-9155/59/17/4799.
- De Man B, Nuyts J, Dupont P, Marchal G, Suetens P. Metal streak artifacts in x-ray computed tomography: a simulation study. IEEE Trans Nucl Sci. 1999;46:691–6.
- Dennerlein F, Jerebko A. Geometric jitter compensation in cone-beam CT through registration of directly and indirectly filtered projections. Nuclear Science Symp and Medical Imaging Conf (NSS/MIC). 2012:2892–5.
- Fahrig R, Dixon R, Payne T, Morin RL, Ganguly A, Strobel N. Dose and image quality for a cone-beam C-arm CT system. Med Phys. 2006;33:4541–50. doi: 10.1118/1.2370508.
- Ford JC, Zheng D, Williamson JF. Estimation of CT cone-beam geometry using a novel method insensitive to phantom fabrication inaccuracy: implications for isocenter localization accuracy. Med Phys. 2011;38:2829–40. doi: 10.1118/1.3589130.
- Hansen N. The CMA evolution strategy: a comparing review. In: Towards a New Evolutionary Computation. Berlin: Springer; 2006. pp. 75–102.
- Herbst M, Schebesch F, Berger M, Fahrig R, Hornegger J, Maier A. Improved trajectories in C-arm computed tomography for non-circular fields of view. Proc. 3rd Int. Conf. on Image Formation in X-ray Computed Tomography; Salt Lake City, UT; 2014. pp. 274–8.
- Hu Z, Gui J, Zou J, Rong J, Zhang Q, Zheng H, Xia D. Geometric calibration of a micro-CT system and performance for insect imaging. IEEE Trans Inf Technol Biomed. 2011;15:655–60. doi: 10.1109/TITB.2011.2159012.
- Jaffray D, Siewerdsen JH, Wong JW, Martinez AA. Flat-panel cone-beam computed tomography for image-guided radiation therapy. Int J Radiat Oncol Biol Phys. 2002;53:1337–49. doi: 10.1016/s0360-3016(02)02884-5.
- Kingston A, Sakellariou A, Varslot T, Myers G, Sheppard A. Reliable automatic alignment of tomographic projection data by passive auto-focus. Med Phys. 2011;38:4934–45. doi: 10.1118/1.3609096.
- Kyriakou Y, Lapp RM, Hillebrand L, Ertel D, Kalender WA. Simultaneous misalignment correction for approximate circular cone-beam computed tomography. Phys Med Biol. 2008;53:6267–89. doi: 10.1088/0031-9155/53/22/001.
- Li X, Zhang D, Liu B. A generic geometric calibration method for tomographic imaging systems with flat-panel detectors—a detailed implementation guide. Med Phys. 2010;37:3844–54. doi: 10.1118/1.3431996.
- Li X, Zhang D, Liu B. Sensitivity analysis of a geometric calibration method using projection matrices for digital tomosynthesis systems. Med Phys. 2011;38:202–9. doi: 10.1118/1.3524221.
- Meng Y, Gong H, Yang X. Online geometric calibration of cone-beam computed tomography for arbitrary imaging objects. IEEE Trans Med Imaging. 2013;32:278–88. doi: 10.1109/TMI.2012.2224360.
- Mennessier C, Clackdoyle R, Noo F. Direct determination of geometric alignment parameters for cone-beam scanners. Phys Med Biol. 2009;54:1633–60. doi: 10.1088/0031-9155/54/6/016.
- Navab N, Bani-Hashemi A, Nadar MS, Wiesent K, Durlak P, Brunner T, Barth K, Graumann R. 3D reconstruction from projection matrices in a C-arm based 3D-angiography system. In: Medical Image Computing and Computer-Assisted Intervention—MICCAI'98. Berlin: Springer; 1998. pp. 119–29.
- Noo F, Clack R, White TA, Roney TJ. The dual-ellipse cross vertex path for exact reconstruction of long objects in cone-beam tomography. Phys Med Biol. 1998;43:797–810. doi: 10.1088/0031-9155/43/4/009.
- Noo F, Clackdoyle R, Mennessier C, White TA, Roney TJ. Analytic method based on identification of ellipse parameters for scanner calibration in cone-beam tomography. Phys Med Biol. 2000;45:3489–508. doi: 10.1088/0031-9155/45/11/327.
- Otake Y, Schafer S, Stayman JW, Zbijewski W, Kleinszig G, Graumann R, Khanna AJ, Siewerdsen JH. Automatic localization of vertebral levels in x-ray fluoroscopy using 3D-2D registration: a tool to reduce wrong-site surgery. Phys Med Biol. 2012;57:5485–508. doi: 10.1088/0031-9155/57/17/5485.
- Otake Y, Wang AS, Stayman JW, Uneri A, Kleinszig G, Vogt S, Khanna AJ, Gokaslan ZL, Siewerdsen JH. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation. Phys Med Biol. 2013;58:8535–53. doi: 10.1088/0031-9155/58/23/8535.
- Pack JD, Noo F, Kudo H. Investigation of saddle trajectories for cardiac CT imaging in cone-beam geometry. Phys Med Biol. 2004;49:2317–36. doi: 10.1088/0031-9155/49/11/014.
- Panetta D, Belcari N, Del Guerra A, Moehrs S. An optimization-based method for geometrical calibration in cone-beam CT without dedicated phantoms. Phys Med Biol. 2008;53:3841–61. doi: 10.1088/0031-9155/53/14/009.
- Patel V, Chityala RN, Hoffmann KR, Ionita CN, Bednarek DR, Rudin S. Self-calibration of a cone-beam micro-CT system. Med Phys. 2009;36:48–58. doi: 10.1118/1.3026615.
- Pearson EA, Cho S, Pelizzari CA, Pan X. Non-circular cone beam CT trajectories: a preliminary investigation on a clinical scanner. Nuclear Science Symp Conf Record (NSS/MIC). 2010:3172–5.
- Rougée A, Picard C, Ponchut C, Trousset Y. Geometrical calibration of x-ray imaging chains for three-dimensional reconstruction. Comput Med Imaging Graph. 1993;17:295–300. doi: 10.1016/0895-6111(93)90020-n.
- Stayman JW, Siewerdsen JH. Task-based trajectories in iteratively reconstructed interventional cone-beam CT. 12th Int Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine. 2013:257–60.
- Stayman JW, Dang H, Ding Y, Siewerdsen JH. PIRPLE: a penalized-likelihood framework for incorporation of prior images in CT reconstruction. Phys Med Biol. 2013;58:7563–82. doi: 10.1088/0031-9155/58/21/7563.
- Uneri A, Otake Y, Wang AS, Kleinszig G, Vogt S, Khanna AJ, Siewerdsen JH. 3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy. Phys Med Biol. 2014;59:271–87. doi: 10.1088/0031-9155/59/2/271.
- Vidal-Migallón I, Abella M, Sisniega A, Vaquero JJ, Desco M. Simulation of mechanical misalignments in a cone-beam micro-CT system. IEEE Nuclear Science Symp Conf Record (NSS '08). 2008:5007–9.
- von Smekal L, Kachelriess M, Stepina E, Kalender WA. Geometric misalignment and calibration in cone-beam tomography. Med Phys. 2004;31:3242–66. doi: 10.1118/1.1803792.
- Wang AS, Stayman JW, Otake Y, Kleinszig G, Vogt S, Gallia GL, Khanna AJ, Siewerdsen JH. Soft-tissue imaging with C-arm cone-beam CT using statistical reconstruction. Phys Med Biol. 2014;59:1005–26. doi: 10.1088/0031-9155/59/4/1005.
- Yang K, Kwan ALC, Miller DF, Boone JM. A geometric calibration method for cone beam CT systems. Med Phys. 2006;33:1695–706. doi: 10.1118/1.2198187.
- Zhang J, Weir V, Fajardo L, Lin J, Hsiung H, Ritenour ER. Dosimetric characterization of a cone-beam O-arm imaging system. J X-Ray Sci Technol. 2009;17:305–17. doi: 10.3233/XST-2009-0231.
- Zhao Z, Gang GJ, Siewerdsen JH. Noise, sampling, and the number of projections in cone-beam CT with a flat-panel detector. Med Phys. 2014;41:061909. doi: 10.1118/1.4875688.