Abstract
Deformable image registration is widely used in various radiation therapy applications, including daily adaptation of treatment plans, to map planned tissue or dose onto changing anatomy. In this work, a simple and efficient inversely consistent deformable registration method is proposed, aiming at higher registration accuracy and faster convergence. Instead of registering image I to a second image J, the two images are symmetrically deformed toward one another in multiple passes until both deformed images match, at which point correct registration is achieved. In each pass, a delta motion field is computed by minimizing a symmetric optical flow system cost function using modified optical flow algorithms. The images are then further deformed with the delta motion field in the positive and negative directions, respectively, and used for the next pass. The magnitude of the delta motion field is forced to be less than 0.4 voxel in every pass in order to guarantee the smoothness and invertibility of the two overall motion fields, which accumulate the delta motion fields in the positive and negative directions, respectively. The final motion fields registering the original images I and J, in either direction, are calculated by inverting one overall motion field and composing the inversion result with the other overall motion field. The final motion fields are inversely consistent, which is ensured by the symmetric way in which registration is carried out. The proposed method is demonstrated with phantom images, artificially deformed patient images and 4D-CT images. Our results suggest that the proposed method is able to improve the overall accuracy (reducing registration error by 30% or more compared to the original, inversely inconsistent optical flow algorithms), reduce the inverse consistency error (by 95% or more) and increase the convergence rate (by 100% or more).
The overall computation speed may decrease slightly in some cases, but in most cases it increases because the new method converges faster. Compared to previously reported inverse consistency algorithms, the proposed method is simpler, easier to implement and more efficient.
1. Introduction
In recent years, anatomical imaging (kVCT, daily megavoltage CT or cone-beam CT, MRI, etc) and functional imaging (PET, SPECT, fMRI, etc) have been increasingly adopted in patient radiation treatment management. Image registration is a procedure that transforms different image datasets into a common coordinate system so that corresponding points of the images are matched and the complementary information from the different images can be analyzed for diagnostic and therapeutic purposes. Kessler provided a comprehensive review of image registration for radiation therapy (Kessler 2006). Image registration algorithms can be broadly grouped into rigid registration and deformable (non-rigid) registration according to the type of transformation an algorithm applies. While rigid registration only applies rigid (or affine) transformations with a limited number of free parameters (up to 12), deformable registration uses a much larger number of free parameters (up to three times the total number of voxels in an image) in order to describe non-rigid tissue deformation in 3D space.
Deformable image registration can be computed based on features extracted from the images, e.g. points (Kessler et al 1991), lines (Balter et al 1992) and surfaces (van Herk and Kooy 1994), or based on metrics directly derived from the image intensity values, e.g. mean square error (MSE) (Thirion 1998) for images from the same modality, and mutual information (MI) (Viola and Wells 1995) and cross-correlation (Kim and Fessler 2004) for images from different modalities. MSE-based CT-to-CT deformable image registration is especially important for radiation therapy applications, including patient response monitoring, treatment adaptation, dose tracking and patient motion modeling (Lu et al 2004, Wang et al 2005, Sarrut et al 2007). This paper focuses on such algorithms.
Regardless of the image registration algorithm, registration accuracy is always one of the most important factors affecting the clinical applicability of the algorithm. For most medical images, registration results often cannot be validated on a voxel-by-voxel basis because no such ground truth is available. While landmark matching and structure volume matching are often used for validation, they are not voxel-by-voxel measures, and the overall accuracy of such a validation is quite limited because landmarks or structures only cover limited regions of the entire image.
Inverse consistency, which means that the registration results are consistent whether the images are registered in the forward direction (from image 1 to image 2) or in the reverse direction (from image 2 to image 1), is often considered one of the more feasible ways of measuring image registration accuracy (Christensen et al 2006) for any registration algorithm. This is based on the fact that the results of an accurate registration algorithm must be inversely consistent. Therefore, inverse consistency is always desirable for any deformable registration algorithm in addition to accuracy. For image-guided and adaptive radiation therapy (ART) applications, such inverse consistency is not only desirable but also practically useful. Information such as treatment planning contours is defined on the treatment planning CT, while daily doses, contours, etc are referenced to the daily images. Inversely consistent registrations can provide voxel mapping in both directions so that information can be consistently mapped from one image to the other.
Computation speed is also very important for image-guided and adaptive radiation therapy (ART). For example, registration needs to be computed quickly and accurately between treatment fractions. In the future, such a computation may need to be completed online while a patient is on the treatment table. Computation speed is also demanded by 4D-CT based respiratory motion estimation because of the large amount of 4D image data.
For these reasons, we propose a new and efficient inversely consistent deformable registration method in this paper. The new method uses a simplified system cost function and solves the registration in a symmetric way. Because image information (intensity and intensity gradients) from both images is used symmetrically in the computation, both registration accuracy and convergence speed are improved compared to asymmetric, inversely inconsistent algorithms. Because the system cost function is simpler, the overall computation speed is improved compared to other inverse consistency algorithms.
1.1. Optical flow deformable image registration and inverse consistency
1.1.1. Optical flow
Optical flow algorithms are among the most widely used algorithms for single-modality deformable image registration. They are based on image intensity and gradient information. For two images I and J to be registered, let I be the moving image and J be the fixed image. A displacement motion vector field V registers I to J so that
I ∘ V = J, i.e. I(x − V(x)) = J(x),    (1)
where ∘ is the composition operator, V is the motion field and x is the spatial coordinate. The motion field V is the displacement vector field instead of the transformation vector field. V is often referred to as the deformation field or the optical flow field. Description of notations used in this paper is in table 1.
Table 1.
Notations used in this paper.
| I | The moving image, or the first image |
| J | The fixed image, or the second image |
| Id | The difference image, Id = J − I |
| Ω | The image domain |
| x | Coordinates of image positions in Ω |
| V, U | The ‘pull-back’ displacement motion vector fields |
| ΔVn | The delta motion field |
| I ∘ V | =I(x−V(x)), the image I deformed by V |
| V2 ∘ V1 | =V1(x−V2(x)) + V2(x), the composition of two motion fields |
| V−1 | The inverted vector field of V |
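As a concrete illustration of these conventions, the following 1D NumPy sketch (our own, not part of the paper's MATLAB implementation) implements the pull-back warp I ∘ V and the field composition V2 ∘ V1 from table 1, and checks numerically that I ∘ (V2 ∘ V1) ≈ (I ∘ V1) ∘ V2:

```python
import numpy as np

# A 1D NumPy sketch (our own illustration) of the pull-back conventions in
# table 1: warp() implements I o V = I(x - V(x)) and compose() implements
# V2 o V1 = V1(x - V2(x)) + V2(x), so that I o (V2 o V1) = (I o V1) o V2.
x = np.arange(64, dtype=float)

def warp(I, V):
    return np.interp(x - V, x, I)        # linear interpolation at x - V(x)

def compose(V2, V1):
    return np.interp(x - V2, x, V1) + V2

I = np.exp(-0.5 * ((x - 30.0) / 6.0) ** 2)   # a Gaussian 'image'
V1 = np.full_like(x, 1.5)                    # two constant displacements
V2 = np.full_like(x, -0.5)

lhs = warp(I, compose(V2, V1))               # I o (V2 o V1)
rhs = warp(warp(I, V1), V2)                  # (I o V1) o V2
```

For the constant fields above, compose(V2, V1) is the constant 1.0, and lhs and rhs agree up to linear interpolation error.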
V normally cannot be resolved by only using equation (1) because the system is underdetermined. Other constraints, such as global smoothness, are often enforced in order to successfully compute V. With the additional global smoothness constraints, the system cost function could be written as
E(V) = ∫Ω (I ∘ V − J)² dx + α² ∫Ω R(V) dx,    (2)
where R is the smoothness constraint (also known as regularization constraint) function, Ω is the image domain and α is a constant. Many optical flow algorithms use R(V) = tr((∇V)T(∇V)), where tr() is the matrix trace operator. If |V| is small, equation (2) could be expressed using a Taylor expansion of the first-order terms in the following differential form:
E1(V) = ∫Ω (Id + ∇I · V)² dx + α² ∫Ω R(V) dx,    (3)
where Id = J − I, ∇ is the gradient operator, · is the vector inner product operator.
V could be solved by minimizing E1 with many numerical methods, either iteratively or analytically. Barron et al (1994) and McCane et al (2001) reviewed many published optical flow algorithms and summarized them into four categories: differential techniques, region-based matching, energy-based methods and phase-based techniques. This paper uses the Horn–Schunck (HS) algorithm (Horn and Schunck 1981) and the demons diffusion algorithm (Thirion 1998). These two algorithms belong to the differential techniques, in which the differential form of the system cost equation is solved using the image intensity and gradient. Such differential optical flow algorithms are often referred to as small-motion-model algorithms because they only work if |V| is sufficiently small that the Taylor expansion can be applied.
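The small-motion requirement can be checked numerically. The 1D sketch below (our own illustration, with a made-up sinusoidal image) compares I(x − V) against the first-order Taylor expansion I(x) − I′(x)V that the differential form relies on, for a small and a large displacement:

```python
import numpy as np

# Numeric check (our own 1D illustration) of the small-motion model: for a
# small displacement V, I(x - V) is close to the first-order Taylor expansion
# I(x) - I'(x) V; for a large displacement the expansion breaks down.
x = np.arange(128, dtype=float)
I = np.sin(2 * np.pi * x / 64.0)
V = 0.2                                        # a small displacement (pixels)

exact = np.sin(2 * np.pi * (x - V) / 64.0)     # I(x - V), known analytically
grad = (2 * np.pi / 64.0) * np.cos(2 * np.pi * x / 64.0)
linear = I - grad * V                          # first-order approximation
error_small = np.max(np.abs(exact - linear))

V_large = 8.0                                  # a large displacement
exact_large = np.sin(2 * np.pi * (x - V_large) / 64.0)
error_large = np.max(np.abs(exact_large - (I - grad * V_large)))
```

The approximation error grows roughly quadratically with |V|, which is why multigrid and multiple-pass schemes are needed for large motions.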
1.1.2. Registration in the inverse direction
Traditionally, if image J needs to be registered to image I in the backward direction, the second motion field U needs to be computed so that
J ∘ U = I, i.e. J(x − U(x)) = I(x).    (4)
A similar system equation could be written as
E2(U) = ∫Ω (Id − ∇J · U)² dx + α² ∫Ω R(U) dx.    (5)
Even if V has already been computed, U has to be computed independently because there is unfortunately no direct dependence between the solutions for V and U. This is illustrated in figure 1(a).
Figure 1.

Illustration of asymmetric registration and inverse consistency error. Point A (in image I) and B (in image J) are matching points. V is computed by registering I to J. U is computed by registering image J to image I. (a) After imperfect asymmetric registrations, point A moves to point A′ and point B moves to point B′. (b) Using U, A′ will be moved to A″. Similarly, B″ is B′ moved by using V. The distance from A to A″, and from B to B″, are the inverse consistency errors.
1.1.3. Inverse consistency
It is desirable for many applications that V and U are inversely consistent so that registration could start with either image and the results are consistent. Inverse consistency could be written as
V = U−1 and U = V−1, or equivalently U ∘ V = V ∘ U = 0,    (6)
where the composition operator ∘ between two motion fields is defined in table 1.
The inverse consistency error (ICE) could then be defined as
ICE1 = |U ∘ V|,    (7)
ICE2 = |V ∘ U|.    (8)
If V and U are inversely consistent, ICE1 and ICE2 will be both 0. Otherwise, ICE1 and ICE2 will not be 0 and may not be the same, as illustrated in figure 1(b). A combined inverse consistency error term ICE can be defined as
ICE = (ICE1 + ICE2)/2.    (9)
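The following 1D sketch (our own illustration, with made-up constant fields) computes these error terms under the composition rule of table 1:

```python
import numpy as np

# A 1D sketch (our own illustration, with made-up constant fields) of the
# inverse consistency error of equations (7)-(9) under the composition rule
# of table 1.
x = np.arange(64, dtype=float)

def compose(V2, V1):
    return np.interp(x - V2, x, V1) + V2     # V2 o V1 = V1(x - V2(x)) + V2(x)

V = np.full_like(x, 2.0)      # forward field
U = np.full_like(x, -1.8)     # imperfect backward field (exact inverse: -2.0)

ICE1 = np.abs(compose(U, V))  # |U o V|
ICE2 = np.abs(compose(V, U))  # |V o U|
ICE = 0.5 * (ICE1 + ICE2)     # combined error
```

Here the backward field falls short of the true inverse by 0.2 pixel, and both ICE1 and ICE2 equal 0.2 everywhere.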
1.2. Previous inverse consistency methods
It is generally difficult to make V and U consistent if the image registration computations for the two directions are carried out separately or without explicit constraints on inverse consistency. Therefore, most inverse consistency registration algorithms perform computations for both directions simultaneously and explicitly constrain V and U to be, or to be close to, inversely consistent.
Christensen and Johnson (2001) seem to be among the earliest groups to consider inverse consistency for deformable image registration. In their algorithm, V and U were simultaneously computed by minimizing the symmetric system cost equation (10), which contained the similarity constraint, inverse consistency constraint and a diffeomorphism regularity constraint. Diffeomorphism refers to the continuous differentiability of the motion field as discussed below.
E(V, U) = ∫Ω (I ∘ V − J)² + (J ∘ U − I)² dx + χ ∫Ω |V − U−1|² + |U − V−1|² dx + ρ ∫Ω |LV|² + |LU|² dx,    (10)
where the linear elastic operator L = −a∇² − b∇(∇·) + c, a, b and c are constants, and χ and ρ weight the inverse consistency and regularity constraints. U and V were parameterized with Fourier series and solved iteratively. Both V and U needed to be inverted to obtain V−1 and U−1 in every iteration. The inversion procedure, to be further discussed later, was performed iteratively or analytically for a displacement motion vector field V by minimizing |V−1 ∘ V| or |V ∘ V−1|.
Alvarez et al (2007) proposed an algorithm based on the system cost equation (11). The algorithm does not explicitly invert the forward and reverse motion fields during the iterations. Instead, the inverse consistency error is computed and minimized per iteration.
E(V, U) = ∫Ω (I ∘ V − J)² + (J ∘ U − I)² dx + α ∫Ω ER(I, V) + ER(J, U) dx + β ∫Ω ES(V, U) + ES(U, V) dx,    (11)
where the regularization constraint ER(I, V) = tr((∇V)T D(∇I) ∇V), α and β are constants, the inverse consistency constraint ES(V, U) = |U ∘ V|2 and D (∇I) is a regularized projection matrix in the direction perpendicular to ∇I.
Cachier and Rey (2000) analyzed the reasons why the results of unidirectional registrations are asymmetric and pointed out that inversely inconsistent approaches penalize image expansion more than shrinkage. They proposed an inversion-invariant system cost equation, given in equation (12), and two finite element implementations to solve the new cost function, depending on whether motion field inversion is computed or not. Registration does not need to be performed simultaneously for both the forward and reverse directions in this method
E(V) = ∫Ω (I ∘ V − J)² + (J ∘ V−1 − I)² dx + α ∫Ω R(V) + R(V−1) dx.    (12)
Leow et al (2005) reported an approach to model the backward motion field by a function of the forward motion field, therefore inverse consistency registration can be computed without computing the inversion of the motion fields. They used a symmetric system cost function, similar to equation (10), with the V−1 and U−1 replaced by the functions of V and U.
Diffeomorphism algorithms (Dupuis et al 1998, Christensen et al 1996, Trouve 1998) are closely related to inverse consistency. A diffeomorphism is a continuous, differentiable and invertible transformation. These algorithms are often referred to as large-motion-model algorithms because the regularization term in the system cost function is different and the algorithms can compute smooth and continuous large motions. Dupuis et al (1998) showed theoretically that the solution for the diffeomorphism system cost equation is unique, smooth, differentiable and invertible. It should be understood that invertibility is not equivalent to inverse consistency, and diffeomorphism algorithms are not inversely consistent by default.
There are a few inverse consistency algorithms proposed under the diffeomorphism framework. Joshi et al (2004) proposed a method to construct a template image from multiple images for brain mapping. The major computation of this algorithm is done in the Fourier frequency domain. The system cost equation is given by
E = Σi=1,...,N ( ∫Ω |Ii ∘ Vi − Î|² dx + ∫0 to 1 ||Lvi(t)||² dt ),    (13)
where N is the total number of images, Î is the shape average image, which is updated during the iterations, Vi is the motion field that deforms image i to Î, vi is the velocity vector field for image i, L is a differential operator regularizing the velocity fields and Vi is obtained by integrating vi over time.
If the number of images is 2, then this method becomes an inverse consistency method and the system cost equation reduces to
E = ∫Ω |I1 ∘ V1 − Î|² + |I2 ∘ V2 − Î|² dx + ∫0 to 1 ||Lv1(t)||² + ||Lv2(t)||² dt.    (14)
Similar algorithms have also been proposed by Avants and Gee (2004) and by Beg and Khan (2007). These algorithms are all based on the idea that both images are deformed toward the ‘mean shape’ image in order to achieve better registration. Such an idea is quite similar to the basic concept of the method investigated in this paper. We will further compare our method to these algorithms in the later sections.
The goal of computing image registration with inverse consistency is to improve the registration accuracy and to provide consistent motion fields for both registration directions. Better accuracy has been achieved by adding additional inverse consistency constraints and using symmetric system cost functions. However, solving the more complicated registration problem is usually much slower.
2. Methods and materials
2.1. Overview
Our method is similar to the inverse consistency diffeomorphism algorithms (Joshi et al 2004, Avants and Gee 2004, Beg and Khan 2007), but focuses on simplicity and computational efficiency. As illustrated in figure 2, I and J are symmetrically deformed pass-by-pass toward each other. In and Jn denote I and J deformed after pass n. Registration is achieved when In and Jn match.
Figure 2.

Demonstration of the proposed inversely consistent registration method. Matching points A and B are in image I and image J, respectively. After n passes, A is moved to point A′ and B is moved to point B′. A′ and B′ are in close proximity, but are not perfectly registered. Vn and Un are the overall motion fields. The delta motion fields ΔVn and ΔUn are computed in each pass.
At pass n, a delta motion field ΔVn, is computed by minimizing a symmetric optical flow system cost equation (to be discussed in the following section) using modified optical flow algorithms. The two overall motion fields, Vn for image I and Un for image J, are updated by accumulating ΔVn and −ΔVn as
Vn = ΔVn ∘ Vn−1,    (15)
Un = (−ΔVn) ∘ Un−1,    (16)
and In and Jn are then updated as
In = I ∘ Vn,    (17)
Jn = J ∘ Un.    (18)
The two new deformed images In and Jn will be used for the next pass.
Initially, V0 = U0 = 0. Because ΔVn is a ‘pull-back’ motion field (defined on the voxel grid of In and Jn instead of the voxel grid of I and J), Vn ≠ −Un for pass numbers n > 1; therefore, Vn and Un are updated individually. The magnitude of ΔVn is forced to be less than 0.4 voxel in order to ensure the smoothness and invertibility of Vn and Un, as discussed below. If the registration direction is reversed, it can be shown that In and Jn will be swapped, and consequently Vn and Un will be swapped.
The final motion fields, VIJ which registers I to J, and UJI which registers J to I, are calculated as
VIJ = (Un)−1 ∘ Vn,    (19)
UJI = (Vn)−1 ∘ Un,    (20)
from the last Vn and Un. It can be shown that VIJ and UJI are inversely consistent to each other. If the registration direction is reversed, Vn and Un simply swap. VIJ and UJI will also be simply swapped. Because VIJ and UJI are inversely consistent, the final motion fields computed in the forward and the backward registration directions are inversely consistent. Both VIJ and UJI can be computed in one step regardless of the registration direction.
2.2. Symmetric optical flow system cost equation
At pass n, we compute the delta motion fields ΔVn and ΔUn to achieve further image registration between the current deformed In−1 and Jn−1. In−1 is deformed using ΔVn and generates In according to
In = In−1 ∘ ΔVn.    (21)
Jn−1 is deformed using ΔUn and generates Jn according to
Jn = Jn−1 ∘ ΔUn.    (22)
ΔVn and ΔUn are solved by minimizing the following new system cost equation:
E(ΔVn, ΔUn) = ∫Ω (In−1 ∘ ΔVn − Jn−1 ∘ ΔUn)² dx + β² ∫Ω R(ΔVn) + R(ΔUn) dx.    (23)
To simplify the new equation, we add another hard constraint on ΔVn and ΔUn
ΔUn = −ΔVn,    (24)
and we select the smoothness regularity function R( ) so that
R(ΔVn) = R(−ΔVn),    (25)
and let α² = 2β²; then the system cost equation could be rewritten in the following differential form using a Taylor expansion:
E(ΔVn) = ∫Ω (Id + (∇In−1 + ∇Jn−1) · ΔVn)² dx + α² ∫Ω R(ΔVn) dx,    (26)
which is simplified into
E(ΔVn) = ∫Ω (Id + ∇IS · ΔVn)² dx + α² ∫Ω R(ΔVn) dx,    (27)
where IS = In−1 + Jn−1 and Id = Jn−1 − In−1.
One can see that equation (27) has exactly the same form as equation (3). This means that the intermediate deformation field ΔVn can be solved with the same algorithms that solve equation (3), while setting ΔUn = −ΔVn. Most regularization functions, including the ordinary optical flow global smoothness function R(V) = tr((∇V)T (∇V)), are even and therefore satisfy equation (25).
2.3. Solving the system cost equation
2.3.1. Case 1: Horn–Schunck (HS) optical flow algorithm
The original Horn–Schunck (HS) (Horn and Schunck 1981) algorithm solves equation (3) using the following iterative solution:
Vk+1 = V̄k − ∇I (Id + ∇I · V̄k)/(α² + |∇I|²),    (28)
where Vk is the motion field at iteration k and V̄k is the neighborhood average of Vk at each pixel.
To solve the new system cost equation (27), the iterative equation is modified slightly to
Vk+1 = V̄k − ∇IS (Id + ∇IS · V̄k)/(α² + |∇IS|²),    (29)
where IS = In−1 + Jn−1 and Id = Jn−1 − In−1.
After all iterations are finished, the last Vk+1 is ΔVn, the desired solution for equation (27).
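The modified iteration can be sketched as follows in 1D NumPy (our own illustration, not the authors' MATLAB code; the image pair is made up). The only change from the classic HS update is that IS = In−1 + Jn−1 supplies the gradient and Id = Jn−1 − In−1 the intensity difference:

```python
import numpy as np

# A 1D NumPy sketch (our own illustration) of the modified HS iteration for
# equation (27): the gradient of Is = I + J replaces the gradient of I alone.
def hs_delta_field(I, J, alpha=0.2, iterations=5):
    g = np.gradient(I + J)               # gradient of Is
    Id = J - I
    V = np.zeros_like(I)
    for _ in range(iterations):
        # neighborhood average of V (the V-bar term of the HS scheme)
        V_bar = np.convolve(V, [0.5, 0.0, 0.5], mode='same')
        V = V_bar - g * (Id + g * V_bar) / (alpha**2 + g**2)
    return V

x = np.arange(64, dtype=float)
I = np.exp(-0.5 * ((x - 30.0) / 4.0) ** 2)   # moving image: bump at 30
J = np.exp(-0.5 * ((x - 31.0) / 4.0) ** 2)   # fixed image: same bump at 31
dV = hs_delta_field(I, J)                    # positive near the bump
```

With the fixed image shifted one sample to the right, the computed delta field is positive around the bump, i.e. it moves I toward J as expected.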
2.3.2. Case II: using demons algorithm
The original demons algorithm (Thirion 1998) solves equation (3) using this iterative solution
Vk+1 = Gσ ∗ (Vk − Id ∇J/(|∇J|² + Id²)),    (30)
where Gσ is a Gaussian lowpass filter with window width σ and k is the iteration number.
To solve equation (27), we replace the gradient terms with ∇In−1 + ∇Jn−1. We also do not have to use multiple iterations because we are already applying multiple passes at the In and Jn levels. In this way, the equation can be reduced to
ΔVn = Gσ ∗ (−Id (∇In−1 + ∇Jn−1)/(|∇In−1 + ∇Jn−1|² + Id²)).    (31)
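A 1D NumPy sketch of this single-step update (our own illustration, with a made-up image pair) follows. The usual fixed-image gradient is replaced by the sum of both gradients, and one Gaussian-smoothed update replaces the inner iterations:

```python
import numpy as np

# A 1D NumPy sketch (our own illustration) of the single-step modified demons
# update for equation (27): the gradient term is grad(I) + grad(J), and one
# Gaussian-smoothed update replaces the inner iterations, since the method
# already runs multiple passes.
def gaussian_smooth(V, sigma):
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    return np.convolve(V, k / k.sum(), mode='same')

def demons_delta_field(I, J, sigma=2.0):
    Id = J - I
    g = np.gradient(I) + np.gradient(J)
    denom = g**2 + Id**2 + 1e-12         # small epsilon avoids division by zero
    return gaussian_smooth(-g * Id / denom, sigma)

x = np.arange(64, dtype=float)
I = np.exp(-0.5 * ((x - 30.0) / 4.0) ** 2)   # moving image: bump at 30
J = np.exp(-0.5 * ((x - 31.0) / 4.0) ** 2)   # fixed image: same bump at 31
dV = demons_delta_field(I, J)
```

Note that |g·Id/(g² + Id²)| is bounded by 0.5, so the raw demons step is already small before the explicit magnitude limiting described below.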
2.4. Inversion of Vn and Un
2.4.1. Guarantee of invertibility
Vn and Un must be invertible so that the final motion fields can be computed. To be invertible, Vn and Un should be smooth, free of folding, and the determinant of the Jacobian matrix should be strictly positive (Leow et al 2005). However, neither the HS algorithm nor the demons algorithm guarantees the invertibility of Vn and Un. Therefore, the following small-step multiple-pass approach is used to ensure it.
The strategy is to compute Vn and Un in small incremental steps. If every ΔVn is diffeomorphic, then Vn and Un, which accumulate ΔVn and −ΔVn, will also be diffeomorphic. Such an approach has been reported previously (Cootes et al 2004, Rueckert et al 2006). Rueckert et al reported that the maximal displacement of the control points for their cubic B-spline algorithm needs to be less than 0.40 of the control point spacing for the motion field to be diffeomorphic. This conclusion can be indirectly applied to the optical flow algorithms by treating each voxel as a B-spline control point: if |ΔVn| is less than 0.4 voxel, ΔVn will be diffeomorphic, and Vn and Un will also be diffeomorphic.
We used the following ad hoc step after every pass to explicitly reduce the magnitude of ΔVn to 0.4 voxel if it is greater than 0.4 voxel
ΔVn(x) ← 0.4 ΔVn(x)/|ΔVn(x)|  if |ΔVn(x)| > 0.4.    (32)
There are alternative approaches to guarantee diffeomorphism for ΔVn. Vercauteren et al (2007) reported using exp(ΔVn) to replace ΔVn. The term exp(ΔVn) is approximated by composing ΔVn/m with itself m times; for example, with m = 16, exp(ΔVn) ≈ (ΔVn/16) ∘ (ΔVn/16) ∘ · · · ∘ (ΔVn/16). Because ΔVn/m is diffeomorphic if m is large, exp(ΔVn) will be diffeomorphic. One problem of this approach is that exp(ΔVn) ≠ ΔVn, in neither direction nor magnitude; therefore exp(ΔVn) is a rough but diffeomorphic approximation of ΔVn.
Neither our magnitude limiting procedure nor the method of using exp(ΔVn) is perfect. If ΔVn is accurate, then errors will be introduced by either method. Such errors have to be recovered in the next pass, and could slow down the overall convergence. Our method is, however, simpler to implement and more computationally efficient.
Smoothness of ΔVn after the magnitude limiting procedure should not be a concern because ΔVn is discrete and the largest possible magnitude difference of ΔVn between two adjacent voxels is 0.4 × 2 = 0.8. However, if more smoothness is desired, ΔVn can be smoothed by a Gaussian lowpass filter as Gσ (ΔVn) → ΔVn, where σ is the window size. The maximal magnitude of ΔVn will still be less than 0.4 after such a lowpass filtering step and the smoothed ΔVn will still be diffeomorphic.
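The magnitude limiting step can be sketched as follows (our own 2D NumPy illustration, with a made-up random field): any displacement vector longer than 0.4 voxel is rescaled to exactly 0.4 voxel, and shorter vectors are left untouched:

```python
import numpy as np

# 2D NumPy sketch (our own illustration) of the per-voxel magnitude limiting
# of equation (32): vectors longer than 0.4 voxel are rescaled to 0.4 voxel.
def limit_magnitude(dV, max_mag=0.4):
    mag = np.sqrt(np.sum(dV**2, axis=0))                 # |dV| per voxel
    scale = np.where(mag > max_mag, max_mag / np.maximum(mag, 1e-12), 1.0)
    return dV * scale

rng = np.random.default_rng(0)
dV = rng.normal(0.0, 0.5, size=(2, 8, 8))    # a random (y, x) displacement field
limited = limit_magnitude(dV)
```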
2.4.2. Motion field inversion
Vn and Un can be inverted in a few different ways. The easiest way, as used in many diffeomorphism algorithms, is to integrate (accumulate) the inverses of the delta motion fields during the passes, i.e. to integrate the inverses of ΔVn and ΔUn (ΔUn = −ΔVn). Because the magnitude of ΔVn is small, (ΔVn)−1 can be approximated as −ΔVn. Integration of (ΔVn)−1 and (ΔUn)−1 is slightly different from the computation of Vn and Un by accumulating ΔVn and ΔUn, because (ΔVn)−1 and (ΔUn)−1 are push-forward motion fields while ΔVn and ΔUn are pull-back motion fields. However, approximating (ΔVn)−1 by −ΔVn does not work very well with multigrid approaches because a small ΔVn in the coarse image resolution stage corresponds to a larger motion in the finer resolution stage and will have larger approximation errors. To reduce such errors in the finer resolution stage, the spatial step (maximal |ΔVn|) in the coarse stage must be very small, much smaller than 0.4 voxel. Using very small spatial steps contradicts the idea of using the multigrid approach, since the multigrid approach is applied to improve computation speed.
Methods to directly compute the inverse motion field have been reported (Christensen and Johnson 2001, Cachier and Rey 2000). Ashburner reported a fast method based on the idea of tetrahedral and affine transformation inversion (Ashburner et al 2000). We used this method in this work because it is computationally efficient and accurate. The method is already implemented in the statistical parametric mapping (SPM) (Friston 2006) version 5 package. We evaluated the code from SPM and found that it worked well for all our tested cases, with average error <0.05 pixel and maximal error <0.1 pixel.
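The Ashburner/SPM tetrahedral method used in this work is not reproduced here. As a simple stand-in, the 1D fixed-point sketch below (our own illustration) iterates W ← −V(x − W(x)); a fixed point satisfies V(x − W(x)) + W(x) = 0, which is exactly the inverse condition W ∘ V = 0 under the composition rule of table 1:

```python
import numpy as np

# A simple 1D fixed-point inversion sketch (our own stand-in, not the SPM
# method): iterate W <- -V(x - W(x)). A fixed point satisfies
# V(x - W(x)) + W(x) = 0, i.e. W o V = 0 under the table 1 composition.
x = np.arange(64, dtype=float)

def invert_field(V, iterations=20):
    W = -V.copy()                        # first-order guess, valid for small V
    for _ in range(iterations):
        W = -np.interp(x - W, x, V)
    return W

V = 0.3 * np.sin(2 * np.pi * x / 64.0)   # a smooth, small displacement field
W = invert_field(V)
residual = np.interp(x - W, x, V) + W    # ~0 when W is the inverse of V
```

The iteration is a contraction whenever |∇V| < 1, which is comfortably satisfied by the small, smooth delta fields produced by the magnitude limiting step.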
2.5. The entire procedure
The entire inverse consistency method can be described in following pseudo code:
1. Let the pass number n = 0 and V0 = U0 = 0.
2. Compute the deformed images In and Jn according to equations (17) and (18).
3. Use one of the modified optical flow algorithms to perform registration between In and Jn and compute ΔVn+1.
4. Limit the magnitude of ΔVn+1 according to equation (32), and optionally smooth ΔVn+1 with a Gaussian lowpass filter.
5. Let n = n + 1, update the overall motion fields Vn and Un according to equations (15) and (16), and optionally smooth Vn and Un with another Gaussian lowpass filter.
6. If the results have not converged and n is less than the maximal number of passes allowed, go back to step 2 for the next pass. Convergence is determined by checking whether the maximal magnitude of ΔVn is less than a user-set value, for example, 0.01 voxel.
7. Otherwise, compute the final deformation fields according to equations (19) and (20).
The entire procedure is similar to a regular asymmetric registration procedure, with additional steps 4 and 7. An important difference is that the computation needs to be carried out for both images to update In and Jn, Vn and Un, while a regular asymmetric procedure often only needs to compute similar variables for one image. Optional smoothing in step 4 helps to smooth ΔVn after the magnitude of ΔVn is limited. Optional smoothing in step 5 helps to diffuse the motion from high contrast regions into neighborhoods with low contrast regions (Thirion 1998).
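The steps above can be sketched end-to-end in 1D NumPy (our own toy illustration using the demons variant and a made-up Gaussian image pair; the paper's implementation is in MATLAB with multigrid stages, which are omitted here):

```python
import numpy as np

# End-to-end 1D toy sketch (our own illustration, demons variant) of the
# procedure: symmetric passes, the 0.4-voxel step limit, accumulation of Vn
# and Un, and the final fields of equations (19), (20) via inversion.
x = np.arange(128, dtype=float)

def warp(img, V):                        # I o V = I(x - V(x))
    return np.interp(x - V, x, img)

def compose(V2, V1):                     # V2 o V1 = V1(x - V2(x)) + V2(x)
    return np.interp(x - V2, x, V1) + V2

def invert(V, iters=30):                 # fixed point of W = -V(x - W(x))
    W = -V.copy()
    for _ in range(iters):
        W = -np.interp(x - W, x, V)
    return W

def smooth(V, sigma=2.0):
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    return np.convolve(V, k / k.sum(), mode='same')

def register(I, J, passes=60):
    Vn = np.zeros_like(I)
    Un = np.zeros_like(I)
    for _ in range(passes):
        In, Jn = warp(I, Vn), warp(J, Un)            # step 2
        Id = Jn - In
        g = np.gradient(In) + np.gradient(Jn)
        dV = smooth(-g * Id / (g**2 + Id**2 + 1e-12))  # step 3, demons
        dV = np.clip(dV, -0.4, 0.4)                  # step 4, 1D limiting
        if np.max(np.abs(dV)) < 0.01:                # step 6, convergence
            break
        Vn = compose(dV, Vn)                         # step 5
        Un = compose(-dV, Un)
    V_IJ = compose(invert(Un), Vn)                   # step 7
    U_JI = compose(invert(Vn), Un)
    return V_IJ, U_JI

I = np.exp(-0.5 * ((x - 60.0) / 8.0) ** 2)           # moving image
J = np.exp(-0.5 * ((x - 64.0) / 8.0) ** 2)           # fixed, 4-sample shift
V_IJ, U_JI = register(I, J)
```

On this toy pair, the deformed moving image matches the fixed image far better than the undeformed one, and the two final fields are close to mutual inverses.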
2.6. Implementation
The proposed method is implemented primarily in MATLAB with the Image Processing Toolbox. The motion field inversion procedure is implemented in C/C++. Besides the multiple-pass approach, we also used a multigrid approach to sequentially carry out the registration over multiple down-sampled image resolution stages. We used five stages for the pig lung image set, and four stages for the Yosemite, liver and kidney, and patient lung image sets. The number of stages is selected to ensure capture of the largest possible image deformations. We used eight passes for each stage. For the HS algorithm, we used five iterations per pass with α = 0.2. For the demons algorithm, we used σ = 2 pixels and did not use multiple iterations.
Before the two images were registered, their intensities were always normalized to [0, 1] by dividing by the common maximal intensity value. The Laplacian pyramid down-sampling filter (Burt and Adelson 1983) was used to half-sample the images in the multigrid approach. Bilinear (for 2D images) or trilinear (for 3D images) interpolation was used for situations where interpolation is needed. The differential mask [−1 8 0 −8 1]/12 was used for all gradient computations.
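The differential mask can be verified quickly (our own NumPy check, with a made-up test function): this fourth-order central difference is exact for polynomials up to degree 4, so it recovers the derivative of f(x) = x² exactly away from the edges:

```python
import numpy as np

# Quick check (our own illustration) of the differential mask [-1 8 0 -8 1]/12:
# applied via convolution it computes a fourth-order central difference,
# exact for polynomials up to degree 4.
kernel = np.array([-1.0, 8.0, 0.0, -8.0, 1.0]) / 12.0
x = np.arange(32, dtype=float)
f = x**2
df = np.convolve(f, kernel, mode='same')   # approximates f'(x) = 2x
```

Only the two samples at each boundary are contaminated by the implicit zero padding of the convolution.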
3. Evaluation
3.1. Image data sets
We used three 2D image datasets and one 3D image dataset to test the new method. All three 2D image datasets are accompanied by ground truth motion fields. The 3D image dataset does not have ground truth deformation fields; therefore, manually selected landmarks were used for accuracy validation.
3.1.1. Yosemite sequence—2D
We used the first two images from the Yosemite 2D image sequence, which was originally generated at SRI (Barron et al 1994). The Yosemite image sequence is widely used for the validation of deformable image registration algorithms. The ground truth motion field is known; the maximal motion magnitude is 5.19 pixels. Figure 3 shows both images and their difference. Evaluation using this image dataset makes it possible to directly compare our method to other reported deformable registration algorithms.
Figure 3.

Yosemite images: (a) the Yosemite 0 image, used as moving image, (b) the Yosemite 1 image, used as fixed image, overlaid with ground truth motion field, (c) the difference image.
3.1.2. CT images of pig lung phantom—2D
We used a cropped CT slice of a pig lung phantom (Yang et al 2007b), with pixel size of 0.2441 × 0.2441 mm. The CT slice was deformed according to a synthesized motion field. The original CT slice is used as the moving image. The generated one is used as the fixed image. The synthesized motion field is the ground truth. The maximal motion was 25 pixels. Figure 4 shows the original and the deformed CT slice, the difference image and the optical flow vector field. We also deformed the CT slice with a similar synthesized motion field with maximal motion magnitude of 5 pixels. The second generated image is used for convergence analysis.
Figure 4.

Pig lung CT images: (a) the original slice, (b) the generated image, (c) the difference image, (d) the generated image with ground truth motion field vectors. The difference image is limited between [−0.5, 0.5] in order to show the difference better.
3.1.3. Patient upper abdominal CT scan—2D
We used a transverse slice of a patient upper abdominal CT scan, containing liver and kidney, with pixel size 0.9766 × 0.9766 mm. This CT scan has much lower image contrast than the pig lung CT scan, making it a good test of low-contrast situations. We deformed this CT scan with a synthesized motion field. The maximal motion was 10 pixels. Figure 5 shows the original and the deformed CT slice, the difference and the synthesized motion field. For convergence analysis, we used another similar motion field with maximal magnitude of 3 pixels to deform the CT slice.
Figure 5.
The liver and kidney CT images: (a) the original slice, (b) the generated image, (c) the difference, (d) the generated image overlaid with motion field vectors. The difference image is scaled and limited between [−0.2, 0.2] in order to visualize the differences better.
3.1.4. Patient 4D-CT images
We used two 3D-CT volumes from a patient 4D-CT dataset. This dataset was used in a multi-institutional evaluation study of deformable registration (Brock 2007). The first volume represents an exhalation phase and the second one represents an inhalation phase. The dimensions are 512 × 512 × 152 for both phases, with voxel size of 0.9766 × 0.9766 × 2.5 mm. For this study, we first cropped the 3D-CT volumes to remove almost everything but lung and then half-sampled every 2D transverse slice. The volume dimensions became 122 × 150 × 110, and the voxel size became 1.9532 × 1.9532 × 2.5 mm. Figure 6 shows coronal slices of both volumes, the difference image and the checkerboard image. In total, 38 corresponding landmarks were manually selected by physicians on the original 512 × 512 × 152 volumes. Among the 38 landmarks, 17 were in the right lung, 17 were in the left lung, 2 were in the heart and 2 were in the aorta. The landmark selection procedure was reported in Brock (2007).
Figure 6.

Patient 4D-CT images. (a) The exhaled phase—the moving image, (b) the inhaled phase—the fixed image, (c) the difference.
3.2. Evaluation procedures
We performed several quantitative comparisons to evaluate the proposed inverse consistency method against the corresponding asymmetric optical flow algorithms. We always used the same parameters, multigrid and multiple pass settings in order to make the comparison as fair as possible.
3.2.1. Accuracy validation
After the resulting motion field is computed, the displacement error vector field Verr is computed as

Verr = V − Vgt, (33)

where V is the computed field and Vgt is the ground truth field. The mean, standard deviation and maximal values are computed for |Verr|. For the 2D images, the mean and standard deviation of the absolute angular error Aerr are also computed. Aerr is defined as

Aerr = |tan−1(Vy/Vx) − tan−1(Vy,gt/Vx,gt)|, (34)

where tan−1 is the inverse tangent function, Vx and Vy are the x and y components of V, and Vx,gt and Vy,gt are the x and y components of Vgt. The ground truth motion fields are available for the 2D images; Verr and Aerr are therefore computed over the entire image.
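The two error metrics above can be sketched as follows. This is an illustrative implementation, not the authors' code; `arctan2` with angle wrapping is used in place of the raw inverse tangent of equation (34) to avoid division by zero and quadrant ambiguity.

```python
import numpy as np

def displacement_and_angular_error(V, V_gt):
    """Per-pixel error metrics for a 2D motion field.

    V, V_gt : arrays of shape (H, W, 2), components ordered (x, y).
    Returns |Verr| per pixel and the absolute angular error in degrees.
    """
    V_err = V - V_gt                          # equation (33)
    mag_err = np.linalg.norm(V_err, axis=-1)  # |Verr|

    # Absolute angular error, cf. equation (34). Wrapping the angle
    # difference onto (-pi, pi] keeps the error in [0, 180] degrees.
    ang = np.arctan2(V[..., 1], V[..., 0])
    ang_gt = np.arctan2(V_gt[..., 1], V_gt[..., 0])
    A_err = np.degrees(np.abs(np.angle(np.exp(1j * (ang - ang_gt)))))
    return mag_err, A_err
```

The mean, standard deviation and maximum reported in tables 2 and 3 follow directly from `mag_err.mean()`, `mag_err.std()` and `mag_err.max()`.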
For the 4D-CT image dataset, Verr is computed only at the landmark points. If a landmark point is not located at the center of a voxel, linear interpolation is used to compute V at the landmark point. We did not compute angular errors for this dataset. Instead, we computed the mean, maximum and standard deviation of Verr,x, Verr,y and Verr,z, where Verr,x, Verr,y and Verr,z are the x, y and z components of Verr. The x, y and z directions of 3D image volumes are also referred to as the LR (left–right), AP (anterior–posterior) and SI (superior–inferior) directions.
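Sampling the field at off-grid landmark positions can be done with trilinear interpolation; a minimal sketch using `scipy.ndimage.map_coordinates` (an assumed helper, not the authors' implementation) is:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def field_at_landmarks(V, landmarks):
    """Trilinearly interpolate a 3D motion field at landmark positions.

    V         : array (Z, Y, X, 3), one displacement vector per voxel.
    landmarks : array (N, 3) of (z, y, x) voxel coordinates, possibly
                fractional, since landmarks need not sit on voxel centers.
    Returns an (N, 3) array of interpolated displacement vectors.
    """
    coords = landmarks.T  # shape (3, N), as map_coordinates expects
    # order=1 selects trilinear interpolation of each field component.
    return np.stack(
        [map_coordinates(V[..., c], coords, order=1) for c in range(3)],
        axis=-1,
    )
```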
MSE (mean squared error) and MI (mutual information) between the deformed image I and the fixed image J are widely used in the literature as indirect measurements of image registration accuracy. We avoided using them in this work because (1) they are not accurate measurements of registration accuracy, since a better MSE or MI does not directly translate into higher accuracy, and (2) MI is not useful for single-modality image registration analysis.
3.2.2. Inverse consistency evaluation
For the asymmetric algorithms, we carried out registration in both the forward and backward directions. For the proposed inverse consistency method, we performed registration only once, in the forward direction; the motion fields for both registration directions were computed from the results of this single computation. The inverse consistency error ICE is computed according to equation (9) using the motion fields of both directions.
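Equation (9) is not reproduced in this section; a common form of the per-pixel inverse consistency error composes the forward field V with the backward field U and measures the residual, which a consistent pair drives to zero. A hedged 2D sketch:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_consistency_error(V, U):
    """Per-pixel inverse consistency error for 2D motion fields.

    V : forward field (H, W, 2), components (dy, dx), defined on image I.
    U : backward field (H, W, 2), defined on image J.
    A perfectly consistent pair satisfies V(x) + U(x + V(x)) = 0; the
    magnitude of the residual is reported as the ICE (cf. equation (9)).
    """
    H, W = V.shape[:2]
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    # Sample U at the forward-warped positions x + V(x).
    coords = [yy + V[..., 0], xx + V[..., 1]]
    U_warp = np.stack(
        [map_coordinates(U[..., c], coords, order=1, mode='nearest')
         for c in range(2)],
        axis=-1,
    )
    return np.linalg.norm(V + U_warp, axis=-1)
```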
3.2.3. Convergence analysis
We evaluated the algorithms for two types of convergence: converging to the ground truth and converging to a stable solution. Note that converging to a stable solution does not necessarily mean that the stable solution is the known ground truth, as these problems are non-convex. We used only the 2D image sets for this convergence study because the computation cost would be too high for the 4D-CT data. Registrations for the convergence analysis were performed without the multigrid approach so that the native convergence ability of the methods could be evaluated.
3.2.4. Analysis of accuracy improvement
We have observed that the proposed inverse consistency optical flow algorithm is more accurate than the inversely inconsistent versions of the HS and demons algorithms (see section 4). We conjecture that two factors could contribute to this accuracy improvement: (1) using the spatial gradients of both images; (2) the symmetric formulation that deforms both images toward the middle. To test this conjecture, we performed the following experimental simulations. We used the sum of the gradients of both images to replace the single image gradient in the original optical flow algorithms, and performed registration in the same asymmetric way as the original algorithms. This means using equation (27) to solve for V as defined in equation (1). Results of these modified asymmetric optical flow algorithms were compared to the results of the original algorithms and of the proposed inverse consistency algorithms. The same multigrid, multiple-pass and smoothing parameters were used for all three algorithms to ensure that the comparisons were fair.
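The gradient substitution tested above can be sketched with a demons-style force term. The normalization below is illustrative, not taken from the paper's equations; only the switch from the single image gradient to the sum of both gradients reflects the modification described in this section.

```python
import numpy as np

def demons_update(I, J, use_both_gradients=True, eps=1e-9):
    """One demons-style delta-field update for 2D images (illustrative).

    The classic asymmetric demons force uses only the fixed-image
    gradient; the modification tested in section 3.2.4 replaces that
    single gradient with the sum of both image gradients.
    """
    diff = J - I
    gy_J, gx_J = np.gradient(J)
    if use_both_gradients:
        gy_I, gx_I = np.gradient(I)
        gy, gx = gy_I + gy_J, gx_I + gx_J   # sum of both image gradients
    else:
        gy, gx = gy_J, gx_J                 # original single gradient
    denom = gx**2 + gy**2 + diff**2 + eps   # demons-style normalization
    return np.stack([diff * gy / denom, diff * gx / denom], axis=-1)
```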
4. Results
4.1. Accuracy validation
Accuracy validation results are tabulated in tables 2 and 3. Figure 7 shows plots and images for sample results obtained by the inverse consistency demons algorithm on the liver and kidney images. These results show that the proposed method always achieves higher accuracy than the corresponding inversely inconsistent HS or demons algorithms on all image datasets. The reader should be advised that the presented results are only for comparisons within each algorithm pair. They should not be taken as the best possible accuracy these algorithms can achieve, because the accuracy can always be improved with better optimization settings and more passes. Although we have presented accuracy measurements for both pairs of algorithms, we did not intend to compare accuracy across the different algorithms. For the demons algorithms, we applied Gaussian lowpass filtering with σ = 2 pixels to the motion field after every iteration. Such strong smoothing may be the reason that the demons algorithms work better for the pig lung and liver–kidney images, whose ground truth motion fields are very smooth. Such strong smoothing may or may not be beneficial for other images, especially those with less continuous motion.
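The per-iteration field smoothing described above amounts to component-wise Gaussian filtering; a minimal sketch (assuming `scipy.ndimage`, not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_field(V, sigma=2.0):
    """Gaussian lowpass filter a 2D motion field after an iteration.

    Each spatial component of V (shape (H, W, 2)) is smoothed
    independently with sigma in pixels (sigma = 2 was used for the
    demons runs reported here); the trailing vector axis is left
    unsmoothed by passing sigma 0 for it.
    """
    return gaussian_filter(V, sigma=(sigma, sigma, 0))
```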
Table 2.
Accuracy test results with Yosemite, pig lung CT, patient liver and kidney images. HS stands for Horn–Schunck. IC stands for the inverse consistency version of an algorithm.
| Algorithms | Displacement error (pixel) | | | Angular error (degree) | | |
|---|---|---|---|---|---|---|
| | Yosemite | Pig lung | Liver and kidney | Yosemite | Pig lung | Liver and kidney |
| HS | 0.25 ± 0.38 | 1.08 ± 3.01 | 0.70 ± 1.06 | 5.6 ± 8.6 | 3.0 ± 8.9 | 2.9 ± 4.8 |
| HS-IC | 0.11 ± 0.16 | 0.29 ± 1.34 | 0.46 ± 0.85 | 2.6 ± 4.6 | 0.8 ± 3.5 | 1.5 ± 3.3 |
| Demons | 0.26 ± 0.37 | 3.48 ± 5.49 | 0.72 ± 1.14 | 7.3 ± 12 | 8.9 ± 15.3 | 4.0 ± 5.7 |
| Demons-IC | 0.12 ± 0.19 | 0.09 ± 0.39 | 0.06 ± 0.08 | 3.2 ± 5.0 | 0.4 ± 1.8 | 0.32 ± 0.7 |
Table 3.
Accuracy test results with patient 4D-CT lung images.
| Algorithms | LR (mm) | AP (mm) | SI (mm) | Abs (mm) | Max abs (mm) |
|---|---|---|---|---|---|
| HS | 0.99 ± 1.12 | 0.97 ± 1.23 | 1.37 ± 1.38 | 2.25 ± 1.83 | 7.14 |
| HS-IC | 0.73 ± 0.71 | 0.57 ± 0.77 | 0.79 ± 0.83 | 1.38 ± 1.16 | 4.85 |
| Demons | 1.32 ± 1.17 | 1.84 ± 1.87 | 1.62 ± 1.34 | 3.14 ± 2.12 | 10.18 |
| Demons-IC | 0.76 ± 0.77 | 0.81 ± 1.09 | 0.76 ± 0.91 | 1.56 ± 1.40 | 5.42 |
Figure 7.

Results of the demons algorithms on the liver and kidney images: (a)–(c) are by the original demons algorithm, (d)–(f) by the inverse consistency demons algorithm. (a) and (d) are histograms (in log10) of the absolute displacement error magnitude; note that the x-axis scales differ (maximal error 6 versus 1.4 pixels). (b) and (e) are the color-mapped absolute registration errors in pixels; (c) and (f) are the color-mapped angular errors in degrees.
4.2. Inverse consistency evaluation
Inverse consistency measurement results are tabulated in table 4. It is not surprising to see that inverse consistency has been greatly improved by the proposed method. Figure 8 shows the inverse consistency errors by the demons algorithms on the liver and kidney images. These plots and figures show the significance of the achieved improvement. Note that the scales in the two plots are different by about three orders of magnitude.
Table 4.
Inverse consistency error (in pixels) of the deformation field.
| Algorithms | Yosemite | Pig lung | Liver and kidney | 4D-CT lung |
|---|---|---|---|---|
| HS | 0.36 ± 0.68 | 1.74 ± 2.1 | 0.96 ± 0.84 | 0.39 ± 0.64 |
| HS-IC | 0.88 × 10−3 ± 1.3 × 10−3 | 3.37 × 10−2 ± 0.34 | 1.5 × 10−3 ± 7.8 × 10−3 | 1.1 × 10−3 ± 3.1 × 10−2 |
| Demons | 0.25 ± 0.22 | 0.98 ± 1.16 | 0.45 ± 0.36 | 0.56 ± 0.34 |
| Demons-IC | 0.73 × 10−3 ± 0.79 × 10−3 | 5.04 × 10−3 ± 0.11 | 7.2 × 10−4 ± 6.1 × 10−4 | 2.2 × 10−3 ± 1.8 × 10−2 |
Figure 8.
Inverse consistency errors computed by the demons algorithms on the liver and kidney CT images: (a) and (c) are by the original demons algorithm, (b) and (d) by the inverse consistency demons algorithm. (a) and (b) are histograms (in log10) of the per-pixel inverse consistency error; note that the x-axis scale in (b) is in units of 10−3. (c) and (d) are the color-mapped images; note that the color map scale in (d) is in units of 10−3. These results suggest that the inverse consistency error has been reduced by about three orders of magnitude.
4.3. Speed of convergence
The convergence study results are shown in Figure 9. From these results, we can easily see that the proposed method converges faster to the ground truth and to stable solutions.
Figure 9.
Convergence study results: (a) and (b) use the Yosemite images; (c) and (d) use the pig lung CT images; (e) and (f) use the liver and kidney CT images. (a), (c) and (e) plot convergence to the ground truth; (b), (d) and (f) plot convergence to a stable solution.
4.4. Computation time comparison
Table 5 lists the recorded computation time for every registration algorithm on every image dataset. All computations were performed on a Windows XP desktop PC with dual Intel Xeon 3.00 GHz CPUs and 3 GB of RAM. We defer the discussion of these results to the discussion section.
Table 5.
Computation time comparison. For the inverse consistency algorithms, the time used by the final motion field inversion procedure is listed separately in parentheses.
| Algorithms | Computation time (s) | | | |
|---|---|---|---|---|
| | Yosemite | Pig lung | Liver and kidney | 4D-CT lung |
| HS | 14 | 40 | 17 | 385 |
| HS-IC | 20 (+6) | 61 (+21) | 17 (+6) | 617 (+30) |
| Demons | 10 | 24 | 8 | 493 |
| Demons-IC | 14 (+6) | 47 (+21) | 18 (+7) | 696 (+31) |
4.5. Comparison to the modified asymmetric demons algorithm
The results in table 6 suggest that replacing the single image gradient term in the original demons algorithm with the sum of both image gradients improves the overall registration accuracy. This agrees with previously published results (Christensen and Johnson 2001, Rogelj and Kovacic 2006, Alvarez et al 2007). The inverse consistency demons algorithm, however, outperforms this modification by over 10%.
Table 6.
Registration accuracy comparison of the original demons algorithm, the inverse consistency demons algorithm and the original demons algorithms modified to use both image gradients.
| Algorithms | Yosemite (pixels) | Pig lung (mm) | 4D-CT (mm) |
|---|---|---|---|
| Original demons | 0.26 ± 0.37 | 3.48 ± 5.49 | 3.14 ± 2.12 |
| Using both image gradients in the asymmetric registration | 0.15 ± 0.25 | 0.97 ± 2.85 | 1.76 ± 1.40 |
| Inverse consistency demons | 0.12 ± 0.19 | 0.09 ± 0.39 | 1.56 ± 1.40 |
5. Discussion
5.1. Comparing to regular optical flow algorithms
The proposed method originated from the asymmetric optical flow algorithms that we used in our previous work (Yang et al 2008a, El Naqa et al 2004, Yang et al 2007a, 2008b). The important changes introduced by the new method are: (1) both images I and J are deformed to match in the middle; (2) both image gradients are used in the new system cost PDE; (3) the magnitude of the delta motion field is limited in order to ensure diffeomorphic transformations; (4) the magnitude of the overall motion fields only needs to be half of that of the motion fields in the asymmetric algorithms, which means that the new method converges faster and can capture a greater motion range.
This new method is more than a multigrid and multiple-pass extension for solving equation (27). If images I and J are swapped, this method guarantees the inverse consistency of the results, whereas a regular multigrid and multiple-pass method for equation (27) does not. The optical flow algorithm acts as a solver of equation (27) for ΔVn in a single pass. Different solvers can therefore be used in the same overall framework so that their respective advantages can be exploited; for instance, the demons algorithm or another diffeomorphic algorithm could replace the HS algorithm.
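The overall symmetric framework can be sketched as follows. This is a simplified illustration, not the paper's implementation: `solve_delta` stands in for any single-pass solver (HS, demons, ...), per-pass steps are clamped to 0.4 voxel as described in the abstract, and the composition of per-pass warps is simplified to field addition for brevity.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(img, V):
    """Backward-warp a 2D image by field V (components (dy, dx))."""
    H, W = img.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    return map_coordinates(img, [yy + V[..., 0], xx + V[..., 1]],
                           order=1, mode='nearest')

def symmetric_register(I, J, solve_delta, n_passes=50, max_step=0.4):
    """Sketch of the symmetric multi-pass framework.

    Each pass computes one delta field dV from the two partially
    deformed images, clamps its magnitude to max_step voxels so both
    accumulated fields stay smooth and invertible, and applies it in
    the positive and negative directions respectively.
    Returns the accumulated fields V (applied to I) and U (applied to J).
    """
    V = np.zeros(I.shape + (2,))
    U = np.zeros(J.shape + (2,))
    for _ in range(n_passes):
        I_n, J_n = warp(I, V), warp(J, U)
        dV = solve_delta(I_n, J_n)
        mag = np.linalg.norm(dV, axis=-1, keepdims=True)
        dV = dV * np.minimum(1.0, max_step / np.maximum(mag, 1e-12))
        V, U = V + dV, U - dV   # deform the two images toward each other
    return V, U
```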
5.2. Comparing to other inverse consistency methods
The proposed method is similar to the diffeomorphism inverse consistency algorithms (Joshi et al 2004, Avants and Gee 2004, Beg and Khan 2007) but quite different from the previously published non-diffeomorphism inverse consistency algorithms (Christensen and Johnson 2001, Alvarez et al 2007, Cachier and Rey 2000, Leow et al 2005). The differences from the non-diffeomorphism algorithms are apparent. In all the previous non-diffeomorphism algorithms, inverse consistency constraints are explicitly included in the system cost PDE so that inverse consistency errors are reduced when the system cost is minimized. In the proposed method, the registration scheme itself is different (both images are deformed toward each other until the two deformed images match in the middle) and inverse consistency is guaranteed implicitly by this scheme, without additional constraint terms in the new PDE (equation (27)). The new PDE has the same form as a regular optical flow PDE; it is much simpler than its counterparts with explicit inverse consistency constraint terms, and it is easier and faster to solve.
Registering two images by deforming them toward each other must not be confused with registering both images individually to a third image. If such a third image existed, one could always register each of the two images to it and then compose the two motion fields in the same way as the final motion fields are computed by equations (19) and (20); inverse consistency would then be guaranteed regardless of the accuracy of each individual registration. Unfortunately, such a third image is not available. Finding such an 'intermediate' image, often referred to as the shape average image, is one of the main aims of the diffeomorphism inverse consistency algorithms of Joshi et al (2004) and Avants and Gee (2004). We do not explicitly use the concept of a shape average image in our method. However, our method can be thought of as using the intensity average image An = (In + Jn)/2 for pass n, computing the delta motion field ΔVn+1 by registering both In and Jn to An with the optical flow smoothness regularization constraint and a constraint that the summation of all motion fields is 0 (equation (24)). The constraint of equation (24) ensures consistency when the two images are swapped. Similar approaches have been independently reported for group image registration studies (Zöllei et al 2005, Studholme and Cardenas 2004).
The important differences between the proposed method and the diffeomorphism inverse consistency algorithms are: (1) the regularity constraint terms in the PDE are different; the global smoothness constraint used by our method is generally simpler than the diffeomorphism smoothness constraints; (2) only one delta motion field, instead of two, has to be solved in our method; (3) the delta motion field computation is completely different and more efficient; (4) we use an extra step to limit the magnitude of the delta motion field in order to ensure the diffeomorphism and invertibility of both resulting overall motion fields; a similar small-spatial-step approach is also necessary for diffeomorphism algorithms but is implemented differently; (5) our results are not geodesic while the results of diffeomorphism algorithms are; this does not mean, however, that our results are less accurate, because being geodesic is not equivalent to being accurate; (6) the motion field inversion procedure is different; we used the triangulation interpolation method proposed by Ashburner et al (2000) to invert the motion fields because it is robust and fast; (7) our algorithm does need to compute the inverse transformation, but only once, at the very end.
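For illustration, a small displacement field such as the ones produced here (every per-pass step below 0.4 voxel) can also be inverted by simple fixed-point iteration. This sketch is an assumed alternative for exposition only; the paper itself uses the triangulation interpolation method of Ashburner et al (2000), which is more robust for larger displacements.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample(V, coords):
    """Linearly interpolate each component of a 2D field at coords."""
    return np.stack(
        [map_coordinates(V[..., c], coords, order=1, mode='nearest')
         for c in range(2)],
        axis=-1,
    )

def invert_field(V, n_iter=30):
    """Fixed-point inversion W(x) = -V(x + W(x)) of a 2D field.

    Converges for small, smooth displacements; the inverted overall
    field can then be composed with the field of the other direction
    to obtain the final motion fields (equations (19) and (20), not
    reproduced here).
    """
    H, W_ = V.shape[:2]
    yy, xx = np.mgrid[0:H, 0:W_].astype(float)
    W = np.zeros_like(V)
    for _ in range(n_iter):
        W = -sample(V, [yy + W[..., 0], xx + W[..., 1]])
    return W
```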
These differences may not guarantee more accurate results, but they should lead to faster computation; we expect our method to be more efficient than the diffeomorphism algorithms. Additionally, the way we simplify the inverse consistency system cost equation (23), by forcing ΔVn + ΔUn = 0, may be applicable to the diffeomorphism inverse consistency algorithms and may help improve their computation speed.
5.3. Accuracy versus inverse consistency
Absolute registration accuracy is one of the most important goals for all deformable image registration algorithms. Inverse consistency is a desirable feature for accurate deformable registration algorithms, but in itself it is not as important as the overall accuracy; the two are not equivalent. In our method, inverse consistency is intrinsically ensured (assuming the motion field inversion is 100% accurate), and the accuracy of the final motion fields, which depends on the accuracy of the delta motion fields, is also improved. There are other ways to improve accuracy, for example using more passes; enforcing inverse consistency is only one of them. We are nevertheless encouraged to see that the proposed method can indeed improve accuracy for a wide class of medical images.
5.4. Convergence and computation speed
Our results have shown that the proposed method converges faster than its inversely inconsistent counterparts. For the same number of iterations, the proposed method can be significantly more accurate (by 30% to a few times) but slightly slower (by 10% to 30%). The overall computation speed, measured as the total time spent computing the registration between a pair of images, depends strongly on the choice of parameters. The proposed method could therefore finish a registration faster by using fewer iterations while still achieving the required accuracy. Unfortunately, accuracy cannot be measured directly in most cases.
5.5. Implementation
The proposed method is straightforward to implement, and its complexity is lower than that of other inverse consistency algorithms. Implementing the new method requires modifying both the original optical flow algorithms and the multigrid and/or multiple-pass loops. In our experience, only minor changes to the actual optical flow algorithms are needed, and the changes to the multigrid and/or multiple-pass loops are generally straightforward.
5.6. Radiation therapy applications
The motion vector fields, which are the results of deformable image registration, can be directly utilized in many radiation therapy applications, especially for treatment adaptation. The proposed algorithm provides two very important and useful advantages: (1) better accuracy compared to regular asymmetric algorithms and (2) consistent motion vector fields in both registration directions.
For adaptive radiotherapy applications, information defined on the planning CT and on the daily CT needs to be remapped from one image to the other. For example, structure contours defined on the planning CT need to be propagated to the daily CT so that the daily dose computed on the daily CT can be evaluated. There are two ways to perform such a task. The first is to use the forward motion field (the motion field defined on the voxels of the planning CT) to move the contour points from the planning CT onto the daily CT. The second is to use the backward motion field (the motion field defined on the daily CT) to deform the structure binary mask volumes from the planning CT to the daily CT. In a second example, a daily dose computed on the daily CT can be deformed back to the planning CT using the forward motion field so that all daily doses can be accumulated on the voxel grid of the planning CT. In another scenario, the planning CT, which is usually of better quality, can be deformed to the daily CT using the backward motion field and can replace the daily CT for daily dose computation. The proposed method is very useful for such treatment adaptation tasks because it provides consistent motion fields in both directions; otherwise, the motion fields would have to be computed by two separate registrations, one in each direction, and the results would not be inversely consistent.
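The second remapping route above, deforming a structure binary mask with the backward field, can be sketched as follows. This is an illustrative helper, not the authors' code; the backward field is assumed to hold one (dz, dy, dx) vector per daily-CT voxel, pointing into the planning CT.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def propagate_mask(mask_plan, U_daily):
    """Deform a structure binary mask from the planning CT onto the
    daily-CT grid using the backward motion field U_daily.

    mask_plan : boolean array on the planning-CT grid.
    U_daily   : array (Z, Y, X, 3) of displacements on the daily-CT grid.
    The mask is linearly interpolated as a float volume and then
    thresholded to recover a binary mask on the daily-CT grid.
    """
    Z, Y, X = U_daily.shape[:3]
    zz, yy, xx = np.mgrid[0:Z, 0:Y, 0:X].astype(float)
    coords = [zz + U_daily[..., 0],
              yy + U_daily[..., 1],
              xx + U_daily[..., 2]]
    warped = map_coordinates(mask_plan.astype(float), coords,
                             order=1, mode='nearest')
    return warped >= 0.5
```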
5.7. Limitations
There are a few drawbacks to the proposed method. First, the computation per iteration is typically slower. Second, registration between two images of different dimensions is not well defined. With asymmetric methods, the larger image can be used as the moving image and the smaller one as the fixed image; registration in such a configuration is at least possible to compute. Inverse consistency methods cannot handle such a dimension mismatch because both images need to be deformed and matched in the middle. A similar situation arises when one image moves out of the boundaries of the other, as in image occlusion problems. In these cases, results generated by the inverse consistency method may have more artifacts at the image boundaries.
6. Conclusion
In this work, we have proposed a new inverse consistency deformable image registration method, which performs registration between two images in a symmetric way such that both images are incrementally deformed to the middle until they match. Motion fields are solved step-by-step using modified optical flow algorithms in a multigrid and multiple-pass framework. The step size of the motion field adjustment is controlled to ensure the smoothness and invertibility of the final results. Compared to regular asymmetric optical flow algorithms, the proposed method is able to provide inversely consistent motion fields for both registration directions, and significantly improve registration accuracy and convergence speed, with only minor additional computation. This new method could be applied to many radiation therapy applications including adaptive radiotherapy and 4D-CT.
Acknowledgments
This research was partially supported by American Cancer Society grant IRG-58-010-50 and NIH grant K25 CA128809. We would also like to acknowledge Dr Kristy Brock for providing us with the 4D-CT images and landmark data.
References
- Alvarez L, Deriche R, Papadopoulo T, Sanchez J. Symmetrical dense optical flow estimation with occlusions detection. Int J Comput Vis. 2007;75:371–85.
- Ashburner J, Andersson JLR, Friston KJ. Image registration using a symmetric prior—in three dimensions. Hum Brain Mapp. 2000;9:212–25. doi: 10.1002/(SICI)1097-0193(200004)9:4<212::AID-HBM3>3.0.CO;2-#.
- Avants B, Gee J. Symmetric geodesic shape averaging and shape interpolation. Computer Vision and Mathematical Methods in Medical and Biomedical Image Analysis. Vol. 3117. Berlin: Springer; 2004. pp. 99–110.
- Balter JM, Pelizzari CA, Chen GT. Correlation of projection radiographs in radiation therapy using open curve segments and points. Med Phys. 1992;19:329–34. doi: 10.1118/1.596863.
- Barron JL, Fleet DJ, Beauchemin SS, Burkitt TA. Performance of optical flow techniques. Int J Comput Vis. 1994;12:43–77.
- Beg MF, Khan A. Symmetric data attachment terms for large deformation image registration. IEEE Trans Med Imaging. 2007;26:1179–89. doi: 10.1109/TMI.2007.898813.
- Brock KK. A multi-institution deformable registration accuracy study. Int J Radiat Oncol Biol Phys. 2007;69:S44. doi: 10.1016/j.ijrobp.2009.06.031.
- Burt PJ, Adelson EH. The Laplacian pyramid as a compact image code. IEEE Trans Commun. 1983;31:532–40.
- Cachier P, Rey D. Symmetrization of the non-rigid registration problem using inversion-invariant energies: application to multiple sclerosis. Medical Image Computing and Computer-Assisted Intervention—MICCAI 2000.
- Christensen G, Geng X, Kuhl J, Bruss J, Grabowski T, Pirwani I, Vannier M, Allen J, Damasio H. Introduction to the non-rigid image registration evaluation project (NIREP). Biomedical Image Registration. Berlin: Springer; 2006.
- Christensen GE, Johnson HJ. Consistent image registration. IEEE Trans Med Imaging. 2001;20:568–82. doi: 10.1109/42.932742.
- Christensen GE, Rabbitt RD, Miller MI. Deformable templates using large deformation kinematics. IEEE Trans Image Process. 1996;5:1435–47. doi: 10.1109/83.536892.
- Cootes TF, Marsland S, Twining CJ, Smith K, Taylor CJ. Groupwise diffeomorphic non-rigid registration for automatic model building. Computer Vision—ECCV 2004, Lect Notes Comput Sci. 2004;3024:316–27.
- Dupuis P, Grenander U, Miller MI. Variational problems on flows of diffeomorphisms for image matching. Q Appl Math. 1998;LVI:587–600.
- El Naqa I, Low DA, Nystrom M, Parikh P, Lu W, Deasy JO, Amini A, Hubenschmidt J, Wahab S. An optical flow approach for automated breathing motion tracking in 4D computed tomography. Proc 14th Int Conf Use of Computers in Radiation Therapy; Seoul, Korea. 2004.
- Friston KJ. Statistical Parametric Mapping: The Analysis of Functional Brain Images. New York: Academic; 2006.
- Horn BKP, Schunck BG. Determining optical flow. Artif Intell. 1981;17:185–203.
- Joshi S, Davis B, Jomier M, Gerig G. Unbiased diffeomorphic atlas construction for computational anatomy. NeuroImage. 2004;23:S151–60. doi: 10.1016/j.neuroimage.2004.07.068.
- Kessler ML. Image registration and data fusion in radiation therapy. Br J Radiol. 2006;79:S99–108. doi: 10.1259/bjr/70617164.
- Kessler ML, Pitluck S, Petti P, Castro JR. Integration of multimodality imaging data for radiotherapy treatment planning. Int J Radiat Oncol Biol Phys. 1991;21:1653–67. doi: 10.1016/0360-3016(91)90345-5.
- Kim J, Fessler JA. Intensity-based image registration using robust correlation coefficients. IEEE Trans Med Imaging. 2004;23:1430–44. doi: 10.1109/TMI.2004.835313.
- Leow A, Huang S-C, Geng A, Becker J, Davis S, Toga A, Thompson P. Inverse consistent mapping in 3D deformable image registration: its construction and statistical properties. Information Processing in Medical Imaging. Berlin: Springer; 2005.
- Lu W, Chen ML, Olivera GH, Ruchala KJ, Mackie TR. Fast free-form deformable registration via calculus of variations. Phys Med Biol. 2004;49:3067–87. doi: 10.1088/0031-9155/49/14/003.
- McCane B, Novins K, Crannitch D, Galvin B. On benchmarking optical flow. Comput Vis Image Underst. 2001;84:126–43.
- Rogelj P, Kovacic S. Symmetric image registration. Med Image Anal. 2006;10:484–93. doi: 10.1016/j.media.2005.03.003.
- Rueckert D, Aljabar P, Heckemann R, Hajnal J, Hammers A. Diffeomorphic registration using B-splines. Medical Image Computing and Computer-Assisted Intervention—MICCAI 2006, Lect Notes Comput Sci. 2006;4191:702–9. doi: 10.1007/11866763_86.
- Sarrut D, Delhay S, Villard PF, Boldea V, Beuve M, Clarysse P. A comparison framework for breathing motion estimation methods from 4-D imaging. IEEE Trans Med Imaging. 2007;26:1636–48. doi: 10.1109/tmi.2007.901006.
- Studholme C, Cardenas V. A template free approach to volumetric spatial normalization of brain anatomy. Pattern Recognit Lett. 2004;25:1191–202.
- Thirion JP. Image matching as a diffusion process: an analogy with Maxwell's demons. Med Image Anal. 1998;2:243–60. doi: 10.1016/s1361-8415(98)80022-4.
- Trouve A. Diffeomorphisms groups and pattern matching in image analysis. Int J Comput Vis. 1998;28:213–21.
- Van Herk M, Kooy HM. Automatic three-dimensional correlation of CT-CT, CT-MRI, and CT-SPECT using chamfer matching. Med Phys. 1994;21:1163–78. doi: 10.1118/1.597344.
- Vercauteren T, Pennec X, Perchant A, Ayache N. Non-parametric diffeomorphic image registration with the demons algorithm. Medical Image Computing and Computer-Assisted Intervention—MICCAI 2007. doi: 10.1007/978-3-540-75759-7_39.
- Viola P, Wells WM III. Alignment by maximization of mutual information. Proc Fifth Int Conf Computer Vision; Piscataway, NJ: IEEE; 1995.
- Wang H, Dong L, O'Daniel J, Mohan R, Garden AS, Ang KK, Kuban DA, Bonnen M, Chang JY, Cheung R. Validation of an accelerated 'demons' algorithm for deformable image registration in radiation therapy. Phys Med Biol. 2005;50:2887–905. doi: 10.1088/0031-9155/50/12/011.
- Yang D, Deasy JO, Low DA, El Naqa IM. Level set motion assisted non-rigid 3D image registration. Medical Imaging, Proc SPIE; San Diego, CA. 2007a.
- Yang D, Hubenschmidt J, Goddu S, Parikh P, Deasy J, Low D, El Naqa I. TH-D-M100F-04: a biomechanical phantom for validation of deformable multimodality image algorithms (abstract). Med Phys. 2007b;34:2636–7.
- Yang D, Lu W, Low DA, Deasy JO, Hope AJ, El Naqa IM. 4D-CT motion estimation using deformable image registration and 5D respiratory motion modeling. Med Phys. 2008a;35:4577–90. doi: 10.1118/1.2977828.
- Yang D, Lu W, Low DA, Deasy JO, Hope AJ, El Naqa IM. Deformable registration of abdominal kilovoltage treatment planning CT and megavoltage daily CT for tomotherapy adaptive radiotherapy treatment planning. Med Phys. 2008b, submitted. doi: 10.1118/1.3049594.
- Zöllei L, Learned-Miller E, Grimson E, Wells W. Efficient population registration of 3D data. Computer Vision for Biomedical Image Applications. 2005.