Abstract
Purpose
This study investigates methodologies for the estimation of small animal anatomy from non-tomographic modalities, such as planar X-ray projections, optical cameras, and surface scanners. The key goal is to register a digital mouse atlas to a combination of non-tomographic modalities, in order to provide organ-level anatomical references of small animals in 3D.
Procedures
A 2D/3D registration method was developed to register the 3D atlas to the combination of non-tomographic imaging modalities. Eleven combinations of three non-tomographic imaging modalities were simulated, and the registration accuracy of each combination was evaluated.
Results
Comparing the 11 combinations, the top-view X-ray projection combined with the side-view optical camera yielded the best overall registration accuracy of all organs. The use of a surface scanner improved the registration accuracy of skin, spleen, and kidneys.
Conclusions
The methodologies and evaluation presented in this study should provide helpful information for designing preclinical atlas-based anatomical data acquisition systems.
Keywords: Small animal imaging, Mouse atlas registration, 2D/3D registration, Planar X-ray projection, 3D surface scanner
Introduction
In preclinical small animal studies, in vivo estimation of the mouse anatomy is important for localizing functional changes and measuring organ morphometry. Some molecular imaging modalities also need complementary anatomical information to help with image acquisition, reconstruction, and analysis, such as micro-single photon emission computed tomography (micro-SPECT) scan planning [1], optical tomography reconstruction [2–5], micro-positron emission tomography (micro-PET) attenuation correction [6], and tissue uptake quantification [7]. Currently, in vivo imaging of the mouse anatomy can be achieved with small animal tomographic imaging modalities such as micro-computed tomography (micro-CT) [8] and micro-magnetic resonance imaging (micro-MR) [9]. These systems can acquire 3D tomographic images with micron-level resolution (≤100 μm for in vivo imaging of both modalities).
Undoubtedly, micro-CT and micro-MR systems have contributed greatly to preclinical research; however, they present significant complexity as well as upfront and maintenance costs that limit their availability. The technologies in these systems also complicate their combination with molecular imaging systems [2, 4, 10, 11]. To avoid these problems, some researchers have turned to the use of low-cost non-tomographic imaging systems, such as optical cameras, 3D surface scanners, and bench-top planar X-ray systems. Optical cameras can be used to obtain 2D body profiles which are useful in inter-modality co-registration [12–14], respiratory motion monitoring [15], and 3D surface geometry reconstruction [16, 17]. Recent developments in 3D surface scanning techniques make it possible to build a surface scanner with consumer-market electronic devices [18] (e.g., laser pointer, digital camera, and/or pocket projector). As a result, several research prototypes and commercial products have been developed, such as the laser scanner with conical mirror [19] and the structured light-based surface scanner [20]. Bench-top planar X-ray systems are more expensive than optical cameras and surface scanners, but still far less costly than fully 3D tomographic systems. With a planar X-ray projection, the anatomy of some internal structures (e.g., bones and lungs) can be readily observed. Several commercial small animal optical imaging systems have integrated planar X-ray systems [21], such as the KODAK In-Vivo Multispectral System FX [22] and the Caliper LifeSciences IVIS® lumina XR system [23].
Borrowing the idea from small animal image registration [24–26] and atlas registration [16, 27], several software approaches have been developed to register a digital mouse atlas with the non-tomographic modalities, in order to approximate 3D organ anatomy. Baiker et al. registered the mouse atlas to optical profiles of the mouse body to assist scan planning of region-focused micro-SPECT [1]. Khmelinskii et al. performed mouse atlas registration under the guidance of multi-view optical photos [16]. Zhang et al. aligned the mouse atlas with body surface reconstructed from multiple-view photos, aiming to assist fluorescence tomographic reconstruction [28]. Joshi et al. developed a finite-element-model-based atlas warping method to register the atlas with laser scans of the mouse surface [29]. Chaudhari et al. proposed a method for registering a mouse atlas to a surface mesh acquired by a structured light scanner [30]. Based on our survey, current methods mainly focus on registration with optical modalities like optical photos and surface scans, and no method has been reported for mouse atlas registration with X-ray projections. Since 2D/3D registration with X-ray projections has been extensively studied in the clinical field [31–39], we believe that it is worthwhile to extend it into the preclinical field.
Conforming to the trend of low-cost mouse anatomical imaging, this paper studies the problem of mouse atlas registration with different combinations of three non-tomographic imaging modalities, including an optical camera, a surface scanner, and a planar X-ray projector. The objective is to design a computerized method that registers the mouse atlas to different combinations of these three modalities, as well as to evaluate the registration accuracy of the different combinations. Here, we simulated the imaging process of these non-tomographic modalities, using 3D tomographic preclinical mouse data acquired with micro-CT scans, and we evaluated the registration accuracy based on these simulated images. With this study, we hope to provide reference information for designing different combinations of low-cost non-tomographic systems that can achieve registration with a mouse atlas.
Materials and Methods
Creation of Mouse Atlases and Subject Phantoms
The atlases and subjects used in this study were created from 28 contrast-enhanced mouse micro-CT images, which were selected from the preclinical imaging database of the Crump Institute for Molecular Imaging, UCLA [40]. The contrast agent was Fenestra™ LC (ART, Saint Laurent, QC, Canada). The images were acquired in vivo with healthy subjects of different strains, weights, postures, and sex. The three most frequently used strains (Nude, C57, and severe-combined immunodeficient) were included, with body weights ranging from 15 to 30 g. The imaging system was a MicroCAT II small animal CT (Siemens Preclinical Solutions, Knoxville, TN, USA), and the images were reconstructed using a modified Feldkamp algorithm with isotropic voxel size 0.20 mm and matrix size 256×256×496. The reason for using contrast-enhanced CT images was to enable the definition of organ regions, so as to facilitate the evaluation of registration accuracy. Major mouse organs were segmented from the 28 images by human experts using computer-assisted segmentation tools which incorporated the methods of intensity thresholding, region growing, deformable simplex mesh [41], and graph cuts [42, 43]. The segmented organs included skin, skeleton, heart, lungs, liver, spleen, kidneys, and bladder.
Five of the 28 images were chosen as mouse atlases, with the remaining 23 used as target subjects. The reason for selecting multiple atlases instead of using one of the publicly available mouse atlases [44, 45] was to reduce the influence of single atlas bias.
Simulation of the Three Non-tomographic Modalities
The acquisition of optical photographs, X-ray projections, and surface scans was simulated for the 23 subjects. Fig. 1a–c demonstrates the simulation mechanism and results for the three modalities. To represent the common field of view of the three modalities, a 3D global world coordinate system is defined (Fig. 1a) with the origin located at the geometric center of the subject image. The x-, y-, and z-axes are defined to point from left to right, tail to head, and anterior to posterior of the subject, respectively. Fig. 1d shows one possible combination of the three modalities, with the X-ray and surface scan in top view and the camera in side view. Many other combinations can be achieved with these non-tomographic imaging modalities (see “Different Combinations of the Three Non-tomographic Modalities” section). Realizing all the combinations in hardware is nontrivial, and we therefore validated them using a simulation approach. This way, we can also compare the different combinations using exactly the same test datasets.
Fig. 1.
Simulation configuration and results of the three non-tomographic modalities. a Simulation mechanism and result of the optical camera. b Simulation mechanism and result of the planar X-ray projection. c Simulation mechanism and result of the laser surface scanner. d An example of the combination of the three modalities.
The simulation of the optical camera is based on the pinhole projection model [46] (Fig. 1a). The principal axis (the line connecting the detector center and the camera aperture) of the camera is defined to lie in the xoy-plane and pass through the origin o. The principal axis can be rotated about the y-axis to different view angles. θ denotes the view angle; θ = 0° means top view and θ = 90° means side view. The camera detector is 156 mm away from the origin o. For this work, the focal length is 3 mm and the detector size is 4.5×3.4 mm, with a 640×480-pixel matrix. For an arbitrary pixel p′ in the detector, a straight line is drawn connecting the center of the pixel and the geometric camera aperture s. If this line intersects with the subject surface, the pixel is assigned value 1; otherwise, it is assigned value 0. The simulation produces a binary silhouette image of the mouse body, as shown at the bottom of Fig. 1a. This result is slightly different from a real-world photo, which is a gray-scale or RGB-color picture of the mouse body. For a real-world photo, an additional segmentation step would be required to extract the body silhouette, as in [1] and [13].
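The per-pixel ray test can be sketched as follows. The function names are hypothetical, a sphere phantom stands in for the mouse surface to keep the sketch runnable, and the detector is assumed perpendicular to the principal axis (top view):

```python
import numpy as np

def simulate_silhouette(aperture, detector_center, det_size_mm, n_pix, hits_surface):
    """Binary silhouette via pinhole projection: for each detector pixel,
    cast a ray through the aperture and test intersection with the body.
    `hits_surface(origin, direction) -> bool` is the surface test."""
    w_mm, h_mm = det_size_mm
    nu, nv = n_pix
    img = np.zeros((nv, nu), dtype=np.uint8)
    # Pixel centers on the detector plane (assumed parallel to the xoy-plane)
    us = (np.arange(nu) + 0.5) / nu * w_mm - w_mm / 2.0
    vs = (np.arange(nv) + 0.5) / nv * h_mm - h_mm / 2.0
    for j in range(nv):
        for i in range(nu):
            pix = detector_center + np.array([us[i], vs[j], 0.0])
            d = aperture - pix            # ray continues through the aperture
            d /= np.linalg.norm(d)
            img[j, i] = 1 if hits_surface(aperture, d) else 0
    return img

def sphere_hit(center, radius):
    """Stand-in for the mouse body: ray/sphere intersection test
    (a hypothetical phantom used only to make the sketch runnable)."""
    def test(origin, direction):
        oc = origin - center
        b = float(np.dot(oc, direction))
        c = float(np.dot(oc, oc)) - radius * radius
        disc = b * b - c
        return disc >= 0.0 and (-b + np.sqrt(disc)) > 0.0
    return test
```

With the geometry of the text (detector 156 mm from the origin, focal length 3 mm), the aperture sits 153 mm from the origin along the viewing direction.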
The simulation of the X-ray projection is shown in Fig. 1b. The X-ray point source and the pixelated detector are located at opposite sides of the mouse body. The principal axis (the line connecting the detector center and the point source) is defined to lie in the xoy-plane and pass through the origin o. The principal axis is allowed to rotate about the origin for projections from different view angles. The view angle θ is defined in the same way as for the optical camera. The distance from the point source to the origin is 156 mm, and the distance from the detector center to the origin is 52 mm. The size of the X-ray detector is 100×50 mm, with a pixel matrix size of 248×128. For an arbitrary pixel p′ in the detector, a straight line is drawn connecting the centers of the pixel and the point source s. If this line intersects with the subject surface at points a1 and a2, the pixel p′ is assigned the value
$$I = I_0 \exp\!\left(-\int_{a_1}^{a_2} \mu(s)\,\mathrm{d}s\right) \tag{1}$$
where I is the pixel value, I0 is the source energy, μ(s) is the linear tissue attenuation coefficient along the emitted X-ray, and $\int_{a_1}^{a_2}(\cdot)\,\mathrm{d}s$ denotes the line integral along the emitted X-ray between the entry point a1 and the exit point a2. The coefficient μ(s) can be obtained from the CT image that was used for simulating the subject. Since the CT images are contrast-enhanced, the voxel intensities will not exactly reflect the tissue attenuation coefficients in the non-contrast-enhanced mouse projection images that we anticipate. To eliminate the influence of contrast agent, the voxel intensities of contrast-enhanced organs (the liver and spleen, which are already segmented) are scaled down to the level of brain intensity. The simulated X-ray projection image is shown at the bottom of Fig. 1b, with an inverse intensity map.
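Numerically, Eq. 1 reduces to sampling μ along the segment between a1 and a2 and exponentiating the Riemann sum. A minimal sketch, in which the voxel size, axis order, and nearest-neighbour lookup are simplifying assumptions:

```python
import numpy as np

def xray_pixel_value(ct_mu, entry, exit_pt, I0=1.0, n_samples=200):
    """Beer-Lambert attenuation (Eq. 1): I = I0 * exp(-integral of mu ds),
    approximated by sampling the attenuation volume `ct_mu` (values in 1/mm,
    indexed [z, y, x] with 1 mm isotropic voxels -- illustrative assumptions)
    along the segment between the entry and exit points of the ray."""
    entry, exit_pt = np.asarray(entry, float), np.asarray(exit_pt, float)
    length = np.linalg.norm(exit_pt - entry)            # path length in mm
    ts = (np.arange(n_samples) + 0.5) / n_samples       # midpoint samples
    pts = entry[None, :] + ts[:, None] * (exit_pt - entry)[None, :]
    idx = np.clip(np.round(pts).astype(int), 0, np.array(ct_mu.shape)[::-1] - 1)
    mu = ct_mu[idx[:, 2], idx[:, 1], idx[:, 0]]         # nearest-neighbour lookup
    line_integral = mu.sum() * (length / n_samples)     # Riemann sum of the integral
    return I0 * np.exp(-line_integral)
```

For a homogeneous volume the Riemann sum is exact, which makes the sketch easy to sanity-check.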
The simulation of the surface scanner is shown in Fig. 1c. There are several types of surface scanners, such as laser scanners, structured light scanners, and swept plane scanners. All of these are based on a similar imaging geometry, as described in [18]. In this study, we simulate the laser scanner, but the same registration approach can also be applied to other types of scanners. The laser source is located at position (0, −60, 156) mm of the world coordinate system. The laser is emitted from the source and is scanned in a row-by-row manner, with each scan row parallel to the x-axis. If the laser line intersects with the body surface, the intersection point is recorded. The final range data are composed of all the intersection points of the scanning, as shown at the bottom of Fig. 1c. The resulting range data reveal the shape of the upper body surface, but the bottom surface is missing.
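The row-by-row scan can be sketched over a voxelized body mask: each column seen from above contributes its first surface voxel to the range data. This parallel-beam simplification is illustrative only (the actual simulation uses a point laser source at a finite distance):

```python
import numpy as np

def laser_scan_top_surface(body_mask, voxel_mm=0.2):
    """Row-by-row laser scan sketch: for each (x, y) column of a binary
    body mask (indexed [z, y, x]), record the first surface voxel seen
    from above (smallest z). Columns that miss the body yield no point,
    so the bottom surface is never observed."""
    nz, ny, nx = body_mask.shape
    pts = []
    for iy in range(ny):                  # one scan row per y position
        for ix in range(nx):
            hit = np.flatnonzero(body_mask[:, iy, ix])
            if hit.size:
                pts.append((ix * voxel_mm, iy * voxel_mm, hit[0] * voxel_mm))
    return np.asarray(pts)
```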
Different Combinations of the Three Non-tomographic Modalities
In practice, the three non-tomographic imaging modalities can be combined in different ways. One possible combination is shown in Fig. 1d, where the X-ray system and camera system are located at orthogonal views (θ = 0° and θ = 90°, respectively). In this study, we simulated 11 combinations of the non-tomographic modalities, with the different modalities placed at different view angles. We use the term “modality θ” to describe the modality and view angle, where “X,” “C,” and “L” stand for X-ray, camera, and laser scan, respectively. For example, “C0” means top-view camera, and “X45” means X-ray projection from θ = 45°. The 11 evaluated combinations are C0+C90, C0+X90, X0+C90, X0+X90, X45+X135, L Only (laser scan only), X0+L, X90+L, C90+L, X0+X90+L, and X0+C90+L.
Atlas Registration Workflow
The overall method described here is designed to register the mouse atlas to different combinations of the three non-tomographic modalities. Fig. 2 demonstrates the workflow of this method. Three registration paths are designed for the three non-tomographic imaging modalities. The switch at the beginning of each path allows flexible combinations of the three modalities. Note that for demonstration purposes, Fig. 2 shows top-view X-ray projection and side-view camera photo. However, the X-ray projection and camera photo can be taken from other view angles as well.
Fig. 2.
Workflow of atlas registration with non-tomographic modalities. Three registration paths are designed for the three modalities. The switch at the beginning of each path enables the flexible combination of different modalities. Note that although this figure demonstrates top-view X-ray and side-view camera, in practice the X-ray and cameras can also be placed at other view angles. a The mouse atlas to be registered. b The mouse atlas deformed by 3D deformation. c The virtual X-ray projection of the deformed atlas. d The virtual optical photo silhouette of the deformed atlas. e The body surface of the deformed atlas. f The X-ray projection of the target subject. g The optical photo silhouette of the target subject. h The laser scan of the target subject. i The 2D registration result of the X-ray projection, where the resultant 2D deformation is displayed as a deformed 2D grid. j The 2D registration result of the optical photo silhouette. k The 3D registration result of the body surface, where the registered atlas surface is displayed along with the subject laser scan and the resulting 3D deformation is displayed as a deformed 3D grid.
Before registration, the atlas is initially positioned at the same location as the target subject, based on the assumption that the 3D position of the target subject is known from the hardware setup [26]. To accelerate registration, the organs of the atlas are converted into triangular surface meshes. The purpose of the registration is to deform the atlas in 3D space to match the X-ray projection, optical silhouette, and/or surface scan of the subject. The desired 3D deformation is estimated via an iterative process, with an initial guess of zero deformation. In each iteration, the deformed atlas (Fig. 2b) is virtually projected into the X-ray image (Fig. 2c), optical photo silhouette (Fig. 2d), and/or body surface mesh (Fig. 2e). These virtual projections are registered with their measured counterparts of the subject. Fig. 2i, j shows the 2D registration results of the X-ray projection and optical silhouette; the resulting 2D deformations are displayed as deformed grids (red). Fig. 2k shows the registered atlas surface (gray) with the laser scan of the subject (red); the resulting 3D deformation is displayed as a 3D grid (blue). The registration results of the different modalities are combined into one 3D deformation, which is then applied to the atlas again. The iteration terminates when the deformed atlas shows a small enough difference between two adjacent loop steps, i.e., $D_{\mathrm{atlas}} = \frac{1}{N}\sum_{i=1}^{N}\lVert \mathbf{p}_i^{k} - \mathbf{p}_i^{k-1} \rVert < \varepsilon$, where $\mathbf{p}_i^{k}$ represents the coordinate of the ith surface vertex of the atlas at iteration k, N is the number of surface vertices, and ε is set as 0.2 mm, which is the voxel size of the subject image (see “Creation of Mouse Atlases and Subject Phantoms” section). The sub-sections below explain the details of the registration method for each modality, as well as the combination of the results from the different modalities into one 3D deformation.
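The outer loop of this workflow can be sketched as follows, with the per-modality registrations and their combination abstracted into a single `estimate_update` callback (a hypothetical placeholder), and the convergence measure implemented as the mean vertex displacement between two adjacent iterations:

```python
import numpy as np

def register_atlas(vertices, estimate_update, eps=0.2, max_iter=50):
    """Outer iteration of the workflow: repeatedly estimate a combined 3D
    deformation from the per-modality registrations (abstracted here as
    `estimate_update`, which maps current vertex positions to displacement
    vectors), apply it, and stop when the mean vertex movement between two
    iterations drops below eps (0.2 mm, the subject voxel size)."""
    verts = np.asarray(vertices, float).copy()
    for k in range(max_iter):
        disp = estimate_update(verts)              # combined 3D deformation
        verts = verts + disp
        d_atlas = np.linalg.norm(disp, axis=1).mean()
        if d_atlas < eps:                          # convergence criterion
            break
    return verts, k + 1
```

With a contractive update (e.g., each step recovers half the remaining misalignment), the loop converges in a handful of iterations, consistent with the 4 to 6 iterations reported in the Results.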
2D Registration of Planar X-ray and Optical Photo
In each iteration, the corresponding optical silhouette and/or X-ray projection of the atlas is generated and registered with the measured optical silhouette and/or X-ray projection of the subject. The optical silhouette of the atlas is generated in the same way as the subject optical silhouette (see “Simulation of the Three Non-tomographic Modalities” section). The configuration settings of the virtual camera for the atlas, including the relative positions of the aperture and the detector as well as the detector size and pixel resolution, are all the same as for the camera system of the subjects. The X-ray projection of the atlas is generated in a similar way to the subject X-ray projection, the major difference being that the atlas X-ray projection is simulated based on a surface mesh rather than a CT volume. This is because surface-based projection is much faster than volume-based projection; thus, the registration can be accelerated via surface-based projection. The configuration settings of the virtual X-ray system for the atlas, including the relative positions of the source and the detector, and the detector size and pixel resolution, are all the same as those of the real X-ray system for the subjects. A set of virtual X-rays is emitted from the source in a raster scanning fashion toward the detector pixels. If an X-ray intersects with the triangular patches of the organ mesh, the intersection points are recorded. The virtual X-ray is then cut into several sections by the intersection points, with each section inside a particular tissue. The detector pixel that receives this X-ray is assigned the value
$$I = I_0 \exp\!\left(-\sum_{i} \mu_i l_i\right) \tag{2}$$
where I is the pixel value, I0 is the source energy, i denotes the index of the X-ray section, $l_i$ is the length of the ith section, and $\mu_i$ is the attenuation coefficient of the tissue that the section traverses. Only the body surface, the bones, and the lungs are used for the projection. The X-ray sections inside the body surface, bones, and lungs are attenuated with the μ of water, bone, and air, respectively. Other soft organs are not considered, assuming they have attenuation coefficients similar to water [47]. Fig. 2c shows the surface-based X-ray projection of the atlas, which roughly resembles the volume-based projection (Fig. 1b).
Both X-ray projection images and optical photos are registered using the same 2D registration method, i.e., B-spline registration based on mutual information [48]. This method is implemented using the elastix software, which incorporates a family of state-of-the-art image registration methods [49]. The size of the B-spline control grid is 10×10 pixels. A multi-resolution registration scheme is used to accelerate the registration. Five levels of resolutions are used. The down-sampling ratios for the five resolutions are 16, 8, 4, 2, and 1. An adaptive stochastic gradient descent algorithm [50] is used for the optimization at each resolution.
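For reference, the settings described above roughly correspond to an elastix parameter file along the following lines. Parameter names follow elastix conventions; values not stated in the text, such as the pyramid and interpolator choices, are illustrative assumptions:

```
// 2D B-spline registration driven by mutual information (sketch)
(Registration "MultiResolutionRegistration")
(Transform "BSplineTransform")
(Metric "AdvancedMattesMutualInformation")
(Optimizer "AdaptiveStochasticGradientDescent")
(NumberOfResolutions 5)
// down-sampling ratios 16, 8, 4, 2, 1 in both image dimensions
(ImagePyramidSchedule 16 16 8 8 4 4 2 2 1 1)
// B-spline control grid of 10x10 pixels
(FinalGridSpacingInVoxels 10 10)
(Interpolator "BSplineInterpolator")
(ResampleInterpolator "FinalBSplineInterpolator")
```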
Rigidity-Constrained Registration of the Body Surface
Atlas registration with the laser scan is an ill-posed problem due to the incompleteness of the scan data [29]. Since the laser does not penetrate the mouse body, the bottom surface of the subject is missing. To address this problem, we use the shape of the mouse bed to emulate the bottom surface of the subject, assuming that the lower mouse surface conforms well to the supporting bed. This is reasonable for in vivo imaging because the bottom of a living mouse is soft enough to conform to the bed shape. For this simulation study, the scan of the bed was obtained by using the simulated laser scan on the bed segmented from the CT image. For physical imaging systems with reliable reproducibility of bed placement [26], the scan of the bed only needs to be performed once and will be consistent for different subjects. After the bed is scanned, the subject is scanned together with the bed. The overlapping points of the two scans are removed, leaving only the non-overlapping parts that enclose the body surface.
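The overlap removal can be sketched with a nearest-neighbour query: points of the subject-plus-bed scan that lie within a small tolerance of any bed-only scan point are discarded. The function name and the one-voxel tolerance are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_bed_points(subject_scan, bed_scan, tol=0.2):
    """Remove from the subject+bed scan all points that also appear in the
    bed-only scan (within a tolerance of one voxel, 0.2 mm), leaving the
    non-overlapping points that enclose the body surface."""
    dist, _ = cKDTree(bed_scan).query(subject_scan)   # distance to nearest bed point
    return subject_scan[dist > tol]
```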
To register the atlas surface point set to the enclosed laser scan point set, the rigidity-constrained deformable registration method [51] is used. This method assigns different organs of the atlas with different rigidity values, so as to prevent implausible deformation of the organ shapes during the registration. Three-dimensional distance maps of the two point sets are computed as the input images of the registration method. The elastix software is used to perform the rigidity-constrained registration [49]. The spatial transformation model is B-spline deformation, and the final control grid size is 8×8×8 voxels. The image similarity metric is advanced mean squared difference, and the optimizer is adaptive stochastic gradient descent algorithm. A four-level multi-resolution strategy is used, and the down-sampling ratios for the four resolutions are 8, 4, 2, and 1.
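The distance-map inputs can be sketched as follows: each point set is rasterized into a voxel grid, and the Euclidean distance transform then gives, per voxel, the distance to the nearest point of the set. Grid shape, origin, and voxel size are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def point_set_distance_map(points, shape, voxel_mm=0.2):
    """Rasterize a 3D point set (in mm) into a voxel grid and compute its
    Euclidean distance map, the input image representation used for the
    rigidity-constrained surface registration."""
    vol = np.ones(shape, dtype=bool)                  # True = background
    idx = np.clip(np.round(np.asarray(points) / voxel_mm).astype(int),
                  0, np.array(shape) - 1)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = False      # mark point voxels
    # EDT of the background gives, per voxel, the distance (in mm)
    # to the nearest point of the set
    return distance_transform_edt(vol) * voxel_mm
```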
Combining the Registration Results of Different Modalities
Since the atlas is registered separately to different modalities, the registration of each modality yields a separate spatial deformation. 2D registrations (X-ray projection or optical photo) yield 2D deformations (Fig. 2i,j), while 3D registration (surface range data) yields a 3D deformation (Fig. 2k). The 2D deformations are back-projected into 3D space, so as to be combined in 3D.
Fig. 3a, b demonstrates the back-projections of the 2D deformation from the top view and side view, respectively. Note that although Fig. 3 shows the X-ray projection for the top view and the optical photo for the side view, the same principles also apply if the optical photo is taken from the top view and the X-ray is projected from the side view. Let p be an arbitrary vertex of the atlas mesh, p′ be the projection point of p onto the detector (either X-ray detector or camera detector), and s be the X-ray point source or camera aperture. The 2D deformation is characterized by a 2D coordinate system (m, n) set on the detector. The 2D deformation vector of p′ is defined as v(p′) = [vm(p′), vn(p′)]. For the top-view projection, we use $\mathbf{v}^T(p) = [v_x^T(p), v_y^T(p), v_z^T(p)]$ to denote the 3D deformation vector. $\mathbf{v}^T(p)$ is back-projected from v(p′) using
Fig. 3.
Back-projection of 2D deformations into 3D space. a Back-projection of top-view 2D deformation (in this case, an X-ray projection). b Back-projection of side-view 2D deformation (in this case a camera projection).
$$v_x^T(p) = \frac{|sp|}{|sp'|}\,v_m(p'), \qquad v_y^T(p) = \frac{|sp|}{|sp'|}\,v_n(p'), \qquad v_z^T(p) = 0 \tag{3}$$
where |sp| and |sp′| are the distances from s to p and p′, respectively. For the side-view projection, we use $\mathbf{v}^S(p) = [v_x^S(p), v_y^S(p), v_z^S(p)]$ as the 3D deformation vector. $\mathbf{v}^S(p)$ is back-projected from v(p′) using
$$v_x^S(p) = 0, \qquad v_y^S(p) = \frac{|sp|}{|sp'|}\,v_n(p'), \qquad v_z^S(p) = \frac{|sp|}{|sp'|}\,v_m(p') \tag{4}$$
The back-projections through Eqs. 3 and 4 preserve the collinearity of the points on the line sp′ after the 3D deformation. By imposing this collinearity constraint, implausible 3D distortions of organ anatomy are avoided. However, this constraint may over-constrain the anatomical variations of inter-subject registration. This problem is addressed in the “Discussion and Conclusions” section.
In the case where the projections are taken from oblique angles (e.g., θ = 45°), the principle of back-projection is similar to the case described above. The only difference is that the world coordinate system is rotated clockwise about the y-axis so that angle θ becomes the top view and θ + 90° becomes the side view.
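A sketch of the back-projection in Eqs. 3 and 4: the detector-plane deformation is scaled by |sp|/|sp′|, and the component along the viewing direction is left at zero. The mapping of the detector axes (m, n) onto the world axes is an assumption consistent with the coordinate system of Fig. 1:

```python
import numpy as np

def backproject_2d_deformation(v2d, p, p_prime, s, view="top"):
    """Back-project a 2D detector deformation vector into 3D (Eqs. 3-4):
    the in-plane components are scaled by |sp|/|sp'|, and the component
    along the viewing direction is left unchanged, which preserves the
    collinearity of s, p and p'."""
    p, p_prime, s = (np.asarray(a, float) for a in (p, p_prime, s))
    scale = np.linalg.norm(p - s) / np.linalg.norm(p_prime - s)
    vm, vn = v2d
    if view == "top":    # viewing along z: deform x and y
        return scale * np.array([vm, vn, 0.0])
    else:                # side view, viewing along x: deform y and z
        return scale * np.array([0.0, vn, vm])
```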
After the 2D deformations are back-projected into 3D space, the deformations of the different modalities are combined. In this study, the evaluated combinations are based on the following principles: (1) no more than three modalities are included in a combination; (2) if 2D modalities (X-ray imaging and optical camera) are included, there are no more than two 2D modalities; and (3) if two 2D modalities are included, they must be orthogonal to each other so that redundant projection is avoided. Based on these principles, we use a three-element binary vector b = [bT, bS, bL] to describe the composition of a combination, where bT, bS, and bL stand for the presence of top-view, side-view, and laser range data, respectively. bT = 1 indicates the inclusion of a top-view 2D image, bT = 0 indicates its exclusion, and similarly for bS and bL. For example, the combination “C0+C90” has the b value of [1, 1, 0], and the combination “X0+C90+L” has the b value of [1, 1, 1]. The deformations of the different modalities are combined into one 3D global deformation according to
$$v_x(p) = \frac{b_T v_x^T(p) + b_L v_x^L(p)}{b_T + b_L}, \qquad v_y(p) = \frac{b_T v_y^T(p) + b_S v_y^S(p) + b_L v_y^L(p)}{b_T + b_S + b_L}, \qquad v_z(p) = \frac{b_S v_z^S(p) + b_L v_z^L(p)}{b_S + b_L} \tag{5}$$
where v(p) = [vx(p), vy(p), vz(p)] is the combined deformation and $\mathbf{v}^L(p) = [v_x^L(p), v_y^L(p), v_z^L(p)]$ is the 3D deformation obtained from the registration with the laser scan.
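The per-vertex combination can then be sketched as an axis-wise average, assuming the top view constrains x and y, the side view y and z, and the laser scan all three axes. This averaging form is our reading of Eq. 5, not a verbatim reproduction:

```python
import numpy as np

def combine_deformations(b, vT=None, vS=None, vL=None):
    """Combine per-modality deformations into one 3D vector (axis-wise
    averaging sketch of Eq. 5). b = [bT, bS, bL] flags which modalities
    are present; absent modalities contribute nothing."""
    bT, bS, bL = b
    zero = np.zeros(3)
    vT = zero if vT is None else np.asarray(vT, float)
    vS = zero if vS is None else np.asarray(vS, float)
    vL = zero if vL is None else np.asarray(vL, float)
    # axis-wise weights: which modalities constrain x, y, and z
    wx, wy, wz = bT + bL, bT + bS + bL, bS + bL
    vx = (bT * vT[0] + bL * vL[0]) / wx if wx else 0.0
    vy = (bT * vT[1] + bS * vS[1] + bL * vL[1]) / wy if wy else 0.0
    vz = (bS * vS[2] + bL * vL[2]) / wz if wz else 0.0
    return np.array([vx, vy, vz])
```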
Validation of Registration Accuracy
For each of the 11 combinations, the 23 target subjects were imaged with the simulation method described in “Materials and Methods” section. As a result, 11×23=253 sets of images were generated. Each of the five atlases was registered with the 253 sets of images; therefore, 5×253=1,265 registration results were obtained. For each registration result, the registration accuracy of each organ was measured using the Dice coefficient
$$D = \frac{2\,|R_A \cap R_S|}{|R_A| + |R_S|} \tag{6}$$
where RA and RS represent the organ region of the registered atlas and the expert segmentation, respectively (|·| denotes the number of voxels in a region, and ∩ denotes the intersection of two regions). The Dice coefficient has the value range [0, 1]. If two regions completely overlap with each other, the Dice coefficient is 1; if two regions have no overlap at all, the Dice coefficient is 0. To evaluate the registration accuracy of the different combinations for different organs, the mean Dice coefficients and standard deviations were calculated based on the 5×23=115 results of each combination.
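Computing Eq. 6 on binary organ masks is straightforward; a minimal sketch:

```python
import numpy as np

def dice_coefficient(region_a, region_s):
    """Dice coefficient of two binary organ masks (Eq. 6):
    D = 2 |R_A intersect R_S| / (|R_A| + |R_S|), with values in [0, 1]."""
    a = np.asarray(region_a, bool)
    s = np.asarray(region_s, bool)
    denom = a.sum() + s.sum()
    # two empty regions are treated as a perfect match (edge-case convention)
    return 2.0 * np.logical_and(a, s).sum() / denom if denom else 1.0
```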
Results
The statistics of the registration accuracy for the different combinations and organs are shown in Fig. 4, where the 10 organs are displayed in four sub-plots. These results were calculated across the five atlases to reduce the possible bias of using any individual atlas. It can be seen that different organs have different levels of accuracy. The whole body (Dice ≈ 0.9) and the brain (0.6≤Dice≤0.8) have the largest Dice coefficients. The bones (Dice≤0.4), the spleen (Dice≤0.35), and the bladder (Dice≤0.15) have the lowest Dice coefficients. Large internal soft organs like the lungs, heart, and liver have a similar level of Dice coefficient (0.4≤Dice≤0.6), and the right kidney (0.3≤Dice≤0.6) is always more accurate than the left kidney (0.3≤Dice≤0.5). Fig. 4 also reveals the differences between the modality combinations. Generally, the combinations with X-ray projections have larger Dice coefficients for bones, lungs, and heart than those without X-ray projections because the bones and lungs have good X-ray contrast and the heart is highly correlated with the lungs. Moreover, the addition of the laser scan increased the Dice coefficients of the kidneys and the spleen, since the positions of the kidneys and spleen correlate well with the surface geometry. The accuracy differences between different organs and different combinations are addressed in the “Discussion and Conclusions” section.
Fig. 4.
Registration accuracy (Dice coefficients) of different combinations of the non-tomographic modalities. The results for different organs are displayed in four separated charts. The means and standard deviations are calculated based on the registration results of five atlases and 23 subjects.
To give an intuitive demonstration, some representative images of the registration results are shown in Fig. 5. To avoid being lost in too much data, we only demonstrate the results of four combinations, i.e., “C0+C90,” “C90+L,” “X0+C90,” and “X0+C90+L.” “C0+C90” and “C90+L” are the two least expensive combinations in terms of hardware implementation, while “X0+C90” and “X0+C90+L” are the two combinations with the best compromise between the accuracies of high X-ray contrast structures (skin, skeleton, and lungs) and low X-ray contrast structures (heart, liver, spleen, kidneys, and bladder), according to the results in Fig. 4. For each combination, both good and imperfect results are demonstrated. The good examples are selected from the cases where the atlases match well with the subjects, while the imperfect examples are selected from the cases where the atlases match poorly with the subjects. For the imperfect examples, the same subject is shown for all four combinations, so as to reveal the differences between the combinations. In Fig. 5, each row shows one combination; the left column shows the good cases, and the right column shows the imperfect cases. The white contours indicate the registered atlases, and the colored regions indicate the expert segmentation.
Fig. 5.
Representative registration results of four combinations. Each row shows one combination; the left column shows the good examples, and the right column shows the imperfect examples. The white contours indicate the registered atlases, and the colored regions indicate the human expert segmentation of the micro-CT images that are used for the simulation.
The registration workflow was programmed with IDL 7.1 (ITT Visual Information Solutions, Boulder, CO, USA). The elastix toolbox was invoked by the IDL program at run time. The registration was executed on a PC with a 3.05-GHz CPU and 5.99-GB RAM. The time requirements were ≈20 s for the registration of each 2D image (camera photo or X-ray projection) and ≈2 min for the registration of each laser scan. The entire workflow generally took 4–6 iterations, leading to less than 5 min for the combinations without surface scanning and less than 20 min for the combinations with surface scanning.
Discussion and Conclusions
Accuracy of Different Organs
Based on Fig. 4, the whole body has the largest Dice coefficient and the smallest standard deviation. This is because the whole body is the most obvious region in the acquired images. The brain also has high accuracy because its shape and position are highly correlated with the head outline, which is clear in both optical photos and X-ray projections. The bones have low accuracy because they have long and curved shapes which are difficult to match precisely. The lungs, heart, and liver have a similar level of Dice coefficient because they are in the same anatomical region, i.e., the thoracic and upper abdominal region. Compared to other soft organs, the lungs, heart, and liver are relatively larger in size and more stable in position. Therefore, they are more accurately registered than other soft organs. It is also interesting to find that the right kidney is always more accurate than the left kidney, because the position of the right kidney is affected by the liver (which is large and stable), while the position of the left kidney is affected by the stomach (which varies in size according to food contents). The spleen and the bladder have the smallest Dice coefficients and the largest standard deviations. It is noticeable that the standard deviation of the bladder is even larger than its mean value (see Fig. 4), meaning that the accuracy distribution of the bladder is highly scattered and skewed. This is because the bladder anatomy is strongly affected by urine production, which is an almost random process. The spleen is inaccurate because it has a long and thin shape and its position is affected by the surrounding large organs, such as the highly variable stomach. Even a slight mismatch of the spleen position or orientation will cause a significant decrease of the Dice coefficient.
Influence of the Combination of Non-tomographic Modalities
As Fig. 4 reveals, it is obvious that adding the X-ray projection improves the accuracy of bones, lungs, and heart and adding the laser scanner improves the accuracy of the spleen and the kidneys. There are also some more detailed findings from Fig. 4 which are discussed below.
“C0+C90,” “L Only,” and “C90+L” are the three all-optical combinations. Comparing “L Only” with “C0+C90,” “L Only” is less accurate for most soft organs, even though it contains richer stereo information than “C0+C90.” This is because “L Only” overemphasizes surface alignment, thereby sacrificing the accuracy of the internal organs.
Comparing the eight combinations that contain X-rays, significant differences can also be found among them. For example, “C0+X90” and “X0+C90” are both composed of one X-ray projector and one optical camera. However, “X0+C90” is much more accurate than “C0+X90,” for three reasons. (1) The top-view X-ray can indicate the transverse curvature of the spine, while the top-view camera cannot; although the side-view X-ray can reveal the lateral curvature of the spine, this lateral curvature is also reflected in the side-view optical silhouette. Therefore, “X0+C90” is better at estimating 3D spine curvature than “C0+X90.” (2) The top-view X-ray gives a better view of the positions and shapes of the lungs and the heart because these organs have greater anatomical variations in the transverse direction than in the lateral direction. (3) The positions of some abdominal organs (e.g., the liver, the kidneys, and the spleen) are correlated with the lungs and the spine; therefore, “X0+C90” also gives better predictions of these abdominal organs.
Comparing “X0+C90” with “X0+X90,” “X0+X90” is slightly more accurate for the bones, lungs, and heart because it contains more X-ray information. However, “X0+X90” is less accurate for the organs with low X-ray contrast (mainly abdominal organs) because it overemphasizes the alignment of the high-contrast organs, thereby sacrificing registration accuracy for the low-contrast ones.
“X0+X90” and “X45+X135” are both all-X-ray combinations. Although both are composed of orthogonal projections, “X45+X135” has much lower Dice coefficients than “X0+X90.” This result confirms the advantage of top and side views over oblique views.
Comparing the combinations with and without the laser scan, adding the laser scan only slightly increases the Dice coefficient of the whole body. This is because the Dice coefficient is mainly driven by the global overlapping volume and is not sensitive to improvements in local skin alignment. As Fig. 5 (right column) shows, the laser scan clearly improves local skin alignment, even though this improvement is barely reflected in the Dice coefficient.
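A surface-based metric would make the laser scan's contribution visible where Dice does not. As an illustrative sketch (this metric was not part of the evaluation in this study), the mean nearest-neighbor distance between skin point clouds responds directly to a local misalignment that leaves the volumetric overlap, and hence the Dice coefficient, essentially unchanged:

```python
import numpy as np

def mean_surface_distance(pts_a, pts_b):
    """Mean distance from each point of surface A to its nearest point on B.

    Brute-force pairwise distances; adequate for small point clouds.
    pts_a: (Na, 3), pts_b: (Nb, 3).
    """
    diffs = pts_a[:, None, :] - pts_b[None, :, :]  # (Na, Nb, 3)
    d = np.sqrt((diffs ** 2).sum(axis=-1))         # (Na, Nb)
    return d.min(axis=1).mean()

# Toy skin patches: a flat 10x10 grid vs. the same grid with one point
# displaced 0.5 units along the surface normal (a local skin mismatch).
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
flat = np.stack([xs.ravel(), ys.ravel(), np.zeros(100)], axis=1)
bumped = flat.copy()
bumped[44, 2] = 0.5  # single local displacement

print(mean_surface_distance(bumped, flat))  # small but nonzero: 0.005
```
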
Fig. 5 shows both good (left column) and imperfect (right column) examples of the registration results, based on four combinations: “C0+C90,” “X0+C90,” “C90+L,” and “X0+C90+L.” The good examples are selected from subjects that are similar to the atlas in body size, shape, and internal organ distribution. It is worth noting that even for these good examples, a perfect match of the bladder is not guaranteed. The imperfect examples reveal essential differences between the four combinations. “C0+C90” has poor internal organ alignment because the optical photos contain no internal anatomy. “X0+C90” achieves much better accuracy for the internal organs (especially the skeleton and lungs), thanks to the use of X-rays. Since “C0+C90” and “X0+C90” rely only on 2D modalities, both have imperfect skin matches. By using the laser scanner, “C90+L” and “X0+C90+L” achieve much better skin alignment, which is important for applications that require good skin registration, such as optical tomographic reconstruction. Comparing “C90+L” with “X0+C90+L,” “X0+C90+L” achieves better accuracy for the internal organs, owing to the use of X-rays.
Comparison with Previous Studies
As introduced in the first section, a number of previous studies on mouse atlas registration with non-tomographic modalities have been reported. A major difference between this study and previous work is the scope of the modalities considered. This study includes 11 different combinations of three non-tomographic modalities, while the previous studies mainly included one optical modality, i.e., an optical camera [1, 16] or a surface scanner [29, 30]. Although this study includes the same combinations as the previous studies (“C0+C90” and “L Only”), a direct comparison of registration accuracy is difficult because the previous studies were based on only one or two test subjects. Nevertheless, both this study and the previous ones reveal similar patterns: the whole body and the brain tend to have larger Dice coefficients than the internal organs and the skeleton, and the spleen and the bladder tend to have the lowest Dice coefficients.
Besides registration with non-tomographic modalities, there is also previous work on registration with fully tomographic imaging modalities like micro-CT [24, 52] and micro-MR [16]. Comparing our accuracy with these methodologies as reported in the literature, we find that our Dice coefficients (for all 11 combinations) are generally lower than those of fully tomographic registrations. In other words, non-tomographic registration trades registration accuracy for lower methodological and implementation cost. Researchers considering a non-tomographic system should therefore first estimate the level of registration accuracy their application requires. The advantage of a non-tomographic system over a fully tomographic one is that it is easier to combine with molecular imaging systems (e.g., PET, SPECT, and optical tomography) and has lower cost. Normally, researchers who want to co-register a fully tomographic modality with a molecular imaging modality must either physically combine the two systems or rely on a specially designed chamber that transfers the animal from one system to the other. The first solution suffers from the complexity of the fully tomographic system, and the second carries an inherent risk of animal movement. Considering these limitations, incorporating a simpler non-tomographic system with the molecular imaging modalities can be a reasonable choice.
An advantage of this study is the design of the registration method, which enables atlas registration with flexible combinations of the non-tomographic modalities. The algorithm is implemented with the publicly available registration toolbox elastix, its computational cost is reasonable for a standard PC, and it is fully automatic. All these features make the method easy for other researchers to implement and use. Compared with most existing 2D/3D registration methods [31–34], which jointly register the atlas with 2D projections via direct 3D deformation, this method separately registers the atlas with each 2D projection and back-projects the 2D deformations into 3D under a collinearity constraint. The rationale for not using joint registration is that direct 3D deformation has many more degrees of freedom than 2D deformations, so 3D shape constraints (such as a statistical shape model) are needed to regularize the 3D deformation; however, building a statistical shape model of the whole-body mouse anatomy is challenging because it involves multiple training subjects and multiple organs. With the collinearity constraint, we investigated a simpler way to regularize the atlas deformation, and based on the test results, this constraint works robustly across the 11 imaging combinations. Nevertheless, future improvements of the registration method should consider more sophisticated shape constraints, since the collinearity constraint tends to over-constrain inter-subject deformation and limits registration accuracy. A statistical model of whole-body mouse anatomy is currently under development by our group; we hope it can serve as a 3D shape constraint for joint registration.
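The separate-then-back-project strategy can be illustrated for the simplest case of two orthogonal parallel-beam views. The sketch below is a didactic simplification, not the elastix-based implementation of this study: per-point displacements estimated independently in a top view (x–y image plane) and a side view (x–z image plane) are fused into one 3D displacement per atlas point, with the shared x component averaged between the two views.

```python
import numpy as np

def fuse_orthogonal_displacements(top_disp, side_disp):
    """Fuse 2D displacements from two orthogonal parallel-beam views into 3D.

    top_disp  : (N, 2) array of (dx, dy) from the top-view 2D registration
    side_disp : (N, 2) array of (dx, dz) from the side-view 2D registration
    returns   : (N, 3) array of (dx, dy, dz)
    """
    dx = 0.5 * (top_disp[:, 0] + side_disp[:, 0])  # x is observed in both views
    dy = top_disp[:, 1]                            # y is visible only from the top
    dz = side_disp[:, 1]                           # z is visible only from the side
    return np.stack([dx, dy, dz], axis=1)

# One atlas point whose two views agree it moved +1 unit in x:
top  = np.array([[1.0, 2.0]])   # (dx, dy)
side = np.array([[1.0, -0.5]])  # (dx, dz)
print(fuse_orthogonal_displacements(top, side))  # → [[ 1.   2.  -0.5]]
```

In the actual method, the 2D deformations come from intensity-based elastix registrations, and the collinearity constraint regularizes how the back-projected displacements vary along each projection ray; this sketch only conveys the geometric idea of recovering 3D motion from orthogonal 2D views.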
Future Directions
In the future, we are planning to construct a low-cost hardware system based on the results of this study. Turning from simulation to hardware, one important aspect that should be addressed is the calibration of the imaging devices [18] because this registration method requires the configuration of the imaging devices to virtually project the atlas into X-ray projections and optical silhouettes. Another future direction is to make a mouse atlas that can fit different types of subject populations. Currently, we are developing a statistical mouse atlas including different ages, strains, sexes, and postures, with the hope that this atlas can be adaptive to different subject populations.
Acknowledgments
The authors thank Dr. Stefan Klein and Dr. Marius Staring for providing the elastix registration toolbox and for their advice on its use, and Dr. Yuri Boykov for offering publicly available code for the graph cuts method, which was used for mouse atlas and subject phantom construction. We also acknowledge Dr. Ritva Lofstedt for comments on this paper and Richard Taschereau, Waldemar Ladno, Nam Vu, David Prout, Zheng Gu, Alex Dooraghi, and Brittany Berry Puzey for helpful discussions on this project. This work was supported in part by SAIRP NIH-NCI 2U24 CA092865 and in part by a UCLA Chancellor’s Bioscience Core grant.
Footnotes
Conflict of Interest Statement. A provisional patent application describing this work has been filed (UCLA Case No. 2011–395).
References
- 1.Baiker M, Vastenhouw B, Branderhorst W, et al. Atlas-driven scan planning for high-resolution micro-SPECT data acquisition based on multi-view photographs: a pilot study. Proc SPIE medical imaging 2009: visualization, image-guided procedures, and modeling; Lake Buena Vista, FL, USA. 2009. pp. 72611L–72618. [Google Scholar]
- 2.Schulz RB, Ale A, Sarantopoulos A, et al. Hybrid system for simultaneous fluorescence and X-ray computed tomography. IEEE Trans Med Imag. 2010;29:465–473. doi: 10.1109/TMI.2009.2035310. [DOI] [PubMed] [Google Scholar]
- 3.Hyde D, Miller EL, Brooks DH, Ntziachristos V. Data specific spatially varying regularization for multimodal fluorescence molecular tomography. IEEE Trans Med Imag. 2010;29:365–374. doi: 10.1109/TMI.2009.2031112. [DOI] [PubMed] [Google Scholar]
- 4.Gulsen G, Birgul O, Unlu MB, Shafiiha R, Nalcioglu O. Combined diffuse optical tomography (DOT) and MRI system for cancer imaging in small animals. Technol Cancer Res Treat. 2006;5:351–363. doi: 10.1177/153303460600500407. [DOI] [PubMed] [Google Scholar]
- 5.Song X, Wang D, Chen N, Bai J, Wang H. Reconstruction for free-space fluorescence tomography using a novel hybrid adaptive finite element algorithm. Opt Express. 2007;15:18300–18317. doi: 10.1364/oe.15.018300. [DOI] [PubMed] [Google Scholar]
- 6.Chow PL, Rannou FR, Chatziioannou AF. Attenuation correction for small animal PET tomographs. Phys Med Biol. 2005;50:1837–1850. doi: 10.1088/0031-9155/50/8/014. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Loening AM, Gambhir SS. AMIDE: a free software tool for multimodality medical image analysis. Mol Imaging. 2003;2:131–137. doi: 10.1162/15353500200303133. [DOI] [PubMed] [Google Scholar]
- 8.Matsui E. Micro CT. Lung Cancer. 2005;49:S137. [Google Scholar]
- 9.Driehuys B, Nouls J, Badea A, et al. Small animal imaging with magnetic resonance microscopy. ILAR J. 2008;49:35–53. doi: 10.1093/ilar.49.1.35. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Unlu MB, Lin Y, Birgul O, Nalcioglu O, Gulsen G. Simultaneous in vivo dynamic magnetic resonance–diffuse optical tomography for small animal imaging. J Biomed Opt. 2008;13:060501. doi: 10.1117/1.3041165. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Judenhofer MS, Wehrl HF, Newport DF, et al. Simultaneous PET-MRI: a new approach for functional and morphological imaging. Nat Med. 2008;14:459–465. doi: 10.1038/nm1700. [DOI] [PubMed] [Google Scholar]
- 12.Xia Z, Huang XS, Zhou XB, et al. Registration of 3-D CT and 2-D flat images of mouse via affine transformation. IEEE Trans Inf Technol Biomed. 2008;12:569–578. doi: 10.1109/TITB.2007.904631. [DOI] [PubMed] [Google Scholar]
- 13.Wildeman MH, Baiker M, Reiber JHC, et al. 2D/3D registration of micro-CT data to multi-view photographs based on a 3D distance map. 6th IEEE int. symp. biomed. imag.: from nano to macro; Boston, MA, USA. 2009. pp. 987–990. [Google Scholar]
- 14.Kok P, Dijkstra J, Botha CP, et al. Integrated visualization of multi-angle bioluminescence imaging and micro CT. Proc SPIE medical imaging 2007: visualization and image-guided procedures; San Diego, CA, USA. 2007. pp. 65091U–65010. [Google Scholar]
- 15.Zhang H, Bao Q, Vu NT, et al. Performance evaluation of PETbox: a low cost bench top preclinical PET scanner. Mol Imag Biol. 2011;13:949–961. doi: 10.1007/s11307-010-0413-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Khmelinskii A, Baiker M, Kaijzel EL, et al. Articulated whole-body atlases for small animal image analysis: construction and applications. Mol Imag Biol. 2011;13:898–910. doi: 10.1007/s11307-010-0386-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Lasser T, Soubret A, Ripoll J, Ntziachristos V. Surface reconstruction for free-space 360 degrees fluorescence molecular tomography and the effects of animal motion. IEEE Trans Med Imag. 2008;27:188–194. doi: 10.1109/TMI.2007.904662. [DOI] [PubMed] [Google Scholar]
- 18.Douglas L, Gabriel T. ACM SIGGRAPH 2009 courses. ACM; New Orleans, Louisiana: 2009. Build your own 3D scanner: 3D photography for beginners. [Google Scholar]
- 19.Li C, Mitchell GS, Dutta J, et al. A three-dimensional multispectral fluorescence optical tomography imaging system for small animals based on a conical mirror design. Opt Express. 2009;17:7571–7585. doi: 10.1364/oe.17.007571. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Leblond F, Davis SC, Valdes PA, Pogue BW. Pre-clinical whole-body fluorescence imaging: review of instruments, methods and applications. J Photochem Photobiol B. 2010;98:77–94. doi: 10.1016/j.jphotobiol.2009.11.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.McLaughlin W, Vizard D. Kodak in vivo imaging system: precise coregistration of molecular imaging with anatomical X-ray imaging in animals. Nature Methods Application Notes. 2006:26–28. [Google Scholar]
- 23.Caliper LifeSciences IVIS® lumina XR system. http://www.caliperls.com/products/preclinical-imaging/ivis-lumina-xr.htm.
- 24.Li X, Yankeelov TE, Peterson TE, Gore JC, Dawant BM. Automatic nonrigid registration of whole body CT mice images. Med Phys. 2008;35:1507–1520. doi: 10.1118/1.2889758. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Fei B, Wang H, Muzic RF, Jr, et al. Deformable and rigid registration of MRI and microPET images for photodynamic therapy of cancer in mice. Med Phys. 2006;33:753–760. doi: 10.1118/1.2163831. [DOI] [PubMed] [Google Scholar]
- 26.Chow PL, Stout DB, Komisopoulou E, Chatziioannou AF. A method of image registration for small animal, multi-modality imaging. Phys Med Biol. 2006;51:379–390. doi: 10.1088/0031-9155/51/2/013. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Lebenberg J, Herard AS, Dubois A, et al. Validation of MRI-based 3D digital atlas registration with histological and autoradiographic volumes: an anatomofunctional transgenic mouse brain imaging study. Neuroimage. 2010;51:1037–1046. doi: 10.1016/j.neuroimage.2010.03.014. [DOI] [PubMed] [Google Scholar]
- 28.Zhang X, Badea CT, Johnson GA. Three-dimensional reconstruction in free-space whole-body fluorescence tomography of mice using optically reconstructed surface and atlas anatomy. J Biomed Opt. 2009;14:064010. doi: 10.1117/1.3258836. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Joshi AA, Chaudhari AJ, Li C, et al. DigiWarp: a method for deformable mouse atlas warping to surface topographic data. Phys Med Biol. 2010;55:6197–6214. doi: 10.1088/0031-9155/55/20/011. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Chaudhari AJ, Joshi AA, Darvas F, Leahy RM. A method for atlas-based volumetric registration with surface constraints for Optical Bioluminescence Tomography in small animal imaging. Proc SPIE medical imaging 2007: physics of medical imaging; 2007. pp. 651024–651010. [Google Scholar]
- 31.Dworzak J, Lamecker H, von Berg J, et al. 3D reconstruction of the human rib cage from 2D projection images using a statistical shape model. Int J Comput Assist Radiol Surg. 2010;5:111–124. doi: 10.1007/s11548-009-0390-2. [DOI] [PubMed] [Google Scholar]
- 32.Groher M, Zikic D, Navab N. Deformable 2D–3D registration of vascular structures in a one view scenario. IEEE Trans Med Imag. 2009;28:847–860. doi: 10.1109/TMI.2008.2011519. [DOI] [PubMed] [Google Scholar]
- 33.Benameur S, Mignotte M, Parent S, et al. 3D/2D registration and segmentation of scoliotic vertebrae using statistical models. Comput Med Imaging Graph. 2003;27:321–337. doi: 10.1016/s0895-6111(03)00019-3. [DOI] [PubMed] [Google Scholar]
- 34.Fu DS, Kuduvalli G. A fast, accurate, and automatic 2D–3D image registration for image-guided cranial radiosurgery. Med Phys. 2008;35:2180–2194. doi: 10.1118/1.2903431. [DOI] [PubMed] [Google Scholar]
- 35.van der Bom MJ, Pluim JP, Gounis MJ, et al. Registration of 2D X-ray images to 3D MRI by generating pseudo-CT data. Phys Med Biol. 2011;56:1031–1043. doi: 10.1088/0031-9155/56/4/010. [DOI] [PubMed] [Google Scholar]
- 36.Chen X, Gilkeson RC, Fei B. Automatic 3D-to-2D registration for CT and dual-energy digital radiography for calcification detection. Med Phys. 2007;34:4934–4943. doi: 10.1118/1.2805994. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Kim Y, Kim K-I, Choi Jh, Lee K. Novel methods for 3D postoperative analysis of total knee arthroplasty using 2D–3D image registration. Clin Biomech. 2010;26:384–391. doi: 10.1016/j.clinbiomech.2010.11.013. [DOI] [PubMed] [Google Scholar]
- 38.van der Bom MJ, Bartels LW, Gounis MJ, et al. Robust initialization of 2D–3D image registration using the projection-slice theorem and phase correlation. Med Phys. 2010;37:1884–1892. doi: 10.1118/1.3366252. [DOI] [PubMed] [Google Scholar]
- 39.Zheng G. Effective incorporating spatial information in a mutual information based 3D–2D registration of a CT volume to X-ray images. Comput Med Imaging Graph. 2010;34:553–562. doi: 10.1016/j.compmedimag.2010.03.004. [DOI] [PubMed] [Google Scholar]
- 40.Stout D, Chatziioannou A, Lawson T, et al. Small animal imaging center design: the facility at the UCLA Crump Institute for Molecular Imaging. Mol Imag Biol. 2005;7:393–402. doi: 10.1007/s11307-005-0015-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Delingette H. General object reconstruction based on simplex meshes. Int J Comput Vis. 1999;32:111–146. [Google Scholar]
- 42.Boykov Y, Kolmogorov V. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans Pattern Anal Mach Intell. 2004;26:1124–1137. doi: 10.1109/TPAMI.2004.60. [DOI] [PubMed] [Google Scholar]
- 43.Kohli P, Torr PHS. Efficiently solving dynamic Markov random fields using graph cuts. Proc IEEE international conference on computer vision (ICCV 05); Beijing, China. 2005. pp. 922–929. [Google Scholar]
- 44.Dogdas B, Stout D, Chatziioannou AF, Leahy RM. Digimouse: a 3D whole body mouse atlas from CT and cryosection data. Phys Med Biol. 2007;52:577–587. doi: 10.1088/0031-9155/52/3/003. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45.Segars WP, Tsui BMW, Frey EC, Johnson GA, Berr SS. Development of a 4-D digital mouse phantom for molecular imaging research. Mol Imag Biol. 2004;6:149–159. doi: 10.1016/j.mibio.2004.03.002. [DOI] [PubMed] [Google Scholar]
- 46.Thevenaz P, Unser M. Optimization of mutual information for multiresolution image registration. IEEE Trans Image Process. 2000;9:2083–2099. doi: 10.1109/83.887976. [DOI] [PubMed] [Google Scholar]
- 49.Klein S, Staring M, Murphy K, Viergever MA, Pluim JP. elastix: a toolbox for intensity-based medical image registration. IEEE Trans Med Imag. 2010;29:196–205. doi: 10.1109/TMI.2009.2035616. [DOI] [PubMed] [Google Scholar]
- 50.Stefan K, Josien PP, Marius S, Max AV. Adaptive stochastic gradient descent optimisation for image registration. Int J Comput Vis. 2009;81:227–239. [Google Scholar]
- 51.Staring M, Klein S, Pluim JP. A rigidity penalty term for nonrigid registration. Med Phys. 2007;34:4098–4108. doi: 10.1118/1.2776236. [DOI] [PubMed] [Google Scholar]
- 52.Baiker M, Milles J, Dijkstra J, et al. Atlas-based whole-body segmentation of mice from low-contrast micro-CT data. Med Image Anal. 2010;14:723–737. doi: 10.1016/j.media.2010.04.008. [DOI] [PubMed] [Google Scholar]