Abstract
Translational science requires the use of mouse models for the characterization of disease and the evaluation of treatment therapies. However, scientists often lack comprehensive training in the systemic and regional anatomy of the mouse, which limits their ability to perform studies involving complex interventional procedures. We present our methodologies for the development, evaluation, and dissemination of an interactive 3D mouse atlas that includes designs for emulating procedural techniques. We present the novel integration of super-resolution imaging techniques, depth-of-field interactive volume rendering of large data, and the seamless delivery of remote visualization and interaction to thin clients.
Keywords: Translational science, simulation, procedural techniques
1. Introduction
To effectively translate basic research into practical treatments, scientists must engage in programs of investigation that meet the complex challenges of advancing disease prevention and treatment. These scientists must be able to collaborate with clinicians in identifying clinical problems and design relevant basic science experiments that will contribute to solutions. Current efforts to develop new scientific teams include training clinical and basic scientists in the principles and concepts of fields outside their primary disciplines and promoting a culture of scientific integration built on growing interdisciplinary expertise. This is, however, a time-consuming process. Accelerating this interdisciplinary conceptual, technical, and cultural training is required to meet the growing health care needs of our population. Limitations of time, laboratory resources, and expert tutelage significantly impede effective scientific training.
As a specific example, translational science requires the use of various animal models for disease characterization. A lack of comprehensive understanding of the normal systemic and regional anatomy of the model, and a limited ability to properly administer procedural techniques, can lead to sub-optimal results and the waste of valuable resources. Proper training in animal model use is challenged by care costs and by the limited availability of animals, equipment, and expert resources. An integrated system is required to obviate the need for actual animals in initial and formative training, to test both knowledge and contextual skills, and to facilitate the efficient and humane employment of animal models.
2. Background
Classical examples of atlases of mouse anatomy include Cook [1] and Iwaki [2], and are limited to illustrations and images (see Figure 1). A variety of mouse atlases have previously been developed focusing on the brain and on mouse development. Digimouse [3] and The Visible Mouse [4] are examples of whole-body mouse atlases. These atlases provide high-resolution (100 μm) images, access to 2D slice viewing software, and downloadable 3D polygonal (surface) data that can be viewed using an external program. Most current atlases rely on surface-based models that do not maintain the internal volumetric structural information. Furthermore, these projects do not provide an interactive 3D volumetric rendering solution and have no support for interactive procedural training.
Figure 1. Examples of atlases showing real limb osteology from Cook (left, illustration) and Iwaki (right, photograph).
Researchers at OSC have developed virtual simulations including anatomical visualization for research and integrated multisensory training simulations [5]. Procedural and surgical simulations have included interactive aural and graphical representations integrated with force feedback (haptic) devices for novel interfaces with instrumentation. More recently, OSC has translated developments derived from the human surgical simulation field for application to veterinary surgical training [6]. Through the use of digital models and low-cost computing environments, these integrated simulation technologies have served as adjuvants to traditional anatomical and surgical methods for conceptual and procedural training.
3. Methods & Materials
For this specific effort, whole-body isotropic images of the fixed specimen were obtained at approximately 60 microns (512×512×2000). The μCT data were obtained using a Siemens Inveon™ microCT imaging system (Siemens, Erlangen, Germany). To obtain μMRI data, a 9.4T BioSpec 94/30 horizontal-bore magnet (Bruker BioSpin, Germany) was used. The μCT acquisitions provided the data sets used to reconstruct the osseous anatomy of the mouse, an example of a systemic atlas. Each voxel was originally 16 bits. Using VolEdit, existing software developed in the Interface Lab and described previously [6], segmentation of the mouse osseous anatomy was performed with Cook's and Iwaki's atlases as guides (see Figures 1 and 2).
Figure 2. View from the Interactive Mouse Atlas showing highlighting and definition of a selected segment (femurs).
Super Resolution (SR) Reconstruction
MRI acquisitions provide the soft tissue anatomy required for creating a 3D mouse model. High-resolution isotropic MR imaging is limited for in-vivo applications due to long acquisition times. SR techniques [7] instead utilize several 2D multi-slice acquisitions, and an iterative backprojection algorithm is employed to reconstruct an isotropic volume from them. 2D multi-slice scans require significantly less acquisition time than full 3D isotropic acquisitions.
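The iterative backprojection scheme can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the reconstruction code used in this work: the thick-slice acquisition is modeled as plain slice averaging, and the function names, step size, and iteration count are all illustrative choices.

```python
import numpy as np

def simulate_acquisition(vol, axis, factor):
    """Model a thick-slice 2D multi-slice scan: average `factor`
    thin slices along one axis (a simplifying assumption)."""
    v = np.moveaxis(vol, axis, 0)
    n = (v.shape[0] // factor) * factor
    lo = v[:n].reshape(n // factor, factor, *v.shape[1:]).mean(axis=1)
    return np.moveaxis(lo, 0, axis)

def upsample(lo, axis, factor):
    """Back-project a low-resolution stack by repeating slices."""
    return np.repeat(lo, factor, axis=axis)

def sr_backprojection(stacks, factors, shape, n_iter=20, step=0.5):
    """Iterative backprojection: refine an isotropic estimate until its
    simulated acquisitions match the measured orthogonal stacks.
    `stacks[i]` is low-resolution along axis i only (assumes `shape`
    is divisible by each factor)."""
    est = np.zeros(shape)
    for axis, (lo, f) in enumerate(zip(stacks, factors)):
        est += upsample(lo, axis, f)
    est /= len(stacks)  # initialize with the mean of the upsampled stacks
    for _ in range(n_iter):
        for axis, (lo, f) in enumerate(zip(stacks, factors)):
            # back-project the data-consistency error along this axis
            residual = lo - simulate_acquisition(est, axis, f)
            est += step * upsample(residual, axis, f)
    return est
```

Each sweep reduces the data-consistency residual along every axis, so a few dozen iterations suffice for this toy model; the model-based method of [7] uses a more faithful slice-profile model in place of the simple averaging above.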
An SR reconstructed image of a live mouse using three orthogonal 2D multi-slice images is presented in Figure 3. The 2D multi-slice data used to reconstruct this image were acquired using a 35 mm quadrature volume coil and a respiratory-gated FLASH imaging sequence (TR = 20 ms, TE = 2.7 ms, averages = 8, FOV = 25 mm × 25 mm, matrix = 256 × 256, in-plane resolution = 98 μm, slice thickness = 1 mm, total acquisition time = 15 min).
Figure 3. LEFT: Normal image of mouse liver lobes in a gadolinium-stained adult 129/C57Bl6 male mouse. RIGHT: The same data reconstructed employing super-resolution techniques.
Out-of-Core Memory and Depth-of-Field (DOF) Effect Rendering
Data sets such as those produced by the acquisitions described above are larger than typical video memory. Consequently, we employ a hierarchical representation of the data in order to render such datasets interactively [8]. The system uses lower-resolution (sub-sampled) data sets that can be swapped in and out of video memory depending on the visualization viewpoint and other parameters [9]. Parts of the scene close to the viewer are rendered using high-resolution data, while parts further away can be rendered with less detail [10].
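The combination of a resolution hierarchy, distance-based level selection, and a fixed memory budget can be sketched as below. This is a simplified CPU-side illustration rather than the renderer's actual implementation: the pyramid construction, the `select_level` heuristic, and the LRU `BrickCache` are assumptions standing in for the GPU texture management described in [8-10].

```python
from collections import OrderedDict
import numpy as np

def build_mip_pyramid(volume, levels):
    """Hierarchical representation: each level halves resolution
    by averaging 2x2x2 blocks of the previous level."""
    pyramid = [volume]
    for _ in range(levels - 1):
        v = pyramid[-1]
        d = [s - s % 2 for s in v.shape]  # trim to even dimensions
        v = v[:d[0], :d[1], :d[2]]
        v = v.reshape(d[0] // 2, 2, d[1] // 2, 2, d[2] // 2, 2).mean(axis=(1, 3, 5))
        pyramid.append(v)
    return pyramid

def select_level(brick_center, eye, near=1.0, far=10.0, levels=4):
    """Distance-based level of detail: bricks near the viewer get
    level 0 (full resolution), distant bricks get coarser levels."""
    dist = float(np.linalg.norm(np.asarray(brick_center) - np.asarray(eye)))
    t = np.clip((dist - near) / (far - near), 0.0, 1.0)
    return int(round(t * (levels - 1)))

class BrickCache:
    """LRU cache standing in for limited GPU texture memory: bricks
    are swapped in and out as the viewpoint changes."""
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.cache = OrderedDict()

    def fetch(self, key, load):
        if key in self.cache:
            self.cache.move_to_end(key)  # mark as most recently used
            return self.cache[key]
        brick = load(key)
        self.cache[key] = brick
        self.used += brick.nbytes
        while self.used > self.budget and len(self.cache) > 1:
            _, evicted = self.cache.popitem(last=False)  # evict LRU brick
            self.used -= evicted.nbytes
        return brick
```

The DOF effect maps naturally onto this scheme: out-of-focus regions can be drawn directly from coarse pyramid levels, which both mimics the blur and reduces the data that must reside in texture memory.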
Remote Visualization
In order to facilitate the delivery of imaged data through remote interactive visualization to thin clients (i.e., deskside, laptop, and handheld devices) from data repository host sites such as OSC, a TurboVNC client using VirtualGL [11] was adopted. This provides a seamless remote session in which users view and modify data directly on a centralized server. Data need not be downloaded to a local device for rendering; the rendering is performed at the site where the data are stored. This not only allows users to access their larger data sets quickly, but also supports the establishment of data repositories, limits the management of multiple copies of data, and promotes standardization.
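A typical server-side rendering session of this kind can be set up as follows. This is a generic sketch assuming default TurboVNC install paths; the viewer binary and data path are hypothetical placeholders, not the configuration actually deployed at OSC.

```shell
# On the server hosting the data (e.g., a visualization node):
# start a TurboVNC session; the X display it creates (:1) is what
# thin clients will connect to.
/opt/TurboVNC/bin/vncserver :1

# Inside that session, launch the viewer through VirtualGL so OpenGL
# rendering happens on the server's GPU, not on the client.
# (volume_viewer and the data path are hypothetical.)
vglrun ./volume_viewer /data/mouse_atlas.raw

# On the thin client, connect to the remote session; only compressed
# screen images travel over the network, never the volume data itself.
/opt/TurboVNC/bin/vncviewer server.example.edu:1
```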
4. Results
Initial formative evaluation was obtained from a Senior Research Associate of the Comparative Anatomy and Mouse Phenotyping Core at OSU's Comprehensive Cancer Center, who approved the basic interface design, functionality, and representation. Subsequently, we successfully demonstrated the remote interactive sessions at the Eye and Ear Institute at The Ohio State University Medical Center in December 2010. Most recently, the system was demonstrated at the SXSW conference in Austin, Texas, and at AANS in Denver, Colorado, in April 2011. The visualizations ran at OSC and, over wireless, provided real-time performance for interactive volume reconstructions of computed tomography data on the iPad™. In addition, we are implementing the remote visualization code for PC-based slates. Current designs include integration into anatomical courses at OSU's College of Veterinary Medicine; the goal is to eventually provide the digital atlas over wireless to a handheld device for use in the anatomy curriculum. An implementation of the viewing software has been ported to the Small Animal Imaging Shared Resource for clients to remotely view acquired data, and additional testing is ongoing. An example iPad™ session showing interaction with temporal bone data is available online as a movie (see http://dl.dropbox.com/u/616225/VolRen.mov). A movie of an interactive session with an anatomical atlas of the mouse skull is available at: http://www.youtube.com/watch?v=IUcUxWeostw&feature=mfu_in_order&list=UL.
The data size was 315 MB for the osseous anatomy, and the graphics card texture memory was limited algorithmically to 150 MB during runtime. The latency observed in the non-DOF rendering is mainly due to the time required to upload data from main memory to GPU memory. The DOF rendering requires much less storage, and data for different views can be cached in GPU memory simultaneously. When the data is cached, the system renders at 9 fps. A comparison of non-DOF versus DOF rendering for a μCT data set of the whole mouse skeleton can be observed at: http://www.youtube.com/watch?v=x06nvJHL4ZA.
5. Conclusions
As data derived from more advanced imaging systems continue to increase in quality, and consequently in size, more efficient and convenient methods are required to provide optimal access and utility. We have presented an integrated system that provides access to a digital atlas of mouse anatomy for use in training translational scientists in the regional and systemic anatomy of animal models [12][13]. By employing super-resolution magnetic resonance acquisition techniques, we have created volume data sets of excellent detail for use in an interactive digital atlas. Through hierarchical data representation, we exploit direct volume rendering techniques to provide interactive 3D models from very large data sets. By providing interactive, remote volume visualization, the current system obviates the need to download data to a thin client for rendering. This supports the establishment of data repositories, limits the management of multiple copies of data, and promotes standardization.
Acknowledgments
This research was supported with ARRA funding from the National Center for Research Resources (NCRR UL1RR025755), P30 CA016058, and with partial funding from R01 DC011321091A1 from the National Institute on Deafness and Other Communication Disorders (NIDCD) of the National Institutes of Health. We also acknowledge the effort of Dr. David Reed from Capital University for assistance with the remote visualization during his sabbatical at the Interface Lab, Autumn 2010.
References
- 1. Cook, Margaret J. The Anatomy of the Laboratory Mouse. Online: www.informatics.jax.org/cookbook/imaginedex.shtml.
- 2. Iwaki, Takamasa. A Color Atlas of Sectional Anatomy of the Mouse. Braintree Scientific; 2001.
- 3. Dogdas B, Stout D, Chatziioannou AF, Leahy RM. Digimouse: A 3D Whole Body Mouse Atlas from CT and Cryosection Data. Phys Med Biol. 2007;52:577–587. doi:10.1088/0031-9155/52/3/003.
- 4. Johnson GA, Cofer GP, Gewalt SL, Hedlund LW. Morphologic Phenotyping with MR Microscopy: The Visible Mouse. Radiology. 2002;222:789–793. doi:10.1148/radiol.2223010531.
- 5. Hiemenz L, Litsky A, Schmalbrock P. Puncture Mechanics for the Insertion of an Epidural Needle. Trans Amer Soc Biomech. 1997;21:36–37.
- 6. Stredney D, Hittle B, Collidas J, McLoughlin MA. Translating Human Simulation Technologies to Veterinary Surgical Training: Accelerating Adoption. In: Westwood JD, et al., editors. Proc MMVR16. Amsterdam: IOS Press; pp. 502–504.
- 7. Souza A, Senn R. Model-based Super-resolution for MRI. In: Proc 30th Annual International IEEE EMBS Conference; Vancouver, British Columbia, Canada; August 20–24, 2008.
- 8. Guthe S, Strasser W. Advanced Techniques for High Quality Multiresolution Volume Rendering. Computers & Graphics. Elsevier Science; 2004. pp. 51–58.
- 9. Crassin C, Neyret F, Lefebvre S, Sainz M, Eisemann E. Beyond Triangles: Gigavoxels Effects in Video Games. In: Proc SIGGRAPH '09. New York, NY: ACM; 2009. http://artis.imag.fr/Publications/2009/CNLSE09.
- 10. Lamar E, Hamann B, Joy KI. Multiresolution Techniques for Interactive Texture-based Volume Visualization. Proc of Visualization (VIS). 1999:355–361.
- 11. VirtualGL Project. http://www.virtualgl.org/
- 12. Manivannan N, Clymer BD, Bratasz A, Powell KA. Orthogonal Super Resolution Reconstruction for 3D Isotropic Imaging in 9.4 MRI. Oral presentation, ISMRM 2011; Montreal, Canada; May 7–13, 2011.
- 13. Powell K, Chen CM, Hittle B, Kerwin T, Bratasz A, Manivannan N, Stredney D. Virtual Simulation of Mouse Anatomy and Procedural Techniques. Poster, 2011 World Molecular Imaging Conference; San Diego; Sept 7–10, 2011.


