Abstract
We developed multi-scale, live-time interactive visualization of color image data, including microscopic whole-mouse cryo-images serving many biomedical applications. Using true-color volume rendering, we interactively, selectively enhanced anatomy using feature detection. For example, to enhance red organs (vessels, liver, etc.) and internal surfaces, we computed a red feature from R/(R+G+B) and surface features from color/gray-scale gradients, respectively. For >70GB cryo-image volumes, we developed multi-resolution visualization, which provided low-resolution rendering of an entire mouse and zooming to organs, tissues, and cells. Fusions of fluorescence and color cryo-volumes uniquely showed biodistribution of metastatic and stem cells within an anatomical context.
Keywords: Cryo-imaging, Color and Gradient Magnitude Feature Detectors, Opacity Transfer Functions, Multi-resolution Volume Rendering
1. Introduction
We are creating specialized image visualization techniques for the enormous, information-rich data sets from whole-mouse cryo-imaging. The cryo-imaging system at Case Western Reserve University provides high-resolution, large field-of-view, anatomical color and molecular fluorescence image data by alternately sectioning and imaging the block face [1–3]. The samples are flash frozen in liquid nitrogen after being embedded in a histological medium called Optimal Cutting Temperature (OCT) compound. The frozen block is alternately sectioned using a cryomicrotome and imaged in a tiled fashion, yielding very large, high-resolution data volumes. Cryo-imaging is unique in that it fills the gap between in vivo imaging, such as with MRI or CT, and histology, allowing one to image along the continuum from mouse → organ → tissue structure → cell. Color image volumes present many more opportunities for volume visualization than do gray-scale volumes such as those from CT or MRI, and we are exploring enhancements. We acquire volumes at microscopic resolution, resulting in data sets as large as 70 GB, far exceeding the maximum RAM (32 GB) available on our PC imaging workstations. This necessitates fast, multi-resolution volume rendering to aid data interpretation. Other microscopy methods with large data sizes will also benefit from the multi-scale visualization approach. For example, we have recently processed gray-scale cryo-electron microscopy images exceeding 250 GB. In addition to color images, cryo-imaging provides fluorescence images of one or more fluorophores in studies using targeted imaging agents, fluorescently labeled stem or cancer cells, targeted drug delivery, tissue-specific fluorescence of transgenics, etc. Multiple modalities (bright field color, fluorescence, multi-spectral imaging, etc.) provide opportunities for renderings of fused data, enabling fast, efficient data interpretation.
Over the last two decades, direct volume rendering has been a key technology for visualization of large 3D datasets from scientific, engineering, and medical applications [4–11]. However, several factors still inhibit its widespread use, including the complex interrelationship of rendering parameters, the lack of interactivity, and the difficulty of designing a suitable transfer function to be used during volume rendering [12]. Interactivity during volume rendering helps in quickly locating anatomical structures of interest and in conducting localized investigations for the presence of one or more fluorescent markers. This kind of interactivity would permit simultaneous studies of anatomical, molecular, and functional data from several organs. With the advent of high-performance graphics hardware, rendering and interacting with fusion volumes can be performed in just a few seconds of computation time. Fluorescence and bright field volumes can be rendered simultaneously, with the ability to dial in/dial out the transparency of each of these volumes, permitting data exploration and interaction in a way not previously possible.
Transfer function design greatly affects the visual outcome of volume rendering [6;8;9]. A transfer function assigns values for optical properties, such as color and opacity, to the original values of the data set being visualized. The design of effective color and opacity transfer functions from scalar-valued data has been the subject of substantial research in recent years, with the design of the color transfer function (1-channel grayscale to 3-channel RGB color mapping) often much more difficult than the design of an opacity transfer function (1-channel grayscale to 1-channel opacity mapping) [5]. In the case of grayscale data, the scalar is the grayscale intensity value and the color mapping results in a pseudo-color assignment. A separate grayscale-to-opacity mapping function is designed for opacity. When color information is available in the original data, one can use a 3-channel to 3-channel mapping function to assign pseudo-color values when natural colors do not provide adequate contrast. In applications (e.g., cryo-imaging) where high-resolution, high-contrast true color information is available and is essential for making biologically useful inferences, it is desirable to employ true-color volume rendering. An appropriate choice of an opacity transfer function still needs to be made, and this largely depends on the data itself. For example, in routine medical visualizations of CT data it is often possible to use pre-defined 1D opacity transfer functions to highlight certain tissue types, such as bone or liver [6;13]. Further, a combination of data attributes, such as color channel values, grayscale value, gradients of channels, and grayscale gradient, can be mapped to a suitable opacity value, as in the case of multidimensional opacity transfer functions reported in the literature [5;8;9]. In some previous studies [5;11], transfer function design was preceded by a tissue classification step in which mathematical classifiers were used to determine the class (tissue type) of each voxel. First, a lookup table was designed that assigned opacities to each tissue type using a simple all-or-none (or hard) classification based on grayscale values. This was not suitable for tissue interfaces, thereby requiring a probabilistic (or soft) classification method that employed maximum likelihood classifiers or piecewise linear mapping to assign opacities based on the probability of each voxel belonging to the various predetermined tissue types. Pattern recognition classification can be a computationally demanding step, which would be problematic for fast visualization of our extremely large cryo-imaging data sets (>70 GB for a color cryo-image volume of an adult whole mouse).
In this paper, we explore direct volume rendering techniques using natural colors, with the opacity values being a function of color and gradients from the data. Specific anatomical structures are enhanced using a two-step process: feature detection followed by rendering using suitable opacity transfer functions (OTFs). We exploit a variety of color feature detection strategies and methods for computing gradients. We include all of these within a graphical user interface, which allows one to either pull up organ-specific stored visualization parameters or interactively (i.e., in live time) identify the best choices for visualizing a particular tissue of interest. Multi-resolution rendering allows one to zoom into a region at full resolution, and slicing functions allow one to create multiplanar reformatted sections showing single fluorescent cells. We chose the Amira (Visage Imaging, San Diego, CA) [14] 3D visualization/analysis software package to create our visualization pipeline. Techniques similar to ours have previously been used on data from the Visible Human Project [15–18]. In particular, opacity transfer functions involving color and color gradient feature detectors [6;8;9] have been employed to derive rendered opacity values. However, we note that the superior resolution and the fluorescence imaging capability of cryo-imaging make our data and techniques distinct from those of the Visible Human Project.
The rest of the paper is organized as follows. In Methods, we describe the feature detectors and OTFs used in creating enhanced volume renderings and the design of our visualization interface. In Results, we illustrate renderings and provide anecdotal user evaluations on embryonic and adult mouse cryo-image data sets. Finally, in Discussion, we discuss the effect of parameter values (e.g., threshold, scalar weights), choice of transfer function, the use of gradients, and computer hardware limitations on volume rendering based on our experiences.
2. Methods
2.1 Cryo-imaging system
The whole mouse cryo-imaging system was developed in Dr. Wilson’s laboratory at Case Western Reserve University. It consists of a modified, bright field/fluorescence stereo microscope; a robotic imaging system positioner; and a customized, motorized cryostat, all fully automated by a control system. By alternately sectioning and imaging, the system acquires 3D, high-resolution, large field of view, color and molecular fluorescence image volumes from sequential images of the tissue block face. Applications include stem cells and regenerative medicine, imaging agent optimization, phenotyping, characterization of spatial gene expression, validation of in vivo medical imaging data, etc. Details of the system, sample preparation, and a review of some applications are described elsewhere [1;2].
2.2 Color-based Feature Detectors
We exploit the rich color separation of cryo-images using color ratio feature detectors. Examples of red, green, and blue ratio feature detectors (cR, cG, cB) are shown below, where R, G, and B refer to the 8-bit data for red, green, and blue channels, respectively. All of our feature detectors are designed to lie in the [0, 1] interval.
$$c_R = \frac{R}{R+G+B}, \qquad c_G = \frac{G}{R+G+B}, \qquad c_B = \frac{B}{R+G+B} \tag{1}$$
These color detectors provide an opportunity to highlight various tissues of interest. It is also possible to derive other color detectors. For example, for detecting the stomach and intestinal regions, which are predominantly brown, we exploited the fact that brown is composed of one part R, two parts G, and no B. A brown feature detector is therefore expressed as a weighted linear combination of the red and green feature detectors.
$$c_{BROWN} = \frac{1}{3}\,c_R + \frac{2}{3}\,c_G \tag{2}$$
Similarly, one could define a purple feature detector with equal amounts of R and B.
$$c_{PURPLE} = \frac{1}{2}\,c_R + \frac{1}{2}\,c_B \tag{3}$$
To highlight the brain, spinal cord, and eyeballs, which had a “light red” tone in the whole mouse test data set, a mixture of R, G, and B in the ratio 0.5R + 0.25G + 0.25B produced the most acceptable results as evaluated visually. This resulted in a specialized "light red" feature detector.
$$c_{LIGHTRED} = \frac{0.5\,R + 0.25\,G + 0.25\,B}{255} \tag{4}$$
For detecting gray tones, we used another specialized feature detector that exploits the fact that gray is composed of balanced amounts of R, G, and B.
$$c_{GRAY} = \frac{|R-G| + |G-B| + |R-B|}{2 \times 255} \tag{5}$$
A low value of cGRAY indicates the presence of “gray” in the volume. It is clear from the formulations of (3)–(5) that the responses of these feature detectors lie in the interval [0, 1].
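To make these detectors concrete, the following NumPy sketch computes (1)–(5) for a color volume. This is a minimal sketch, not our released software: the (nz, ny, nx, 3) array layout, the function name, and the small divide-by-zero guard are illustrative choices, and the normalizations of (2)–(5) follow the reconstructions above.

```python
import numpy as np

def color_features(vol_rgb):
    """Ratio feature detectors of Eqs. (1)-(5) for an 8-bit RGB volume.

    vol_rgb: array of shape (nz, ny, nx, 3) with values in [0, 255].
    Returns a dict of feature volumes, each lying in [0, 1].
    """
    rgb = vol_rgb.astype(np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = R + G + B + 1e-9                     # guard for black voxels (R=G=B=0)

    cR, cG, cB = R / total, G / total, B / total          # Eq. (1)
    cBROWN = (1.0 / 3.0) * cR + (2.0 / 3.0) * cG          # Eq. (2): 1 part R, 2 parts G
    cPURPLE = 0.5 * cR + 0.5 * cB                         # Eq. (3): equal R and B
    cLIGHTRED = (0.5 * R + 0.25 * G + 0.25 * B) / 255.0   # Eq. (4)
    # Eq. (5): low response where R, G, and B are balanced (gray)
    cGRAY = (np.abs(R - G) + np.abs(G - B) + np.abs(R - B)) / (2.0 * 255.0)

    return {"cR": cR, "cG": cG, "cB": cB, "cBROWN": cBROWN,
            "cPURPLE": cPURPLE, "cLIGHTRED": cLIGHTRED, "cGRAY": cGRAY}
```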
2.3 Gradient-based feature detectors
Data gradients are useful in the visualization and interpretation of internal structures and surfaces within volumetric data. Many possibilities exist. First, one can compute gradients on grayscale data. Given the color vector [R, G, B] at a voxel location (x, y, z), the grayscale value Ig is computed using the Y component of the RGB-to-NTSC YIQ transformation matrix [19]:
$$I_g = 0.299\,R + 0.587\,G + 0.114\,B \tag{6}$$
Using the central difference operator, a numerical estimate of grayscale gradient at a voxel location (x, y, z) is obtained:
$$\nabla I_g(x,y,z) = \left[ \frac{I_g(x{+}1,y,z) - I_g(x{-}1,y,z)}{2},\; \frac{I_g(x,y{+}1,z) - I_g(x,y{-}1,z)}{2},\; \frac{I_g(x,y,z{+}1) - I_g(x,y,z{-}1)}{2} \right] \tag{7}$$
A useful gradient function for rendering is the normalized magnitude of the grayscale gradient vector in (7):
$$f_{g,GRAY} = \frac{\left\| \nabla I_g(x,y,z) \right\|}{I_{g,max}} \tag{8}$$
Here, Ig,max denotes the maximum grayscale value (Ig,max = 255 is common for rendering). Similarly, gradient magnitudes can be computed from each color channel by substituting Ig with R, G, or B in equations (7) and (8). Normalized gradient magnitude feature detectors for R, G, and B are listed below:
$$f_{g,R} = \frac{\left\| \nabla R \right\|}{R_{max}}, \qquad f_{g,G} = \frac{\left\| \nabla G \right\|}{G_{max}}, \qquad f_{g,B} = \frac{\left\| \nabla B \right\|}{B_{max}} \tag{9}$$
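The scalar gradient features of (6)–(9) can be sketched in the same style. In the sketch below, np.gradient applies central differences in the volume interior, matching (7); as before, the (nz, ny, nx, 3) layout and function names are our own assumptions.

```python
import numpy as np

def grayscale_gradient_feature(vol_rgb, ig_max=255.0):
    """Normalized grayscale gradient magnitude feature of Eqs. (6)-(8)."""
    rgb = vol_rgb.astype(np.float64)
    # Eq. (6): Y component of the RGB-to-NTSC YIQ transformation
    Ig = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Eq. (7): central differences along the z, y, and x axes
    gz, gy, gx = np.gradient(Ig)
    # Eq. (8): normalized gradient magnitude, lying in [0, 1]
    return np.sqrt(gx**2 + gy**2 + gz**2) / ig_max

def channel_gradient_feature(vol_rgb, channel, ch_max=255.0):
    """Eq. (9): the same computation on one color channel (0=R, 1=G, 2=B)."""
    gz, gy, gx = np.gradient(vol_rgb[..., channel].astype(np.float64))
    return np.sqrt(gx**2 + gy**2 + gz**2) / ch_max
```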
Yet another approach is to compute gradients directly from color vector data. Color vector data can be represented using several color spaces (RGB, L*u*v*, YCbCr, HSV, YIQ, etc.) [19]. It has been reported that, unlike the RGB color space, the L*u*v* color space is perceptually linear [6;9]. This important observation suggests that computing gradients in perceptually linear spaces is more appropriate for visualization by a human user. Following a color space transformation from RGB to L*u*v*, we computed two gradient measures pertaining to the color data vector: the color distance gradient magnitude and the color distance gradient dot product [6;9]. Let C(x,y,z) denote the color volume in L*u*v* color space. The color distance gradient vector Ḏ(x,y,z) is given by
$$\underline{D}(x,y,z) = \left[ \nabla_x,\; \nabla_y,\; \nabla_z \right] \tag{10}$$
where ∇x, ∇y, ∇z denote the x, y, and z components respectively of the color distance gradient vector and are defined by:
$$\nabla_x = \frac{d\!\left(C(x{+}1,y,z),\, C(x{-}1,y,z)\right)}{2}, \quad \nabla_y = \frac{d\!\left(C(x,y{+}1,z),\, C(x,y{-}1,z)\right)}{2}, \quad \nabla_z = \frac{d\!\left(C(x,y,z{+}1),\, C(x,y,z{-}1)\right)}{2} \tag{11}$$
In (11), the function d on the two color vectors C1 and C2 in L*u*v* space is defined by:
$$d(C_1, C_2) = \sqrt{ \left(C_{1,L^*} - C_{2,L^*}\right)^2 + \left(C_{1,u^*} - C_{2,u^*}\right)^2 + \left(C_{1,v^*} - C_{2,v^*}\right)^2 } \tag{12}$$
where Ci,L*, Ci,u*, and Ci,v* denote the L*, u*, v* components respectively of vector Ci. The color distance gradient magnitude (CDGM) is then simply given by:
$$\mathrm{CDGM}(x,y,z) = \left\| \underline{D}(x,y,z) \right\| = \sqrt{\nabla_x^2 + \nabla_y^2 + \nabla_z^2} \tag{13}$$
The above measure can be easily normalized to lie in the interval [0,1] by dividing the RHS of (13) by the maximum CDGM value obtained for a given volume. The normalized CDGM based feature detector is given by:
$$f_{g,CDGM}(x,y,z) = \frac{\mathrm{CDGM}(x,y,z)}{\displaystyle\max_{(x,y,z)} \mathrm{CDGM}(x,y,z)} \tag{14}$$
For the color distance gradient dot product, we first compute the normalized color distance gradient vector as follows:
$$\hat{D}(x,y,z) = \frac{\underline{D}(x,y,z)}{\left\| \underline{D}(x,y,z) \right\|} \tag{15}$$
The color distance gradient dot product (CDGDP) is then defined by:
$$\mathrm{CDGDP}(x,y,z) = \frac{1}{6} \sum_{i=1}^{6} \left| \hat{D}(x,y,z) \cdot \hat{D}(\mathrm{neighbor}_i) \right| \tag{16}$$
where $\cdot$ denotes the dot product and neighbori denotes the six neighbors of a voxel. Note that CDGDP is already normalized to lie in the range [0, 1].
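A sketch of the color vector gradient computation follows. We use scikit-image's rgb2luv for the color space conversion (applied slice by slice for compatibility), and the six-neighbor averaging implements our reading of (16) above; the helper name and the small epsilon guards are our own.

```python
import numpy as np
from skimage import color

def color_distance_gradients(vol_rgb):
    """CDGM (13)-(14) and CDGDP (15)-(16) from an RGB volume (nz, ny, nx, 3)."""
    # RGB -> perceptually linear L*u*v*, converted slice by slice.
    luv = np.stack([color.rgb2luv(s) for s in vol_rgb.astype(np.float64) / 255.0])

    # Eqs. (11)-(12): the per-axis color distance between central-difference
    # neighbors equals the Euclidean norm of the per-channel differences.
    D = np.empty(luv.shape[:3] + (3,))
    for ax in range(3):
        comp = np.gradient(luv, axis=ax)       # central differences, all L*u*v* channels
        D[..., ax] = np.linalg.norm(comp, axis=-1)

    cdgm = np.linalg.norm(D, axis=-1)           # Eq. (13)
    f_cdgm = cdgm / (cdgm.max() + 1e-9)         # Eq. (14)

    d_hat = D / (cdgm[..., None] + 1e-9)        # Eq. (15)

    # Eq. (16): mean |dot product| with the six face neighbors.
    cdgdp = np.zeros_like(cdgm)
    for ax in range(3):
        for shift in (-1, 1):
            neighbor = np.roll(d_hat, shift, axis=ax)
            cdgdp += np.abs(np.sum(d_hat * neighbor, axis=-1))
    return f_cdgm, cdgdp / 6.0
```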
2.4 Opacity Transfer Functions
The color (c) and normalized gradient magnitude (g) feature detectors, which are scalar quantities introduced in Equations 1–5, 8 and 9, serve as inputs to opacity transfer functions (OTF’s), denoted by o in this section, which finally assign a scalar α-opacity value to each voxel in the volume for rendering based on c and g. We have investigated threshold, sigmoidal, and power-law OTF’s. We use a threshold OTF with a threshold parameter, T, and a weighting factor, w, for combining the effects of color and gradient.
$$\alpha(c,g) = o\!\left(T,\; w\,c + (1-w)\,g\right), \qquad o(T,f) = \begin{cases} 1, & f \geq T \\ 0, & f < T \end{cases} \tag{17}$$
A threshold OTF can introduce step artifacts in the rendering that can be alleviated with a sigmoidal OTF.
$$\alpha(c,g) = o\!\left(T,\gamma,\; w\,c + (1-w)\,g\right), \qquad o(T,\gamma,f) = \frac{1}{1 + e^{-\gamma\,(f - T)}} \tag{18}$$
Above, w is a weighting factor, T is a threshold, and γ controls the width of the transition region around T. The α values are rescaled to lie in the interval [0, 255]. For comparison purposes, we have also used a power-law OTF [6;9], as given below, with the following parameters: a scalar k, an exponent γ, and weight w.
$$\alpha(c,g) = k \cdot \left( w\,c + (1-w)\,g \right)^{\gamma} \tag{19}$$
2.4.1 Linear combination after OTF mapping
An alternative approach to α-opacity assignment involves computing separate OTF’s for c and g and linearly combining them using the weighting factor w. Such a linear combination is represented below for the case of sigmoidal OTF’s:
$$\alpha(c,g) = w \cdot o\!\left(T_c, \gamma_c, c\right) + (1-w) \cdot o\!\left(T_g, \gamma_g, g\right) \tag{20}$$
2.4.2 Multiplicative combination of data and gradient in OTF
In some cases, it is more advantageous in terms of memory and computation time to (i) compute c and apply an OTF such as the sigmoidal o(T,γ,c) based only on the color feature, and (ii) multiply the gradient feature g with the OTF computed in step (i) to obtain the final opacity value for a gradient-enhanced rendering. The resulting equation for α-opacity is given below.
$$\alpha(c,g) = g \cdot o\!\left(T, \gamma, c\right) \tag{21}$$
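The OTFs and the three combination strategies reduce to a few lines of NumPy; the sketch below also includes the baseline ramp described in section 2.5. The function names and the clipping step are our own conventions, not part of the released software.

```python
import numpy as np

def otf_threshold(f, T):
    """Threshold OTF of Eq. (17): all-or-none opacity."""
    return (f >= T).astype(np.float64)

def otf_sigmoid(f, T, gamma):
    """Sigmoidal OTF of Eq. (18): smooth transition of width ~1/gamma around T."""
    return 1.0 / (1.0 + np.exp(-gamma * (f - T)))

def otf_power(f, k, gamma):
    """Power-law OTF of Eq. (19)."""
    return k * np.power(f, gamma)

def to_alpha8(o):
    """Rescale an opacity in [0, 1] to the 8-bit range used for rendering."""
    return np.clip(255.0 * o, 0, 255).astype(np.uint8)

# Combination strategies for a color feature c and a gradient feature g in [0, 1]:
def alpha_before_otf(c, g, w, T, gamma):
    """Blend features first, then apply one OTF (the form of Eqs. (17)-(19))."""
    return to_alpha8(otf_sigmoid(w * c + (1.0 - w) * g, T, gamma))

def alpha_after_otf(c, g, w, Tc, gc, Tg, gg):
    """Eq. (20): separate sigmoidal OTFs for c and g, linearly combined."""
    return to_alpha8(w * otf_sigmoid(c, Tc, gc) + (1.0 - w) * otf_sigmoid(g, Tg, gg))

def alpha_multiplicative(c, g, T, gamma):
    """Eq. (21): the gradient feature modulates the color OTF."""
    return to_alpha8(g * otf_sigmoid(c, T, gamma))

def alpha_baseline(gray):
    """Baseline ramp of Section 2.5: darker voxels rendered more opaque."""
    return (255 - gray).astype(np.uint8)
```

For instance, the light red settings reported in Results (cLIGHTRED with a sigmoidal OTF, γ = 50, T = 0.55) would correspond to to_alpha8(otf_sigmoid(c_lightred, 0.55, 50)).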
2.5 Color Volume Rendering and Visualization Pipeline
A block diagram of our volume rendering and visualization pipeline is shown in Fig. 1. The baseline volume rendering is created by simply using the gray value at each voxel as the "feature". In this case, to obtain a rendered opacity value, we complement the gray value by subtracting it from 255, thereby making darker structures more opaque than brighter ones. This is equivalent to using a ramp OTF. The enhanced volume rendering employs color and normalized gradient magnitude feature detectors along with threshold, sigmoidal, and power law OTF's to generate a rendered opacity value. In cases where data and gradient are both employed in deriving the rendered opacity, a suitable combined detection strategy is applied (sec. 2.4). The rendered opacity value is then combined with the original color channel values and provided as input to a true color volume renderer to create the final rendered volume. We evaluated several software packages in terms of volume rendering capability and chose the Amira (Visage Imaging, San Diego, CA) rendering engine because of its true color support and superior rendering quality.
Fig. 1. Block diagram of color volume rendering pipeline for cryo-imaging data. Baseline feature detection uses the voxel grayscale value as the feature, combined with a ramp OTF, to obtain a rendered alpha opacity value. Enhanced feature detection employs color and normalized gradient magnitude feature detectors along with the threshold, sigmoidal, and power law OTF's detailed in secs. 2.2–2.4 to obtain rendered opacity. The effects of data and gradients are combined (combined detection strategy) using techniques detailed in sec. 2.4. The rendered alpha value along with the original RGB channel values are provided as input to the Amira volume renderer, which uses an emission and absorption model to render a 3D volume.
2.6 User Interface for Enhanced Color Volume Rendering (UIECVR)
We have created an intuitive volume visualization user interface, UIECVR, that allows for live time interaction during volume rendering of color data. A schematic block diagram of the UIECVR interface is shown in Fig. 2. We illustrate the visualization workflow using an example mouse bright field color volume. Starting from a baseline volume rendering (Fig. 2A), the region-of-interest (ROI) specification tool is launched (Fig. 2B) to quickly identify a 3D rectangular region for “slab” volume rendering. The ROI can be modified by dragging the small spheres identifying the bounding box, eliminating any extra structures which would otherwise limit our ability to volume visualize a tissue of interest. Typically, we render the entire volume at low resolution using our default scheme, identify an ROI, and then create an enhanced rendering at full resolution from the ROI (more details about multi-resolution rendering are provided later in this section). The cropped, zoomed-in ROI for our mouse volume is shown separately in Fig. 2C. UIECVR allows one to select appropriate color feature detectors [cR, cG, and cB in (1)] to be applied to ROI's. In addition to standard color feature detectors, there is support for preset color detectors [e.g., the “brain” or "light red" detector, brown, gray, etc. in (2)–(5)]. Our interface also supports normalized gradient magnitude feature detectors [(8) and (9)] and color vector gradient based feature detectors [(13)–(16)]. Finally, one can choose the OTF (e.g., threshold, sigmoid, or power law) and its parameters for volume rendering. Color and gradient feature detectors can be combined either by a weighted sum or multiplicatively (section 2.4). A weighting factor w lets the user choose the relative contribution of color and normalized gradient magnitude feature detectors. In the case of fluorescence data, one would use the same interface, but with either the red or green channel selected according to the fluorescent imaging agent used. In our system, the blue channel is not acquired during fluorescence imaging since the emission band of most fluorophores lies in the green or red parts of the spectrum. Also, the gradient magnitude feature detector and the sigmoidal and power law OTF's are not available for fluorescence data because of its sparse nature. Fig. 2D shows an enhanced volume rendering for the user-defined ROI of the example mouse in Fig. 2C. In this case, we used the red color feature detector and a sigmoidal OTF to visualize the kidneys and surrounding vasculature. User settings for a given visualization “session” can be saved to a file for later recall, allowing one to optimize a rendering for a given experiment and recall it for the next tissue specimen in the experiment.
Fig. 2. A schematic block diagram of UIECVR illustrating 3D region-of-interest (ROI) selection followed by feature enhancement. The user starts from a baseline volume rendering (A) and launches the interactive ROI specification and cropping tool. A 3D ROI (green solid dots) is selected (B) and the whole mouse cryo-image volume is cropped (C). Next, the preferred color feature detection and OTF parameter settings are made to create an enhanced volume rendering. In this example, we used the red feature detector and a sigmoidal OTF to visualize the kidneys and surrounding vasculature (D).
We chose to implement the visualization pipeline described above using the Amira TCL scripting language, which is part of the software package and enables fast prototyping of visualization workflows. We have also implemented a multi-resolution volume rendering feature to enable the visualization of the extremely large (> 70 GB) cryo-image data sets. For multi-resolution volume rendering, we are using “large disk data access,” a proprietary data store from Amira which uses multiple disk files with a single reference file that holds pointers to data, thereby enabling faster and easier access to specific regions in the volume. Routines allow one to access rectangular solid sub-volumes with a voxel skip factor which sets the resolution of the data to be read from disk. Multi-resolution access is quick, especially considering the data size, allowing live time interaction. For example, the user can identify a sub-region containing an organ or tissue of interest within a low-resolution rendering of the entire volume. The sub-region can then be interrogated at a higher resolution in the context of nearby low resolution data.
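Amira's large-disk-data store is proprietary, but the access pattern (a strided read of a bounding box from a file too large for RAM) can be sketched with a NumPy memory map. The file name, volume shape, coordinates, and the skip-to-stride convention below are illustrative assumptions, not the actual Amira format.

```python
import numpy as np

def read_subvolume(path, shape, box, skip):
    """Read a decimated rectangular sub-volume from a large raw uint8 RGB volume.

    path:  flat binary file laid out as (nz, ny, nx, 3) -- a stand-in for
           Amira's proprietary 'large disk data' store, not its actual format.
    box:   ((z0, z1), (y0, y1), (x0, x1)) bounding box in voxel coordinates.
    skip:  voxel skip factor; 0 reads full resolution, as in Fig. 12.
    """
    step = skip + 1
    vol = np.memmap(path, dtype=np.uint8, mode="r", shape=shape)
    (z0, z1), (y0, y1), (x0, x1) = box
    # Only the file regions touched by the strided slice are read from disk,
    # so a low-resolution whole-mouse view and a full-resolution organ view
    # are both fast enough for interactive use.
    return np.array(vol[z0:z1:step, y0:y1:step, x0:x1:step, :])

# Illustrative usage: whole volume decimated, then one organ at full resolution.
# whole = read_subvolume("mouse.raw", (4000, 2500, 1500, 3),
#                        ((0, 4000), (0, 2500), (0, 1500)), skip=8)
# organ = read_subvolume("mouse.raw", (4000, 2500, 1500, 3),
#                        ((1200, 1456), (800, 1056), (400, 656)), skip=0)
```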
2.7 Evaluation of volume renderings
We briefly describe the method adopted for evaluating the quality of volume renderings created by the different feature detectors and OTF's discussed earlier. Rigorously comparing one volume visualization to another created using a different feature detector and OTF is not a straightforward proposition. DLW has many years of experience in quantitative image quality evaluation on 2D and 2D+time images [20–23]. We considered quantitative image quality evaluations by human subjects using techniques such as ROC, forced choice, and the double-stimulus continuous-quality scale, but we felt that anecdotal responses from expert users were more appropriate at this stage of development. After establishing our software, we obtained consensus anecdotal responses from three expert users, and results in the text are based upon their findings.
3. Results
The objective of our experiments was to visualize anatomically interesting organs and tissues from the mouse volume with minimal user intervention. Since each organ has unique color properties, different feature detectors were required to produce visually appealing, optimal volume renderings of organs. We also tested the feasibility of applying stored visualization settings from one mouse volume to the next. Last, we present results from handling very large data sets through the use of a multi-resolution visualization scheme.
The most commonly occurring tissue color in our specimens is deep red (e.g., heart, liver, kidneys, lungs, etc.). First, we conducted experiments in which we evaluated the feasibility of the deep red detector, cR, for segmenting organs from a whole mouse volume. Results were compared to our baseline true-color volume rendering of the whole mouse, where opacity was set to the inverse of grayscale value (Fig. 3A). A threshold of T = 0.6, used with a threshold-based OTF, was found to be adequate to highlight mainly the lungs, liver, kidneys, and some surrounding vasculature, all of which are red (Fig. 3B). Using gradient detectors (Fig. 3D) produced a volume rendering in which internal details (especially in the lungs and kidneys; see orange and green arrows, respectively) are more clearly visible than when color detectors alone are used (Fig. 3B).
Fig. 3. Enhanced true color volume rendering using color and normalized gradient magnitude based opacities on whole mouse data. (A) The baseline volume rendering is obtained by setting opacity equal to the inverse of grayscale value. (B) Volume rendering from the thoracic and abdominal region using a simple threshold OTF after “deep red” detection. (C) Volume rendering from the head and thoracic regions using a sigmoidal OTF after “pale red” detection. (D) and (E) show volume renderings obtained using the same feature detectors as in (B) and (C) but with gradient enhancements. Compared to color detectors, gradient detectors enable better visualization of internal structures of organs, e.g., lung (orange arrows in B and D), kidney (green arrow in B and D), lobes of brain (orange arrows in C and E), and spinal cord (green arrow in C and E).
In the case of the brain and surrounding regions, along with the spinal cord, vascularization imparts an overall "light red" or pinkish coloration. For highlighting such regions, the color feature detector cLIGHTRED was found to be appropriate. Threshold-based OTFs can result in noisy renderings due to the abrupt change in opacity at the threshold value; a sigmoidal OTF with γ = 50 and T = 0.55 was found to be more appropriate for highlighting the brain, spinal cord, eyeballs, and olfactory bulbs, all of which had light red content (Fig. 3C). Further, gradient enhancement was applied to the rendering in Fig. 3C using multiplicative combination [see (21)] to produce the rendering in Fig. 3E. The lobes of the brain and spinal cord (see orange and green arrows, respectively) are more clearly highlighted than in the rendering in Fig. 3C.
By cropping the volume, we can remove clutter and more clearly delineate organs, and some examples follow. First, we note that abdominal tissue is a combination of both deep and pale red. A slab from the abdominal region was used, and the cR and cLIGHTRED feature detectors were both employed along with sigmoidal OTF with γ = 50 and T = 0.6. The two volume renderings were fused to show the structure of one of the kidneys along with the adrenal gland (Fig. 4A). The “slab” was moved to a slightly different location and cR only was applied to visualize kidney and pancreas in great detail (Fig. 4B).
Fig. 4. Full resolution volume rendering from an ROI highlighting the kidney and surrounding regions. (A) A single 2D slice showing the kidney and adrenal gland, which were highlighted by the cR and cLIGHTRED feature detectors along with a sigmoidal OTF to create a volume rendering (inset). (B) High resolution volume rendering within an ROI showing the pancreas along with a cutaway view of the kidney in the adult mouse. In this case, the cR feature detector was employed and a sigmoidal OTF was applied after feature detection.
Similarly, the head has a combination of both deep red (vasculature) and light red (brain) tissue. Starting from a whole mouse (Fig. 5A), a slab that includes the head and spinal cord was chosen; the cR and cLIGHTRED feature detectors were employed along with a sigmoidal OTF (γ = 50, T = 0.6), and the two renderings were fused to create a volume rendering (Fig. 5B) in which the brain, spinal cord, olfactory bulbs, and collecting veins along with surrounding vasculature were highlighted. Last, we applied the cR feature detector in the abdominal region to obtain a high quality volume rendering in which the inferior vena cava and the hepatic venules were clearly highlighted (Fig. 5C).
Fig. 5. High resolution volume rendering using data feature detectors. (A) Low resolution whole mouse volume rendering showing ROI's selected for enhancement (boxes), following which feature detection was performed on high resolution mouse data from the selected ROI's. (B) The cR (deep red) and cLIGHTRED (pale red) feature detectors were both applied along with a sigmoidal OTF in a region that included the brain and spinal cord. In the resulting rendering, one can clearly see the olfactory bulbs, spinal cord, collecting veins, and other vasculature. (C) Abdominal vasculature including hepatic venules, heart, and vena cava. The cR (deep red) feature detector was employed along with a sigmoidal OTF, and no decimation.
We next evaluated the use of gradients on opacity values, which enhances internal edges and surfaces. We combined gradients multiplicatively with the cLIGHTRED feature detector and a sigmoidal OTF with γ = 50 [see (21)] on a slab chosen in a region that includes the head and spinal cord (Fig. 6). In the resulting volume rendering, the boundaries around the brain, brain stem, and spinal cord are clearly enhanced. We next chose the same slab as above and employed a linear combination of data (0.8) and gradient (0.2) prior to OTF mapping (Fig. 7). More anatomical detail was seen in homogeneous structures in the brain, spinal cord, and surrounding tissues (Fig. 7A), and edges were less prominent. When these weights were reversed, edges were highlighted better (Fig. 7B). When data and gradient were combined after OTF mapping [see (20)] with the same weights as in Fig. 7A, a visualization was obtained in which data and gradient features were both highlighted (Fig. 7C).
Fig. 6. A gradient-enhanced volume visualization from a slab chosen from a region that includes the head and spinal cord in the adult mouse clearly highlights the left and right cerebral hemispheres and lobes of the brain, the spinal cord, the eyeballs, and the olfactory bulbs. The gradients were combined multiplicatively with data attributes as per (21).
Fig. 7. Comparison of adult mouse volume renderings obtained from linear combination of data and gradient effects, both prior to and after OTF mapping. (A) A slab in a region that included the head and spinal cord was chosen, and a linear weighting of 0.8·fc,BRAIN + 0.2·fg,GRAY was applied prior to assigning opacities using a sigmoidal OTF with γ = 50. The hemispheres and lobes of the brain, spinal cord, and eyeballs were clearly highlighted in the resulting rendering. (B) The weights for data and gradient were reversed (0.2·fc,BRAIN + 0.8·fg,GRAY), resulting in a higher contribution of gradients toward rendered opacity. As a result, the fissure dividing the two cerebral hemispheres is more clearly visible. (C) An interesting variation mixed data and gradient effects by α-opacity assignment after OTF mapping as in (20), i.e., employing a linear combination of two OTFs, one for fc,BRAIN and the other for fg,GRAY, with weights of 0.8 and 0.2, respectively.
We then compared the sigmoidal OTF with the previously reported power-law OTF (Fig. 8). We set out to enhance vasculature (i.e., deep red tissue) in the mouse volume. The red feature detector (cR) with α-opacity assignment using sigmoidal OTF was quite successful at enhancing these features (Fig. 8A). Although the same red feature detector (cR) was used, renderings created from α-opacity assignment using power law OTF were less successful in delineating vasculature (Fig. 8, B-C).
Fig. 8. Visualization of vasculature using red feature detection and different OTF's. (A) Our red feature detection step followed by α-opacity assignment using a sigmoidal OTF targets only specific features (deep red tissue) for enhancement. (B) Red feature detection followed by a power law OTF (k = 1, γ = 2) does not provide clear delineation between different tissue types due to its non-saturating nature. (C) Tissue delineation can be somewhat improved by reducing γ (e.g., k = 1, γ = 0.5 was used), although this rendering still does not delineate vasculature as clearly as the OTF in A.
We created fused volume renderings of color anatomy and molecular fluorescence image data. In Figure 9, a true-color brightfield volume and a single-channel fluorescence volume mapped to shades of green were fused to show GFP-labeled Lewis lung carcinoma (LLC) cells which had homed to the adrenal gland of an adult mouse. We controlled relative transparencies to reveal varying amounts of color brightfield and fluorescence through live time user interaction, allowing one to easily visualize fluorescently labeled cells within an anatomical context.
Fig. 9. Fusion of anatomical color and molecular fluorescence images showing homing of GFP-labeled cancer cells to the adrenal gland in a model of cancer metastasis. (A) Baseline true-color rendering. (B) Enhanced rendering using the CDGM feature detector. (C) Surface rendering of LLC cancer cells segmented from fluorescence data using region growing. (D) Fusion of brightfield and fluorescence renderings. (E, F) Visualization effects created by changing relative opacities during fusion.
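In our pipeline the compositing and the transparency dials live inside the Amira renderer, so the following per-voxel blend is only an illustration of the dial-in/dial-out idea, with hypothetical dial parameters t_anat and t_fluo.

```python
import numpy as np

def fuse_rgba(anat_rgba, fluo_rgba, t_anat=1.0, t_fluo=1.0):
    """Blend anatomical color and fluorescence RGBA volumes (uint8, (..., 4)).

    t_anat / t_fluo are user-controlled transparency dials in [0, 1]; scaling
    each volume's alpha before blending mimics dialing one modality in or out
    during fused rendering.
    """
    a = anat_rgba.astype(np.float64)
    f = fluo_rgba.astype(np.float64)
    a[..., 3] *= t_anat
    f[..., 3] *= t_fluo
    w = f[..., 3:4] / 255.0                        # fluorescence weight per voxel
    rgb = (1.0 - w) * a[..., :3] + w * f[..., :3]  # fluorescence dominates where labeled
    alpha = np.maximum(a[..., 3], f[..., 3])
    return np.concatenate([rgb, alpha[..., None]], axis=-1).astype(np.uint8)
```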
Our software allows one to store and recall default visualization parameters. We found that, with color brightfield data, one can use default parameters to visualize different data sets. To illustrate the robustness of this approach, we applied the settings from Fig. 3B (cR, OTF = threshold, T = 0.6) to two other adult mouse volumes (Figs. 10B and 10C). As a comparison, Fig. 10A shows the settings of Fig. 3B applied to the original whole mouse volume. We obtained very similar renderings, indicating that our approach was robust. We have repeated this in many other instances and even applied uniform visualization parameters in a recent study of over 20 mouse embryos. We have not yet determined a completely automated method for visualizing a new, unknown data set. However, with a library of presets, an experienced user can get close and then quickly adjust parameters to optimize a new visualization.
Fig. 10. Enhanced true color volume rendering of three different adult mouse volumes using identical settings for feature detector and OTF. (A) The rendering of Fig. 3B obtained with our default test data set is repeated. It uses the red feature detector, threshold OTF, and T = 0.6. (B, C) Stored settings from the visualization session for (A) were applied to two other adult mouse volumes and resulted in very similar renderings, demonstrating the robustness of our approach.
Using an embryonic mouse dataset, we evaluated feature detectors in which gradients were computed directly from color vector data (Fig. 11). Apart from a baseline rendering (Fig. 11A), the CDGM and CDGDP, as given by (14) and (16) respectively, were used for feature detection along with sigmoidal OTF's. By operating on color vector data, these gradient detectors (Fig. 11, B-C) better enhance boundaries between tissue types and changes in tissue orientation, not previously possible with scalar gradients.
Fig. 11. True-color volume renderings from an E13.5 embryonic mouse data set using gradients computed directly from color vector data. (A) Baseline rendering created using the inverse of grayscale value. (B) fg = CDGM in the L*u*v* color space [(13) and (14)] was used for feature detection. Boundaries between internal organs are more clearly visible. (C) fg = CDGDP was used [see (16)]. This clearly shows local texture within sub-regions of the volume and finer details, such as the boundary between the liver and its surround and the umbilical cord, not possible with (B) alone.
Our multi-resolution interface allows one to render the entire mouse on the screen at low resolution, define a bounding box of interest, and create a new rendering of the sub-volume at higher or even the highest available resolution. This process can be repeated to enable one to view a mouse, an organ, a tissue, and finally single cells. In Figure 12, we illustrate this multi-resolution capability by showing renderings from a single >70 GB adult mouse volume at two different resolutions (skip factor for the full volume = 8; skip factor for the lung = 0, i.e., full resolution). Zooming in to a new resolution requires disk access. The wait time for rendering depends upon hardware: rendering is slowest when data are read from a drive over the internet, faster from a local hard drive, and fastest from a local solid-state disk (SSD). With an Intel X25-E SSD, we were able to read and render a 256 × 256 × 256 sub-block of color data, consisting of 64 MB, at a new resolution from the >70 GB mouse data set in about 90 seconds.
Fig. 12. Multi-resolution data access from color cryo-imaging data of the adult mouse. A low resolution volume rendering of the whole mouse is initially produced. The user then specifies a smaller sub-volume of interest (bounding box with blue markers), which in this example is the left lung. A full resolution rendering is created within this region, co-registered and fused with the original volume.
4. Discussion
Cryo-imaging provides a unique opportunity to employ microscopic-resolution, true-color bright field and co-registered fluorescence image data in order to visualize molecular processes within an anatomical context. This enables us to study the biodistribution of stem cells, malformations in specific organs and tissues, nanoparticle-visible drug delivery, imaging agents, gene expression profiles, etc. A key aspect of volume visualization is transfer function design, which has been a topic of significant research within the volume visualization community in the past decade [6–9]. A high degree of user interactivity is desirable in user interfaces for 3D data exploration [8]. Our UIECVR interface has been designed with usability, interactivity, and flexibility in mind. Our main contribution is the suite of carefully designed color and normalized gradient magnitude feature detectors that enable us to quickly highlight tissues of interest in a given volume, save these settings, and then quickly apply them to the next volumes. Our visualization software covers a wide range of choices for feature detectors and OTFs, along with volume editing/cropping and multi-resolution data access options.

Our investigations have specifically revealed that the sigmoidal OTF is most effective in producing volume renderings with smoothly highlighted edges (Figs. 3–7). Further, the parameter γ controlling the behavior of the sigmoid around the threshold value T is crucial, with a larger γ providing sharper delineation. Also, normalized feature detectors have enabled us to define thresholds (T) in a uniform fashion across different data sets (e.g., see Fig. 10). In general, applying feature enhancements on thick slabs of data (~50–100 slices) has proved useful for visualizing structures of interest. 2D multiplanar reformatted slices overlaid on 3D renderings have helped the user home in on a region of interest more quickly and efficiently. Volume fusions generated using our interface (e.g., Fig. 9) have been very useful in obtaining anatomical perspective while simultaneously analyzing molecular markers anywhere in the specimen volume.

One of our major challenges is efficient handling of extremely large data sets. A tiled, dual-modality (color bright field and fluorescence) acquisition of an adult whole mouse using 4×5 microscope acquisitions and 40 µm section thickness generates >70 GB of color image data and >25 GB of fluorescence data, a prohibitively large size for volume rendering on a machine with a conventional, single graphics processor. As a remedy to this extreme data problem, we designed the multi-resolution volume rendering approach (Fig. 12). Multi-resolution rendering greatly improves the visualization experience: one can render a mouse, an organ, a tissue, and then even single cells. We have found this very useful for examining the biodistribution of implanted fluorescent stem cells. Multi-resolution volume rendering enables one to explore large data sets without resorting to time-consuming manual segmentation. There are some hardware considerations. Data access time can be greatly reduced by employing modern solid-state disk (SSD) drives, which generally have higher read and write speeds than conventional hard drives. As for graphics hardware, the larger the amount of graphics RAM, the smaller the decimation that needs to be applied to produce high-resolution renderings of sub-regions within the large data sets.
In conclusion, volumes of color data provide many opportunities for volume rendering not found with gray scale data such as CT or MRI. We have exploited these opportunities to create a platform for multi-resolution, volume visualization of extremely large color and fluorescence image data sets, oftentimes using stored visualization parameters. The platform allows us to quickly recognize anatomy and zoom into particular regions of interest. It has already shown great utility in many of our cryo-imaging studies of mouse phenotyping, stem cells and regenerative medicine, cancer, imaging agents, etc.
Acknowledgment
This investigation was conducted in a facility constructed with support from Research Facilities Improvement Program Grant Number C06 RR12463-01 from the National Center for Research Resources, National Institutes of Health. This research is supported by the Ohio Wright Center of Innovation and Biomedical Research and Technology Transfer award: “The Biomedical Structure, Functional and Molecular Imaging Enterprise,” NIH R42CA124270, and NIH 1R24 CA110943. GS was partially supported by NIH training grant, NIH T32EB007509. Dr. Wilson has interest in BioInVision, Inc., which intends to commercialize cryo-imaging technology.
Biographies
Madhusudhana Gargesha, PhD (Senior Research Associate, Department of Biomedical Engineering, Case Western Reserve University) received his BE in Electronics and Communication Engineering from Bangalore University, Bangalore, India, and his MS and PhD in Electrical Engineering from Arizona State University, Tempe, Arizona, USA. He is currently Senior Research Associate in the Department of Biomedical Engineering at Case Western Reserve University, Cleveland, Ohio, USA. His current interests are in the areas of biomedical image processing and analysis with a particular focus on 3D visualization of small animal imaging data. He has authored 5 peer-reviewed journal publications and over 15 conference papers.
Mohammed Qutaish received his BS degree in Biomedical Engineering from Jordan University of Science and Technology (JUST), Amman, Jordan in 2007. He is currently pursuing his PhD in Biomedical Engineering in the field of molecular imaging at Case Western Reserve University, Cleveland, Ohio, USA.
Debashish Roy, PhD, received his BS in Electrical Engineering from Jadavpur University, India and MS and PhD in Biomedical Engineering from Case Western Reserve University, Cleveland, Ohio, USA. As part of his PhD, completed in 2009, he developed the robotic control system and the image acquisition software for the Case cryo-imaging system. He now works for BioInVision as Senior Project Scientist and his current research interests include image processing, 3D visualization, fluorescence microscopy and cryo-imaging. Dr. Roy also has wide experience in process control and automation, and has occupied leadership positions in several multi-national corporations.
Grant Steyer, PhD, holds BS and PhD degrees in Biomedical Engineering from Case Western Reserve University. During his PhD, completed in 2009, he contributed greatly to the creation of the Case cryo-imaging system, particularly the development of stem cell applications. Dr. Steyer has published numerous abstracts and papers on his work. Currently, he is pursuing a law degree at the Cleveland-Marshall College of Law, Cleveland State University, with the intention of becoming a patent lawyer.
Michiko Watanabe, PhD (Professor of Pediatrics, Anatomy, Genetics, Case Western Reserve University School of Medicine) investigates the mechanisms of cardiovascular development using a range of in vitro and in vivo approaches. She has a long track record of research funded by both non-federal and federal sources and more than 50 peer-reviewed papers in high impact journals. Her expertise in cardiovascular development has been sought by grant review committees and editorial boards. Her current studies probe the relationship between form and function of the cardiovascular system using state of the art biomedical imaging technology and the role of hypoxia induced cellular responses in coronary vessel development.
David L. Wilson, PhD (Robert Herbold Professor of Biomedical Engineering and Radiology, Case Western Reserve University) has interests in biomedical image processing, analysis, and visualization as well as small animal cellular and molecular imaging. He has a significant track record of federal research funding, over 100 refereed journal publications, and 7 patents. Most recently, he has created the Case cryo-imaging system.
References
1. Roy D, Steyer GJ, Gargesha M, Stone ME, Wilson DL. 3D cryo-imaging: a very high-resolution view of the whole mouse. Anatomical Record. 2009 Mar;292(3):342–351. doi:10.1002/ar.20849.
2. Roy D, Breen M, Salvado O, Heinzel M, McKinley E, Wilson D. Imaging system for creating 3D block-face cryo-images of whole mice. Proc. SPIE Medical Imaging 2006: Physiology, Function, and Structure from Medical Images. 2006;6143. doi:10.1117/12.655617.
3. Wilson D, Roy D, Steyer G, Gargesha M, Stone M, McKinley E. Whole mouse cryo-imaging. Proc. SPIE Medical Imaging 2008: Physiology, Function, and Structure from Medical Images. 2008;6916. doi:10.1117/12.772840.
4. Csebfalvi B, Gröller M. Interactive volume rendering based on a 'bubble model'. Proceedings of Graphics Interface 2001. 2001:209–216.
5. Drebin RA. Volume rendering. ACM SIGGRAPH '88, Computer Graphics. 1988;22(4):65–74.
6. Ebert DS, Morris CJ, Rheingans P, Yoo TS. Designing effective transfer functions for volume rendering from photographic volumes. IEEE Transactions on Visualization and Computer Graphics. 2002 Apr;8(2):183–197.
7. Kniss J, Premoze S, Hansen C, Ebert D. Interactive translucent volume rendering and procedural modeling. Proceedings of IEEE Visualization '02; Boston, Massachusetts. 2002:109–116.
8. Kniss J, Kindlmann G, Hansen C. Multidimensional transfer functions for interactive volume rendering. IEEE Transactions on Visualization and Computer Graphics. 2002 Jul;8(3):270–285.
9. Morris CJ, Ebert D. Direct volume rendering of photographic volumes using multi-dimensional color-based transfer functions. Proceedings of the Symposium on Data Visualisation 2002. 2002:115–124.
10. Roerdink JBTM. Multiresolution maximum intensity volume rendering by morphological adjunction pyramids. IEEE Transactions on Image Processing. 2003 Jun;12(6):653–660. doi:10.1109/TIP.2003.812759.
11. Levoy M. Display of surfaces from volume data. IEEE Computer Graphics and Applications. 1988 May;8(3):29–37.
12. Cootes TF, Taylor CJ, Cooper DH, Graham J. Active shape models - their training and application. Computer Vision and Image Understanding. 1995 Jan;61(1):38–59.
13. He T, Hong L, Kaufman A, Pfister H. Generation of transfer functions with stochastic search techniques. Proceedings of IEEE Visualization '96. 1996:227.
14. Stalling D, Hege HC, Zöckler M. Amira: an advanced 3D visualization and modeling system. 2007. http://amira.zib.de.
15. Ackerman MJ. The Visible Human Project. J. Biocommun. 1991;18(2):14.
16. Ackerman MJ. The Visible Human Project: a resource for education. Acad. Med. 1999 Jun;74(6):667–670. doi:10.1097/00001888-199906000-00012.
17. Ackerman MJ, Banvard RA. Imaging outcomes from the National Library of Medicine's Visible Human Project®. Computerized Medical Imaging and Graphics. 2000 May;24(3):125–126. doi:10.1016/s0895-6111(00)00012-4.
18. Spitzer VM, Whitlock DG. The Visible Human Dataset: the anatomical platform for human simulation. Anat. Rec. 1998 Apr;253(2):49–57. doi:10.1002/(SICI)1097-0185(199804)253:2<49::AID-AR8>3.0.CO;2-9.
19. Plataniotis KN, Venetsanopoulos AN. Color spaces. In: Color Image Processing and Applications. 1st ed. Springer; 2000:1–45.
20. Jabri KN, Wilson DL. Quantitative assessment of image quality enhancement due to unsharp-mask processing in x-ray fluoroscopy. Journal of the Optical Society of America A. 2002;19(7):1297–1307. doi:10.1364/josaa.19.001297.
21. Srinivas Y, Wilson DL. Quantitative image quality evaluation of pixel-binning in a flat-panel detector for x-ray fluoroscopy. Medical Physics. 2003 Dec. doi:10.1118/1.1628278.
22. Srinivas Y, Wilson DL. Image quality evaluation of flat panel and image intensifier digital magnification in x-ray fluoroscopy. Med. Phys. 2002;29(7):1611–1621. doi:10.1118/1.1487858.
23. Xue P, Thomas CW, Gilmore GC, Wilson DL. An adaptive reference/test paradigm with applications to pulsed fluoroscopy perception. Behavior Research Methods, Instruments, & Computers. 1998;30(2):332–348.