Author manuscript; available in PMC: 2019 Dec 1.
Published in final edited form as: Ann Biomed Eng. 2018 Aug 15;46(12):2177–2188. doi: 10.1007/s10439-018-02113-z

Simulating Developmental Cardiac Morphology in Virtual Reality using a Deformable Image Registration Approach

Arash Abiri a,b,d,#, Yichen Ding a,b,#, Parinaz Abiri a,b, René R Sevag Packard b, Vijay Vedula e, Alison Marsden e,f,g, C-C Jay Kuo h, Tzung K Hsiai a,b,c
PMCID: PMC6249076  NIHMSID: NIHMS1503965  PMID: 30112710

Abstract

While virtual reality (VR) has potential to enhance cardiovascular diagnosis and treatment, the prerequisite labor-intensive image segmentation remains an obstacle for seamlessly simulating 4-dimensional (4-D, 3-D + time) imaging data in an immersive, physiological VR environment. We applied deformable image registration (DIR) in conjunction with 3-D reconstruction and VR implementation to recapitulate developmental cardiac contractile function from light-sheet fluorescence microscopy (LSFM). This method addressed inconsistencies that would arise from independent segmentations of time-dependent data, thereby enabling the creation of a VR environment that fluently simulates cardiac morphological changes. By analyzing myocardial deformation at high spatiotemporal resolution, we interfaced quantitative computations with 4-D VR. We demonstrated that our LSFM-captured images, followed by DIR, yielded average Dice similarity coefficients of 0.92 ± 0.05 (n = 510) and 0.93 ± 0.06 (n = 240) when compared to ground truth images obtained from Otsu thresholding and manual segmentation, respectively. The resulting VR environment simulates a wide-angle, zoomed-in view of motion in live embryonic zebrafish hearts, in which the cardiac chambers undergo structural deformation throughout the cardiac cycle. Thus, this technique allows for interactive, micro-scale VR visualization of developmental cardiac morphology to enable high-resolution simulation for both basic and clinical science.

Keywords: medical simulation, light-sheet imaging, cardiology, image registration, dynamic imaging, surgical simulation

Introduction

Virtual reality (VR) is changing the 3-dimensional (3-D) simulation platform by implementing user-intuitive interaction in an immersive environment 8,31,48,49. Over the past decade, advances in the integration of VR simulators with surgical training have enhanced image visualization to improve clinical and procedural outcomes 19,53. However, generating a VR environment for the cardiac and pulmonary systems entails implementing the time-dependent changes in organ morphology that occur during cardiac and respiratory cycles, respectively, rendering 4-D (3-D + time) VR visualization a challenge 43,54,59. The high capture rates exceeding 100 frames per second (FPS) needed for imaging contracting embryonic zebrafish hearts (~2 beats per second) further complicate motion extraction at the micro-scale 3,34. Despite being a gold standard, manual segmentation of time-dependent raw images is labor-intensive, and its susceptibility to human error frequently engenders coarse and inaccurate 4-D outputs. Specifically, the surface mesh topologies of the reconstructed 3-D organ systems tend to be inconsistent between consecutive frames from one cardiac cycle to another, rendering quantitative studies of tissue contractile function and cardiac wall stress non-physiological 62.

Although VR technology has been applied in clinical settings such as surgery 22,23,56, the insufficient spatial and temporal resolution of magnetic resonance imaging (MRI) and computed tomography (CT) has limited the uncovering of the physiological mechanisms underlying moving structures in live animal models 58,62. In this context, we sought to address the temporal dependence of structural deformations to recapitulate the dynamics of developmental cardiac physiology. Intensity-based deformable image registration (DIR) enables us to extract moving regions of interest (ROI) for 4-D image visualization 3,24,57, allowing accurate tracking of the intricate displacements and deformations in the contracting heart 7,24,63,64. Therefore, to dynamically interrogate biophysical and biochemical events in the 4-D domain, we applied DIR to link light-sheet fluorescence microscopy (LSFM) images with VR. Unlike VR scenarios commonly generated by computer graphics or multiple cameras 20,27,28,65, the advent of LSFM 26,30,52 allows for the capture of authentic 4-D physiological events in zebrafish embryos, bypassing the wearable devices deployed in motion capture systems. In comparison to other frame-by-frame segmentation methods 6,35-37,44,51, DIR also takes advantage of the correspondence between adjacent frames in 4-D image stacks, enabling effective and accurate vertex correspondence after LSFM image acquisition. Thus, similar to other physics-based approaches 38,41,45, DIR enables accurate tracking of cardiac motion without the banding artifacts frequently observed in surfaces reconstructed through independent segmentation of serial tomographic images. This is of particular importance, as many cardiovascular diagnostic procedures rely on the accurate tracking and deformation analysis of nonrigid, moving atria and ventricles 45.

To demonstrate the application of DIR for visualizing the imaging data in VR, we captured the developing heart in a live zebrafish embryo at 100 FPS via our in-house LSFM system 11,15,32,47. We performed sequential registration on the 4-D LSFM-acquired cardiac images, from ventricular relaxation (diastolic frames) to contraction (systolic frames). Using the Dice similarity coefficient (DSC), we demonstrated a high degree of spatial overlap between the registration outputs and their corresponding ground truth images, validating the accuracy of the DIR-processed outcomes. Next, we applied 3-D reconstruction tools and a VR development environment to generate an immersive VR scene that closely simulated the dynamic morphological and topological changes of developing hearts undergoing structural deformation throughout the cardiac cycle. Furthermore, we utilized DIR to compute and visualize instantaneous myocardial deformation at end-diastole and end-systole. As such, our novel pipeline enables micro-scale investigation of in vivo embryonic zebrafish hearts at high temporal resolution by integrating non-linear image registration with VR-LSFM. Thus, we marshaled a novel DIR-based VR process to recreate a computed model of a 4-D contracting heart and its physiological function.

Materials and Methods

Generation of a VR environment for LSFM-acquired 4-D cardiac imaging data

We illustrated the fundamental steps of constructing a VR interface from authentic dynamic imaging data.

  1. We captured 4-D images of the contracting heart in live zebrafish embryos and synchronized the inconsistent periodicity of cardiac cycles 32. This dataset comprised 50 image stacks; each stack represented a single time frame and consisted of 86 images of 512 × 512 pixels (Fig. 1A).

  2. We manually segmented the first stack of raw data from the 50 image stacks to label the region of interest (ROI), serving as the first segmented reference image stack for the contracting heart (Fig. 1B).

  3. Using the first image stack as an initial reference, we applied the DIR algorithm to the remaining 49 stacks of raw data to obtain their segmented counterparts (Fig. 1C) and propagated the prior segmentation results through the subsequent registrations. During this process, we also validated the accuracy of the DIR-segmented images against ground truth images that were obtained from two different segmentation techniques: Otsu thresholding and manual segmentation.

  4. We reconstructed a 3-D model from each individually segmented image stack using the voxel size of 0.65 × 0.65 × 2 μm and exported the 3-D object as an obj file for further processing (Fig. 1D).

  5. We edited the polygon mesh of each 3-D model while maintaining the authenticity of the anatomical structure, and we exported each 3-D object as an fbx file for VR integration (Fig. 1E).

  6. We generated an environment for dynamic visualization of all 50 models in a VR scene, and finalized the entire cardiac cycle in Unity with an educational license (Fig. 1F).

Fig. 1. A flow chart to depict the processes of visualizing image data in VR.


(A) 3-D image stacks were collected from the imaging system and stored as tiff files. (B) The first 3-D image stack was manually segmented to serve as the primary segmented reference image. (C) DIR was applied to the subsequent 49 3-D images to generate their corresponding segmented components. (D) The segmented 3-D images were loaded into Amira for reconstruction into editable 3-D models. (E) The 3-D model meshes were rendered and edited in Autodesk 3DS-Max and subsequently exported as 3-D objects. (F) The 3-D objects were imported into a Unity scene to provide an immersive VR simulation.

Data acquisition and manual segmentation

Raw data from a single embryonic zebrafish heart were acquired with an in-house LSFM system as previously described 11,15,32,47. The entire cardiac cycle was captured at 100 FPS, amounting to 50 image stacks. With each stack consisting of 86 tiff image slices (512 × 512 pixels), a total of 4300 2-D images were collected. We manually segmented the first of the 50 raw image stacks using Amira 6.1 (FEI Visualization Sciences Group). Scalar images were segmented by global thresholding based on image intensity, followed by further manual segmentation of the epicardium (outer surface of the heart) and endocardium (inner surface of the heart).

Deformable image registration

We performed DIR on the remaining 49 stacks of raw images, amounting to 4214 2-D images, in MATLAB (MathWorks, Inc.) using functions from the base application package and the Image Processing Toolbox. We converted each stack of 2-D images into a 3-D image matrix prior to performing image registration. For each DIR iteration, we used the 3-D matrices from two temporally consecutive frames. To correct illumination differences between matrices, we used imhistmatch to match the intensity histogram of the current iteration's matrix (defined as the moving image) with that of the previous iteration's matrix (defined as the reference image).
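As a concrete illustration, a minimal MATLAB sketch of this preprocessing step follows. The file names are illustrative, and the use of imhistmatchn (the N-D counterpart of imhistmatch) on the assembled volumes is our assumption rather than a prescription from the original pipeline:

```matlab
% Load two consecutive time frames (86 slices of 512x512 pixels each)
% into 3-D matrices; file names are illustrative.
refVol = zeros(512, 512, 86, 'uint8');   % frame N-1 (reference)
movVol = zeros(512, 512, 86, 'uint8');   % frame N (moving)
for s = 1:86
    refVol(:, :, s) = imread('frame049.tif', s);  % page s of a multi-page tiff
    movVol(:, :, s) = imread('frame050.tif', s);
end
% Match the moving volume's intensity histogram to the reference volume
% to correct frame-to-frame illumination differences.
movVol = imhistmatchn(movVol, refVol, 256);       % 256 bins for 8-bit data
```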

To calculate the 3-D displacement field between the reference and moving images, we applied the demons 3-D non-rigid registration algorithm, named by analogy with Maxwell's demons, in a 3-level pyramid implementation (imregdemons). The demons algorithm estimates displacement vectors by mapping each pixel location in the reference image to a corresponding location in the moving image 57. Each pyramid level decreased the resolution of the image by a factor of 2. We performed 500, 400, and 300 iterations, respectively, for the high- (level 1), medium- (level 2), and low-resolution (level 3) pyramid levels. We initiated the algorithm with the images at the 3rd level and an initial displacement field of zero. The displacement field was iteratively calculated for this level's images until the iteration limit was reached. Next, the computed displacement field at the 3rd level was rescaled to the images at the 2nd level in the pyramid and used as the initial displacement field there. An accumulated field smoothing factor of 1.3 was used to regularize the displacement field updates and decrease errors in subsequent computations. This process was repeated until we obtained the displacement field for the 1st-level images. Finally, we used an inverse mapping algorithm (imwarp) in conjunction with the computed 3-D displacement field to perform a 3-D geometric transformation of the 3-D segmented reference image, generating the 3-D segmented moving image.
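A minimal sketch of this registration step is shown below, assuming the histogram-matched volumes refVol and movVol from the preceding sketch and a segmented reference volume segRef (all names are ours). Because the field is applied to the segmented reference, the reference frame plays the role of imregdemons's "moving" input; this mapping of the paper's description onto MATLAB's API, and the low-to-high ordering of the iteration vector, are our assumptions:

```matlab
% Iterations per pyramid level; imregdemons expects the vector ordered
% from the lowest-resolution level (level 3) to the full-resolution
% level (level 1): 300, 400, and 500 iterations, respectively.
iters = [300 400 500];
[D, refWarped] = imregdemons(refVol, movVol, iters, ...
    'PyramidLevels', 3, ...
    'AccumulatedFieldSmoothing', 1.3);   % field smoothing factor of 1.3
% D is a 512x512x86x3 displacement field mapping the reference frame
% toward the moving frame; refWarped is the transformed reference
% (cf. Fig. 3E). Applying the same field to the segmented reference
% yields the segmented counterpart of the moving frame.
segMov = imwarp(segRef, D);
```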

During the first DIR application, the segmented reference image was the manually segmented 3-D image of the first raw image stack (Fig. 2, Step A). In each subsequent iteration, the segmented reference image was the calculated segmented moving image from the previous round of registration (Fig. 2, Step B). In this manner, we applied the DIR algorithm repeatedly until we had segmented all of the raw 3-D images. We then converted these 3-D images back to 2-D segmented image slices for 3-D reconstruction. Finally, we validated the accuracy of the recurrent segmentation results by comparing the segmented images against ground truth images.
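Putting the pieces together, a sketch of the sequential propagation over the full data set might look as follows; loadRawStack, loadSegmentedStack, and saveSegmentedStack are hypothetical helpers standing in for the I/O described above:

```matlab
segVol = loadSegmentedStack(1);              % manually segmented first frame (hypothetical helper)
refVol = loadRawStack(1);                    % hypothetical helper
for k = 2:50
    movVol = imhistmatchn(loadRawStack(k), refVol, 256);
    D = imregdemons(refVol, movVol, [300 400 500], ...
        'PyramidLevels', 3, 'AccumulatedFieldSmoothing', 1.3);
    segVol = imwarp(segVol, D);              % propagate the segmentation forward
    saveSegmentedStack(k, segVol);           % hypothetical helper; validated against ground truth
    refVol = movVol;                         % current frame becomes the next reference
end
```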

Fig. 2. The workflow for segmenting time-dependent imaging data with DIR.


(A) The first stack of raw data was manually segmented. (B) The Nth and (N−1)th raw image stacks were used in DIR to compute a displacement field. The displacement field was applied to the segmented (N−1)th image stack to generate the segmented Nth image stack, which, upon validation against the ground truth Nth image stack, was saved and passed on to the next iteration of DIR, for a total of 49 loops.

Calculation of the Dice similarity coefficient

Prior to determining the DSC, we normalized the 8-bit monochromatic images obtained from the DIR step into binary images. We used the DSC as the main validation metric to assess the spatial overlap between segmented 2-D slices or 3-D stacks. For a pair of 2-D images, we defined the DSC as

DSC(A, B) = 2(A ∩ B) / (A + B)    (1)

where ∩ is the intersection, and A and B are the target regions of the two segmentations.

For a pair of 3-D images, we defined the DSC as

DSC_V(A, B) = 2 Σ_{s=1}^{n} (A_s ∩ B_s) / Σ_{s=1}^{n} (A_s + B_s)    (2)

where ∩ is the intersection, n is the number of 2-D image slices, and A_s and B_s are the target regions of the corresponding 2-D image slices within the two segmented 3-D images.
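In MATLAB, both definitions reduce to counts of foreground voxels; a minimal sketch, in which the binarization step and variable names are ours:

```matlab
% A and B: two segmented 8-bit volumes of identical size.
Abw = imbinarize(A);    % normalize the 8-bit DIR outputs to binary
Bbw = imbinarize(B);
% 2-D DSC of Eq. (1) for a single slice s:
s = 43;
dsc2d = 2 * nnz(Abw(:,:,s) & Bbw(:,:,s)) ...
          / (nnz(Abw(:,:,s)) + nnz(Bbw(:,:,s)));
% Volumetric DSC of Eq. (2): the sums over slices collapse into
% counts over the whole volume.
dsc3d = 2 * nnz(Abw & Bbw) / (nnz(Abw) + nnz(Bbw));
```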

3-D reconstruction, volume rendering and modification

After obtaining 50 stacks of segmented data, we reconstructed the 3-D object from each image stack based on a 0.65 × 0.65 × 2 μm voxel size in Amira 6.1. We labeled the ROIs to generate a polygon mesh for the volume. Next, we compressed this 3-D surface by reducing the number of vertices and faces, and subsequently exported it from Amira as an obj file.

We smoothed and scaled down the obj models in Autodesk 3DS-Max (Autodesk Inc) using an educational-purpose license. After modification, we exported these 3-D models separately as fbx files. Technically, the aforementioned steps in Autodesk 3DS-Max are optional and were incorporated to demonstrate the potential for volume optimization prior to VR integration.

VR application development in Unity

We used Google Cardboard (Google Inc.) as the VR viewer and an educational-purpose license of Unity 5.5 (Unity Technologies) as the development engine. Before creating a new project, we installed the Android SDK on the computer and integrated the Google VR Software Development Kit (SDK) v1.20 into Unity. In a new Unity 3-D project, we imported the fbx files and replaced the default main camera of the scene with the CardboardMain camera to edit all of the GameObjects in VR mode. We then implemented interactive elements in the Unity scene to strengthen the immersive experience.

Visualization of instantaneous myocardial deformations in VR

Based on the aforementioned procedures, we performed DIR on the 50 stacks of raw imaging data to obtain 49 3-D vector fields representing the myocardial deformations of the zebrafish heart during a cardiac cycle. For each 3-D stack, the magnitudes of tissue displacement were normalized to values ranging from 0 to 255, compatible with the intensity range of an 8-bit grayscale image. The resulting 3-D matrix of displacement magnitudes was projected onto its corresponding segmented 3-D stack to yield a 3-D grayscale image whose intensity values represented the relative displacement. This 3-D grayscale image was subsequently loaded into ParaView (http://www.paraview.org), an open-source visualization application, to generate a volume rendering of the displacement data. A 0.65 × 0.65 × 2 μm voxel size was used for 3-D reconstruction. In conjunction with the Visualization Toolkit (Kitware Inc), ParaView served as a VR platform for visualizing color-mapped volume renderings of deformations in the zebrafish heart.
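A sketch of this mapping in MATLAB, assuming D is the 512 × 512 × 86 × 3 displacement field from imregdemons and segVol the corresponding segmented stack (names and the output file are illustrative):

```matlab
mag = sqrt(sum(D.^2, 4));                    % per-voxel displacement magnitude
% Normalize magnitudes to the 0-255 range of an 8-bit grayscale image.
mag8 = uint8(255 * (mag - min(mag(:))) / (max(mag(:)) - min(mag(:))));
mag8(~imbinarize(segVol)) = 0;               % project onto the segmented heart
% Export as a multi-page tiff for volume rendering in ParaView.
for s = 1:size(mag8, 3)
    imwrite(mag8(:, :, s), 'displacement.tif', 'WriteMode', 'append');
end
```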

Statistics

We expressed all values as mean ± standard deviation. For statistical comparisons of the means between two normally distributed data sets from the same experimental conditions, we performed a paired two-tailed t-test. To compare the means between two data sets from the same experimental conditions where normal distribution assumptions were not met, we performed a paired two-tailed Wilcoxon signed-rank test. We compared variances using a Bartlett test. We validated the data sets for normal distribution using a Lilliefors test, which provides a normality assessment based on the Kolmogorov-Smirnov test under the null hypothesis that the data are from a normally distributed population 10. A p-value < 0.05 was considered statistically significant.
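A sketch of this decision flow in MATLAB (Statistics and Machine Learning Toolbox), where outerDSC and innerDSC stand for the paired 1-D arrays of per-stack mean DSC values defined in the Results (names are ours):

```matlab
% Lilliefors test: h = 0 means normality is not rejected at alpha = 0.05.
hOuter = lillietest(outerDSC);
hInner = lillietest(innerDSC);
if ~hOuter && ~hInner
    [~, p] = ttest(outerDSC, innerDSC);      % paired two-tailed t-test
else
    p = signrank(outerDSC, innerDSC);        % paired Wilcoxon signed-rank test
end
% Bartlett test for equality of variances between the two groups.
x   = [outerDSC(:); innerDSC(:)];
grp = [zeros(numel(outerDSC), 1); ones(numel(innerDSC), 1)];
pVar = vartestn(x, grp, 'TestType', 'Bartlett', 'Display', 'off');
```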

Study approval

Zebrafish experimentation was performed in compliance with the UCLA Institutional Animal Care and Use Committee (IACUC) protocol (ARC no. 2015-055).

Results

Deformable image registration

To establish a gold standard without labor-intensive segmentation of every frame, we generated a manually labeled ROI for only the first 3-D image stack. The result served as the segmented component of the initial reference data for the remaining series of unsupervised deformable image registration processes used to generate the 3-D segmented image stacks. In each DIR iteration, there were a 3-D stack of 2-D reference images (Fig. 3A) and a 3-D stack of 2-D moving images (Fig. 3B), whose contrasting intensities represented their morphological differences (Fig. 3C). Traditionally, DIR is used to estimate a displacement field that aligns the moving image with the reference image 5,9,17. Here, we utilized DIR to compute a 3-D displacement field (Fig. 3D) that aligns the reference image with the moving image, thereby creating a transformed reference image (Fig. 3E). This image closely resembled the moving image (Fig. 3F). The same displacement field was then applied to the segmented reference image (Fig. 3G) to generate the segmented moving image (Fig. 3H). The qualitative differences between the raw reference and moving images (Fig. 3C) were well-maintained in the segmented reference and moving images (Fig. 3I).

Fig. 3. Results following deformable image registration.


(A) A reference 3-D image stack. (B) A moving 3-D image stack. (C) The difference in intensity between the reference and moving images. (D) 3-D displacement field (green arrows) on a reference image. (E) The transformed reference image. (F) The difference in intensity between the transformed reference and original moving images. (G) The 3-D displacement field (green arrows) on segmented reference image. (H) The transformed segmented reference image. (I) The difference in intensity between the segmented and transformed reference images. (C, F, I) The green coloration indicates a higher intensity in the reference image, and purple indicates a higher intensity in the moving image. The grayscale areas represent nearly equal values before and after registration. Red dotted lines outline the regions with the most significant cardiac wall movement resulting from ventricular contraction.

Validation of DIR Output

We performed two methods of validation to determine the accuracy of our DIR-based technique for generating an authentic VR simulation of the contracting zebrafish heart. In both, we measured accuracy by comparing processed images against ground truth images using the DSC, a similarity metric that quantifies the degree of spatial overlap between image pairs 66. DSC values range from 0 to 1, where 0 indicates no overlap and 1 indicates complete overlap between two images.

First, we assessed the accuracy of DIR in computing a 3-D displacement field by comparing the binarized components of transformed reference images with the binarized components of their corresponding moving images. These images were binarized using Otsu’s cluster-based intensity thresholding method, which computed image-specific global histogram thresholds to maximize the interclass variance of each image’s thresholded black and white pixels 46. To maintain the integrity of our DSC calculations 29, we adjusted the intensity histograms of reference and moving images to match one another prior to performing thresholding. Since DSC image pairs consisted of slices at the same z-position, the effect of image noise on producing differences in threshold-based segmentation was considered negligible in impacting the accuracy of the computed DSC values.
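A sketch of this ground-truth generation, with histogram matching applied first and image-specific Otsu thresholds computed per 2-D slice (variable names illustrative):

```matlab
movMatched = imhistmatchn(movVol, refVol, 256);  % match histograms first
refBW = false(size(refVol));
movBW = false(size(movVol));
for s = 1:size(refVol, 3)
    % Image-specific global Otsu threshold for each 2-D slice.
    refBW(:, :, s) = imbinarize(refVol(:, :, s), graythresh(refVol(:, :, s)));
    movBW(:, :, s) = imbinarize(movMatched(:, :, s), graythresh(movMatched(:, :, s)));
end
```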

We used 17 2-D image slices, evenly spaced along the z-axis, from each of 30 3-D image stacks to compute an average DSC of 0.92 ± 0.05 (n = 510), indicating a high degree of spatial overlap. To visualize the variability in DSC across the 3-D image stacks, we plotted the DSC values for the 17 slices from each of these stacks (Fig. 4A). A greater variation in DSC values was observed for the outer slices than for the center slices. To assess the differences in means and variances between these slices, we first reorganized them into two sets of data: the outer slices, representing DSC values from the rostral and caudal ends of the zebrafish heart (slices 0-20 and 70-85), and the inner slices, from the middle of the heart (slices 25-65). The average DSC values for the outer and inner slices were 0.89 ± 0.07 and 0.93 ± 0.03, respectively. Next, we averaged and organized these data sets into two 1-D arrays, in which each item represented the average DSC of the slices for its respective 3-D image stack. We performed a Lilliefors test on the outer and inner arrays, yielding p-values of 0.42 and 0.27, respectively, indicating that both arrays were normally distributed. A paired two-tailed t-test then demonstrated a significant difference between the two arrays (p < 0.001), and a Bartlett test further demonstrated a significant difference between the variances of the two data sets (p < 9.50 × 10⁻³).

Fig. 4. Dice similarity coefficients for processed images validated against Otsu thresholded (A-B) and manually segmented (C-D) ground truth images.


(A) A box plot of DSC values (n = 510) for 17 evenly-spaced, processed 2-D image slices from 30 time frames demonstrates a lower mean and higher variance among the outer slices as compared to the inner slices. (B) A linear regression analysis (blue line) of volumetric DSC measurements (n = 30) shows a negative correlation between DSC and the number of DIR iterations. (C) A box plot of DSC values (n = 240) for 8 evenly-spaced, segmented 2-D image slices from 30 time frames demonstrates a lower mean and higher variance among the outer slices as compared to the inner slices. (D) A linear regression analysis (blue line) of mean DSC measurements (n = 30) shows a negative correlation between DSC and time frame. (B, D) The uncertainties of the linear regression relationships are expressed by the 95% confidence intervals (blue bands) and 95% prediction intervals (gray bands).

In light of our segmentation-dependent propagation of antecedent DIR results, errors from previous displacement field computations could accumulate. To model these errors, we calculated DSC values using the binarized 3-D moving and transformed reference images generated in the previous step. We plotted the DSC values for the 30 image stacks to produce a trend line (Fig. 4B). A negative linear trend developed in the volumetric DSC as the number of DIR cycles increased. Even after 49 DIR iterations, however, the DSC was estimated to be approximately 0.80, indicating high accuracy.
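The trend line and uncertainty bands of Fig. 4B can be reproduced with an ordinary least-squares fit; a sketch, where dscPerStack stands for the 1-D array of volumetric DSC values (name illustrative):

```matlab
frame = (1:numel(dscPerStack))';
mdl = fitlm(frame, dscPerStack);                 % linear regression of DSC vs. DIR iteration
xq = (1:49)';
[yfit, ciConf] = predict(mdl, xq, 'Prediction', 'curve');        % 95% confidence band
[~,    ciPred] = predict(mdl, xq, 'Prediction', 'observation');  % 95% prediction band
dscAt49 = yfit(end);   % extrapolated DSC after 49 DIR iterations (~0.80)
```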

In our second validation step, we evaluated the accuracy of our DIR-segmented images by comparing them against images that were manually segmented by an expert. We used 8 2-D image slices, evenly spaced along the z-axis, from each of 30 3-D image stacks to compute an average DSC of 0.93 ± 0.06 (n = 240), indicating a high degree of spatial overlap. To visualize the variability in DSC across the 3-D image stacks, we plotted the DSC values for the 8 slices from each of these stacks (Fig. 4C). Following the same workflow as the first validation step, we found average DSC values for the outer and inner slices of 0.89 ± 0.08 and 0.96 ± 0.01, respectively. We also averaged these data sets into two 1-D arrays, on which a Lilliefors test yielded p-values of 0.087 and 0.035 for the outer and inner arrays, respectively, indicating that only the outer array was normally distributed. We therefore performed a paired two-tailed Wilcoxon signed-rank test, which demonstrated a significant difference between the two arrays (p < 0.001). A Bartlett test also demonstrated a significant difference between the variances of the two data sets (p < 0.001).

To model the growing segmentation errors caused by accumulating inaccuracies from previous segmentation operations, we averaged the DSC values from the 30 sets of 8 evenly-spaced 2-D image slices to generate a 1-D array of DSCs, where each value represented the average DSC for its respective 3-D image stack, and hence time frame. We plotted the DSC values for the 30 image stacks to produce a trend line (Fig. 4D). A negative linear trend developed in the DSC as the number of DIR cycles increased. After 49 DIR iterations, the DSC was estimated to be approximately 0.84.

Results from both validation methods indicated a significantly lower average and a greater variance among the DSC values from the outer slices as compared to the inner slices. This difference in accuracy can be attributed to increased background noise in the LSFM-acquired anterior and posterior image slices 18,60. Nonetheless, both validation techniques demonstrated that our DIR technique could not only achieve accurate results in the presence of complicated cardiac anatomy, but also enhance segmentation efficiency by a factor of 49.

Utilization of DIR for 4-D VR reproduction and analysis of cardiac contractile function

The ability to visualize dynamic imaging data in 4-D VR is limited by the need to accurately extract high-speed, micro-scale deformations in organ morphology. By using the DIR-segmented image stacks, we were able to reconstruct 50 3-D digital embryonic hearts and to seamlessly simulate the dynamics of zebrafish cardiac structures throughout the cardiac cycle (systole + diastole) in 4-D VR (Figure S1). Ten representative time points during a cardiac cycle were visualized by contouring the epicardium with dashed lines (Fig. 5A-J). We selected Fig. 5A (early diastole) as the baseline and contoured its epicardium with white dashed lines. We then demarcated the cardiac morphological changes with red dashed lines, using the white dashed lines as a reference (Fig. 5B-J). We further visualized 3-D projections of the instantaneous myocardial deformation at end-diastole and end-systole to enable evaluation of local cardiac contractile function (Fig. 5K-L). Through user-directed volume slicing, the atrioventricular valve was isolated, and its morphology and local deformation at these two instances in the cardiac cycle were also visualized in VR. As such, the immersive and interactive nature of VR enabled a unique perspective on zebrafish cardiac morphology. Thus, the integration of DIR with 4-D VR reproduction provides promise for coupling quantitative computations with 4-D VR applications to uncover the dynamics of cardiac physiology at the micro-scale.

Fig. 5. VR visualization of contracting embryonic zebrafish heart and localized myocardial deformation.


(A-J) Images of the contracting heart at evenly-spaced times in the cardiac cycle. White dashed lines represent the contour of the baseline heart model. Red dashed lines represent the contour of the dilating or contracting heart. (K) Volume rendering of the zebrafish cardiac epicardial deformations at end-diastole. The inset demonstrates localized deformation at the atrioventricular valve. (L) Volume rendering of the zebrafish cardiac epicardial deformations at end-systole. The inset demonstrates localized deformation at the atrioventricular valve. Deformations in (K-L) are represented by displacement magnitudes mapped to colors designated by the corresponding color bars.

Discussion

We demonstrated a novel pipeline to recapitulate light sheet-acquired embryonic hearts at the micro-scale in a 4-D VR environment. Based on only one stack of manually segmented data as the primary reference, we utilized DIR to automatically segment the 49 other 3-D image stacks (4214 2-D images), resulting in a nearly 50-fold improvement in segmentation efficiency compared to traditional manual segmentation. We established an average DSC value of 0.92 ± 0.05, supporting a high degree of spatial overlap with the ground truth images. The segmented image stacks were reconstructed into 3-D models, followed by mesh rendering and subsequent import into a Unity scene for VR environment deployment. By integrating DIR with 4-D reconstruction and VR visualization tools, we reconstructed the developing zebrafish heart for a 4-D VR experience in developmental cardiac physiology for precision training. Moreover, by visualizing instantaneous myocardial deformation at end-diastole and end-systole, we were able to observe cardiac contractile function at high spatiotemporal resolution.

Unlike our previous reports 11,12,15,32, we have developed a resolution-enhancement method 16 in conjunction with the retrospective synchronization algorithm for 4-D volumetric imaging of zebrafish embryos over a large field-of-view. Our novel pipeline links a deformable 3-D model architecture with authentic dynamic LSFM data, forming the basis for a 4-D VR simulation of a contracting embryonic zebrafish heart. By uncovering the transformation of each voxel between reference and moving images to compute instantaneous myocardial deformation, this pipeline further establishes a basis for linking quantitative computations with 4-D VR applications. Dynamic morphological and topological changes of the inner and outer structures of tissues and organs are captured for complete visualization. Interactions such as slicing, rotating, and scaling of models further provide users with unique perspectives, and the rate of the 4-D simulation is adjustable to offer new outlooks on dynamic biological processes for medical training and surgical planning. Virtual reality in conjunction with haptic feedback technologies has been investigated as a potential platform for providing realistic surgical simulations that replicate operative experiences 1,33,42. However, current efforts have been limited to generating such environments from computer-generated 3-D models. In this context, combining our pipeline with a haptic feedback system shows potential to revolutionize medical training and surgical planning.

In light of the segmentation process, errors in displacement field calculations accumulated over subsequent DIR iterations. While the DSC values of the processed 3-D image stacks remained close to 1 for the later time frames, errors are likely to accumulate in situations involving much greater numbers of image stacks. At the expense of processing speed, these calculation errors may be reduced by increasing the number of optimal transformation estimates per pyramid level. This reduction in speed can be offset by implementing demons image registration on a GPU using the compute unified device architecture (CUDA) programming environment 21. In addition, B-spline registration has recently been recognized as an alternative to the demons image registration algorithm for its relatively high accuracy and reproducibility 55, although it has been reported to be nearly twice as slow as demons registration 2. Nonetheless, our robust framework supports the integration of B-spline registration for applications such as accurately monitoring respiratory variations associated with sub-diaphragmatic tumors 4,5,39. Ultimately, both of these algorithms are limited in their capacity to accurately calculate displacement fields, resulting in an inevitable accumulation of errors. For processing a large number of image stacks, we may need to resort to multiple manually segmented, intermediate image stacks to reset the accumulated calculation errors. Despite the aforementioned errors, our technique remains a viable means of substantially reducing labor-intensive manual segmentation.

Further analysis of the DSC values in the stacks of 2-D image slices revealed some variation and inaccuracy in our segmentation as a result of increased background noise in both the upper and lower image slices along the z-axis. High image noise is recognized to hinder registration accuracy 25. For this reason, the low signal-to-noise ratio in the early and late slices along the z-axis of the LSFM-captured 3-D images likely accounted for a decrease in the overall accuracy and reproducibility of the calculated displacement fields. These noise levels may be reduced by applying Gaussian low-pass and smoothing filters 25,50. In addition, our technique is readily adaptable to other imaging modalities, including CT, MRI, photoacoustic tomography (PAT), and optical coherence tomography (OCT) 13,14,40,61. Thus, our framework is designed to accept raw data from various imaging systems as input for developing a 4-D VR environment.

Supplementary Material

1
Download video file (7MB, mov)

Acknowledgements

This work was supported by the National Institutes of Health (5R01HL083015-10, 1R01HL118650, 1R01HL129727, 7R01HL111437) and the American Heart Association (Scientist Development Grant 16SDG30910007).

Footnotes

Supplementary Material

See supplementary material for a video of the 4-D VR simulation of an entire cardiac cycle in the studied contracting embryonic zebrafish heart.

Competing Financial Interests

The authors declare no competing financial interests.

References

1. Abiri A, Tao A, LaRocca M, Guan X, Askari S, Bisley J, Dutson E, and Grundfest W. Visual-perceptual mismatch in robotic surgery. Surg. Endosc., 2016.
2. Araki T, Ikeda N, Dey N, Chakraborty S, Saba L, Kumar D, Godia E, Jiang X, Gupta A, Radeva P, Laird J, Nicolaides A, and Suri J. A comparative approach of four different image registration techniques for quantitative assessment of coronary artery calcium lesions using intravascular ultrasound. Comput. Methods Programs Biomed. 118:158–172, 2015.
3. Brock K, Sharpe M, Dawson L, Kim S, and Jaffray D. Accuracy of finite element model-based multi-organ deformable image registration. Med. Phys. 32:1647–1659, 2005.
4. Brock KK. Results of a Multi-Institution Deformable Registration Accuracy Study (MIDRAS). Int. J. Radiat. Oncol. Biol. Phys. 76:583–596, 2010.
5. Brock KK, Dawson LA, Sharpe MB, Moseley DJ, and Jaffray DA. Feasibility of a novel deformable image registration technique to facilitate classification, targeting, and monitoring of tumor and normal tissue. Int. J. Radiat. Oncol. Biol. Phys. 64:1245–1254, 2006.
6. Bronstein AM, Bronstein MM, Kimmel R, Mahmoudi M, and Sapiro G. A Gromov-Hausdorff framework with diffusion geometry for topologically-robust non-rigid shape matching. Int. J. Comp. Vis. 89:266–286, 2010.
7. Brown L. A survey of image registration techniques. ACM Comput. Surv. 24:325–376, 1992.
8. Chan S, Conti F, Salisbury K, and Blevins N. Virtual Reality Simulation in Neurosurgery: Technologies and Evolution. Neurosurgery 72:A154–A164, 2013.
9. Cuchet E, Knoplioch J, Dormont D, and Marsault C. Registration in neurosurgery and neuroradiotherapy applications. J. Image Guid. Surg. 1:198–207, 1995.
10. Dallal G. An analytic approximation to the distribution of Lilliefors's test statistic for normality. Am. Stat. 40:294–296, 1986.
11. Ding Y, Abiri A, Abiri P, Li S, Chang C-C, Baek KI, Hsu JJ, Sideris E, Li Y, Lee J, Segura T, Nguyen TP, Bui A, Sevag Packard RR, Fei P, and Hsiai TK. Integrating light-sheet imaging with virtual reality to recapitulate developmental cardiac mechanics. JCI Insight 2:, 2017.
12. Ding Y, Lee J, Hsu JJ, Chang CC, Baek KI, Ranjbarvaziri S, Ardehali R, Packard RRS, and Hsiai TK. Light-Sheet Imaging to Elucidate Cardiovascular Injury and Repair. Curr. Cardiol. Rep. 20:, 2018.
13. Ding Y, Xie H, Peng T, Lu Y, Jin D, Teng J, Ren Q, and Xi P. Laser oblique scanning optical microscopy (LOSOM) for phase relief imaging. Opt. Express 20:14100–14108, 2012.
14. Ding Y, Zhang M, Lang J, Leng J, Ren Q, Yang J, and Li C. In vivo study of endometriosis in mice by photoacoustic microscopy. J. Biophotonics 8:94–101, 2015.
15. Fei P, Lee J, Sevag Packard R, Sereti K-I, Xu H, Ma J, Ding Y, Kang H, Chen H, Sung K, Kulkarni R, Ardehali R, Kuo J, Xu X, Ho C-M, and Hsiai T. Cardiac Light-Sheet Fluorescent Microscopy for Multi-Scale and Rapid Imaging of Architecture and Function. Sci. Rep. 6:, 2016.
16. Fei P, Nie J, Lee J, Ding Y, Li S, Yu Z, Zhang H, Hagiwara M, Yu T, Segura T, Ho C-M, Zhu D, and Hsiai TK. Sub-voxel light-sheet microscopy for high-resolution, high-throughput volumetric imaging of large biomedical specimens. bioRxiv, 2018. <http://biorxiv.org/content/early/2018/01/29/255695.abstract>
17. Freeborough PA, Woods RP, and Fox NC. Accurate registration of serial 3D MR brain images and its application to visualizing change in neurodegenerative disorders. J. Comput. Assist. Tomogr. 20:1012–22, 1996.
18. Fuchs E, Jaffe J, Long R, and Azam F. Thin laser light sheet microscope for microbial oceanography. Opt. Express 10:145–154, 2002.
19. Gallagher AG, and Cates CU. Virtual reality training for the operating room and cardiac catheterisation laboratory. Lancet 364:1538–1540, 2004.
20. Greenbaum P. The lawnmower man. Film Video 9:58–62, 1992.
21. Gu X, Pan H, Liang Y, Castillo R, Yang D, Choi D, Castillo E, Majumdar A, Guerrero T, and Jiang S. Implementation and evaluation of various demons deformable image registration algorithms on a GPU. Phys. Med. Biol. 55:207–219, 2010.
22. Guiraudon GM, Jones DL, Bainbridge D, and Peters TM. Mitral valve implantation using off-pump closed beating intracardiac surgery: a feasibility study. Interact. Cardiovasc. Thorac. Surg. 6:603–607, 2007.
23. Handels H, and Ehrhardt J. Medical image computing for computer-supported diagnostics and therapy. Methods Inf. Med. 48:11–17, 2009.
24. Hill D, Batchelor P, Holden M, and Hawkes D. Medical image registration. Phys. Med. Biol. 46:R1, 2001.
25. Holden M, Hill D, Denton E, Jarosz J, Cox T, Rohlfing T, Goodey J, and Hawkes D. Voxel similarity measures for 3-D serial MR brain image registration. IEEE Trans. Med. Imaging 19:94–102, 2000.
26. Huisken J, Swoger J, Del Bene F, Wittbrodt J, and Stelzer EHK. Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science 305:1007–1009, 2004.
27. Hwang SS, Kim H-D, Jang TY, Yoo J, Kim S, Paeng K, and Kim SD. Image-based object reconstruction using run-length representation. Signal Proc. Image Commun. 51:1–12, 2017.
28. Kanade T, and Narayanan PJ. Virtualized reality: perspectives on 4D digitization of dynamic events. IEEE Comp. Graph. Appl. 27:32–40, 2007.
29. Kardell M, Magnusson M, Sandborg M, Alm Carlsson G, Jeuthe J, and Malusek A. Automatic segmentation of pelvis for brachytherapy of prostate. Radiat. Prot. Dosimetry 169:398–404, 2016.
30. Keller PJ, Schmidt AD, Wittbrodt J, and Stelzer EHK. Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy. Science 322:1065–1069, 2008.
31. King F, Jayender J, Bhagavatula S, Shyn P, Pieper S, Kapur T, Lasso A, and Fichtinger G. An Immersive Virtual Reality Environment for Diagnostic Imaging. J. Med. Robot. Res. 1:1640003-1–9, 2016.
32. Lee J, Fei P, Sevag Packard R, Kang H, Xu H, Baek KI, Jen N, Chen J, Yen H, Kuo J, Chi N, Ho C-M, and Hsiai T. 4-Dimensional light-sheet microscopy to elucidate shear stress modulation of cardiac trabeculation. J. Clin. Invest. 126:1679–1690, 2016.
33. Lemole G, Banerjee P, Luciano C, Neckrysh S, and Charbel F. Virtual Reality in Neurosurgical Education. Neurosurgery 61:142–149, 2007.
34. Li G, Citrin D, Camphausen K, Mueller B, Burman C, Mychalczak B, Miller RW, and Song Y. Advances in 4D medical imaging and 4D radiation therapy. Technol. Cancer Res. Treat. 7:67–81, 2008.
35. Li Z, Wu M, Zhou W, and Yu J. 4D Human Body Correspondences from Panoramic Depth Maps, 2018.
36. Lipman Y, and Funkhouser T. Möbius voting for surface correspondence. ACM T. Graph. 28:72, 2009.
37. Litman R, and Bronstein AM. Learning spectral descriptors for deformable shape correspondence. IEEE T. Patt. Anal. Mach. Intell. 36:171–180, 2014.
38. Lorenzo-Valdés M, Sanchez-Ortiz GI, Mohiaddin R, and Rueckert D. Atlas-based segmentation and tracking of 3D cardiac MR images using non-rigid registration. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2002.
39. Lu W, Parikh PJ, El Naqa IM, Nystrom MM, Hubenschmidt JP, Wahab SH, Mutic S, Singh AK, Christensen GE, Bradley JD, and Low DA. Quantitation of the reconstruction quality of a four-dimensional computed tomography process for lung cancer patients. Med. Phys. 32:890–901, 2005.
40. Lu Y, Yang K, Zhou K, Pang B, Wang G, Ding Y, Zhang Q, Han H, Tian J, Li C, and Ren Q. An Integrated Quad-Modality Molecular Imaging System for Small Animals. J. Nucl. Med. 55:1375–1379, 2014.
41. McInerney T, and Terzopoulos D. A dynamic finite element surface model for segmentation and tracking in multidimensional medical images with application to cardiac 4D image analysis. Comput. Med. Imaging Graph. 19:69–83, 1995.
42. van der Meijden O, and Schijven M. The value of haptic feedback in conventional and robot-assisted minimal invasive surgery and virtual reality training: a current review. Surg. Endosc. 23:1180–1190, 2009.
43. Metz CT, Klein S, Schaap M, van Walsum T, and Niessen WJ. Nonrigid registration of dynamic medical imaging data using nD+t B-splines and a groupwise optimization approach. Med. Image Anal. 15:238–249, 2011.
44. Mitchell SC, Bosch JG, Lelieveldt BPF, Van der Geest RJ, Reiber JHC, and Sonka M. 3-D active appearance models: segmentation of cardiac MR and ultrasound images. IEEE T. Med. Imaging 21:1167–1178, 2002.
45. Montagnat J, and Delingette H. 4D deformable models with temporal constraints: application to 4D cardiac image segmentation. Med. Image Anal. 9:87–100, 2005.
46. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9:62–66, 1979.
47. Packard RRS, Baek KI, Beebe T, Jen N, Ding Y, Shi F, Fei P, Kang BJ, Chen PH, Gau J, Chen M, Tang JY, Shih YH, Ding Y, Li D, Xu X, and Hsiai TK. Automated Segmentation of Light-Sheet Fluorescent Imaging to Characterize Experimental Doxorubicin-Induced Cardiac Injury and Repair. Sci. Rep. 7:1–11, 2017.
48. Peng H, Ruan Z, Long F, Simpson JH, and Myers EW. V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets. Nat. Biotechnol. 28:348–353, 2010.
49. Peng H, Tang J, Xiao H, Bria A, Zhou J, Butler V, Zhou Z, Gonzalez-Bellido PT, Oh SW, Chen J, Mitra A, Tsien RW, Zeng H, Ascoli GA, Iannello G, Hawrylycz M, Myers E, and Long F. Virtual finger boosts three-dimensional imaging and microsurgery as well as terabyte volume image visualization and analysis. Nat. Commun. 5:1–13, 2014.
50. Planchon T, Gao L, Milkie D, Davidson M, Galbraith J, Galbraith C, and Betzig E. Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination. Nat. Methods 8:417–423, 2011.
51. Pottmann H, Wallner J, Huang Q-X, and Yang Y-L. Integral invariants for robust geometry processing. Comp. Aided Geom. Des. 26:37–60, 2009.
52. Power RM, and Huisken J. A guide to light-sheet fluorescence microscopy for multiscale imaging. Nat. Methods 14:360–373, 2017.
53. Reznick R, and MacRae H. Teaching Surgical Skills - Changes in the Wind. N. Engl. J. Med. 355:2664–2669, 2006.
54. Riva G. Applications of virtual environments in medicine. Methods Inf. Med. 42:524–534, 2003.
55. Shen J-K, Matuszewski B, Shark L-K, Skalski A, Zielinski T, and Moore C. Deformable image registration - a critical evaluation: demons, B-spline FFD and spring mass system, 2008.
56. Smith LN, Farooq AR, Smith ML, Ivanov IE, and Orlando A. Realistic and interactive high-resolution 4D environments for real-time surgeon and patient interaction. Int. J. Med. Robot. 13:e1761, 2017.
57. Thirion J. Image matching as a diffusion process: an analogy with Maxwell's demons. Med. Image Anal. 2:243–260, 1998.
58. Turinsky AL, and Sensen CW. On the way to building an integrated computational environment for the study of developmental patterns and genetic diseases. Int. J. Nanomed. 1:89, 2006.
59. Vemuri AS, Wu JC-H, Liu K-C, and Wu H-S. Deformable three-dimensional model architecture for interactive augmented reality in minimally invasive surgery. Surg. Endosc. 26:3655–3662, 2012.
60. Verveer P, Swoger J, Pampaloni F, Greger K, Marcello M, and Stelzer E. High-resolution three-dimensional imaging of large specimens with light sheet-based microscopy. Nat. Methods 4:311–313, 2007.
61. Weissleder R, and Pittet M. Imaging in the era of molecular oncology. Nature 452:580–589, 2008.
62. Wierzbicki M, Drangova M, Guiraudon G, and Peters T. Validation of dynamic heart models obtained using non-linear registration for virtual reality training, planning, and guidance of minimally invasive cardiac surgeries. Med. Image Anal. 8:387–401, 2004.
63. Yan D. Adaptive radiotherapy: merging principle into clinical practice. Semin. Radiat. Oncol. 20:79–83, 2010.
64. Yan D, Vicini F, Wong J, and Martinez A. Adaptive radiation therapy. Phys. Med. Biol. 43:123, 1997.
65. Yang JC, Chen CH, and Jeng MC. Integrating video-capture virtual reality technology into a physically interactive learning environment for English learning. Comput. Educ. 55:1346–1356, 2010.
66. Zou KH, Warfield SK, Bharatha A, Tempany CMC, Kaus MR, Haker SJ, Wells WM III, and Jolesz FA. Statistical validation of image segmentation quality based on a spatial overlap index. Acad. Radiol. 11:178–189, 2004.
