Abstract
Four-dimensional data sets are increasingly common in MRI and CT. While clinical visualization often focuses on individual temporal phases capturing the tissue(s) of interest, additional insight may be gained by exploring animated 3D reconstructions of physiological motion made possible by augmented or virtual reality representations of 4D patient imaging. Cardiac CT acquisitions can provide sufficient spatial resolution and temporal data to support advanced visualization; however, no open-source tools are readily available to facilitate the transformation from raw medical images to dynamic and interactive augmented or virtual reality representations. To address this gap, we developed a workflow using free and open-source tools to process 4D cardiac CT imaging, starting from raw DICOM data and ending with dynamic AR representations viewable on a phone, tablet, or computer. In addition to assembling the workflow using existing platforms (3D Slicer and Unity), we contribute two new features: 1. custom software which can propagate a segmentation created for one cardiac phase to all others and export the results to surface files in a fully automated fashion, and 2. a user interface and linked code for the animation and interactive review of the surfaces in augmented reality. Validation of the surface-based areas demonstrated excellent correlation with radiologists’ image-based areas (R > 0.99). While our tools were developed specifically for 4D cardiac CT, the open framework allows the workflow to serve as a blueprint for similar applications applied to 4D imaging of other tissues and using other modalities. We anticipate this and related workflows will be useful both clinically and for educational purposes.
Supplementary Information
The online version contains supplementary material available at 10.1007/s10278-022-00659-y.
Introduction
Software segmentation and visualization of cardiac anatomy from medical imaging data is a fundamental feature of a range of commercial and open-source programs (e.g., Mimics/3-Matic (Materialise, Leuven, Belgium), Syngo Via (Siemens Medical Systems, Erlangen, Germany), CVI42 (Circle, Calgary, Canada), and Osirix MD (Osirix, Geneva, Switzerland)). These tools employ different workflows suited to their intended goals, with examples focused on clinical modeling of the blood pool (Mimics/3-Matic) [1], evaluating TAVR leaks [2], exploring functional cardiac anatomy [3], providing input to deep learning algorithms [4], and patient communication and education [5]. While there are many paths and described applications of VR/AR for 3D images [6], 4D workflows are either commercial and cost-prohibitive or limited to demonstration without executable software code [7]. With the literature on 3D AR/VR applications expanding rapidly, a turn-key and open-source 4D solution would facilitate a new dimension of inquiry.
As part of an ongoing effort at our institution to develop open-source toolboxes that can broadly serve image post-processing goals for researchers and educators, we developed a custom module for Slicer [8] (http://www.slicer.org/) that takes 4D CT data as input and outputs a set of segmented surface files. These files can then be imported into the Unity game development platform (http://unity.com). There, using an interface and routines we have developed, these surfaces can be packaged into phone, tablet, or computer apps that allow animated visualization of segmented surfaces in an augmented reality environment. In this environment, the cardiac surfaces can be observed in a variety of modes; phases can be played like a movie or paused; structures can be sliced into at any angle; and, using a 3D augmented reality target (Merge Cube, https://mergeedu.com/cube), the anatomy can be freely rotated and explored in a highly intuitive way, including while the surfaces demonstrate a dynamic representation of physiological motion.
To validate the surfaces, retrospective ECG-gated multiphase cardiac scans were processed as proof of concept. Correspondence between gold-standard radiologist measurements and shell-derived values was used as a validation index for the tool. A step-by-step description of the processing steps, the intermediary files created, and the output files is provided. While outside the scope of this software-focused study, how this tool might inform data-driven investigations into surgical planning, patient or trainee education, and modality comparison (3D printing vs VR/AR) is covered in the discussion.
Methods
A high-level overview of the developed workflow is shown in Fig. 1. Briefly, dynamic cardiac CT data sets are acquired and loaded into 3D Slicer, where the blood pool of a single cardiac phase is segmented into regions, automatically propagated to all other cardiac phases, and the resulting segment surfaces are exported to files. These surface files are then loaded into Unity, given a user interface and animation capability, and packaged into apps which can be run on the desired device. These apps provide a user the ability to explore the cardiac data interactively. Below, we first describe the development and capabilities of the custom Slicer module, then describe in more detail the steps required to process a 4D CT study and produce an augmented reality app viewable on a mobile device. Next, a validation cohort was used to measure concordance between gold-standard derived 2D measures and corresponding measurements on the segmented surface shells used to generate the derived AR/VR visualizations. The description of this work follows the toolbox workflow.
Fig. 1.
Process Overview. 1. Acquire contrast-enhanced retrospective ECG-gated cardiac CT dataset. 2. Semi-automated segmentation and generation of 3D surface models. 3. Import 3D models into Unity game engine. Add animation and interactivity. 4. Publish to final interactive augmented or virtual reality application. Each row highlights the primary tool used and some example images to illustrate the results of the corresponding step of the process. Top row: CT scanner and sample image slices from 4D CT cardiac acquisition. Second row: 3D Slicer and sample 2D and 3D views of cardiac segmentations. Third row: Unity game development platform and sample images showing screenshots of interface during app development. Bottom row: Merge cube augmented reality target with sample images of foam cube and functioning augmented reality app in use on mobile device
4D CT Acquisition
Dynamic cardiac CT data sets were acquired using a retrospective ECG-gated technique (Siemens SOMATOM Force), resulting in 10–20 thin-slice (0.6 mm) volumes across the cardiac cycle. These studies were processed as proof of concept with IRB approval.
Data Import
DICOM files of the 4D CT exam were imported directly into Slicer as a “Volume Sequence.” The import process recognizes that the images represent a temporal series of 3D volumes, and Slicer includes the ability to easily navigate the temporal frames or play them as a loop (see Fig. 2D).
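For readers who prefer to script this step, the import can also be driven from Slicer’s Python console. Below is a minimal sketch, not the module’s actual code, assuming a hypothetical directory path and the DICOMUtils helpers bundled with recent Slicer releases; Slicer’s multi-volume DICOM plugin is what recognizes the exam as a Volume Sequence.

```python
# Minimal sketch: scripted DICOM import in Slicer's Python console.
# The directory path is hypothetical; DICOMUtils ships with 3D Slicer.
import slicer
from DICOMLib import DICOMUtils

dicomDir = "/path/to/4DCT/dicom"  # hypothetical location of the exam

loadedNodeIDs = []
with DICOMUtils.TemporaryDICOMDatabase() as db:
    DICOMUtils.importDicom(dicomDir, db)
    for patientUID in db.patients():
        # Loads the series using the best-matching plugin; for a 4D CT exam
        # this should produce a volume sequence node with one frame per phase.
        loadedNodeIDs.extend(DICOMUtils.loadPatientByUID(patientUID))
```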
Fig. 2.
Custom Slicer module interface and features. A. User interface for segmentation initialization, parameter setting, and individual processing step exploration. B. User interface for fully automated propagation of segmentation to all temporal frames. C. User interface for export of all segmentations to surface mesh files suitable for import into Unity. D. Slicer playback controls for 4D images. E. Example view showing heart segmentation for one temporal phase in 3D and 2D slices
Initial Segmentation on Single Phase
An initial temporal phase was selected for interactive segmentation, generally one in which the cardiac valves were most readily visualized, to accurately locate the boundaries between chambers. The threshold value was set to prioritize the bulk compartment of contrast-enhanced blood while minimizing the inclusion of soft tissue. A paintbrush tool, limited to painting only voxels above the identified threshold, was then used to place seeds on several individual slices identifying each desired segment (left ventricle, right ventricle, left atrium, right atrium, aorta, and pulmonary arteries). To prevent spurious spillover of segments into bones such as the sternum, ribs, and vertebrae, above-threshold bone voxels were also painted and marked as a segment labeled “Other”. All air and soft tissue voxels below the selected threshold were marked as a segment called “BelowThresh” using the Threshold tool. The GrowFromSeeds tool was then run, which generated a complete volume segmentation by expanding outward from the seeds to the most similar voxels [9]. The resulting segmentation was examined in 3D and in 2D slices for inconsistencies. If any were found, new seeds were painted in to correct region labels, and the generated segmentation was automatically updated. Once the user was satisfied with the region assignment, the final step of the initial interactive segmentation was to close holes using the “Smoothing” → “Close Holes” tool with a distance of 2.5 mm, which performs 3D morphological closing on the segmented regions. This step removes salt-and-pepper noise and smooths the boundaries of segmented regions, which are often rough because of noise and partial volume effects.
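These interactive steps can equivalently be scripted through Slicer’s Segment Editor API. The fragment below is a hedged sketch of the threshold, grow-from-seeds, and hole-closing operations; `masterVolumeNode`, `bloodHUThreshold`, and `lvSegmentID` are hypothetical names, and seed painting is assumed to have been done interactively beforehand.

```python
# Sketch of scripted equivalents of the interactive steps above. Assumes
# `masterVolumeNode` holds the selected cardiac phase and that seed segments
# (LV, RV, LA, RA, aorta, PA, Other) were already painted interactively.
import slicer

segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()

segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setMasterVolumeNode(masterVolumeNode)

# Mark all sub-threshold voxels (air and soft tissue) as "BelowThresh".
belowThreshID = segmentationNode.GetSegmentation().AddEmptySegment("BelowThresh")
segmentEditorWidget.setCurrentSegmentID(belowThreshID)
segmentEditorWidget.setActiveEffectByName("Threshold")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MinimumThreshold", "-1024")
effect.setParameter("MaximumThreshold", str(bloodHUThreshold - 1))  # chosen contrast HU threshold
effect.self().onApply()

# Grow the painted seeds out to the most similar voxels [9].
segmentEditorWidget.setActiveEffectByName("Grow from seeds")
effect = segmentEditorWidget.activeEffect()
effect.self().onPreview()
effect.self().onApply()

# Morphological closing with a 2.5 mm kernel to fill small holes.
segmentEditorWidget.setCurrentSegmentID(lvSegmentID)  # repeat per cardiac segment
segmentEditorWidget.setActiveEffectByName("Smoothing")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("SmoothingMethod", "MORPHOLOGICAL_CLOSING")
effect.setParameter("KernelSizeMm", "2.5")
effect.self().onApply()
```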
The resulting heart segments have closed surfaces; for some visualizations it is helpful to render the blood pool as negative space so that it is possible, for example, to look across a connection between chambers, such as a septal defect. To enable these visualizations, a “hollowed” version of each segmentation was also created. The hollowing proceeds in three steps: 1. The six heart segments were copied, and a new segment called “BloodPool” was created from them via logical union. 2. Each copied segment was hollowed by adding a uniform-thickness shell (5 mm for all heart segments except 10 mm for the LV), using the current surface as the inside surface of the shell. 3. The “BloodPool” segment was subtracted from the hollowed shell segments, restoring the connections between segments as holes.
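Scripted, this sequence maps onto the Segment Editor’s Logical operators and Hollow effects. The sketch below assumes the editor widget configured as in the previous fragment; `heartSegmentIDs` and `lvSegmentID` are hypothetical identifiers for the copies of the six heart segments.

```python
# Sketch of the three hollowing steps (segmentEditorWidget configured as above;
# heartSegmentIDs / lvSegmentID are hypothetical segment identifiers).
seg = segmentationNode.GetSegmentation()

# 1. Build "BloodPool" as the logical union of the copied heart segments.
bloodPoolID = seg.AddEmptySegment("BloodPool")
segmentEditorWidget.setCurrentSegmentID(bloodPoolID)
segmentEditorWidget.setActiveEffectByName("Logical operators")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("Operation", "UNION")
for segmentID in heartSegmentIDs:
    effect.setParameter("ModifierSegmentID", segmentID)
    effect.self().onApply()

# 2. Hollow each copy, keeping the current surface as the inside of the shell.
for segmentID in heartSegmentIDs:
    segmentEditorWidget.setCurrentSegmentID(segmentID)
    segmentEditorWidget.setActiveEffectByName("Hollow")
    effect = segmentEditorWidget.activeEffect()
    effect.setParameter("ShellMode", "INSIDE_SURFACE")
    effect.setParameter("ShellThicknessMm", "10.0" if segmentID == lvSegmentID else "5.0")
    effect.self().onApply()

# 3. Subtract the blood pool, reopening inter-chamber connections as holes.
for segmentID in heartSegmentIDs:
    segmentEditorWidget.setCurrentSegmentID(segmentID)
    segmentEditorWidget.setActiveEffectByName("Logical operators")
    effect = segmentEditorWidget.activeEffect()
    effect.setParameter("Operation", "SUBTRACT")
    effect.setParameter("ModifierSegmentID", bloodPoolID)
    effect.self().onApply()
```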
Custom Slicer Module
3D Slicer is an open-source software platform for visualization and medical image computing (http://slicer.org) which provides many existing capabilities as well as a modular architecture that allows users to extend Slicer’s functionality with custom modules. Using this architecture, we developed a module, called “PropagateSegToOtherPhases”, to facilitate the multiphase segmentation of cardiac 4D CT images. Source code for this module is available in a public GitHub repository (https://github.com/mikebind/Heartbeat4D) and as Supplemental Data, and has been tested for compatibility with Slicer versions 4.10–4.13. Elements of this module are shown in Fig. 2. The primary function of the module is to automate the propagation of cardiac segmentations across phases: manual segmentation of atria, ventricles, and great vessels on one temporal frame is time-consuming, typically requiring 30–60 min, and multiplying that across up to twenty temporal frames for 4D CT is prohibitive.
Using just a few parameters (a Hounsfield unit (HU) threshold, a segment erosion distance, and a segment hole-filling distance), a sequence of processing steps was identified which allows automated generation of the next cardiac phase’s segmentation from the prior phase’s segmentation. The sequence is as follows: 1. Copy the segmentation from the prior frame onto this frame. 2. Erode all existing segments by a fixed amount (typically 5 mm). 3. Mark all voxels in this frame which are below the identified HU threshold as part of a “BelowThresh” segment. 4. Using the eroded cardiac segments as seeds and the BelowThresh segment as boundaries, grow each segment until it runs into other segments or the BelowThresh segment. 5. Fill any internal holes within each cardiac segment which are smaller than a certain size (typically 2.5 mm; this handles holes due to salt-and-pepper noise arising from blood pool voxels that fall below the specified threshold). Steps 2 and 3 allow for movement of the segment boundaries between phases, step 4 establishes the new boundaries, and step 5 cleans up the segments. Steps 1–5 are then repeated for each subsequent temporal frame until all frames have been segmented.
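In outline form, this propagation logic reduces to a short loop. The sketch below uses hypothetical helper names standing in for the corresponding Segment Editor operations; the working implementation is in the public Heartbeat4D repository.

```python
# High-level sketch of the propagation loop (steps 1-5 above). The helper
# functions are hypothetical stand-ins for the Segment Editor operations;
# the actual implementation lives at https://github.com/mikebind/Heartbeat4D.
ERODE_MM = 5.0   # step 2: erosion margin
CLOSE_MM = 2.5   # step 5: hole-filling kernel

for frameIndex in range(1, numberOfFrames):
    frameVolume = getFrameVolume(sequenceNode, frameIndex)
    copySegmentationFromFrame(segmentationNode, frameIndex - 1)     # step 1
    erodeAllCardiacSegments(segmentationNode, marginMm=ERODE_MM)    # step 2
    markBelowThreshold(segmentationNode, frameVolume, thresholdHU)  # step 3
    growSegmentsFromSeeds(segmentationNode, frameVolume)            # step 4
    closeSmallHoles(segmentationNode, kernelMm=CLOSE_MM)            # step 5
```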
In addition to the basic segmentations, a number of automated manipulations of the resulting segments were desired, including a hollowed and wrapped version of each segment and a fused blood pool segment. These additional versions of the segmentations allow viewing in different modes in the AR app, and the custom Slicer module handles generating these variations of each segmentation. Lastly, the custom module automates the export of each variation of each segment in each temporal frame as OBJ-format surface mesh files.
Propagation to All Other Phases
Once the initial phase segmentation was complete, all remaining phases were segmented in a fully automated fashion by clicking the “Propagate Segmentation to All Frames” button in our custom “PropagateSegToOtherPhases” module.
Export of Segmentation Surfaces
Once all phases were processed, the resulting segment surfaces were exported using “PropagateSegToOtherPhases” by clicking the “Export All To OBJ” button. OBJ files are a standard 3D surface model storage and exchange format containing the 3D coordinates and polygonal facets of a set of surfaces, and each can be linked to another file (a .MTL file) which describes the color and shading properties of each surface. The export from “PropagateSegToOtherPhases” produces one OBJ file and one MTL file per temporal phase for each of the hollowed and non-hollowed segmentation versions. For example, if the 4D CT series had 20 cardiac phases, the export step would produce 40 OBJ files and 40 associated MTL files: 20 for the non-hollowed segments and 20 for the hollowed segments.
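Under the hood, this kind of per-phase export can be accomplished with Slicer’s segmentations module logic. A minimal sketch follows, assuming one segmentation node per phase and a hypothetical output directory:

```python
# Minimal sketch of OBJ export for one phase (output path hypothetical).
# Passing None for the segment IDs exports every segment in the node;
# each exported OBJ is paired with an MTL file describing segment colors.
import slicer

outputDir = "/path/to/export/obj"  # hypothetical destination folder
slicer.vtkSlicerSegmentationsModuleLogic.ExportSegmentsClosedSurfaceRepresentationToFiles(
    outputDir, segmentationNode, None, "OBJ")
```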
Import into Unity
The exported surface models and material properties were imported into our custom Unity game engine project (template available in Supplemental Data) by adding the OBJ and MTL files to the project’s Assets folder, which automatically triggers Unity to import them. The imported assets were dragged into the scene hierarchy under SolidSegContainer (for non-hollowed segmentations) or HollowSegContainer (for hollowed segmentations). Within the Unity engine, custom code converts the model surface files into an animated version, provides interactive visibility controls for each segment, and enables interactive slicing of the animated volume (see Fig. 3). The augmented reality app was then previewed by pressing the play button and holding the Merge Cube so that it was visible to the computer’s webcam. The included project file has been tested for compatibility with Unity version 2018.3.
Fig. 3.
Unity Engine screenshot. Users can preview the app and manipulate the user interface using Unity. Minimally, users would need to import surface models, drag them into the scene hierarchy, and then click to build the final app file
Export to Mobile App
We produced mobile apps in both Android and iOS versions.
In Unity, the Android app file was then created using File > Build and choosing Android as the target platform. This generates an Android Package Kit (.apk) file, which can then be transferred to and installed on any Android device. Depending on the Android version, the user may be prompted “Allow installing apps from unknown sources?” or a similar message before proceeding with the install. After installation is complete, tapping on the app icon will launch the app and the animated 3D heart model will be displayed and interactive.
To create an iOS app for iPad or iPhone, the process is only slightly more complex. In addition to Unity, the user must also install Xcode and set up an Apple ID. For building and testing on one’s own devices (i.e., not distributing through the App Store), an Apple Developer account is not needed. Running File > Build from Unity generates an Xcode project, which is then used to sign, compile, and load the app onto a connected iOS device. On that device, the user will need to mark the Apple ID as trusted the first time the app is opened. Once installation is complete, tapping the app icon will launch the app and the animated 3D heart model will be displayed and interactive, with all controls functioning exactly as in the Android version of the app.
Videos showing the functioning of both these apps are available as Supplemental Data.
Segmented Surface Validation
To validate that the segment surfaces correspond closely to the radiologist-perceived edges of the blood pool in a clinically relevant region, we compared cross-sectional area measurements made by two experienced cardiac radiologists on raw images from five patients with measurements carried out on corresponding slices through the segmented surfaces produced by the developed tools. Because the patients in the development cohort were being evaluated for pulmonary valve replacement, we focused on cross-sectional areas at three locations along the pulmonary artery (PA): near the junction of the PA with the right ventricle, near the bifurcation of the PA, and approximately halfway in between. The sites and slice planes were fixed ahead of time, and the two radiologists were blinded to each other’s measurements and to the surface-derived values. There is currently no ability to perform direct AR-based measurements with the existing Unity platform tool set.
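Surface-based areas are obtained by slicing the exported meshes with the pre-specified planes. One way to compute such a cross-sectional area with VTK (which ships with Slicer) is sketched below; the plane origin and normal are hypothetical placeholders for the fixed measurement sites, and this is an illustrative reconstruction rather than the exact code used.

```python
# Sketch: cross-sectional area of a closed surface mesh cut by a plane,
# using VTK (bundled with 3D Slicer). Plane origin/normal are placeholders
# for the pre-specified PA measurement sites.
import vtk

def cross_sectional_area(polyData, origin, normal):
    plane = vtk.vtkPlane()
    plane.SetOrigin(origin)
    plane.SetNormal(normal)

    cutter = vtk.vtkCutter()  # intersect the mesh with the plane -> contour lines
    cutter.SetCutFunction(plane)
    cutter.SetInputData(polyData)

    triangulator = vtk.vtkContourTriangulator()  # fill the contour with triangles
    triangulator.SetInputConnection(cutter.GetOutputPort())
    triangulator.Update()

    mass = vtk.vtkMassProperties()  # area of the filled cross-section
    mass.SetInputConnection(triangulator.GetOutputPort())
    mass.Update()
    return mass.GetSurfaceArea()  # in mm^2 for meshes with mm coordinates
```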
Results
Data processing was straightforward using the established methods on all datasets. Initial segmentation seed placement and revision typically took approximately 45 min. The fully automated segmentation of all other phases took 1–3 min per phase. Export and import of the OBJ files plus building and installing the apk file took approximately 20 min. Examples of functioning AR apps are shown on an Android phone and on an iPad in Fig. 4, and videos demonstrating all features are available in Supplementary Data.
Fig. 4.
Example Augmented Reality Views. A. MergeCube augmented reality target is tracked to control the position and orientation of the dynamic heart rendering. B. View into the hollowed segmentation of the left (orange) and right (lavender) ventricle cavities, with all other segments hidden. C. A dynamic cutaway view of the merged blood pool segment (pink) wrapped with a transparent sheath. D. iPad app showing the ability to hide segments (aorta and pulmonary arteries hidden). Videos illustrating all app features are available as online Supplemental Data
Validation of cross-sectional area measurements between image-based and segmented surface-based methods demonstrated close correspondence between the two (Fig. 5). Surface-based areas were very strongly correlated with each radiologist’s image-based measurements, with correlation coefficients of 0.999 and 0.996, respectively. The correlation coefficient between the two radiologists’ measurements was 0.999. Taking the average of the two radiologist measurements as the gold standard, the surface-based measurements deviated by −0.15 cm² on average, with limits of agreement of −0.52 cm² to +0.23 cm². As a percentage of the measurement, the deviation from gold standard was 2.2% on average, with a maximum percent difference of 8.3% from a radiologist measurement. Between radiologists, the maximum percent difference was 5.5% and the average percent difference was 1.1%.
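For reference, these agreement statistics follow the standard Bland–Altman formulation and can be reproduced in a few lines of NumPy. The sketch below uses synthetic placeholder arrays, not the study measurements.

```python
# Sketch of the reported agreement statistics (correlation, Bland-Altman bias
# and limits of agreement). Arrays are synthetic placeholders, NOT the study
# data; substitute the per-site area measurements in cm^2.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.uniform(3.0, 12.0, 15)            # 3 sites x 5 patients (placeholder)
rad1 = truth + rng.normal(0.0, 0.05, 15)      # radiologist 1 (placeholder)
rad2 = truth + rng.normal(0.0, 0.05, 15)      # radiologist 2 (placeholder)
surf = truth + rng.normal(-0.15, 0.15, 15)    # surface-based (placeholder)

gold = (rad1 + rad2) / 2.0                    # radiologist average = gold standard
r1 = np.corrcoef(surf, rad1)[0, 1]            # correlation vs each radiologist
r2 = np.corrcoef(surf, rad2)[0, 1]

diff = surf - gold                            # Bland-Altman differences
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)    # limits of agreement
pct = 100.0 * np.abs(diff) / gold             # percent deviation from gold standard
print(f"r1={r1:.3f} r2={r2:.3f} bias={bias:.2f} cm^2 "
      f"LoA=({loa[0]:.2f}, {loa[1]:.2f}) cm^2 mean%diff={pct.mean():.1f}%")
```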
Fig. 5.
Correspondence between image-based and extracted surface-based measurements. Upper panel: Cross sectional area measurements of the pulmonary artery at three locations in each of five patients being evaluated for PA valve replacement. Radiologist area measurements on images (open squares and open circles) are shown with corresponding area measurements extracted from the segmented blood pool surface (i.e., the surfaces shown in the augmented reality animations; filled circles). The average of the two radiologist area measurements was considered the gold standard. Lower panel: Bland–Altman plot of the difference between the segmented blood pool surface-based area measurement vs the gold standard area measurement, with mean difference and limits of agreement shown as horizontal dashed black lines
Discussion
The generated open-source toolbox demonstrated a fast, streamlined workflow for processing 4D CT images. Comparison between image-based area measurements by radiologists and corresponding segmented surface-based area measurements confirmed that the segmented surfaces corresponded closely to the boundaries of the blood pool. In a set of clinically relevant measurements of the pulmonary artery in patients being evaluated for pulmonary valve replacement, the average percentage difference between image-based areas and surface-based areas was only 2.2%. This performance is slightly worse than the radiologist-to-radiologist area difference of only 1.1%, but well within a reasonable range for utility and similar to other studies assessing measurements of 3D printed models [10]. As this toolbox is not FDA approved for diagnostic measurement, these analyses are intended to support the robustness of the visualizations generated, not to replace the typical raw image-based measures.
As with all segmentation approaches, image artifact, motion blurring, and contrast flow voids can impede segmentation of the blood pool, though manual revision is easily performed in the Slicer environment, depending on the degree of visual precision desired for the final surfaces. When data quality is high, reflecting sufficient contrast dose, volumes can likely be processed with minimal revision, supporting the robustness of this tool as a starting point for data-driven patient-group investigations. With new implementations of commercial products (Materialise Mimics/3-Matic) offering built-in metal artifact correction and accentuated right-sided segmentation (a region that is often sub-optimally filled compared to the left ventricle), future versions of the tool can look toward similar iterative improvements.
Dimensionally rich imaging information often travels down one of two analytic pathways: 1. reducing the data to quantify a specific disease feature (e.g., valve visualization), or 2. providing a generic workflow to explore a broad range of secondary clinical, education, and research applications. While advanced 4D segmentation algorithms often have the former focus [11], and commercial workflows exist for a mixture of the two (Mimics/3-Matic, Materialise, Leuven, Belgium), we believe our implementation fills a critical niche in providing a complete pipeline, built on free and open-source software, for bringing 4D DICOM imaging data into an interactive AR/VR solution.
With the creation of this tool, many questions can be explored in existing or future datasets. At the most fundamental level, the ability to visualize dynamic data might be useful in a range of cardiac conditions. While reconstructed volumes can be dynamically visualized on conventional 2D displays, this tool provides a novel virtual yet tangible experience: the user can manipulate the 4D data by rotating the Merge Cube in hand to explore the details of the demonstrated anatomy. This process creates a familiar connection between hand and eye, producing an interactive experience. However, depending on the question being asked, there may be situations where 3D data remains superior [12]. Similarly, one could posit that learners early in their cardiac anatomical education would be better served by 3D than 4D examples. Within the literature, there is evidence for potentially greater learning occurring in early trainees with more immersion [13]. Whether adding 4D data changes this equation would be interesting to evaluate. Given that cognitive load can inversely impact performance [14], it may be that early learners would be impeded by 4D information, whereas more expert learners would benefit. With much discussion of fidelity in simulation being better conceptualized as "transfer of learning, learner engagement, and suspension of disbelief" [15], there is much work to be done to characterize the pros, cons, and optimal uses of 3D and 4D data. While there is clearly an inherent gap between 4D VR and physical objects, there are bridges between the digital and physical that may be of interest to study [16, 17] and exploit, especially for surgical simulation [18].
With the general VR literature expanding daily [19–21], enabling imaging data to generate 3D and 4D representations may have beneficial impacts on learning, memory, measurement, and the appreciation of spatial relationships [22–24]. With regard to education in the medical setting, there are clear benefits to being able to switch between different views. For example, the course of the coronary arteries is much easier to examine from an endoluminal viewpoint within the chamber than from outside. Additionally, exploring 4D AR visualizations with patients and trainees may facilitate communication regarding complex anatomy and dynamics compared to 2D slice source data.
In developing this application, we prioritized a pipeline architecture that generates usable data for most cases, rather than incorporating less commonly needed refinements (e.g., metal artifact reduction). While some artifacts may be better addressed during data acquisition than through post-processing, dedicated corrections can still have clear benefit when integrated into the post-processing phase. Given the flexibility and open-source nature of Slicer module creation, such enhancements can be considered for future development. As this work evolves, integrating other data elements would further expand descriptive opportunities. For example, co-registration of multiple modalities (CT and delayed-enhancement MRI) would add to user comprehension, illustrating features such as myocardial fibrosis, or specific regions of abnormal T1 or T2 relaxation could be integrated into the animated model. As Slicer is adept at volume rendering in addition to exporting surface renderings, the development of dynamic 2D cutaway tools with which to explore anatomic details in the native image environment is a clear possibility. Such information may be critical for understanding specific anatomic and pathologic details in individual patients, facilitating pre-procedural planning (e.g., implant locations), and appreciating features of the images that are not necessarily represented on the final surface (e.g., hints of valve leaflets or detailed trabeculations).
Conclusion
AR/VR continues to expand our understanding of how we learn, perceive, and interact with medical anatomy, offering perceptual spatial registration and integration of physiologic dynamics that are distinct from standard 3D printed models or conventional workstation visualization tools. We demonstrated the feasibility and accuracy of our software VR pipeline using standard clinical DICOM image datasets, and we are optimistic these tools may enable other investigators by providing an easily customizable framework to address myriad clinical and educational goals while adding further refinements and functionality in the handling of 4D image data sets.
Supplementary Information
Below is the link to the electronic supplementary material.
Authors’ Contributions
Michael Bindschadler, Ph.D. created the custom software modules for use with 3D Slicer and drafted the manuscript. All authors participated in manuscript review and editing, as well as reviewing the virtual Unity/Merge models.
Funding
This work was supported in part by an internal Seattle Children’s Heart Center grant awarded to Sujatha Buddhe, M.D.
Availability of Data and Material
As the focus of this manuscript was on the software methods, the anonymized imaging source data used in creating these virtual models is available only upon request.
Code Availability
3D Slicer is a free, open-source image computing platform available at https://www.slicer.org. The Unity game development platform is available at http://unity.com. Information and software relating to the Merge Cube is available at https://mergeedu.com/cube. Custom software resources: the Python code for the custom Slicer module “PropagateSegToOtherPhases” as well as the custom C# scripts used in the Unity project are publicly available in the GitHub repository at https://github.com/mikebind/Heartbeat4D. Cardiac4DTemplateProject.zip is a template Unity project into which users may place their own processed data to create their own augmented reality app.
Declarations
Ethics Approval
A retrospective IRB exemption was obtained for this project for the use of the anonymized imaging data and informed consent from the subjects was not required. This data was previously acquired as part of a separate IRB-approved clinical trial. This study was performed in accordance with the institution’s ethical standards.
Consent to Participate
Not required under our institution’s retrospective IRB exemption.
Consent to Publish
Not required under our institution’s retrospective IRB exemption.
Conflicts of Interest
Authors have no conflicts relating to this work.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Townsend K, Pietila T. 3D printing and modeling of congenital heart defects: A technical review. Birth Defects Res. 2018;110(13):1091–1097. doi: 10.1002/bdr2.1342.
- 2. Qian Z, Wang K, Liu S, et al. Quantitative Prediction of Paravalvular Leak in Transcatheter Aortic Valve Replacement Based on Tissue-Mimicking 3D Printing. JACC Cardiovasc Imaging. 2017;10(7):719–731. doi: 10.1016/j.jcmg.2017.04.005.
- 3. Sakly H, Said M, Radhouane S, Tagina M. Medical decision making for 5D cardiac model: Template matching technique and simulation of the fifth dimension. Comput Methods Programs Biomed. 2020;191:105382. doi: 10.1016/j.cmpb.2020.105382.
- 4. Chen HH, Liu CM, Chang SL, et al. Automated extraction of left atrial volumes from two-dimensional computer tomography images using a deep learning technique. Int J Cardiol. 2020. doi: 10.1016/j.ijcard.2020.03.075.
- 5. Costello JP, Olivieri LJ, Su L, et al. Incorporating three-dimensional printing into a simulation-based congenital heart disease and critical care training curriculum for resident physicians. Congenit Heart Dis. 2015;10(2):185–190. doi: 10.1111/chd.12238.
- 6. Southworth MK, Silva JR, Silva JNA. Use of Extended Realities in Cardiology. Trends Cardiovasc Med. 2019;30:143–148. doi: 10.1016/j.tcm.2019.04.005.
- 7. Mena KA, Urbain KP, Fahey KM, Bramlet MT. Exploration of time sequential, patient specific 3D heart unlocks clinical understanding. 3D Print Med. 2018;4:15. doi: 10.1186/s41205-018-0034-7.
- 8. Fedorov A, Beichel R, Kalpathy-Cramer J, Finet J, Fillion-Robin J-C, Pujol S, Bauer C, Jennings D, Fennessy F, Sonka M, Buatti J, Aylward SR, Miller JV, Pieper S, Kikinis R. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network. Magn Reson Imaging. 2012;30(9):1323–1341. doi: 10.1016/j.mri.2012.05.001.
- 9. Zhu L, Kolesov I, Gao Y, Kikinis R, Tannenbaum A. An Effective Interactive Medical Image Segmentation Method Using Fast GrowCut. International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Interactive Medical Image Computing Workshop, 2014.
- 10. Hadeed K, et al. Feasibility and accuracy of printed models of complex cardiac defects in small infants from cardiac computed tomography. Pediatr Radiol. 2021;51:1983–1990. doi: 10.1007/s00247-021-05110-y.
- 11. Wang Y, Zhang Y, Xuan W, et al. Fully automatic segmentation of 4D MRI for cardiac functional measurements. Med Phys. 2019;46(1):180–189. doi: 10.1002/mp.13245.
- 12. Bindschadler M, Modgil D, Branch KR, La Riviere PJ, Alessio AM. Performance comparison between static and dynamic cardiac CT on perfusion quantitation and patient classification tasks. Proc SPIE. 2015;941224. doi: 10.1117/12.2082098.
- 13. Kim B, et al. A Novel Virtual Reality Medical Image Display System for Group Discussions of Congenital Heart Disease: Development and Usability Testing. JMIR Cardio. 2020;4:e20633. doi: 10.2196/20633.
- 14. Frederiksen JG, et al. Cognitive load and performance in immersive virtual reality versus conventional virtual reality simulation training of laparoscopic surgery: a randomized trial. Surg Endosc. 2020;34:1244–1252. doi: 10.1007/s00464-019-06887-8.
- 15. Hamstra SJ, Brydges R, Hatala R, Zendejas B, Cook DA. Reconsidering Fidelity in Simulation-Based Training. Acad Med. 2014;89:387–392. doi: 10.1097/ACM.0000000000000130.
- 16. Awori J, et al. 3D models improve understanding of congenital heart disease. 3D Print Med. 2021;7:26.
- 17. Lau I, Gupta A, Sun Z. Clinical Value of Virtual Reality versus 3D Printing in Congenital Heart Disease. Biomolecules. 2021;11:884. doi: 10.3390/biom11060884.
- 18. Wen R, Chng C-B, Chui C-K. Augmented Reality Guidance with Multimodality Imaging Data and Depth-Perceived Interaction for Robot-Assisted Surgery. Robotics. 2017;6:13. doi: 10.3390/robotics6020013.
- 19. Gonzalez AA, Lizana PA, Pino S, Miller BG, Merino C. Augmented reality-based learning for the comprehension of cardiac physiology in undergraduate biomedical students. Adv Physiol Educ. 2020;44(3):314–322. doi: 10.1152/advan.00137.2019.
- 20. Kang SL, Shkumat N, Dragulescu A, et al. Mixed-reality view of cardiac specimens: a new approach to understanding complex intracardiac congenital lesions. Pediatr Radiol. 2020. doi: 10.1007/s00247-020-04740-y.
- 21. Napa S, Moore M, Bardyn T. Advancing Cardiac Surgery Case Planning and Case Review Conferences Using Virtual Reality in Medical Libraries: Evaluation of the Usability of Two Virtual Reality Apps. JMIR Hum Factors. 2019;6(1):e12008. doi: 10.2196/12008.
- 22. Hettig J, Engelhardt S, Hansen C, Mistelbauer G. AR in VR: assessing surgical augmented reality visualizations in a steerable virtual reality environment. Int J Comput Assist Radiol Surg. 2018;13(11):1717–1725. doi: 10.1007/s11548-018-1825-4.
- 23. Wainman B, Pukas G, Wolak L, Mohanraj S, Lamb J, Norman GR. The Critical Role of Stereopsis in Virtual and Mixed Reality Learning Environments. Anat Sci Educ. 2020;13(3):401–412. doi: 10.1002/ase.1928.
- 24. van Helvoort D, Stobbe E, Benning R, Otgaar H, van de Ven V. Physical exploration of a virtual reality environment: Effects on spatiotemporal associative recognition of episodic memory. Mem Cognit. 2020;48(5):691–703. doi: 10.3758/s13421-020-01024-6.