Summary
3D visualization technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR) have gained popularity in the past decade. Digital extended reality (XR) technologies have been adopted in various domains, ranging from entertainment to education, because of their accessibility and affordability. XR modalities create an immersive experience, enabling 3D visualization of content without the constraints of a conventional 2D display. Here, we provide a perspective on XR in current biomedical applications and demonstrate case studies using cell biology concepts, multiplexed proteomics images, surgical data for heart operations, and cardiac 3D models. Emerging challenges associated with XR technologies in the context of adverse health effects, along with a cost comparison of distinct platforms, are discussed. The presented XR platforms will be useful for biomedical education, medical training, surgical guidance, and molecular data visualization, enhancing trainees' and students' learning, the accuracy of medical operations, and the comprehensibility of complex biological systems.
Graphical abstract
Venkatesan et al. present a comprehensive review of virtual, augmented, and mixed reality developments, focusing on biomedical applications covering data visualization, surgeries, education, training, and healthcare. They also provide case studies for some of the recent XR-based biomedical applications.
Introduction
The development of virtual, augmented, and mixed reality devices erupted after 2010, and their proliferation has continued since then. These technologies offer an immersive and interactive digital scene for visualization in a three-dimensional (3D) environment, resulting in their widespread adoption in various fields that include commercial, educational, and biomedical sectors. Although the concept of virtual reality (VR) has been in existence since the 19th century, VR became popular during the 1990s. Technological advancements in headset and computer hardware, including computer graphics, resulted in many companies, especially in the entertainment sector, investing in this technology. However, despite significant developments, interest in VR was, in general, low during the 2000s because of technical issues such as bulky headsets, slow computers, poor sensory input quality, low-resolution graphics, and side effects such as headaches and motion sickness.
Recent years have seen a second rise in VR technology. Oculus Rift, first created by Palmer Luckey as a simple do-it-yourself kit, has become a sophisticated VR headset. Other companies, such as Google, HTC, Valve, and Samsung, have created VR products with similar features. VR headsets differ in platform, content, depth perception, tracking capabilities, display resolution, and audio technology. These modern devices have significantly enhanced fields of view (FOVs) and real-time frame rates that mitigate cybersickness effects to an extent.
Apart from VR devices, other extended reality (XR) experiences have also been on the rise: augmented reality (AR) and mixed reality (MR) devices. The world's first untethered MR device, the Microsoft HoloLens, was released in 2016.1 The device is a self-contained computer with tracking sensors, 3D mapping capabilities, a camera, speakers, and WiFi connectivity.2 AR became a standard household technology with the release of the game Pokémon GO in 2016.3 In this smartphone game, the user captures virtual versions of the fictional species Pokémon, placed in a scene rendered from the user's actual surroundings. However, unlike VR devices, AR glasses have not yet experienced widespread commercialization because of their high cost.
Despite the popularity of XR devices, a comprehensive analysis of XR's biomedical impact in medicine,4 surgery,5 and medical education6 is needed. In this review, we define VR, AR, and MR concepts and the functionalities of these technologies. First, current biomedical trends in XR, including visualization, clinical care, and research, are summarized. Next, the use of VR and AR in classrooms as interactive educational platforms is demonstrated. Case studies of VR and AR are then illustrated. Finally, the cost, complexity, and challenges of existing XR platforms are discussed. This overview of XR technologies, their implementations, and their applications will benefit biomedical and medical professionals, providing potential routes to the development of XR platforms for interactive, educational, and discovery projects.
Working principles of XR
XR refers to all real and virtual combined environments generated using computers and wearables, encompassing VR, AR, and MR technologies. XR spans the entire reality-virtuality continuum, from complete reality to complete virtuality. The classification of XR technologies can be considered a virtuality continuum in which applications may cross definition boundaries depending on use.7
VR is a process of visualizing a computer-generated environment in an interactive manner using software and hardware.8 The experience involves total immersion in the virtual environment, allowing the user to act in the virtual world as they would in the real world (Figure 1A). VR devices obtain input from the user through a combination of head tracking, controllers, hand tracking, voice, joysticks, on-device trackpads, or buttons. VR headsets use two lenses to create a stereoscopic 3D image by projecting a pair of two-dimensional (2D) images, one to each eye, with a slight difference in perspectives. In addition, VR headsets have a wide FOV of 90°–210° and a frame rate of at least 90 frames per second to increase immersion.
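To make the stereoscopic principle concrete, the minimal sketch below (Python with numpy; the head pose, target point, and interpupillary distance values are illustrative assumptions, not taken from any specific headset) computes the two per-eye view matrices whose renders form the slightly offset image pair described above.

```python
import numpy as np

def look_at(eye, target, up):
    """Build a right-handed view matrix from an eye position and a target point."""
    f = target - eye
    f = f / np.linalg.norm(f)                        # forward axis
    r = np.cross(f, up); r = r / np.linalg.norm(r)   # right axis
    u = np.cross(r, f)                               # true up axis
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = r, u, -f
    m[:3, 3] = -m[:3, :3] @ eye                      # translate world into eye space
    return m

# Head pose and a typical interpupillary distance (IPD) of ~63 mm (assumed values).
head = np.array([0.0, 1.6, 0.0])                     # meters
target = np.array([0.0, 1.6, -2.0])
up = np.array([0.0, 1.0, 0.0])
ipd = 0.063

# Each eye is offset by half the IPD along the head's right axis; rendering the
# scene once per view matrix yields the pair of slightly different perspectives
# that the headset's two lenses present to the two eyes.
right_axis = np.array([1.0, 0.0, 0.0])
view_left = look_at(head - right_axis * ipd / 2, target, up)
view_right = look_at(head + right_axis * ipd / 2, target, up)
```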
Figure 1.
Extended reality (XR) for biomedical applications and mainstream working principles
(A) Virtual reality (VR): visualizing a 3D image of a lung using a head-mounted display (HMD) in VR.
(B) Augmented reality (AR): smartphone-based AR. The smartphone augments a real-world sketch of the brain, captured by the camera, by overlaying the brain's virtual image.
(C) Mixed reality (MR): visualization of a 3D image of the rib cage using MR glasses. The user can interact with real and virtual objects in the user environment seen through MR glasses.
(D) Marker-less tracking in AR. This includes a combination of location data (from Global Positioning System [GPS]), inertial measurement unit (IMU) data (consisting of an accelerometer, gyroscope, and magnetometer), and computer vision (to track image features such as scene depth, the object surface, and object edges).
(E) Marker-based tracking in AR. First the smartphone camera captures an image with the scene’s fiducial marker. Then the smartphone’s computer vision system isolates the marker from the scene and removes the back-end background. Next, a virtual coordinate system is drawn with the marker as the reference, and the virtual object is positioned in the scene with respect to the coordinate system. The augmented image is then displayed to the user on the smartphone.
(F) Degrees of freedom (DoFs) in VR. VR tracking can have 3 DoFs, based on the rotational motion of the user, or 6 DoFs, consisting of both rotational and translational movement of the user.
(G) Tracking VR principles. Two base stations, placed diagonally across the room, obtain positional data from the HMD and the controllers to track the user’s movement.
AR is an experience that involves superimposition of digital elements such as graphics, audio, and other sensory enhancements onto video streams of the real world with real-time interaction between the user and the digital elements (Figure 1B). Although VR replaces the real-world environment with a virtual world, AR supplements a user’s perception of the real world (as seen on a screen) in an immersive manner without obscuring it completely.
MR is a hybrid of the real world and the virtual world (Figure 1C). MR is created when computer processing combines the user's inputs and their environment to create an immersive environment where physical and virtual objects co-exist and interact in real time.9 MR systems have three characteristics: the environment merges objects from the real world with objects from the virtual world, the user can interact with the objects in real time, and there is mapping between the objects from the real world and the virtual world, creating interactions between them.10 An example of this technology is the superposition of information or 3D models onto a head-mounted display (HMD); unlike VR headsets, however, MR HMDs do not occlude the real world.
AR, VR, and MR concepts can be distinguished based on three criteria: immersion, interaction, and information.10,11 Immersion refers to the nature of the user experience brought by the technology. Although VR provides an entirely virtual immersive experience, AR augments the real world’s view with virtual information. MR performs spatial mapping between the real and virtual world in real time. Interaction refers to types of interactions feasible through the use of technology. VR allows interactions with virtual objects, and AR enables interactions with physical objects. MR allows interactions with physical and virtual objects. Information refers to the type of data handled during visualization. In the case of VR, the virtual object displayed is registered in a virtual 3D space. AR provides a virtual annotation in real time within the user’s environment. For MR, the virtual object displayed is registered in 3D space and time with a correlation to the user’s environment in the real world.
Arguably, all highly immersive XR experiences rely on the seamless interplay between the physical and digital worlds. Accordingly, a user's context, including what is present around him or her in the physical world, is critically important. The contextual bases for current AR applications frame this importance well. Two specific examples are location-based and marker-based triggers for AR experiences.12,13
Location-based triggers are often driven by the same sensing systems users have come to rely on for what are now considered everyday mobile phone experiences. For instance, the location of a mobile device can be tracked by identifying the nearest radio tower. Such a mechanism could be used to gain access to services at a particular geographical location. In addition, this mechanism could trigger location-based events in interactive applications, such as discovering new assets in the game of Pokémon GO. Location can also be combined with pose estimation via an inertial measurement unit (IMU; a multi-component sensor that includes an accelerometer, gyroscope, and magnetometer) (Figure 1D).14 This combination enables some of the most demonstrated location-based AR experiences; e.g., seeing the overlaid names of different streets as users look around through a mobile phone screen.
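As a rough illustration of how IMU readings feed such pose estimation, the sketch below implements a minimal complementary filter in Python (the blending constant, time step, and sensor stream are illustrative assumptions, not values from any cited system). It fuses gyroscope rates with the accelerometer's gravity measurement into a pitch/roll estimate.

```python
import numpy as np

def complementary_filter(pitch, roll, gyro, accel, dt, alpha=0.98):
    """Fuse gyroscope rates (rad/s) with accelerometer readings (m/s^2).

    The gyroscope integrates smoothly but drifts over time; the
    accelerometer is noisy but drift-free. Blending the two stabilizes
    the orientation estimate.
    """
    # Integrate angular rates for a short-term prediction.
    pitch_gyro = pitch + gyro[0] * dt
    roll_gyro = roll + gyro[1] * dt
    # Recover absolute tilt from the measured gravity vector.
    ax, ay, az = accel
    pitch_acc = np.arctan2(-ax, np.hypot(ay, az))
    roll_acc = np.arctan2(ay, az)
    # Weighted blend: trust the gyro short term, the accelerometer long term.
    return (alpha * pitch_gyro + (1 - alpha) * pitch_acc,
            alpha * roll_gyro + (1 - alpha) * roll_acc)

pitch = roll = 0.0
# Fake 1 s sensor stream: slow pitch rotation while the device is level.
for gyro, accel in [((0.01, 0.0, 0.0), (0.0, 0.0, 9.81))] * 100:
    pitch, roll = complementary_filter(pitch, roll, gyro, accel, dt=0.01)
```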
Unlike location-based AR triggers, which are more global, marker-based triggers anchor AR experiences within a local frame of reference (Figure 1E).15 Markers or fiducials are typically objects or patterns an AR device can recognize using its camera. Many explicit markers are black and white, like a 2D barcode, but they can certainly include colors as long as the camera can detect sufficient contrast between the marker and the background. Because a marker is usually in relative proximity to the user, it can provide accurate estimates of the user's location in space, track that location over time, and locate other users in the vicinity whose devices recognize the same marker. This feature allows markers to drive single-user experiences, such as superimposing expert rating scores on onscreen wine labels, and multi-user experiences, such as multi-player interactive AR games. Markers must, of course, be tracked to serve their purposes. Tracking refers to detecting and recognizing a fixed marker and estimating its position and orientation in the scene with respect to the camera.16 This helps in determining a reference coordinate system within which virtual objects are located and tracked. Sometimes markers in an AR display are visually replaced with other content, which calls for segmentation.17 The preferred vehicles for this include machine learning-based and classical computer vision approaches. In many applications, markers are not replaced but rather augmented with additional information superimposed on the scene. For example, vocational training for automotive mechanics employs AR to label many components a mechanic may need to recognize in the complex engine assembly beneath a car's hood.
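One widely used open-source route to this detect-then-estimate-pose pipeline is OpenCV's ArUco module. The sketch below is a minimal example assuming the pre-4.7 opencv-contrib-python interface (the ArUco API has changed across OpenCV versions), with placeholder camera intrinsics and a hypothetical image file standing in for a live camera frame.

```python
import cv2
import numpy as np

# Placeholder intrinsics; a real system uses values from camera calibration.
camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist_coeffs = np.zeros(5)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
frame = cv2.imread("scene.jpg")                     # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect fiducials, then estimate each marker's pose relative to the camera.
corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
if ids is not None:
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, 0.05, camera_matrix, dist_coeffs)  # 5 cm physical marker size
    # Each pose defines a reference coordinate system in which virtual
    # objects can be anchored; draw the axes to verify the registration.
    for rvec, tvec in zip(rvecs, tvecs):
        cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs, rvec, tvec, 0.03)
```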
Markerless AR is another distinct category of reality augmentation.18 In markerless AR applications, the user can prescribe where an AR asset is placed within a scene rather than relying on a specific marker. One example is virtual furniture placement experimentation in a scene using markerless AR; users can place a virtual piece of furniture in their environment to judge fashion fit and scale.19 They can also navigate around the virtual object to consider different vantage points. This method requires a user’s dynamic position to be understood over time, which is accomplished by tracking collections of natural markers (e.g., image feature descriptors and depth) intrinsic to a scene.20,21 The process of monitoring dynamic user position over time based on camera images is also known as visual odometry.22 Some AR systems use hybrid tracking methods, combining fiducial markers with markerless tracking systems like Global Positioning System (GPS) and inertial tracking.
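To sketch what visual odometry involves, the snippet below shows a simplified monocular pipeline in Python/OpenCV (illustrative of the general technique, not of any specific AR product; the camera intrinsics are placeholders). It tracks ORB features between consecutive frames and recovers the relative camera motion.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # placeholder intrinsics

def relative_motion(img_prev, img_curr):
    """Estimate rotation and (unit-scale) translation between two frames."""
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # The essential matrix encodes the epipolar geometry of the two views.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # monocular odometry recovers translation only up to scale
```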
Conventional tracking systems in the context of XR experiences consist of a signal source, a signal detector, and a central processing unit (CPU) that processes the signal. Rotational tracking gives 3 degrees of freedom (DoF) corresponding to the x, y, and z axes, termed pitch, yaw, and roll, respectively. Mobile phone-based VR usually has rotational tracking but lacks position tracking. Motion tracking devices such as VR headsets usually have 6 DoFs, adding translation along the x, y, and z axes, termed surge, sway, and heave, respectively, to 3 DoF rotation (Figure 1F).23
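A 6-DoF pose is commonly represented as a rotation plus a translation. The hedged sketch below composes the two with scipy, following the pitch/yaw/roll and surge/sway/heave axis naming above (the angle and distance values are arbitrary examples).

```python
import numpy as np
from scipy.spatial.transform import Rotation

# 3 rotational DoFs: pitch (x), yaw (y), roll (z), in degrees.
rotation = Rotation.from_euler("xyz", [10.0, 45.0, 0.0], degrees=True)
# 3 translational DoFs: surge (x), sway (y), heave (z), in meters.
translation = np.array([0.2, 0.0, -0.5])

# Apply the full 6-DoF pose to a point in the headset's local frame.
point_local = np.array([0.0, 0.0, -1.0])            # 1 m in front of the user
point_world = rotation.apply(point_local) + translation

# A 3-DoF (rotation-only) tracker, as in phone-based VR, omits the translation.
point_world_3dof = rotation.apply(point_local)
```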
There are two methods of motion tracking in VR: optical and non-optical. Optical tracking uses an imaging sensor and signals such as an infrared (IR) laser to track the user’s body motion. It is done using controllers, HMDs, or other optical markers placed on certain body parts for tracking (Figure 1G). For example, VR CAVE (cave automatic virtual environment) uses a set of tracking cameras to monitor the user’s location and adjust the user’s view based on his or her movement.24 Non-optical tracking includes electromechanical sensors (such as gyroscopes, accelerometers, and magnetometers) installed in hardware (mobile phones or HMDs) or attached to the user’s body.
XR platforms improve learning and emotional experience
XR technologies affect the user's emotions by creating immersive experiences. They facilitate a level of engagement that aids the understanding of new concepts. Krokos et al.25 showed that virtual memory palaces enhance the effectiveness of memorizing information. Here, VR resulted in better memory recall than a standard desktop display. Among 40 participants, overall average recall performance was 8.8% higher in the VR environment than with the desktop display. Immersion in a VR environment provides users with an increased sense of spatial awareness, improving spatial organization and memory.
Neuroscience-informed research has shown that content delivered using a VR platform leads to greater emotional engagement than flat 2D and 360° experiences (27% and 17% higher, respectively, in a study of 150 participants).26 The emotional responses were measured by identifying the participants' gaze and eye movements with eye tracking, along with a biometric monitoring device to measure the electrodermal response and heart rate changes, complemented by behavior-coding methods.
Another study explored the effects of perception (visual and acoustic cues) versus conceptual information (information related to fear) on fear and anxiety in a VR environment. The researchers found that people with phobias were more sensitive to visual cues.27 This demonstration can be useful for studying cue-based phobias and anxiety. An association analysis between presence (i.e., the extent to which users feel involved in VR) and emotional experience showed that the two are mutually dependent, with the correlation growing stronger for more intense emotions. Certain arousing emotions, including fear and anxiety, are stronger in VR than non-arousing emotions such as relaxation and happiness. The interoceptive attribution model of presence demonstrated that users judged the degree of presence based on the degree of excitement they felt and the immersion provided by the VR.
Makransky and Lilleholt28 examined the effect of immersive VR on users' emotional processes when using VR learning tools compared with desktop versions. A survey assessed the effect of VR on non-cognitive factors, such as presence, motivation, enjoyment, learning, and perceived usefulness, along with cognitive factors, such as mental benefits and reflective thinking. The VR simulations included different methods of interactivity, such as dialoguing (with a laboratory assistant), controlling (letting the user make choices), and manipulating (letting the user move objects around the screen). Paired-sample t tests of the users' feedback on different parameters showed that presence, motivation, ease of control, and enjoyment were enhanced in VR. Structural equation modeling described the relationship between immersion in VR and perceived learning outcomes. Factors such as presence, motivation, control, and cognitive benefits improved the perceived learning outcomes, whereas reflective thinking remained unaffected between the VR and desktop platforms. Thus, VR demonstrated potential in e-learning because it engages, motivates, and arouses students better than a desktop environment.
Current biomedical trends in XR
XR enables data visualization and interactive 3D analysis
XR technologies provide a tool to look at 3D models as they are; i.e., 3D objects rather than 2D representations. Thus, volumetric data benefit greatly from XR technologies. For example, in biomedical engineering, VR and AR have dramatically enhanced visualization capabilities and interaction with microscopic images, molecular data, and anatomical datasets.
Google's AR Microscope (ARM) used machine learning to diagnose cancers in real time from whole-slide microscopic images.29 The ARM system contained an augmented bright-field microscope, a computer, and a set of trained deep learning algorithms. In ARM, the deep learning (DL) model was trained to detect prostate cancer and lymph node metastasis in breast cancer. The resultant DL predictions were projected onto the microscopic sample as contours, heatmaps, or textural information using AR. This system has helped pathologists save time when scanning large whole-slide images for the presence of cancer.
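A typical way to project model predictions onto a field of view, in the spirit of ARM's heatmap and contour overlays, is alpha blending. The sketch below (Python/OpenCV) is illustrative only: the image file is hypothetical, and a random probability map stands in for a real DL model's output.

```python
import cv2
import numpy as np

field = cv2.imread("field_of_view.png")             # hypothetical microscope frame
h, w = field.shape[:2]

# Stand-in for a deep learning model's per-pixel tumor probability map.
prob = np.random.rand(h, w).astype(np.float32)

# Render the probabilities as a heatmap and blend it over the field of view.
heat = cv2.applyColorMap((prob * 255).astype(np.uint8), cv2.COLORMAP_JET)
overlay = cv2.addWeighted(field, 0.7, heat, 0.3, 0)

# Alternatively, outline high-probability regions as contours.
mask = (prob > 0.5).astype(np.uint8)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(overlay, contours, -1, (0, 255, 0), 2)
```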
A nanoscale imaging technique, expansion microscopy, was combined with VR to enlarge and analyze cell structures that would be too small to visualize using normal light microscopy.30,31 The developers proposed a tool, ExMicroVR, that could be used by up to six people simultaneously, enabling remote collaboration between scientists. Expansion microscopy increases the tissue sample volume by 100 times, making it easy to visualize tissues, molecules, and interactions between cells. The 2D expansion microscopy images were combined with VR in 3D with a 360° view using an interactive interface.
Another VR application, ConfocalVR, was developed to study the complexity of cell structure and arrangement and the distribution of proteins and molecules.32 The application visualized 3D cellular images, such as confocal microscopy z-stack images, as red-green-blue (RGB) volumes. In the ConfocalVR interface, users could grab and drag the displayed image using controllers to rotate and scale it and focus on a specific region of interest. Display parameters such as color, lighting, and opacity were adjustable. As in many VR applications discussed here, this application provided an option for multiple users to work together simultaneously. However, although ConfocalVR allowed visualization of high-resolution images, it lacked image-based metrology, including quantification of the displayed data in terms of distance, intensity, or pixel location.
VR technology has also assisted pathologists in examining whole-slide images (WSIs), improving ease of navigation, diagnostic confidence, and image quality (Figure 2A). WSI platforms digitize glass slides using hardware such as whole-slide scanners. Pathologists currently experience challenges in viewing and navigating digital slides in 2D on a computer monitor using a mouse. A feasibility study found that VR yielded performance similar to a high-quality display.33 Here, VR improved navigation capability through active controllers. However, image quality and diagnostic reliability were suboptimal compared with conventional displays because of the VR headsets' low resolution.
Figure 2.
VR- and AR-based visualization of scientific experimental imaging data, tools for surgery and anatomy, and collaborative interfaces for education and telehealth
(A) Digital whole-slide visualization and navigation using an HMD in VR and a web-based browser for whole-slide imaging on a desktop.33
(B) Visualization of a user demonstrating a neuron tracing tool. For example, TeraVR can visualize whole-brain imaging data in VR and reconstruct neuron morphology at different regions of interest (ROIs).34
(C) Visualization and navigation of a 3D scanning electron microscope (SEM) image using VisionVR software by arivis.35
(D and E) Physicians can use AR to rotate certain anatomy during brain surgery and cardiac surgery, gaining full visualization to better perform, plan, and explain their surgeries.
(F) Studying anatomy using VR can help physicians visualize and explain medical processes to other health professionals. A medical student visualizes multiple organs and organ systems in VR.
(G) AR pens can be used to get a 3D image to help students better visualize and study concepts.
(H) VR can be used for clinical assessments where the doctor and affected individual can enter a virtual world to receive a checkup.
Microsoft HoloLens, a holographic MR HMD, is used by engineers and doctors to look at 3D images, such as anatomical structures, interactively with higher clarity. HoloLens-based MR interfaces helped surgeons and medical staff virtually visualize complex organs during surgeries.36 HoloLens was used to annotate specimens during an autopsy, to visualize and navigate WSIs, and for telepathology and real-time pathology-radiology correlation (Figure S1A).37 The device was suitable for digital pathology because it was easy to use and supported high-resolution imaging.
VR tools have also been developed to help neuroscientists trace neurons in brain images (Figure 2B). TeraVR was created to visualize and annotate neurons in teravoxel-scale brain images.34 TeraVR-based neuronal reconstructions in whole mouse brains improved annotations compared with non-VR visualizations (e.g., 2D or 360° views). To further enhance efficiency, a U-Net-based DL model was trained on the reconstructions and refined its output based on the user's feedback. In addition, the TeraVR system enabled researchers to collaborate, annotating the same model simultaneously from different geographical locations. Another VR neuron tracing tool explored and resolved the spatial relations of neurons in brain data.38 Here, users traced neurons more rapidly in VR than with state-of-the-art desktop software while maintaining accuracy.
Theart et al.39 developed a graphical user interface (GUI) for visualization and colocalization analysis of 3D microscopy data in VR (Figure S1B). Apart from head tracking using the headset, other input interfaces, such as hand tracking using Leap Motion (a stereoscopic IR camera) or a traditional gamepad, were used. Users selected two channels in a pre-selected region of interest (ROI) to visualize colocalized data. VisionVR by arivis is a VR application for visualizing 3D microscopic images, providing a collection of tools for visualization, analysis, and tracking (Figure 2C).35,40 Although originally developed for microscopy, VisionVR can be used for volume visualization of other 3D or four-dimensional (4D) images, such as computed tomography (CT) or magnetic resonance imaging (MRI) images. The CAVE provided a different method of visualizing cellular components in VR without HMDs (Figure S1C).41 3D cellular reconstructions from electron microscopy (EM) preparations of neural tissue were projected onto a room's walls using the CAVE, allowing users to step inside and navigate within the model.
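For context on what such colocalization analysis computes, the sketch below evaluates Pearson's correlation coefficient between two channels inside an ROI mask (Python/numpy; an illustrative formulation under simulated data, not the cited GUI's implementation).

```python
import numpy as np

def pearson_colocalization(ch1, ch2, roi_mask):
    """Pearson's correlation between two channel intensities within an ROI.

    Values near 1 indicate strongly colocalized signal; values near 0
    indicate no linear relationship between the channels.
    """
    a = ch1[roi_mask].astype(np.float64)
    b = ch2[roi_mask].astype(np.float64)
    a -= a.mean()
    b -= b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

# Hypothetical 3D (z, y, x) two-channel stack with a cuboid ROI.
ch1 = np.random.poisson(5, (30, 256, 256))
ch2 = np.random.poisson(5, (30, 256, 256))
roi = np.zeros_like(ch1, dtype=bool)
roi[10:20, 64:192, 64:192] = True
print(f"Pearson coefficient in ROI: {pearson_colocalization(ch1, ch2, roi):.3f}")
```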
Another VR visualization software package, called vLUME (Visualization of the Universe in a Micro Environment), rendered 3D single-molecule localization microscopy (SMLM) datasets.42 vLUME built a complete VR environment for data visualization, segmentation, and quantification of complex 3D point-cloud data and identifying defects. In addition, vLUME software provided detailed image analytics features such as data exploration, comparison between datasets, ROI extraction, analysis of custom sub-regions, and exporting movies for presentations.
Virtual training for surgeries and biomedical devices
Surgery requires specific skills developed through extensive practice and dedicated training, a process that can be difficult for medical trainees. To address this issue, VR offers medical students the opportunity to practice procedures at low risk before performing surgery on an individual. VR-based simulators such as RASimAs, AnatomyX, and SimSurgery have prepared students for unexpected scenarios during medical procedures. As these simulators become more popular in medical schools, students develop skills such as thinking on the spot, problem-solving in a hands-on environment, and performing tasks under stress. Although VR programs can limit collaboration because headsets isolate the user, some of the simulators mentioned above encourage cooperation through shared pointers: instead of explicitly naming the object or body part they are referring to in the virtual world, students and educators can point to it, and their peers see the pointer.
Although VR may not deliver the same learning experience as physical training on a cadaver or a real human body, VR-based training has unique advantages. VR facilitates working with deep organs that may be physically obstructed by others and, thus, are hard to observe during conventional surgical training. For instance, teaching pancreatic procedures using cadavers requires removal of the liver to visualize the pancreas.43 On the other hand, VR platforms can digitally dissect the human body, letting students synthetically extract the pancreas through finger-controlled VR interactions and rapidly access it in the virtual scene. Coordinated sound is another important aspect of what XR offers in the context of medical education. For example, students can hear the heart beating when learning about the organ, allowing them to better understand how valves and chambers operate in a human body.
Advanced simulators, such as RASimAs, developed by the University of Aachen, utilized scans from real affected individuals to simulate tissue reactions, allowing surgeons to experience highly realistic surgical scenarios and learn to plan accordingly (Figures 2D and 2E). For example, tissue reactions from the RASimAs simulator enabled students to develop proper skills for injection accuracy.44 The simulated reactions mimicked what could happen when a syringe needle was turned, training the physician to verify that the right nerve had been reached. Understanding how certain mistakes occur is extremely valuable to medical students, especially when the immediate results of their actions are brought to light.
AnatomyX enables an AR learning experience for students that fosters collaboration in a hands-on environment. Multiple students use a shared model in real time, working together toward surgical solutions (Figures 2F, S2A, and S2B).45 Another valuable feature of this software is access to the latest medical information, which is continuously updated with facts and figures during learning. AnatomyX also provides vehicles for educators to administer quizzes and tests using the platform's AR models. SimSurgery is another training software package that allows medical professionals to adjust an exercise's difficulty to suit the trainee's level.46 These platforms also help educators understand students' experiences better because the experiences can be viewed in real time from a first-person perspective. By offering valuable hands-on experiences, VR allows medical students to practice procedures with minimal risk. Schools can be expected to embrace VR training widely in their programs, with a sharp rise in VR and MR technologies in mainstream anatomy education.47
VR can be used as a training approach to transfer knowledge of procedures in the biopharma industry, replacing the traditional reading of lengthy manuals.48 This study evaluated the practical skills and theoretical understanding of 69 participants after traditional practice or VR simulation training. The conventional training groups read the standard operating procedures (SOPs) or received real-life training in a laboratory. The researchers found that participants trained using VR simulation performed better than those who read SOPs on the standardized compliance test (39% better) and the practical skill evaluation (41% better). However, compared with those who received real-life training, they scored equally on the standardized compliance test and performed 21% worse on the practical skills test. The researchers concluded that VR simulation training is a cost-effective and standardized alternative to real-life training and suggested that it could replace SOP reading and supplement real-life training on equipment use and procedures.
Experiential biomedical education tools for teaching
AR and VR enhance students' learning experience by teaching biology, history, and geography concepts interactively and engagingly.49 For a generation living a digital lifestyle, media technologies have significantly decreased attention spans.50 VR/AR as an educational tool offers a feasible digital solution to this problem because students are focused on a virtual space where distractions are reduced considerably. One approach to utilizing VR in classrooms is providing students with headsets synced to a central device so that all experience the same content. The approach can also be decentralized, with lectures held in a virtual classroom and students wearing VR headsets and connecting from different locations.
Apart from universities and medical schools, several K–12 classrooms have already introduced learning using XR technology. For example, in biology classes, students can learn about the human body’s anatomy and other organisms using 3D models in VR (Figure S2C). At Agawam Public Schools in Massachusetts, teachers integrated Google’s VR software, Google Expeditions, into their courses to explore the inside of atoms and the human body.51 The software required only a compatible smartphone and the Google cardboard apparatus. Using Expeditions, students also learned about history by experiencing the past and visiting ancient monuments, all while seated at their desks.52
VR was used to teach cell biology concepts that affected students’ engagement and understanding of concepts.53 Students participated in a VR experience, Journey Inside a Cell, by The Body VR using HMDs and were tested on the material with a timed challenge to match cell parts with their correct labels. The participants were also asked to complete a survey to describe their VR experience and whether it affected their learning. Of 62 students who took part in the study, 58 students (93.55%) reported that VR enhanced the cell biology concept learning experience.
zSpace was developed as a virtual anatomical laboratory where students could learn in an immersive environment using holographic images (Figure 2G).54 zSpace employed a unique approach, combining AR/VR immersion using polarized glasses with IR markers instead of headsets, which can create feelings of isolation that inhibit collaboration. With zSpace, students used a stylus to divide an anatomical image into several regions and analyze different body parts to learn more interactively. The AR experience could also be recorded or captured as videos and photos to share with others.
Telemedicine and telehealth screening
Although distinct from XR, two other rapidly growing digital medicine areas are telemedicine and telehealth (Figures 2H and S2D–S2F). XR-based telehealth presents individuals with an opportunity to experience remote consultations with their doctors in an immersive, interactive environment (Figure 2H). The emerging XRHealth platform uses AR and VR to provide physical therapy, stress management, pain management, respiratory recovery, and support groups for different physical, emotional, and neurocognitive symptoms.55 The VR platform utilizes virtual environments, games, and exercises with movement tracking to help clinicians provide feedback. XRHealth launched its first telehealth clinic to focus on rehabilitation. One important and unique aspect of this form of therapy is that people can exercise in the comfort of their own homes, which can be especially valuable for paralyzed individuals affected by transportation challenges. The VR platform allows people to continue a smooth recovery as they engage in an immersive environment resembling the centers they know, increasing participation. Physicians can even view what individuals see during VR therapies and adjust their experiences accordingly. After training sessions, participants get feedback on their status, and reports are generated to track their rehabilitation trajectory.55 This immersive XRHealth platform aims to supplement traditional therapeutic methods, such as prescription drugs, tailoring therapy to the individual's specific needs using artificial intelligence (AI).
Biomedical VR-AR case studies
Case study 1: Visualization of 3D and highly multiplex protein images in single cells
Biomedical applications use HMDs and VR applications for 3D visualization. Software packages for visualizing a 3D dataset in VR can be developed using computation platforms such as Unity. As mentioned previously, ConfocalVR is one such software developed by Immersive Science to visualize confocal microscopy images.32 This software can help users understand cellular architecture and the distribution of proteins and molecules through immersive 3D visualization.
The dataset used for visualization contains 3D subcellular co-detection by indexing (CODEX) images acquired through multiplexed imaging of DNA-barcoded antibodies targeting 20 cellular markers.56 CODEX data were acquired with a spinning-disk confocal microscope with a 60× objective lens, providing diffraction-limited optical images across 25–30 depth slices of a 5-μm cancer tissue sample. The resultant high-resolution CODEX datasets were visualized for three different regions on a microarray sample. The 3D scene contained single-cell distributions from individuals with chronic lymphocytic leukemia (CLL), Hodgkin's lymphoma (HL), and natural killer (NK)/T cell lymphoma. Image processing algorithms were used for background subtraction and registration, and the resulting images were stored in the Tag Image File Format (TIF or .tif). The TIF files were converted into the Neuroimaging Informatics Technology Initiative file format (NIfTI or .nii) using ImageJ to visualize these 3D data in the ConfocalVR software.
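The TIF-to-NIfTI conversion described above can also be scripted outside ImageJ. The sketch below uses the tifffile and nibabel Python packages; the file names, the axis ordering, and the identity affine are placeholders (a real pipeline would encode the microscope's voxel dimensions in the affine).

```python
import numpy as np
import tifffile
import nibabel as nib

# Load a multi-slice TIF stack (e.g., 25-30 z slices of one CODEX marker).
stack = tifffile.imread("codex_marker.tif")         # assumed shape: (z, y, x)

# NIfTI conventionally orders spatial axes (x, y, z); transpose accordingly.
volume = np.transpose(stack, (2, 1, 0)).astype(np.float32)

# The affine maps voxel indices to physical space; identity is a placeholder
# standing in for the true voxel spacing of the confocal acquisition.
nii = nib.Nifti1Image(volume, affine=np.eye(4))
nib.save(nii, "codex_marker.nii")
```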
ConfocalVR software allows users to control image viewing parameters such as lighting, image depth, opacity, image quality, colors (RGB), intensity range, etc. (Figures 3A and S3–S5). Users can also grab the 3D volume using the controllers to scale and rotate the image. In addition, the parameters used during visualization can be saved in a text file. This approach helps to reproduce the visualization results in the future.
Figure 3.
Case studies using VR and AR
(A) Case 1: VR-based visualization of multiplexed protein imaging data. Shown is visualization of highly multiplexed CODEX imaging data (18 markers) obtained from individuals with chronic lymphocytic leukemia (CLL) in ConfocalVR.32 Slices from each set of markers were converted into RGB stacks of NIfTI (.nii) format. The first image for each condition shows the display control panel.
(B) Case 2: AR-based visualization of a cerebral aneurysm for surgical planning. The operator uses the pen as a fiducial to drag the vascular model out of the screen and into AR space. Normal cerebral vessels are gray, the aneurysm is red, and the aneurysm’s neck plane is black. Two snapshots from the real-time operation of this AR tool are presented.
(C) Case 4: VR-based Google Cardboard platform for cell biology education. The left side shows the phone screen view, which is a split screen, and the right side shows what can be seen through Google Cardboard, which is a more 3D version of the left side. The images are of a nucleus in a plant cell.57
Case study 2: AR for neurosurgical planning and execution
Several recent studies have employed AR for surgical planning and execution in interventions involving the head, neck, and spine.58 AR was used for visualizing presurgical neurovascular anatomy before endovascular intervention (Figures 3B and S6A).59 However, many of these studies have been demonstrative and exploratory; far fewer have quantified the effect of AR-based approaches on outcomes. A notable exception focused on AR navigation for spine fixation.60 In that study, 20 individuals receiving screw placement surgery were treated using an AR surgical navigation (ARSN) technique. Specifically, bone entry points, "bull's eye" views along the screw axis, and instrument navigation cues were displayed via AR during surgery. Twenty other individuals received screw placements via a more conventional free-hand (FH) technique. A total of 262 screws were placed in ARSN-based interventions, and 288 screws were placed in FH-based interventions. Both groups comprised similar numbers of screw placements in the thoracic and lumbosacral vertebrae, and the same surgeon performed all operations. The Gertzbein scale was used to assess screw placement accuracy postoperatively via imaging, with grades 0 and 1 categorized as accurate. In addition to accuracy, procedure time, blood loss, and length of hospital stay were also quantified. The results demonstrated that the share of clinically accurate screw placements was higher for the ARSN cohort than the FH cohort; this was statistically significant (93.9% versus 89.6%, p < 0.05). The proportion of screws placed without a cortical breach was also higher for the ARSN cohort (more than twice as high, in fact) compared with the FH cohort (63.4% versus 30.6%, p < 0.0001). Statistically significant differences were not observed for the procedural outcome parameters that were quantified. Nevertheless, this study demonstrated that AR-based surgical navigation for spine fixation holds great promise for enhancing screw placement accuracy.
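To illustrate how such a comparison of proportions can be evaluated, the sketch below runs a chi-square test on the cortical-breach outcome (Python/scipy). The counts are back-calculated from the reported percentages and screw totals, not taken from the study's raw data, and the study's own statistical method may have differed.

```python
from scipy.stats import chi2_contingency

# Counts back-calculated from the reported rates: 63.4% of 262 ARSN screws
# and 30.6% of 288 FH screws were placed without a cortical breach.
arsn_no_breach, arsn_total = 166, 262
fh_no_breach, fh_total = 88, 288

table = [[arsn_no_breach, arsn_total - arsn_no_breach],
         [fh_no_breach, fh_total - fh_no_breach]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"ARSN {arsn_no_breach / arsn_total:.1%} vs. FH {fh_no_breach / fh_total:.1%}; "
      f"chi2 = {chi2:.1f}, p = {p:.2e}")  # consistent with the reported p < 0.0001
```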
Case study 3: VR-based surgical techniques for complex surgical repair
An 8-year-old male with complex cardiac heterotaxy and advanced heart failure presented to a cardiothoracic surgeon (J. Ryan, 2017, AHA Scientific Sessions, conference). The individual's cardiology team determined that the ideal pathway was a heart transplant. The already complex surgical option was further complicated by the heterotaxy pathology, in which there were aberrant arterial and venous connections of the native anatomy. The surgeon requested advanced imaging; a cardiac MRI was acquired. A multidisciplinary team performed a 3D reconstruction with the initial intent of 3D printing, a surgical planning tool often utilized by the surgeon and surgical team. The surgeon intended to cut and excise parts of the 3D-printed model, representative of native heart anatomy, as he would in the actual surgery. This inherently destructive process renders a 3D print useful for only a single mock operation, not allowing for multiple investigational approaches. Thus, the surgeon opted to utilize computational modeling (an in-house solution built on the Unity software engine) augmented with immersive VR HMDs (Oculus).
The interdisciplinary team created a realistic virtual environment where the surgeon could virtually remove the cardiac anatomy from the surrounding anatomy. This process leaves contact points (such as the pulmonary vessels of the hilum and the vena cava) as virtual landmarks. Separation of the cardiac anatomy from the vessels occurred beforehand and not through real-time computational mesh deformation. 3D reconstructions of developmentally typical anatomy were imported into the virtual environment; these datasets were analogous to a donor's heart. Next, the surgeon manipulated the "donor" anatomy. The surgeon anecdotally reported that the experience allowed him to visualize how a developmentally typical heart could fit into the recipient's heterotaxy-altered native anatomy. The surgeon also imported a computational model of the Total Artificial Heart (SynCardia Systems), a mechanical circulatory support device, into the virtual environment. The surgeon viewed the experience as a valuable planning tool for a backup solution if a donor heart did not become available or the recipient rejected the donor organ. The virtual experience was important in this situation because the device is typically used in individuals of greater age and thoracic dimensions. Overall, the virtual experience enabled the surgeon to understand the spatial complexity of the individual's anatomy and appreciate the gestalt of the anatomical size and orientation.
Case study 4: Cell biology with Google Cardboard
Google Cardboard offers a cost-effective solution to experience VR. It is a simple VR viewer designed for smartphone users. Such a system is beneficial in a classroom setting to create an immersive environment (Figures 3C and S6B). Google Cardboard is composed of a pair of biconvex lenses attached to a cardboard box containing a cavity to insert a mobile device in front of the lenses. This approach creates an HMD where a user perceives the mobile screen through the lenses, enabling stereoscopic effects. Google Cardboard also comes with an auxiliary button to provide “touch” commands because the touch screen is inaccessible. Thus, a user can focus on a particular region on the screen through gaze and provide input by pressing the auxiliary button.
Additionally, this device comes with a strap to block out external visuals that may prevent a fully immersive experience. Although Google Cardboard is easy to use and cost effective, it has limited functionality. For instance, it serves only as a basic VR platform, whereas products such as Microsoft HoloLens support virtual teleportation (holoportation),61 and platforms such as Oculus VR provide support for binaural audio.62
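To make the split-screen principle concrete, the sketch below composes the kind of side-by-side stereo frame a Cardboard app displays (Python with Pillow; the two solid-color eye images are stand-ins for renders from slightly offset virtual cameras, and the lens distortion correction a real viewer applies is omitted).

```python
from PIL import Image

def side_by_side(left: Image.Image, right: Image.Image) -> Image.Image:
    """Place the left- and right-eye renders on one phone-screen frame.

    Viewed through Cardboard's biconvex lenses, each eye sees only its
    half of the screen, producing the stereoscopic depth effect.
    """
    w, h = left.size
    frame = Image.new("RGB", (2 * w, h))
    frame.paste(left, (0, 0))
    frame.paste(right, (w, 0))
    return frame

# Stand-ins for per-eye renders from two slightly offset virtual cameras.
left_eye = Image.new("RGB", (640, 720), (30, 60, 90))
right_eye = Image.new("RGB", (640, 720), (30, 60, 90))
side_by_side(left_eye, right_eye).save("cardboard_frame.png")
```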
As a demonstration of Google Cardboard in biomedical applications, we performed the following experiment. A 360° video from the internet visualizing the organelles within a plant cell was used to test Cardboard. Specifically, the video examined the nucleus in depth, presenting written facts next to certain organelles as the audio narration played. In this way, a cell tour was demonstrated, and three distinct viewing angles were captured.
Cost and complexity of XR platforms
Implementation of XR technologies spans a gamut of complexities and costs. The use of Google Cardboard hardware with Blender (Blender Foundation) open-source software can allow a user with minor programming/scripting expertise to create a low-cost VR experience. On the other end of the spectrum, XR platforms consisting of hardware and software for specific medical applications cost tens of thousands of dollars. Although not attempting to be exhaustive, Table 1 presents a list of commercially available solutions for XR technologies used in biomedical applications. The table is initially stratified by cost: no/low cost, prosumer, and professional/commercial. Next, each cost stratum is subdivided into three categories based on implementation: software only, hardware only, and platform (i.e., a combination of hardware and software). The table can be a starting place for discovering XR technologies based on the need and resources available. Some forms of implementation may depend on specific skill sets, such as programming and 3D modeling.
Table 1.
Cost analysis and overview of XR software, hardware, and platforms
Type | XR type | Product name | Company | Information |
---|---|---|---|---|
No-cost to low-cost solutions | ||||
Software | AR/VR/MR | Unitya,b | Unity Technologies | requires programming knowledge or additional software packages to develop/implement biomedical solutions |
AR/VR/MR | Unreala,b | EPIC Games | requires programming knowledge or additional software packages to develop/implement biomedical solutions | |
AR/VR/MR | WebXR Device APIb | Immersive Web Working Group at the W3C | requires programming knowledge or additional software packages to develop/implement biomedical solutions | |
AR/VR | Sketchfaba,b | Sketchfab | no programming experience needed | |
AR/VR/MR | Blenderb | Blender Foundation | benefits from programming and 3D modeling experience | |
MR | HoloAnatomy | Case Western Reserve University | no cost for software but requires Microsoft Hololens | |
Hardware | VR | Google Cardboard (or do it yourself [DIY] Viewer)b | Google/DIY | requires programming knowledge or additional software packages to develop/implement biomedical solutions |
VR | Gear VRb | Samsung | requires programming knowledge or additional software packages to develop/implement biomedical solutions | |
Platforms | N/A | None readily identifiable | N/A | N/A |
Prosumer solutions | ||||
Software | VR | DICOM2Print | 3DSystems | utilizes a VR or MR HMD |
Hardware | VR | Viveb | HTC | requires programming knowledge or additional software packages to develop/implement biomedical solutions |
VR | Oculus hardwareb | Facebook | hardware at the time of publication includes Oculus Quest, Quest 2, and Rift | |
VR | Reverbb | HP | requires programming knowledge or additional software packages to develop/implement biomedical solutions | |
VR | Indexb | Valve | requires programming knowledge or additional software packages to develop/implement biomedical solutions | |
AR | Google Glassb | Google | requires programming knowledge to develop/implement biomedical solutions | |
AR | Apple Glassesb | Apple | requires programming knowledge to develop/implement biomedical solutions | |
MR | zSpaceb (monitor) | zSpace | requires programming knowledge or additional software packages to develop/implement biomedical solutions; NB: the zSpace product does include software at acquisition but cannot readily be used for biomedical solutions | |
Platforms | VR | Enduvo | Enduvo | platform is compatible with SteamVR tracking-based VR systems (e.g., Vive, Vive Pro, and Valve Index), Windows MR VR systems, and Oculus VR systems |
VR | Elucis | Realize Medical | platform is compatible with SteamVR tracking-based VR systems (e.g., Vive, Vive Pro, and Valve Index), Windows MR VR systems, and PC-based Oculus VR systems | |
Production/commercial solutions | ||||
Software | MR | SyngoVia | Siemens | utilizes Microsoft Hololens |
Hardware | N/A | none readily identifiable | N/A | N/A |
Platforms | MR | True 3D | EchoPixel | utilizes zSpace monitor |
VR | PrecisionVR | Surgical Theater | platform service includes on-site VR specialist | |
VR | SimX | SimX | platform is intended for simulation/training. |
The table illustrates commercially available solutions for XR technologies. The table is stratified initially by start-up cost and secondarily by whether the product is software, hardware, or a platform (a combination of hardware and software). The list is not intended to be exhaustive. Currently, a no-cost to low-cost solution is below $200 for an initial investment. Production/commercial solutions exceed $5,000 (often the limit for organizations’ capital investment cost).
a Cost is affected by the type of license (i.e., personal versus commercial use).
b Solutions are not designed or marketed solely for biomedical use. Implementations can be created that target biomedical and medical applications.
Current challenges of XR platforms
Despite the widespread adoption of XR for biomedical applications, XR methods face several technical issues, including computational limitations, tracking errors, limited user interaction, battery consumption, and overheating (in the case of head-mounted devices). For example, location-based AR systems may be prone to localization errors: applications relying on GPS may provide inaccurate location predictions.63 Moreover, the latency caused by lower computational bandwidth could result in injuries from bumping into walls and other objects, especially when wearing an immersive HMD.11 It is expected that AR systems will become more reliable and integrated when it comes to visualizing simulations.
Apart from these challenges, privacy, security, and ethical concerns also need to be addressed as these technologies gain popularity.64 For example, AR browsers installed on mobile devices can be hijacked, allowing an attacker to manipulate the displayed AR content by gaining unauthorized access to the device's camera and GPS.65 Security and privacy challenges caused by a complex set of always-on input sensors, including cameras, microphones, and GPS, need to be considered. In collaborative XR systems, attackers can copy virtual content, posing risks regarding intellectual property and user rights. Attackers can also pose as legitimate users (identity theft) and manipulate other users into revealing sensitive information or behaving in a certain way.
MR systems require several different components, including image capture, computer vision, tracking, image fusion, and display, to work together.66 Computational complexity and latency in each of these components are additional challenges that have to be addressed. High precision and real-time performance are required for most areas using MR, such as medical applications. The FOV of MR glasses is another issue to consider; a narrow FOV is unsuitable for many applications. In mobile MR devices, memory storage and energy consumption are also bottlenecks for real-time 3D graphics rendering.
Numerous VR applications have failed because of hardware modifications and updates, making it challenging for VR content to keep up in quality and novelty. Dominant platform dependencies remain, and there has been no ecosystem explosion like there has been with AR applications. Developing high-quality content for VR devices is expensive and time consuming.52 The user friendliness of VR devices also needs to be considered when developing applications. Without well-designed hardware, users can experience a lack of balance or inertia, which decreases the immersive effect. Side effects, such as motion sickness, eye fatigue, headache, blurred vision, and nausea, are also observed with prolonged use of VR devices.67
VR is prone to addiction among teens, especially when used for entertainment purposes such as video games. In particular, VR promotes escapism where one is completely isolated from the real world, which increases addiction and causes detrimental effects to one’s physical and psychological health.68 Furthermore, VR creates an isolated environment, which many simulators are now trying to avoid by implementing collaborative features. During surgery, when a physician uses a VR headset, challenges may arise because the device could get in the way of effective and urgent communication with staff.
In the classroom, utilization of VR and AR is hindered by lack of focused attention, lack of time to master the technology, the high cost of large-scale implementation, limited infrastructure such as a stable internet connection, and limited content to satisfy students' learning needs or instructional outcomes.69 Virtual environments may also require students to leverage spatial navigation, collaboration, and technology manipulation, which can be challenging to learn, especially for young students. Nevertheless, these platforms are gaining popularity for educational use.70
Conclusions
Although XR technology has existed for decades, its popularity has risen only in the past few years. Despite remaining obstacles, many XR applications are being developed in areas of biomedical engineering. Here we discussed the current trends of XR in medicine and biology. XR technologies, especially VR, are used to visualize and analyze 3D models on scales ranging from molecular to anatomical structures. Integrating XR devices in the classroom enhances learning, and, thus, XR is being implemented for teaching biological concepts of cell structure and anatomy at the high school and university levels. With improvements, it has the potential to replace many cadaver-based experiences for training medical students in surgery. In addition, XR technology has entered telehealth and therapy, aiding physicians and affected individuals with remote consultations and treatments. The case studies presented discuss scenarios where XR can be used, such as learning concepts in a VR environment with Google Cardboard, visualizing single-cell protein images using an HMD, and surgical planning using AR and VR. Although XR's use in biomedicine is growing, it has yet to reach its full potential because of software and hardware challenges. Improving user experience and minimizing side effects are also concerns that need to be addressed before XR technology is widely used as a daily tool by the public. The affordability of XR systems, especially HMDs, and the technical knowledge required to develop custom software are other limiting factors.
Acknowledgments
A.F.C. holds a Career Award at the Scientific Interface from the Burroughs Wellcome Fund and a Bernie-Marcus Early-Career Professorship. A.F.C. and D.H.F. were supported by start-up funds from the Georgia Institute of Technology and Emory University. J.R.R. was supported by philanthropic funds from The Helen and Will Webster Foundation, United States. In addition, the Swiss National Science Foundation supported C.M.S. Code is available upon request. The CODEX data used in Case study 1: Visualization of 3D and highly multiplex protein images in single cells and supplemental videos are available at https://doi.org/10.5281/zenodo.4976835.
Author contributions
M.V., H.M., and J.R.R. contributed to manuscript writing and preparation of the table and figures. C.M.S. and G.P.N. participated in multiplexed protein data acquisition for case studies. A.F.C. initiated the project. A.F.C. and D.H.F. supervised the project.
Declaration of interests
The authors declare no competing interests.
Footnotes
Supplemental information can be found online at https://doi.org/10.1016/j.xcrm.2021.100348.
Supplemental information
References
- 1. Kress B.C., Cummings W.J. Optical architecture of HoloLens mixed reality headset. Digital Optical Technologies. 2017:103350K.
- 2. Ungureanu D., Bogo F., Galliani S., Sama P., Duan X., Meekhof C., Stühmer J., Cashman T.J., Tekin B., Schönberger J.L. HoloLens 2 Research Mode as a Tool for Computer Vision Research. arXiv. 2020. arXiv:2008.11239. https://arxiv.org/abs/2008.11239
- 3. Rauschnabel P.A., Rossmann A., tom Dieck M.C. An adoption framework for mobile augmented reality games: The case of Pokémon Go. Comput. Human Behav. 2017;76:276–286.
- 4. Bin S., Masood S., Jung Y. Virtual and augmented reality in medicine. In: Feng D.D., editor. Biomedical Information Technology. Second Edition. Academic Press; 2020. pp. 673–686.
- 5. Khor W.S., Baker B., Amin K., Chan A., Patel K., Wong J. Augmented and virtual reality in surgery-the digital surgical environment: applications, limitations and legal pitfalls. Ann. Transl. Med. 2016;4:454. doi: 10.21037/atm.2016.12.23.
- 6. Pottle J. Virtual reality and the transformation of medical education. Future Healthc. J. 2019;6:181–185. doi: 10.7861/fhj.2019-0036.
- 7. Milgram P., Kishino F. A Taxonomy of Mixed Reality Visual Displays. IEICE Trans. Inf. Syst. 1994;E77-D:1321–1329.
- 8. Peugnet F., Dubois P., Rouland J.F. Virtual reality versus conventional training in retinal photocoagulation: a first clinical assessment. Comput. Aided Surg. 1998;3:20–26. doi: 10.1002/(SICI)1097-0150(1998)3:1<20::AID-IGS3>3.0.CO;2-N.
- 9. Hughes C.E., Stapleton C.B., Hughes D.E., Smith E.M. Mixed reality in education, entertainment, and training. IEEE Comput. Graph. Appl. 2005;25:24–30. doi: 10.1109/mcg.2005.139.
- 10. Rokhsaritalemi S., Sadeghi-Niaraki A., Choi S.-M. A Review on Mixed Reality: Current Trends, Challenges and Prospects. Appl. Sci. (Basel). 2020;10:636.
- 11. Parveau M., Adda M. 3iVClass: a new classification method for Virtual, Augmented and Mixed Realities. Procedia Comput. Sci. 2018;141:263–270.
- 12. Benligiray B., Topal C., Akinlar C. STag: A stable fiducial marker system. Image Vis. Comput. 2019;89:158–169.
- 13. Limmer M., Forster J., Baudach D., Schüle F., Schweiger R., Lensch H.P.A. Robust Deep-Learning-Based Road-Prediction for Augmented Reality Navigation Systems at Night. IEEE; 2016. pp. 1888–1895.
- 14. Chen C., Zhu H., Li M., You S. A Review of Visual-Inertial Simultaneous Localization and Mapping from Filtering-Based and Optimization-Based Perspectives. Robotics. 2018;7:45.
- 15. Rohs M. Marker-Based Embodied Interaction for Handheld Augmented Reality Games. JVRB. 2007;4:5. doi: 10.20385/1860-2037/4.2007.5.
- 16. Seo J., Shim J., Choi J.H., Park J., Han T. Enhancing Marker-Based AR Technology. In: Shumaker R., editor. Virtual and Mixed Reality - New Trends. Springer; 2011. pp. 97–104.
- 17. Dash A.K., Behera S.K., Dogra D.P., Roy P.P. Designing of marker-based augmented reality learning environment for kids using convolutional neural network architecture. Displays. 2018;55:46–54.
- 18. Hajek J., Unberath M., Fotouhi J., Bier B., Lee S.C., Osgood G., Maier A., Armand M., Navab N. Closing the Calibration Loop: An Inside-Out-Tracking Paradigm for Augmented Reality in Orthopedic Surgery. In: Frangi A.F., Schnabel J.A., Davatzikos C., Alberola-López C., Fichtinger G., editors. Medical Image Computing and Computer Assisted Intervention – MICCAI. Springer; 2018. pp. 299–306.
- 19. Young T.-C., Smith S. An Interactive Augmented Reality Furniture Customization System. In: Lackey S., Shumaker R., editors. Virtual, Augmented and Mixed Reality. Springer International Publishing; 2016. pp. 662–668.
- 20. Katiyar A., Kalra K., Garg C. Marker Based Augmented Reality. Advances in Computer Science and Information Technology. 2015;2:5.
- 21. Ufkes A., Fiala M. A Markerless Augmented Reality System for Mobile Devices. IEEE; 2013. pp. 226–233.
- 22. He Y., Zhao J., Guo Y., He W., Yuan K. PL-VIO: Tightly-Coupled Monocular Visual-Inertial Odometry Using Point and Line Features. Sensors (Basel). 2018;18:1159. doi: 10.3390/s18041159.
- 23. Niehorster D.C., Li L., Lappe M. The Accuracy and Precision of Position and Orientation Tracking in the HTC Vive Virtual Reality System for Scientific Research. i-Perception. 2017;8:2041669517708205. doi: 10.1177/2041669517708205.
- 24. Cruz-Neira C., Sandin D.J., DeFanti T.A., Kenyon R.V., Hart J.C. The CAVE: audio visual experience automatic virtual environment. Commun. ACM. 1992;35:64–72.
- 25. Krokos E., Plaisant C., Varshney A. Virtual memory palaces: immersion aids recall. Virtual Reality. 2019;23:1–15.
- 26. YuMe, Inc.; Nielsen. 2016. Groundbreaking Virtual Reality Research Showcases Strong Emotional Engagement for Brands, According to YuMe and Nielsen. https://www.businesswire.com/news/home/20161109005274/en/Groundbreaking-Virtual-Reality-Research-Showcases-Strong-Emotional
- 27. Diemer J., Alpers G.W., Peperkorn H.M., Shiban Y., Mühlberger A. The impact of perception and presence on emotional reactions: a review of research in virtual reality. Front. Psychol. 2015;6:26. doi: 10.3389/fpsyg.2015.00026.
- 28. Makransky G., Lilleholt L. A structural equation modeling investigation of the emotional value of immersive virtual reality in education. Educ. Technol. Res. Dev. 2018;66:1141–1164.
- 29. Chen P.C., Gadepalli K., MacDonald R., Liu Y., Kadowaki S., Nagpal K., Kohlberger T., Dean J., Corrado G.S., Hipp J.D. An augmented reality microscope with real-time artificial intelligence integration for cancer diagnosis. Nat. Med. 2019;25:1453–1457. doi: 10.1038/s41591-019-0539-7.
- 30. Duffy J. 2019. Microscopy and VR Illuminate New Ways to Prevent and Treat Disease. https://www.cmu.edu/news/stories/archives/2019/july/vr-expands-microscopy.html
- 31. Benaroya Research Institute. 2019. Expansion Microscopy VR. https://www.benaroyaresearch.org/our-research/programs/systems-immunology-division/expansion-microscopy-vr
- 32. Stefani C., Lacy-Hulbert A., Skillman T. ConfocalVR: Immersive Visualization for Confocal Microscopy. J. Mol. Biol. 2018;430:4028–4035. doi: 10.1016/j.jmb.2018.06.035.
- 33. Farahani N., Post R., Duboy J., Ahmed I., Kolowitz B.J., Krinchai T., Monaco S.E., Fine J.L., Hartman D.J., Pantanowitz L. Exploring virtual reality technology and the Oculus Rift for the examination of digital pathology slides. J. Pathol. Inform. 2016;7:22. doi: 10.4103/2153-3539.181766.
- 34. Wang Y., Li Q., Liu L., Zhou Z., Ruan Z., Kong L., Li Y., Wang Y., Zhong N., Chai R. TeraVR empowers precise reconstruction of complete 3-D neuronal morphology in the whole brain. Nat. Commun. 2019;10:3474. doi: 10.1038/s41467-019-11443-y.
- 35. arivis AG. 2019. arivis VisionVR. https://imaging.arivis.com/en/imaging-science/arivis-inviewr
- 36. Limonte K. 2018. AI in healthcare: HoloLens in surgery. https://cloudblogs.microsoft.com/industry-blog/en-gb/health/2018/12/20/ai-healthcare-hololens-surgery/
- 37. Hanna M.G., Ahmed I., Nine J., Prajapati S., Pantanowitz L. Augmented Reality Technology Using Microsoft HoloLens in Anatomic Pathology. Arch. Pathol. Lab. Med. 2018;142:638–644. doi: 10.5858/arpa.2017-0189-OA.
- 38. Usher W., Klacansky P., Federer F., Bremer P.T., Knoll A., Yarch J., Angelucci A., Pascucci V. A Virtual Reality Visualization Tool for Neuron Tracing. IEEE Trans. Vis. Comput. Graph. 2018;24:994–1003. doi: 10.1109/TVCG.2017.2744079.
- 39. Theart R.P., Loos B., Niesler T.R. Virtual reality assisted microscopy data visualization and colocalization analysis. BMC Bioinformatics. 2017;18(Suppl 2):64. doi: 10.1186/s12859-016-1446-2.
- 40. arivis AG. 2019. Observing 3D Microscopic Images using Virtual Reality. https://www.news-medical.net/news/20191219/Observing-3D-Microscopic-Images-using-Virtual-Reality.aspx
- 41. Calì C., Baghabra J., Boges D.J., Holst G.R., Kreshuk A., Hamprecht F.A., Srinivasan M., Lehväslaiho H., Magistretti P.J. Three-dimensional immersive virtual reality for studying cellular compartments in 3D models from EM preparations of neural tissues. J. Comp. Neurol. 2015;524:23–38. doi: 10.1002/cne.23852.
- 42. Spark A., Kitching A., Esteban-Ferrer D., Handa A., Carr A.R., Needham L.M., Ponjavic A., Santos A.M., McColl J., Leterrier C. vLUME: 3D virtual reality for single-molecule localization microscopy. Nat. Methods. 2020;17:1097–1099. doi: 10.1038/s41592-020-0962-1.
- 43. Alliance of Advanced BioMedical Engineering. 2017. Mixed Reality Replaces Cadavers as Teaching Tool. https://aabme.asme.org/posts/mixed-reality-replace-cadavers-as-teaching-tool
- 44. ThinkMobiles Team. 2016. VR Apps in Medicine Transforming Healthcare We Once Knew. https://thinkmobiles.com/blog/virtual-reality-applications-medicine/
- 45. Medivis. 2021. The Augmented Reality Anatomy Lab and Learning Platform. https://www.medivis.com/anatomyx
- 46. Buzink S.N., Goossens R.H.M., De Ridder H., Jakimowicz J.J. Training of basic laparoscopy skills on SimSurgery SEP. Minim. Invasive Ther. Allied Technol. 2010;19:35–41. doi: 10.3109/13645700903384468.
- 47. Cleveland Clinic. 2018. Cleveland Clinic Creates E-anatomy With Virtual Reality. https://newsroom.clevelandclinic.org/2018/08/23/cleveland-clinic-creates-e-anatomy-with-virtual-reality/
- 48. Wismer P., Lopez Cordoba A., Baceviciute S., Clauson-Kaas F., Sommer M.O.A. Immersive virtual reality as a competitive training strategy for the biopharma industry. Nat. Biotechnol. 2021;39:116–119. doi: 10.1038/s41587-020-00784-5.
- 49. Zimmerman E. 2019. K–12 Teachers Use Augmented and Virtual Reality Platforms to Teach Biology. https://edtechmagazine.com/k12/article/2019/03/k-12-teachers-use-augmented-and-virtual-reality-platforms-teach-biology-perfcon
- 50. Hooton C. Our attention span is now less than that of a goldfish. The Independent. May 13, 2015. https://www.independent.co.uk/news/science/our-attention-span-now-less-goldfish-microsoft-study-finds-10247553.html
- 51. Trombley S. Agawam Public Schools introduces virtual reality learning to classrooms. The Reminder. February 19, 2019. https://www.thereminder.com/localnews/agawam/agawam-public-schools-introduces-virtual-reality-l/
- 52. Vishwanath A., Kam M., Kumar N. Examining Low-Cost Virtual Reality for Learning in Low-Resource Environments. In: Proceedings of the 2017 Conference on Designing Interactive Systems; 2017. pp. 1277–1281.
- 53. Bennett J.A., Saunders C.P. A Virtual Tour of the Cell: Impact of Virtual Reality on Student Learning and Engagement in the STEM Classroom. J. Microbiol. Biol. Educ. 2019;20:20.2.37. doi: 10.1128/jmbe.v20i2.1658.
- 54. zSpace. 2021. AR/VR Learning Experiences. https://zspace.com/
- 55. XRHealth. 2021. VR Telehealth. https://www.xr.health/
- 56. Schürch C.M., Bhate S.S., Barlow G.L., Phillips D.J., Noti L., Zlobec I., Chu P., Black S., Demeter J., McIlwain D.R. Coordinated Cellular Neighborhoods Orchestrate Antitumoral Immunity at the Colorectal Cancer Invasive Front. Cell. 2020;182:1341–1359.e19. doi: 10.1016/j.cell.2020.07.005.
- 57. Lee S.H. (Mark), Sergueeva K., Catangui M., Kandaurova M. Assessing Google Cardboard virtual reality as a content delivery system in business classrooms. J. Educ. Bus. 2017;92:1–8.
- 58. Tagaytayan R., Kelemen A., Sik-Lanyi C. Augmented reality in neurosurgery. Arch. Med. Sci. 2018;14:572–578. doi: 10.5114/aoms.2016.58690.
- 59. Chong B.W., Bendok B.R., Krishna C., Sattur M., Brown B.L., Tawk R.G., Miller D.A., Rangel-Castilla L., Babiker H., Frakes D.H. A Multicenter Pilot Study on the Clinical Utility of Computational Modeling for Flow-Diverter Treatment Planning. AJNR Am. J. Neuroradiol. 2019;40:1759–1765. doi: 10.3174/ajnr.A6222.
- 60. Elmi-Terander A., Burström G., Nachabé R., Fagerlund M., Ståhl F., Charalampidis A., Edström E., Gerdhem P. Augmented reality navigation with intraoperative 3D imaging vs fluoroscopy-assisted free-hand surgery for spine fixation surgery: a matched-control study comparing accuracy. Sci. Rep. 2020;10:707. doi: 10.1038/s41598-020-57693-5.
- 61. Microsoft Research. 2021. Holoportation. https://www.microsoft.com/en-us/research/project/holoportation-3/
- 62. Thakur A. 2021. Spatial Audio for Cinematic VR and 360 Videos. https://creator.oculus.com/learn/spatial-audio/
- 63. Akçayır M., Akçayır G. Advantages and challenges associated with augmented reality for education: A systematic review of the literature. Educ. Res. Rev. 2017;20:1–11.
- 64. Happa J., Glencross M., Steed A. Cyber Security Threats and Challenges in Collaborative Mixed-Reality. Front. ICT. 2019;6.
- 65. McPherson R., Jana S., Shmatikov V. No Escape From Reality: Security and Privacy of Augmented Reality Browsers. In: Proceedings of the 24th International Conference on World Wide Web; 2015. pp. 743–753.
- 66. Chen L., Day T.W., Tang W., John N.W. Recent Developments and Future Challenges in Medical Mixed Reality. IEEE; 2017. pp. 123–135.
- 67. Kim H.K., Park J., Choi Y., Choe M. Virtual reality sickness questionnaire (VRSQ): Motion sickness measurement index in a virtual reality environment. Appl. Ergon. 2018;69:66–73. doi: 10.1016/j.apergo.2017.12.016.
- 68. Lavoie R., Main K., King C., King D. Virtual experience, real consequences: the potential negative emotional consequences of virtual reality gameplay. Virtual Reality. 2020;25:69–81.
- 69. Alalwan N. Challenges and Prospects of Virtual Reality and Augmented Reality Utilization among Primary School Teachers: A Developing Country Perspective. Stud. Educ. Eval. 2020;66:100876.
- 70. Wu H.-K., Lee S.W.-Y., Chang H.-Y., Liang J.-C. Current status, opportunities and challenges of augmented reality in education. Comput. Educ. 2013;62:41–49.