Author manuscript; available in PMC: 2017 Oct 18.
Published in final edited form as: Stud Health Technol Inform. 2016;220:25–28.

An Approach for Automated Scene Management in Real-time Medical Simulation Framework

Venkata S ARIKATLA a,1, Ricardo ORTIZ a, Tansel HALIC b, Sean RADIGAN c, David THOMPSON a, Suvranu DE c, Andinet ENQUOBAHRIE a
PMCID: PMC5645784  NIHMSID: NIHMS909665  PMID: 27046548

Abstract

In this paper we present an algorithm that minimizes end-user input by internally automating the creation and management of interactions among the objects in a scene within a real-time medical simulation framework. A bidirected graph (with nodes representing the scene objects and edges representing the interactions) is formed from the user's inputs. This graph is then processed by a two-stage algorithm that finds subgraphs that can be treated as independent subsystems. Collision detection, collision response, assembly and solver objects are then automatically created and managed. This allows users with limited knowledge of the underlying physics models, collision detection and contact algorithms to easily create a surgical scenario with minimal input.

Keywords: Simulation framework, Real-time simulation, Medical simulation

1. Introduction

The emergence of medical training with virtual medical simulators has led to increased efforts to aggregate commonly used simulation techniques into software frameworks. Designing such a simulation framework is complicated by requirements that include realistic graphical rendering, complex physics, object interactions, multimodal scenarios, interactive frame rates, and interfacing with external hardware (e.g. haptic devices). The aim of an effective simulation framework, therefore, is to bring various algorithms from different disciplines under a consistent design that is flexible, extendable and, above all, easy to use, so as to aid rapid prototyping of scenarios.

Most existing simulation frameworks, such as SOFA [1], SoFMIS [2], GiPSi [3] and OpenTissue [4], offer separate modules for physics, rendering, linear algebra, collision detection, collision response and external devices. However, the user is required to create and connect class objects for collision detection, response and solvers, resulting in bloated interface code with considerable redundancy. In this work we propose an approach that eliminates this redundancy by automating the creation and assignment of the collision detection, response, assembly and solver objects. We achieve this with a two-stage algorithm that operates on a representative graph describing the scene.

2. Methods

Enabling automated management of scene objects and interactions requires additional abstraction. Figure 1 shows the overall design that supports this automation. In addition to the well-known modules such as collision detection, collision response and the solver, we have added collision context and global assembler modules.

Figure 1. Overall data flow of the simulation framework.

2.1. Initialization and Runtime

The collision context manages all information related to interactions. It is populated from the user's description of the scene in the interface. For example, the call below adds a one-way interaction between an FEM object and a static plane object, with plane-to-mesh collision detection and penalty contact handling. The first two arguments identify the scene objects, the next three parameters prescribe the collision detection and the contact handling in each direction, respectively, and the last argument specifies the type of data that is computed by collision detection and passed to contact handling.

collContext->addInteraction(
    femObject, staticPlane,                              // the two scene objects
    core::CollisionDetectionType::PlaneToMeshCollision,  // collision detection
    core::ContactHandlingType::PenaltyFemToStatic,       // contact: fem to plane
    core::ContactHandlingType::NoContact,                // contact: plane to fem
    POINT_PENETRATION_DEPTH_COLL_DATA);                  // collision data type

A series of such statements describes the interactions in the scene. We use this information to form a bidirected interaction graph (see figure 2). Before proceeding to the two-stage initialization process, we categorize the contact handling algorithms into type I and type II. Type I interactions do not require the associated scene objects to be solved together (e.g. penalty contacts), while type II interactions do (e.g. LCP contacts).

Figure 2. Collision context initialization process.

The configuration proceeds in two stages. First, we contract all type I interactions, such as penalty and linear projection constraints, as shown in figure 2. These contractions activate assembly procedures. In the example above, contracting the penalty interaction from scene object 3 to object 2 assembles the penalty force vector (resulting from contact with object 3) into the system of equations of object 2. In the second stage, the graph resulting from contraction is divided into subgraphs (subsystems), which are the islands of the graph. In practice, all the scene objects within a given subsystem are solved together. Therefore, the number of solver instances (of all types) equals the number of subsystems resulting from this stage. For example, in the figure above, objects 4 and 5 are solved together using an LCP solver. The type of each subsystem depends on its leading interaction type, determined by a preset order.
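One plausible reading of the two-stage process is sketched below, using a union-find structure over object indices; all names are ours and the treatment of type I edges is simplified to the part that matters for counting subsystems. Type I edges only trigger an assembly step (e.g. adding a penalty force into the destination object's system), while type II edges merge objects into a common island that is solved together.

```cpp
#include <cassert>
#include <numeric>
#include <set>
#include <tuple>
#include <vector>

// Minimal union-find with path compression, used to gather islands.
struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) {
        std::iota(parent.begin(), parent.end(), 0);
    }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

// edges: (src, dst, isTypeII). Returns the number of subsystems, which is
// also the number of solver instances created by the second stage.
int countSubsystems(int numObjects,
                    const std::vector<std::tuple<int, int, bool>>& edges) {
    UnionFind uf(numObjects);
    for (const auto& [src, dst, typeII] : edges) {
        if (typeII)
            uf.unite(src, dst);  // stage 2: type II edges merge islands
        // Stage 1: a type I edge would only register an assembly procedure
        // on dst's system; it does not merge the two objects' solves.
    }
    std::set<int> roots;
    for (int i = 0; i < numObjects; ++i) roots.insert(uf.find(i));
    return static_cast<int>(roots.size());
}
```

Under this reading, a scene with a penalty edge between two objects and an LCP edge between two others yields one shared solver for the LCP pair and independent solvers for the remaining objects.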

The output of the second stage of the initialization process is then used to instantiate the required assembler and solver instances, which operate on their respective scene objects. This arrangement remains constant throughout the simulation. At runtime, the respective systems of equations are assembled and passed to the solver module. Once the solution is computed, the solution mapper (see figure 1) uses the information in the collision context to update the respective geometric models.
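The fixed assemble/solve/map arrangement at runtime can be sketched as a toy loop over subsystems. The struct, the 1-D system and all names are hypothetical; the point is only that each subsystem owns its assembler, solver and solution mapping, wired up once at initialization.

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// Toy sketch (hypothetical names): each subsystem assembles a scalar system
// a*x = b, solves it independently of the other subsystems, and the
// solution mapper writes the result back to the geometric model.
struct Subsystem {
    std::function<std::pair<double, double>()> assemble;  // returns (a, b)
    std::function<double(double, double)> solve;          // x from (a, b)
    std::function<void(double)> mapSolution;              // update geometry
};

void stepSimulation(std::vector<Subsystem>& subsystems) {
    for (auto& s : subsystems) {
        auto [a, b] = s.assemble();  // assemble this island's equations
        double x = s.solve(a, b);    // each island solved on its own
        s.mapSolution(x);            // solution mapper updates the model
    }
}
```

In the real framework the assembled object would be a full system of equations and the solver a per-subsystem instance chosen by the leading interaction type, but the control flow per time step is the same.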

3. Results

We have successfully employed the methods described above for easy creation of scenes as part of our medical simulation framework. Figure 3 shows a simple scene in which the user interacts with a nidus (occurring in the case of arteriovenous malformations), simulated using finite elements, through a virtual tool (surgical forceps), a static object controlled by the user. A brute-force collision check against a sphere placed at the tip of the forceps computes the collisions, and a penalty collision response handles the interaction. We are currently working towards using this framework to develop a virtual counterpart of the Fundamentals of Laparoscopic Surgery trainer.

Figure 3. (a) Arteriovenous malformation (AVM) (from www.TAAFonline.org); (b) surgical forceps controlled by the user interacting with the finite element model of the nidus.

4. Conclusions

We have presented an approach that eliminates the explicit creation and assignment of collision detection, response, assembly and solver objects. This results in an easy-to-use, modular, flexible and extendable framework. In addition, it allows for easy interfacing with GUIs and scripting languages (e.g. Python). We plan to open-source the framework in the future.

Acknowledgments

The authors gratefully acknowledge the support of this work by the NIH/NIBIB grant #5R01EB010037 and NIH/OD #R44OD018334.

References

  1. Allard J, Cotin S, Faure F, Bensoussan PJ, Poyer F, Duriez C, Delingette H, Grisoni L. SOFA: an open source framework for medical simulation. Studies in Health Technology and Informatics. 2006;125:13.
  2. Halic T, Venkata SA, Sankaranarayanan G, Lu Z, Ahn W, De S. A software framework for multimodal interactive simulations (SoFMIS). Studies in Health Technology and Informatics. 2011;63:213.
  3. Cavuşoğlu MC, Goktekin TG, Tendick F. GiPSi: a framework for open source/open architecture software development for organ-level surgical simulation. IEEE Trans Inf Technol Biomed. 2006;10(2):312. doi:10.1109/titb.2006.864479.
  4. Erleben K, Sporring J, Dohlmann H. OpenTissue: an open source toolkit for physics-based animation. 2005.
