Author manuscript; available in PMC: 2022 Jul 29.
Published in final edited form as: Comput Animat Virtual Worlds. 2008 Mar 5;19(2):151–163. doi: 10.1002/cav.224

An efficient dynamic point algorithm for line-based collision detection in real time virtual environments involving haptics

Anderson Maciel 1,2,*, Suvranu De 3,4
PMCID: PMC9337716  NIHMSID: NIHMS1825659  PMID: 35910783

Abstract

In real time computer graphics, “interactivity” requires a display rate of about 30 frames per second. However, in multimodal virtual environments involving haptic interactions, a much higher update rate of about 1 kHz is necessary to ensure continuous interactions and smooth transitions. The simplest and most efficient interaction paradigm in such environments is to represent the haptic cursor as a point. However, in many situations, such as real time medical simulations involving the interaction of long slender surgical tools with soft deformable organs, such a paradigm is unrealistic and at least a line-based interaction is desirable. While such paradigms exist, the main impediment to their widespread use is the associated computational complexity. In this paper, we introduce, for the first time, an efficient algorithm with near constant complexity for computing the interaction of a line-shaped haptic cursor with polygonal surface models. The algorithm relies on space-time coherence, topological information, and the properties of lines in 3D space to maintain proximity information between a line segment and triangle meshes. For interaction with convex objects, the line is represented by its end points and a dynamic point, which is the closest point on the line to any potentially colliding triangle. To deal with multiple contacts and non-convexities, the line is decomposed into segments and a dynamic point is used for each segment. The algorithm may be used to compute collision detection and response with rigid as well as deformable objects with no performance penalty. Realistic examples are presented to demonstrate the effectiveness of our approach.

Keywords: computer graphics, interaction techniques, simulation and modeling, haptic I/O

Video Abstract

Video abstract: MOV format, 32.2 MB.

Introduction

Computing collision detection in virtual environments is a fundamental problem in computer graphics. There are many existing collision detection approaches that are aimed at reaching visually interactive speeds, that is, around 30 frames per second, for real time applications. However, in multimodal virtual environments involving haptics, a much higher update rate of at least a few hundred frames per second is necessary to render stable force feedback. If physics-based computations are performed as part of the collision response, then this places severe demands on the efficiency of the collision detection algorithm.

Force feedback1 is an essential component of interactive simulations such as minimally invasive surgery simulation2 where the user interacts with soft deformable organs using long slender surgical tools. The simplest paradigm of haptic interaction is the use of a point-based representation of the haptic cursor. However, this is unrealistic for many applications, including laparoscopic surgery simulation, where it may be necessary to flip an organ or appendage aside while holding on to another, or to slice through tissue using a blade. For such applications, it is natural to represent the haptic cursor as a line. However, for deformable organ models, such a “ray-based” representation is prohibitively expensive if realistic organ models are used, because the computing time to test collisions increases at least linearly with the model complexity. It is therefore necessary to reduce the number of tests.

In this paper, we introduce a new dynamic point algorithm for computing the interactions of a ray- or line-shaped haptic probe with deformable triangle meshes in real time, suitable for multimodal virtual environments involving haptics. The line is represented by its end points A and B and a dynamic point P, which is chosen to be the closest point on the line to any potentially colliding triangle t (Figure 1). The position of the dynamic point on the line is updated at haptic frequencies; to the user, owing to latencies of the order of 1 ms inherent in the human haptic system, it is virtually indistinguishable from a line, just as static frames presented 30 times per second generate the illusion of motion in real time graphics. Depending on the latency of the physics-based model being used, however, it may be necessary to adapt the deformation part of the collision response, for which we propose the closest triangle strip approach (Section “The Closest Strip”). For convex objects, the computational complexity of our algorithm is independent of the number of polygons, even when they undergo deformation and, in the process, become locally nonconvex. If the object is nonconvex to start with, the line is decomposed into a set of dynamic points and the operational complexity scales linearly with the number of dynamic points used. In the remainder of the paper, we provide details of this approach, some potential pitfalls of the algorithm, and how they may be avoided.

Figure 1.

The dynamic point algorithm for a line interacting with a 3D mesh in two consecutive frames: (a) current dynamic point P and the closest point on the mesh Pt; (b) as the triangle t of the previous frame is displaced, a feedback force is generated and a new P and Pt are selected; (c) contact with a nonconvex object handled by decomposing the line into two segments, each with its own dynamic point.

The organization of this paper is as follows. We review the existing literature in Section “Related Work.” The concepts of the dynamic point and the closest strip are presented in the sections entitled “The Dynamic Point” and “The Closest Strip,” respectively. Some implementation details are presented in Section “Implementation,” and in Section “Some Examples” we discuss applications to some test examples. Conclusions are drawn in Section “Concluding Remarks.”

Related Work

Our work lies at the intersection of two important areas of active research: collision detection and computer haptics. Collision detection involves checking the geometry of the target objects for interpenetration, which is usually done using static interference tests. These tests include distance calculation algorithms3,4 and intersection checks. Bounding box intersection tests are particularly useful in bounding volume methods; the boxes can be tested using the separating axis test5. Intersection tests between triangles can be performed efficiently with the algorithm of Reference [6]. Triangle–triangle tests are useful because most 3D polygonal representations are triangle meshes. We refer to References [7,8] and [9] for detailed surveys on collision detection.

Computer haptics, on the other hand, is analogous to computer graphics and deals with various aspects of the computation and rendering of touch information when the user interacts with virtual objects. For general and psychophysical aspects of haptic interactions and for haptic rendering, we refer to References [10,11] and [12].

In this section, our goal is to review the literature on collision detection for interactive virtual environments, especially those that involve haptics. In these environments, three clearly different contact situations exist: point to model, line to model, and geometry to model, and the model may be rigid or deformable.

Haptic rendering algorithms make use of basic interference tests to provide force feedback. The first haptic rendering algorithms were based on vector field methods13. Then, to overcome the many drawbacks of vector fields, the concept of the god-object was introduced in Reference [14]. A god-object is a virtual model of the haptic interface which conforms to the virtual environment. In practice, one cannot stop the haptic interface point from penetrating the virtual objects. The god-object represents the virtual location of the haptic interface on the surface of the objects, the place it would be if the objects were infinitely stiff. More recently, Reference [15] introduced an extension of the god-object paradigm to haptic interaction between rigid bodies of complex geometry. The authors used a 6-dof stringed haptic workbench device and constraint-based quasi-statics with asynchronous update to achieve stable and accurate haptic sensation. However, they cannot guarantee a 1 kHz frame rate, which may result in inaccurate representation of high-frequency interactions, such as when objects slide rapidly over each other. In Reference [16], a haptic frame rate is obtained for distributed contact between one rigid and one reduced deformable model using a multi-resolution point-based representation and a signed distance field.

Although not explicitly used in haptics, the Voronoi-clip (V-clip) algorithm4 demonstrates the use of spatial coherence and the closest-feature idea in collision detection, which is also adopted in our work. Another work exploring local features and spatial and temporal coherence, with application to simple haptic environments, may be found in Reference [17]. Also not aimed at haptics but concerned with performance, Reference [18] solves the problem of self-collisions in linear time with a culling algorithm based on chromatic decomposition.

Bounding volumes and spatial partitioning have also been used to detect collisions in haptic applications. A hybrid approach based on uniform grids using a hash table and OBB-trees is presented in Reference [12]. It provides a method for fast collision detection between the haptic probe and a complex virtual environment. OBB-trees are also used in Reference [19] to reduce the problem to local searches. Though they provide only point-based haptic rendering, intersections are detected against a line segment defined by the moving path of the haptic interaction point. This short line goes from the actual haptic point to a kind of god-object that always remains on the surface.

In contrast to point-based methods, a ray-based haptic rendering algorithm is proposed in Reference [20] that enables the user to touch and feel convex polyhedral objects with a haptic probe shaped as a line segment. Their line collision detection algorithm uses a hierarchy of bounding boxes and neighborhood information to reduce the number of line checks. They further explored the use of torque feedback by coupling two Phantom devices.

A dynamic method to compute point-to-mesh distances is presented in Reference [21]. A multiresolution hierarchy and bounding volumes are used to achieve constant time queries in some cases. Several of the algorithms presented there, including distance computation for collision detection, coherence between subsequent queries, and the handling of both rigid and deformable objects, share traits with those presented in this work, although that method was not intended for haptic rendering. Multiresolution collision detection algorithms using levels-of-detail and hierarchical impostors of hybrid models for interactive haptic applications were presented in Reference [22].

In Reference [23], the authors use graphics hardware to detect collisions in line-like interactions with deformable objects. That paper is not aimed at haptic applications. However, in a related paper24, they discuss issues pertaining to the use of a discrete projection of the line onto the mesh in a collision detection and response algorithm, an approach quite different from the one presented here.

The Dynamic Point

Basic Description

One way to perform collision detection in dynamic simulation is by using the Lin–Canny closest feature algorithm25 which maintains a pair of closest features between any two convex polyhedra moving in space. Relying on the assumption that the current closest features in a dynamic environment are in the neighborhood of the previous ones, they can be updated using a constant time distance check. The Lin–Canny algorithm is the basis of I-COLLIDE26, a widely used library for interactive collision detection in complex environments composed of convex polyhedra. In applications involving the interactions of a haptic tool represented as a point (P), with a polygonal mesh, such a “static point” idea can be exploited by computing the distance of the probe to the closest triangle as d = |PPt| where Pt is the closest point on the triangle t to P (Figure 2a).

Figure 2.

Point-based (a), dynamic point (b), and line-based (c) proximity.

However, many real world objects represented in haptic applications, such as pencils, brushes, knives, and surgical instruments, are better represented by a line than by a point (Figure 2c). In Reference [20], it was observed that haptic interaction with a line- or ray-like probe is more effective in terms of object shape recognition in virtual environments than with a point-like probe. Such a “ray-based” representation is, nevertheless, prohibitively expensive if complex models are used, because the computing time to test collisions increases with the size of the model. Existing collision detection methods can reduce the number of line-polygon intersection tests for rigid objects using spatial classification, bounding volume hierarchies, etc. However, for complex realistic deformable models, these methods cannot perform better than linearly in the size of the model in the worst case.

To capture the advantages of both point- and ray-based rendering, we propose the “dynamic point” algorithm in which a point is used for collision detection and response, but this point is constrained to lie on a line at a location which is instantaneously closest to the mesh (Figure 2b).

In the dynamic point algorithm, the data structure is initialized with a full distance check between the triangles and the probe (line) to calculate the nearest point on the mesh (Pt) to the line (see Figure 1a). As the line moves, Pt is updated by checking against the line the distances of only the neighbors of the triangle t originally containing Pt. As in the static point algorithm, nonconvex meshes may cause the dynamic point not to track the globally closest pair. To circumvent this problem, the line is decomposed into a small set of line segments, each containing one dynamic point (Figure 1c). The number of segments to be used is the number of simultaneous noncontiguous points one wants to ensure will be checked for collision. A collision is reported when the signed distance between any pair of nearest point and dynamic point becomes negative. The collision response algorithm is then applied, and the positions of the dynamic point and of the closest point are used to calculate force feedback as a penalty force.
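For concreteness, the following C++ sketch illustrates the per-frame update of a single dynamic point. It is an illustration under stated assumptions rather than the implementation used in this work: the mesh layout and helper names are hypothetical, and the segment-triangle proximity is approximated here by the distances from the segment to the triangle vertices, where a full implementation would use an exact segment-triangle closest-point query and a signed distance for collision reporting.

```cpp
#include <array>
#include <cmath>
#include <limits>
#include <vector>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double norm(Vec3 a) { return std::sqrt(dot(a, a)); }

// Closest point on segment AB to a point Q (projection clamped to [0, 1]).
static Vec3 closestOnSegment(Vec3 A, Vec3 B, Vec3 Q) {
    Vec3 AB = sub(B, A);
    double t = dot(sub(Q, A), AB) / dot(AB, AB);
    t = std::fmax(0.0, std::fmin(1.0, t));
    return add(A, mul(AB, t));
}

struct Triangle { std::array<int, 3> v; std::vector<int> neighbors; };
struct Mesh { std::vector<Vec3> verts; std::vector<Triangle> tris; };

// Proximity pair between segment AB and triangle t. The triangle is
// approximated by its vertices; an exact implementation would use a full
// segment-triangle closest-point query instead.
struct Proximity { double dist; Vec3 P; Vec3 Pt; };
static Proximity segTriProximity(const Mesh& m, int t, Vec3 A, Vec3 B) {
    Proximity best{std::numeric_limits<double>::max(), A, A};
    for (int vi : m.tris[t].v) {
        Vec3 Q = m.verts[vi];                 // candidate point on triangle t
        Vec3 P = closestOnSegment(A, B, Q);   // candidate dynamic point
        double d = norm(sub(Q, P));
        if (d < best.dist) best = {d, P, Q};
    }
    return best;
}

// One haptic-rate update: test the current closest triangle and its
// neighbors, and move the dynamic point to the new closest location.
static int updateDynamicPoint(const Mesh& m, int currentTri, Vec3 A, Vec3 B,
                              Proximity& out) {
    int bestTri = currentTri;
    out = segTriProximity(m, currentTri, A, B);
    for (int n : m.tris[currentTri].neighbors) {
        Proximity p = segTriProximity(m, n, A, B);
        if (p.dist < out.dist) { out = p; bestTri = n; }
    }
    return bestTri;   // becomes the starting triangle in the next frame
}
```

Only the triangles adjacent to the current closest triangle are tested each frame, which is what keeps the per-frame cost independent of the overall mesh size.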

The Dynamic Point Algorithm for Deformable Models

The algorithm described above (Subsection “Basic Description”) is robust and effective at avoiding penetration and rendering realistic force feedback when interacting with rigid objects. Complications arise when an object is deformable. In this section, we analyze the general case. We discuss degenerate cases in Subsection “Degenerate Cases.”

The first contact detection occurs exactly the same way as with rigid objects. The first collision response results in a deformed model, as the colliding triangle t must be pushed away to remove the penetration, and this changes the mesh profile. In Figure 1b, the white triangle represents the triangle t of Figure 1a after it has been displaced. As t is displaced, it may no longer be the nearest triangle, and the proximity pair must be updated. This is done by checking whether one of the triangles in the neighborhood is closer to the line than the current t. The dynamic point then shifts its position on the line to the new nearest location and another potential collision situation is analyzed (Figure 1b). As stated before, the dynamic point moves at haptic frequencies from one location to another, interacting with the mesh in a manner reminiscent of a chainsaw tooth (Figure 3).

Figure 3.

The chainsaw analogy.

Degenerate Cases

There may be pathological cases which degrade the performance of the algorithm presented above. We discuss such pathological cases in Subsections “Local Concavities” and “Force-feedback Jitters,” and introduce solutions in Section “The Closest Strip.”

Local Concavities.

The dynamic point can be trapped by local concavities when in prolonged contact with soft surfaces, or if the tip of the probe first touches the bottom of a deep concavity. Figure 4 illustrates the problem. The tip of the line collides first and deforms a rather flat surface by moving the closest triangle t, in red. The neighbors n0t and n1t, in yellow, are constantly tested and are correctly diagnosed to be farther away from the line than the red triangle. However, for this type of very local deformation, the triangles at the periphery of the small crater will never be tested for proximity. This will eventually cause non-realistic penetrations (Figure 4c). To solve the problem, the neighborhood information must be adapted to the type of contact.

Figure 4.

Micro-concavities can trap the dynamic point in a confined region of the line when interacting with deformable objects. A line cursor collides (a) and causes the surface to deform (b). The line eventually penetrates the non-tested triangles at the periphery of the small crater (c).

Force-feedback Jitters.

Unwanted vibration of the haptic device may be caused by the abrupt changes in feedback force due to rapid switching of the closest triangles during interaction with a large surface area.

This is predominantly a problem with deformable bodies, since the deformation of some triangles causes others to move closer to the line cursor. As each displaced triangle comes back and touches the line, it becomes the closest triangle and is suddenly perceived as a collision, which changes the magnitude and/or direction of the feedback force and causes jitter. To solve the problem, continuous contact should be handled without abrupt changes in the force sent to the haptic device.

The Closest Strip

The degenerate cases of Subsection “Degenerate Cases” necessitate a refinement of the information obtained with the dynamic point in order to achieve a correct and realistic collision response. We approach this problem by exploiting the information already obtained with the dynamic point and the assumption that the haptic probe is line shaped. We compute a strip on the surface of the mesh containing triangles which are estimated to have a higher probability of collision in the next step of the simulation. The strip is analogous to the shadow cast by the line onto the mesh surface. The vector from the dynamic point to its closest point on the mesh defines the direction in which the shadow projects (Figure 5).

Figure 5.

The closest triangle strip can be compared to a shadow of the line probe projected onto the mesh object in the direction given by Pt − P.

A brute force solution to build the strip would be to discretize the line into a finite number of points and cast rays from these points in the direction of Pt − P. Instead, we exploit our knowledge of the mesh topology to first build an adjacency structure that is later used to compute the strip with only two additional rays. The algorithm is detailed in the following subsections.

Adjacency Structure

Triangle and vertex neighbors are common elements in any mesh data structure used in computer graphics. We extended our neighborhood information (the same used to update the closest triangle) with an adjacency structure that maintains the traversal distances between any pair of triangles. Consider a triangle mesh as a graph in which the triangles are the nodes and triangles sharing one or more vertices are connected by an edge. The traversal distance between two triangles t and t′, denoted by τ(t, t′), is the minimal length over all routes between t and t′. See Figure 6 for an example. Finding this distance is a shortest-route problem on a graph; since all edges of our graph have the same weight, it can be solved with a simple breadth-first traversal. Moreover, we prioritize the double vicinity (triangles sharing two vertices) to select the shortest out of many otherwise equally short routes. The adjacency structure computation is a recursive and time-consuming task which is performed only once as a precomputation step. Still, one limitation of this approach is that, due to the time required for precomputation, we could not generate the adjacency structure for meshes larger than 10,000 triangles.

Figure 6.

Example of the adjacency structure. The mesh on the left is represented by the graph on the right, with the distance of each node to node W shown as a subscript (in red). The letters give the order in which the triangles appear in the triangle array. Suppose we want to create the strip from C to W: we check the neighbors of C for the one closest to W. We find A and E; since A comes first in the array (alphabetical order), A becomes the current node, we check its neighbors, and so on. The traversal distance from C to W is 3 and the shortest route is {C, A, V, W}.
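The traversal distances themselves can be computed with a breadth-first search over the unweighted adjacency graph. The sketch below illustrates this under the assumption of a simple adjacency-list representation; the tie-breaking preference for edge-adjacent neighbors (two shared vertices) described above is omitted for brevity.

```cpp
#include <queue>
#include <vector>

// adjacency[t] lists the triangles sharing at least one vertex with triangle t.
// Returns tau(source, t) for every triangle t (-1 when unreachable).
std::vector<int> traversalDistances(const std::vector<std::vector<int>>& adjacency,
                                    int source) {
    std::vector<int> dist(adjacency.size(), -1);
    std::queue<int> frontier;
    dist[source] = 0;
    frontier.push(source);
    while (!frontier.empty()) {
        int t = frontier.front();
        frontier.pop();
        for (int n : adjacency[t]) {
            if (dist[n] == -1) {          // first visit yields the shortest route
                dist[n] = dist[t] + 1;
                frontier.push(n);
            }
        }
    }
    return dist;
}
```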

Strip Construction

The inputs for the strip construction are the dynamic point, the adjacency structure, and a pair of points located one at each extremity of the line (the end points defining the line). The two end points are already available as discussed before. They act as two point probes and are calculated and maintained as shown in Subsection “Basic Description”; however, they are not actually used in a collision test. When the dynamic point reports a collision, the triangle strip has to be constructed. The closest triangle tP to the line, and the closest triangles tA and tB to the extremities A and B of the line, are already known. The adjacency structure is then looked up to find the traversal distances from tA to tP and from tB to tP. All triangles on the route are selected to be in the strip. The size of the strip may vary, but can easily be limited to a constant maximum size. See Figure 6 for an example of strip construction.
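The following sketch illustrates one way to extract the route from a start triangle toward the closest triangle tP using the precomputed traversal distances, stepping greedily to the neighbor closest to the goal as in the Figure 6 example. The names and the maxLen bound are illustrative; the full strip is the union of the routes from tA and from tB to tP.

```cpp
#include <vector>

// distToGoal[t] = tau(t, goal), one row of the precomputed adjacency structure.
// Walks from `start` toward `goal`, always stepping to the neighbor with the
// smallest remaining traversal distance, and collects the visited triangles.
std::vector<int> routeToGoal(const std::vector<std::vector<int>>& adjacency,
                             const std::vector<int>& distToGoal,
                             int start, int goal, int maxLen) {
    std::vector<int> strip{start};
    int current = start;
    while (current != goal && static_cast<int>(strip.size()) < maxLen) {
        int next = -1;
        for (int n : adjacency[current])
            if (next == -1 || distToGoal[n] < distToGoal[next])
                next = n;                               // neighbor closest to goal
        if (next == -1 || distToGoal[next] >= distToGoal[current])
            break;                                      // no further progress
        strip.push_back(next);
        current = next;
    }
    return strip;
}
// The closest strip is the union of routeToGoal(tA -> tP) and routeToGoal(tB -> tP).
```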

Collision Response

The number of triangles in the closest strip depends on the relation between the mesh resolution and the line length. However, in typically proportioned environments the size of the strip is independent of the mesh size and can be bounded by a constant in the worst case. This allows for a full line–triangle distance check between the line and the triangles on the strip. All detected intersections (negative distances) produce displacements of the respective triangles to a non-penetrating position. The direction and amount of each displacement are dictated by the detected penetration for that triangle. The displacement is applied as a boundary condition and the neighboring triangles are deformed according to a physics-based model.

In addition to deformation, the reaction force that is fed back to the user through the haptic device must also be calculated. Analogous to the force feedback for rigid objects, a penalty method is straightforward to apply for deformable objects; alternatively, the local force from the physics model can be used. The reaction force is computed at the dynamic point, which represents the deepest penetration point.
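A minimal sketch of such a penalty force follows, assuming the outward surface normal at the closest point Pt is available; the stiffness constant and function names are illustrative.

```cpp
// Penalty force at the dynamic point P against its closest surface point Pt,
// pushing outward along the surface normal when P has penetrated.
struct Vec3 { double x, y, z; };

Vec3 penaltyForce(Vec3 P, Vec3 Pt, Vec3 n /* unit outward normal at Pt */,
                  double stiffness) {
    Vec3 d{P.x - Pt.x, P.y - Pt.y, P.z - Pt.z};
    double depth = d.x * n.x + d.y * n.y + d.z * n.z;   // signed distance to surface
    if (depth >= 0.0) return {0.0, 0.0, 0.0};           // no penetration, no force
    double f = -stiffness * depth;                      // proportional to depth
    return {n.x * f, n.y * f, n.z * f};
}
```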

Implementation

We implemented our algorithm on a multi-core PC platform with one graphics card (GeForce 8800). We render the forces with the Phantom Omni by SensAble Inc., a device with six degree-of-freedom positional sensing and three degree-of-freedom force feedback. The Phantom FireWire interface rate reaches up to 1600 Hz, which is well suited to haptic rendering. The information we have about the mechanical rate is inconclusive; however, we have not noticed any discrepancy in the context of this work, and many other existing devices could potentially be used without affecting the performance of the dynamic point. We have developed complete dynamic, textured, and shaded physics-based models to highlight that the efficiency of our collision detection algorithm saves computing power that can be used for other important tasks in common applications.

Collision and Haptics

We used the HDU interface of the OpenHaptics library to implement a software layer that communicates with the haptic device. This layer reads information about the 6-dof cursor from the Phantom into the model and sends back the 3-dof force information. It runs asynchronously with the collision detection loop, where the dynamic point and triangle strip algorithms are implemented.

Physics-Based Modeling

We have implemented a simple mass-spring model to represent deformable objects. In this technique, the object is modeled as a collection of point masses connected by springs in a lattice structure. Springs connecting point masses exert forces on neighboring points when a mass is displaced from its rest position. For each mass point i, applying Newton’s law of motion, we can write a vectorial equation of the form:

$m_i \ddot{\mathbf{x}}_i = \mathbf{F}_i$ (1)

where $m_i$ is the mass of the point, $\mathbf{x}_i$ is its position, and $\mathbf{F}_i$ is the sum of all forces acting on it.

Applying equation (1) to all the points leads to a system of ordinary differential equations that can be solved using various time-stepping algorithms. In this work, we have used a simple forward Euler integrator. We advance the state of every mass point and spring to time t based on their state at time t − 1, with time steps on the order of 1 ms. Collision response on the deformable mesh is applied at time t through displacement vectors on the vertices of colliding triangles. These displacement vectors are calculated according to the amount of penetration detected. The deformation model then reacts at time t + 1 to propagate the physical consequences of the displacements.
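As an illustration of this update scheme, the sketch below performs one forward Euler step of equation (1) for a generic mass-spring system; the damping term, data layout, and parameter names are illustrative assumptions rather than the exact model used in this work.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Particle { Vec3 x, v; double mass; bool fixed; };
struct Spring { int i, j; double rest, k; };

// One explicit (forward) Euler step of equation (1) for all mass points.
void eulerStep(std::vector<Particle>& pts, const std::vector<Spring>& springs,
               Vec3 gravity, double damping, double dt) {
    std::vector<Vec3> F(pts.size());
    for (std::size_t i = 0; i < pts.size(); ++i)          // gravity and damping
        F[i] = {pts[i].mass * gravity.x - damping * pts[i].v.x,
                pts[i].mass * gravity.y - damping * pts[i].v.y,
                pts[i].mass * gravity.z - damping * pts[i].v.z};
    for (const Spring& s : springs) {                     // accumulate spring forces
        Vec3 d{pts[s.j].x.x - pts[s.i].x.x,
               pts[s.j].x.y - pts[s.i].x.y,
               pts[s.j].x.z - pts[s.i].x.z};
        double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        if (len < 1e-9) continue;
        double f = s.k * (len - s.rest) / len;            // Hooke's law, normalized
        Vec3 fs{f * d.x, f * d.y, f * d.z};
        F[s.i] = {F[s.i].x + fs.x, F[s.i].y + fs.y, F[s.i].z + fs.z};
        F[s.j] = {F[s.j].x - fs.x, F[s.j].y - fs.y, F[s.j].z - fs.z};
    }
    for (std::size_t i = 0; i < pts.size(); ++i) {        // integrate a = F / m
        if (pts[i].fixed) continue;
        Vec3 a{F[i].x / pts[i].mass, F[i].y / pts[i].mass, F[i].z / pts[i].mass};
        pts[i].v = {pts[i].v.x + a.x * dt, pts[i].v.y + a.y * dt, pts[i].v.z + a.z * dt};
        pts[i].x = {pts[i].x.x + pts[i].v.x * dt,
                    pts[i].x.y + pts[i].v.y * dt,
                    pts[i].x.z + pts[i].v.z * dt};
    }
}
```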

Graphics Pipeline

The base of our graphics rendering pipeline is OpenGL. However, to obtain improved graphics realism we customized the rendering pipeline using vertex and fragment programs (also called shaders) written in OpenGL Shading Language (GLSL). This allowed us to include effects like textured relief, wetting, and multi-layer color texture blending. Examples can be found in Figure 9.

Figure 9.

The laparoscopic surgical simulation example.

Software Platform and Flow-Control

Collision detection, haptic rendering, and graphic rendering are independent tasks, but they work on the same model. We have therefore implemented them following the model-view-controller (MVC) design pattern. Moreover, the viewer and the three controllers (collision handling, deformation, and haptic interaction) are implemented in four separate threads sharing the same model. This allows the different tasks to run at different frequencies, and exploits parallelism when running on multi-core hardware.
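The sketch below illustrates the general flavor of this organization with a generic fixed-rate controller loop guarded by a mutex on the shared model; the structure and names are illustrative and not the exact threading code used in this work.

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <mutex>
#include <thread>

struct SharedModel { std::mutex lock; /* meshes, cursor pose, contact state ... */ };

// Generic fixed-rate controller loop; each controller runs in its own thread.
void runLoop(SharedModel& model, std::atomic<bool>& running,
             std::chrono::microseconds period,
             const std::function<void(SharedModel&)>& step) {
    while (running) {
        auto next = std::chrono::steady_clock::now() + period;
        {
            std::lock_guard<std::mutex> guard(model.lock);
            step(model);                      // one update of this controller
        }
        std::this_thread::sleep_until(next);
    }
}

// Hypothetical usage: haptics near 1 kHz, graphics near 60 Hz, plus collision
// and deformation threads with their own periods.
//   std::thread haptics([&]{ runLoop(model, running,
//                            std::chrono::microseconds(1000), hapticStep); });
//   std::thread render ([&]{ runLoop(model, running,
//                            std::chrono::microseconds(16667), renderStep); });
```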

Some Examples

We assessed our algorithm and implementation on a set of case studies, analyzing and comparing features such as speed, stability, responsiveness, and robustness. The next three subsections detail each of the case studies, and Subsection “Cross-Assessment” compares the performance of the dynamic point algorithm with some well-known existing approaches, providing a baseline for comparison across different hardware.

A Thin Square Membrane

The first case is the interaction of the cursor, represented by a closed safety pin, with a thin stretched square membrane fixed in space at its four corners. The safety pin is manipulated by means of the Phantom device. In this example, we highlight the prompt response of the membrane to the exerted forces and the smoothness of the haptic sensation. Figure 7 illustrates this example.

Figure 7.

The membrane example.

Cloth Falling on Sword

We consider a commonly used example of a piece of cloth falling vertically under the effect of gravity onto a sword placed underneath. Although this example is of little interest for haptics, it fits the line-deformable situation well and highlights the stability of the collision response as well as the effectiveness of the penetration avoidance. Figure 8 illustrates this example.

Figure 8.

The cloth and sword example.

A Realistic Laparoscopic Surgery Scenario

In the third example, we consider a realistic scenario of palpation of internal organs as part of a laparoscopic surgical procedure. We show a partial model of the interior of the abdominal cavity where a few organs are visible, as they would be seen through the laparoscope. In this example, we have a deformable liver, a rigid stomach, and a rigid spleen, as well as the cavity wall (peritoneum). The organ models were obtained from the Visible Human Project dataset. The cursor is a model of a standard line-like instrument used in laparoscopic surgery, the cautery hook. The goals of this experiment are to evaluate our algorithms in the practical situation of a virtual laparoscopic intervention and to demonstrate the responsiveness of the system. An additional aim is to demonstrate that rigid and deformable objects can co-exist in a fully interactive real time haptic environment. Figure 9 illustrates this example.

Rigid Liver Model

In this section, we consider a rigid liver model, which is part of the laparoscopic surgical scenario presented in the previous section. It has a higher resolution, being composed of 50 332 triangles. Because of the high number of triangles, the physics-based model is unable to perform deformations in real time. The remarkable point is that, as Table 1 clearly shows, the collision detection time is similar to that for much smaller models and is significantly less than what existing algorithms can deliver for a model of this size.

Table 1.

Performance of the dynamic point algorithm in comparison with standard approaches. Values correspond to the number of full collision checks per millisecond, which includes collision detection, collision response, and haptic rendering (the greater, the better). Values in parentheses correspond to collision detection only, in the case of PQP, and to collision detection and haptic rendering only, in the case of the dynamic point (haptic rendering is included for the dynamic point because it is a by-product of collision detection; with PQP it would require considerable extra cost to compute haptics).

                    Membrane     Cloth        Laparoscopic   Liver (rigid)
Faces (meshes)      512 (1)      2 048 (1)    5 387 (3)      50 332 (1)
Springs             2 112        8 320        12 736         0
Brute force         1.5          0.5          0.12           0.018
Z-buffer            0.22         0.26         0.22           0.015
PQP                 0.43 (3.6)   0.33 (2.8)   0.28 (2.5)     0.18 (0.8)
Dynamic point (1)   8.0 (37.0)   4.0 (36.0)   3.2 (8.6)      21.0 (26.0)
Dynamic point (2)   5.2 (19.5)   2.4 (19.0)   2.0 (4.7)      12.5 (14.3)
Dynamic point (4)   3.2 (8.7)    2.1 (9.0)    1.3 (2.4)      7.0 (8.1)
Dynamic point (8)   1.9 (4.3)    1.8 (4.3)    0.65 (1.3)     3.7 (4.0)
Dynamic point (16)  1.5 (2.2)    1.3 (2.1)    0.38 (0.63)    1.9 (2.1)

Cross-Assessment

We have compared the dynamic point algorithm with a brute force method, a z-buffer-based method, and the Proximity Query Package (PQP)27, which uses a hierarchy of oriented bounding boxes (OBB). We expect that other researchers can use the absolute values we report in Table 1 to compare their methods. Note that all values in the table have been obtained from a complete multi-threaded application in which the different tasks (graphics rendering, haptics rendering, collision detection, physical simulation) are all performed simultaneously on multiple CPU cores. With the dynamic point, all configurations can render graphics at over 60 Hz.

In the brute force approach, line-triangle distances are checked against all triangles of all meshes. All reported contacts are handled and penetration is avoided using the response technique described in Subsection “Collision Response.” The z-buffer method, in turn, uses the graphics pipeline to render the scene from the line's point of view. The camera is placed on one end of the line and oriented toward the other end. The scene is then rendered with a maximum depth equal to the distance between the two ends, each triangle in a different color. Finally, the resulting image is analyzed: only the central pixel of the image is read, and its color identifies which triangle, if any, is penetrating the line. Force feedback is calculated based on this triangle if the object is rigid. With deformable objects, this technique does not provide enough information to respond properly to collisions. With PQP, we initialize the model with the haptic probe as one triangle soup and the other meshes as the second triangle soup. Then, after each collision query, we check the distance of the reported colliding triangles of the second soup to the probe line to build the information for collision response and force-feedback computation.
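For reference, the sketch below shows the color-ID scheme typically used for such a read-back test: each triangle index is encoded as a unique flat color, and the central pixel read back from the framebuffer (for instance with glReadPixels) is decoded to identify the penetrating triangle. The encoding and function names are illustrative assumptions.

```cpp
#include <cstdint>

// Each triangle is rendered flat-shaded with a unique color; the pixel at the
// image center is read back and decoded to identify the penetrating triangle,
// if any. Index 0 is reserved for the background.
inline void indexToColor(std::uint32_t triIndex, std::uint8_t rgb[3]) {
    std::uint32_t id = triIndex + 1;
    rgb[0] = static_cast<std::uint8_t>(id & 0xFFu);
    rgb[1] = static_cast<std::uint8_t>((id >> 8) & 0xFFu);
    rgb[2] = static_cast<std::uint8_t>((id >> 16) & 0xFFu);
}

inline int colorToIndex(const std::uint8_t rgb[3]) {
    std::uint32_t id = rgb[0]
                     | (std::uint32_t(rgb[1]) << 8)
                     | (std::uint32_t(rgb[2]) << 16);
    return static_cast<int>(id) - 1;   // -1 means no triangle at the center pixel
}
```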

The data in Table 1 compare the four algorithms in terms of the number of full collision checks per millisecond for four example problems: (1) the membrane model, (2) the cloth model falling on the sword, (3) the laparoscopic surgical scenario composed of three organ meshes, and (4) a rigid liver model. The values within parentheses correspond to the computation of collision detection only, in the case of PQP, and to collision detection and haptic rendering, in the case of the dynamic point. For the dynamic point, we present results for five different cases corresponding to the subdivision of the line into 1, 2, 4, 8, and 16 segments, with a dynamic point assigned to each segment. Let n be the overall number of triangles in the scene. The complexity of the brute force algorithm is O(n). The z-buffer method depends on how triangles are pre-clipped by the GPU and also suffers performance penalties due to concurrency with the heavy rendering programs we use; however, it is also O(n) in the worst case, even if the time to process each triangle is smaller than that of the conventional intersection tests used by brute force. In an environment with m meshes, as in the laparoscopic example, the dynamic point has complexity O(m). Although it is beyond the scope of this paper, we believe that applying some of the existing space partitioning techniques together with the dynamic point could easily reduce this complexity.

Furthermore, for a fair comparison between PQP and the dynamic point, we used a torus model with different mesh resolutions. In the graph of Figure 10(a), we present results for rigid torus models having 5568, 13 580, 20 832, 39 648, and 61 358 triangles. We observe that the computing time using PQP follows an O(log n) trend, while the dynamic point remains constant, O(1), for all mesh resolutions. To demonstrate that the dynamic point complexity remains constant with deformable models regardless of the size of the mesh, we present the graph of Figure 10(b). PQP can handle only rigid objects, because updating the OBB tree at each frame would cost at least O(n log n). Nevertheless, we included it in this graph without updating (detection after the first deformation is not correct) to highlight that it is still slower than the dynamic point.

Figure 10.

Relation of the number of mesh faces to the time in milliseconds to compute one frame of collision detection and force-feedback calculation with PQP and the dynamic point. In (a), data were obtained while interacting with a rigid torus; in (b), with a deformable torus. In both the rigid and deformable cases, constant complexity is observed for the dynamic point, while the complexity of the PQP algorithm is logarithmic in the number of faces.

In the graph of Figure 11, we assess the increase in computational cost when the haptic tool is decomposed into segments. As expected, the complexity is linear in the number of dynamic points used. The plot for the laparoscopic surgical simulation example has a steeper slope since three dynamic points are used per segment, one for each organ mesh model.

Figure 11.

Relation of the number of dynamic points in the line decomposition with the time in milliseconds to compute one frame of collision detection and force feedback. A linear complexity is observed for all models, with the laparoscopic example having a greater slope due to the presence of multiple meshes.

Concluding Remarks

Line-based contact is very common in haptics and is probably the most widely used type of interaction in practical applications today. We have presented the dynamic point algorithm for efficient line-based collision detection and response in multimodal virtual environments involving haptics. It relies on spatiotemporal coherence and strong topological information to minimize the number of distance checks between the line and the model. It can handle very large triangle meshes (with tens of thousands of triangles) at haptic frame rates on the order of 1 kHz for both rigid and deformable objects. The algorithm is also suitable for nonconvex meshes, as the line can be decomposed into a finite number of segments, each with a dynamic point, without significant performance loss. For very complex shapes with many spikes, however, local minima might lock the dynamic points in corners and some collisions might pass undetected.

Besides the low computational complexity, another advantage of the dynamic point over existing methods is that it does not rely on any intermediate data structure that needs to be updated, even for deformable models.

We implemented and evaluated the algorithm within a complete graphics–haptics–physics-based system. Such an implementation helps evaluate how the algorithm integrates with the rest of the system and gives a better view of its advantages in practical use. For example, while we can reach frame rates of over 30 kHz when running the collision detection in isolation, when combined with the physics-based simulation the frequency drops to one-third of that due to concurrency on the shared model. To illustrate the use of the dynamic point, we have presented a few examples, including a realistic one from virtual laparoscopic surgery. Although we have not implemented them, we believe that the dynamic point algorithm will apply well to situations involving interaction with line-shaped tools, such as pencils (drawing), brushes (painting), and knives (cutting), and with objects composed of a few line segments, such as a human hand or a string or rope discretized as line segments. In such cases, the complexity will scale linearly with the number of segments. Moreover, the dynamic point may be extended to lie on a plane or volume instead of only a line segment.

As part of future work, we plan to explore hybrid approaches combining the dynamic point with other techniques, such as spatial decomposition, to extend the near constant complexity to scenarios with many mesh models. One potential line of research goes in the direction of the BD-Trees presented in Reference [28]. We plan to incorporate local geometry-based strategies to increase the accuracy of contact handling for line-like haptic cursors with different tip shapes. We also plan to implement adaptive strategies for decomposing the line into segments based on local mesh curvature.

ACKNOWLEDGEMENTS

This work was supported by grants R21 EB003547-01 and R01 EB005807-01 from NIH/NIBIB, and partly by the PDJ program of CNPq. Thanks are due to Professor George Xu of RPI for providing us with the segmented organ models of the Visible Human, and to Dr Youquan Liu for the GPU-related implementation.

Biographies


Anderson Maciel is an invited postdoctoral collaborator in the Department of Applied Informatics at the Federal University of Rio Grande do Sul, Brazil, and a Visiting Scientist in the Department of Mechanical, Aerospace and Nuclear Engineering at Rensselaer Polytechnic Institute. He received his PhD degree in Computer Science in 2005 from the EPFL, Switzerland, where he worked as a research assistant in the Virtual Reality Lab with virtual human models for medical applications. He obtained the MSc degree in Computer Science from the Federal University of Rio Grande do Sul, in 2001. He is coauthor of a number of papers in both computer graphics and medical-related conferences.


Suvranu De is an Associate Professor in the Department of Mechanical, Aerospace and Nuclear Engineering at Rensselaer Polytechnic Institute and has a joint appointment in the Department of Electrical Engineering and Computer Science at MIT as a Visiting Scientist. He received his ScD degree from MIT in 2001. He is a recipient of the 2005 ONR Young Investigator Award and serves on the editorial board of Computers and Structures and on the scientific committees of numerous national and international conferences. He is also the founding chair of the Committee on Computational Bioengineering of the US Association for Computational Mechanics. His research effort centers on the development of physics-based real time simulations.

Contributor Information

Anderson Maciel, Department of Applied Informatics at the Federal University of Rio Grande do Sul, Brazil; Department of Mechanical, Aerospace and Nuclear Engineering at Rensselaer Polytechnic Institute.

Suvranu De, Department of Mechanical, Aerospace and Nuclear Engineering at Rensselaer Polytechnic Institute; Department of Electrical Engineering and Computer Science at MIT.

References

1. Otaduy MA, Lin MC. High Fidelity Haptic Rendering (Synthesis Lectures on Computer Graphics and Animation). Morgan and Claypool Publishers: San Rafael, CA, USA, 2006.
2. Basdogan C, De S, Jung K, Muniyandi M, Hyun K, Srinivasan MA. Haptics in minimally invasive surgical simulation and training. IEEE Computer Graphics and Applications 2004; 24(2): 56–64.
3. Joukhadar A, Wabbi A, Laugier C. Fast contact localisation between deformable polyhedra in motion. In Proceedings of Computer Animation '96, June 1996; 126–135.
4. Mirtich B. V-Clip: fast and robust polyhedral collision detection. ACM Transactions on Graphics 1998; 17(3): 177–208.
5. Gottschalk S. Separating axis theorem. Technical Report TR96-024, UNC Chapel Hill, 1996.
6. Möller T. A fast triangle-triangle intersection test. Journal of Graphics Tools 1997; 2(2): 25–30.
7. Lin MC, Gottschalk S. Collision detection between geometric models: a survey. In Proceedings of the 8th IMA Conference on Mathematics of Surfaces, 1998; 37–56.
8. Jiménez P, Thomas F, Torras C. 3D collision detection: a survey. Computers and Graphics 2001; 25(2): 269–285.
9. Teschner M, Kimmerle S, Heidelberger B, et al. Collision detection for deformable objects. Computer Graphics Forum 2005; 24(1): 61–81.
10. Colgate JE, Stanley MC, Brown JM. Issues in the haptic display of tool use. In Proceedings of the International Conference on Intelligent Robots and Systems, IEEE Computer Society: Washington, DC, USA, 1995.
11. Mark WR, Randolph SC, Finch M, Van Verth JM, Taylor RM II. Adding force feedback to graphics systems: issues and solutions. In SIGGRAPH, 1996; 447–452.
12. Gregory A, Lin MC, Gottschalk S, Taylor R. Fast and accurate collision detection for haptic interaction using a three degree-of-freedom force-feedback device. Computational Geometry: Theory and Applications 2000; 15(1–3): 69–89.
13. Massie TH, Salisbury JK. The PHANToM haptic interface: a device for probing virtual objects. In Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Radcliffe CJ (ed.). ASME: Chicago, 1994.
14. Zilles CB, Salisbury JK. A constraint-based god-object method for haptic display. In IROS '95: Proceedings of the International Conference on Intelligent Robots and Systems, Vol. 3, IEEE Computer Society: Washington, DC, USA, 1995; 3146.
15. Ortega M, Redon S, Coquillart S. A six degree-of-freedom god-object method for haptic display of rigid bodies. In VR '06: Proceedings of the IEEE Virtual Reality Conference (VR 2006), IEEE Computer Society: Washington, DC, USA, 2006; 27.
16. Barbič J, James DL. Time-critical distributed contact for 6-dof haptic rendering of adaptively sampled reduced deformable models. In SCA '07: Proceedings of the 2007 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Eurographics Association: Aire-la-Ville, Switzerland, 2007; 171–180.
17. Kim YJ, Otaduy MA, Lin MC, Manocha D. Six-degree-of-freedom haptic rendering using incremental and localized computations. Presence: Teleoperators and Virtual Environments 2003; 12(3): 277–295.
18. Govindaraju NK, Knott D, Jain N, Kabul I, Tamstorf R, Gayle R, Lin MC, Manocha D. Interactive collision detection between deformable models using chromatic decomposition. In SIGGRAPH '05: ACM SIGGRAPH 2005 Papers, ACM: New York, NY, USA, 2005; 991–999.
19. Ho C-H, Basdogan C, Srinivasan MA. Efficient point-based rendering techniques for haptic display of virtual objects. Presence 1999; 8(5): 477–491.
20. Ho C-H, Basdogan C, Srinivasan MA. Ray-based haptic rendering: force and torque interactions between a line probe and 3D objects in virtual environments. International Journal of Robotics Research 2000; 19(7): 668–683.
21. Guéziec A. 'Meshsweeper': dynamic point-to-polygonal-mesh distance and applications. IEEE Transactions on Visualization and Computer Graphics 2001; 7(1): 47–61.
22. Chen H, Sun H. Multi-resolution haptic interaction of hybrid virtual environments. In VRST '04: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, ACM Press: New York, NY, USA, 2004; 201–208.
23. Lombardo J-C, Cani M-P, Neyret F. Real-time collision detection for virtual surgery. In CA '99: Proceedings of Computer Animation, IEEE Computer Society: Washington, DC, USA, 1999; 82.
24. Picinbono G, Lombardo J-C, Delingette H, Ayache N. Improving realism of a surgery simulator: linear anisotropic elasticity, complex interactions and force extrapolation. Journal of Visualization and Computer Animation 2002; 13(3): 147–167.
25. Lin MC. Efficient Collision Detection for Animation and Robotics. PhD Thesis, EECS Department, University of California, Berkeley, 1994.
26. Cohen JD, Lin MC, Manocha D, Ponamgi MK. I-COLLIDE: an interactive and exact collision detection system for large-scale environments. In Proceedings of the 1995 Symposium on Interactive 3D Graphics, 1995; 189–196.
27. Gottschalk S, Lin MC, Manocha D. OBBTree: a hierarchical structure for rapid interference detection. In SIGGRAPH '96: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, ACM Press: New York, NY, USA, 1996; 171–180.
28. James DL, Pai DK. BD-Tree: output-sensitive collision detection for reduced deformable models. ACM Transactions on Graphics (SIGGRAPH 2004) 2004; 23(3).
