Proceedings of the National Academy of Sciences of the United States of America
. 2000 May 16;97(11):5699–5703. doi: 10.1073/pnas.090060097

Fast methods for the Eikonal and related Hamilton–Jacobi equations on unstructured meshes

J A Sethian 1,*, A Vladimirsky 1
PMCID: PMC18495  PMID: 10811874

Abstract

The Fast Marching Method is a numerical algorithm for solving the Eikonal equation on a rectangular orthogonal mesh in O(M log M) steps, where M is the total number of grid points. The scheme relies on an upwind finite difference approximation to the gradient and a resulting causality relationship that lends itself to a Dijkstra-like programming approach. In this paper, we discuss several extensions to this technique, including higher order versions on unstructured meshes in Rⁿ and on manifolds and connections to more general static Hamilton–Jacobi equations.


Fast Marching Methods (1) are numerical algorithms for solving the nonlinear Eikonal equation

∥∇u(x)∥ = F(x),  x ∈ Ω,    [1]
u(x) = g(x),  x ∈ Γ ⊂ Ω,

on a Cartesian mesh in O(M log M) steps, where M is the total number of grid points in the domain. Here, Ω is a domain in Rⁿ, and the right-hand side F(x) > 0 is typically supplied as a known input to the equation, as is the boundary condition that u equals a known function g(x) given along a prescribed curve or surface Γ in Ω. The technique hinges on using numerically consistent upwind finite difference approximations to the operators in the Eikonal equation that select the correct viscosity solution. The structure of this finite difference approximation to the gradient yields a causality relationship that lends itself to an efficient programming approach.

The Fast Marching Method is connected to Huygens' principle, a construction involving expanding wavefronts, and to Dijkstra's method, an algorithm for computing smallest-cost paths on a network. The viscosity solution to the Eikonal equation ∥∇u(x)∥ = F(x) can be interpreted through Huygens' principle in the following way: circular wavefronts are drawn at each point on the boundary, with radius proportional to 1/F(x). The envelope of these wavefronts is then used to construct a new set of points, and the process is repeated; in the limit, the Eikonal solution is obtained. The Fast Marching Method mimics this construction: a computational grid is used to carry the solution u, and an upwind, viscosity-satisfying finite difference scheme is used to approximate the expanding wavefront.

The order in which the grid values produced through these finite difference approximations are obtained is reminiscent of Dijkstra's method, a one-pass algorithm for computing shortest paths on a network (2). Dijkstra's method keeps track of the "current smallest cost" for reaching a grid point and fans out along the network links to touch the adjacent grid points. The Fast Marching Method exploits the same idea in the context of an approximation to the underlying partial differential equation, rather than the discrete network links.

The Fast Marching Method is intertwined with some earlier work on front propagation, including work on curve and surface evolution in ref. 3, the suggestion to use schemes from hyperbolic conservation laws to approximate front motion in ref. 4, the introduction of level set methods by Osher and Sethian (5), and the narrow band level set method (6). In fact, the Fast Marching Method came in part from examining the limit of the narrow band method as the band was reduced to one grid cell. In this paper, we discuss several extensions to Fast Marching Methods, including higher order versions on Cartesian grids and unstructured meshes, on manifolds and in Rⁿ; we also explore the connections to more general static Hamilton–Jacobi equations. For the sake of notational simplicity, the discussion below is limited to R², R³, and two-dimensional manifolds; the results hold for arbitrary dimension.

Fast Marching Methods.

We begin by finding a discretized version of the Eikonal Eq. 1 on the Cartesian grid. The easiest way to obtain such a discretization is to replace the gradient by the first-order approximation (see ref. 7):

[ max(D_{ij}^{−x}u, −D_{ij}^{+x}u, 0)² + max(D_{ij}^{−y}u, −D_{ij}^{+y}u, 0)² ]^{1/2} = F_{ij},    [2]

where we have used the standard finite difference notation D_{ij}^{−x}u = (u_{i,j} − u_{i−1,j})/h and D_{ij}^{+x}u = (u_{i+1,j} − u_{i,j})/h. Here, u_{i,j} is the value of u at the grid point (ih, jh), with grid spacing h. The backward and forward operators D^{−y} and D^{+y} in the other coordinate direction are defined similarly. This approximation is consistent and stable and ensures that the viscosity solution is chosen.
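To make the update concrete, here is a minimal sketch (in Python; the function name and argument layout are ours, not from the paper) of solving the one-point quadratic implied by Eq. 2, given the smaller Accepted neighbor value in each coordinate direction:

    import math

    def eikonal_update_2d(u_xmin, u_ymin, F_ij, h):
        """Solve the one-point quadratic of Eq. 2 for u_ij, given the smaller
        Accepted neighbor value in the x and y directions (math.inf if none)."""
        a, b = sorted((u_xmin, u_ymin))          # a <= b
        f = F_ij * h
        if b == math.inf or b - a >= f:
            return a + f                         # one-sided (one-dimensional) update
        # both directions participate: (u - a)^2 + (u - b)^2 = f^2
        return 0.5 * (a + b + math.sqrt(2.0 * f * f - (a - b) ** 2))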

One method for solving Eq. 2, described by Rouy and Tourin (7), is through iteration, which in three dimensions leads to an O(N⁴) algorithm, where N is the number of points in each direction. The key to the Fast Marching Methods lies in the observation that the upwind approximation possesses a specific causality relationship. By "causality," we mean that the solution of Eq. 2 at each grid point depends only on the smaller adjacent values. Thus, we can systematically build the solution in the order of increasing values of u.

Suppose, at some time, the solution to the Eikonal equation is known at a set of points (denoted Accepted points). For every not-yet-Accepted grid point that has an Accepted neighbor, we compute a trial solution to the quadratic Eq. 2 by using the given values for u at Accepted points and values of ∞ at all other points. We now observe that the smallest of these trial solutions must be correct, because it depends only on Accepted values that are themselves smaller. This causality relationship can be exploited to compute the solution efficiently and systematically as follows.

First, tag the points in the initial conditions as Accepted. Then tag as Considered all points one grid point away and compute values at those points by solving Eq. 2. Finally, tag as Far all other grid points. Then the loop is:

1. Begin loop: Let Trial be the point in Considered with the smallest value of u.

2. Tag as Considered all neighbors of Trial that are not Accepted. If a neighbor is in Far, remove it from that set and add it to the set Considered.

3. Add Trial to Accepted; remove it from Considered.

4. Recompute the values of u at all Considered neighbors of Trial by solving the piecewise quadratic Eq. 2.

5. Return to the top of the loop.

This procedure is the Fast Marching Method as described in ref. 1. Some early applications include photolithography (12), a comparison of this approach with volume-of-fluid techniques (13), and a fast algorithm for image segmentation (8); see also ref. 9 for a different Dijkstra-like algorithm that obtains the viscosity solution through a control-theoretic discretization, which hinges on a causality relationship based on the optimality criterion.

The key to an efficient implementation of the Fast Marching Method lies in a fast way of locating the Considered grid point with the smallest value for u. An efficient scheme for doing so, discussed in detail in ref. 10, can be devised by using a min-heap structure, similar to what is done in Dijkstra's method. Given M elements in the heap, this structure allows us to change any element in the heap and reorder the heap in O(log M) steps. Thus, the computational efficiency of the total Fast Marching Method for the mesh with M total points is O(M log M); it takes M steps to touch each mesh point, where each step is O(log M), because the heap has to be reordered each time the values are changed.
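A compact sketch of the resulting loop, assuming the single-point update above and using Python's heapq module as the min-heap (here stale heap entries are simply skipped when popped, a common simplification of the back-pointer heap described in ref. 10; all names are ours):

    import heapq
    import math

    def fast_marching(F, h, sources):
        """Sketch of the loop on an N x N Cartesian grid.  F is an N x N array of
        positive right-hand-side values, h the grid spacing, and sources a dict
        {(i, j): g} of boundary points with known values.  Points with finite u
        that are not yet Accepted play the role of the Considered set."""
        N = len(F)
        INF = math.inf
        u = [[INF] * N for _ in range(N)]
        accepted = [[False] * N for _ in range(N)]
        heap = [(g, i, j) for (i, j), g in sources.items()]
        for g, i, j in heap:
            u[i][j] = g
        heapq.heapify(heap)

        while heap:
            val, i, j = heapq.heappop(heap)
            if accepted[i][j]:
                continue                         # stale entry, skip
            accepted[i][j] = True                # Trial becomes Accepted
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                m, n = i + di, j + dj
                if 0 <= m < N and 0 <= n < N and not accepted[m][n]:
                    # smaller Accepted neighbor value in each coordinate direction
                    ux = min((u[m + s][n] for s in (-1, 1)
                              if 0 <= m + s < N and accepted[m + s][n]), default=INF)
                    uy = min((u[m][n + s] for s in (-1, 1)
                              if 0 <= n + s < N and accepted[m][n + s]), default=INF)
                    new = eikonal_update_2d(ux, uy, F[m][n], h)
                    if new < u[m][n]:
                        u[m][n] = new
                        heapq.heappush(heap, (new, m, n))
        return u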

Two extensions to the basic technique have added to the capabilities of this method. First, Kimmel and Sethian (11) moved the basic first-order scheme to unstructured acute triangulated meshes in R² and on two-dimensional surfaces and used this technique to compute shortest-path geodesics on triangulated manifolds. Second, higher order versions of the Cartesian Fast Marching Method were developed by Sethian in ref. 10; these techniques replaced the basic first-order upwind operators by one-sided, double-backward second-order approximations to the first derivatives in each coordinate direction in such a way that the one-pass nature of the method was preserved. For details of these second-order extensions, see ref. 10.

In this paper, we build second-order methods for arbitrary unstructured meshes on manifolds and in Rⁿ and show how certain static Hamilton–Jacobi equations in the plane can be recast as Eikonal equations on appropriately chosen manifolds.

Our interest in unstructured meshes stems from our intent to solve Hamilton–Jacobi equations on manifolds.

Unstructured Mesh Methods.

Derivative approximations.

Because there is no natural choice of coordinate system for an unstructured mesh, we compute the gradient as a linear combination of n directional derivatives. For any grid point x, a difference approximation of the directional derivative is readily available for each direction x − x_r, where x_r is any grid point adjacent to x.

Suppose the directional derivative approximations are available for the directions defined by the linearly independent unit row vectors P_1, … , P_n. (If x is a vertex of the simplex xx_1 ⋯ x_n, then we choose P_r = (x − x_r)/∥x − x_r∥ for r = 1, … , n.) Consider the n × n nonsingular matrix P having the vectors P_r (r = 1, … , n) as its rows. Let v_r(x) be the value of the directional derivative in the direction P_r evaluated at the point x. Assuming that the function u is differentiable at x, we have P∇u(x) = v(x), where v(x) is the column vector of the values v_r(x). Combining this equation with the Eikonal equation, we can now write an equation for v(x):

v(x)^T (PP^T)^{−1} v(x) = F²(x).    [3]

The particular difference operator used to approximate vr will depend on the type of the mesh and will determine the order of convergence of the numerical method. However, if the difference approximations depend linearly on u(x) (i.e., on the value of the function at the point where the approximation is performed), then the resulting discretized version of the Eikonal equation will always be quadratic in terms of u(x) regardless of the number of dimensions.

To obtain the discretized equation, we now replace each v_r with the corresponding difference approximation: v_r ≈ a_r u(x) + b_r, where b_r depends linearly on the values of u (and possibly of ∇u, for higher order schemes) at the grid points around x. For convenience, let Q = (PP^T)^{−1} and write v ≈ u(x)a + b. Then, the discretized version of Eq. 3 for the grid point x is the quadratic equation:

(u(x)a + b)^T Q (u(x)a + b) = F²(x).    [4]
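As an illustration, here is a sketch (our names; NumPy assumed) of the resulting first-order update of u(x) from a single triangle xx_1x_2 in the plane, returning both the candidate value and the vector Q(u(x)a + b) that is used by the upwinding test of the next subsection:

    import numpy as np

    def simplex_update(x, x1, x2, u1, u2, F):
        """First-order candidate for u(x) from triangle x x1 x2 by solving Eq. 4.
        Inputs are 2-vectors (NumPy arrays) and the Accepted values u1, u2.
        Returns the candidate and the vector Q(u a + b) needed for upwinding."""
        d1, d2 = np.linalg.norm(x - x1), np.linalg.norm(x - x2)
        P = np.array([(x - x1) / d1, (x - x2) / d2])     # rows P_r; assumed nonsingular
        Q = np.linalg.inv(P @ P.T)
        a = np.array([1.0 / d1, 1.0 / d2])               # v_r ~ a_r u(x) + b_r
        b = np.array([-u1 / d1, -u2 / d2])
        # quadratic A u^2 + B u + C = 0 from (u a + b)^T Q (u a + b) = F^2
        A = a @ Q @ a
        B = 2.0 * (a @ Q @ b)
        C = b @ Q @ b - F * F
        disc = B * B - 4.0 * A * C
        if disc < 0.0:
            return np.inf, None                          # no real update from this simplex
        u = (-B + np.sqrt(disc)) / (2.0 * A)             # larger root
        return u, Q @ (u * a + b)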

Upwinding criteria.

Suppose we are computing the value of u(x) from some simplex having x as one of its vertices. If u is known at the other vertices of that simplex, then u(x) can be computed by solving the quadratic Eq. 4. Now we can state our upwinding criterion precisely: the computed value of u(x) can be accepted only if the update comes from within that simplex, i.e., only if the computed approximation to −∇u(x) lies inside the simplex (see Fig. 1). This restriction is equivalent to requiring that all of the components of the vector Qv ≈ Q(u(x)a + b) are nonnegative. If this condition is satisfied, we say that x is updated from this simplex and that this simplex is defining for x. The upwinding condition for an arbitrary triangle xx_1x_2 is equivalent to the two inequalities (u(x)a_1 + b_1) ≥ (P_1P_2^T)(u(x)a_2 + b_2) and (u(x)a_2 + b_2) ≥ (P_1P_2^T)(u(x)a_1 + b_1), where P_r = (x − x_r)/∥x − x_r∥. Thus, for the first-order difference approximation on the equilateral triangle xx_Ax_B, the upwinding requirement means that we accept u(x) only if u(x) ≥ 2u(x_B) − u(x_A) and u(x) ≥ 2u(x_A) − u(x_B).

Figure 1. Examples of acceptable (Left) and unacceptable (Right) approximations for ∇u(x). In the latter case, the upwinding requirement is not satisfied, and the update for the point x should be computed by using other simplexes.

It is possible that selecting a different simplex to define P will result in a different value of u(x). To avoid this ambiguity, we always choose the defining simplex that produces the smallest new value for u(x); this choice corresponds to a scheme similar to Godunov's method on a Cartesian grid.
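Putting the two previous ingredients together, here is a sketch of this Godunov-like choice over the adjacent simplexes (reusing the simplex_update sketch above; only simplexes whose other vertices are already Accepted are passed in):

    import numpy as np

    def update_from_simplexes(x, simplexes, F):
        """Among the adjacent simplexes whose candidate satisfies the upwinding
        test (all components of Q(u a + b) nonnegative, i.e., -grad u(x) points
        into the simplex), keep the smallest candidate value for u(x).
        simplexes is a list of (x1, x2, u1, u2) with both vertices Accepted."""
        best = np.inf
        for x1, x2, u1, u2 in simplexes:
            cand, w = simplex_update(x, x1, x2, u1, u2, F)   # sketch above
            if w is not None and np.all(w >= 0.0) and cand < best:
                best = cand
        return best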

Extension to obtuse meshes.

The original Fast Marching Method requires solving Eq. 4 for all of the simplexes adjacent to the grid point x. But what should be done if one or more of the vertices are not yet Accepted? Previous versions of the Fast Marching Method handled this difficulty by using the values at the Considered (as well as Accepted) points for updates and/or by computing updates based on the available partial information (values of u and/or ∇u only at the Accepted vertices of the simplex; ref. 10). Even though this technique is quite useful for “nice” acute triangulations, it can lead to numerical instability of the scheme when used for arbitrary unstructured meshes (see ref. 11). Computing updates based on partial information (i.e., using some but not all vertices) can be particularly dangerous for higher accuracy methods, because the obtained estimate for ∇u might be very different from the true gradient.

The causality relationship requires u(x) to depend only on the smaller values of u at the grid points adjacent to x, which means that if u(x) is updated from simplex xx_1x_2 [i.e., u(x) is produced by solving Eq. 4 corresponding to that simplex], then both u(x_1) and u(x_2) should be smaller than u(x). This condition is necessary for using the Fast Marching Method, because that method accepts the values at the grid points in ascending order; thus, if either x_1 or x_2 has a larger value than x, then it will not be Accepted at the time when we are evaluating x and cannot be used for computing u(x). For the Cartesian grid, the causality relationship is the result of using upwind difference approximations. Previously, we defined the upwinding criteria for an arbitrary triangulated mesh: if u(x) is updated from simplex xx_1x_2, then the vector −∇u(x) should point into that simplex. It is easy to show that the causality still follows from the upwinding requirement, provided the simplex has only acute angles.

However, if the triangulated mesh contains simplexes with obtuse angles, then the causality relationship might not hold even in the limit. Consider, for example, the front advancing with unit speed in the direction (1,1) (suppose we are solving ∥∇u∥ = 1 in R² with the boundary condition u = 0 on the line y = −x). Consider the simplex xx_1x_2 from Fig. 2. The update for the grid point x should clearly be coming from this simplex, because −∇u(x) points into it. However, it is also clear that u(x_1) < u(x) < u(x_2); thus, u(x_2) will not be available for computing u(x).

Figure 2. Splitting strategies. (a) Constructing the splitting section for an obtuse angle. (b) Modified construction of the splitting section.

One possible solution is to locally build numerical support at obtuse angles, as was suggested in ref. 11. For an obtuse angle x_1xx_2 (see Fig. 2a), consider its splitting section, i.e., an angle such that any ray inside it splits x_1xx_2 into two acute angles. Find the closest Accepted grid point in the splitting section and then use that point as if it were adjacent to x. Thus, we would use simplex x_1xx_6 in Fig. 2a.

There are some disadvantages to this method. First of all, the implementation in higher dimensions is rather cumbersome. Second, implementing it for triangulated surfaces requires an additional step of "unfolding" (11). Third, the method is no longer confined to the grid points immediately adjacent to x, because we need to look back for an Accepted point in the splitting section. An upper bound on how far back we need to look is available, but it depends on the maximum angle of the triangulation: a wider angle corresponds to a narrower splitting section, which is less likely to contain a grid point near x.

We observe that the splitting section method can be improved further by noting that the cause of the problem is not just the obtuse angle in the defining simplex but also the fact that some of the vertices of that simplex are not yet Accepted. Thus, it is not necessary to find an Accepted node in the splitting section; it is enough to find an Accepted node such that the resulting virtual simplex intersects the splitting section. This strategy often allows us to look back much less far; in Fig. 2b, for example, the grid point x_3 is the first Accepted point found such that the segment xx_3 intersects the splitting section. Therefore, the virtual simplex x_1xx_3 is used to compute the update for u(x).

We note that our construction works equally well on manifolds. As an example, in Fig. 3, we show offsets equidistant from the bounding box on a manifold that represents a complex machine part; the triangulation is obtained by mapping a regular triangular mesh in the xy plane onto the surface, thereby creating a large number of obtuse and near-degenerate triangles, including some with angles bigger than 160°.

Figure 3. Obtuse triangulated Fast Marching Method.

Higher Order Versions.

We now create higher accuracy Fast Marching Methods by using higher order difference approximations for the directional derivatives. It would seem that such approximations can be used only if the solution u is sufficiently smooth; nonetheless, the fact that ∇u is undefined at some points does not prevent us from using this approach: u is differentiable almost everywhere, and characteristics never emanate from the shocks, i.e., no information is created at a shock. However, the order of the difference approximation does not always correspond to the order of convergence of the method. Still, such methods converge faster than the ones that use the first-order approximations (10). We further discuss the rate of convergence of the higher accuracy methods in Numerical Tests.

Cartesian higher order methods.

A higher order Fast Marching Method on a Cartesian grid was first presented by Sethian in ref. 10. Here, we show that such methods can also be obtained from the directional derivative perspective described in Derivative approximations.

For a Cartesian grid, the natural choice of the coordinate system is aligned with the grid lines. Then, for any grid point x, each direction vector P_r is equal (up to sign) to one of the canonical basis vectors. Thus, for x inside the grid (i.e., away from the boundary), both points x_{r,1} = x − hP_r and x_{r,2} = x − 2hP_r are also present in the grid. Then we can use the well known one-sided second-order difference approximation for the directional derivative

v_r ≈ [3u(x) − 4u(x_{r,1}) + u(x_{r,2})] / (2h).    [5]

Using the notation introduced in Derivative approximations, we can write

a_r = 3/(2h),    b_r = [−4u(x_{r,1}) + u(x_{r,2})] / (2h).    [6]

Because this approximation is valid only inside the domain, we need to have exact values of u for the grid points near the boundary to start the algorithm. If these values are not available, we can use the first-order Fast Marching Method with a much smaller mesh size to obtain second-order accurate approximations of u at those points. Because the approximation is second-order only away from a singularity, we note that exact (or second-order accurate) values of u are also needed for the grid points in a narrow band around any singularities on the boundary. Finally, we note that the same higher order difference approximations can be used even for non-Cartesian grids, provided all of the grid points lie on straight lines and are equally spaced along those lines.
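A small sketch (our names) of the resulting coefficients, with a switch back to the first-order stencil, in the spirit of ref. 10, when the second point of Eq. 5 is not available as an Accepted value:

    def directional_coeffs_cartesian(u_r1, u_r2, h):
        """Coefficients (a_r, b_r) with v_r ~ a_r*u(x) + b_r along direction P_r,
        where u_r1 = u(x - h P_r) and u_r2 = u(x - 2h P_r).  Pass u_r2=None when
        the second point is unavailable to fall back to the first-order stencil."""
        if u_r2 is not None:
            return 3.0 / (2.0 * h), (-4.0 * u_r1 + u_r2) / (2.0 * h)   # Eqs. 5-6
        return 1.0 / h, -u_r1 / h                                       # first order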

Unstructured higher order methods.

Typically, we do not have orthogonal difference operators on an unstructured mesh. Fortunately, we can still build higher order directional derivative approximations by using the gradient information at the grid points adjacent to x.

Consider a grid point x_r adjacent to x and the corresponding direction vector P_r = (x − x_r)/∥x − x_r∥. Supposing that both u(x_r) and ∇u(x_r) are known, we can write a second-order approximation for the directional derivative

v_r ≈ 2[u(x) − u(x_r)] / ∥x − x_r∥ − P_r·∇u(x_r).    [7]

Using the notation introduced in Derivative approximations, we can write

a_r = 2/∥x − x_r∥,    b_r = −2u(x_r)/∥x − x_r∥ − P_r·∇u(x_r).    [8]

We can also compute a second-order accurate approximation of the gradient at x, namely ∇u(x) = P^{−1}v ≈ P^{−1}[u(x)a + b], provided u(x) itself is known with at least second-order accuracy. Thus, as the algorithm runs, we store for each Accepted grid point the computed values of both u and ∇u, to be used later when recomputing the Considered points.
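A sketch of the corresponding coefficients and of the gradient recovered at acceptance time (our names; NumPy assumed):

    import numpy as np

    def directional_coeffs_unstructured(x, xr, u_r, grad_r):
        """Second-order coefficients (a_r, b_r) of Eqs. 7-8, using the stored
        value u_r = u(x_r) and gradient grad_r = grad u(x_r) at the Accepted
        neighbor x_r."""
        d = np.linalg.norm(x - xr)
        P_r = (x - xr) / d
        return 2.0 / d, -2.0 * u_r / d - P_r @ grad_r

    def gradient_at_acceptance(u_x, P, a, b):
        """Once u(x) is Accepted, recover grad u(x) = P^{-1} v with v ~ u(x)a + b
        and store it with the point for later use in Eqs. 7-8."""
        v = u_x * np.asarray(a) + np.asarray(b)
        return np.linalg.solve(np.asarray(P), v)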

Because this approach requires information about the gradient, we need to have exact values of ∇u for the grid points on the boundary to start the algorithm. If these values are not available, we can use the first-order Fast Marching Method with a much denser mesh in the narrow band near the boundary to obtain accurate approximations of ∇u at the points in that band.

Finally, we note that an additional step of “gradient mapping” is required to use this higher order method on nonsmooth triangulated surfaces (J.A.S. and A.V., unpublished work).

Numerical tests.

Next, we test the methods by computing numerical solutions of the Eikonal equation ∥∇u(x)∥ = 1 in R² with different boundary conditions. The viscosity solution of this equation with zero boundary condition is the "distance from the boundary" function. Two examples with shocks are considered: one in which the shock line runs along the grid lines and another in which the shock line is not aligned with the grid. Both the L2 and L∞ errors are computed on the grid. Note that the rate of convergence in the L2 norm generally corresponds to the order of the difference approximations. However, the rate of convergence in the L∞ norm might be lower (depending on the location of the shocks relative to the grid). This phenomenon is due to the fact that the higher order approximations are meaningful only where the solution is sufficiently smooth. Fortunately, viscosity solutions are differentiable almost everywhere, and no information emanates from the shocks. The upwind difference approximations used in our methods help the numerical solution mimic this useful property of the true solution; thus, the L2 convergence is not affected by the larger errors committed near the shocks.
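For reference, a sketch of the discrete error norms reported below (the paper does not spell out its normalization of the discrete L2 norm, so a root-mean-square over the mesh nodes is assumed here):

    import numpy as np

    def grid_errors(u_num, u_exact):
        """Discrete error norms over all mesh nodes: (assumed RMS-style L2, L-infinity)."""
        e = np.abs(np.asarray(u_num) - np.asarray(u_exact)).ravel()
        return np.sqrt(np.mean(e ** 2)), np.max(e)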

The tests are performed on the parallelogram with vertices at (0,0), (1,0), (0.5, √3/2), and (1.5, √3/2). The exact values of the distance are used in a narrow band of radius 0.1 around the initial points to start the algorithm. We use a grid of equilateral triangles on this domain. The numerical results for a nonacute triangulation are very similar.

We first use the Fast Marching Method to compute the distance from two vertices: (0,0) and (1.5, √3/2). The shock line runs along the edges of the simplexes (along the shorter diagonal of the parallelogram; Fig. 4). Table 1 shows the errors under mesh refinement.

Figure 4. First- and "second-order" computations of the distance from two points. The shock line runs along the edges of the simplexes.

Table 1. Errors for the first- and "second-order" schemes for the example shown in Fig. 4

  Grid     L2 error                       L∞ error
           First-order    Second-order    First-order    Second-order
  21²      0.00818        0.00753         0.01500        0.02161
  41²      0.00432        0.00142         0.00785        0.00559
  81²      0.00213        0.00024         0.00386        0.00130
  161²     0.00108        0.00004         0.00195        0.00031
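As a quick consistency check on Table 1, the observed order between successive grids, which roughly halve h, is the base-2 logarithm of the ratio of consecutive errors; for the L2 columns this gives roughly first-order behavior for the first-order scheme and better than second-order behavior for the higher order scheme (the last ratios are sensitive to the table's rounding):

    import math

    # L2 errors from Table 1
    first  = [0.00818, 0.00432, 0.00213, 0.00108]
    second = [0.00753, 0.00142, 0.00024, 0.00004]

    def observed_orders(errors):
        return [math.log2(errors[k] / errors[k + 1]) for k in range(len(errors) - 1)]

    print(observed_orders(first))    # roughly [0.92, 1.02, 0.98]
    print(observed_orders(second))   # roughly [2.41, 2.56, 2.58]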

We now compute the minimum distance from the other two vertices: (1,0) and (0.5, √3/2). In this case, the shock line is not aligned with the grid lines (the edges of the simplexes), and it runs along the longer diagonal (Fig. 5). Table 2 shows the errors under mesh refinement. The L2 error is still second-order convergent for the second-order scheme, whereas its L∞ error converges at a lower rate, although it is still much smaller than the L∞ error of the first-order scheme. This effect is to be expected: because the shock is not aligned with the grid, the approximation is sometimes performed across the shock line, which leads to first-order errors there.

Figure 5. First- and "second-order" computations of the distance from two points. The shock line is not aligned with the edges of the simplexes.

Table 2. Errors for the first- and "second-order" schemes for the example shown in Fig. 5

  Grid     L2 error                       L∞ error
           First-order    Second-order    First-order    Second-order
  21²      0.00645        0.00533         0.01221        0.02034
  41²      0.00339        0.00117         0.00642        0.00644
  81²      0.00165        0.00022         0.00315        0.00203
  161²     0.00084        0.00005         0.00159        0.00071

Eikonal Equations on Surfaces and More General Static Hamilton–Jacobi Equations.

Suppose we are given the graph of a function z = f(x,y) and attempt to solve the Eikonal equation ∥∇u∥ = F(x,y) on that manifold; here, 1/F(x,y) gives the speed in the direction normal to the level line u = constant on the manifold z = f(x,y). Projected down onto the xy plane, this Eikonal equation translates into a particular static Hamilton–Jacobi equation in the plane. We now use this argument in reverse as follows. Consider any static Hamilton–Jacobi equation in the xy plane of the form

a(x,y)u_x² + b(x,y)u_x u_y + c(x,y)u_y² = d(x,y),    [9]

together with boundary conditions for u. Now suppose we can find functions p(x,y), q(x,y), and F(x,y) such that p_y = q_x, F(x,y) > 0, and

a = 1 + q²,    b = −2pq,    c = 1 + p²,    d = F²(x,y)(1 + p² + q²).    [10]

It can then be shown (see ref. 10) that the solution of Eq. 9 can be obtained by solving the Eikonal equation ∥∇u∥ = F(x,y) on the manifold z = f(x,y), where f_x = p and f_y = q. Thus, for any static Hamilton–Jacobi equation of the form given by Eq. 9, if we can find functions p and q satisfying the above, then we can construct the surface z = f(x,y), approximate it with a triangulated mesh, and then solve the straightforward Eikonal problem on the manifold.
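For the reader's convenience, here is a brief derivation sketch of where the matching in Eq. 10 comes from; it uses only the standard Riemannian geometry of the graph z = f(x,y) with p = f_x and q = f_y (see ref. 10), written here in our notation:

\[
G \;=\; \begin{pmatrix} 1+p^2 & pq \\ pq & 1+q^2 \end{pmatrix},
\qquad
G^{-1} \;=\; \frac{1}{1+p^2+q^2}\begin{pmatrix} 1+q^2 & -pq \\ -pq & 1+p^2 \end{pmatrix},
\]
\[
\|\nabla_M u\|^2 \;=\; (\nabla u)^{T} G^{-1} \nabla u
\;=\; \frac{(1+q^2)u_x^2 - 2pq\,u_x u_y + (1+p^2)u_y^2}{1+p^2+q^2} \;=\; F^2(x,y).
\]

Clearing the denominator gives exactly the form matched term by term in Eqs. 9 and 10.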

As an example, we consider the following equation:

(1 + γ sin²(2πx)cos²(2πy)) u_x² − 2γ sin(2πx)cos(2πx)sin(2πy)cos(2πy) u_x u_y + (1 + γ cos²(2πx)sin²(2πy)) u_y² = 1 + γ[sin²(2πx)cos²(2πy) + cos²(2πx)sin²(2πy)],    [11]

where γ = (0.9π)², x, y ∈ [0,1], and the boundary condition is u(0.5, 0.5) = 0. We can find functions p = 0.9π cos(2πx)sin(2πy) and q = 0.9π sin(2πx)cos(2πy), which satisfy our compatibility requirements, and then solve the Eikonal equation ∥∇u∥ = 1 on the surface z = f(x,y) = 0.45 sin(2πx)sin(2πy). Fig. 6 shows the evolving front on the surface and the solution to the original problem in the plane.
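A quick symbolic check (a sketch using SymPy; the variable names are ours) that the chosen p and q satisfy the compatibility condition and integrate to the stated surface:

    import sympy as sp

    x, y = sp.symbols('x y')
    p = sp.Rational(9, 10) * sp.pi * sp.cos(2 * sp.pi * x) * sp.sin(2 * sp.pi * y)
    q = sp.Rational(9, 10) * sp.pi * sp.sin(2 * sp.pi * x) * sp.cos(2 * sp.pi * y)
    f = sp.Rational(9, 20) * sp.sin(2 * sp.pi * x) * sp.sin(2 * sp.pi * y)   # 0.45 sin sin

    assert sp.simplify(sp.diff(p, y) - sp.diff(q, x)) == 0      # compatibility p_y = q_x
    assert sp.simplify(sp.diff(f, x) - p) == 0                  # f_x = p
    assert sp.simplify(sp.diff(f, y) - q) == 0                  # f_y = q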

Figure 6. Non-Eikonal static Hamilton–Jacobi equation.

We have presented techniques for solving certain static Hamilton–Jacobi equations on unstructured meshes. Further discussion may be found elsewhere (J.A.S. and A.V., unpublished work).

Acknowledgments

The authors thank L. C. Evans, O. Hald, and M. Falcone. Supported by the Applied Mathematics and Science Office of Energy Research, U.S. Department of Energy (Grant DE-AC03-76SF00098) and the Office of Naval Research (Grant FDN00014-96-1-0381).

Footnotes

Article published online before print: Proc. Natl. Acad. Sci. USA, 10.1073/pnas.090060097.

Article and publication date are at www.pnas.org/cgi/doi/10.1073/pnas.090060097

References

1. Sethian J A. Proc Natl Acad Sci USA. 1996;93:1591–1595. doi: 10.1073/pnas.93.4.1591.
2. Dijkstra E W. Numer Math. 1959;1:269–271.
3. Sethian J A. Commun Math Phys. 1985;101:487–499.
4. Sethian J A. In: Variational Methods for Free Surface Interfaces. Concus P, Finn R, editors. New York: Springer; 1987.
5. Osher S, Sethian J A. J Comp Phys. 1988;79:12–49.
6. Adalsteinsson D, Sethian J A. J Comp Phys. 1995;118:269–277.
7. Rouy E, Tourin A. SIAM J Numer Anal. 1992;29:867–884.
8. Malladi R, Sethian J A. Proc Natl Acad Sci USA. 1996;93:9389–9392. doi: 10.1073/pnas.93.18.9389.
9. Tsitsiklis J N. IEEE Trans Autom Control. 1995;40:1528–1538.
10. Sethian J A. Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision and Materials Sciences. Cambridge, U.K.: Cambridge Univ. Press; 1999.
11. Kimmel R, Sethian J A. Proc Natl Acad Sci USA. 1998;95:8431–8435. doi: 10.1073/pnas.95.15.8431.
12. Sethian J A. Proceedings of the International Symposium on Microlithography. Vol. 2726. Santa Clara, CA: SPIE; 1996.
13. Helmsen J, Puckett E G, Colella P, Dorr M. Proceedings of the International Symposium on Microlithography. Vol. 2726. Santa Clara, CA: SPIE; 1996.
