PLOS ONE. 2015 Jul 28;10(7):e0132354. doi: 10.1371/journal.pone.0132354

Algebraic Error Based Triangulation and Metric of Lines

Fuchao Wu 1, Ming Zhang 1,*, Guanghui Wang 2, Zhanyi Hu 1
Editor: Yuanquan Wang
PMCID: PMC4517892  PMID: 26218615

Abstract

Line triangulation, a classical geometric problem in computer vision, is to determine the 3D coordinates of a line based on its 2D image projections from more than two views of cameras with known projection matrices. Compared to point features, line segments are more robust to matching errors, occlusions, and image uncertainties. In addition to line triangulation, a better metric is needed to evaluate 3D errors of line triangulation. In this paper, the line triangulation problem is investigated by using the Lagrange multipliers theory. The main contributions include: (i) Based on the Lagrange multipliers theory, a formula to compute the Plücker correction is provided, and from the formula, a new linear algorithm, LINa, is proposed for line triangulation; (ii) two optimal algorithms, OPTa-I and OPTa-II, are proposed by minimizing the algebraic error; and (iii) two metrics on 3D line space, the orthogonal metric and the quasi-Riemannian metric, are introduced for the evaluation of line triangulations. Extensive experiments on synthetic data and real images are carried out to validate and demonstrate the effectiveness of the proposed algorithms.

Introduction

Line triangulation [1], [2] refers to the process of determining a 3D line given its projections in two or more images and the corresponding camera matrices. As one of the fundamental problems in computer vision, this problem is trivial in theory, since the corresponding 3D line is the intersection of the back-projection planes of the image lines. However, when the number of views is larger than 2, the back-projection planes usually do not intersect in a single line in 3D space due to measurement errors and image noise. This leads to the problem of finding a 3D line that optimally fits the measured data, i.e., optimal line triangulation.

Minimizing the algebraic error of line triangulation is a linear least squares problem with a quadratic constraint (called the Klein constraint), as defined in Section 2 of this paper. Bartoli and Sturm [3], [4] proposed a linear algorithm for the algebraic error minimization. This algorithm first finds a solution of the corresponding unconstrained linear least squares problem (i.e., ignoring the Klein constraint); the solution is then corrected by a singular value decomposition (SVD) method that enforces the Klein constraint. This algorithm yields only an approximation of the optimal solution of the algebraic error minimization. The paper [5] proposed a suboptimal solution to algebraic-error line triangulation by relaxing the quadratic unit norm constraint to six linear constraints; this still cannot yield an optimal solution. To the best of our knowledge, how to find the optimal solution of the algebraic error minimization remains an open problem.

In studies on line triangulation, a natural question is which of the various optimality criteria is the “best”. In order to answer this question, we need a criterion that is independent of those optimality criteria to describe the “bestness”. One intuitive criterion is the 3D error, i.e., the distance between a reconstructed line and its ground truth. The Euclidean distance does not give a reasonable measure since it is not an intrinsic distance on 3D line space. So far, no study on metrics of 3D lines is available in the literature, and thus the evaluation of line triangulations is still an open problem.

This paper focuses on the triangulations and metrics of lines. The main contributions are summarized as follows:

  • Based on the Lagrange multiplier theory, a formula for the Plücker correction is given; this formula is also used to establish a quasi-Riemannian metric on 3D line space. From the formula, a new linear algorithm, LINa, is proposed for line triangulation. The computational cost of the new linear algorithm is much lower than that of the SVD-based method in the literature.

  • For the algebraic error minimization, two new algorithms, OPTa-I and OPTa-II, are proposed to find the optimal solution. The OPTa-I is based on finding roots of a system of 2-degree polynomial equations in five variables; and the OPTa-II is based on solving a system of polynomial equations in two variables (one polynomial is of 6-degree and the other is of 10-degree). The continuous homotopy algorithm [6], [7] is used to solve these systems of polynomial equations.

  • Two new metrics on 3D line space, named as the orthogonal metric and the quasi-Riemannian metric, are proposed for the evaluation of line triangulations. The orthogonal metric is based on the angular distance on rotation groups [8] and the orthogonal representation of 3D lines [4]; and the quasi-Riemannian metric is based on the Riemannian metric on the 5-dimensional unit sphere and our proposed Plücker correction formula.

The rest of the paper is organized as follows. Section 2 presents some preliminaries used in the paper. The Plücker correction formula and a new linear algorithm are presented in Section 3. Section 4 elaborates the two optimal algorithms for the algebraic error minimization. Section 5 gives two new metrics on 3D line space. Some experimental results with synthetic and real data are presented in Section 6 and Section 7, respectively. Finally, the paper is concluded in Section 8.

Preliminaries

2.1 Plücker Coordinates

In 3D projective space, the Plücker coordinates of a line are defined by a nonzero 6-vector:

$L = \begin{pmatrix} x \times y \\ x_4 y - y_4 x \end{pmatrix} \triangleq \begin{pmatrix} u \\ v \end{pmatrix}$  (1)

where $X = \begin{pmatrix} x \\ x_4 \end{pmatrix}$ and $Y = \begin{pmatrix} y \\ y_4 \end{pmatrix}$ are two non-coincident points on the line. The Plücker coordinates are homogeneous, since the two 6-vectors computed from two different pairs of points on the line are equal up to a nonzero factor. From Eq (1), it is easy to see that $u^T v = (x \times y)^T(x_4 y - y_4 x) = 0$, i.e., the Plücker coordinates satisfy $u^T v = 0$, or, written in matrix form:

$L^T K L = 0, \qquad K = \begin{pmatrix} 0 & I_3 \\ I_3 & 0 \end{pmatrix}$  (2)

In 5D projective space, the quadric defined by Eq (2) is called the Klein quadric [9], thus, the Plücker coordinates satisfies the Klein quadric constraint. Conversely, if a nonzero 6-vector satisfies the Klein constraint, it must be the Plücker coordinates of a line in a 3D projective space.
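As a concrete illustration of Eqs (1) and (2), the construction of Plücker coordinates and the Klein constraint can be checked numerically. The sketch below uses numpy; the function names are ours, not from the paper.

```python
import numpy as np

def plucker_from_points(X, Y):
    """Plücker coordinates L = (u; v) of the line through the
    homogeneous points X = (x; x4) and Y = (y; y4), following Eq (1)."""
    x, x4 = X[:3], X[3]
    y, y4 = Y[:3], Y[3]
    u = np.cross(x, y)      # u = x × y
    v = x4 * y - y4 * x     # v = x4*y - y4*x
    return np.concatenate([u, v])

def klein_residual(L):
    """L^T K L with K = [[0, I3], [I3, 0]]; equals 2*u^T v and
    vanishes exactly on the Klein quadric of Eq (2)."""
    return 2.0 * float(L[:3] @ L[3:])
```

For any pair of distinct points the residual is zero by construction, and rescaling either point rescales L, reflecting the homogeneity of Plücker coordinates.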

2.2 Point-Line Distance

In the image plane, the algebraic distance from a point $x = (x, y, 1)^T$ to a line $l$ is defined as [10]:

$d_a(x, l) = |x^T l|, \qquad l^T l = 1$  (3)

Given a measured point set $\mathcal{M} = \{x_j = (x_j, y_j, 1)^T : 1 \le j \le M\}$ of a line $l$, let

$l_a = \arg\min_l \sum_{j=1}^{M} d_a^2(x_j, l) \quad \text{subject to } l^T l = 1$  (4)

Then $l_a$ is called the linear least squares fit of the measured point set $\mathcal{M}$, and it has a linear solution [10].
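The minimization in Eq (4) reduces to an eigenvector computation: the minimizer of $\sum_j (x_j^T l)^2$ over unit vectors $l$ is the eigenvector of $B = \sum_j x_j x_j^T$ associated with its smallest eigenvalue. A minimal numpy sketch (our own helper name):

```python
import numpy as np

def fit_line_algebraic(points):
    """Linear least-squares line fit of Eq (4).

    points: (M, 2) array of pixel coordinates (x_j, y_j).
    Returns the unit 3-vector l minimizing sum_j (x_j^T l)^2 over the
    homogeneous points x_j = (x_j, y_j, 1)^T, i.e. the eigenvector of
    B = sum_j x_j x_j^T for the smallest eigenvalue."""
    X = np.column_stack([points, np.ones(len(points))])
    B = X.T @ X
    eigvals, eigvecs = np.linalg.eigh(B)  # eigenvalues in ascending order
    return eigvecs[:, 0]
```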

2.3 Optimality Criteria

Given N line-projection matrices $P_i$ $(1 \le i \le N)$, let $\mathcal{M}_i = \{x_{ij} = (x_{ij}, y_{ij}, 1)^T : 1 \le j \le M_i\}$ be the measured point set of the imaged line $P_i L$ of a 3D line L. Line triangulation is to estimate the 3D line L from the measured point sets $\mathcal{M}_i$ $(1 \le i \le N)$. The algebraic point-line distance in the image plane leads to the following optimality criterion for this problem [4], [10]:

$L_a^* = \arg\min_L \mathcal{A}(L) = \arg\min_L \sum_{i=1}^{N}\sum_{j=1}^{M_i} d_a^2(x_{ij}, P_i L) \quad \text{subject to } L^T K L = 0 \text{ and } L^T L = 1$  (5)

where $L_a^*$ is called the optimal solution minimizing the algebraic error. $L_a^*$ makes the sum of squared algebraic distances from the measured points $x_{ij}$ to the re-projected lines $P_i L_a^*$ reach a minimum; thus $\{P_1 L_a^*, P_2 L_a^*, \ldots, P_N L_a^*\}$ are the linear least squares fits of the measured point sets $\{\mathcal{M}_1, \mathcal{M}_2, \ldots, \mathcal{M}_N\}$.

The objective in Eq (5) can be expressed as

$\mathcal{A}(L) = \sum_{i=1}^{N}\sum_{j=1}^{M_i}(x_{ij}^T P_i L)^2 = L^T A L, \quad \text{where } A = \sum_{i=1}^{N} P_i^T\Big(\sum_{j=1}^{M_i} x_{ij}x_{ij}^T\Big)P_i$  (6)

Thus, the cost function Eq (5) can be rewritten as

$L_a^* = \arg\min_L L^T A L \quad \text{subject to } L^T K L = 0 \text{ and } L^T L = 1$  (7)

which means that the minimization of the algebraic error is a linear least squares problem with the Klein constraint.
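The 6×6 matrix A of Eq (6) collects all the data the algebraic criterion needs. A sketch of its assembly (hypothetical variable names), assuming each view i supplies a 3×6 line-projection matrix P_i and an (M_i, 3) array of homogeneous measured points:

```python
import numpy as np

def build_A(projections, point_sets):
    """A = sum_i P_i^T (sum_j x_ij x_ij^T) P_i, as in Eq (6).
    The algebraic error of a candidate line L is then L^T A L."""
    A = np.zeros((6, 6))
    for P, pts in zip(projections, point_sets):
        A += P.T @ (pts.T @ pts) @ P   # pts.T @ pts = sum_j x_ij x_ij^T
    return A
```

By construction A is symmetric positive semi-definite, so Eq (7) asks for the constrained minimizer of a quadratic form on the unit Klein quadric.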

Linear Solution to Minimize Algebraic Error

Bartoli and Sturm [4] first proposed a linear algorithm to estimate $L_a^*$, which consists of the following two steps:

(a) Solve the linear least squares problem:

$\bar{L} = \arg\min_L L^T A L \quad \text{subject to } L^T L = 1$  (8)

The solution $\bar{L}$ is the eigenvector corresponding to the smallest eigenvalue of the matrix A.

(b) Compute the nearest point Lk* from L¯ to the Klein quadric as the final estimate:

$L_k^* = \arg\min_L \|L - \bar{L}\| \quad \text{subject to } L^T K L = 0$  (9)

Bartoli and Sturm [4] gave an SVD-based method to compute the nearest point $L_k^*$.

Step (b) of the above algorithm is called the Plücker correction. When the measured data contain errors, $\bar{L}$ does not strictly satisfy the Klein constraint and hence cannot be the Plücker coordinates of a line in 3D projective space, so the Plücker correction is an essential step of the algorithm. This section presents a closed-form Plücker correction formula and a new linear algorithm, LINa.

3.1 Linear Algorithm LINa

We consider the following minimization:

$L_s^* = \arg\min_L \|L - \bar{L}\| \quad \text{subject to } L^T K L = 0 \text{ and } L^T L = 1$  (10)

Although this minimization contains a unit norm constraint, it is in fact equivalent to Eq (9) according to the following Lemma.

Lemma 1: (a) If $L_s^*$ is the optimal solution of Eq (10), then $L_k \triangleq (L_s^{*T}\bar{L})\,L_s^*$ must be the optimal solution of Eq (9).

(b) Conversely, if $L_k^*$ is the optimal solution of Eq (9), then $L_s \triangleq (L_k^{*T}L_k^*)^{-1/2}L_k^*$ must be the optimal solution of Eq (10).

Proof: For an arbitrary unit 6-vector L, since $\bar{L}$ is also a unit vector,

$\|L - \bar{L}\|^2 = 2(1 - L^T\bar{L})$  (11)

Since $L_s^*$ is the optimal solution of Eq (10), $0 \le L_s^{*T}\bar{L} \le 1$ and

$\|L_s^* - \bar{L}\|^2 = 2(1 - L_s^{*T}\bar{L})$  (12)

Let $L_k = (L_s^{*T}\bar{L})\,L_s^*$; then

$\|L_k - \bar{L}\|^2 = \|(L_s^{*T}\bar{L})\,L_s^* - \bar{L}\|^2 = 1 - (L_s^{*T}\bar{L})^2$  (13)

Let $L_s = (L_k^{*T}L_k^*)^{-1/2}L_k^*$; then

$\|L_s - \bar{L}\|^2 = 2(1 - L_s^T\bar{L})$  (14)

Since $L_k^*$ is the optimal solution of Eq (9), $L_k^* = (L_s^T\bar{L})\,L_s$ and $0 \le L_s^T\bar{L} \le 1$; thus

$\|L_k^* - \bar{L}\|^2 = \|(L_s^T\bar{L})\,L_s - \bar{L}\|^2 = 1 - (L_s^T\bar{L})^2$  (15)

(a): If $L_k$ were not the optimal solution of Eq (9), then $\|L_k - \bar{L}\| > \|L_k^* - \bar{L}\|$. From Eqs (13) and (15), we would have

$1 - (L_s^{*T}\bar{L})^2 > 1 - (L_s^T\bar{L})^2$  (16)

and thus $L_s^{*T}\bar{L} < L_s^T\bar{L}$. Then, by Eqs (12) and (14), $\|L_s^* - \bar{L}\| > \|L_s - \bar{L}\|$, which contradicts the optimality of $L_s^*$ in Eq (10). Therefore $L_k$ must be the optimal solution of Eq (9).

Similarly, (b) can be proved.

According to Eq (11), the minimization problem Eq (10) simplifies to

$L_s^* = \arg\min_L (1 - L^T\bar{L}) \quad \text{subject to } L^T K L = 0 \text{ and } L^T L = 1$  (17)

Proposition 1 below gives an analytical expression of Ls*. Compared with the SVD method to compute Lk*, the computation of Ls* is much simpler.

Proposition 1: For $\bar{L} = (\bar{u}^T, \bar{v}^T)^T \in \mathbb{R}^6$ with $\|\bar{L}\| = 1$: (a) if $\bar{u} \ne \pm\bar{v}$, the minimization Eq (10) has the unique solution

$L_s^* = \frac{1}{2}\begin{pmatrix} \frac{\bar{u}+\bar{v}}{\|\bar{u}+\bar{v}\|} + \frac{\bar{u}-\bar{v}}{\|\bar{u}-\bar{v}\|} \\ \frac{\bar{u}+\bar{v}}{\|\bar{u}+\bar{v}\|} - \frac{\bar{u}-\bar{v}}{\|\bar{u}-\bar{v}\|} \end{pmatrix}$  (18a)

(b) if $\bar{u} = \pm\bar{v}$, the minimization Eq (10) has infinitely many solutions

$L_s^* = \sqrt{2}\begin{pmatrix} (\bar{u}^T d)\,d \\ \pm\big(\bar{u} - (\bar{u}^T d)\,d\big) \end{pmatrix} \quad (d \in \mathbb{R}^3,\ \|d\| = 1)$  (18b)

where the sign matches that of $\bar{u} = \pm\bar{v}$.

The proof of the proposition is given in the next subsection. The geometric interpretations of Eqs (18a) and (18b) are shown in Fig 1. Since $\|\bar{u}+\bar{v}\| = \sqrt{1 + 2\bar{u}^T\bar{v}}$ and $\|\bar{u}-\bar{v}\| = \sqrt{1 - 2\bar{u}^T\bar{v}}$, Eq (18a) can be rewritten as

$L_s^* = \frac{1}{2}\begin{pmatrix} \frac{\bar{u}+\bar{v}}{\sqrt{1+2\bar{u}^T\bar{v}}} + \frac{\bar{u}-\bar{v}}{\sqrt{1-2\bar{u}^T\bar{v}}} \\ \frac{\bar{u}+\bar{v}}{\sqrt{1+2\bar{u}^T\bar{v}}} - \frac{\bar{u}-\bar{v}}{\sqrt{1-2\bar{u}^T\bar{v}}} \end{pmatrix}$  (19)

Thus, when $\bar{L}$ satisfies the Klein constraint $\bar{u}^T\bar{v} = 0$, Eq (19) gives $L_s^* = \bar{L}$.

Fig 1. Geometric interpretation of the Plücker correction $L_s^* = (u_s^{*T}, v_s^{*T})^T$: (a) $\bar{u} \ne \pm\bar{v}$; (b) $\bar{u} = \bar{v}$; (c) $\bar{u} = -\bar{v}$.

Based on the above discussion, our linear algorithm LINa can be summarized in Table 1.

Table 1. Linear algorithm: LINa.

(a) Compute the eigenvector $\bar{L}$ associated with the smallest eigenvalue of the matrix A
(b) Compute the Plücker correction $L_s^*$ from $\bar{L}$ by Eq (18a) (or Eq (18b) in the degenerate case) as the final estimate of $L_a^*$

Remark 1: In practice, case (b) of Proposition 1 rarely occurs. The Klein constraint makes {u, v} orthogonal to each other, and hence linearly independent. When the measured data contain errors, the solution of Eq (8) no longer guarantees the orthogonality of $\{\bar{u}, \bar{v}\}$, but they generally remain linearly independent, so $\bar{u} = \pm\bar{v}$ rarely happens in practice.
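Putting Table 1 together, a compact sketch of LINa under the generic case (a) of Proposition 1 (our own function names, not from the paper):

```python
import numpy as np

def plucker_correction(L_bar):
    """Closed-form Plücker correction of Eq (18a), valid when
    u_bar != ±v_bar (the generic case; cf. Remark 1)."""
    u, v = L_bar[:3], L_bar[3:]
    a, b = u + v, u - v
    a_hat = a / np.linalg.norm(a)
    b_hat = b / np.linalg.norm(b)
    return 0.5 * np.concatenate([a_hat + b_hat, a_hat - b_hat])

def lina(A):
    """Linear algorithm LINa (Table 1): eigenvector of A for the
    smallest eigenvalue, followed by the closed-form correction."""
    _, eigvecs = np.linalg.eigh(A)
    return plucker_correction(eigvecs[:, 0])
```

The corrected vector satisfies both constraints exactly and reduces to $\bar{L}$ whenever $\bar{u}^T\bar{v} = 0$ already holds, in agreement with Eq (19).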

By Lemma 1 and Proposition 1, the optimal solution of Eq (9) can be obtained as:

(a) If $\bar{u} \ne \pm\bar{v}$, then

$L_k^* = \frac{\|\bar{u}+\bar{v}\| + \|\bar{u}-\bar{v}\|}{4}\begin{pmatrix} \frac{\bar{u}+\bar{v}}{\|\bar{u}+\bar{v}\|} + \frac{\bar{u}-\bar{v}}{\|\bar{u}-\bar{v}\|} \\ \frac{\bar{u}+\bar{v}}{\|\bar{u}+\bar{v}\|} - \frac{\bar{u}-\bar{v}}{\|\bar{u}-\bar{v}\|} \end{pmatrix}$  (20a)

(b) If $\bar{u} = \pm\bar{v}$, then, since $L_s^{*T}\bar{L} = \frac{\sqrt{2}}{2}$ for every solution in Eq (18b),

$L_k^* = \frac{\sqrt{2}}{2}L_s^* = \begin{pmatrix} (\bar{u}^T d)\,d \\ \pm\big(\bar{u} - (\bar{u}^T d)\,d\big) \end{pmatrix} \quad (d^T d = 1)$  (20b)

3.2 Proof of Proposition 1

Construct the Lagrange function of Eq (17) as follows:

$f(L, \alpha, \beta) = (1 - L^T\bar{L}) - \alpha\,L^T K L - \beta(L^T L - 1)$  (21)

According to the optimization theory [11], the solution of Eq (17) must be a stationary point of the Lagrange function, i.e., there are multipliers (α*,β*) such that (Ls*,α*,β*) is a solution of the following Lagrange equations:

$\frac{\partial f}{\partial L} = -\bar{L} - 2(\alpha K + \beta I)L = 0, \qquad \frac{\partial f}{\partial \alpha} = -L^T K L = 0, \qquad \frac{\partial f}{\partial \beta} = -(L^T L - 1) = 0$  (22)

Thus, by solving the Lagrange equations we can obtain the optimal solution $L_s^*$. Writing $L = (u^T, v^T)^T$, the first equation in Eq (22) can be rewritten as

$(\alpha+\beta)(u+v) = -\frac{\bar{u}+\bar{v}}{2}, \qquad (\alpha-\beta)(u-v) = \frac{\bar{u}-\bar{v}}{2}$  (23)

From the last two equations in Eq (22) (i.e., $u^Tv = 0$ and $u^Tu + v^Tv = 1$), we have

$u \ne \pm v$  (24)

(i) If $\bar{u} \ne \pm\bar{v}$, then by Eqs (23) and (24) we have

$\alpha + \beta \ne 0, \qquad \alpha - \beta \ne 0$  (25)

Let $\alpha' = (\alpha+\beta)^{-1}$ and $\beta' = (\alpha-\beta)^{-1}$; then, from Eq (23),

$L = -\frac{1}{4}\begin{pmatrix} \alpha' a - \beta' b \\ \alpha' a + \beta' b \end{pmatrix}, \quad \text{where } \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} \bar{u}+\bar{v} \\ \bar{u}-\bar{v} \end{pmatrix}$  (26)

Thus,

$L^T K L = 0 \iff (\alpha' a - \beta' b)^T(\alpha' a + \beta' b) = 0 \iff \alpha'^2 a^T a - \beta'^2 b^T b = 0$  (27)

$L^T L = 1 \iff (\alpha' a - \beta' b)^T(\alpha' a - \beta' b) + (\alpha' a + \beta' b)^T(\alpha' a + \beta' b) = 16 \iff \alpha'^2 a^T a + \beta'^2 b^T b = 8$  (28)

Therefore, the following linear equations in $(\alpha'^2, \beta'^2)$ hold:

$\begin{pmatrix} a^T a & -b^T b \\ a^T a & b^T b \end{pmatrix}\begin{pmatrix} \alpha'^2 \\ \beta'^2 \end{pmatrix} = \begin{pmatrix} 0 \\ 8 \end{pmatrix}$  (29)

so that

$\alpha' = \pm\frac{2}{\sqrt{a^T a}}, \qquad \beta' = \pm\frac{2}{\sqrt{b^T b}}$  (30)

Substituting Eq (30) into Eq (26) gives the following four candidate solutions for L, with the two ± signs independent:

$L_{\pm,\pm} = \pm\frac{1}{2\sqrt{b^T b}}\begin{pmatrix} b \\ -b \end{pmatrix} \pm \frac{1}{2\sqrt{a^T a}}\begin{pmatrix} a \\ a \end{pmatrix} = \pm\frac{1}{2\|\bar{u}-\bar{v}\|}\begin{pmatrix} \bar{u}-\bar{v} \\ -(\bar{u}-\bar{v}) \end{pmatrix} \pm \frac{1}{2\|\bar{u}+\bar{v}\|}\begin{pmatrix} \bar{u}+\bar{v} \\ \bar{u}+\bar{v} \end{pmatrix}$  (31)

The geometric interpretations of the four solutions are shown in Fig 2.

Fig 2. Geometric interpretation of the four solutions $L_{\pm,\pm} = (u_{\pm,\pm}^T, v_{\pm,\pm}^T)^T$.

It is easy to verify that $L_{+,+} = \arg\min\{1 - L_{\pm,\pm}^T\bar{L}\}$ among the four candidates, and thus

$L_s^* = L_{+,+} = \frac{1}{2}\begin{pmatrix} \frac{\bar{u}+\bar{v}}{\|\bar{u}+\bar{v}\|} + \frac{\bar{u}-\bar{v}}{\|\bar{u}-\bar{v}\|} \\ \frac{\bar{u}+\bar{v}}{\|\bar{u}+\bar{v}\|} - \frac{\bar{u}-\bar{v}}{\|\bar{u}-\bar{v}\|} \end{pmatrix}$  (32)

(ii) When $\bar{u} = \bar{v}$, there must be $\bar{u}^T\bar{u} = \frac{1}{2}(\bar{u}^T\bar{u} + \bar{v}^T\bar{v}) = \frac{1}{2}$. According to Eq (24) and the second equation of Eq (23), we have $\beta = \alpha$. Substituting this into the first equation in Eq (23) gives

$2\alpha(u+v) = -\frac{\bar{u}+\bar{v}}{2} = -\bar{u}$  (33)

Therefore,

$4\alpha^2(u+v)^T(u+v) = \bar{u}^T\bar{u} = \frac{1}{2}$  (34)

By the last two equations in Eq (22), we have

$(u+v)^T(u+v) = u^Tu + 2u^Tv + v^Tv = 1$  (35)

Thus $\alpha = \pm\frac{1}{2\sqrt{2}}$, and substituting back into Eq (33) yields

$v = -u - \sqrt{2}\bar{u} \quad \text{or} \quad v = -u + \sqrt{2}\bar{u}$  (36)

If $v = -u - \sqrt{2}\bar{u}$, then

$L^T K L = 0 \iff u^Tu + \sqrt{2}\,u^T\bar{u} = 0$  (37)

$L^T L - 1 = 0 \iff u^Tu + (u+\sqrt{2}\bar{u})^T(u+\sqrt{2}\bar{u}) - 1 = 0 \iff u^Tu + \sqrt{2}\,u^T\bar{u} = 0$  (38)

Thus,

$u^Tu + \sqrt{2}\,u^T\bar{u} = 0 \iff \begin{cases} L^T K L = 0 \\ L^T L - 1 = 0 \end{cases}$  (39)

and for $u \in S_- \triangleq \{u : u^Tu + \sqrt{2}\,u^T\bar{u} = 0\}$,

$1 - L^T\bar{L} = 1 - u^T\bar{u} + (u+\sqrt{2}\bar{u})^T\bar{u} = 1 + \sqrt{2}\,\bar{u}^T\bar{u} = 1 + \frac{\sqrt{2}}{2}$  (40)

Similarly, if $v = -u + \sqrt{2}\bar{u}$, then

$u^Tu - \sqrt{2}\,u^T\bar{u} = 0 \iff \begin{cases} L^T K L = 0 \\ L^T L - 1 = 0 \end{cases}$  (41)

and for $u \in S_+ \triangleq \{u : u^Tu - \sqrt{2}\,u^T\bar{u} = 0\}$,

$1 - L^T\bar{L} = 1 - \frac{\sqrt{2}}{2}$  (42)

By Eqs (40) and (42), $L_s^*$ has infinitely many solutions:

$L_s^* = \begin{pmatrix} u \\ -u + \sqrt{2}\bar{u} \end{pmatrix}, \quad u \in S_+$  (43)

Next, we characterize the set $S_+$. Let $u = sd$, where $d$ is a unit 3-vector and $s \ne 0$; then

$u^Tu - \sqrt{2}\,u^T\bar{u} = 0 \iff s^2 - \sqrt{2}\,s\,d^T\bar{u} = 0 \iff s = \sqrt{2}\,\bar{u}^T d$  (44)

thus $S_+ = \{u = \sqrt{2}(\bar{u}^T d)\,d : d^Td = 1\}$. Therefore Eq (43) can be rewritten as

$L_s^* = \sqrt{2}\begin{pmatrix} (\bar{u}^T d)\,d \\ \bar{u} - (\bar{u}^T d)\,d \end{pmatrix} \quad (d^Td = 1)$  (45)

(iii) Similarly, when $\bar{u} = -\bar{v}$, $L_s^*$ also has infinitely many solutions:

$L_s^* = \sqrt{2}\begin{pmatrix} (\bar{u}^T d)\,d \\ (\bar{u}^T d)\,d - \bar{u} \end{pmatrix} \quad (d^Td = 1)$  (46)

Optimal Solution by Minimizing Algebraic Error

The algorithm LINa provides only an approximate minimizer of the algebraic error. This section presents two algorithms, OPTa-I and OPTa-II, to compute the optimal solution. OPTa-I converts the optimization problem into finding the real solutions of two systems of 2-degree polynomial equations in five variables; OPTa-II converts it into finding the real solutions of a system of polynomial equations in two variables (one equation of degree 6 and one of degree 10).

4.1 Algorithm OPTa-I

The optimal algorithm OPTa-I is summarized in Table 2.

Table 2. The optimal algorithm: OPTa-I.

(a) Construct the following system of polynomial equations from the matrix A of Eq (6), partitioned as $A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$:

$\begin{cases} u^Tv = 0 \\ u^Tu + v^Tv = 1 \\ [u+v]_\times\big((A_{11}+A_{21})u + (A_{12}+A_{22})v\big) = 0 \\ [u-v]_\times\big((A_{11}-A_{21})u + (A_{12}-A_{22})v\big) = 0 \end{cases}$  (47)

Then compute its real solution set $S^I$;
(b) Determine the optimal solution by

$L_a^* = \arg\min\{L^TAL : L = (u^T, v^T)^T \in S^I\}$  (48)

Eq (47) is a system of 2-degree polynomial equations in six variables, and by Bézout's theorem it has at most 64 solutions, hence at most 64 real solutions. Proposition 2 below shows that this system can be simplified into two systems of 2-degree polynomial equations in five variables. We first prove that Eq (48) yields the optimal solution of the algebraic error minimization.

Proof: Consider the Lagrange function and the Lagrange equations of Eq (7):

$f_a(L, \alpha, \beta) = L^TAL - \alpha\,L^TKL - \beta(L^TL - 1)$  (49)

$\frac{\partial f_a}{\partial L} = 2(A - \alpha K - \beta I)L = 0, \qquad \frac{\partial f_a}{\partial \alpha} = -L^TKL = 0, \qquad \frac{\partial f_a}{\partial \beta} = -(L^TL - 1) = 0$  (50)

The first equation in Eq (50) can be rewritten as

$\begin{cases} A_{11}u + A_{12}v = \alpha v + \beta u \\ A_{21}u + A_{22}v = \alpha u + \beta v \end{cases}$  (51)

It is obvious that this equation is equivalent to

$\begin{cases} (A_{11}+A_{21})u + (A_{12}+A_{22})v = (\alpha+\beta)(u+v) \\ (A_{11}-A_{21})u + (A_{12}-A_{22})v = (\beta-\alpha)(u-v) \end{cases}$  (52)

Eliminating the multipliers $(\alpha, \beta)$ by taking cross products, we obtain the following 2-degree polynomial equations in $(u, v)$:

$\begin{cases} [u+v]_\times\big((A_{11}+A_{21})u + (A_{12}+A_{22})v\big) = 0 \\ [u-v]_\times\big((A_{11}-A_{21})u + (A_{12}-A_{22})v\big) = 0 \end{cases}$  (53)

The last two equations in Eq (50) can be rewritten as

$\begin{cases} u^Tv = 0 \\ u^Tu + v^Tv = 1 \end{cases}$  (54)

Combining Eqs (53) and (54) gives Eq (47). The L-component of any stationary point $(L, \alpha, \beta)$ of the Lagrange function Eq (49) must be a real solution of Eq (47); thus the optimal solution of Eq (7) belongs to the real solution set of Eq (47), i.e., $L_a^* \in S^I$. Therefore,

$L_a^* = \arg\min\{L^TAL : L \in S^I\}$  (55)

Proposition 2: The solution set of Eq (47) is the union of the solution sets of two systems of 2-degree polynomial equations in five variables.

Proof: Let S be the solution set of Eq (47); then it is the union of the following two sets:

$S_0 = \{L = (u^T, v^T)^T \in S : v_3 = 0\}, \qquad S_1 = \{L = (u^T, v^T)^T \in S : v_3 \ne 0\}$  (56)

Clearly, $S_0$ is the solution set of the system of 2-degree polynomial equations in five variables obtained by setting $v_3 = 0$ in Eq (47). For the set $S_1$, consider the system obtained by removing the unit norm constraint from Eq (47):

$\begin{cases} u^Tv = 0 \\ [u+v]_\times\big((A_{11}+A_{21})u + (A_{12}+A_{22})v\big) = 0 \\ [u-v]_\times\big((A_{11}-A_{21})u + (A_{12}-A_{22})v\big) = 0 \end{cases}$  (57)

This system is homogeneous of degree two in $L = (u^T, v^T)^T$, and the set formed by normalizing all of its nonzero solutions is exactly the solution set S of Eq (47). Thus, letting $\tilde{S}_1$ be the solution set of the system of 2-degree polynomial equations in five variables obtained by setting $v_3 = 1$ in Eq (57), we have

$S_1 = \left\{L = \frac{(\tilde{u}^T, \tilde{v}^T, 1)^T}{\sqrt{\tilde{u}^T\tilde{u} + \tilde{v}^T\tilde{v} + 1}} : (\tilde{u}, \tilde{v}) \in \tilde{S}_1\right\}$  (58)

Hence, Proposition 2 holds.
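For intuition, the stationarity system Eq (47) is straightforward to evaluate numerically; a polynomial solver (such as the homotopy method the paper uses) then searches for its real roots. A residual-function sketch with our own naming:

```python
import numpy as np

def opta1_residuals(L, A):
    """Stack the residuals of Eq (47) for a candidate L = (u; v):
    the Klein and unit-norm constraints plus the two cross-product
    equations that eliminate the multipliers (alpha, beta)."""
    u, v = L[:3], L[3:]
    A11, A12, A21, A22 = A[:3, :3], A[:3, 3:], A[3:, :3], A[3:, 3:]
    return np.concatenate([
        [u @ v],                                                  # u^T v = 0
        [u @ u + v @ v - 1.0],                                    # unit norm
        np.cross(u + v, (A11 + A21) @ u + (A12 + A22) @ v),
        np.cross(u - v, (A11 - A21) @ u + (A12 - A22) @ v),
    ])
```

As a sanity check, for A = I every unit vector on the Klein quadric is stationary, so all eight residuals vanish.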

4.2 Algorithm OPTa-II

Let $A(\alpha,\beta) = A - \alpha K - \beta I$ and denote its adjoint (adjugate) matrix by $A^*(\alpha,\beta) = (A_1^*(\alpha,\beta), A_2^*(\alpha,\beta), \ldots, A_6^*(\alpha,\beta))$; all of its elements are polynomials in $(\alpha,\beta)$ of degree at most 5. It is easy to see that

$A_i^T(\alpha,\beta)\,A_j^*(\alpha,\beta) = \begin{cases} \det(A(\alpha,\beta)), & i = j \\ 0, & i \ne j \end{cases}$  (59)

where $A_i^T(\alpha,\beta)$ is the i-th row vector of $A(\alpha,\beta)$; therefore,

$A(\alpha,\beta)\,A^*(\alpha,\beta) = \det(A(\alpha,\beta))\,I$  (60)

For each k (1 ≤ k ≤ 6), $A_k^*(\alpha,\beta) = 0$ is a system of 5-degree polynomial equations in $(\alpha,\beta)$, whose real solution set is denoted by $S_k = \{(\alpha_i^k, \beta_i^k) : 0 \le i \le t_k\}$. Next, we prove that this system has at least one real solution, i.e., $S_k \ne \emptyset$.

Let $A_d^{(k)}(\alpha,\beta) = (a_1(\alpha,\beta), a_2(\alpha,\beta), \ldots, a_6(\alpha,\beta))$ be the 5×6 sub-matrix formed by deleting the k-th row of $A(\alpha,\beta)$; then $A_k^*(\alpha,\beta) = 0$ can be expressed as

(a):

$\begin{cases} \det(a_2(\alpha,\beta), a_3(\alpha,\beta), \ldots, a_6(\alpha,\beta)) = 0 \\ \det(a_1(\alpha,\beta), a_3(\alpha,\beta), \ldots, a_6(\alpha,\beta)) = 0 \\ \quad\vdots \\ \det(a_1(\alpha,\beta), a_2(\alpha,\beta), \ldots, a_5(\alpha,\beta)) = 0 \end{cases}$  (61)

and it has the same solutions as the following equation system:

(b):

$\begin{cases} \det(a_2(\alpha,\beta), a_3(\alpha,\beta), \ldots, a_6(\alpha,\beta)) = 0 \\ \det(a_1(\alpha,\beta), a_3(\alpha,\beta), \ldots, a_6(\alpha,\beta)) = 0 \end{cases}$  (62)

This is because, from (b), both $a_1$ and $a_2$ can be linearly represented by $\mathcal{A} = \{a_3, a_4, a_5, a_6\}$; thus, for arbitrary $a_i, a_j, a_k \in \mathcal{A}$, the set $\{a_1, a_2, a_i, a_j, a_k\}$ must be linearly dependent, i.e., $\det(a_1, a_2, a_i, a_j, a_k) = 0$. Hence solutions of (b) are solutions of (a); obviously, solutions of (a) are also solutions of (b). Therefore, (a) has the same solutions as (b).

Since the non-real solutions of a system of real polynomial equations occur in complex-conjugate pairs, and by Bézout's theorem (b) has 25 solutions, at least one of them must be real. Thus $A_k^*(\alpha,\beta) = 0$ has at least one real solution.
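The adjoint identity Eq (60) is what lets OPTa-II read the line directly off a column of $A^*(\alpha,\beta)$. It can be checked numerically with a brute-force cofactor adjugate (our own helpers; numpy has no built-in adjugate):

```python
import numpy as np

def adjugate(M):
    """Classical adjoint: adj(M) is the transpose of the cofactor
    matrix, so that M @ adj(M) = det(M) * I, which is Eq (60)."""
    n = M.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

def A_of(A, alpha, beta):
    """A(alpha, beta) = A - alpha*K - beta*I of Section 4.2."""
    K = np.block([[np.zeros((3, 3)), np.eye(3)],
                  [np.eye(3), np.zeros((3, 3))]])
    return A - alpha * K - beta * np.eye(6)
```

When $\det(A(\alpha,\beta)) = 0$ and the rank is 5, every nonzero column of the adjugate is a null vector of $A(\alpha,\beta)$, which is exactly the representation used in Eq (67).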

The algorithm OPTa-II is summarized in Table 3.

Table 3. The optimal algorithm: OPTa-II.

(a) Compute the adjoint matrix $A^*(\alpha,\beta) = (A_1^*(\alpha,\beta), \ldots, A_6^*(\alpha,\beta))$ and choose $k_1, k_2$ such that the following system of polynomial equations has no real solutions:

$\begin{cases} A_{k_1}^*(\alpha,\beta) = 0 \\ A_{k_2}^*(\alpha,\beta) = 0 \end{cases}$  (63)

(b) Compute the real solution set $S_i^{II}$ (i = 1, 2) of the following system of polynomial equations:

$\begin{cases} \det(A(\alpha,\beta)) = 0 \\ A_{k_i}^{*T}(\alpha,\beta)\,K\,A_{k_i}^*(\alpha,\beta) = 0 \end{cases}$  (64)

(c) Determine the optimal solution from:

$L_a^* = \arg\min\left\{L_i^T(\alpha,\beta)\,A\,L_i(\alpha,\beta) : L_i(\alpha,\beta) = \frac{A_{k_i}^*(\alpha,\beta)}{\|A_{k_i}^*(\alpha,\beta)\|},\ (\alpha,\beta) \in S_i^{II},\ i = 1, 2\right\}$  (65)

For the two polynomial equations in step (b) of OPTa-II, one is of degree 6 and the other of degree 10; thus, by Bézout's theorem, the system has at most 60 solutions, and hence at most 60 real solutions. Next, we prove that Eq (65) yields the optimal solution of the algebraic error minimization.

Proof: The first equation in Eq (50) can be rewritten as $A(\alpha,\beta)L = 0$; since $L \ne 0$, this leads to the following 6-degree polynomial equation in $(\alpha,\beta)$:

$\det(A(\alpha,\beta)) = 0$  (66)

i.e., the multipliers $(\alpha,\beta)$ of the stationary points $(L,\alpha,\beta)$ of the Lagrange function Eq (49) satisfy Eq (66).

Since the system of polynomial equations $\{A_{k_1}^*(\alpha,\beta) = 0,\ A_{k_2}^*(\alpha,\beta) = 0\}$ has no real solutions, for every $(\alpha,\beta) \in \mathbb{R}^2$ we have $A_{k_1}^*(\alpha,\beta) \ne 0$ or $A_{k_2}^*(\alpha,\beta) \ne 0$, and hence $\mathrm{rank}(A(\alpha,\beta)) = 5$ whenever $\det(A(\alpha,\beta)) = 0$ (a nonzero adjoint matrix requires rank at least 5). Therefore, from Eqs (60) and (66), the L-component of a stationary point $(L,\alpha,\beta)$ can be expressed as

$L_1(\alpha,\beta) = s_1 A_{k_1}^*(\alpha,\beta) \quad \text{or} \quad L_2(\alpha,\beta) = s_2 A_{k_2}^*(\alpha,\beta) \quad (s_1 \ne 0,\ s_2 \ne 0)$  (67)

By the second equation in Eq (50), the multipliers $(\alpha,\beta)$ of the stationary points $(L,\alpha,\beta)$ must belong to one of the real solution sets of the following two systems of polynomial equations:

$\begin{cases} \det(A(\alpha,\beta)) = 0 \\ A_{k_i}^{*T}(\alpha,\beta)\,K\,A_{k_i}^*(\alpha,\beta) = 0 \end{cases} \quad (i = 1, 2)$  (68)

i.e., $(\alpha,\beta) \in S_1^{II} \cup S_2^{II}$. Thus, from Eq (67) and the unit norm constraint $L^TL = 1$, the L-component of a stationary point must belong to the following set:

$S = \left\{L_i(\alpha,\beta) = \frac{A_{k_i}^*(\alpha,\beta)}{\|A_{k_i}^*(\alpha,\beta)\|} : (\alpha,\beta) \in S_i^{II},\ i = 1, 2\right\}$  (69)

Therefore,

$L_a^* = \arg\min\{L^TAL : L \in S\} = \arg\min\left\{L_i^T(\alpha,\beta)\,A\,L_i(\alpha,\beta) : L_i(\alpha,\beta) = \frac{A_{k_i}^*(\alpha,\beta)}{\|A_{k_i}^*(\alpha,\beta)\|},\ (\alpha,\beta) \in S_i^{II},\ i = 1, 2\right\}$  (70)

The algorithm OPTa-II only needs to solve systems of polynomial equations in two variables, which effectively simplifies the algorithm OPTa-I. If, for every pair {i, j}, the system of polynomial equations $\{A_i^*(\alpha,\beta) = 0,\ A_j^*(\alpha,\beta) = 0\}$ has a real solution, the algorithm OPTa-II may fail; however, this situation never occurred in our extensive numerical simulations.

Remark 2: In the experiments of this paper, we use the continuous homotopy method [6], [7] to solve the systems of polynomial equations. The method was first proposed in [12]. Through thirty years of effort by many researchers, it has achieved great success in computing the zero points of non-linear mappings, and it can give all zero points of a polynomial mapping [6], [7], [13]. In the field of computer vision, the method has been used for camera self-calibration, e.g., solving the Kruppa equations [14], the modulus constraint equations, and the absolute quadric constraint equations [15]. For the 2-degree polynomials in five variables in OPTa-I and the high-degree polynomials in two variables in OPTa-II, the continuous homotopy method is computationally efficient.

Metrics on 3D Line Space

In order to evaluate the 3D errors of line triangulations, we need a metric on 3D line space. The Euclidean distance

$d_E(L, L') \triangleq \min\{\|L - L'\|,\ \|L + L'\|\}$

(where L, L′ are the normalized Plücker coordinates of the two lines) is not appropriate for the evaluation of line triangulations since it is not an intrinsic distance on 3D line space. The aim of this section is to introduce two new metrics on 3D line space, called the orthogonal metric and the quasi-Riemannian metric. Compared with the Euclidean metric and the orthogonal metric, the quasi-Riemannian metric appears more appropriate.

In this section, the 5-dimensional unit sphere centered at the origin of $\mathbb{R}^6$ is denoted by $S^5(1)$, and the intersection of the Klein quadric $\mathcal{K}$ with $S^5(1)$ is denoted by $K(1) \triangleq S^5(1) \cap \mathcal{K}$; it is a 4-dimensional smooth sub-manifold of $S^5(1)$, called the unit Klein quadric.

5.1 Orthogonal Metric in 3D Line Space

The proposed orthogonal metric derives from the angular distance on the rotation group [8] and the orthogonal representation of 3D lines [4]. The angular metric on the rotation group is given in Appendix I.

If $L = (u^T, v^T)^T \in K(1)$, then either (a) $u \ne 0$ and $v \ne 0$; or (b) $u = 0$ and $\|v\| = 1$; or (c) $\|u\| = 1$ and $v = 0$. By the definition of Plücker coordinates, in case (b) L is the Plücker coordinates of a 3D line passing through the origin, and v is its direction; in case (c), L is the Plücker coordinates of a line on the plane at infinity, and u is its normalized coordinates as a 2D line on that plane.

Let $K_a(1) = \{L \in K(1) : u \ne 0, v \ne 0\}$, $K_b(1) = \{L \in K(1) : u = 0, \|v\| = 1\}$ and $K_c(1) = \{L \in K(1) : \|u\| = 1, v = 0\}$. For $L \in K_a(1)$, the 3×2 matrix $(u, v)$ factors as

$(u, v) = \underbrace{\left(\frac{u}{\|u\|}, \frac{v}{\|v\|}\right)}_{\triangleq A_L}\ \underbrace{\begin{pmatrix} \|u\| & 0 \\ 0 & \|v\| \end{pmatrix}}_{\triangleq B_L}$  (71)

Thus, from the following mappings:

$A_L \mapsto R_L \triangleq \left(\frac{u}{\|u\|}, \frac{v}{\|v\|}, \frac{u \times v}{\|u \times v\|}\right) \in SO(3)$  (72)

$B_L \mapsto W_L \triangleq \begin{pmatrix} \|u\| & -\|v\| \\ \|v\| & \|u\| \end{pmatrix} \in SO(2)$  (73)

we obtain the mapping $\phi: K_a(1) \to SO(3) \times SO(2)$ [4]:

$\phi(L) = (R_L, W_L)$  (74)

which is called the orthogonal representation of $L \in K_a(1)$.

The above mapping fails on $K_b(1)$ and $K_c(1)$. In order to obtain a complete mapping from $K(1)$ into $SO(3) \times SO(2)$, we extend the definition to $K_b(1)$ and $K_c(1)$ as follows:

$\phi(L) = \begin{cases} (2vv^T - I_3,\ W_{\pi/2}), & L \in K_b(1) \\ (2uu^T - I_3,\ I_2), & L \in K_c(1) \end{cases}$  (75)

where $W_{\pi/2}$ is the 2D rotation by angle π/2; note that the $SO(2)$ parts in Eq (75) agree with the limit of Eq (73), since $\|u\| = 0$, $\|v\| = 1$ on $K_b(1)$ and $\|u\| = 1$, $\|v\| = 0$ on $K_c(1)$. An explanation of this definition will be given later.

Using the angular distance on $SO(3) \times SO(2)$, the following distance on $K(1)$ is obtained:

$d_O(L, L') = d(\phi(L), \phi(L')), \qquad L, L' \in K(1)$  (76)

Since $L \in K(1)$ if and only if L is the normalized Plücker coordinates of a 3D line, and $\pm L \in K(1)$ are the normalized Plücker coordinates of the same 3D line, the distance $d_O$ leads directly to the following distance on the 3D line space $\mathbb{L}(3)$:

$d_O(L, L') = \min\{d(\phi(L), \phi(L')),\ d(\phi(L), \phi(-L'))\}$  (77)

and it is called the orthogonal distance of 3D lines.

Now, we can give an explanation for the definition Eq (75). If $L, L' \in K_b(1)$, then

$d_O(L, L') = \arccos\big(2(v^Tv')^2 - 1\big) = 2\theta(v, v')$  (78)

Thus, the first mapping in Eq (75) means that the orthogonal distance of two lines passing through the origin is exactly twice their included angle.

Similarly, if $L, L' \in K_c(1)$, then $d_O(L, L') = 2\theta(u, u')$. Since $u$ and $u'$ are the normalized coordinates of the two infinite lines, they are the normal vectors of the planes passing through those lines, respectively. Hence, the second mapping in Eq (75) means that the orthogonal distance of two infinite lines is exactly twice the included angle of the two planes.
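A sketch of the orthogonal representation of Eqs (72)-(74) and of the angular distance on $SO(3)$ follows. The rule for combining the $SO(3)$ and $SO(2)$ parts is given in Appendix I and is not reproduced here, so only the rotation part is shown; the function names are ours.

```python
import numpy as np

def ortho_rep(L):
    """Orthogonal representation phi(L) = (R_L, W_L) of Eq (74),
    for a unit Klein vector L = (u; v) with u != 0 and v != 0.
    Since the Klein constraint makes u and v orthogonal,
    ||u x v|| = ||u||*||v||, so all three columns of R_L are unit."""
    u, v = L[:3], L[3:]
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    R = np.column_stack([u / nu, v / nv,
                         np.cross(u, v) / (nu * nv)])   # Eq (72)
    W = np.array([[nu, -nv], [nv, nu]])                 # Eq (73)
    return R, W

def so3_angle(R1, R2):
    """Angular distance on the rotation group:
    the rotation angle of R1^T R2."""
    c = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return float(np.arccos(np.clip(c, -1.0, 1.0)))
```

For two parallel adjacent edges of the unit cube (relationship P-I of Section 5.3), the $SO(3)$ parts are π/2 apart while the $SO(2)$ parts coincide, matching the value 1.57 reported in Table 4.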

5.2 Quasi-Riemannian Metric on 3D Line Space

Based on the Riemannian metric [16] and the analysis in Appendix II, the quasi-Riemannian distance $d_K$ on $K(1)$ leads directly to the quasi-Riemannian distance on $\mathbb{L}(3)$:

$d_{QR}(L, L') = \min\{d_K(L, L'),\ d_K(L, -L')\}$  (79)

It is not difficult to verify that two lines are coplanar if and only if their Plücker coordinates satisfy $L^T K L' = 0$. Thus, the quasi-Riemannian distance of coplanar lines is given by the following formula:

$d_{QR}(L, L') = \min\{\arccos(L^T L'),\ \pi - \arccos(L^T L')\}$  (80)
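For coplanar lines, Eq (80) is immediate to implement; a small sketch (our own naming) that also reproduces one of the cube-edge values of Section 5.3:

```python
import numpy as np

def d_qr_coplanar(L1, L2):
    """Quasi-Riemannian distance of Eq (80) for coplanar 3D lines,
    given as unit Plücker 6-vectors satisfying L1^T K L2 = 0: the
    spherical distance on S^5(1), minimized over the sign ambiguity
    of homogeneous coordinates (Eq (79))."""
    t = np.arccos(np.clip(float(L1 @ L2), -1.0, 1.0))
    return min(t, np.pi - t)
```

For two parallel adjacent edges of the unit cube (relationship P-I) this gives arccos(2/3) ≈ 0.84, which is the d_QR entry for P-I in Table 4.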

5.3 Comparison of the Three Metrics

In order to compare the performance of the different metrics, we generated a 3D unit cube centered at the origin; Fig 3 shows its 12 edges. Fig 4(a), 4(b) and 4(c) show the distances between the edges computed by the Euclidean metric, the orthogonal metric, and the quasi-Riemannian metric, respectively, where different distance values are represented by different colors.

Fig 3. The 12 edges of the unit cube.

Fig 4. Distances between the edges of the unit cube under the three metrics: (a) Euclidean metric; (b) orthogonal metric; (c) quasi-Riemannian metric.

Based on their relative positions, the edge pairs belong either to one of the two parallel relationships (P-I and P-II) or to one of the two orthogonal relationships (O-I and O-II), listed below:

P-I = {(1,3),(1,5),(2,4),(2,6),(3,7),(4,8),(5,7),(6,8),(9,10),(9,12),(10,11),(11,12)}
P-II = {(1,7),(2,8),(3,5),(4,6),(9,11),(10,12)}
O-I = {(1,2),(1,4),(1,9),(1,10),(2,3),(2,10),(2,11),(3,4),(3,11),(3,12),(4,9),(4,12),(5,6),(5,8),(5,9),(5,10),(6,7),(6,10),(6,11),(7,8),(7,11),(7,12),(8,9),(8,12)}
O-II = {(1,6),(1,8),(1,11),(1,12),(2,5),(2,7),(2,9),(2,12),(3,6),(3,8),(3,9),(3,10),(4,5),(4,7),(4,10),(4,11),(5,11),(5,12),(6,9),(6,12),(7,9),(7,10),(8,10),(8,11)}

Each of the three metrics gives a single distance value for each relationship, as shown in Table 4. However, from Table 4 it can be seen that the Euclidean metric cannot distinguish between O-I and O-II, and the orthogonal metric cannot distinguish between P-I and O-I, while the quasi-Riemannian metric gives different distances for all four relationships; moreover, these distances are consistent with the intuition that the distances for P-I, P-II, O-I and O-II should increase gradually. This observation implies that the quasi-Riemannian metric is more reasonable than the Euclidean metric or the orthogonal metric.

Table 4. Distances computed by the three metrics for P-I, P-II, O-I and O-II.

P-I P-II O-I O-II
d E 0.82 1.15 1.29 1.29
d O 1.57 3.14 1.57 2.09
d QR 0.84 1.23 1.40 1.55

In the experiments of this paper, the quasi-Riemannian metric is used to evaluate the 3D errors of line triangulations. In real experiments, the true line and the estimated line are close to each other, so they can be regarded as lying on the same plane, and the quasi-Riemannian metric reduces to $d_{QR}(L, L') = \arccos(L^T L')$. Let $\theta = \arccos(L^T L')$. Since the angle between the two lines is small, the Euclidean metric $\|L - L'\| = \sqrt{2 - 2L^TL'} = \sqrt{2 - 2\cos\theta} = 2\sin(\theta/2)$ can be approximated by $2\sin(\theta/2) \approx \theta$. Therefore, in this regime the quasi-Riemannian metric approximately equals the Euclidean metric, and the same holds for the orthogonal metric. As a result, for small errors the three metrics agree with each other up to a scale factor.
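The small-angle agreement claimed above is easy to verify numerically: rotate a unit Klein vector by a small angle θ and compare ‖L − L′‖ = 2 sin(θ/2) with the quasi-Riemannian value θ. A sketch with hypothetical test vectors:

```python
import numpy as np

def compare_small_angle(eps):
    """Two coplanar unit Klein vectors an angle eps apart: returns
    (d_QR, d_E) = (arccos(L1^T L2), ||L1 - L2||) = (eps, 2*sin(eps/2))."""
    c, s = np.cos(eps), np.sin(eps)
    L1 = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]) / np.sqrt(2.0)
    L2 = np.array([c, s, 0.0, -s, c, 0.0]) / np.sqrt(2.0)  # rotated copy
    d_qr = np.arccos(np.clip(float(L1 @ L2), -1.0, 1.0))
    d_e = float(np.linalg.norm(L1 - L2))
    return d_qr, d_e
```

For eps = 0.01, the two distances differ by about eps³/24 ≈ 4×10⁻⁸.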

Experiments with Simulated Data

In the experiments of this section, we simulated eight 3D lines on two orthogonal planes, as shown in Fig 5. From the synthetic data, we generated six images by adjusting the camera locations and parameters. The image size is 1024×1024 pixels. To simulate the effect of image noise, we evenly sample 20 points on each image line segment and add Gaussian noise with zero mean and standard deviation σ to the sampled image points; the projected image line is then fitted to the noise-corrupted point set by orthogonal least squares.
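The noisy-data generation and the orthogonal least-squares fit just described can be sketched as follows (our own helper names; the fitted line normal is the direction of least spread around the centroid):

```python
import numpy as np

def sample_noisy_segment(p0, p1, n=20, sigma=1.0, seed=None):
    """Evenly sample n points on the image segment p0->p1 and add
    zero-mean Gaussian noise of standard deviation sigma (pixels)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = (1.0 - t) * p0 + t * p1
    return pts + rng.normal(0.0, sigma, pts.shape)

def fit_line_orthogonal(pts):
    """Orthogonal (total) least-squares line fit: returns l = (n1, n2, c)
    with unit normal (n1, n2), minimizing the sum of squared
    perpendicular distances |n^T p + c| of the points to the line."""
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    n = Vt[-1]                      # direction of least spread
    return np.array([n[0], n[1], -float(n @ centroid)])
```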

Fig 5. Eight 3D lines (in pink color) on two orthogonal planes used in the simulations, where the small pyramids stand for camera viewpoints.


We evaluated and compared the performance of the linear algorithm LIN [4], the proposed linear algorithm LINa, and the optimal algorithms based on the algebraic optimality criterion (AOC), OPTa-I and OPTa-II. The evaluation criteria are the RMS (root mean square) of the 3D errors (i.e., the quasi-Riemannian distance between a reconstructed line and its ground truth), the algebraic errors, and the orthogonal errors.

6.1 Stability to Noise

This experiment tests the numerical stability of the algorithms with respect to different noise levels in the same geometric configuration. Gaussian noise with zero mean and standard deviation σ is added to each image point; the noise level σ varies from 0.0 to 3.0 pixels in steps of 0.5, and 150 independent trials are carried out at each noise level. Fig 6 shows the experimental results on 6 views.

Fig 6. Stability of the algorithms with respect to different noise levels.


(a) 3D errors; (b) algebraic errors; (c) orthogonal errors.

According to Lemma 1, LIN and LINa algorithm should yield the same result. On the other hand, since OPTa-I and OPTa-II algorithms both solve the algebraic-error minimization problem with the same error cost function, the two optimization algorithms should yield comparable estimation results, and only difference may be caused by the computational errors when solving the high-degree functions. These results have been verified by the experiments. In our experiments, both the LIN and the LINa algorithms produce the same errors, while the OPTa-I and the OPTa-II yield very close results, thus, we only show the results of LINa and OPTa-II in Fig 6. From this experiment, we can see that the RMS errors of all the algorithms increase with the increase of noise levels. The two optimal algorithms based on the AOC yield lower 3D errors, algebraic errors, and orthogonal errors than the two linear algorithms. Please note that since the three criteria are with different meanings and units, they are not comparable to each other.

In the experiments, the OPTa-I and OPTa-II algorithms rarely fail to produce a real solution, although the possibility of no real solution increases slowly with the noise level and the number of images.

We also compared the computational cost of these algorithms. The running times of the LIN, LINa, OPTa-I, and OPTa-II algorithms are 0.002, 0.002, 11.681, and 36.688 seconds, respectively. The two linear algorithms have comparable running times, while the two optimal algorithms are much more computationally intensive. Of the two optimal algorithms, OPTa-I is faster than OPTa-II since the former only needs to solve a 2-degree polynomial equation system, while OPTa-II needs to solve one 6-degree and one 10-degree system. Thus, OPTa-I is the better choice in practice.

6.2 Stability to Configurations

This experiment tests the numerical stability of the algorithms with respect to geometrical configurations. The number of views varies from 4 to 12 in steps of 2, and 150 independent trials are carried out for each number of views. Fig 7 shows the experimental results at noise level σ = 1.5, where only the results of LINa and OPTa-II are plotted since, as analyzed in Section 6.1, the LIN and LINa algorithms yield the same results and OPTa-I and OPTa-II produce very similar results. The experiment shows that the RMS errors of all the algorithms decrease as the number of views increases, and that the two optimal algorithms outperform the two linear algorithms in terms of 3D error, algebraic error, and orthogonal error.

Fig 7. Stability of different algorithms with respect to geometrical configurations.

(a) 3D errors; (b) algebraic errors; (c) orthogonal errors.

Experiments with Real Images

The proposed algorithms were evaluated on a variety of real images; the results on four data sets are reported below. As shown in Fig 8, the test images include a calibration cube, a planar checkerboard, and the Oxford "model house" and "corridor" datasets (http://www.robots.ox.ac.uk/~vgg/data/data-mview.html). The lines marked in white and red in these images are used to test the algorithms.

Fig 8. Image sets used in the experiments.

(a) Calibration cube; (b) planar checkerboard; (c) corridor; (d) model house.

For the calibration cube, six images were taken with a Nikon D40 camera at an image size of 3008×2000, and the correspondences between the 3D points on the cube and their images were used to compute the camera matrices. For the planar checkerboard, six images were taken with a Sony HX5C camera at an image size of 2592×1944, and the camera matrices were computed with the calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/). For the model house and corridor images, the camera matrices and the endpoint coordinates of the image lines are provided by the Oxford datasets.

Fig 9 shows the 3D errors, algebraic errors, and orthogonal errors of the different algorithms on the four data sets. These experiments lead to the same conclusion as the simulations: the two optimal algorithms yield similar results that are better than those of the two linear algorithms. Although the 3D error, algebraic error, and orthogonal error are plotted in one graph in Fig 9, the three errors are not comparable to each other since they are obtained using different criteria with different units. Fig 10 shows the 3D reconstruction results of the four objects using the OPTa-I algorithm; the 3D line models are correctly recovered by the proposed algorithm.

Fig 9. Experimental results.

(a) Calibration cube; (b) planar checkerboard; (c) model house; (d) corridor.

Fig 10. 3D Reconstruction results.

(a) Calibration cube; (b) planar checkerboard; (c) model house; (d) corridor.

Conclusion

In this paper, we have investigated line triangulation and line metrics. First, a new formula for the Plücker correction was introduced, from which a new linear algorithm for line triangulation was proposed. Then, two optimal algorithms were derived from the algebraic optimality criterion. In addition, two metrics on the 3D line space, the orthogonal metric and the quasi-Riemannian metric, were proposed for the quality evaluation of line triangulations. Experiments on simulated data and real images validate the proposed algorithms and show that the optimal solutions reconstruct more accurate 3D lines.

Appendix I: Angular Metric on SO(3)×SO(2)

Let \(SO(3) = \{R \in \mathbb{R}^{3\times 3} : RR^T = I,\ \det(R) = 1\}\) be the 3D rotation group. Every \(R \in SO(3)\) admits the following angle-axis representation:

\[ R = I + \sin\theta\,[a]_\times + (1 - \cos\theta)\,[a]_\times^2 \tag{81} \]

where \(\theta\) (\(0 \le \theta \le \pi\)) and \(a\) (\(\|a\| = 1\)) are respectively the rotation angle and rotation axis of \(R\), and the rotation angle satisfies:

\[ \theta = \arccos\!\left(\frac{\operatorname{tr}(R) - 1}{2}\right) \tag{82} \]

The angular distance between \(R, R' \in SO(3)\) is defined as [8]

\[ d^{(3)}(R, R') = \arccos\!\left(\frac{\operatorname{tr}(RR'^T) - 1}{2}\right) \tag{83} \]

Similarly, for the 2D rotation group \(SO(2) = \{W \in \mathbb{R}^{2\times 2} : WW^T = I,\ \det(W) = 1\}\), the angular distance is defined as

\[ d^{(2)}(W, W') = \arccos\!\left(\frac{\operatorname{tr}(WW'^T)}{2}\right) \tag{84} \]

From the angular distances \(d^{(3)}\) and \(d^{(2)}\), the angular distance on \(SO(3)\times SO(2)\) can be defined as

\[ d(X, X') = d^{(3)}(R, R') + d^{(2)}(W, W'), \qquad X = (R, W),\ X' = (R', W') \in SO(3)\times SO(2) \tag{85} \]

Since the geodesic distances of the metric spaces \((SO(3), d^{(3)})\) and \((SO(2), d^{(2)})\) are the angular distances \(d^{(3)}\) and \(d^{(2)}\) themselves [8], it is not difficult to verify that \(d\) is also the geodesic distance of the metric space \((SO(3)\times SO(2), d)\).
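Eqs (83)-(85) can be sketched directly (a minimal illustration with rotations as nested lists; the helper names are ours, and `rot_z`/`rot2` are example rotation constructors, not part of the paper):

```python
import math

def d3(R, Rp):
    """Angular distance on SO(3), Eq (83): arccos((tr(R Rp^T) - 1) / 2)."""
    tr = sum(R[i][j] * Rp[i][j] for i in range(3) for j in range(3))  # tr(R Rp^T)
    return math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))

def d2(W, Wp):
    """Angular distance on SO(2), Eq (84): arccos(tr(W Wp^T) / 2)."""
    tr = sum(W[i][j] * Wp[i][j] for i in range(2) for j in range(2))
    return math.acos(max(-1.0, min(1.0, tr / 2.0)))

def d_product(R, W, Rp, Wp):
    """Angular distance on SO(3) x SO(2), Eq (85): the sum of the two distances."""
    return d3(R, Rp) + d2(W, Wp)

def rot_z(t):
    """Example: 3D rotation by angle t about the z axis."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot2(t):
    """Example: 2D rotation by angle t."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]
```

The clamping of the arccos argument guards against round-off pushing it slightly outside [−1, 1]; for two rotations about the same axis the angular distance equals the absolute angle difference.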

Appendix II: Quasi-Riemannian Metric on K(1)

We first briefly state the Riemannian metric on \(S^5(1)\) in order to introduce the quasi-Riemannian metric on \(K(1)\). Let \(S = (0, \ldots, 0, -1)^T\) and \(N = (0, \ldots, 0, 1)^T\), called respectively the south pole and the north pole of \(S^5(1)\), and define the mappings \(\varphi_\pm : U_\pm \to \mathbb{R}^5\) as follows:

\[ Y = \varphi_\pm(X) \triangleq \frac{1}{1 \pm x_6}(x_1, \ldots, x_5)^T, \qquad X \in U_\pm \tag{86} \]

where \(U_+ = S^5(1)\setminus\{S\}\) and \(U_- = S^5(1)\setminus\{N\}\). Their inverse mappings are

\[ X = \varphi_\pm^{-1}(Y) = \frac{1}{1 + \sum_i y_i^2}\Big(2y_1, \ldots, 2y_5,\ \pm\big(1 - \textstyle\sum_i y_i^2\big)\Big)^T, \qquad Y \in \varphi_\pm(U_\pm) \tag{87} \]

and \(J = \{(U_+, \varphi_+), (U_-, \varphi_-)\}\) is a smooth structure on \(S^5(1)\). The Riemannian metric on \(S^5(1)\) induced by the standard Euclidean metric \(h = \sum_i (dx_i)^2\) of \(\mathbb{R}^6\) is

\[ g = \frac{4}{\big(1 + \sum_i y_i^2\big)^2}\sum_i (dy_i)^2 \tag{88} \]

Let \(\gamma = \{Y(t) = (y_1(t), \ldots, y_5(t))^T : 0 \le t \le 1\}\) be a smooth or piecewise smooth curve in \(S^5(1)\); its length is defined as

\[ L(\gamma) = \int_0^1 \sqrt{\frac{4}{\big(1 + \sum_i y_i^2(t)\big)^2}\sum_i \left(\frac{dy_i(t)}{dt}\right)^2}\,dt = \int_0^1 \frac{2}{1 + \|Y(t)\|^2}\left\|\frac{dY(t)}{dt}\right\| dt \tag{89} \]

For \(Y_0, Y_1 \in S^5(1)\), let \(\Gamma_{Y_0, Y_1}\) be the set of all smooth or piecewise smooth curves with endpoints \(Y_0\) and \(Y_1\). The Riemannian distance induced by the metric Eq (88) is

\[ d_S(Y_0, Y_1) \triangleq \inf\{L(\gamma) : \gamma \in \Gamma_{Y_0, Y_1}\} = L(\varepsilon(Y_0, Y_1)) = \arccos(Y_0^T Y_1) \tag{90} \]

where \(\varepsilon(Y_0, Y_1)\) is the short arc from \(Y_0\) to \(Y_1\) on a great circle in \(S^5(1)\).

It is not difficult to verify that the Riemannian distance \(d_S\) and the Euclidean distance \(d_E\) (\(= \|Y_0 - Y_1\|\)) satisfy the following relation:

Lemma 2: For \(Y_0, Y_1, Y_2, Y_3 \in S^5(1)\),

\[ d_S(Y_0, Y_1) < d_S(Y_2, Y_3) \iff d_E(Y_0, Y_1) < d_E(Y_2, Y_3) \tag{91} \]
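The two distances of Lemma 2 can be sketched as follows (vectors as plain Python lists; the function names are ours). The order equivalence of Eq (91) holds because the chordal distance is a monotone function of the geodesic one, \(d_E = 2\sin(d_S/2)\):

```python
import math

def d_sphere(y0, y1):
    """Riemannian (geodesic) distance on the unit sphere, Eq (90): arccos(y0^T y1)."""
    dot = sum(a * b for a, b in zip(y0, y1))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp against round-off

def d_euclid(y0, y1):
    """Chordal (Euclidean) distance ||y0 - y1|| between the same two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y0, y1)))

# Example points on S^5(1)
e1 = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
e2 = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
```

Since both distances increase together, nearest-point problems on the sphere can be solved under either metric with the same minimizer, which is used below when projecting onto \(K(1)\).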

Next, we introduce the quasi-Riemannian distance on K(1) from the Riemannian metric on S5(1).

For \(X_0 = \begin{pmatrix} u_0 \\ v_0 \end{pmatrix}, X_1 = \begin{pmatrix} u_1 \\ v_1 \end{pmatrix} \in K(1)\), let

\[ X(t) = (1-t)X_0 + tX_1 = \begin{pmatrix} (1-t)u_0 + tu_1 \\ (1-t)v_0 + tv_1 \end{pmatrix} \triangleq \begin{pmatrix} u(t) \\ v(t) \end{pmatrix}, \qquad 0 \le t \le 1 \tag{92} \]

Then, we have the following lemma.

Lemma 3: (a) If \(u_0 \pm v_0 \ne -(u_1 \pm v_1)\), then \(u(t) \pm v(t) \ne 0\) for all \(t \in [0, 1]\);

(b) If \(u_0 + v_0 = -(u_1 + v_1)\), then

\[ u(t) + v(t) \ne 0,\ \ t \in [0, 1]\setminus\{1/2\}; \qquad u(1/2) + v(1/2) = 0 \]

(c) If \(u_0 - v_0 = -(u_1 - v_1)\), then

\[ u(t) - v(t) \ne 0,\ \ t \in [0, 1]\setminus\{1/2\}; \qquad u(1/2) - v(1/2) = 0 \]

Proof: From \(u(t) \pm v(t) = (1-t)(u_0 \pm v_0) + t(u_1 \pm v_1)\),

\[ u(t) \pm v(t) = 0 \iff \begin{cases} u_0 \pm v_0 = -s(u_1 \pm v_1) \\ s = t/(1-t) \end{cases} \tag{93} \]

Since \(\|u_0 \pm v_0\| = \|u_1 \pm v_1\| = 1\), the first equation of Eq (93) forces \(s = 1\), i.e., \(u_0 \pm v_0 = -(u_1 \pm v_1)\); therefore (a) holds. If \(u_0 \pm v_0 = -(u_1 \pm v_1)\), the second equation of Eq (93) gives \(t = 1/2\), and thus (b) holds. Similarly, (c) holds.
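Case (b) of Lemma 3 can be illustrated numerically. The pair \(X_0, X_1\) below is a hand-picked example (our choice, not from the paper) with \(u_0 + v_0 = -(u_1 + v_1)\), so \(\|u(t) + v(t)\|\) vanishes exactly at \(t = 1/2\) while \(\|u(t) - v(t)\|\) never does:

```python
import math

def interp(x0, x1, t):
    """Componentwise linear interpolation (1 - t) * x0 + t * x1, as in Eq (92)."""
    return [(1 - t) * a + t * b for a, b in zip(x0, x1)]

def norm(x):
    return math.sqrt(sum(a * a for a in x))

r = 1 / math.sqrt(2)
u0, v0 = [r, 0.0, 0.0], [0.0, r, 0.0]    # X0 = (u0; v0): unit norm, u0^T v0 = 0
u1, v1 = [0.0, -r, 0.0], [-r, 0.0, 0.0]  # X1 chosen so that u1 + v1 = -(u0 + v0)

def splus(t):
    """||u(t) + v(t)|| along the segment X(t)."""
    return norm([a + b for a, b in zip(interp(u0, u1, t), interp(v0, v1, t))])

def sminus(t):
    """||u(t) - v(t)|| along the segment X(t)."""
    return norm([a - b for a, b in zip(interp(u0, u1, t), interp(v0, v1, t))])
```

Here \(u(t) + v(t) = (1 - 2t)(u_0 + v_0)\), so `splus` is \(|1 - 2t|\), zero only at the midpoint, while for this example \(u_1 - v_1 = u_0 - v_0\), so `sminus` stays at 1.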

Clearly, the short arc from \(X_0\) to \(X_1\) on a great circle in \(S^5(1)\) is

\[ \bar{X}(t) = \frac{X(t)}{\|X(t)\|} = \frac{1}{\|X(t)\|}\begin{pmatrix} u(t) \\ v(t) \end{pmatrix}, \qquad 0 \le t \le 1 \tag{94} \]

Since

\[ X^T(t)KX(t) = 2u^T(t)v(t) = 2t(1-t)\big(u_0^T v_1 + v_0^T u_1\big) = 2t(1-t)X_0^T K X_1, \qquad 0 \le t \le 1 \]

we have

(a) if \(X_0^T K X_1 = 0\), then \(\bar{X}(t) \in K(1)\) for \(0 \le t \le 1\);

(b) if \(X_0^T K X_1 \ne 0\), then \(\bar{X}(t) \notin K(1)\) for \(0 < t < 1\).

In case (a), the Riemannian distance on \(S^5(1)\) leads directly to the Riemannian distance between \(X_0\) and \(X_1\) in \(K(1)\):

\[ d_K(X_0, X_1) = \arccos(X_0^T X_1) \tag{95} \]

We next consider case (b). According to Proposition 1 and Lemma 3, the best approximation of \(\bar{X}(t)\) (\(t \ne 1/2\)) on the sub-manifold \(K(1)\) under the Euclidean metric is

\[ X^*(t) = \frac{1}{2}\begin{pmatrix} \dfrac{u(t)+v(t)}{\|u(t)+v(t)\|} + \dfrac{u(t)-v(t)}{\|u(t)-v(t)\|} \\[2mm] \dfrac{u(t)+v(t)}{\|u(t)+v(t)\|} - \dfrac{u(t)-v(t)}{\|u(t)-v(t)\|} \end{pmatrix} \in K(1) \tag{96} \]
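Eq (96) can be sketched as a small routine (plain-list vectors; the names are ours). For any \((u, v)\) with \(u + v \ne 0\) and \(u - v \ne 0\), the output satisfies the \(K(1)\) constraints by construction:

```python
import math

def normalize(x):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(a * a for a in x))
    return [a / n for a in x]

def project_to_K1(u, v):
    """Closest point on K(1) (u*^T v* = 0, ||u*||^2 + ||v*||^2 = 1) per Eq (96)."""
    p = normalize([a + b for a, b in zip(u, v)])  # (u + v) / ||u + v||
    m = normalize([a - b for a, b in zip(u, v)])  # (u - v) / ||u - v||
    u_star = [0.5 * (a + b) for a, b in zip(p, m)]
    v_star = [0.5 * (a - b) for a, b in zip(p, m)]
    return u_star, v_star
```

The constraints hold identically: \(u^{*T} v^* = (\|p\|^2 - \|m\|^2)/4 = 0\) and \(\|u^*\|^2 + \|v^*\|^2 = (\|p\|^2 + \|m\|^2)/2 = 1\) for any input.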

By Lemma 2, \(X^*(t)\) is also the best approximation of \(\bar{X}(t)\) on \(K(1)\) under the Riemannian metric; thus \(X^*(t)\) is the orthogonal projection of \(\bar{X}(t)\) onto \(K(1)\) under the Riemannian metric. By Lemma 3, \(X^*(t)\) is a smooth or piecewise smooth curve on \(K(1)\). Thus, letting \(Y^*(t) = \varphi(X^*(t))\), a quasi-Riemannian distance between \(X_0\) and \(X_1\) in \(K(1)\) is obtained from the Riemannian metric on \(S^5(1)\):

\[ d_K(X_0, X_1) = \int_0^1 \frac{2}{1 + \|Y^*(t)\|^2}\left\|\frac{dY^*(t)}{dt}\right\| dt \triangleq \int_0^1 f(t)\,dt \tag{97a} \]

Proposition 3: The integral Eq (97a) can be expressed as:

\[ d_K(X_0, X_1) = \sqrt{2}\int_0^{1/2}\left(\frac{a}{(t^2+a)^2} + \frac{b}{(t^2+b)^2}\right)^{1/2} dt \tag{97} \]

where

\[ a = \begin{cases} (2-q_+)/(4q_+), & q_+ \ne 0 \\ 0, & q_+ = 0 \end{cases} \qquad b = \begin{cases} (2-q_-)/(4q_-), & q_- \ne 0 \\ 0, & q_- = 0 \end{cases} \qquad q_\pm = 1 - \big(X_0^T X_1 \pm X_0^T K X_1\big) \tag{98} \]

In particular, if \(X_0^T K X_1 = 0\) then \(d_K(X_0, X_1) = \arccos(X_0^T X_1)\), i.e., Eq (95) is a special case of Eq (97).

Proof: By some mathematical manipulation, the integrand of Eq (97a) can be expressed as:

\[ f(t) = \frac{1}{\sqrt{2}}\left(\frac{(2-q_+)q_+}{(2q_+(t^2-t)+1)^2} + \frac{(2-q_-)q_-}{(2q_-(t^2-t)+1)^2}\right)^{1/2} = \frac{1}{\sqrt{2}}\left(\frac{4(2-q_+)q_+}{\big(4q_+(t-1/2)^2+(2-q_+)\big)^2} + \frac{4(2-q_-)q_-}{\big(4q_-(t-1/2)^2+(2-q_-)\big)^2}\right)^{1/2} = \frac{1}{\sqrt{2}}\left(\frac{a}{((t-1/2)^2+a)^2} + \frac{b}{((t-1/2)^2+b)^2}\right)^{1/2} \tag{99} \]

Thus,

\[ d_K(X_0, X_1) = \frac{1}{\sqrt{2}}\int_0^1\left(\frac{a}{((t-1/2)^2+a)^2} + \frac{b}{((t-1/2)^2+b)^2}\right)^{1/2} dt = \sqrt{2}\int_0^{1/2}\left(\frac{a}{(t^2+a)^2} + \frac{b}{(t^2+b)^2}\right)^{1/2} dt \tag{100} \]

If \(X_0^T K X_1 = 0\), then \(a = b = \frac{1}{4}\cdot\frac{1 + X_0^T X_1}{1 - X_0^T X_1}\), and

\[ d_K(X_0, X_1) = 2\sqrt{a}\int_0^{1/2}\frac{1}{t^2+a}\,dt = 2\arctan\frac{1}{2\sqrt{a}} = 2\arctan\sqrt{\frac{1 - X_0^T X_1}{1 + X_0^T X_1}} = \arccos(X_0^T X_1) \tag{101} \]
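The distance of Proposition 3 can be evaluated by simple quadrature. The sketch below (names ours) reads the leading constant of Eq (97) as \(\sqrt{2}\) — an assumption on our part about the extraction-damaged coefficient, chosen because it makes the \(X_0^T K X_1 = 0\) case reduce exactly to \(\arccos(X_0^T X_1)\), matching Eq (101):

```python
import math

def quasi_riemannian_distance(x0, x1):
    """Quasi-Riemannian distance on K(1) via Eqs (97)-(98), composite Simpson rule.

    Assumes x0, x1 are unit 6-vectors (u; v) with u^T v = 0, and that the
    leading constant of Eq (97) is sqrt(2) (our reading of the formula).
    """
    n = len(x0) // 2
    dot = sum(p * q for p, q in zip(x0, x1))                             # x0^T x1
    kdot = sum(x0[i] * x1[n + i] + x0[n + i] * x1[i] for i in range(n))  # x0^T K x1
    qp = 1.0 - (dot + kdot)
    qm = 1.0 - (dot - kdot)
    a = (2.0 - qp) / (4.0 * qp) if qp != 0.0 else 0.0
    b = (2.0 - qm) / (4.0 * qm) if qm != 0.0 else 0.0

    def f(t):  # integrand of Eq (97); a zero coefficient drops its term, per Eq (98)
        s = 0.0
        if a != 0.0:
            s += a / (t * t + a) ** 2
        if b != 0.0:
            s += b / (t * t + b) ** 2
        return math.sqrt(s)

    m = 1000                  # even number of Simpson subintervals on [0, 1/2]
    h = 0.5 / m
    acc = f(0.0) + f(0.5)
    for i in range(1, m):
        acc += (4.0 if i % 2 else 2.0) * f(i * h)
    return math.sqrt(2.0) * acc * h / 3.0
```

For inputs with \(X_0^T K X_1 = 0\) the quadrature agrees numerically with \(\arccos(X_0^T X_1)\), which is a convenient sanity check of the implementation.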

Data Availability

All relevant data are within the paper.

Funding Statement

This work was supported by the National Natural Science Foundation of China (NSFC) under grants 61375043 (received by FCW) and 61273282 (received by GHW). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Josephson K. and Kahl F. (2008) Triangulation of points, lines and conics. J. Math. Imaging Vis. 32: 215–225.
  • 2. Faugeras O. and Mourrain B. (1995) On the geometry and algebra of the point and line correspondences between n images. In: Proc. ICCV95, Cambridge, Massachusetts, USA, pp. 951–956.
  • 3. Bartoli A. and Sturm P. (2004) The 3D line motion matrix and alignment of line reconstructions. Int. J. Comput. Vis. 57: 159–178.
  • 4. Bartoli A. and Sturm P. (2005) Structure-from-motion using lines: Representation, triangulation, and bundle adjustment. Computer Vision and Image Understanding, 100: 416–441.
  • 5. Zhang Q., Wu Y., Wang F., Dong Q. and Jiao L. (2014) Suboptimal solutions to the algebraic-error line triangulation. Journal of Mathematical Imaging and Vision, 49(3): 611–632.
  • 6. Morgan A. (1987) Solving polynomial systems using continuation for engineering and scientific problems. Englewood Cliffs, NJ: Prentice Hall.
  • 7. Morgan A. and Sommese A. (1987) Computing all solutions to polynomial systems using homotopy continuation. Appl. Math. Comp. 24: 115–138.
  • 8. Freund R. W. and Jarre F. (2001) Solving the sum-of-ratios problem by an interior-point method. J. Glob. Opt. 19: 83–102.
  • 9. Ronda J., Valdés A. and Gallego G. (2008) Line geometry and camera autocalibration. J. Math. Imaging Vis. 32: 193–214.
  • 10. Hartley R. and Zisserman A. (2000) Multiple view geometry in computer vision. Cambridge: Cambridge University Press.
  • 11. Lasdon L. S. (2002) Optimization theory for large systems. Mineola, New York: Dover Publications, Inc.
  • 12. Chow S. N., Mallet-Paret J. and Yorke J. (1978) Finding zeros of maps: homotopy methods that are constructive with probability one. Math. Comp. 32: 887–889.
  • 13. Chow S. N., Mallet-Paret J. and Yorke J. (1979) A homotopy method for locating all zeros of a system of polynomials. Lecture Notes in Math. 730: 77–88.
  • 14. Maybank S. and Faugeras O. (1992) A theory of self-calibration of a moving camera. Int. J. Comput. Vis. 8: 123–151.
  • 15. Wu F. C., Zhang M. and Hu Z. Y. (2013) Self-calibration under the Cayley framework. Int. J. Comput. Vis. 103: 372–398.
  • 16. Petersen P. (1998) Riemannian geometry. GTM 171, New York: Springer-Verlag.
