Sensors (Basel, Switzerland). 2016 Jun 1; 16(6): 807. doi: 10.3390/s16060807

An Exact Formula for Calculating Inverse Radial Lens Distortions

Pierre Drap 1,*, Julien Lefèvre 1
Editor: Manuela Vieira
PMCID: PMC4934233  PMID: 27258288

Abstract

This article presents a new approach to calculating the inverse of radial distortions. Radial distortion is currently modeled by a polynomial expression; the method presented here provides another polynomial expression for the inverse distortion, whose new coefficients are functions of the original ones. After describing the state of the art, the proposed method is developed. It is based on a formal calculus involving a power series, used to deduce a recursive formula for the new coefficients. We present several implementations of this method and describe the experiments conducted to assess the validity of the new approach. Such a non-iterative approach, based on another polynomial expression that can be deduced from the first one, is interesting in terms of performance, reuse of existing software, and bridging between existing software tools that do not consider distortion from the same point of view.

Keywords: radial distortion, distortion correction, power series

1. Introduction

Distortion is a physical phenomenon that, in certain situations, may greatly impact an image’s geometry without impairing quality or reducing the information present in the image. Applying the projective pinhole camera model is often not possible without taking into account the distortion caused by the camera lens. This phenomenon can be modelled by a radial distortion, the most prominent component, and a second effect of lesser magnitude, a decentering distortion, which has both radial and tangential components. Radial distortion is caused by the spherical shape of the lens, whereas tangential distortion is caused by the decentering and non-orthogonality of the lens components with respect to the optical axis [1,2]. It is important to note that radial distortion is highly correlated with the focal length [3], even though in the literature it is not modelled within the intrinsic parameters of the camera [4]. This is due to the fact that the radial distortion model is not linear, contrary to the other intrinsic parameters. Figure 1 shows the displacement applied to a point by both radial and tangential distortion.

Figure 1.


Point shifted by distortion.

Decentering distortion was modelled by Conrady in 1919 [5] then remodelled by Brown in 1971 [6] and a radial distortion model was proposed by Brown in 1966 [7]. These distortion models have been adopted by the Photogrammetry as well as the Computer Vision communities for several decades. Most photogrammetric software such as PhotoModeler (EOS) uses these models (see Equations (1) and (2)) to correct observations visible on the images and provide ideal observations.

Roughly, radial distortion can be classified in two families, barrel distortion and pincushion radial distortion. Regarding the k1 coefficient in Formula (1), barrel distortion corresponds to a negative value of k1 and pincushion distortion to a positive value of k1, for an application of the distortion and not a compensation. As shown in Figure 2, barrel and pincushion distortions have an inverse effect and an image affected by a pincushion distortion can be corrected by a barrel distortion (and vice-versa) [8,9,10,11].

Figure 2.


In the center, a painting from Piet Mondrian [12] (which is now in the public domain since 1 January 2016); on the left, the painting with a barrel effect; and on the right, the same image with pincushion distortion.

Barrel distortion can be physically present in small focal length systems, while larger focal lengths can result in pincushion distortion [8,10]. These radial distortion effects can be very important, especially in inexpensive wide-angle lenses which are often used today.

Using these models to compensate the observations is now well known, and many software packages dealing with images or panoramas offer plugins dedicated to distortion correction (mainly radial distortion only) [13]. However, although we have the equations to compensate the distortion, how to compute the inverse function in order to apply such a distortion is not obvious. For example, when the image of a known 3D point is computed using a calibrated camera, the 2D projected point can be easily computed, but we then need to apply the distortion to this image point in order to obtain an accurate projection of the original 3D point. This first application justifies the present work: how can we determine a closed-form inverse of the distortion model equations? The second reason is the merging of the work of the two communities involved, photogrammetry and computer vision. After having worked separately for years, the relationship between the two communities has changed drastically over the last decade [14]. This is also visible in new commercial or open-source software dealing with photogrammetry or computer vision, for example PhotoScan (from Agisoft), MATLAB with its camera calibration toolbox or, on the open-source side, the OpenCV toolbox which provides a solution for multiview adjustment. These three software packages use the same Equations (1) and (2) to manage distortion, but the mathematical model is here used to apply distortion and not to compensate it. Determining an exact formula to calculate the inverse lens distortion, which allows the same software to apply and compensate distortion with two sets of kn parameters, can be very useful and is, in fact, the purpose of this work.

This paper is organized as follows: in the next section, the computation of an exact formula for the inverse radial distortion is presented. This approach gives a set of k1...kn coefficients computed from the original polynomial distortion model. In Section 3, several applications of this formula are presented. First of all, an experiment on the inversion formula is done using only the k1...kn coefficients. Then, the formula for the inverse radial lens distortion is applied to an image coming from a metric camera (Wild P32) with a 3-micron distortion at the edge of the frame. The next experiment is done on a calibration grid built with black disks. Finally, a discussion on converting the distortion model between several photogrammetric software packages, PhotoModeler (EOS [15]), PhotoScan (Agisoft [16]) and OpenCV [17], is proposed.

2. Previous Work

2.1. Calibration Approach

Radial distortion is mainly considered in the camera calibration process. Since Duane Brown’s first publications, a large body of work has been produced in the field of camera calibration, opening the way for new methods. Several techniques have been proposed, using orthogonal planes, 2D objects with planar patterns, up to self-calibration with unknown 3D points. Interesting reviews have been published on both the photogrammetry and computer vision sides by Fraser [18], Zhang [19] and, more recently, by Shortis [20].

When Brown [6] proposed a radial distortion model in 1971, he also proposed a way to calibrate cameras using a set of plumb lines. The idea of using a set of straight wires to compute a distortion model in a camera calibration process remains in use 45 years later in the fields of photogrammetry and computer vision: Hartley in 1993 [21,22], then Faugeras and Devernay [23] and, more recently, Nomura [24], Claus [25], Tardif [26], and Rosten [27].

2.2. Inverse Radial Distortion

After Conrady and Brown, a lot of work was done on removing distortion from images. As the problem is shared by the photogrammetry community as well as computer vision, many books and papers deal with this topic, including the famous Manual of Photogrammetry [28] and the overview given by Atkinson for the two communities [29].

Nevertheless, the problem of reverse distortion is somewhat the poor relation of the problems of distortion. As mentioned by Heikkilä and Silvén [30], “only a few solutions to the back-projection problem can be found in literature, although the problem is evident in many applications.” And in the same paper, “we can notice that there is no analytic solution to the inverse mapping”.

In the particular case of high distortion, as in wide-angle and fish-eye lenses, some non-polynomial (and invertible) models have been proposed; for example, Basu and Licardie introduced the Fish-Eye Transform (FET) in [31]. Faugeras and Devernay [23] also proposed another invertible model based on the Field-of-View. A complete description of these models can be found in the review written by Hughes [32] and also in [33].

Regarding the polynomial model, several solutions have been tested to perform the inverse radial distortion, and they can be classified into three main classes (even if other approaches can be found, such as the use of a neural network [34]):

  • Approximation. Mallon [35], Heikkilä [30,36] and then Wei and Ma [37] proposed inverse approximations of a Taylor expansion including first order derivatives. According to Mallon and Whelan, “This is sometimes assumed to be the actual model and indeed suffices for small distortion levels.” [35]. A global approach, inverse distortion plus image interpolation, is presented in a patent held by Adobe Systems Incorporated [38].

  • Iterative. Starting from an initial guess, the distorted position is iteratively refined until a convenient convergence is determined [39,40,41];

  • Look-up table. All the pixels are previously computed and a look-up table is generated to store the mapping (as for example in OpenCV).

All these methods involve restrictions and constraints on accuracy, processing time or memory usage.

Nevertheless, some very good results can be obtained. For example, implementing the iterative approach gives excellent results, although the processing time is drastically increased. The method, given in Peter Abeles’ blog [40], is easy to implement. Results are shown in Figure 3.

Figure 3.


Iterative method applied to a Nikon D700 camera with a 14 mm lens. Along the frame diagonal, the points are first compensated by the normal radial distortion process using Equation (1); the distortion is then applied with the iterative process [40] and the result is compared to the original point. The Y-axis shows the distance between the original point and the computed reverse point.

The iterative solution works by first estimating the radial distortion magnitude at the distorted point and then refining the estimate until it converges.

Algorithm 1 shows an implementation of this approach.

Algorithm 1 Iterative algorithm to compute the inverse distortion
Require: point Pn
Pc=Pn
repeat
  r=||Pc||
  dr = 1 + k_1 r^2 + k_2 r^4 + ...
  Pc=Pn/dr
until Convergence of Pc
return Pc

Only a few iterations are necessary.
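For illustration, a minimal Python sketch of this iterative scheme is given below. It is only a sketch under our own conventions (the function name, the fixed iteration count instead of a convergence test, and the purely radial model with the CoD at the origin are our simplifications):

def apply_distortion_iterative(xn, yn, k, n_iter=20):
    # Find the point (xc, yc) such that multiplying it by 1 + k1*r^2 + k2*r^4 + ...
    # (r being its own radius) gives back (xn, yn), i.e., apply the distortion
    # that the compensation model of Equation (1) would remove.
    xc, yc = xn, yn
    for _ in range(n_iter):
        r2 = xc * xc + yc * yc
        dr = 1.0 + sum(ki * r2 ** (i + 1) for i, ki in enumerate(k))
        xc, yc = xn / dr, yn / dr      # fixed number of iterations instead of a convergence test
    return xc, yc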

The results presented in Figure 3 are in pixels. The camera used here is a Nikon D700 with a 14 mm lens. The calibration was done with PhotoModeler (EOS) and the results are k1 = 1.532 × 10^-4, k2 = −9.656 × 10^-8, k3 = 7.245 × 10^-11. The coefficients are expressed in millimeters. The center of autocollimation and the center of distortion are close to the image center. Only a few iterations are necessary to compute the inverse distortion. In this case, with a calibration made using PhotoModeler [15], the inverse of the distortion represents the application of the distortion to a point projected from 3D space onto the image.

This iterative approach is very interesting when the processing time is not an issue, as for example in the generation of a look-up table. Note, however, that a good initial value is needed.

Building on these existing methods, we now want to obtain a formula for the inverse radial distortion when it is modelled by a polynomial form as described by Brown. The inverse polynomial will be expressed in terms of the coefficients k1...k4 of the original distortion.

3. Exact Formula for Inverse Radial Distortion: An Original Approach

3.1. Lens Distortion Models

We consider the general model of distortion correction or distortion removal that can be written in the following form by separating radial and tangential/decentering components:

x' = x + \bar{x}\,(k_1 r^2 + k_2 r^4 + k_3 r^6 + \dots) + \big[p_1(r^2 + 2\bar{x}^2) + 2 p_2 \bar{x}\bar{y}\big](1 + p_3 r^2 + \dots)   (1)

y' = y + \underbrace{\bar{y}\,(k_1 r^2 + k_2 r^4 + k_3 r^6 + \dots)}_{\text{radial distortion}} + \underbrace{\big[p_2(r^2 + 2\bar{y}^2) + 2 p_1 \bar{x}\bar{y}\big](1 + p_3 r^2 + \dots)}_{\text{tangential distortion}}   (2)

where \bar{x} = x - x_0, \bar{y} = y - y_0, and r = \sqrt{\bar{x}^2 + \bar{y}^2}.
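As an illustration only (the function name, argument conventions and unit assumptions are ours), Equations (1) and (2) can be transcribed directly in Python, assuming coordinates and coefficients expressed in the same units (e.g., millimeters) and the distortion center at (x0, y0):

def brown_correct(x, y, k, p, x0=0.0, y0=0.0):
    # Equations (1)-(2): radial part with coefficients k = (k1, k2, ...),
    # decentering part with p = (p1, p2, p3).
    xb, yb = x - x0, y - y0                      # coordinates relative to the distortion center
    r2 = xb * xb + yb * yb
    radial = sum(ki * r2 ** (i + 1) for i, ki in enumerate(k))
    p1, p2, p3 = p
    scale = 1.0 + p3 * r2
    x_c = x + xb * radial + (p1 * (r2 + 2 * xb * xb) + 2 * p2 * xb * yb) * scale
    y_c = y + yb * radial + (p2 * (r2 + 2 * yb * yb) + 2 * p1 * xb * yb) * scale
    return x_c, y_c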

In the following we will consider only radial distortion.

3.2. General Framework

Given a model of distortion or correction with parameters (k1, k2, k3, ...), our general objective is to find the inverse transformation. A natural assumption is to express the inverse transformation in the same form as the direct transformation, i.e., with parameters (k'1, k'2, k'3, ...). Therefore we want to express each k'i as a function of all the kj.

Radial distortion

Let us assume that there exist two transformations T1 and T2:

T_1 : (x, y) \mapsto (x', y') = P(r)\,(x, y)   (3)
T_2 : (x', y') \mapsto (x, y) = Q(r')\,(x', y')   (4)

where r = \sqrt{x^2 + y^2}, r' = \sqrt{x'^2 + y'^2}, and P and Q are power series:

P(r) := \sum_{n=0}^{+\infty} a_n r^{2n}   (5)
Q(r') := \sum_{n=0}^{+\infty} b_n r'^{2n}   (6)

with a_0 = 1, a_1 = k_1, ..., a_n = k_n (the change of letter allows k to be used as an index in the calculations of the Appendix, and starting the series at n = 0 simplifies them). We can scale r' as r' = \alpha r in order to have the same domain of definition for P and Q. So Q reads:

Q(r') = \sum_{n=0}^{+\infty} b'_n r^{2n}   (7)

with b'_n = b_n \alpha^{2n}. In the following, b_n is written instead of b'_n, but we keep this change of variable in mind.

Given the definition of r and by using transformation T2 in Equation (4) we obtain:

r = r'\,|Q(r')|

and similarly with Equation (3):

r' = r\,|P(r)|

P and Q are positive, which allows the absolute values to be removed. Hence, by injecting the last equation into the first, we get:

r = r\,P(r)\,Q\big(r\,P(r)\big)

and at the end:

1 = P(r)\,Q\big(r\,P(r)\big)   (8)

It is possible to derive a very general relation between coefficients an and bn but it is not exactly adapted to real situations where P is a polynomial of finite order. Therefore we can derive a slightly simpler relationship in the case where only a1,...,a4 are given. It is summarized in the following result:

Proposition 1. 

Given the sequence a1,...,a4 it is possible to obtain the recursive relation:

b_0 = 1 and, for n \ge 1,
b_n = -\sum_{k=1}^{4} a_k\, q(n-k) \;-\; \sum_{\substack{j+k=n \\ 0 \le k \le n-1,\; 1 \le j \le 8k}} b_k\, p(j, 2k)   (9)

where we use the following intermediate coefficients:

p(j,k) = \sum_{\substack{n_1+\dots+n_k=j \\ 0 \le n_i \le 4}} a_{n_1} \cdots a_{n_k}
q(k) = -\sum_{j=1}^{4} a_j\, q(k-j), \quad \text{with } q(0) = 1

We will derive this expression in Appendix A and show how the coefficients b1,...,bn can be computed both with symbolic and numeric algorithms in Appendix B.

Several remarks can be made about this result:

Remark 1

The problem is symmetric in terms of P and Q, so the relations found for an can of course be applied in the reverse order.

Remark 2

For any n, the coefficient bn can be computed recursively. In Equation (9), the first summation is obtained from a_1,...,a_4 and q(n-1),...,q(n-4), which depend only on the sequence a_n. Similarly, the second summation involves b_0,...,b_{n-1} and the values p(j,2k), which also depend only on the given sequence a_n.

Therefore, the recursive formula for bn can be implemented at any order n. We provide the first four terms:

b_1 = -a_1   (10)
b_2 = 3a_1^2 - a_2   (11)
b_3 = 8a_1 a_2 - 12a_1^3 - a_3   (12)
b_4 = 55a_1^4 + 10a_1 a_3 - 55a_1^2 a_2 + 5a_2^2 - a_4   (13)

All formulas up to b_9 are summarized in Appendix C.
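These expressions can be checked mechanically. The following sympy sketch (our own verification script, not part of the original derivation) expands P(r) Q(r P(r)) with b1...b4 taken from Equations (10)-(13) and confirms that all terms up to r^8 cancel, as required by relation (8):

import sympy as sp

r, a1, a2, a3, a4 = sp.symbols('r a1 a2 a3 a4')

# Forward series truncated at order 4: P(r) = 1 + a1 r^2 + ... + a4 r^8
P = 1 + a1*r**2 + a2*r**4 + a3*r**6 + a4*r**8

# Inverse coefficients from Equations (10)-(13)
b1 = -a1
b2 = 3*a1**2 - a2
b3 = 8*a1*a2 - 12*a1**3 - a3
b4 = 55*a1**4 + 10*a1*a3 - 55*a1**2*a2 + 5*a2**2 - a4

s = r*P                                   # distorted radius r' = r P(r)
Q = 1 + b1*s**2 + b2*s**4 + b3*s**6 + b4*s**8

# Relation (8): P(r) Q(r P(r)) = 1; with only b1..b4 it must hold up to order r^8
residual = sp.expand(P*Q - 1)
low_order = sum(residual.coeff(r, n) * r**n for n in range(10))
print(sp.simplify(low_order))             # prints 0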

4. Results and Experimental Section

In this section we propose three experiments to test this inverse formula for radial distortion, in order to evaluate the relevance of such an approach.

  • First, we begin by testing the accuracy of the inverse formula by applying the forward/inverse formula recursively within a loop. Hence, the inverse of the inverse radial distortion is computed 10,000 times and compared to the original distortion coefficients.

  • Then, for a given calibrated camera, we compute the residual after applying and compensating the distortion across the camera frame. A residual curve shows the result of the inversion over the whole frame.

  • The last experiment uses the inverse distortion model on an image taken with a distortion-free metric camera built with a large eccentricity (a film-based Wild P32 camera). We apply a strong distortion, then compensate it, and finally compare the result to the original image.

4.1. Inverse Distortion Loop

In this experiment, since the formula gives an inverse of the radial distortion, we apply it twice and compare the final result with the original coefficients. In a second step, we iterate this process 10,000 times and compare the final result with the original distortion.

Table 1 shows the original radial distortion and the computed inverse parameters. The original distortion is obtained by using PhotoModeler to calibrate a Nikon D700 camera with a 14 mm lens from Sigma.

Table 1.

Radial distortion calibration (second column) and the computed inverse radial distortion with coefficients k1...k9 (third column).

Radial Distortion Coefficient   Original Value   Computed Inverse Value
k1   1.532 × 10^-4   −1.532 × 10^-4
k2   −9.656 × 10^-8   1.6697072 × 10^-7
k3   7.245 × 10^-11   −2.33941625216 × 10^-10
k4   0.0   3.1255518770316804 × 10^-13
k5   0.0   −4.774156462972984 × 10^-16
k6   0.0   7.680785197322419 × 10^-19
k7   0.0   −1.1582853960835112 × 10^-21
k8   0.0   2.1694555835054252 × 10^-24
k9   0.0   −3.779164309884112 × 10^-27

The results for this step are presented in Table 2. In the columns ‘Delta Loop 1’ and ‘Delta Loop 10,000’ we can see that k1 and k2 did not change, and the deltas on k3 and k4 are small with respect to the corresponding coefficients: the error is many orders of magnitude smaller than the coefficient itself. Note that k4 was not present in the original distortion; as the inverse formula is a function of k1...k4 only, the loop is computed without the coefficients k5...k9, which influences the results, as visible from k4.

Table 2.

Radial distortion inverse loop and residuals between the coefficients of the original distortion and after n inversions (n = 2 and n = 10,000).

Coefficient   Original   Inverse 1   Delta Loop 1   Delta Loop 10,000
k1   1.532 × 10^-4   −1.532 × 10^-4   0.0   0.0
k2   −9.656 × 10^-8   1.6697072 × 10^-7   0.0   0.0
k3   7.245 × 10^-11   −2.33941625216 × 10^-10   1.292469707114 × 10^-26   1.292469707 × 10^-26
k4   0.0   3.1255518770316804 × 10^-13   1.009741958682 × 10^-28   1.009842932 × 10^-24

This first experiment shows the inverse property of the formula, and of course not the relevance of an inverse distortion model; but it also shows the high stability of the inversion process. However, even if coefficients k1...k4 are sufficient to compensate the distortion, using coefficients k1...k9 is important for the stability of the inversion.

The next two experiments show the relevance of this formula for the inverse radial distortion model.

4.2. Inverse Distortion Computation onto a Frame

This second experiment uses a Nikon D700 equipped with a 14 mm lens from Sigma. This camera has a full-frame format, i.e., a 24 mm × 36 mm frame size. The camera was calibrated using PhotoModeler and the inverse distortion coefficients are presented in Table 1, where the second column gives the calibration result for the radial distortion, and the third column the computed inverse radial distortion.

Note that the distortion model provided by the calibration using PhotoModeler gives as a result a compensation of the radial distortion, in millimeters, limited to the frame.

The way to use these coefficients is to first express a 2D point of the image in the camera reference system, in millimeters, with the origin at the CoD (Center of Distortion), close to the center of the image. The polynomial model is then applied from this point.

The inverse of this distortion is the application of such a radial distortion to a point theoretically projected onto the frame.

In all following experiments, the residuals are computed as follows:

A 2D point p is chosen inside the frame; its coordinates are first expressed in millimeters in the camera reference system with the origin at the CoD. Then p1 is p compensated by the inverse of the distortion. Finally, p2 is p1 compensated by the original distortion.

The residual is the value dist(p, p2).
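A minimal numeric sketch of this residual computation is given below. It is illustrative only: the test point, the truncation at four inverse coefficients, and the helper name are ours, while the original coefficients are those of Table 1 (in millimeters):

import math

k1, k2, k3, k4 = 1.532e-4, -9.656e-8, 7.245e-11, 0.0    # original distortion (Table 1)

# Inverse coefficients from Equations (10)-(13), truncated at four terms
b1 = -k1
b2 = 3*k1**2 - k2
b3 = 8*k1*k2 - 12*k1**3 - k3
b4 = 55*k1**4 + 10*k1*k3 - 55*k1**2*k2 + 5*k2**2 - k4

def apply_radial(x, y, c):
    # x' = x (1 + c1 r^2 + c2 r^4 + ...), with the CoD at the origin
    r2 = x*x + y*y
    f = 1.0 + sum(ci * r2**(i + 1) for i, ci in enumerate(c))
    return x*f, y*f

p = (12.0, 8.0)                                          # point in mm in the camera frame
p1 = apply_radial(*p, (b1, b2, b3, b4))                  # inverse (apply) distortion
p2 = apply_radial(*p1, (k1, k2, k3, k4))                 # compensate with the original model
print(math.dist(p, p2))                                  # residual in mm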

The following results show the 2D distortion residual curve. For a set of points on the segment [0, maxX/2], the residuals are computed and shown on the Y-axis of Figure 4; the X-axis represents the distance from the CoD. These data come from the calibration process and are presented in Table 1.

Figure 4.


Inverse distortion residuals computed along the frame. (a) Inverse residual distortion with coefficients k1...k4, computed from 0 to maxX; (b) inverse residual distortion with coefficients k1...k9, computed from 0 to maxX.

The results shown in Figure 4 and Figure 5 are given in pixels.

Figure 5.


Inverse residual distortion with coefficients k1...k9, computed over the entire frame. (a) Axonometric view; (b) top view.

In Figure 4a, we present the residuals using only coefficients k1...k4 of the inverse distortion. The maximum residual is close to 4 pixels, but residuals remain below one pixel until close to the frame border. This can be useful with non-configurable software in which it is not possible to use more than four coefficients for radial distortion modeling.

In Figure 4b, we present the residuals computed from 0 to maxX using coefficients k1...k9 of the inverse distortion.

The results are very good, less than 0.07 pixel at the frame border along the X-axis, and the performance is nearly the same as for compensating the original distortion.

We can see that over almost all of the image the residuals are close to those presented in Figure 5. Nevertheless, we can observe in Figure 4 higher residuals in the corners, where the distance to the CoD is the greatest.

Here follows a brief analysis of the residuals:

These two experiments show that the results are fully acceptable, even if the residuals are higher in the zones furthest from the CoD, i.e., along the diagonal of the frame. As shown in Table 3, only 2.7% of the frame’s pixels have residuals > 1 pixel.

Table 3.

Residuals on the full frame format.

Pixels by Residual   Number of Pixels (Percentage)
Total sampled pixels   10,000 (100%)
Pixels with residual < 0.2   9344 (93.44%)
Pixels with residual < 1.0   9732 (97.32%)
Pixels with residual > 1.0   268 (2.68%)

4.3. Inverse Distortion Computation on an Image Done with a Metric Camera

This short experiment used an image taken with a Wild P32 metric camera in order to work on an image without distortion. The Wild P32 terrestrial camera is a photogrammetric camera designed for close-range photogrammetry, topography, architectural and other special photography and survey applications.

This camera is film-based: the film is pressed onto a glass plate fixed to the camera body, on which five fiducial marks are incised. The glass plate prevents any film deformation.

The film format is 65 mm × 80 mm and the focal length, fixed, is 64 mm. Designed for architectural survey the camera has a high eccentricity and the 5 fiducial marks were used in this paper to compute the CoD. In Figure 6a four fiducial marks are visible (the fifth is overexposed in the sky). The fiducial marks are organized as follows: one at the principal point (PP), three at 37.5 mm from the PP (left,right,top) and one at 17.5 mm (bottom).

Figure 6.


Distortion-less image taken using a small-format Wild P32 metric camera and application of an artificial distortion. (a) On the left, the original image taken with the Wild P32 metric camera; (b) on the right, a pincushion distortion applied to this original image without interpolation. As the images do not have the same pixel size, some vacant pixels are visible as black lines (see Hughes [32]). These lines surround the distortion center, here located on a fiducial mark strongly shifted from the image center.

This image was taken in 2000 in the remains of the Romanesque Aleyrac Priory, in northern Provence (France) [42]. Its semi-ruinous state gives a clear insight into the constructional details of its fine ashlar masonry as witnessed by this image taken using a Wild P32 during a photogrammetric survey.

As this image did not have any distortion, we used a polynomial distortion coming from another calibration and adapted it to the P32 film format (see Table 4). The initial values of the coefficients have been conserved, and the distortion polynomial, expressed in millimeters, is the compensation due at any point of the film format. The important eccentricity of the CoD is used in the image rectification: the CoD is positioned on the central fiducial mark visible on the images in Figure 6a,b.

Table 4.

Radial distortion compensation and then application of the inverse, used with the image taken with the P32 camera.

Coef.   Original   Inverse
k1   0.09532   −0.09532
k2   −9.656 × 10^-8   0.02725780376
k3   7.245 × 10^-11   −0.010392892306459602
k4   0.0   0.004540497555744342
k5   0.0   −0.0021482705738196948
k6   0.0   0.0010711249019932042
k7   0.0   −5.542464764540273 × 10^-4
k8   0.0   2.948490225469636 × 10^-4
k9   0.0   −1.6024842649677896 × 10^-4

After scanning the image (the film was scanned by Kodak and the resulting file is a 4860 × 3575 pixel image), we first measure the five fiducial marks in pixels on the scanned image and then compute an affine transformation to pass from the scanned image, in pixels, to the camera reference system, in millimeters, where the central cross is located at (0.0, 0.0). This is done according to a camera calibration provided by the vendor, which gives the coordinates of each fiducial mark in millimeters in the camera reference system. Table 5 shows the coordinates of the fiducial marks and highlights the high eccentricity of this camera built for architectural survey. This operation is called internal orientation in photogrammetry, and it is essential when using scanned images coming from film-based cameras. The results of these measurements are shown in Table 5.
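A sketch of this internal-orientation step is given below. The pixel measurements are illustrative placeholders (not the actual measurements), the millimeter coordinates follow the fiducial layout described above, and the affine model is simply estimated by least squares:

import numpy as np

pix = np.array([[2423.2,  174.3], [4625.1, 2377.5], [2423.2, 3405.6],
                [221.9, 2377.5], [2423.2, 2377.5]])   # measured marks in pixels (illustrative)
mm = np.array([[0.0, 37.5], [37.5, 0.0], [0.0, -17.5],
               [-37.5, 0.0], [0.0, 0.0]])             # calibrated marks in mm (up, right, down, left, center)

# Solve [x_mm, y_mm] = A [x_pix, y_pix, 1] in the least-squares sense
design = np.hstack([pix, np.ones((len(pix), 1))])
A, *_ = np.linalg.lstsq(design, mm, rcond=None)
print(A.T)    # 2 x 3 affine matrix: scale/rotation/shear and translation to millimeters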

Table 5.

Photograph taken with the Wild P32 camera: data and some results of the Internal Orientation.

Param.   X Value   Y Value
mm to pixel   58.885   58.885
Frame size in pixels   4860   3575
Frame size in mm   82.54   60.71
CoD, ppx ppy (pixels)   2423.212   2377.528
Fiducial mark, up (mm)   0.0   37.5
Fiducial mark, right (mm)   37.5   0.0
Fiducial mark, down (mm)   0.0   −17.5
Fiducial mark, left (mm)   −37.5   37.5
Fiducial mark, center (mm)   0.0   0.0

In Figure 6a we can see the original image taken in Aleyrac while in the Figure 6b we can see the result of the radial distortion inversion. Figure 7 shows the original image in grey and the image computed after a double inversion of the radial distortion model in green.

Figure 7.


Distortion compensation applied on the pincushion image obtained in Figure 6b. In green the image corrected by inverse distortion; in black and white the original image.

No difference is visible in the image. This is consistent with the previous results of the second experiment (see Figure 4).

5. Conclusions and Discussion

The experiments presented in this article show the relevance of the proposed methodology and the reliability of the results. However, a significant difference exists depending on whether the set of coefficients k1...k4 or k1...k9 is used: for large distortions, the number of parameters must be large enough (see Figure 4 and Figure 5 for the influence of the number of coefficients). We can note that, since Brown’s formulation, the number of coefficients used to characterize the distortion has increased. In 2015, Agisoft added k4 to their radial distortion model, while at the same time many software packages still use only k1 and k2, and in 2016 they added p3 and p4 to the tangential distortion model.

Even when k1...k4 are sufficient for compensating the radial distortion, it is however necessary to increase the degree of the polynomial to correctly compute the inverse.

5.1. A Bridge between PhotoModeler and Agisoft for Radial Distortion

One application of such a formula, which computes the inverse distortion coefficients as functions of k1...k4, is to convert distortion models between two software programs that do not apply the distortion model in the same direction, as for example PhotoModeler and PhotoScan from Agisoft. Indeed, PhotoModeler uses the Brown distortion model to compensate observations made on images and so obtain theoretical observations without distortion effects. On the contrary, PhotoScan from Agisoft uses a similar model but adds the distortion to a point projected onto the image. To convert a distortion model from PhotoModeler to PhotoScan, or vice versa, we need to compute the inverse distortion model. We also need to take into consideration the unit used to express the 2D point coordinates: in PhotoModeler the points are measured in millimeters and their range is limited to the camera frame, whereas in PhotoScan the points are normalized by the focal length.

To convert a distortion model from PhotoModeler to PhotoScan the following steps are necessary:

  1. Let k1...k3 be the coefficients of the polynomial modeling the radial distortion in PhotoModeler. Note that PhotoModeler uses only the coefficients k1...k3.

  2. k'1 = k1 · focal_mm^2

    k'2 = k2 · focal_mm^4

    k'3 = k3 · focal_mm^6

  3. Compute the inverse coefficients k1...k4 from k'1...k'3 (PhotoScan uses k4) according to Appendix C.

And to obtain the coefficients k1...k3 for PhotoModeler starting from the k1...k4 given by PhotoScan, we need to:

  1. Let k1...k4 be the coefficients of the polynomial modeling the radial distortion in PhotoScan.

  2. k'1 = k1 / focal_mm^2

    k'2 = k2 / focal_mm^4

    k'3 = k3 / focal_mm^6

    k'4 = k4 / focal_mm^8

  3. Compute the inverse coefficients k1...k3 from k'1...k'4 (PhotoModeler uses only up to k3) according to Appendix C.

The approach proposed in this article allows these new coefficients to be computed as functions of k1...k4; a sketch of the conversion is given below.
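The steps listed above can be summarized in a short sketch (the function and parameter names are ours; focal_mm is the focal length in millimeters, and only the radial coefficients are handled):

def photomodeler_to_photoscan(k1, k2, k3, focal_mm):
    # Step 2: rescale the mm-based PhotoModeler coefficients by powers of the focal length
    a1 = k1 * focal_mm**2
    a2 = k2 * focal_mm**4
    a3 = k3 * focal_mm**6
    a4 = 0.0                                  # PhotoModeler provides only k1...k3
    # Step 3: invert the polynomial with Equations (10)-(13); Appendix C gives higher orders
    b1 = -a1
    b2 = 3*a1**2 - a2
    b3 = 8*a1*a2 - 12*a1**3 - a3
    b4 = 55*a1**4 + 10*a1*a3 - 55*a1**2*a2 + 5*a2**2 - a4
    return b1, b2, b3, b4

The reverse conversion follows the second list symmetrically: divide by the powers of the focal length, then invert.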

5.2. Possible Limitations

Our results on the inverse residual distortion suggest a decrease with the order of approximation. In the future, it could be interesting to determine analytical bounds on the maximal residual distortion. Such a bound would depend on the distortion coefficients k1, k2, k3, k4, on maxX and on the order N. The crucial question would be to know whether this bound converges when N goes to infinity. This is absolutely not guaranteed, since the formula of Proposition 1 has been obtained by purely formal manipulations and the power series Q could be divergent (which would imply the divergence of the residual). In this case, one could however expect good behavior, similar to formal solutions of differential equations, whose approximations can be controlled for r < R up to a bound N depending on R [43] (this is also explained more briefly in [44], Section 3, page 103). At a given N, one could also expect this bound to decrease if the distortion given by P is decreased.

We think such technical questions may be of great interest, both from a mathematical perspective and from an applied one. Obtaining theoretical results on the inverse residual distortion might encourage the software community to add more coefficients to the polynomial models.

Acknowledgments

The authors wish to thank two postdoctoral researchers of the team: Motasem Nawaf, for his involvement in the iterative inverse distortion method, and Jean-Philip Royer, for his implementation of the inverse distortion in Python within the PhotoScan software, in order to import and export camera distortion to other photogrammetric software.

Appendix A

In Equation (8), we replace P and Q by their power series in order to identify the coefficients:

1 = \left(\sum_{m=0}^{+\infty} a_m r^{2m}\right)\left(\sum_{n=0}^{+\infty} b_n \big(r\,P(r)\big)^{2n}\right)

For fixed k, P(r)^k can be rewritten as a product:

P(r)^k = \left(\sum_{n_1=0}^{4} a_{n_1} r^{2n_1}\right) \cdots \left(\sum_{n_k=0}^{4} a_{n_k} r^{2n_k}\right)

which gives a more compact expression

P(r)^k = \sum_{m=0}^{4k} r^{2m} \sum_{\substack{n_1+\dots+n_k=m \\ 0 \le n_i \le 4}} a_{n_1} \cdots a_{n_k} = \sum_{m=0}^{4k} r^{2m}\, p(m,k)

Note that p(m,k)=0 as soon as m>4k. Then we obtain:

Q\big(r\,P(r)\big) = \sum_{n=0}^{+\infty} b_n r^{2n} \sum_{m=0}^{8n} r^{2m}\, p(m, 2n)
Q\big(r\,P(r)\big) = \sum_{k=0}^{+\infty} r^{2k} \underbrace{\sum_{\substack{m+n=k \\ n \ge 0,\; 0 \le m \le 8n}} b_n\, p(m, 2n)}_{t(k)}

Let us call t(k) the coefficient in the previous sum; t(k) will turn out to be q(k), defined in Proposition 1. Finally, we can express the initial equality P(r)\,Q(r\,P(r)) = 1:

\left(\sum_{k=0}^{+\infty} a_k r^{2k}\right)\left(\sum_{j=0}^{+\infty} r^{2j}\, t(j)\right) = 1
\sum_{l=0}^{+\infty} r^{2l} \sum_{k+j=l} a_k\, t(j) = 1

Identifying the coefficients of r^{2l}, we obtain the equality:

\sum_{k=0}^{l} a_k\, t(l-k) = 0 \quad \text{for } l \ge 1

We decompose this sum:

a_0\, t(l) + \sum_{k=1}^{l} a_k\, t(l-k) = 0

and since a_0 = 1 we conclude that t(l) satisfies the same recurrence relationship as q in Proposition 1. The two quantities are therefore equal, since they have the same initial value. We also have an alternative expression for q, given by the definition of t in terms of b_n and p(m,2n), which gives:

b_k\, p(0, 2k) + \sum_{\substack{m+n=k \\ 0 \le n \le k-1,\; 1 \le m \le 8n}} b_n\, p(m, 2n) = t(k)

By remarking that p(0,2k)=1 we obtain Proposition 1.

Appendix B

There are two ways of implementing the result of Proposition 1. One can be interested in having a symbolic representation of coefficients bn with respect to a1,a2,a3,a4. But one can also simply obtain numeric values for bn given numeric values for an.

The main ingredient in both cases is to compute efficiently the coefficients p(j,k). One can start from the trivial result that if n_1+\dots+n_k = j, then n_1+\dots+n_{k-1} = j - n_k. But n_k can take only 5 values, between 0 and 4 (there are no other coefficients a_n in our case). Therefore one can easily derive the recursive identity:

p(j,k) = p(j,k-1) + a_1\, p(j-1,k-1) + a_2\, p(j-2,k-1) + a_3\, p(j-3,k-1) + a_4\, p(j-4,k-1)

To compute b_n we need the coefficients p(j,2k) with the constraints j+k = n, 0 \le k and 1 \le j \le 8k. A table of size (n-1) × 2(n-1) is defined and its coefficients are progressively filled by varying k from 1 to 2(n-1), thanks to dynamic programming. This step is summarized in Algorithm B1.

Algorithm B1 Computation of p(j,k)
Require: coefficients a1, a2, a3, a4, integer N
 Define an array p of size N × (2N − 1)
for k = 0 : 2(N − 1) do
   p(0, k) = 1
end for
for k = 1 : 2(N − 1) do
    for j = 1 : N − 1 do
       p(j, k) = p(j, k − 1) + a1 p(j − 1, k − 1) + a2 p(j − 2, k − 1) + a3 p(j − 3, k − 1) + a4 p(j − 4, k − 1)
                           ▹ with p(j, k) = 0 as soon as j < 0 or k ≤ 0
   end for
end for
return p

The trickiest aspect is to manipulate formal terms in a1, a2, a3, a4. For that, it is useful to remark that the coefficients bn are made of terms a1^{n1} a2^{n2} a3^{n3} a4^{n4} such that n1 + 2n2 + 3n3 + 4n4 = n. We can therefore have an a priori bound on each exponent, n, n/2, n/3 and n/4 respectively. Given that, each multinomial term can be represented as a coefficient in a 4D array of bounded size. It is also very convenient to use a sparse representation for it, due to the many vanishing terms. Additions of terms are simply additions of 4D arrays of size bounded by n^4. Multiplication requires shifting operations along the dimensions of the array: basically, multiplying by a1^{n1} corresponds to a translation by n1 along the first dimension.

Algorithm B2 Computation of the coefficients bn
Require: coefficients a1, a2, a3, a4, integer N
 Define an array q = [1, 0, 0, 0]                    ▹ q(0) = 1
 Define an array b of size N
 b(0) = 1
 Compute p with Algorithm B1
for n = 1 : N do
   tmp = 0
   for k = 1 : 4 do
     tmp = tmp − ak q(k − 1)
   end for
   b(n) = b(n) + tmp
   for k = ⌈n/9⌉ : (n − 1) do
     b(n) = b(n) − b(k) p(n − k, 2k)
   end for
   q(1:3) = q(0:2) and q(0) = tmp                    ▹ shift the stored values of q
  end for
  return b

We summarize the computation (Algorithm B2).
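As an illustration, Algorithms B1 and B2 can be transcribed into a few lines of Python (a sketch with our own function name; it accepts either numeric values or sympy symbols for a1...a4):

def inverse_coefficients(a, N):
    # a = [a1, a2, a3, a4]; returns [b1, ..., bN] following Proposition 1
    a = [1] + list(a)                                  # a(0) = 1 by convention
    K = max(1, 2 * (N - 1))

    # Algorithm B1: p[j][k] = sum of a_{n1}*...*a_{nk} over n1+...+nk = j, 0 <= ni <= 4
    p = [[1 if j == 0 else 0 for _ in range(K + 1)] for j in range(N)]
    for k in range(1, K + 1):
        for j in range(1, N):
            p[j][k] = sum(a[i] * p[j - i][k - 1] for i in range(5) if j - i >= 0)

    # Algorithm B2: recursion of Proposition 1, with q(0) = 1 and b(0) = 1
    q, b = [1], [1] + [0] * N
    for n in range(1, N + 1):
        q.append(-sum(a[k] * q[n - k] for k in range(1, 5) if n - k >= 0))
        b[n] = q[n] - sum(b[k] * p[n - k][2 * k] for k in range(max(1, -(-n // 9)), n))
    return b[1:]

Called with the four numeric coefficients of Table 1 and N = 9, this sketch should reproduce the inverse values of Table 1; fed with sympy symbols and after expansion of the results, it should reproduce the formulas of Appendix C.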

Appendix C

Here are the formulas for the first nine coefficients b_n. Note that no coefficient k_n with n ≥ 5 appears in the b_n, since k_n = 0 for n ≥ 5.

b_1 = -k_1
b_2 = 3k_1^2 - k_2
b_3 = -12k_1^3 + 8k_1 k_2 - k_3
b_4 = 55k_1^4 - 55k_1^2 k_2 + 5k_2^2 + 10k_1 k_3 - k_4
b_5 = -273k_1^5 + 364k_1^3 k_2 - 78k_1 k_2^2 - 78k_1^2 k_3 + 12k_2 k_3 + 12k_1 k_4
b_6 = 1428k_1^6 - 2380k_1^4 k_2 + 840k_1^2 k_2^2 - 35k_2^3 + 560k_1^3 k_3 - 210k_1 k_2 k_3 + 7k_3^2 - 105k_1^2 k_4 + 14k_2 k_4
b_7 = -7752k_1^7 + 15504k_1^5 k_2 - 7752k_1^3 k_2^2 + 816k_1 k_2^3 - 3876k_1^4 k_3 + 2448k_1^2 k_2 k_3 - 136k_2^2 k_3 - 136k_1 k_3^2 + 816k_1^3 k_4 - 272k_1 k_2 k_4 + 16k_3 k_4
b_8 = 43263k_1^8 - 100947k_1^6 k_2 + 65835k_1^4 k_2^2 - 11970k_1^2 k_2^3 + 285k_2^4 + 26334k_1^5 k_3 - 23940k_1^3 k_2 k_3 + 3420k_1 k_2^2 k_3 + 1710k_1^2 k_3^2 - 171k_2 k_3^2 - 5985k_1^4 k_4 + 3420k_1^2 k_2 k_4 - 171k_2^2 k_4 - 342k_1 k_3 k_4 + 9k_4^2
b_9 = -246675k_1^9 + 657800k_1^7 k_2 - 531300k_1^5 k_2^2 + 141680k_1^3 k_2^3 - 8855k_1 k_2^4 - 177100k_1^6 k_3 + 212520k_1^4 k_2 k_3 - 53130k_1^2 k_2^2 k_3 + 1540k_2^3 k_3 - 17710k_1^3 k_3^2 + 4620k_1 k_2 k_3^2 - 70k_3^3 + 42504k_1^5 k_4 - 35420k_1^3 k_2 k_4 + 4620k_1 k_2^2 k_4 + 4620k_1^2 k_3 k_4 - 420k_2 k_3 k_4 - 210k_1 k_4^2

Author Contributions

Pierre Drap designed the research, implemented the inverse distortion method and analyzed the results. Julien Lefèvre proved the mathematical part. Both authors wrote the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  • 1.Arfaoui A. Geometric image rectification: A review of most commonly used calibration patterns. Int. J. Signal Image Process. Issues. 2015;1:1–8. [Google Scholar]
  • 2.Rahul S., Nayar S.K. Nonmetric calibration of Wide-Angle Lenses and Polycameras. IEEE Trans. Pattern Anal. Mach. Intell. 2000;8:1172–1178. [Google Scholar]
  • 3.Michael C., Wolfgang K., Jan S., Norbert H., Silvia N., Wallgrun J. Data Capture. In: Wolfgang K., Danko D.M., editors. Handbook of Geographic Information. Publishing House; Berlin, Germany: 2012. pp. 212–297. [Google Scholar]
  • 4.Ben T., Murray D.W. The impact of radial distortion on the self-calibration of rotating cameras. Comput. Vis. Image Underst. 2004;96:17–34. [Google Scholar]
  • 5.Conrady A.E. Decentred Lens-Systems. Mon. Not. R. Astron. Soc. 1919;79:384–390. [DOI] [Google Scholar]
  • 6.Brown D.C. Close-range camera calibration. Photom. Eng. 1971;37:855–866. [Google Scholar]
  • 7.Brown D.C. Decentering Distortion of Lenses. Photom. Eng. 1966;32:444–462. [Google Scholar]
  • 8.Santana-Cedrés D., Gómez L., Alemán-Flores M., Salgado A., Esclarín J., Mazorra L., Álvarez L. An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models. IPOL J. Image Process. Line. 2015;1 doi: 10.5201/ipol. [DOI] [Google Scholar]
  • 9.Tordoff B., Murray D.W. Violating rotating camera geometry: The effect of radial distortion on self-calibration; Proceedings of the 15th International Conference on Pattern Recognition; Barcelona, Spain. 3 September 2000; pp. 423–427. [Google Scholar]
  • 10.Dougherty G. Digital Image Processing for Medical Applications. Cambridge University Press; New York, NY, USA: 2009. [Google Scholar]
  • 11.De Villiers J.P., Wilhelm L.F., Geldenhuys R. Centi-pixel accurate real-time inverse distortion correction; Proceedings of the Optomechatronic Technologies, 7266 726611-1; San Diego, CA, USA. 17 November 2008. [Google Scholar]
  • 12.Mondrian P. Public Domain, 2015. [(accessed on 25 May 2016)]. Available online: http://www.aventdudomainepublic.org/mondrian.
  • 13.Papadaki A.I., Georgopoulos A. Development, comparison, and evaluation of software for radial distortion elimination; Proceedings of the SPIE 9528, Videometrics, Range Imaging, and Applications XIII; Munich, Germany. 21 June 2015; [DOI] [Google Scholar]
  • 14.Kasser M. Photogrammetrie et vision par ordinateur. XYZ. 2008;12:49–54. [Google Scholar]
  • 15.Eos Systems Inc. PhotoModeler Software. 2016. [(accessed on 29 May 2016)]. Available online: http://www.photomodeler.com/index.html.
  • 16.Agisoft PhotoScan Software. 2016. [(accessed on 29 May 2016)]. Available online: http://www.agisoft.com/
  • 17.OpenCV Toolbox. 2016. [(accessed on 29 May 2016)]. Available online: http://opencv.org//
  • 18.Fraser C.S. Photogrammetric Camera Component Calibration: A Review of Analytical Techniques. In: Gruen A., Huang T., editors. Calibration and Orientation of Cameras in Computer Vision. Springer Verlag; Berlin, Germany: 2001. pp. 95–119. [Google Scholar]
  • 19.Zhang Z. Camera Calibration. In: Medioni G., Kang S.B., editors. Emerging Topics in Computer Vision. Prentice Hall Technical References; Upper Saddle River, NJ, USA: 2004. pp. 1–37. [Google Scholar]
  • 20.Shortis M. Calibration Techniques for Accurate Measurements by Underwater Camera Systems. Sensors. 2015;15:30810–30826. doi: 10.3390/s151229831. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Hartley R.I. Camera Calibration Using Line Correspondences; Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition; Seattle, WA, USA. 21–23 June 1994; pp. 361–366. [Google Scholar]
  • 22.Hartley R.I. Projective reconstruction from line correspondences; Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’94); Seattle, WA, USA. 21–23 June 1994; pp. 903–907. [Google Scholar]
  • 23.Devernay F., Faugeras O.D. Straight lines have to be straight. Mach. Vis. Appl. 2001;13:14–24. doi: 10.1007/PL00013269. [DOI] [Google Scholar]
  • 24.Nomura Y., Sagara M., Naruse H., Ide A. Self-calibration of a general radially symmetric distortion model. IEEE Trans. Pattern Anal. Mach. Intell. 2002;14:1095–1099. doi: 10.1109/34.166624. [DOI] [Google Scholar]
  • 25.Claus D., Fitzgibbon A.W. A Plumbline Constraint for the Rational Function Lens Distortion Model; Proceedings of the British Machine Vision Conference; Oxford, UK. 15 September 2005; pp. 99–108. [Google Scholar]
  • 26.Tardif J.-P., Sturm P., Roy S. Self-calibration of a general radially symmetric distortion model; Proceedings of the 9th European Conference on Computer Vision; Graz, Austria. 7–13 May 2006. [Google Scholar]
  • 27.Rosten E., Loveland R. Camera distortion self-calibration using the plumb-line constraint and minimal Hough entropy. Mach. Vis. Appl. 2009;22:77–85. doi: 10.1007/s00138-009-0196-9. [DOI] [Google Scholar]
  • 28.Slama C.C. In: Manual of Photogrammetry 4 Edition. Slama C.C., Theurer C., Hendrikson S.W., editors. American Society of Photogrammetry; Falls Church, VA, USA: 1980. pp. 32–58. [Google Scholar]
  • 29.Atkinson K. In: Close Range Photogrammetry and Machine Vision. Atkinson F.K., Fryer J.G., editors. Whittles Publishing; Dunbeath, UK: 2001. [Google Scholar]
  • 30.Heikkilä J., Silven O. A Four-Step Camera Calibration Procedure with Implicit Image Correction. CVPR97. 1997;22:1106–1112. [Google Scholar]
  • 31.Basu A., Licardie S. Alternative models for fish-eye lenses. Pattern Recognit. Lett. 1995;16:433–441. doi: 10.1016/0167-8655(94)00115-J. [DOI] [Google Scholar]
  • 32.Hughes C., Glavin M., Jones E., Denny P. Review of geometric distortion compensation in fish-eye cameras; Proceedings of the Signals and Systems Conference; Galway, Irish. 18–19 June 2008; pp. 162–167. [Google Scholar]
  • 33.Hughes C., Glavin M., Jones E., Denny P. Accuracy of fish-eye lens models. Appl. Opt. 2010;49:3338–3347. doi: 10.1364/AO.49.003338. [DOI] [PubMed] [Google Scholar]
  • 34.De Villiers J.P., Nicolls F. Application of neural networks to inverse lens distortion modelling; Proceedings of the 21st Annual Symposium of the Pattern Recognition Society of South Africa (PRASA); Stellenbosch, South Africa. 22–23 November 2010; pp. 63–68. [Google Scholar]
  • 35.Mallon J., Whelan P.F. Precise Radial Un-distortion of Images; Proceedings of the 17th International Conference on Pattern Recognition (ICPR 04); Cambridge, UK. 23–26 August 2004. [Google Scholar]
  • 36.Heikkilä J. Geometric Camera Calibration Using Circular Control Points. IEEE Trans. Pattern Anal. Mach. Intell. 2000;22:1066–1077. doi: 10.1109/34.879788. [DOI] [Google Scholar]
  • 37.Wei G.-Q., Ma S. Implicit and explicit camera calibration: Theory and experiments. IEEE Trans. Pattern Anal. Mach. Intell. 2002;16:469–480. [Google Scholar]
  • 38.Jin H. Method and Apparatus for Removing General Lens Distortion From Images. 8,265,422. US Patent. 2014 Sep 23;
  • 39.Han D. Real-Time Digital Image Warping for Display Distortion Correction; Proceedings of the Second International Conference on Image Analysis and Recognition ICIAR’05; Toronto, ON, Canada. 2005; pp. 1258–1265. [Google Scholar]
  • 40.Abeles P. Inverse Radial Distortion Formula. Less Than Optimal. [(accessed on 29 May 2016)]. Available online: http://peterabeles.com/blog/?p=73.
  • 41.Alvarez L., Gómez L., Sendra J.R. An Algebraic Approach to Lens Distortion by Line Rectification. J. Math. Imaging Vis. 2009;35 doi: 10.1007/s10851-009-0153-2. [DOI] [Google Scholar]
  • 42.Drap P., Grussenmeter P., Hartmann-Virnich A. Photogrammetric stone-bystone survey and archeological knowledge: An application on the Romanesque Priory Church Notre-Dame d’ Aleyrac (Provence, France); Proceedings of the VAST2000 Euroconference; Prato, Italy. 24–25 November 2000; pp. 139–145. [Google Scholar]
  • 43.Ramis J.P. Les séries k-sommables et leurs applications, Complex Analysis, Microlocal Calculus and Relativistic Quantum Theory; Proceedings of the Colloquium Held at Les Houches, Centre de Physique; Berlin, Germany. 23 September 1979; pp. 178–199. [Google Scholar]
  • 44.Ramis J.P. Séries divergentes et procédés de resommation. [(accessed on 29 May 2016)]. Available online: http://www.math.polytechnique.fr/xups/xups91.pdf.

Articles from Sensors (Basel, Switzerland) are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)
