Abstract
An active solution method of the homography, derived from four laser lines, is proposed to recover the pavement cracks captured by the camera as real-dimension cracks in the pavement plane. The measurement system, including a camera and four laser projectors, captures the projected laser points on a 2D reference placed in different positions. The projected laser points are reconstructed in the camera coordinate system. Then, the laser lines are initialized and optimized from the projected laser points. Moreover, the plane-indicated Plücker matrices of the optimized laser lines are employed to model the projection points of the laser lines on the pavement. The image-pavement homography is actively determined by solving for the perpendicular feet of the projected laser points. The pavement cracks are recovered by the active solution of the homography in the experiments. The recovery accuracy of the active solution method is verified with a 2D dimension-known reference. The test case with a measurement distance of 700 mm and a relative angle of 8° achieves the smallest recovery error of 0.78 mm in the experimental investigations, which indicates the application potential of the method in vision-based pavement inspection.
Introduction
Vision-based inspection1–4 is broadly applied in the fields of mechanical part manufacture5,6, dimension measurement7,8, diagnostic equipment9–11 and Fourier profilometry12–14, etc. The crack on the pavement is one of the most important inspection objects in vision-based inspection. Although road surfaces with extreme roughness do not evidently cause serious traffic accidents, owing to the cautiousness of drivers15, previous studies indicate that a significant decrease in road capacity of about 30% is attributable to pavement distress16. Moreover, the capacity of a two-lane road is increased by 10–15% by a perfect driving surface17. Thus, the quantitative detection and evaluation of cracks are beneficial to extend the lifetime of the pavement18 and enhance the driving quality as well as the traffic safety19.
The inspections of the pavement cracks include three kinds of methods. The first kind of method is measurement on the basis of the linear array camera20. Mraz A.21 constructs a pavement imaging system with a line array camera at a preset height and a computer. An additional lighting system is designed for the inspection system to cope with different light conditions. The accuracy of the pavement imaging system is evaluated under different lighting conditions. The detection system named the automated pavement distress survey (APDS) is designed by Yao M.22 to achieve the automatic inspection of pavement cracks. The developed system consists of two line-scan cameras. The exposure settings of the two cameras are different to deal with different lighting conditions. The calibration method for the line-scan cameras and the image fusion approach are introduced in the study. The linear array camera has only one line of photosensitive elements. Therefore, it has the advantages of high resolution and high scanning frequency. However, in scanning applications, the linear array camera is fixed on the moving vehicle. As the vehicle speed and the lateral sliding are difficult to measure accurately, combining the scanned information of the pavement cracks remains a problem to solve.
The second kind of method is provided by the laser scanner23. Li Q.24 presents a pavement generation method with the triangulation of structured light. The method consists of filtering, edge detection, spline interpolation, and laser stripe location. The pavement surface is derived from the laser stripe on the pavement. The laser scanner reconstructs the pavement cracks from 3D information. Li L.25 proposes a bounding box-based technique in order to separate the captured cracks into appropriate types. The cracks are recognized by a seed fusion method and a pavement generation system. The bounding box is determined by road marks and wheel paths, by which the cracks are classified and measured. Li W.26 outlines a detection method for the pavement cracks on the basis of the empirical mode decomposition (EMD). The region-grow method and morphology are performed on the binary crack images. A deep-learning network, instead of the convolutional neural network, is employed by Zhang A.27 for pavement crack detection. The method eliminates the pooling layers to simplify the outputs of the former layers. The laser-scanner-based approaches reconstruct the pavement cracks from 3D information. Nevertheless, the laser scanner on the vehicle tends to be influenced by the weather and cannot provide the color information of the measured object. In addition, the laser scanner is much more expensive than cameras for normal applications.
The third kind of method captures the image with the planar array camera28. Tsai Y.29 evaluates image segmentation methods, including statistical thresholding, edge detection, multiscale wavelets, iterative clipping and dynamic optimization, for pavement crack sealing. Pavement images with diverse lighting conditions and cracks are provided to test the performance of the methods. The planar array camera is the measurement technology that is most widely used in vision-based cases. The planar array camera directly captures a 2D image in the test. Hence, it is an effective and economical choice for situations requiring moderate resolution. Nevertheless, in the pavement crack measurement, the images captured by the planar array camera are measured in image pixels.
The pavement cracks are dimensioned in millimeters. Thus, the cracks in the image plane should be transformed to the real cracks on the pavement plane. The pavement plane can be generated from a 2D dimension-known reference on the pavement and Zhang's method. The bridge from the image plane to the pavement plane is represented by a 2D-2D homography. However, the homography is not a constant matrix due to the relative motion between the camera fixed on the vehicle and the pavement. Thus, an active solution approach is proposed to determine the homography from the image to the pavement. The 2D-2D homography has 8 degrees of freedom, disregarding the overall scale. Moreover, a pair of corresponding points constrains 2 degrees of freedom of the homography. Therefore, 4 laser lines, which provide the minimum number of laser projection points on the pavement, are chosen in the active solution method.
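To make the counting argument concrete, the following is a minimal sketch (not the authors' implementation) of estimating a 2D-2D homography from four point correspondences with the standard direct linear transformation (DLT) and SVD; the numeric point values are hypothetical.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate H (up to scale), mapping src -> dst, from >= 4 point pairs via DLT and SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes 2 rows, i.e. 2 constraints on the 8 degrees of freedom.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)      # right singular vector of the smallest singular value

# Hypothetical image pixels and their pavement coordinates (mm)
image_pts = [(512, 300), (1500, 320), (1480, 1200), (530, 1180)]
pavement_pts = [(0, 0), (400, 0), (400, 300), (0, 300)]
H = homography_from_points(image_pts, pavement_pts)
print(H / H[2, 2])
```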
The rest of the paper consists of three parts. Section 2 constructs the geometrical model of the active solution of the homography. The laser points of the laser lines are derived from the projections on the 2D reference in different positions. Then, the 3D laser lines are initialized and optimized from the 3D laser points in the camera coordinate system. Finally, the laser lines are projected onto the pavement. The homography is determined by the laser projections on the pavement and the related image points. Section 3 performs the experiments to recover the pavement cracks with the active solution of the homography. The recovery accuracy is also estimated in the experiments. Section 4 provides the conclusion.
Geometrical Model of Homography
The measurement system, as illustrated in Fig. 1, consists of four laser projectors and a camera. The positions of the projectors are fixed relative to the camera. A planar target is employed as the reference of the calibrations for the camera and laser lines. The world coordinate system, the camera coordinate system and the image coordinate system are attached on the target, camera and image, respectively. Here, the camera coordinate system is considered as the global coordinate system.
Figure 1.
Calibration method of the laser lines with the intersection points in the camera coordinate system. The measurement system consists of four laser projectors and a camera. The positions of the projectors are fixed relative to the camera. A planar target is employed as the reference of the calibrations for the camera and laser lines.
The target is moved to different positions in the view field of the camera. Therefore, the laser lines intersect the target at laser points. The laser point in the world coordinate system is projected to the image and recovered by30
$s_{i,j}\,\tilde{m}_{i,j} = K\left[\,r_{i,1}\ \ r_{i,2}\ \ t_i\,\right]\tilde{M}^{w}_{i,j}$  (1)
where $\tilde{M}^{w}_{i,j} = [x_{i,j},\ y_{i,j},\ 1]^{T}$ is the homogeneous form of the x, y coordinates of the laser point in the world coordinate system. As the z coordinate is zero on the target plane, the laser point can be derived from $\tilde{M}^{w}_{i,j}$. i = 1, 2, …, n is the index of the target positions. j = 1, 2, 3, 4 is the index of the laser lines. K is the intrinsic parameter matrix of the camera. $s_{i,j}$ is a scale factor. Ri = [ri,1, ri,2, ri,3] and ti are the rotation matrix and translation vector from the world coordinate system to the global coordinate system. Zhang's method30 is chosen to calibrate K, Ri and ti. $\tilde{m}_{i,j}$ is the laser point in the image coordinate system.
In order to represent the laser points in the global coordinate system, the laser point is transformed to31
$M^{c}_{i,j} = R_i\,[x_{i,j},\ y_{i,j},\ 0]^{T} + t_i$  (2)
where $M^{c}_{i,j}$ is the laser point in the camera coordinate system.
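The following is a minimal numpy sketch of Eqs (1) and (2) under the notation above. The intrinsic matrix, pose and detected image point are hypothetical placeholders for the calibrated quantities.

```python
import numpy as np

def laser_point_in_camera_frame(K, R, t, m_px):
    """Recover the (x, y) target-plane coordinates of a laser point from its image
    coordinates through the z = 0 plane mapping K [r1 r2 t] (Eq. 1), then express
    the point in the camera coordinate system (Eq. 2)."""
    H_plane = K @ np.column_stack((R[:, 0], R[:, 1], t))   # image <- target-plane homography
    M_w = np.linalg.solve(H_plane, np.append(m_px, 1.0))   # (x, y, 1) up to scale
    M_w /= M_w[2]
    x, y = M_w[0], M_w[1]
    M_c = R @ np.array([x, y, 0.0]) + t                    # 3D point in the camera frame
    return np.array([x, y]), M_c

# Hypothetical calibration and laser-spot detection
K = np.array([[1200.0, 0.0, 1024.0], [0.0, 1200.0, 768.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 700.0])
xy_target, M_cam = laser_point_in_camera_frame(K, R, t, np.array([1100.0, 800.0]))
```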
In the set of the laser points in the camera coordinate system, two laser points $M^{c}_{a,j}$ and $M^{c}_{b,j}$, far away from each other on the j-th laser line, are chosen to initially define the laser line by
$d_j = \dfrac{M^{c}_{b,j} - M^{c}_{a,j}}{\left\| M^{c}_{b,j} - M^{c}_{a,j} \right\|},\qquad p_j = M^{c}_{a,j}$  (3)
Then the parameterized displacements from the other laser points in the set to the laser line are adopted to refine the laser line. The optimization function derived from the sum of the displacements is given by
$f(p_j, d_j) = \sum_{i=1}^{n} \left\| \left( M^{c}_{i,j} - p_j \right) - \left[ \left( M^{c}_{i,j} - p_j \right)^{T} d_j \right] d_j \right\|^{2}$  (4)
where $p_j$ (a point on the line) and $d_j$ (the unit direction) are the unknown parameters that define the laser line. The parameters are initialized by the results of Eq. (3) and are taken as the arguments that minimize the function.
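A minimal sketch of the initialization and refinement described by Eqs (3) and (4); the least-squares solver and the farthest-pair initialization are one possible realization, not necessarily the authors' exact implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_laser_line(points):
    """Fit a 3D line (point p, unit direction d) to the camera-frame laser points.
    Initialization (Eq. 3): the two mutually farthest points define the line.
    Refinement (Eq. 4): minimize the displacements from all points to the line."""
    P = np.asarray(points, dtype=float)
    # Initialization: the pair of points with the largest separation
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    a, b = np.unravel_index(np.argmax(D), D.shape)
    d0 = (P[b] - P[a]) / np.linalg.norm(P[b] - P[a])
    x0 = np.concatenate([P[a], d0])                         # parameters [p, d]

    def residuals(x):
        p, d = x[:3], x[3:] / np.linalg.norm(x[3:])
        v = P - p
        return (v - (v @ d)[:, None] * d[None, :]).ravel()  # components orthogonal to the line

    sol = least_squares(residuals, x0)
    p_opt = sol.x[:3]
    d_opt = sol.x[3:] / np.linalg.norm(sol.x[3:])
    return p_opt, d_opt
```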
Due to Eq. (4) and the Graßmann-Plücker relation32, the optimized laser line can be represented by the Plücker matrix
$L^{*}_{j} = \begin{bmatrix} -\left[ d_j \right]_{\times} & -m_j \\ m_j^{T} & 0 \end{bmatrix},\qquad m_j = p_j \times d_j$  (5)
where $L^{*}_{j}$ is the plane-indicated Plücker matrix of the optimized laser line, $m_j$ is the moment vector of the line and $[\cdot]_{\times}$ denotes the skew-symmetric cross-product matrix. A point lies on the laser line if and only if it is annihilated by $L^{*}_{j}$. The generation process of the optimized laser line is described in Fig. 2.
Figure 2.
The solution process of the optimized laser line that is derived from the intersection laser points.
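A short sketch of how the plane-indicated Plücker matrix of Eq. (5) can be assembled from the fitted line parameters and sanity-checked; the numeric line values are hypothetical.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def dual_plucker(p, d):
    """Plane-indicated (dual) Pluecker matrix of the line through point p with unit
    direction d. A homogeneous point X lies on the line iff L_star @ X == 0."""
    m = np.cross(p, d)                 # moment vector of the line
    L_star = np.zeros((4, 4))
    L_star[:3, :3] = -skew(d)
    L_star[:3, 3] = -m
    L_star[3, :3] = m
    return L_star

# Hypothetical fitted line: a point on it must be annihilated by L_star
p = np.array([10.0, -5.0, 700.0])
d = np.array([0.0, 0.2, 0.98]); d /= np.linalg.norm(d)
X = np.append(p + 3.0 * d, 1.0)
assert np.allclose(dual_plucker(p, d) @ X, 0.0, atol=1e-6)
```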
The four laser lines are generated from Eq. (5). The laser projectors and camera are both attached on the vehicle in the pavement test. In Fig. 3, the laser lines are projected on the pavement. The laser points are derived from the intersections between the laser lines and the q-th pavement plane, q = 1, 2, …, m. All the pavement intersection points are represented in the global coordinate system.
Figure 3.

The active solution model of the homography in the camera coordinate system. The laser lines are projected on the pavement. The laser points are derived from the intersections between the laser lines and the pavement plane. All the pavement intersection points are represented in the global coordinate system.
The intersection point $\tilde{M}_{q,j}$ (in homogeneous coordinates) is on the laser line and satisfies31
$L^{*}_{j}\,\tilde{M}_{q,j} = 0$  (6)
The laser point on the pavement also obeys the projection relationship of33
$s_q\,\tilde{m}_{q,j} = K\left[\,I\ \ 0\,\right]\tilde{M}_{q,j}$  (7)
where $\tilde{m}_{q,j}$ is the image mapping of the laser point on the pavement and $s_q$ is the scale factor.
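The following sketch shows one way to solve Eqs (6) and (7) jointly for the pavement laser point: the line constraint and the cross-product form of the projection equation are stacked into a homogeneous linear system and solved by SVD. Row weighting is left untreated, so this is a sketch rather than a numerically tuned implementation.

```python
import numpy as np

def pavement_laser_point(L_star, K, m_px):
    """Camera-frame 3D laser point that lies on the calibrated laser line
    (L_star @ X = 0, Eq. 6) and projects to the detected image point m_px
    ([m]_x K [I | 0] X = 0, Eq. 7)."""
    m_h = np.append(m_px, 1.0)
    m_cross = np.array([[0.0, -m_h[2], m_h[1]],
                        [m_h[2], 0.0, -m_h[0]],
                        [-m_h[1], m_h[0], 0.0]])
    P = np.hstack([K, np.zeros((3, 1))])      # projection matrix K [I | 0]
    A = np.vstack([L_star, m_cross @ P])      # 7 x 4 homogeneous constraints
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                                # least-squares null vector
    return X[:3] / X[3]
```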
The laser point on the pavement is generated from Eqs (6) and (7). Ideally, each laser point obeys the point-on-plane condition of the pavement plane31. However, in view of the practical non-coplanarity of the four laser points on the pavement, the integrated pavement plane is determined by the four laser points on the pavement as
$\begin{bmatrix} \tilde{M}_{q,1} & \tilde{M}_{q,2} & \tilde{M}_{q,3} & \tilde{M}_{q,4} \end{bmatrix}^{T}\Gamma_q = 0$  (8)
where Γq is the q-th integrated pavement plane.
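A minimal sketch of Eq. (8): the integrated plane is taken as the least-squares solution of the stacked point-on-plane conditions, obtained from the smallest right singular vector; centering the points first, as done here, is an implementation detail assumed for numerical conditioning.

```python
import numpy as np

def integrated_pavement_plane(points_3d):
    """Least-squares plane Gamma = (n, e), with Gamma . (X, 1) ~ 0, through the four
    (generally non-coplanar) pavement laser points (Eq. 8)."""
    P = np.asarray(points_3d, dtype=float)
    c = P.mean(axis=0)                       # centering improves conditioning
    _, _, Vt = np.linalg.svd(P - c)
    n = Vt[-1]                               # unit normal of the best-fit plane
    e = -n @ c                               # offset so that n . X + e = 0 on the plane
    return np.append(n, e)
```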
The pavement plane is derived from Eq. (8). Although the homography is determined by the relationship between the image plane and the pavement plane, there are not enough constraints to solve the homography from the two planes above. Hence, we propose a method to generate the homography from the perpendicular feet of the four laser points.
Considering the condition of the perpendicular feet on the integrated pavement plane31, the perpendicular foot $\tilde{F}_{q,j}$ satisfies
$\Gamma_{q}^{T}\,\tilde{F}_{q,j} = 0$  (9)
We construct the vector $F_{q,j} - M_{q,j}$ from the laser point $M_{q,j}$ to its perpendicular foot $F_{q,j}$. $A_{q}$ and $B_{q}$ are two points on the pavement plane that are different from $F_{q,j}$. The vector $F_{q,j} - M_{q,j}$ is orthogonal to the vectors in the pavement plane. Thus,
$\left( F_{q,j} - M_{q,j} \right)^{T}\left( A_{q} - F_{q,j} \right) = 0,\qquad \left( F_{q,j} - M_{q,j} \right)^{T}\left( B_{q} - F_{q,j} \right) = 0$  (10)
The perpendicular feet of the four laser points are solved by stacking Eqs (9) and (10). The homography Hq is then solved by32
$\tilde{f}_{q,j} \simeq H_q\,\tilde{m}_{q,j},\qquad j = 1, 2, 3, 4$  (11)
where $\tilde{f}_{q,j}$ is the homogeneous 2D coordinate of the perpendicular foot expressed in the pavement plane and $\simeq$ denotes equality up to scale.
The homography Hq is determined by the singular value decomposition (SVD) method34. The active solution process of the homography Hq that is generated from the four laser points is shown in Fig. 4.
Figure 4.
The active solution process of the homography on the basis of the four laser projection points.
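The sketch below shows how the perpendicular feet of Eqs (9) and (10) and the homography of Eq. (11) could be assembled. The closed-form foot-of-perpendicular formula and the choice of the in-plane coordinate frame (axes u, v with the first foot as origin) are assumptions made for the sketch rather than the authors' exact construction.

```python
import numpy as np

def perpendicular_foot(M, gamma):
    """Foot of the perpendicular from the 3D laser point M onto the plane
    gamma = (n, e) with unit normal n: F = M - (n . M + e) n."""
    n, e = gamma[:3], gamma[3]
    return M - (n @ M + e) * n

def image_to_pavement_homography(image_pts, feet, gamma):
    """Express the perpendicular feet in a 2D frame on the pavement plane and solve
    the image -> pavement homography by DLT and SVD (Eq. 11)."""
    n = gamma[:3]
    ref = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, ref); u /= np.linalg.norm(u)            # first in-plane axis
    v = np.cross(n, u)                                      # second in-plane axis
    origin = feet[0]
    pav_2d = [((F - origin) @ u, (F - origin) @ v) for F in feet]

    A = []
    for (x, y), (X, Y) in zip(image_pts, pav_2d):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)                             # maps image pixels to pavement mm
```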
Results
The experiments are performed with an Industrial Vision HT-U300C camera, which has an image resolution of 2048 × 1536 (3.0 megapixels). It is an industrial camera with a focal length range of 4 mm–12 mm and an aperture of F1.6. Each laser line is generated from a Class IIIa Product SYD1230 laser projector. The output power of the line-laser projectors is 20 mW and the peak wavelength is 650 nm. The 2D target is a 150 mm × 150 mm board that is covered by a checkerboard pattern of 10 mm × 10 mm squares. First, the camera is calibrated with the target board to obtain the intrinsic and extrinsic parameters of the camera. The laser projectors provide four laser lines on the target and generate four laser points. The positions of the four laser lines are solved from the intersection laser points on the target. Then, the laser lines are projected onto the pavement. The camera captures the images of the cracks and the laser projections. The homography is generated from the laser projections on the pavement. Finally, the pavement cracks are extracted in the image and transformed to the pavement plane by the active solution of the homography. The recovery results of the pavement cracks are shown in Fig. 5. Figure 5(a)–(d) and (i)–(l) are the pavement crack images. Figure 5(e–h) and (m–p) are the reconstructed pavement cracks. According to the active solution of the homography, the image coordinates of the pavement cracks are transformed to the real-dimension coordinates on the pavement plane.
Figure 5.
Recovery experiments of the pavement cracks that are achieved by the active solutions of the homographies. (I)–(IV), (IX)–(XII), (XVII)–(XX), (XXV)–(XXVIII) are the eight images of the different pavement cracks. (V)–(VIII), (XIII)–(XVI), (XXI)–(XXIV), (XXIX)–(XXXII) are the recovery results of the different pavement cracks.
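For completeness, a short sketch of the final recovery step: the extracted crack pixel coordinates are mapped to real-dimension pavement coordinates with the actively solved homography. The crack pixel list is hypothetical.

```python
import numpy as np

def recover_crack_coordinates(H, crack_pixels):
    """Map extracted crack pixel coordinates to pavement coordinates (mm) with the
    actively solved image -> pavement homography H."""
    pts = np.hstack([np.asarray(crack_pixels, dtype=float),
                     np.ones((len(crack_pixels), 1))])
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]     # dehomogenize to (X, Y) in mm

# Hypothetical crack pixels extracted from one image
crack_px = [(640, 512), (652, 530), (660, 545)]
# pavement_mm = recover_crack_coordinates(H, crack_px)
```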
In order to evaluate the accuracy of the active solution method, the recovery errors of the homography are verified by experiments. The 225 corners of the checkerboard pattern on the target board are extracted as the feature points. The homography matrix is solved by the proposed method. The image coordinates of the 225 feature points are transformed to the real-dimension pavement coordinates by the homography. The recovery accuracy is evaluated by the differences between the reconstructed real-dimension coordinates and the real coordinates of the feature points on the target. Two impact factors are considered in the experiments. One factor is the measurement distance between the target and the camera. The other factor is the relative angle between the optical axis of the camera and the normal vector of the target. In addition, the initialization method and the optimization method of the laser line are both used to calculate the recovery errors to verify the accuracy of the homography. The experimental results are shown in Fig. 6. EinX and EinY denote the X- and Y-direction errors between the coordinates of the recovered feature points and the coordinates of the true feature points in the initialization method. EopX and EopY denote the X- and Y-direction errors between the coordinates of the recovered feature points and the coordinates of the true feature points in the optimization method. Ein and Eop represent the combined errors between the recovered feature points and the true feature points by means of the initialization method and the optimization method. Furthermore, Fig. 7 shows the statistical means and maximums of the errors between the recovered feature points and the real feature points under different experimental conditions. In Fig. 7(a), the golden balls and the green balls represent the means of the errors between the recovered feature points and the real feature points in the optimization method and the initialization method, respectively. In Fig. 7(b), the golden balls and the green balls represent the maximums of the errors between the recovered feature points and the real feature points in the optimization method and the initialization method, respectively.
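A small sketch of how the error metrics above can be computed from the recovered and true feature-point coordinates; taking the combined error as the Euclidean distance of the X and Y errors is an assumption, since the exact combination is not stated.

```python
import numpy as np

def recovery_errors(recovered_mm, true_mm):
    """Per-point X/Y errors and combined errors between the recovered feature-point
    coordinates and the true coordinates on the target (all in mm)."""
    diff = np.asarray(recovered_mm, dtype=float) - np.asarray(true_mm, dtype=float)
    E_X, E_Y = diff[:, 0], diff[:, 1]
    E = np.hypot(E_X, E_Y)                  # combined error (assumed Euclidean)
    return E_X, E_Y, float(E.mean()), float(E.max())
```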
Figure 6.
Recovery errors of the active solutions of the homographies in the verification experiments. The subscript “in” indicates the initialization method. The subscript “op” indicates the optimization method. The subscripts “X” and “Y” indicate the errors along the X direction or Y direction. Ein and Eop are the combined errors of the initialization method and the optimization method. MD indicates the measurement distance, mm. RA indicates the relative angle, °. (a) MD = 500, RA = 4. (b) MD = 600, RA = 4. (c) MD = 700, RA = 4. (d) MD = 800, RA = 4. (e) MD = 500, RA = 8. (f) MD = 600, RA = 8. (g) MD = 700, RA = 8. (h) MD = 800, RA = 8. (i) MD = 500, RA = 12. (j) MD = 600, RA = 12. (k) MD = 700, RA = 12. (l) MD = 800, RA = 12. (m) MD = 500, RA = 16. (n) MD = 600, RA = 16. (o) MD = 700, RA = 16. (p) MD = 800, RA = 16.
Figure 7.
The statistical means and maximums of the recovery errors under the measurement distances of 500 mm, 600 mm, 700 mm, 800 mm, and the relative angles of 4°, 8°, 12°, 16°. (a) The statistical means of the optimization method and the initialization method. (b) The statistical maximums of the optimization method and the initialization method.
In Fig. 7, when the relative angle between the normal vector of the target and the optical axis of the camera is 4° and the measurement distances between the target and the camera are 500 mm, 600 mm, 700 mm and 800 mm, the means of the recovery errors are 2.37 mm, 1.94 mm, 1.22 mm and 1.88 mm in the optimization method. The corresponding maximums of the errors are 5.41 mm, 3.41 mm, 2.34 mm and 4.00 mm. Moreover, for the initialization method, the means of the errors are 2.45 mm, 2.11 mm, 1.59 mm and 2.50 mm. The corresponding maximums are 6.39 mm, 4.09 mm, 3.48 mm and 5.16 mm. The recovery errors derived from the optimization method are smaller than the errors from the initialization method. It can be observed that the recovery errors decrease evidently when the measurement distance increases from 500 mm to 700 mm. However, when the measurement distance is up to 800 mm, the recovery errors become larger. Thus, in the test results under the relative angle of 4°, the recovery errors are smaller than the others when the measurement distance is 700 mm. The test results correspond to Fig. 6(a–d). The recovery errors of Fig. 6(c) are more concentrated around zero than the errors of the others.
The second group of tests is achieved with the relative angle of 8° and the measurement distances of 500 mm, 600 mm, 700 mm and 800 mm. The means of the recovery errors are 2.03 mm, 1.86 mm, 0.79 mm and 1.51 mm in the optimization method in Fig. 7. The corresponding maximums are 5.18 mm, 2.94 mm, 1.52 mm and 2.87 mm in the optimization method. Furthermore, for the initialization method, the recovery errors are 2.29 mm, 2.00 mm, 1.47 mm and 2.25 mm. The related maximums are 6.37 mm, 4.07 mm, 3.98 mm and 4.00 mm. In this case, the conclusion can be reached that the errors of the recovery experiments decrease significantly when the measurement distance rises from 500 mm to 700 mm. The recovery errors then grow when the measurement distance is 800 mm. Figure 6(e–h) corresponds to this group of experiments. The recovery errors of Fig. 6(g) are closer to zero than the others. The tendencies of the recovery errors of the optimization method and the initialization method are the same. In addition, the green balls are obviously higher than the golden balls in Fig. 7. So the recovery errors from the optimization method are smaller than the errors from the initialization method.
When the relative angle is 12° and the measurement distances are 500 mm, 600 mm, 700 mm and 800 mm, the means of the recovery errors are 2.17 mm, 2.06 mm, 1.68 mm and 2.00 mm in the optimization method. The corresponding maximums are 4.98 mm, 3.07 mm, 2.72 mm and 3.02 mm. Besides, the means of the recovery errors are 2.77 mm, 2.72 mm, 1.98 mm and 2.83 mm in the initialization method. The corresponding maximums are 6.74 mm, 5.78 mm, 3.99 mm and 6.19 mm. The smallest errors in this test group are observed at the measurement distance of 700 mm. The recovery errors in Fig. 6(k) are closer to zero than the errors of the others in Fig. 6(i–l). The green balls are higher than the golden balls. Therefore, the recovery errors derived from the optimization method are smaller than the errors from the initialization method.
The last group of experiments is performed with the relative angle of 16° and the measurement distances of 500 mm, 600 mm, 700 mm and 800 mm. The means of the recovery errors are 2.64 mm, 2.39 mm, 1.84 mm and 2.30 mm in the optimization method in Fig. 7. The corresponding maximums are 4.41 mm, 5.00 mm, 3.47 mm and 4.36 mm. Then, the means of the recovery errors are 3.00 mm, 3.08 mm, 2.44 mm and 3.08 mm and the corresponding maximums are 6.59 mm, 6.70 mm, 6.11 mm and 6.43 mm in the initialization method. The smallest errors in this group are observed at the measurement distance of 700 mm, and the errors are larger than those of the 12° group. The small errors from the optimization method are also observed in Fig. 7. The error reductions of the optimization method relative to the initialization method are described in Table 1, under the measurement distances of 500 mm, 600 mm, 700 mm, 800 mm, and the measurement angles of 4°, 8°, 12°, 16°. The average error reduction of the optimization method is 20.33%.
Table 1.
The error reductions of the optimization method relative to the initialization method, under the measurement distances of 500 mm, 600 mm, 700 mm, 800 mm, and the measurement angles of 4°, 8°, 12°, 16°.
| Distance (mm) | Angle (°) | Initialization error (mm) | Optimization error (mm) | Error reduction (mm) |
|---|---|---|---|---|
| 500 | 4 | 2.45 | 2.37 | 0.08 |
| 500 | 8 | 2.29 | 2.03 | 0.26 |
| 500 | 12 | 2.77 | 2.17 | 0.60 |
| 500 | 16 | 3.00 | 2.64 | 0.36 |
| 600 | 4 | 2.11 | 1.94 | 0.17 |
| 600 | 8 | 2.00 | 1.86 | 0.14 |
| 600 | 12 | 2.72 | 2.06 | 0.66 |
| 600 | 16 | 3.08 | 2.39 | 0.69 |
| 700 | 4 | 1.59 | 1.22 | 0.37 |
| 700 | 8 | 1.47 | 0.79 | 0.68 |
| 700 | 12 | 1.98 | 1.68 | 0.30 |
| 700 | 16 | 2.44 | 1.84 | 0.60 |
| 800 | 4 | 2.50 | 1.88 | 0.62 |
| 800 | 8 | 2.25 | 1.51 | 0.74 |
| 800 | 12 | 2.83 | 2.00 | 0.83 |
| 800 | 16 | 3.08 | 2.30 | 0.78 |
Discussion
In the test, the relative angle between the normal vector of the target and the optical axis of the camera increases from 4° to 16° with an interval of 4°. The means of the recovery errors are 1.85 mm, 1.55 mm, 1.98 mm and 2.29 mm in the optimization method and 2.16 mm, 2.00 mm, 2.57 mm and 2.90 mm in the initialization method. Hence, the recovery errors decrease when the relative angle increases from 4° to 8°. The recovery errors show a trend of steady growth when the relative angle increases from 8° to 16°. Moreover, when the angle is 12°, the errors are greater than those at 4° but less than those at 16°. Because the optical axis of the camera is virtual, it is impossible to obtain the true relative angle between the normal vector of the target and the optical axis of the camera. In general, the small relative angles contribute better results than the large angles. Furthermore, the measurement distance between the camera and the measured object is also a factor that affects the errors of the recovery experiments. As the measurement distance increases from 500 mm to 800 mm, the means of the recovery errors are 2.31 mm, 2.06 mm, 1.38 mm and 1.92 mm in the optimization method and 2.62 mm, 2.48 mm, 1.87 mm and 2.66 mm in the initialization method. So the errors of the recovery experiments are minimal when the distance is 700 mm. The errors of the recovery experiments obviously decrease when the distance increases from 500 mm to 700 mm. When the measurement distance is 800 mm, the recovery errors are larger than the errors at the distance of 700 mm, but in the optimization method they are slightly smaller than those at 600 mm. In summary, the experimental results show that the recovered values are closest to the real values when the measurement distance is 700 mm and the relative angle is 8°. The recovery errors of the optimization method are less than the errors of the initialization method in the verification experiments. Therefore, the optimization of the four laser lines reduces the experimental errors effectively. As the homography between the image plane of the camera and the base plane of the measured object plays an important role in various inspections, the active solution method of the homography for the pavement crack recovery with four laser lines can be widely extended to the measurements of mechanical parts, electronic devices, architecture, etc.
Summary
An active solution method of the homography is presented to recover the pavement cracks. The homography is generated from the pavement projections of four laser lines. The measurement distance between the camera and the target as well as the relative angle between the normal vector of the target and the optical axis of the camera are considered as the two impact factors on the recovery errors in the experiments. The global mean of the recovery errors of the initialization method is 2.41 mm and the global mean of the recovery errors of the optimization method is 1.91 mm. The experimental results show that the active solution method of the homography is a valid and accurate approach in the research field of vision-based pavement measurement. Furthermore, for other vision-based inspections, it is also important to generate the homography from the image plane to the dimensioned base plane. Therefore, the active solution method of the homography for the pavement crack recovery with four laser lines has the potential to be applied to the measurements of mechanical parts, electronic devices, architecture, etc. In future work, an enhancement method to reduce the influence of sunlight should be investigated for further applications.
Data availability
The datasets generated during the current study are available from the corresponding author on reasonable request.
Acknowledgements
This work was funded by National Natural Science Foundation of China under Grant Nos 51478204, 51205164, and Natural Science Foundation of Jilin Province under Grant Nos 20170101214JC, 20150101027JC.
Author Contributions
G.X. contributed the idea, G.X., F.C. and X.T.L. provided the data analysis, writing and editing of the manuscript, G.W.W. and F.C. contributed the program and experiments, G.X. and F.C. prepared the figures. All authors contributed to the discussions.
Competing Interests
The authors declare no competing interests.
Footnotes
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Frollo I, Krafčík A, Andris P, Přibil J, Dermek T. Circular samples as objects for magnetic resonance imaging - mathematical simulation, experimental results. Meas. Sci. Rev. 2015;15:313–318. doi: 10.1515/msr-2015-0042.
- 2. Phromsuwan U, Sirisathitkul Y, Sirisathitkul C, Muneesawang P, Uyyanonvara B. Quantitative analysis of X-ray lithographic pores by SEM image processing. Mapan-J. Metrol. Soc. I. 2013;28:327–333.
- 3. Glowacz A, Glowacz Z. Diagnosis of the three-phase induction motor using thermal imaging. Infrared Phys. Techn. 2017;81:7–16. doi: 10.1016/j.infrared.2016.12.003.
- 4. Murawski K. New vision sensor to measure gas pressure. Meas. Sci. Rev. 2015;15:132–138.
- 5. Ren Z, Liao J, Cai L. Three-dimensional measurement of small mechanical parts under a complicated background based on stereo vision. Appl. Optics. 2010;49:1789–1801. doi: 10.1364/AO.49.001789.
- 6. Groot P, Biegen J, Clark J, Lega XC, Grigg D. Optical interferometry for measurement of the geometric dimensions of industrial parts. Appl. Optics. 2002;41:3853–3860. doi: 10.1364/AO.41.003853.
- 7. Bell T, Vlahov B, Allebach JP, Zhang S. Three-dimensional range geometry compression via phase encoding. Appl. Optics. 2017;56:9285–9292. doi: 10.1364/AO.56.009285.
- 8. Chen S, Wu C, Tie G, Zhai D. Stitching test of large flats by using two orthogonally arranged wavefront interferometers. Appl. Optics. 2017;56:9193–9198. doi: 10.1364/AO.56.009193.
- 9. Elhaddad MT, Tao YK. Automated stereo vision instrument tracking for intraoperative OCT guided anterior segment ophthalmic surgical maneuvers. Biomed. Opt. Express. 2015;6:3014–3031. doi: 10.1364/BOE.6.003014.
- 10. Liu X, Balicki MR, Taylor H, Kang JU. Towards automatic calibration of Fourier-domain OCT for robot-assisted vitreoretinal surgery. Opt. Express. 2010;18:24331–24343. doi: 10.1364/OE.18.024331.
- 11. Glowacz A, Glowacz W, Glowacz Z, Kozik J. Early fault diagnosis of bearing and stator faults of the single-phase induction motor using acoustic signals. Measurement. 2018;113:1–9. doi: 10.1016/j.measurement.2017.08.036.
- 12. Li B, An Y, Zhang S. Single-shot absolute 3D shape measurement with Fourier transform profilometry. Appl. Opt. 2016;55:5219–5225. doi: 10.1364/AO.55.005219.
- 13. Yun H, Li B, Zhang S. Pixel-by-pixel absolute three-dimensional shape measurement with modified Fourier transform profilometry. Appl. Opt. 2017;56:1472–1480. doi: 10.1364/AO.56.001472.
- 14. Tian G, et al. Green decoration materials selection under interior environment characteristics: a grey-correlation based hybrid MCDM method. Renew. Sust. Energ. Rev. 2018;81:682–692. doi: 10.1016/j.rser.2017.08.050.
- 15. Li Y, Liu C, Ding L. Impact of pavement conditions on crash severity. Accident Anal. Prev. 2013;59:399–406. doi: 10.1016/j.aap.2013.06.028.
- 16. Chandra S. Effect of road roughness on capacity of two-lane roads. J. Transp. Eng. 2004;130:360–364. doi: 10.1061/(ASCE)0733-947X(2004)130:3(360).
- 17. Ben-Edigbe J, Ferguson N. Extent of capacity loss resulting from pavement distress. Transport. 2005;158:27–32.
- 18. Vilaça JL, Fonseca JC, Pinho ACM, Freitas E. 3D surface profile equipment for the characterization of the pavement texture-TexScan. Mechatronics. 2010;20:674–685. doi: 10.1016/j.mechatronics.2010.07.008.
- 19. Ouyang W, Xu B. Pavement cracking measurements using 3D laser-scan images. Meas. Sci. Technol. 2013;24:105204. doi: 10.1088/0957-0233/24/10/105204.
- 20. Huang Y, Xu B. Automatic inspection of pavement cracking distress. J. Electron. Imaging. 2006;15:013017. doi: 10.1117/1.2177650.
- 21. Mraz A, Gunaratne M, Nazef A, Choubane B. Experimental evaluation of a pavement imaging system: Florida department of transportation's multipurpose survey vehicle. Transport. Res. Rec. 2006;1974:97–106. doi: 10.3141/1974-14.
- 22. Yao M, Zhao Z, Yao X, Xu B. Fusing complementary images for pavement cracking measurements. Meas. Sci. Technol. 2015;26:025005. doi: 10.1088/0957-0233/26/2/025005.
- 23. Monti M. Large-area laser scanner with holographic detector optics for real-time recognition of cracks in road surfaces. Opt. Eng. 1995;34:2017–2023. doi: 10.1117/12.204793.
- 24. Li Q, Yao M, Yao X, Xu B. Real-time 3D scanning system for pavement distortion inspection. Meas. Sci. Technol. 2010;21:015702. doi: 10.1088/0957-0233/21/1/015702.
- 25. Li L, Wang K. Bounding box-based technique for pavement crack classification and measurement using 1 mm 3D laser data. J. Comput. Civil Eng. 2016;30:04016011. doi: 10.1061/(ASCE)CP.1943-5487.0000568.
- 26. Li W, Ju H, Tighe SL, Ren QQ, Sun ZY. Three-dimensional pavement crack detection algorithm based on two-dimensional empirical mode decomposition. J. Transp. Eng. 2017;143:04017005. doi: 10.1061/JTEPBS.0000024.
- 27. Zhang A, et al. Automated pixel-level pavement crack detection on 3D asphalt surfaces using a deep-learning network. Comput.-Aided Civ. Inf. 2017;32:805–819. doi: 10.1111/mice.12297.
- 28. Shang Y, Yu Q, Zhang X. Analytical method for camera calibration from a single image with four coplanar control lines. Appl. Optics. 2004;43:5364–5369. doi: 10.1364/AO.43.005364.
- 29. Tsai YC, Kaul V, Mersereau RM. Critical assessment of pavement distress segmentation methods. J. Transp. Eng. 2010;136:11–19. doi: 10.1061/(ASCE)TE.1943-5436.0000051.
- 30. Zhang Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. 2000;22:1330–1334. doi: 10.1109/34.888718.
- 31. Hartley, R. & Zisserman, A. Multiple view geometry in computer vision. (Cambridge University, 2003).
- 32. Faugeras, O. D., Luong, Q. T. & Papadopoulo, T. The geometry of multiple images: the laws that govern the formation of multiple images of a scene and some of their applications. (MIT, 2004).
- 33. Abdel-Aziz, Y. I. & Karara, H. M. Direct linear transformation into object space coordinates in close-range photogrammetry. Proceedings of the Symposium on Close-Range Photogrammetry, Falls Church, USA, 1–18 (1971).
- 34. Nocedal, J. & Wright, S. Numerical optimization. (Springer, 2006).