Abstract
Accurate and high-throughput determination of plant morphological traits is essential for phenotyping studies. Many approaches are now available to acquire high-quality three-dimensional (3D) point clouds of plants. However, it remains difficult to estimate phenotyping parameters accurately from these point clouds across the whole growth cycle of maize plants. In this paper, an accurate skeleton extraction approach is proposed to bridge the gap between 3D point clouds and phenotyping trait estimation of maize plants. The algorithm first uses point cloud clustering and color-difference denoising to reduce the noise of the input point clouds. Next, the Laplacian contraction algorithm is applied to shrink the points. Key points representing the skeleton of the plant are then selected through adaptive sampling, and neighboring points are connected to form a plant skeleton composed of semantic organs. Finally, skeleton points that deviate from the input point cloud are calibrated by stepping a local cutting plane forward along the tangent direction of the original points. The proposed approach successfully extracts accurate skeletons from 3D point clouds and enables high-precision estimation of phenotyping parameters of maize plants. Experimental verification of the skeleton extraction process, tested on three cultivars at different growth stages, demonstrates that the extracted skeleton matches the input point cloud well. Compared with morphological parameters derived from 3D digitizing data, the NRMSE of leaf length, leaf inclination angle, leaf top length, leaf azimuthal angle, leaf growth height, and plant height estimated from the extracted plant skeleton are 5.27, 8.37, 5.12, 4.42, 1.53, and 0.83%, respectively, which meets the needs of phenotyping analysis. The time required to process a single maize plant is below 100 s. The proposed approach may play an important role in further maize research and applications, such as genotype-to-phenotype studies, geometric reconstruction, functional-structural maize modeling, and dynamic growth animation.
Keywords: maize plant, 3D point cloud, Laplacian, skeleton extraction, phenotyping
Introduction
Plant phenomics has gained increasing attention in recent years because phenotyping remains a bottleneck that limits genetic gain in breeding programs (Araus et al., 2018). Accurate and high-throughput measurement of high-dimensional plant morphology across plant development is an essential component of plant phenotyping (Houle et al., 2010). Traditional phenotyping technologies are usually time consuming, low throughput, and labor intensive, and lag far behind the development of genomics, although efforts have been made to improve phenotyping efficiency (Yang et al., 2013; Zhang et al., 2017). Substantial changes and improvements in crop phenotyping technologies will be required over the long term (Tester and Langridge, 2010).
Due to the complexity of plant morphometrics, many kinds of sensors are needed to acquire the morphological data of plants, including RGB cameras (Chaivivatrakul et al., 2014), depth cameras (Hu et al., 2018), LiDAR (Jimenez-Berni et al., 2018), and multispectral sensors (Potgieter et al., 2017), and thousands of images and point clouds are generated. Automatic data processing algorithms and software therefore play an important role in converting these huge amounts of raw data into clearly defined phenotyping traits, and further into knowledge (Tardieu et al., 2017). However, there is no uniform solution for all kinds of plants, because plant morphology differs widely among species. Therefore, specialized phenotyping algorithms have to be developed for plants with particular architectures.
Maize is one of the most widely grown crops worldwide, and its structure is relatively simple, with one stem and several leaves. Its levels of phenotypic and genetic variation have attracted considerable attention for phenotyping algorithm development as a model plant (Yan et al., 2011). Recent developments in unmanned aerial vehicles (UAVs) have provided research opportunities for assessing plot- and field-scale phenotyping traits (Zaman-Allah et al., 2015; Yang et al., 2017). Although canopy-scale traits, including the normalized difference vegetation index (NDVI), leaf area index (LAI), nitrogen stress, and plant height, can be derived using multispectral images and LiDAR, more detailed phenotyping traits cannot be derived, owing to the spatial resolution of the raw data and the occlusion of nearby plants (Pfeiffer et al., 2018). To meet the needs of ideal plant type genotype breeding (Zhang et al., 2017), more detailed traits at the plant or organ scale, such as leaf length, leaf area, and leaf curvature, have to be obtained. Thus, high-throughput phenotyping platforms (Yang et al., 2014; Cabrera-Bosquet et al., 2016; Guo et al., 2016) in controlled environments were set up to acquire the morphological data of individual plants. Consequently, automatic data processing algorithms have to be developed to deal with these raw data.
Most morphological data of individual plants exist as two-dimensional (2D) images or three-dimensional (3D) point clouds (Vazquez-Arellano et al., 2016). The first step is to segment these data into independent organs with semantics. Ideally, each organ is then reconstructed into a fine 3D mesh (Pound et al., 2014; Yin et al., 2016; Gibbs et al., 2017; Thapa et al., 2018), from which all the morphological phenotyping traits, such as leaf length, leaf width, and accurate leaf area, can be derived. However, it is hard to obtain the complete morphology of an individual plant in most cases. Therefore, an alternative way to parse these organ traits is curve skeleton extraction from 3D point clouds (Cornea et al., 2007).
Skeleton extraction from 2D images is relatively easy using image thinning algorithms. However, phenotyping traits such as leaf length and leaf azimuth then have to be calibrated because one dimension of information is missing (Klukas et al., 2014; Brichet et al., 2017; Zhang et al., 2017). 3D skeleton extraction of plants with branched structures is much more sophisticated, because additional information has to be considered to determine the skeleton curve. The most notable curve skeleton extraction algorithms for incomplete point clouds are the rotational symmetry axis (ROSA) method (Tagliasacchi et al., 2009) and the L1-medial skeleton of point clouds (Huang et al., 2013). These methods are feasible for point clouds of cylindrical structures but are not suitable for narrow, long plant leaves, because deviations arise between the extracted curve and the leaf veins. Point clouds of leafless trees have a cylindrical branched structure and are therefore suitable for such skeleton extraction (Livny et al., 2010; Delagrange et al., 2014; Wang et al., 2014). Laplacian skeleton extraction from point clouds (Au et al., 2008) is another widely used method, which has been applied successfully to skeleton extraction without the need for mesh reconstruction (Su et al., 2011). Curves extracted directly from point clouds of long and narrow leaves using these methods are unsatisfactory and cannot meet the needs of accurate phenotyping trait analysis (Zhang et al., 2017). Meanwhile, an increasing number of maize plant morphological data acquisition devices (Chaudhury et al., 2017; Gibbs et al., 2018), methods (Hui et al., 2018), and platforms (Cabrera-Bosquet et al., 2016) are producing huge amounts of high-quality point clouds. Therefore, developing automatic skeleton extraction from point clouds of maize leaves and plants is urgent for phenomic study.
In this study, we applied and improved the Laplacian skeleton extraction method on point clouds of maize plants to achieve the following three goals: (1) extract the skeleton curves of maize leaves with less deviation from the stem and veins; (2) improve the accuracy of phenotyping traits of maize plants, such as leaf length and leaf azimuth, which need to be calibrated in image-based methods; and (3) make the method applicable to point clouds of maize plants at all growth stages.
Materials
The experiment was conducted on the farm of the Beijing Academy of Agricultural and Forestry Sciences (39.94 N, 116.28 E), where maize plants were planted in the field at a density of 6 plants/m2 (the row distance is 60 cm). Three cultivars, ZhengDan958 (ZD958), JingKe968 (JK968), and XiangYu335 (XY335), were selected randomly between the jointing and silking stages, with two sample plants per cultivar. The plants were excavated with 20 cm of roots and soil into pots and moved indoors without any morphological damage, to avoid noise generated by wind and other external factors. Many approaches are available for acquiring 3D point cloud data of maize plants, including multi-view stereo (MVS) 3D reconstruction, 3D scanning, and 3D synthesis from 2D LiDAR. A terrestrial laser scanner, Faro Focus3D X130, was used here to acquire the point clouds of the selected plants indoors. The scan resolution was 0.035° horizontally and vertically, with a maximum distance of 130 m (Johnson and Liscio, 2015; Seidel et al., 2015). The scanner has an integrated camera, so color information was acquired simultaneously with the point clouds. To increase scanning efficiency, six plants were placed in two rows of three plants each and scanned together, as shown in Figure 1. To ensure the completeness of the point cloud of each plant, four scanning stations were arranged sequentially around the target area (Figure 1B), yielding four stacks of scanned data from different azimuths. We registered the four stacks under the same coordinate system and merged them into a complete point cloud using the supporting software SCENE provided by FARO. Outlier points, including room walls and floors, were also removed in SCENE. Each station scan takes 5 min. The point cloud contains the 3D coordinates and color information of all the vertices.
Methods
The pipeline presented here involves five steps: (1) point cloud denoising; (2) point cloud contraction using the Laplacian; (3) adaptive sampling; (4) skeleton point connection; and (5) skeleton curve correction. The overall workflow is described in Figure 2.
Point Cloud Denoising
In the 3D scanning process, even slight air movement, such as that caused by a person moving around the plants, may lead to overlaps and offsets in the resultant point clouds. In addition, the point cloud of the pot used to hold the maize plant needs to be detected and deleted. Therefore, the point cloud has to be pre-processed before further operations. The obtained point clouds were dense, and the number of points describing a grown maize plant could reach 50 thousand. To improve the efficiency of the subsequent algorithms, a uniform simplification algorithm (Han et al., 2015) was applied to reduce the density of the dense point clouds. Empirical results showed that when the point cloud of each maize plant was reduced to about 10 thousand points, the accuracy of the algorithm was not noticeably affected and remained satisfactory.
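The uniform simplification used here follows Han et al. (2015); as a minimal illustration of the downsampling step (not the authors' implementation), the sketch below thins a cloud with a simple voxel-grid average in Python, where the function name and voxel size are illustrative assumptions:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Reduce point density by keeping one representative point per voxel.

    points: (N, 3) array of XYZ coordinates.
    Returns the centroid of the points falling in each occupied voxel.
    """
    # Integer voxel index of every point
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel and average them
    uniq, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)          # flatten for compatibility across numpy versions
    sums = np.zeros((len(uniq), 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Example: thin a dense cloud to roughly 10 thousand points by tuning voxel_size
# cloud = np.loadtxt("maize_plant.xyz")[:, :3]   # hypothetical file
# thinned = voxel_downsample(cloud, voxel_size=0.01)
```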
Pot Points Detection
Pots are commonly used in maize plant phenotyping platforms, for example for potted plants in greenhouses and for plants transplanted from the field, and they play an important role in holding the plant stable. To detect and delete the points of the pots, a color-difference denoising method (Dun et al., 2011) was used to filter out the pot points. First, RGB color values of the pot, picked manually under the stable lighting conditions of the data acquisition site, are sampled to form a noise color list. Then, the color difference D between each point in the noise color list and each point of the point cloud P is calculated using Eqs. (1) and (2). If the color difference D is less than a threshold (here 0.1), the point is considered a noise point and is removed.
$\mu = \dfrac{r + g + b}{3}, \qquad \vec{v} = (r - \mu,\; g - \mu,\; b - \mu)$  (1)

$D(p_i, p_j) = \left\lVert \vec{v}_i - \vec{v}_j \right\rVert$  (2)

where $p_i(r_i, g_i, b_i)$ and $p_j(r_j, g_j, b_j)$ are two points; r, g, and b are the color components of each point; $\mu$ is their mean; and $\vec{v}_i$ and $\vec{v}_j$ are the corresponding brightness-centered color vectors. Figure 3 illustrates the point clouds of the detected pots.
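A minimal sketch of this pot-color filtering, under our reading of Eqs. (1) and (2) (brightness-centered RGB vectors compared by Euclidean distance, with colors assumed to be scaled to [0, 1]), could look as follows; all names and the nested-loop structure are illustrative rather than the authors' implementation:

```python
import numpy as np

def color_difference(c1, c2):
    """Difference between two RGB colors (assumed scaled to [0, 1]) after
    removing the brightness component, following Eqs. (1)-(2)."""
    v1 = c1 - c1.mean()
    v2 = c2 - c2.mean()
    return np.linalg.norm(v1 - v2)

def remove_pot_points(points, colors, pot_colors, threshold=0.1):
    """Drop every point whose color is close to any sampled pot color."""
    keep = np.ones(len(points), dtype=bool)
    for i, c in enumerate(colors):
        for noise_c in pot_colors:
            if color_difference(c, noise_c) < threshold:
                keep[i] = False
                break
    return points[keep], colors[keep]
```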
Plant Denoising
A near-propagation clustering algorithm, inspired by the k-nearest-neighbor clustering algorithm (Connor and Kumar, 2010), was proposed to filter the scanning noise of the plants. The algorithm consists of five steps (Figure 4); a minimal code sketch of the procedure is given after the list.
(1) Randomly select a point p in the point cloud, and identify the Np neighboring points within a distance r around p. Then mark the point p and its Np neighbors as visited. Here r = 0.012l, where l is the longest diagonal of the point cloud bounding box.

(2) Store the point p and its Np neighbors in a cluster list T, and store the Np neighbors in a search list M.

(3) Traverse the list M. For each point m in M, find its neighboring points within the range r. For each point pm in m's neighborhood, if pm has not been visited, save it to the cluster list T as well as to the search list M. Once m has been processed, delete it from the search list M. The process continues until the list M is empty, and an empty index (the index −1 in cluster list T in Figure 4) is then inserted to delimit the current cluster in T.

(4) The nearest-neighbor clustering operations described above are applied iteratively until all the points in the cloud are marked as visited.
(5) The clusters are sorted according to their number of points, and clusters with fewer points than a threshold are considered noise and are deleted. In practice, the threshold is computed from the local point density, $\mathrm{threshold} = \frac{1}{num}\sum_{i=1}^{num} \mathrm{density}(p_i, r)$, where density(p, r) is the local density of the point cloud around p within radius r (Wu et al., 2017). To improve efficiency, the parameter num was set to 10, a value determined through numerical experiments, and these num points are randomly selected from the point cloud. Figure 3 shows the point clouds after plant denoising and the corresponding noise.
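The near-propagation clustering of steps (1)–(5) can be sketched as a radius-based region growing. The snippet below uses a k-d tree for the radius queries and a plain `min_points` argument in place of the density-based threshold described above; parameter values and names are illustrative assumptions:

```python
import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def near_propagation_clusters(points, r):
    """Group points into connected clusters: two points belong to the same
    cluster if they can be linked by hops of length <= r (steps 1-4)."""
    tree = cKDTree(points)
    visited = np.zeros(len(points), dtype=bool)
    clusters = []
    for seed in range(len(points)):
        if visited[seed]:
            continue
        cluster, queue = [], deque([seed])
        visited[seed] = True
        while queue:                         # propagate through the search list M
            p = queue.popleft()
            cluster.append(p)
            for q in tree.query_ball_point(points[p], r):
                if not visited[q]:
                    visited[q] = True
                    queue.append(q)
        clusters.append(cluster)
    return clusters

def drop_small_clusters(points, clusters, min_points):
    """Step 5: keep only clusters whose size reaches the noise threshold."""
    keep = [i for c in clusters if len(c) >= min_points for i in c]
    return points[keep]
```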
Laplacian Point Cloud Contraction
The point cloud of a maize plant is iteratively contracted using the classical constrained Laplacian operator (Cao et al., 2010). It is represented as follows:
$\begin{bmatrix} W_L L \\ W_H \end{bmatrix} P^{t+1} = \begin{bmatrix} 0 \\ W_H P^{t} \end{bmatrix}$  (3)
Here P is the point cloud being processed; L is the cotangent-weighted Laplacian matrix constructed from the Delaunay neighborhood of each point; and $W_L$ and $W_H$ are diagonal weight matrices, where $W_L$ controls the strength of contraction and $W_H$ controls how strongly points are attracted to their original positions. This ensures that the points in the contraction equation move along the estimated normal directions.
The iterative shrinkage process can be described as follows: Eq. 3 is solved to derive the new point cloud $P^{t+1}$. Then $W_L$ and $W_H$ are updated using Eq. 4, where $S_i^t$ and $S_i^0$ are the current and the initial neighborhood lengths of the contraction point $p_i$, respectively. Next, the Laplacian matrix $L^{t+1}$ is rebuilt from the new point cloud $P^{t+1}$. The iteration terminates when the average neighborhood-length ratio $S_i^t / S_i^0$ drops below 0.01 or when the number of iterations exceeds 20. Generally, after a finite number of iterations, the point cloud P shrinks to the shape of the skeleton. As shown in Figure 5, the maize plant point cloud has shrunk to a good skeleton shape after four iterations of contraction.
$W_L^{t+1} = s_L\, W_L^{t}, \qquad W_{H,i}^{t+1} = W_{H,i}^{0}\, \dfrac{S_i^{0}}{S_i^{t}}$  (4)

where $s_L$ is a constant scaling factor that strengthens the contraction at each iteration (Cao et al., 2010).
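For illustration, one contraction step of Eq. (3) can be sketched in Python with sparse matrices as below. For simplicity the sketch builds a uniform-weight k-nearest-neighbor Laplacian instead of the cotangent-weighted Delaunay Laplacian used in the paper, and the weight update of Eq. (4) is left to the caller; `w_l` (a scalar) and `w_h` (a per-point array) are illustrative parameters:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr
from scipy.spatial import cKDTree

def knn_laplacian(points, k=10):
    """Uniform-weight graph Laplacian over k nearest neighbors (rows sum to zero).
    The paper uses cotangent weights on Delaunay neighborhoods instead."""
    n = len(points)
    _, nbrs = cKDTree(points).query(points, k=k + 1)
    rows = np.repeat(np.arange(n), k)
    cols = nbrs[:, 1:].ravel()                       # drop the self-neighbor
    W = sp.coo_matrix((np.ones(n * k), (rows, cols)), shape=(n, n))
    W = (W + W.T) * 0.5                              # symmetrize
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())
    return (W - D).tocsr()

def contract_once(points, w_l, w_h, k=10):
    """One contraction step: solve [w_l*L; diag(w_h)] P' = [0; diag(w_h)*P]
    in the least-squares sense, per coordinate."""
    n = len(points)
    L = knn_laplacian(points, k)
    A = sp.vstack([w_l * L, sp.diags(w_h)]).tocsr()
    new_pts = np.empty_like(points, dtype=float)
    for dim in range(3):
        b = np.concatenate([np.zeros(n), w_h * points[:, dim]])
        new_pts[:, dim] = lsqr(A, b)[0]
    return new_pts
```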
Adaptive Sampling
The Laplacian contraction above shrinks the point cloud of a maize plant into a skeleton-like shape. However, the number of points remains the same even though the shape changes. It is therefore necessary to adaptively sample the cloud and identify the key points of the skeleton. In order to maintain the geometric characteristics of the branching structure, sampling spheres of different radii were used at the intersections and along the branches (leaves and stem): the radius of the sampling sphere is smaller at the intersections than along the branches. To determine whether a point v is located at an intersection, the directionality degree l(v) of point v is introduced to describe the linear trend at the current point (Su et al., 2011). This directionality degree l(v) is obtained from the 3 × 3 covariance matrix C of the points within the sampling sphere (Eq. 5), as follows:
$C = \sum_{i=1}^{k} (v_i - \bar{v})(v_i - \bar{v})^{T}$  (5)

$C\, V_l = \lambda_l V_l, \quad l = 0, 1, 2$  (6)

$l(v) = \dfrac{\lambda_2}{\lambda_0 + \lambda_1 + \lambda_2}$  (7)

where $v_1, \ldots, v_k$ are the local k-neighborhood of p, and $\bar{v}$ is the centroid of the neighbors of p, namely $\bar{v} = \frac{1}{k}\sum_{i=1}^{k} v_i$. $V_l$ is an eigenvector of the matrix C and $\lambda_l$ the corresponding eigenvalue, with the convention λ0 ≤ λ1 ≤ λ2. The closer l(v) is to 1, the smaller λ0 and λ1 are compared to λ2, and hence the more the points around v are aligned along a branch.
Experiments show that l(v) exceeds 0.9 along the branches, whereas it drops below this value at the intersections. After the adaptive sampling step, the contracted point cloud is reduced to the skeleton key points, and the resultant skeleton has a sparse set of points. As shown in Figure 6, the sampling result after adaptive sampling is accurate and matches the original point cloud well.
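A compact sketch of the directionality degree of Eqs. (5)–(7) and of one possible greedy adaptive sampling pass is given below; the neighborhood size, the sampling radii, the visiting order, and the handling of the 0.9 threshold are illustrative assumptions rather than the exact procedure of the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def directionality_degree(points, k=15):
    """l(v) from Eqs. (5)-(7): ratio of the largest eigenvalue of the local
    covariance matrix to the sum of all three eigenvalues."""
    _, nbrs = cKDTree(points).query(points, k=k + 1)
    l_v = np.empty(len(points))
    for i, idx in enumerate(nbrs):
        nb = points[idx[1:]]
        C = np.cov((nb - nb.mean(axis=0)).T)          # 3x3 covariance matrix (Eq. 5)
        eigvals = np.sort(np.linalg.eigvalsh(C))      # lambda0 <= lambda1 <= lambda2
        l_v[i] = eigvals[2] / max(eigvals.sum(), 1e-12)
    return l_v

def adaptive_sample(points, l_v, r_branch, r_joint, joint_threshold=0.9):
    """Greedy sampling: keep a point if no previously kept point lies within
    its sampling sphere; the sphere is smaller near intersections."""
    kept = []
    for i in np.argsort(-l_v):                        # visit the most linear points first
        r = r_branch if l_v[i] > joint_threshold else r_joint
        if all(np.linalg.norm(points[i] - points[j]) > r for j in kept):
            kept.append(i)
    return points[kept]
```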
Skeleton Point Connection
To estimate the phenotyping traits of a maize plant from the extracted skeleton curve, the key points have to be connected with semantic meaning. Every key point in the extracted skeleton has at most three neighboring points, since the maize plant has a single-stem branching structure. Therefore, if each point is connected to its three nearest neighboring points, the skeleton ends up as an undirected graph. However, as shown in Figures 7A, 8A, two types of erroneous closed connection loops may appear: (1) triangular closed loops (Figure 7B) and (2) polygonal closed loops. The skeleton of a maize plant has the characteristics that nearby points are approximately coplanar and converge in the direction of growth. Therefore, we construct a constraint model of the morphological structure of the maize plant and apply the connection weight equations to break the closed loops:
$W(e) = \widehat{W}_s(e) + \widehat{W}_c(e), \qquad \widehat{W}_x(e) = \dfrac{W_x(e) - \min_{e'} W_x(e')}{\max_{e'} W_x(e') - \min_{e'} W_x(e')}$  (8)

$a x + b y + c z + d = 0$  (9)

$d(p_i, \alpha) = \dfrac{\lvert a x_i + b y_i + c z_i + d \rvert}{\sqrt{a^2 + b^2 + c^2}}$  (10)

$W_s(e_{ij}) = d(p_i, \alpha) + d(p_j, \alpha)$  (11)

$W_c(e_{ij}) = \theta(e_{ij}, e_{\mathrm{prev}}) + \theta(e_{ij}, e_{\mathrm{next}})$  (12)
Here $W_s$ and $W_c$ are the approximate-coplanarity weight and the growth weight of an edge, respectively. For the coplanarity weight (shown in Figure 7C), the least squares algorithm (Zeng et al., 2012) is used to fit a plane α to the closed-loop nodes: the distance from each point to the plane (Eq. 10) is minimized when solving for the fitting plane, where a, b, c, and d are the parameters of the plane equation (Eq. 9). The coplanarity weight of an edge is the sum of the distances from its two endpoints to the fitted plane (Eq. 11). To determine the growth weight (shown in Figure 7D), the angles θ between the edge and its two adjacent edges in the direction of growth are calculated and summed (Eq. 12). To compare the two weights on an equal footing, they are rescaled and normalized (Eq. 8). Finally, the weight of each connecting edge is obtained, and the closed loop is broken by disconnecting the edge with the maximum weight (shown in Figure 7D).
After the closed-loop search and processing explained above, the skeleton of a maize plant connected as a tree structure is obtained, as shown in Figure 8B. The plant skeleton is then easily segmented into leaf and stem skeleton nodes at the points with three connections in the skeleton. The segmented result is visualized in Figure 8C.
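The two loop-breaking weights can be sketched as follows: a least-squares (SVD) plane fit gives the coplanarity weight Ws, and the angles to the two adjacent edges give the growth weight Wc. The normalization of Eq. (8) is omitted here, and all function names are illustrative:

```python
import numpy as np

def fit_plane(loop_points):
    """Least-squares plane through the loop nodes; returns (normal, d) for
    the plane equation normal . x + d = 0."""
    centroid = loop_points.mean(axis=0)
    _, _, vt = np.linalg.svd(loop_points - centroid)
    normal = vt[-1]                              # direction of least variance
    return normal, -normal.dot(centroid)

def coplanar_weight(p1, p2, loop_points):
    """Ws: sum of distances from the two edge endpoints to the fitted plane (Eq. 11)."""
    normal, d = fit_plane(loop_points)
    return abs(normal.dot(p1) + d) + abs(normal.dot(p2) + d)

def growth_weight(prev_pt, p1, p2, next_pt):
    """Wc: deviation of the edge direction from its two neighboring edges (Eq. 12)."""
    def angle(u, v):
        cosang = u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
        return np.arccos(np.clip(cosang, -1.0, 1.0))
    e = p2 - p1
    return angle(p1 - prev_pt, e) + angle(e, next_pt - p2)

# The loop is broken at the edge whose (normalized) Ws + Wc is largest.
```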
Skeleton Point Calibration
The skeleton of a maize plant extracted by the above procedure contains many curved sections, especially near the stem, where the skeleton ought to be straight. This severely affects the subsequent estimation of phenotyping traits. The remaining problems of the skeleton (Tagliasacchi et al., 2009; Cao et al., 2010) that must be corrected here fall into two categories: (1) stem curving, as shown in Figure 9A; and (2) excessive deviation from the leaf vein (Figure 9B) together with missing tips at the ends of the leaves (Figure 9C). Therefore, a resampling strategy is adopted to calibrate the skeletons.
Stem Skeleton Calibration
The bent points of the extracted stem skeleton are generated by the attraction of the adjacent leaves during the Laplacian contraction. However, this only affects nearby points, and the stem nodes that are away from the leaves are not affected. Therefore, the stem skeleton is calibrated using the unaffected stem points. Based on the segmentation of stem and leaves in Section "Skeleton Point Connection," a leaf growth point is defined as the intersection point of a leaf with the adjacent stem. The leaf growth points divide the entire stem into several sections. For each stem section, the previously extracted stem points, excluding the leaf growth points at both ends of the section, are used to fit a straight line segment by least squares. Finally, all the extracted skeleton points on the current stem section are projected onto the newly fitted line segment and are thus calibrated, as shown in Figure 10A.
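A minimal sketch of the stem-section calibration is given below: a straight 3D line is fitted to the reliable stem points of one section by a principal-component (least-squares) fit, and the extracted skeleton points of that section are projected onto it. Selecting the "unaffected" stem points between two leaf growth points is assumed to be done beforehand:

```python
import numpy as np

def calibrate_stem_section(section_points, skeleton_points):
    """Fit a straight 3D line to the reliable stem points of one section and
    project the extracted skeleton points of that section onto it."""
    centroid = section_points.mean(axis=0)
    _, _, vt = np.linalg.svd(section_points - centroid)
    direction = vt[0]                          # principal direction of the section
    # Orthogonal projection of every skeleton point onto the fitted line
    t = (skeleton_points - centroid) @ direction
    return centroid + np.outer(t, direction)
```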
Leaf Skeleton Calibration
To calibrate the skeleton points that deviate from the vein (Figure 10B), especially in the curved arc usually located between the highest position on the blade and its base, we resample and calibrate all the deviated skeleton points starting from the blade base and moving in the tangential direction. First, the tangent line through the adjacent vein skeleton points p1 and p2 is constructed. Then a cutting plane α (Figure 10C), perpendicular to this tangent and passing through p2, intersects the original point cloud of the leaf and yields an intersection point set (Figure 10D). The centroid of this set is the calibrated position of p2 and lies on or near the vein curve. The problem of excessive contraction at the leaf tip is solved in the same way, except that the cutting plane is advanced by a fixed length interval and the most suitable point on each plane is identified as a new vein point. As shown in Figure 10E, we continue to move the cutting plane forward until the intersection point set is empty, at which point the vein skeleton, including the leaf tip, is calibrated.
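The cutting-plane resampling can be sketched as below: each deviated vein point is replaced by the centroid of the original leaf points lying in a thin slab around the plane perpendicular to the local tangent, and the plane is stepped forward until the slab is empty to recover the missing tip. The slab thickness and step length are illustrative parameters, not values from the paper:

```python
import numpy as np

def slice_centroid(origin, tangent, leaf_cloud, slab=0.005):
    """Centroid of the leaf points lying in a thin slab around the cutting plane
    through `origin` perpendicular to `tangent`; None if the slab is empty."""
    dist = (leaf_cloud - origin) @ tangent
    pts = leaf_cloud[np.abs(dist) < slab]
    return pts.mean(axis=0) if len(pts) else None

def calibrate_vein(points, leaf_cloud, step=0.01, slab=0.005):
    """Walk along the extracted vein skeleton (base to tip), replacing each point
    by the slab centroid, then step the plane forward to recover the missing tip."""
    calibrated = [points[0]]
    for p1, p2 in zip(points[:-1], points[1:]):
        tangent = (p2 - p1) / np.linalg.norm(p2 - p1)
        c = slice_centroid(p2, tangent, leaf_cloud, slab)
        calibrated.append(c if c is not None else p2)
    # extend beyond the last point until the cutting plane no longer hits the leaf
    p1, p2 = calibrated[-2], calibrated[-1]
    while True:
        tangent = (p2 - p1) / np.linalg.norm(p2 - p1)
        c = slice_centroid(p2 + tangent * step, tangent, leaf_cloud, slab)
        if c is None:
            break
        calibrated.append(c)
        p1, p2 = p2, c
    return np.asarray(calibrated)
```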
Results
The algorithm was implemented using the OpenGL graphics library and the Point Cloud Library (PCL; Rusu and Cousins, 2011) on the VC++ 2010 development platform. The algorithm was integrated into the software PlantCAD_Maize, which is part of our plant modeling software series PlantCAD (Lu et al., 2014). The algorithm was run on a desktop workstation with a Core i5 processor and 4 GB of memory.
Visualization Results
Maize plants of different cultivars and growth stages were selected to evaluate the performance of the algorithm. Figure 11 shows five visualization results processed by the approach, covering three growth stages and three different cultivars of maize plants. Figure 12 shows two corresponding matching visualizations from different viewing angles. The match between the extracted skeletons and the original point clouds demonstrates that the proposed algorithm has good performance and adaptability, and that the approach is feasible for plants of different sizes and plant types.
Accuracy Analysis of Phenotypic Traits Using the Extracted Skeleton
The extracted skeleton of a maize plant contains the segmentation of organs and the semantic meaning of each organ, which allows rapid estimation of phenotyping parameters, including plant height and volume at the plant scale, and leaf length, leaf inclination angle, leaf azimuth, and further traits at the organ scale. The detailed morphological parameters are explained in Figure 13 (Zhang et al., 2017). In particular, the leaf azimuthal angles are calculated as the difference between the target leaf and the lowest leaf, with the azimuth of the lowest leaf taken as 0.
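For illustration, several of these traits can be read directly off the connected skeleton, for example leaf length as the length of the leaf polyline, leaf inclination angle from the direction of the leaf base, and plant height from the calibrated stem. The exact trait definitions follow Figure 13 and Zhang et al. (2017); the vertical reference axis and the number of base points used below are assumptions:

```python
import numpy as np

def polyline_length(points):
    """Length of a skeleton curve given its ordered points (e.g. leaf length)."""
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

def leaf_inclination_angle(growth_point, leaf_points, n_base=3):
    """Angle (degrees) between the leaf base direction and the vertical axis;
    n_base controls how far along the leaf the base direction is measured."""
    base_dir = leaf_points[n_base] - growth_point
    vertical = np.array([0.0, 0.0, 1.0])
    cosang = base_dir @ vertical / (np.linalg.norm(base_dir) + 1e-12)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def plant_height(stem_points):
    """Vertical extent of the calibrated stem skeleton."""
    return stem_points[:, 2].max() - stem_points[:, 2].min()
```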
To verify the accuracy of the skeleton extraction algorithm, 3D digitizing data of the corresponding maize plants were acquired using a 3D digitizer (Wen et al., 2017). The equipment used here was a FastRak (Polhemus, United States; Everhart et al., 2009). The acquired data do not need any further processing because each obtained point carries semantic information (Sinoquet et al., 1998). Therefore, the skeleton obtained with the 3D digitizer can be regarded as the real skeleton if the influence of human error is ignored, and these 3D digitizing data were used as the ground truth for evaluating the algorithm. Figure 14 compares six estimated phenotyping parameters (leaf length, leaf inclination angle, leaf top length, leaf azimuthal angle, height of leaf position, and plant height) between the skeletons extracted from point clouds and the 3D digitizing data of the same six plants used in our experiment. The average normalized root-mean-square error (NRMSE) of plant height is the smallest, at 0.83%, and that of leaf inclination angle is the largest, at 8.37%. The R2 values of all phenotypic parameters are above 0.93, indicating a high consistency between the traits estimated from the extracted skeleton and the measured data. In particular, the plant height can be calculated directly from the point cloud for most of the plants. However, some of the plants were leaning, and the plant height calculated in this way was not accurate for them. Thus, the plant skeletonization steps, which include the calibration of stem orientation, yield a more reliable plant height estimate.
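The agreement measures reported above can be computed as in the following sketch; note that normalizing the RMSE by the range of the measured values is our assumption, since the normalization used in the paper is not stated explicitly:

```python
import numpy as np

def nrmse(estimated, measured):
    """Normalized RMSE (%) of skeleton-derived traits against reference data."""
    estimated, measured = np.asarray(estimated, float), np.asarray(measured, float)
    rmse = np.sqrt(np.mean((estimated - measured) ** 2))
    return 100.0 * rmse / (measured.max() - measured.min())

def r_squared(estimated, measured):
    """Coefficient of determination between estimated and measured traits."""
    estimated, measured = np.asarray(estimated, float), np.asarray(measured, float)
    ss_res = np.sum((measured - estimated) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```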
We also compared the proposed algorithm with the previous constrained Laplacian smoothing (CLS) point cloud contraction algorithm (Su et al., 2011). Plants of the three cultivars were divided into upper and lower layers at the ear leaf, and five phenotyping parameters were estimated for the two layers. Table 1 gives the average estimation errors of the five phenotyping parameters obtained using the two methods. The error of the proposed approach is roughly half that of the CLS algorithm, and the errors remain relatively low across the different maize cultivars.
Table 1. NRMSE (%) of five phenotyping parameters estimated with the proposed approach (Our) and the CLS algorithm (Su et al., 2011) for the upper and lower layers of three maize cultivars.
Cultivar | Layer | Leaf length (Our) | Leaf length (CLS) | Leaf inclination angle (Our) | Leaf inclination angle (CLS) | Leaf top length (Our) | Leaf top length (CLS) | Leaf azimuthal angle (Our) | Leaf azimuthal angle (CLS) | Height of leaf position (Our) | Height of leaf position (CLS)
---|---|---|---|---|---|---|---|---|---|---|---
JK968 | Up | 5.28 | 11.20 | 6.70 | 13.00 | 6.64 | 13.64 | 5.36 | 8.60 | 1.03 | 1.47
JK968 | Down | 4.14 | 7.09 | 8.85 | 16.94 | 5.28 | 8.16 | 3.27 | 6.25 | 2.96 | 4.95
ZD958 | Up | 5.84 | 10.45 | 12.19 | 25.78 | 5.13 | 7.26 | 2.91 | 4.90 | 1.79 | 4.94
ZD958 | Down | 5.28 | 7.92 | 7.19 | 12.78 | 6.15 | 12.49 | 5.44 | 9.07 | 4.43 | 7.73
XY335 | Up | 5.12 | 10.71 | 12.64 | 23.21 | 5.68 | 14.22 | 5.07 | 7.84 | 0.85 | 1.75
XY335 | Down | 3.12 | 5.15 | 8.10 | 12.72 | 4.53 | 6.79 | 4.69 | 7.40 | 1.73 | 2.94
Average | | 5.27 | 9.89 | 8.37 | 15.83 | 5.12 | 9.60 | 4.42 | 6.91 | 1.53 | 2.77
Efficiency
Owing to the high accuracy of the 3D scanner used, high-density point clouds of maize plants are obtained. Taking a silking-stage maize plant as an example, its point cloud contains 140 thousand points, as shown in Figure 15. With the proposed approach, the skeleton can be extracted with different numbers of sampled points as the input. From Figures 15B–M it can be observed that the shape and structure of the skeleton still approximate the plant well when the number of sample points is kept at about 10 thousand. Therefore, we selected 10 thousand points as the optimal input size for a single maize plant in this approach.
In practice, the most time-consuming part of the algorithm is the k-nearest-neighbor computation on the input point cloud. Therefore, the computational efficiency can be improved by reducing the number of points as much as possible; however, as mentioned above, the optimal input size is about 10 thousand points. The total time taken to extract the skeleton of a maize plant with about 10 thousand points is about 60 s, excluding the device scanning time. Figure 16 illustrates that the time consumed by the algorithm increases with the point cloud size, while the RMSE of leaf length, leaf inclination angle, and leaf azimuth angle decreases. The approach in this paper does not significantly increase the time consumption compared with the CLS algorithm (Su et al., 2011), and the calculation becomes faster on more powerful computers. This is much faster than traditional manual skeleton data acquisition using 3D digitizers.
Discussion
The processing approach reported here can accurately extract the skeleton from the 3D point cloud of a maize plant and approximates the geometric center of the plant organs, such as the veins of the leaves and the central axis of the stem. On this basis, phenotyping traits can be calculated accurately, providing useful information for genomic studies and/or breeding programs (Fiorani and Schurr, 2013; Fahlgren et al., 2015; Yang et al., 2015). At present, most phenotyping platforms use image-based skeleton extraction algorithms (Cabrera-Bosquet et al., 2016; Zhang et al., 2017) to calculate phenotyping traits, which makes it difficult to extract morphological parameters in the missing dimension, such as leaf width and leaf azimuthal angle. The proposed approach provides an effective solution to this issue by using 3D point cloud data as its input. There are now many routes to obtaining accurate, high-quality point clouds of plants, including data acquisition using 3D scanners (Yin et al., 2016), MVS-based 3D reconstruction (Gibbs et al., 2018), and 3D synthesis by 2D LiDAR combined with plant rotation (Thapa et al., 2018). Applied to such 3D point clouds of plants, this algorithm will play an important role in further maize phenotyping data processing and analysis (Zhang et al., 2017).
Besides estimating phenotyping parameters for genotype-to-phenotype studies, the plant skeleton also plays an important role in many virtual agricultural applications, such as geometric modeling, animation, shape deformation, and growth simulation (Wade and Parent, 2002; Yan et al., 2008). Combined with delicate leaf meshes that capture the detailed morphological differences between maize cultivars (Wen et al., 2017), it is possible to design and reconstruct various 3D maize plant models from the extracted skeleton using skeleton-driven mesh deformation technologies (Yan et al., 2008). High-quality reconstructed plants and canopies may promote the development of functional-structural plant models (FSPMs; Vos et al., 2010) to a more accurate level. For example, an accurately reconstructed 3D canopy model provides a precise basis for light distribution simulation and verification (Chelle and Andrieu, 1998; Cieslak et al., 2008), as well as further photosynthesis modeling (Evers et al., 2010).
We also note three limitations of the proposed approach. First, because the resolution of the point cloud is too coarse for the smaller plant organs, the extracted skeletons of the tassel and female ears are not satisfactory. Some of the tassels are extracted as only one trunk, with the other branches missing, and the female ears are always merged into the ear-leaf skeleton. This may be resolved when the point cloud density and accuracy for the tassel and female ear improve. Second, the upper leaves of the maize plant are usually compact or incompletely unfolded, which makes it difficult to completely separate the leaf point cloud from the stem; in this compact structure the leaves and stalk occlude each other, resulting in missing points and a higher error rate for upper leaves than for lower leaves (see Table 1). Some 3D maize phenotyping approaches have successfully solved 3D skeleton extraction and reconstruction for early growth stage maize plants, but plants after the silking stage remain a problem due to the occlusion of the upper parts of the plants (Garrido et al., 2015; Thapa et al., 2018). In practical applications of our approach, these deviations can be corrected through simple human interaction, and interactive correction modules are provided in the software PlantCAD into which the algorithm is integrated (Lu et al., 2014). Finally, this approach is not applicable for outdoor, in situ data acquisition when the wind speed is greater than 2 m/s. Wind creates noise that seriously affects the quality of the obtained data and decreases the accuracy of the extracted skeleton, especially at the intersections of the blades and the stem. Under windless and stable illumination conditions in the field, the target plants must also be sparse enough to avoid intersection and occlusion by adjacent plants, which would otherwise increase noise and missing points. In such cases, the acquired 3D point cloud is satisfactory for this approach and skeletons comparable to those obtained indoors can be extracted. Therefore, our future work will improve this approach so that it is more robust and effective for rough point clouds of maize plants.
Conclusion
This paper presents a skeleton extraction approach for maize plants from 3D point clouds. Five processing steps, namely point cloud denoising, point cloud contraction using the Laplacian, adaptive sampling, skeleton point connection, and skeleton curve correction, are applied sequentially to the input point cloud of a maize plant. Consequently, a semantic and accurate maize plant skeleton is derived, on the basis of which phenotyping parameters at the plant and organ scales can be calculated. Experiments performed with the proposed approach on maize plants of different cultivars and growth stages show that the extracted skeleton is highly consistent with the original point cloud. The complete processing of a maize plant takes less than 100 s. The process is therefore robust, accurate, and efficient, and the algorithm can provide technical support for the development of automatic tools for maize phenotyping data processing and analysis.
Author Contributions
SW proposed and developed the approach and wrote the methods section of this article. WW wrote the other parts of the article and acquired the 3D point clouds of the plants. BX evaluated the accuracy of the approach. XG designed the study. JD and CW improved details of the approach. YW acquired the 3D digitizing data.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Footnotes
Funding. This work was partially supported by China Postdoctoral Science Foundation (2018M631380), Beijing Postdoctoral Research Foundation, the National Natural Science Foundation of China (31601215), Beijing Municipal Natural Science Foundation (4162028), and Scientific and Technological Innovation Team of Beijing Academy of Agriculture and Forestry Sciences (JNKYT201604).
References
Araus J. L., Kefauver S. C., Zaman-Allah M., Olsen M. S., Cairns J. E. (2018). Translating high-throughput phenotyping into genetic gain. Trends Plant Sci. 23 451–466. 10.1016/j.tplants.2018.02.001
Au O. K. C., Tai C. L., Chu H. K., Cohen-Or D., Lee T. Y. (2008). Skeleton extraction by mesh contraction. ACM Trans. Graph. 27:10. 10.1145/1360612.1360643
Brichet N., Fournier C., Turc O., Strauss O., Artzet S., Pradal C., et al. (2017). A robot-assisted imaging pipeline for tracking the growths of maize ear and silks in a high-throughput phenotyping platform. Plant Methods 13:12. 10.1186/s13007-017-0246-7
Cabrera-Bosquet L., Fournier C., Brichet N., Welcker C., Suard B., Tardieu F. (2016). High-throughput estimation of incident light, light interception and radiation-use efficiency of thousands of plants in a phenotyping platform. New Phytol. 212 269–281. 10.1111/nph.14027
Cao J., Tagliasacchi A., Olson M., Zhang H., Su Z. (2010). "Point cloud skeletons via laplacian based contraction," in Proceedings of the 2010 Shape Modeling International Conference (Aix-en-Provence: IEEE Computer Society). 10.1109/SMI.2010.25
Chaivivatrakul S., Tang L., Dailey M. N., Nakarmi A. D. (2014). Automatic morphological trait characterization for corn plants via 3d holographic reconstruction. Comput. Electron. Agric. 109 109–123. 10.1016/j.compag.2014.09.005
Chaudhury A., Barron J. L., Talasaz A., Ivanov A. G., Brophy M., Grodsinski B. (2017). "Machine vision system for 3d plant phenotyping," in Proceedings of the IEEE/ACM Transactions on Computational Biology and Bioinformatics (Shanghai: IEEE).
Chelle M., Andrieu B. (1998). The nested radiosity model for the distribution of light within plant canopies. Ecol. Model. 111 75–91. 10.1016/S0304-3800(98)00100-8
Cieslak M., Lemieux C., Hanan J., Prusinkiewicz P. (2008). Quasi-monte carlo simulation of the light environment of plants. Funct. Plant Biol. 35 837–849. 10.1071/FP08082
Connor M., Kumar P. (2010). Fast construction of k-nearest neighbor graphs for point clouds. IEEE Trans. Vis. Comput. Graph. 16 599–608. 10.1109/tvcg.2010.9
Cornea N. D., Silver D., Min P. (2007). Curve-skeleton properties, applications, and algorithms. IEEE Trans. Vis. Comput. Graph. 13 530–548. 10.1109/tvcg.2007.1002
Delagrange S., Jauvin C., Rochon P. (2014). Pypetree: a tool for reconstructing tree perennial tissues from point clouds. Sensors 14 4271–4289. 10.3390/s140304271
Dun S. K., Wei H. P., Sun M. Z. (2011). A new distance color difference formula in rgb color space. Sci. Technol. Eng. 11 1833–1836. 10.3969/j.issn.1671-1815.2011.08.041
Everhart S. E., Scherm H., Askew A., Seymour L., Holb I. J. (2009). Three-dimensional spatial patterns of brown rot symptoms within sour cherry tree canopies in hungary. Phytopathology 99 S33–S33.
Evers J. B., Vos J., Yin X., Romero P., van der Putten P. E., Struik P. C. (2010). Simulation of wheat growth and development based on organ-level photosynthesis and assimilate allocation. J. Exp. Bot. 61 2203–2216. 10.1093/jxb/erq025
Fahlgren N., Gehan M. A., Baxter I. (2015). Lights, camera, action: high-throughput plant phenotyping is ready for a close-up. Curr. Opin. Plant Biol. 24 93–99. 10.1016/j.pbi.2015.02.006
Fiorani F., Schurr U. (2013). Future scenarios for plant phenotyping. Annu. Rev. Plant Biol. 64 267–291. 10.1146/annurev-arplant-050312-120137
Garrido M., Paraforos D. S., Reiser D., Vazquez Arellano M., Griepentrog H. W., Valero C. (2015). 3d maize plant reconstruction based on georeferenced overlapping lidar point clouds. Remote Sens. 7 17077–17096. 10.3390/rs71215870
Gibbs J. A., Pound M., French A. P., Wells D. M., Murchie E., Tony P. (2017). Approaches to three-dimensional reconstruction of plant shoot topology and geometry. Funct. Plant Biol. 44 62–75. 10.1071/FP16167
Gibbs J. A., Pound M., French A. P., Wells D. M., Murchie E., Pridmore T. (2018). Plant phenotyping: an active vision cell for three-dimensional plant shoot reconstruction. Plant Physiol. 178 524–534. 10.1104/pp.18.00664
Guo Q., Wu F., Pang S., Zhao X., Chen L., Liu J., et al. (2016). Crop 3d: a platform based on lidar for 3d high-throughput crop phenotyping. Sci. Sinica 46 1210–1221. 10.1360/N052016-00009
Han H. Y., Han X., Sun F. S., Huang C. Y. (2015). Point cloud simplification with preserved edge based on normal vector. Optik 126 2157–2162. 10.1016/j.ijleo.2015.05.092
Houle D., Govindaraju D. R., Omholt S. (2010). Phenomics: the next challenge. Nat. Rev. Genet. 11 855–866. 10.1038/nrg2897
Hu Y., Wang L., Xiang L., Wu Q., Jiang H. (2018). Automatic non-destructive growth measurement of leafy vegetables based on kinect. Sensors 18:806. 10.3390/s18030806
Huang H., Wu S. H., Cohen-Or D., Gong M. L., Zhang H., Li G., et al. (2013). L1-medial skeleton of point cloud. ACM Trans. Graph. 32:8. 10.1145/2461912.2461913
Hui F., Zhu J., Hu P., Meng L., Zhu B., Guo Y., et al. (2018). Image-based dynamic quantification and high-accuracy 3d evaluation of canopy structure of plant populations. Ann. Bot. 121 1079–1088. 10.1093/aob/mcy016
Jimenez-Berni J. A., Deery D. M., Rozas-Larraondo P., Condon A. T. G., Rebetzke G. J., James R. A., et al. (2018). High throughput determination of plant height, ground cover, and above-ground biomass in wheat with lidar. Front. Plant Sci. 9:18. 10.3389/fpls.2018.00237
Johnson M., Liscio E. (2015). Suspect height estimation using the faro focus(3d) laser scanner. J. Foren. Sci. 60 1582–1588. 10.1111/1556-4029.12829
Klukas C., Chen D. J., Pape J. M. (2014). Integrated analysis platform: an open-source information system for high-throughput plant phenotyping. Plant Physiol. 165 506–518. 10.1104/pp.113.233932
Livny Y., Yan F. L., Olson M., Chen B. Q., Zhang H., El-Sana J. (2010). Automatic reconstruction of tree skeletal structures from point clouds. ACM Trans. Graph. 29:8. 10.1145/1866158.1866177
Lu S., Guo X., Wen W., Xiao B., Wang C., Du J., et al. (2014). "PlantCAD: an integrated graphic toolkit for modeling and analyzing plant structure," in Proceedings of the 2014 IEEE International Conference on Progress in Informatics and Computing (Shanghai: IEEE), 378–384. 10.1109/PIC.2014.6972361
Pfeiffer S. A., Guevara J., Cheein F. A., Sanz R. (2018). Mechatronic terrestrial lidar for canopy porosity and crown surface estimation. Comput. Electron. Agric. 146 104–113. 10.1016/j.compag.2018.01.022
Potgieter A. B., George-Jaeggli B., Chapman S. C., Laws K., Suárez Cadavid L. A., Wixted J., et al. (2017). Multi-spectral imaging from an unmanned aerial vehicle enables the assessment of seasonal leaf area dynamics of sorghum breeding lines. Front. Plant Sci. 8:11. 10.3389/fpls.2017.01532
Pound M. P., French A. P., Murchie E. H., Pridmore T. P. (2014). Automated recovery of three-dimensional models of plant shoots from multiple color images. Plant Physiol. 166 1688–1801. 10.1104/pp.114.248971
Rusu R. B., Cousins S. (2011). "3d is here: point cloud library (pcl)," in Proceedings of the IEEE International Conference on Robotics and Automation (Shanghai: IEEE), 1–4. 10.1109/ICRA.2011.5980567
Seidel D., Ammer C., Puettmann K. (2015). Describing forest canopy gaps efficiently, accurately, and objectively: new prospects through the use of terrestrial laser scanning. Agr. Forest Meteorol. 213 23–32. 10.1016/j.agrformet.2015.06.006
Sinoquet H., Thanisawanyangkura S., Mabrouk H., Kasemsap P. (1998). Characterization of the light environment in canopies using 3d digitising and image processing. Ann. Bot. 82 203–212. 10.1006/anbo.1998.0665
Su Z. X., Zhao Y. D., Zhao C. J., Guo X. Y., Li Z. Y. (2011). Skeleton extraction for tree models. Math. Comput. Model. 54 1115–1120. 10.1016/j.mcm.2010.11.043
Tagliasacchi A., Zhang H., Cohen-Or D. (2009). Curve skeleton extraction from incomplete point cloud. ACM Trans. Graph. 28:9. 10.1145/1531326.1531377
Tardieu F., Cabrera-Bosquet L., Pridmore T., Bennett M. (2017). Plant phenomics, from sensors to knowledge. Curr. Biol. 27 R770–R783. 10.1016/j.cub.2017.05.055
Tester M., Langridge P. (2010). Breeding technologies to increase crop production in a changing world. Science 327 818–822. 10.1126/science.1183700
Thapa S., Zhu F., Walia H., Yu H., Ge Y. (2018). A novel lidar-based instrument for high-throughput, 3d measurement of morphological traits in maize and sorghum. Sensors 18:1187. 10.3390/s18041187
Vazquez-Arellano M., Griepentrog H. W., Reiser D., Paraforos D. S. (2016). 3-d imaging systems for agricultural applications-a review. Sensors 16:24. 10.3390/s16050618
Vos J., Evers J. B., Buck-Sorlin G. H., Andrieu B., Chelle M., de Visser P. H. (2010). Functional-structural plant modelling: a new versatile tool in crop science. J. Exp. Bot. 61 2101–2115. 10.1093/jxb/erp345
Wade L., Parent R. E. (2002). Automated generation of control skeletons for use in animation. Vis. Comput. 18 97–110. 10.1007/s003710100139
Wang Z., Zhang L. Q., Fang T., Mathiopoulos P. T., Qu H. M., Chen D., et al. (2014). A structure-aware global optimization method for reconstructing 3-d tree models from terrestrial laser scanning data. IEEE Trans. Geosci. Remote Sensing 52 5653–5669. 10.1109/tgrs.2013.2291815
Wen W., Guo X., Wang Y., Zhao C., Liao W. (2017). Constructing a three-dimensional resource database of plants using measured in situ morphological data. Appl. Eng. Agric. 33 747–756. 10.13031/aea.12135
Wu S., Zhao C. J., Guo X. Y., Wen W. L., Xiao B. X. (2017). Method of fruit tree canopy leaf reconstruction based on point cloud. Trans. Chin. Soc. Agric. Eng. 33 212–218. 10.11975/j.issn.1002-6819.2017.z1.032
Yan H. B., Hu S. M., Martin R. R., Yang Y. L. (2008). Shape deformation using a skeleton to drive simplex transformations. IEEE Trans. Vis. Comput. Graph. 14 693–706. 10.1109/tvcg.2008.28
Yan J. B., Warburton M., Crouch J. (2011). Association mapping for enhancing maize (Zea mays L.) genetic improvement. Crop Sci. 51 433–449. 10.2135/cropsci2010.04.0233
Yang G., Liu J., Zhao C., Li Z., Huang Y., Yu H., et al. (2017). Unmanned aerial vehicle remote sensing for field-based crop phenotyping: current status and perspectives. Front. Plant Sci. 8:26. 10.3389/fpls.2017.01111
Yang W., Guo Z., Huang C., Duan L., Chen G., Jiang N., et al. (2014). Combining high-throughput phenotyping and genome-wide association studies to reveal natural genetic variation in rice. Nat. Commun. 5:9. 10.1038/ncomms6087
Yang W., Guo Z., Huang C., Wang K., Jiang N., Feng H., et al. (2015). Genome-wide association study of rice (Oryza sativa L.) leaf traits with a high-throughput leaf scorer. J. Exp. Bot. 66 5605–5615. 10.1093/jxb/erv100
Yang W. N., Duan L. F., Chen G. X., Xiong L. Z., Liu Q. (2013). Plant phenomics and high-throughput phenotyping: accelerating rice functional genomics using multidisciplinary technologies. Curr. Opin. Plant Biol. 16 180–187. 10.1016/j.pbi.2013.03.005
Yin K. X., Huang H., Long P. X., Gaissinski A., Gong M. L., Sharf A. (2016). Full 3d plant reconstruction via intrusive acquisition. Comput. Graph. Forum 35 272–284. 10.1111/cgf.12724
Zaman-Allah M., Vergara O., Araus J. L., Tarekegne A., Magorokosho C., Zarco-Tejada P. J., et al. (2015). Unmanned aerial platform-based multi-spectral imaging for field phenotyping of maize. Plant Methods 11:10. 10.1186/s13007-015-0078-2
Zeng L., Liu Y. J., Chen M., Yuen M. M. F. (2012). Least squares quasi-developable mesh approximation. Comput. Aided Geom. Des. 29 565–578. 10.1016/j.cagd.2012.03.009
Zhang X., Huang C., Wu D., Qiao F., Li W., Duan L., et al. (2017). High-throughput phenotyping and qtl mapping reveals the genetic architecture of maize plant growth. Plant Physiol. 173 1554–1564. 10.1104/pp.16.01516