Abstract
Background and Aims
Global agriculture faces a phenotyping bottleneck caused by large-scale screening and breeding experiments with improved cultivars. Phenotypic analysis with high-throughput, high-accuracy and low-cost technologies has therefore become an urgent need. Recent advances in image-based 3D reconstruction offer an opportunity for high-throughput phenotyping. The main aim of this study was to quantify and evaluate the canopy structure of plant populations in two and three dimensions based on the multi-view stereo (MVS) approach, and to monitor plant growth and development from the seedling stage to the fruiting stage.
Methods
Multi-view images of flat-leaf cucumber, small-leaf pepper and curly-leaf eggplant were obtained by moving a camera around the plant canopy. Three-dimensional point clouds were reconstructed from the images based on the MVS approach and were then converted into surfaces consisting of triangular facets. Phenotypic parameters, including leaf length, leaf width, leaf area, plant height and maximum canopy width, were calculated from the reconstructed surfaces. Accuracy was evaluated in 2D and 3D for individual leaves by comparing reconstructed phenotypic parameters with reference values and by calculating the Hausdorff distance, extended here to the mean distance between two surfaces.
Key Results
Our analysis demonstrated good agreement between reference and estimated values of leaf parameters. A high level of overlap was also found between surfaces from image-based reconstruction and laser scanning. The accuracy of 3D reconstruction was relatively lower for curly-leaf plants than for flat-leaf plants. Plant height of all three species, and maximum canopy width of cucumber and pepper, showed an increasing trend during the 70 d after transplanting. Maximum canopy width of eggplant peaked at the 40th day after transplanting. The largest leaf phenotypic parameters of cucumber were mostly found at middle-upper leaf positions.
Conclusions
High-accuracy 3D evaluation of reconstruction quality indicated that dynamic capture of the 3D canopy based on the MVS approach can be potentially used in 3D phenotyping for applications in breeding and field management.
Keywords: Image-based, multi-view stereo, 3D evaluation, plant population, canopy structure, laser scanning
INTRODUCTION
Global breeding programmes are facing the challenge of a phenotyping bottleneck in the dissection of the hereditary basis of complex traits (Houle et al., 2010; Furbank and Tester, 2011; Klodt et al., 2015). High-throughput and high-accuracy phenotyping have therefore become an urgent need in phenotypic analysis and structure reconstruction. Several high-throughput technologies have been applied to analyse plant phenotype traits, such as laser scanning (Paulus et al., 2013; Kazmi et al., 2014; Rose et al., 2015), lidar (Omasa et al., 2007), ultrasonic sensing (McCarthy et al., 2010) and fluorescence and hyperspectral imaging (Furbank and Tester, 2011). These technologies typically require expensive instruments and only work under certain environments, which significantly limits their applications.
Recent advances in image-based 3D reconstruction offer the opportunity for high-throughput phenotyping. The multi-view stereo (MVS) approach can reconstruct 3D plant structure from image sequences captured at multiple viewing angles. The MVS approach has been applied in phenotypic quantification (Biskup et al., 2007; Rose et al., 2015; Duan et al., 2016), algorithm evaluation (Paproki et al., 2012; Pound et al., 2014, 2016) and yield prediction (Klodt et al., 2015; Burgess et al., 2017) for a wide variety of plant species. However, previous studies have tended to focus on individual plants, ignoring the shading effects among plants within a plant population.
Accurate simulation of plant structure is necessary for plant phenotypic trait calculation, as researchers have found that screening gene mutants would be unsuccessful in breeding once the error of phenotypic quantification exceeds 5 % (Lewin, 1986; Houle et al., 2010). The ground truth model has been used as a base surface for evaluation of reconstructed plant structure in 3D (Pound et al., 2016). However, this base surface was also produced from captured images. Applications of laser scanning demonstrated that this technique can be used for highly accurate reconstruction of plant structure in 3D (Paulus et al., 2014; Rose et al., 2015; Hou et al., 2016; Zhang et al., 2017), and therefore can be potentially used as a base surface for 3D evaluation (Cignoni et al., 1998; Hou et al., 2016; Pound et al., 2016).
The purposes of this study were to reconstruct the 3D canopy of populations of flat-leaf cucumber, small-leaf pepper and curly-leaf eggplant based on the MVS approach, to evaluate the accuracy of the reconstructed canopy structure against measurements from laser scanning in two and three dimensions, and to monitor plant growth and development from the seedling stage to the fruiting stage.
MATERIALS AND METHODS
We used the MVS approach to reconstruct the 3D structure of an individual plant and its population. The overall workflow is described in Fig. 1. It involved seven steps: (1) capture multi-view images of the plant population; (2) reconstruct 3D point clouds of the plant population from multi-view images; (3) denoise and segment point clouds into individual organs using a filtering algorithm (Rusu and Cousins, 2011) and a region-growing segmentation algorithm (Rabbani et al., 2006; Rusu et al., 2008); (4) obtain the laser scanning point clouds of an individual plant for 3D evaluation; (5) reconstruct a smooth surface from point clouds of image-based reconstruction and laser scanning; (6) extract and evaluate phenotypic parameters of plant structure; and (7) evaluate the accuracy of the image-based reconstruction in 3D.
Fig. 1.
Workflow of 3D reconstruction and accuracy evaluation for plant structure.
Greenhouse experiment
A pot experiment was carried out in a sunlight greenhouse at China Agricultural University, Beijing, China. To keep air temperatures within a range of 18–28 °C, shading cloths and evaporative cooling were used to prevent excessively high temperatures on sunny days. Each pot was filled to a depth of ~22 cm with experimental soil [1:1 mixture of loam (Shangzhuang Experimental Station, China Agricultural University, Beijing, China) and organic cultivation soil (Huaian-Huisheng Horticulture Development Company, Huaian, Jiangsu, China)]. The diameter and height of the pots were 31.5 and 29.5 cm, respectively, and were used as a reference for organ geometry calculation. Basal fertilizer consisting of 100 g of organic compound fertilizer (N 5.41 %, P2O5 5.70 %, K2O 8.75 %) per pot was thoroughly mixed with the experimental soil, and 50 g of organic compound fertilizer per pot was added as topdressing in the fruiting stage.
Plants with different levels of complexity were selected as experimental materials, comprising flat-leaf cucumber (Cucumis sativus ‘Zhongnong 106’), small-leaf pepper with branches (Capsicum annuum ‘Zhongjiao 105’) and curly-leaf eggplant (Solanum melongena ‘Yuanza 16’). Cucumber with three leaves, eggplant with four leaves and pepper with seven leaves were planted, with one plant per pot. The plants were watered, pruned and sprayed with pesticide regularly to maintain normal growth.
Laboratory experiment
To simulate a plant population, a group of four plants (2 rows × 2 plants in the rows, Supplementary Data Fig. 1) was sampled at the 20th, 40th, 60th and 70th day after transplanting for each of cucumber, pepper and eggplant. The distance between plants was equal to the actual distance between the plants grown in the greenhouse experiment. Collected plants and leaves in each sampling group were marked based on plant type and leaf position.
In the MVS approach, images were obtained by moving a Canon 500D DSLR camera (Canon, Tokyo, Japan) around the sampled plant population. Eggplants and peppers were photographed as a single layer (Fig. 2A), while cucumbers were photographed as a double layer (Fig. 2B). Each group of images had 60–70 images for the eggplant and pepper canopy, and 80–90 images for the cucumber canopy. After photographing, a FastSCAN™ laser scanner (Polhemus, USA; practical accuracy was 0.13 mm) with a TX4 magnetic field transmitter was used to obtain the 3D point clouds of the plant (Hou et al., 2016). Leaf length and leaf width of the sampled plant population were measured manually.
Fig. 2.
Camera positions around the original sparse point clouds of an experimental plant, obtained using VisualSFM. (A) Single-layer photograph of eggplant. (B) Double-layer photograph of cucumber.
Generation of 3D point clouds of the plant population based on the MVS approach
Three-dimensional point clouds of the plant population were reconstructed using VisualSFM software (Wu, 2011), which is based on the multi-view stereo and structure from motion (MVS-SFM) algorithm. The SFM algorithm establishes the relationship between 2D and 3D points through perspective projection. A scale-invariant feature transform (SIFT) keypoint detector and approximate nearest neighbour (ANN) algorithm were adopted to search and extract matched 2D points from multi-view image sequences (Arya et al., 1998). Then 3D point coordinates and camera parameters were correspondingly calculated. Based on the 3D sparse point clouds produced as described above (Fig. 2), the MVS algorithm was used to calculate the 3D dense point clouds, which included colour and texture information, using the PMVS/CMVS software (Furukawa and Ponce, 2010).
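The 2D matching stage of this pipeline can be illustrated with a short script. The sketch below uses OpenCV's SIFT detector and a FLANN-based approximate nearest-neighbour matcher to find corresponding points between two views, which is the kind of matching VisualSFM performs internally before triangulating 3D points; the image file names are placeholders and this is an illustration, not the software actually used in the study.

```python
# Minimal sketch of SIFT keypoint detection and approximate nearest-neighbour
# matching between two views, analogous to the 2D matching stage that SfM
# software such as VisualSFM performs internally. File names are placeholders.
import cv2

img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN (approximate nearest-neighbour) matcher with a KD-tree index
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
matches = flann.knnMatch(des1, des2, k=2)

# Lowe's ratio test keeps only distinctive correspondences
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(f"{len(good)} putative correspondences between the two views")
```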
Denoising and segmentation of 3D point clouds for plant population
Noisy point removal (Rusu and Cousins, 2011) and point cloud segmentation (Rabbani et al., 2006; Rusu et al., 2008) were based on the Point Cloud Library (PCL), which contains a series of modular libraries, such as filters, registration, segmentation, surface reconstruction and visualization (Rusu and Cousins, 2011). Denoising with the filtering algorithm included: (1) removing background noise and most edge noise by thresholding the RGB values of the background (the main background noise in the current study was point clouds in the colour of the floor); and (2) removing outliers by adjusting statistical threshold values, such as the number of neighbours and the standard deviation. The lower the standard deviation and the higher the number of neighbours, the more outliers were removed.
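These two denoising steps can be sketched as follows. The example uses Open3D as a convenient scriptable stand-in for the PCL filters used in the study; the input file name and the floor-colour threshold are assumptions, while the neighbour count (100) and standard deviation multiplier (2.5) follow the values used in this study (see below).

```python
# Illustrative denoising sketch using Open3D as a stand-in for the PCL filters
# used in the study: (1) drop background-coloured points by an RGB threshold,
# (2) remove outliers by a statistical neighbourhood test. The file name and
# the grey floor-colour threshold are placeholders.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("canopy_dense.ply")          # MVS dense cloud
colors = np.asarray(pcd.colors)                            # RGB in [0, 1]

# (1) keep points that are not close to the assumed (grey) floor colour
keep = np.where(~np.all(np.abs(colors - 0.5) < 0.1, axis=1))[0]
pcd = pcd.select_by_index(keep.tolist())

# (2) statistical outlier removal: a point is dropped if its mean distance to
# its k neighbours exceeds the global mean by more than std_ratio std. devs.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=100, std_ratio=2.5)
o3d.io.write_point_cloud("canopy_denoised.ply", pcd)
```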
An unstructured 3D point cloud was taken as input for the region growing segmentation algorithm adopted in this study, which uses surface normals as the segmentation criterion (Rabbani et al., 2006). Segmentation involved the following steps: (1) manually segmenting the plant population into individual plants; and (2) adjusting threshold values, such as the smoothness and curvature thresholds, to balance over- and under-segmentation. A few leaf segmentation errors were corrected manually. Finally, individual organs were segmented from individual plants. In this study, the number of neighbours, standard deviation, smoothness threshold and curvature threshold were set to 100, 2.5, 16.0/180.0 × π and 2.0, respectively. According to the coordinates of the leaves, we ranked and coloured the random outputs of the segmentation using the R language (Ihaka and Gentleman, 1996).
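For readers who want a concrete picture of the region-growing step, the compact sketch below grows regions from low-curvature seed points and attaches neighbouring points whose normals differ by less than the smoothness threshold, in the spirit of Rabbani et al. (2006). It is written with Open3D and NumPy because PCL's region-growing class is C++; the curvature measure is a simplified proxy and the function is an illustration, not the implementation used here.

```python
# Compact region-growing sketch after Rabbani et al. (2006). Default thresholds
# mirror the values reported above (100 neighbours, smoothness 16.0/180.0 x pi
# rad, curvature 2.0); the curvature proxy is a simplification.
import numpy as np
import open3d as o3d

def region_growing(pcd, k=100, smooth_thr=16.0 / 180.0 * np.pi, curv_thr=2.0):
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=k))
    normals = np.asarray(pcd.normals)
    points = np.asarray(pcd.points)
    tree = o3d.geometry.KDTreeFlann(pcd)

    # neighbour lists and a crude curvature proxy (spread of neighbour normals)
    neighbours, curvature = [], np.empty(len(points))
    for i in range(len(points)):
        _, idx, _ = tree.search_knn_vector_3d(points[i], k)
        idx = np.asarray(idx)
        neighbours.append(idx)
        curvature[i] = 1.0 - np.abs(normals[idx] @ normals[i]).mean()

    labels = np.full(len(points), -1, dtype=int)
    region = 0
    for seed in np.argsort(curvature):            # grow from the flattest seeds
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = [seed]
        while queue:
            p = queue.pop()
            for q in neighbours[p]:
                if labels[q] != -1:
                    continue
                angle = np.arccos(np.clip(abs(normals[p] @ normals[q]), 0.0, 1.0))
                if angle < smooth_thr:            # smooth transition: same region
                    labels[q] = region
                    if curvature[q] < curv_thr:
                        queue.append(q)           # flat points keep growing
        region += 1
    return labels
```

Because individual plants were first separated manually, a sketch like this would be applied per plant, with the resulting regions corresponding to candidate organs.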
Surface reconstruction and parameter value extraction
Surface reconstruction, repair, smoothing and parameter extraction were all performed with Geomagic Studio software (Raindrop Geomagic, Morrisville, NC, USA). The encapsulating, smoothing and filling algorithms of Geomagic Studio automatically transform point clouds into smooth surfaces consisting of triangular facets.
The extracted parameters included leaf length, leaf width, leaf area, plant height, and maximum canopy width of individual plants, indicating the competition between neighbouring plants (Fig. 3). Leaf length was the distance from the leaf base to the tip. Leaf width was the maximum distance that was perpendicular to leaf length. Leaf area was the sum of the area of the triangles (Fig. 3A). Leaf length and leaf width were calculated as distances along the surface of the leaf. Plant height was the distance from the crown of the plant to the soil layer (Fig. 3B), and the maximum canopy width (Fig. 3C) described the geometric expansion trend of the individual plant.
Fig. 3.
Sketch map of leaf length and leaf width (A), plant height (B) and maximum canopy width (C). (A) The dotted line between 1 and 2 is leaf width and that between 3 and 4 is leaf length. (B) Plant height is equal to canopy height H1 minus soil depth H2. (C) Maximum canopy width is the maximum of canopy widths L1 and L2.
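The geometric definitions in Fig. 3 translate directly into simple mesh computations. The sketch below shows how leaf area (sum of triangle areas), plant height (vertical extent minus soil depth) and maximum canopy width (larger horizontal extent) could be computed from a triangulated surface; in the study these values were extracted in Geomagic Studio, so the code, the array layout and the assumption that z is the vertical axis are illustrative only. Leaf length and width, which were measured along the leaf surface, are omitted because they require a geodesic (along-surface) computation.

```python
# Illustrative geometry for the parameters defined in Fig. 3, assuming a
# triangulated surface given as `vertices` (N x 3, cm) and `triangles`
# (M x 3 vertex indices), with z as the vertical axis. Sketch only; the study
# extracted these values in Geomagic Studio.
import numpy as np

def leaf_area(vertices, triangles):
    """Leaf area as the sum of triangle areas (cross-product formula)."""
    a, b, c = (vertices[triangles[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

def plant_height(vertices, soil_depth_cm):
    """Vertical extent minus soil depth (H1 - H2 in Fig. 3B), assuming
    the lowest vertex lies at the reference plane."""
    z = vertices[:, 2]
    return (z.max() - z.min()) - soil_depth_cm

def max_canopy_width(vertices):
    """Larger of the two horizontal extents (max of L1 and L2 in Fig. 3C)."""
    x, y = vertices[:, 0], vertices[:, 1]
    return max(x.max() - x.min(), y.max() - y.min())
```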
Accuracy evaluation on individual leaves
Comparisons of leaf length, leaf width and leaf area were made between manual measurements and image-based reconstruction/laser scanning. The 3D evaluations of individual leaves were performed using the Hausdorff distance, which was calculated between two surfaces from the image-based reconstruction and laser scanning reconstruction. The Hausdorff distance is the maximum distance from any point on either surface to the nearest point on the other. This concept was extended to calculate the mean distance between two surfaces (Cignoni et al., 1998; Aspert et al., 2002; Pound et al., 2016). A higher Hausdorff distance means a lower accuracy of reconstruction (Aspert et al., 2002; Hu et al., 2015). Leaf surfaces from image-based reconstructions and corresponding laser scanning surfaces were aligned to the same 3D coordinate system by aligning 8–12 feature points. The Hausdorff distance was then calculated, with a maximum distance of 1 cm.
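A minimal sketch of this evaluation, assuming both surfaces have already been aligned to a common coordinate system, is given below. It samples points densely from the image-based and laser-scanned meshes and computes the classical (maximum) Hausdorff distance as well as a mean surface distance; Open3D is used for illustration and the file names are placeholders.

```python
# Sketch of the symmetric (max) Hausdorff and mean surface distances between an
# image-based leaf mesh and its laser-scanned counterpart, following the idea
# of Cignoni et al. (1998) and Aspert et al. (2002). File names are placeholders.
import numpy as np
import open3d as o3d

mesh_mvs = o3d.io.read_triangle_mesh("leaf_mvs.ply")      # image-based surface
mesh_ref = o3d.io.read_triangle_mesh("leaf_laser.ply")    # laser-scanned surface

# densely sample both surfaces so point-to-point distances approximate
# point-to-surface distances
pc_mvs = mesh_mvs.sample_points_uniformly(number_of_points=50_000)
pc_ref = mesh_ref.sample_points_uniformly(number_of_points=50_000)

d_ab = np.asarray(pc_mvs.compute_point_cloud_distance(pc_ref))  # A -> B
d_ba = np.asarray(pc_ref.compute_point_cloud_distance(pc_mvs))  # B -> A

hausdorff = max(d_ab.max(), d_ba.max())        # classical (maximum) Hausdorff
mean_dist = 0.5 * (d_ab.mean() + d_ba.mean())  # mean surface distance
print(f"Hausdorff: {hausdorff:.3f}, mean distance: {mean_dist:.3f} (model units)")
```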
RESULTS
Sensitivity analysis of image number required for complete reconstruction
A sensitivity analysis with five levels of image number (20, 40, 60, 80 and 100 images) was conducted to check the influence of image number on the accuracy of image-based 3D reconstruction for the three plant types at the 70th day after transplanting. Pepper plants are shown as an example in Fig. 4. As the number of photographed images decreased, the number of reconstructed leaves decreased correspondingly. The results indicated that at least 60 images for short plants (one photographed layer) and 80 images for tall plants (two photographed layers) were needed to reconstruct at least 95 % of the leaves.
Fig. 4.
Point clouds of pepper at the 70th day after transplanting reconstructed from 40 (A), 60 (B) and 80 (C) images.
Image-based 3D reconstruction of plant population
Point cloud segmentation and 3D surface reconstruction were conducted from the seedling stage to the fruiting stage; only the 40th day after transplanting for all plants (Fig. 5A, B) and the 70th day after transplanting for cucumber (Fig. 5C, D) are shown as examples. The plants were segmented into two parts: leaves (colours representing different leaf positions) and stem (black) (Fig. 5A, C). Three-dimensional surfaces were built from the denoised point clouds and contained texture information (Fig. 5B, D).
Fig. 5.
Segmentation of point clouds (A) and 3D surface reconstruction (B) of (from left to right) pepper, eggplant and cucumber at the 40th day after transplanting. Segmentation of point clouds (C) and 3D surface reconstruction (D) of cucumber at the 20th, 40th, 60th and 70th days after transplanting (from left to right). Colours represent different leaf positions and black represents the stem.
Comparisons of leaf length, leaf width and leaf area
Our analysis demonstrated good agreement in leaf length and leaf width between measured and estimated values for data within 70 d after transplanting [Table 1; root mean squared error (RMSE) ≤ 0.59 cm, R2 ≥ 0.96; n = 343 for image-based reconstruction and n = 109 for laser scanning]. A slightly higher RMSE was found for leaf length than for leaf width in all three species when comparing measured and reconstructed values. Accuracy was relatively lower for pepper than for cucumber and eggplant. Comparison of leaf area between image-based reconstruction and laser scanning also showed high correlation (RMSE ≤ 4.34 cm2, R2 ≥ 0.98), again with relatively lower accuracy for pepper than for cucumber and eggplant. In general, pepper had the lowest and cucumber the highest correlation among all comparisons of leaf length, leaf width and leaf area.
Table 1.
Comparisons of leaf length, leaf width and leaf area between image-based reconstruction, laser scanning and manual measurement
| Plant | Leaf parameter | N₁ | R²₁ | RMSE₁ (cm) | N₂ | R²₂ | RMSE₂ (cm or cm²) |
|---|---|---|---|---|---|---|---|
| Cucumber | length | 106 | 0.99 | 0.26 | 38 | 0.99 | 0.28 |
| | width | 106 | 0.996 | 0.23 | 38 | 0.996 | 0.32 |
| | area | | | | 38 | 0.998 | 4.34 |
| Eggplant | length | 107 | 0.99 | 0.59 | 29 | 0.998 | 0.16 |
| | width | 107 | 0.99 | 0.34 | 29 | 0.99 | 0.23 |
| | area | | | | 29 | 0.996 | 3.89 |
| Pepper | length | 130 | 0.96 | 0.30 | 42 | 0.96 | 0.33 |
| | width | 130 | 0.96 | 0.14 | 42 | 0.96 | 0.15 |
| | area | | | | 42 | 0.98 | 1.33 |
Subscript 1 denotes comparison between image-based reconstruction and manual measurement; subscript 2 denotes comparison between laser scanning and manual measurement. Leaf area was compared between image-based reconstruction and laser scanning (see text).
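For completeness, the agreement statistics in Table 1 can be computed with a few lines of NumPy. The sketch below defines RMSE and the coefficient of determination between measured and estimated values; the small arrays are placeholder data, not values from this study, and the exact R2 formulation used by the authors (coefficient of determination versus squared correlation) is an assumption here.

```python
# Placeholder sketch of the agreement statistics in Table 1: RMSE and the
# coefficient of determination (R2) between measured and estimated values.
import numpy as np

def rmse(measured, estimated):
    measured, estimated = np.asarray(measured), np.asarray(estimated)
    return float(np.sqrt(np.mean((estimated - measured) ** 2)))

def r_squared(measured, estimated):
    measured, estimated = np.asarray(measured), np.asarray(estimated)
    ss_res = np.sum((measured - estimated) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

measured_length = [10.2, 12.5, 9.8, 14.1]      # cm, placeholder measurements
estimated_length = [10.0, 12.8, 9.9, 13.8]     # cm, placeholder reconstructions
print(rmse(measured_length, estimated_length),
      r_squared(measured_length, estimated_length))
```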
Three-dimensional accuracy evaluation of image-based reconstruction on individual leaves
To evaluate the accuracy of image-based 3D reconstruction, the dataset of Hausdorff distances was divided into seven intervals. The percentages of Hausdorff distance intervals were calculated by counting the number of Hausdorff distances in each interval for the three types of plant (Table 2). The results showed that >95 % of the Hausdorff distances were <0.2 cm for pepper, 0.3 cm for cucumber and 0.4 cm for eggplant within 70 d after transplanting.
Table 2.
Percentages of intervals of Hausdorff distances calculated between surfaces of image-based reconstruction and laser scanning
| Plant | Days after transplanting | 0.0–0.1 cm | 0.0–0.2 cm | 0.0–0.3 cm | 0.0–0.4 cm | 0.4–0.6 cm | 0.6–0.8 cm | 0.8–1.0 cm |
|---|---|---|---|---|---|---|---|---|
| Cucumber | 20 | 90.25 | 96.20 | 97.98 | 99.36 | 0.53 | 0.08 | 0.02 |
| | 40 | 86.21 | 94.52 | 98.26 | 99.48 | 0.40 | 0.10 | 0.02 |
| | 70 | 86.83 | 94.96 | 97.93 | 99.08 | 0.61 | 0.17 | 0.07 |
| Pepper | 20 | 82.86 | 96.18 | 98.85 | 99.54 | 0.43 | 0.03 | 0.00 |
| | 40 | 81.72 | 90.82 | 95.06 | 97.17 | 1.90 | 0.74 | 0.12 |
| | 70 | 85.35 | 93.56 | 97.00 | 98.53 | 1.25 | 0.18 | 0.04 |
| Eggplant | 20 | 83.43 | 91.16 | 95.72 | 97.32 | 1.65 | 0.87 | 0.16 |
| | 40 | 85.77 | 90.77 | 94.32 | 96.78 | 1.93 | 0.77 | 0.33 |
| | 70 | 82.09 | 90.56 | 94.26 | 96.19 | 1.91 | 0.91 | 0.41 |
The accuracy of image-based 3D reconstruction was visualized based on the Hausdorff distance for the three plant types at the 40th and 60th days after transplanting. Reconstruction accuracy was higher for cucumber and lower for eggplant (Fig. 6, Table 2). Most reconstruction errors occurred at leaf edges and in regions of large undulation of the leaf surface.
Fig. 6.
Visualization of reconstruction accuracy for cucumber (A, B), pepper (C, D) and eggplant (E, F) based on Hausdorff distances at the 40th and 60th days after transplanting. Blue represents high accuracy while red represents relatively low accuracy.
Case study on dynamic monitoring of plant growth
The dynamic changes in plant height (Fig. 7A) and maximum canopy width (Fig. 7B) were obtained from the reconstructed canopy. During the 70 d after transplanting, plant height of cucumber, pepper and eggplant increased continuously to 138.8 ± 6.0, 47.7 ± 2.3 and 54.0 ± 6.3 cm, respectively. During the same period, maximum canopy width of cucumber and pepper increased from 27.9 ± 3.2 to 45.5 ± 4.1 cm and from 17.6 ± 3.1 to 34.4 ± 4.4 cm, respectively. Maximum canopy width of eggplant increased initially from 29.5 ± 2.5 to 40.8 ± 2.2 cm (40th day after transplanting), then decreased to 35.3 ± 1.8 cm at the 70th day after transplanting.
Fig. 7.
Plant height (A) and maximum canopy width (B) of cucumber, pepper and eggplant at the 20th, 40th, 60th and 70th days after transplanting.
Leaf position was sorted from the first true leaf (L1) to the top leaf. Leaf area (Fig. 8A), leaf length (Fig. 8B) and leaf width (Fig. 8C) of cucumber increased initially and then decreased with the increase in leaf position during the 70 d after transplanting. The maximum leaf area gradually increased from 126.5 ± 12.6 cm2 for L3 at the six-leaf stage to 306.0 ± 1.0 cm2 for L10 at the 18-leaf stage. Maximum leaf length and width showed the same trend as maximum leaf area, gradually increasing from 11.9 ± 0.9 and 13.7 ± 0.8 cm, respectively, to 16.7 ± 1.2 and 21.0 ± 1.0 cm at the 18-leaf stage. The larger leaves were mostly formed at the middle-upper leaf position.
Fig. 8.
Leaf area, leaf length and leaf width of cucumber at six growth stages (6-, 8-, 10-, 13-, 15- and 18-leaf stages) within 70 d after transplanting. Leaf position was sorted from the first true leaf (position 1) to the top leaf.
DISCUSSION
Dynamic capture of 3D canopy structure on an individual plant and its population
In this study we adopted the MVS approach for the dynamic capture of 3D canopy structure. Compared with other technologies, such as lidar (Omasa et al., 2007) and laser scanning (Guo et al., 2012; Kazmi et al., 2014), the MVS approach is more cost-effective and portable. Previous research on the influence of image position and number on 3D reconstruction accuracy has demonstrated that the top view has many advantages for short and small plants, while the side view provides more information for tall and large plants (T.T. Nguyen et al., 2016). Considering the balance between reconstruction accuracy and manual photography efficiency, one layer of photography over the top of the canopy (Fig. 2A) was used for the short pepper and eggplant plants and two layers were used for the tall cucumber plants (Fig. 2B).
A sensitivity analysis was conducted to check the influence of image number and distribution on the accuracy of 3D reconstruction. The results indicated that the resolution of image-based reconstruction could not be further improved with >60 images for short plants and >80 images for tall plants. Because of the larger photography radius and the severe leaf occlusion among plants, a plant population requires relatively more images than an individual plant to ensure reconstruction accuracy (Pound et al., 2014, 2016; Duan et al., 2016). Automated camera platforms with hemispherical uniform photography will be designed to meet the needs of high-accuracy reconstruction and high sampling frequency in the near future (T.T. Nguyen et al., 2016; C.V. Nguyen et al., 2016).
Segmentation usually presents a big challenge in point cloud processing (Rabbani et al., 2006; Paulus et al., 2013). Approaches to segmenting a point cloud into regions have been described based on edges (Sappa and Devy, 2001; Wani and Arabnia, 2003), surfaces (Besl and Jain, 1988; Rabbani et al., 2006; Paulus et al., 2013) and scan lines (Natonek, 1998; Khalifa et al., 2003; Sithole and Vosselman, 2003). The region growing segmentation algorithm adopted in this study is a highly efficient approach for segmenting point clouds using a surface smoothness constraint (Rabbani et al., 2006). In our study, organ segmentation was more difficult for the multi-branched pepper than for cucumber or eggplant. The segmentation process is semi-automatic and the separated individual organs need to be recognized by researchers. Completely automatic segmentation has only been realized for small plants with a few leaves, such as Gossypium (Paproki et al., 2012) and Hordeum vulgare (Dornbusch et al., 2007; Paulus et al., 2014).
Surface reconstruction could also be processed by combining several algorithms (Hosoi and Omasa, 2006; Kolev et al., 2009; Kazhdan and Hoppe, 2013). However, it is very difficult to apply these algorithms to the reconstruction of complete surfaces on narrow and small leaves due to poorly defined surface boundaries (Duan et al., 2016; Pound et al., 2016). Automated outdoor/indoor image acquisition platforms (Hartmann et al., 2011; T.T. Nguyen et al., 2016; C.V. Nguyen et al., 2016), surface reconstruction algorithms and data extraction algorithms are under development and will allow high accuracy and full automation for future experiments.
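As an example of such an algorithm, screened Poisson reconstruction (Kazhdan and Hoppe, 2013) is available in open-source libraries and can be scripted directly on a denoised point cloud. The sketch below uses Open3D and is an alternative illustration, not the Geomagic Studio workflow used in this study; the file names and octree depth are assumptions.

```python
# Example of a fully scriptable surface-reconstruction step: screened Poisson
# reconstruction (Kazhdan and Hoppe, 2013) as implemented in Open3D, applied
# to a denoised canopy/leaf point cloud. Illustration only; file names and the
# octree depth are placeholders.
import open3d as o3d

pcd = o3d.io.read_point_cloud("canopy_denoised.ply")
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)        # higher depth -> finer, but potentially noisier, detail
o3d.io.write_triangle_mesh("canopy_poisson.ply", mesh)
```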
Accuracy evaluation in 3D and dynamic monitoring of plant growth
A high consistency of leaf geometry information was found between the different approaches, with most R2 values >0.98. Phenotypic traits such as leaf width, leaf length and leaf area cannot be extracted as easily from plants with multiple branches (Table 1, pepper versus cucumber and eggplant). The main deviations in organ geometry calculation were caused by (1) the difficulty of leaf-base segmentation due to the blurred outline of the leaf base; (2) pixel loss at leaf edges or tips because of leaf shading or denoising; and (3) pixel fusion at leaf edges and undulations of the leaf surface (Ma et al., 2007).
Plant height and canopy width are easier to quantify than leaf geometry traits, such as leaf width and length, because they can be calculated even when the stem or leaf is not completely reconstructed (Rose et al., 2015). Plant height and maximum canopy width of the experimental plants showed an increasing trend during the 70 d after transplanting, except for the maximum canopy width of eggplant, which reached its maximum at the 40th day after transplanting because of pruning between 40 and 70 d after transplanting (Fig. 7). Old, lower leaves of cucumber were replaced by new, upper leaves (Fig. 8). The larger leaves were mostly formed at middle-upper leaf positions, which is more conducive to biomass synthesis and accumulation because middle-upper leaves have higher photosynthetic efficiency (Critchley, 1981; Ai et al., 2002).
Phenotypic parameters such as plant height and organ size alone are inadequate for further breeding processes. More phenotypic parameters, such as leaf angle and leaf rolling index, are needed because they are more useful for resistance breeding (Zehr et al., 1994; Blum, 2005; Kadioglu et al., 2012). Researchers have argued that screening gene mutants in breeding programmes cannot be achieved once the error in phenotypic traits exceeds 5 % (Lewin, 1986; Houle et al., 2010). Our 3D evaluation of the three plant types showed that the lowest reconstruction accuracy was observed for the curly-leaf eggplant (Fig. 6), but >95 % of the Hausdorff distances were within 0.4 cm and could meet this accuracy requirement.
Potential application of the MVS approach
Phenotypic traits calculated in this study and the reconstructed 3D architecture can be used as input to simulate light interception, photosynthesis and yield in functional–structural models. Researchers can also analyse nutrient deficiency (Hsiao et al., 1984), salinity and drought tolerance (Rajendran et al., 2009; Rauf et al., 2016), pathogen infection (Ghandi et al., 2016) or toxicity evaluation (Schnurbusch et al., 2010) by tracking and analysing variation in plant/organ shape (Blum, 2005; Dey et al., 2012) and colour (Munns and Tester, 2008; Rajendran et al., 2009). Combination of multiple technologies will greatly enhance the efficiency of genotype selection and breeding for tolerance genes.
As laser scanning was limited by the operating environment, our research was carried out indoors. The MVS approach can also be used outdoors (Dey et al., 2012; Burgess et al., 2017; Shafiekhani et al., 2017). However, strong direct sunlight or shadows would reduce the accuracy of the model by affecting the relationship between the image and the projected pattern (Jin and Tang, 2009). By adjusting camera parameters according to the experimental environment and using a light conversion algorithm (Lati et al., 2013), leaf light reflection caused by strong sunlight can be largely reduced. Furthermore, automated and high-throughput photography will significantly reduce wind interference (T.T. Nguyen et al., 2016). Parameters including plant height and canopy width (Shafiekhani et al., 2017), vegetation coverage, and the colours of branches, leaves and fruit (Dey et al., 2012) can be easily extracted with the MVS approach and have already been applied in field management and yield prediction (Burgess et al., 2017). More detailed parameters, such as the length, width, angle and rolling index of individual leaves, cannot yet be accurately extracted outdoors because of incomplete reconstruction.
Leaf occlusion caused by shading among leaves in indoor or outdoor environments can lead to pixel loss in the point cloud reconstructed from images. By combining multiple automated camera platforms (e.g. a ground platform based on hemispherical uniform photography and a miniature unmanned aerial vehicle), more comprehensive and complete information on plant structure can be obtained (Shafiekhani et al., 2017). Nevertheless, the phenotyping bottleneck used to be a hardware problem and is now related more to software algorithms (Minervini et al., 2015). Accurate algorithms are required to reconstruct complete, high-accuracy 3D plant populations from MVS image sequences and to extract phenotypic information from indoor or outdoor experiments.
SUPPLEMENTARY DATA
Supplementary data are available online at www.aob.oxfordjournals.org and consist of the following. Figure S1: a few multi-view images of the plant population for eggplant (A), pepper (B) and cucumber (C).
ACKNOWLEDGEMENTS
We thank Yingpu Che and Ziwen Xie for their help in image processing. This work was supported by the National Key Research and Development Program of China (2016YFD0300202) and the National Natural Science Foundation of China (No. 31000671 and No. 31210103906).
LITERATURE CITED
- Ai XZ, Zhang ZX, He QW, Sun XL, Xing YX. 2002. Study on photosynthesis of leaves at different positions of cucumber in solar greenhouse. Scientia Agricultura Sinica 35: 1519–1524.
- Arya S, Mount DM, Netanyahu NS, Silverman R, Wu AY. 1998. An optimal algorithm for approximate nearest neighbor searching fixed dimensions. Journal of the ACM 45: 891–923.
- Aspert N, Santa-Cruz D, Ebrahimi T. 2002. MESH: measuring errors between surfaces using the Hausdorff distance. In: Proceedings of the IEEE International Conference on Multimedia and Expo (ICME) 2002. Piscataway: IEEE, 705–708.
- Besl PJ, Jain RC. 1988. Segmentation through variable-order surface fitting. IEEE Transactions on Pattern Analysis and Machine Intelligence 10: 167–192.
- Biskup B, Scharr H, Schurr U, Rascher U. 2007. A stereo imaging system for measuring structural parameters of plant canopies. Plant, Cell & Environment 30: 1299–1308.
- Blum A. 2005. Drought resistance, water-use efficiency, and yield potential—are they compatible, dissonant, or mutually exclusive? Crop and Pasture Science 56: 1159–1168.
- Burgess AJ, Retkute R, Pound MP, Mayes S, Murchie EH. 2017. Image-based 3D canopy reconstruction to determine potential productivity in complex multi-species crop systems. Annals of Botany 119: 517–532.
- Cignoni P, Rocchini C, Scopigno R. 1998. Metro: measuring error on simplified surfaces. Computer Graphics Forum 17: 167–174.
- Critchley C. 1981. Studies on the mechanism of photoinhibition in higher plants I. Effects of high light intensity on chloroplast activities in cucumber adapted to low light. Plant Physiology 67: 1161–1165.
- Dey D, Mummert L, Sukthankar R. 2012. Classification of plant structures from uncalibrated image sequences. In: Proceedings of IEEE Workshop on Applications of Computer Vision (WACV) 2012. Piscataway: IEEE, 329–336.
- Dornbusch T, Wernecke P, Diepenbrock W. 2007. A method to extract morphological traits of plant organs from 3D point clouds as a database for an architectural plant model. Ecological Modelling 200: 119–129.
- Duan T, Chapman SC, Holland E, Rebetzke GJ, Guo Y, Zheng B. 2016. Dynamic quantification of canopy structure to characterize early plant vigour in wheat genotypes. Journal of Experimental Botany 67: 4523–4534.
- Furbank RT, Tester M. 2011. Phenomics – technologies to relieve the phenotyping bottleneck. Trends in Plant Science 16: 635–644.
- Furukawa Y, Ponce J. 2010. Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence 32: 1362–1376.
- Ghandi A, Adi M, Lilia F et al. 2016. Tomato yellow leaf curl virus infection mitigates the heat stress response of plants grown at high temperatures. Scientific Reports 6: 19715.
- Guo Y, Shi TX, Wu J, Li X, Xu ZL, Yang YH. 2012. The development of static virtual tobacco model basing on three dimensional scanning methodology. Acta Tabacaria Sinica 18: 29–33.
- Hartmann A, Czauderna T, Hoffmann R, Stein N, Schreiber F. 2011. HTPheno: an image analysis pipeline for high-throughput plant phenotyping. BMC Bioinformatics 12: 148.
- Hosoi F, Omasa K. 2006. Voxel-based 3-D modeling of individual trees for estimating leaf area density using high-resolution portable scanning lidar. IEEE Transactions on Geoscience and Remote Sensing 44: 3610–3618.
- Hou T, Zheng B, Xu Z, Yang Y, Chen Y, Guo Y. 2016. Simplification of leaf surfaces from scanned data: effects of two algorithms on leaf morphology. Computers and Electronics in Agriculture 121: 393–403.
- Houle D, Govindaraju DR, Omholt S. 2010. Phenomics: the next challenge. Nature Reviews Genetics 11: 855–866.
- Hsiao TC, O’Toole JC, Yambao EB, Turner NC. 1984. Influence of osmotic adjustment on leaf rolling and tissue death in rice (Oryza sativa L.). Plant Physiology 75: 338–341.
- Hu P, Guo Y, Li B, Zhu J, Ma Y. 2015. Three-dimensional reconstruction and its precision evaluation of plant architecture based on multiple view stereo method. Transactions of the Chinese Society of Agricultural Engineering 31: 209–214.
- Ihaka R, Gentleman R. 1996. R: a language for data analysis and graphics. Journal of Computational and Graphical Statistics 5: 299–314.
- Jin J, Tang L. 2009. Corn plant sensing using real-time stereo vision. Journal of Field Robotics 26: 591–608.
- Kadioglu A, Terzi R, Saruhan N, Saglam A. 2012. Current advances in the investigation of leaf rolling caused by biotic and abiotic stress factors. Plant Science 182: 42–48.
- Kazhdan M, Hoppe H. 2013. Screened Poisson surface reconstruction. ACM Transactions on Graphics 32: 29.
- Kazmi W, Foix S, Alenyà G, Andersen HJ. 2014. Indoor and outdoor depth imaging of leaves with time-of-flight and stereo vision sensors: analysis and comparison. ISPRS Journal of Photogrammetry and Remote Sensing 88: 128–146.
- Khalifa I, Moussa M, Kamel M. 2003. Range image segmentation using local approximation of scan lines with application to CAD model acquisition. Machine Vision and Applications 13: 263–274.
- Klodt M, Herzog K, Töpfer R, Cremers D. 2015. Field phenotyping of grapevine growth using dense stereo reconstruction. BMC Bioinformatics 16: 143.
- Kolev K, Klodt M, Brox T, Cremers D. 2009. Continuous global optimization in multiview 3D reconstruction. International Journal of Computer Vision 84: 80–96.
- Lati RN, Filin S, Eizenberg H. 2013. Plant growth parameter estimation from sparse 3D reconstruction based on highly-textured feature points. Precision Agriculture 14: 586–605.
- Lewin R. 1986. Proposal to sequence the human genome stirs debate. Science 232: 1598–1601.
- Ma L, Ma J, Shen Y. 2007. Pixel fusion based curvelets and wavelets denoise algorithm. Engineering Letters 14: 130–134.
- McCarthy CL, Hancock NH, Raine SR. 2010. Applied machine vision of plants: a review with implications for field deployment in automated farming operations. Intelligent Service Robotics 3: 209–217.
- Minervini M, Scharr H, Tsaftaris SA. 2015. Image analysis: the new bottleneck in plant phenotyping [applications corner]. IEEE Signal Processing Magazine 32: 126–131.
- Munns R, Tester M. 2008. Mechanisms of salinity tolerance. Annual Review of Plant Biology 59: 651–681.
- Natonek E. 1998. Fast range image segmentation for servicing robots. In: Proceedings of the IEEE International Conference on Robotics and Automation, 1998. Piscataway: IEEE, 406–411.
- Nguyen CV, Fripp J, Lovell DR et al. 2016. 3D scanning system for automatic high-resolution plant phenotyping. In: Wee-Chung Liew A, Lovell B, Fookes C, Zhou J, Gao Y, Blumenstein M, Wang Z. eds. Proceedings of the International Conference on Digital Image Computing: Techniques and Applications (DICTA) 2016. Piscataway: IEEE, 1–8.
- Nguyen TT, Slaughter DC, Townsley BT, Carriedo L, Maloof JN, Sinha N. 2016. In-field plant phenotyping using multi-view reconstruction: an investigation in eggplant. In: Proceedings of the 13th International Conference on Precision Agriculture (unpaginated, online). Monticello, IL: International Society of Precision Agriculture.
- Omasa K, Hosoi F, Konishi A. 2007. 3D lidar imaging for detecting and understanding plant responses and canopy structure. Journal of Experimental Botany 58: 881–898.
- Paproki A, Sirault X, Berry S, Furbank R, Fripp J. 2012. A novel mesh processing based technique for 3D plant analysis. BMC Plant Biology 12: 63.
- Paulus S, Dupuis J, Mahlein A-K, Kuhlmann H. 2013. Surface feature based classification of plant organs from 3D laserscanned point clouds for plant phenotyping. BMC Bioinformatics 14: 238.
- Paulus S, Dupuis J, Riedel S, Kuhlmann H. 2014. Automated analysis of barley organs using 3D laser scanning: an approach for high throughput phenotyping. Sensors 14: 12670–12686.
- Pound MP, French AP, Murchie EH, Pridmore TP. 2014. Automated recovery of three-dimensional models of plant shoots from multiple color images. Plant Physiology 166: 1688–1698.
- Pound MP, French AP, Fozard JA, Murchie EH, Pridmore TP. 2016. A patch-based approach to 3D plant shoot phenotyping. Machine Vision and Applications 27: 767–779.
- Rabbani T, Van Den Heuvel F, Vosselmann G. 2006. Segmentation of point clouds using smoothness constraint. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 36: 248–253.
- Rajendran K, Tester M, Roy SJ. 2009. Quantifying the three main components of salinity tolerance in cereals. Plant, Cell & Environment 32: 237–249.
- Rauf S, Al-Khayri JM, Zaharieva M, Monneveux P, Khalil F. 2016. Breeding strategies to enhance drought tolerance in crops. In: Al-Khayri JM, Jain SM, Johnson DV. eds. Advances in plant breeding strategies: agronomic, abiotic and biotic stress traits. Cham: Springer, 397–445.
- Rose J, Paulus S, Kuhlmann H. 2015. Accuracy analysis of a multi-view stereo approach for phenotyping of tomato plants at the organ level. Sensors 15: 9651–9665.
- Rusu RB, Cousins S. 2011. 3D is here: Point Cloud Library (PCL). In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2011. Piscataway: IEEE, 1–4.
- Rusu RB, Marton ZC, Blodow N, Dolha M, Beetz M. 2008. Towards 3D point cloud based object maps for household environments. Robotics and Autonomous Systems 56: 927–941.
- Sappa AD, Devy M. 2001. Fast range image segmentation by an edge detection strategy. In: Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling 2001. Los Alamitos; Piscataway; Minato-ku: IEEE, 292–299.
- Schnurbusch T, Hayes J, Sutton T. 2010. Boron toxicity tolerance in wheat and barley: Australian perspectives. Breeding Science 60: 297–304.
- Shafiekhani A, Kadam S, Fritschi F, DeSouza G. 2017. Vinobot and Vinoculer: two robotic platforms for high-throughput field phenotyping. Sensors 17: 214.
- Sithole G, Vosselman G. 2003. Automatic structure detection in a point-cloud of an urban landscape. In: 2nd GRSS/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas, 2003. Piscataway: IEEE, 67–71.
- Wani MA, Arabnia HR. 2003. Parallel edge-region-based segmentation algorithm targeted at reconfigurable multiring network. Journal of Supercomputing 25: 43–62.
- Wu C. 2011. VisualSFM: a visual structure from motion system. http://www.cs.washington.edu/homes/ccwu/vsfm.
- Zehr BE, Dudley JW, Rufener GK. 1994. QTLs for degree of pollen-silk discordance, expression of disease lesion mimic, and leaf curl response to drought. Maize Genetics Cooperation Newsletter 68: 110–111.
- Zhang YH, Tang L, Liu XJ, Liu LL, Cao WX, Zhu Y. 2017. Modeling curve dynamics and spatial geometry characteristics of rice leaves. Journal of Integrative Agriculture 16: 2177–2190.