. 2025 Nov 19;14(22):3534. doi: 10.3390/plants14223534

Estimating Maize Leaf Area Index Using Multi-Source Features Derived from UAV Multispectral Imagery and Machine Learning Models

Hongyan Li 1,2,, Caixia Huang 1,2,*,, Yuze Zhang 1,2, Shuai Li 3, Yu Liu 1,2, Kui Yang 1,2, Junsheng Lu 3
Editor: Luca Vitale
PMCID: PMC12656271  PMID: 41304685

Abstract

Leaf area index (LAI) is a critical indicator of canopy architecture and physiological performance, serving as a key parameter for crop growth monitoring and management. Although UAV multispectral imagery provides rich spectral and spatial information, the limitations of single texture features for LAI estimation still require further exploration. To address this issue, this study developed a multi-source feature fusion framework that integrates vegetation indices (VIs), texture features (TFs), and texture indices (TIs) within a stacked ensemble approach combining Partial Least Squares Regression (PLSR) with Support Vector Machine (SVM), Random Forest (RF), and Gradient Boosting Decision Tree (GBDT) algorithms to estimate maize LAI. A field experiment was conducted under three planting densities (42,000, 63,000, and 84,000 plants ha−1) and four nitrogen rates (0, 80, 160, and 240 kg N ha−1) to assess the potential of UAV-based multispectral imagery for maize LAI estimation. When PLSR was combined with RF, SVM, and GBDT to estimate maize LAI using only VIs as input, the R2 values were 0.653, 0.697, and 0.634, and the RMSE values were 0.650, 0.608, and 0.668, respectively. After incorporating TFs, the R2 increased to 0.717, 0.794, and 0.801, and the RMSE decreased to 0.587, 0.500, and 0.492. Further inclusion of TIs raised the R2 to 0.789, 0.804, and 0.844, with RMSE values of 0.506, 0.489, and 0.436, respectively. Independent test set validation under contrasting conditions confirmed that the multi-model fusion framework (PLSR+GBDT) with multi-source feature fusion (VIs+TFs+TIs) effectively estimated LAI, achieving R2 values of 0.859 and 0.794. These results demonstrate that multi-source feature integration via machine learning enables robust and accurate estimation of maize LAI, providing a valuable tool for precision agriculture and crop growth monitoring.

Keywords: leaf area index, UAV multispectral imagery, vegetation indices, texture features, machine learning, maize

1. Introduction

Leaf area index (LAI), defined as the total one-sided leaf area per unit ground surface area, is a fundamental biophysical parameter reflecting canopy structure, light interception, and photosynthetic capacity [1]. It directly influences transpiration, biomass accumulation, and crop productivity, and serves as a critical indicator for assessing growth status and forecasting yield [2,3]. For maize (Zea mays L.), one of the most widely cultivated staple and economic crops worldwide [4], accurate and dynamic monitoring of LAI is essential to understand physiological and ecological processes [5], guide efficient water and nutrient management, and improve resource use efficiency under diverse agroecological conditions. Given the increasing demand for sustainable maize production and the growing emphasis on precision agriculture, timely and spatially explicit LAI information is indispensable for optimizing management practices and enhancing yield stability [6].

Traditional approaches for measuring LAI, including destructive sampling and laboratory analyses, are labor-intensive, time-consuming, and impractical for large-scale or continuous monitoring [7,8]. These limitations restrict their utility in modern agricultural systems, particularly when high-resolution temporal and spatial data are required for dynamic crop modeling and decision support. Consequently, non-destructive, high-throughput, and flexible methods for LAI estimation are of increasing importance in both research and operational contexts [9].

Unmanned aerial vehicle (UAV) remote sensing has emerged as a promising tool to overcome these challenges [10]. UAV platforms provide high spatial and temporal resolution, flexible deployment, and cost-effective monitoring [11], enabling detailed characterization of canopy attributes across diverse growth stages and heterogeneous field conditions. VIs are mathematical combinations of spectral reflectance values from different bands (e.g., red, near-infrared) designed to quantify vegetation characteristics (e.g., chlorophyll content, canopy cover) by amplifying spectral differences between vegetated and non-vegetated surfaces [12]. By capturing multispectral or hyperspectral images, UAVs allow the extraction of VIs such as normalized difference vegetation index (NDVI), green normalized difference vegetation index (GNDVI), and optimized soil-adjusted vegetation index (OSAVI), which are widely used to estimate canopy photosynthetic activity, vegetation cover, and LAI [13]. For example, Shu et al. demonstrated that maize LAI estimation based on UAV remote sensing data exhibited high accuracy and reliability across different growth stages, with an R2 value as high as 0.852 [14]. Hussain et al. estimated the LAI of sweet maize using vegetation index and machine learning techniques generated based on drone imagery, with an R2 ranging between 0.78–0.90 [15]. However, VIs often suffer from saturation effects under dense canopy conditions or at late growth stages [16], limiting their sensitivity to subtle structural changes and reducing their effectiveness in high-biomass crops [17]. This limitation highlights the need for complementary features that capture canopy structural heterogeneity and spatial complexity.

Texture analysis provides such complementary information by quantifying the spatial arrangement and inter-pixel relationships in remote sensing imagery. Texture features can describe leaf distribution, canopy gaps, and structural complexity, which cannot be fully captured by spectral indices alone. TFs are quantitative indicators that describe the spatial arrangement and pixel-to-pixel relationships in remote sensing images, which are mainly obtained by the grayscale co-occurrence matrix (GLCM) method [18]. Previous studies have demonstrated that integrating spectral and texture information can improve the accuracy of crop phenotypic estimation [17]. For instance, when estimating aboveground biomass of potatoes, Liu et al. introduced texture features based on input spectral features in the estimation of aboveground biomass of potatoes, and their R2 increased by 41.46% [19]. Despite these promising results, most studies focus on two-dimensional or single-band texture features, leaving the potential of multidimensional texture indices largely untapped.

Multidimensional texture features derived from GLCM can integrate multiple statistical parameters such as mean, variance, contrast, correlation, and entropy across spectral bands to form two- or three-dimensional indices. These indices provide richer representations of canopy structure and spatial heterogeneity than single-feature metrics [20]. They also improve robustness to soil background effects, illumination changes, shadowing, and variations in sensor viewing geometry. Prior research has shown that multidimensional texture indices can enhance correlations with LAI compared with single-feature approaches, demonstrating their potential for high-accuracy canopy characterization [21]. Tang et al. used vegetation indices, texture features, and 3D texture indices as inputs to an XGBoost model for LAI inversion of winter oilseed rape and obtained the highest estimation accuracy [22]: the coefficient of determination (R2) of the validation set was 0.882, the root mean square error (RMSE) was 0.204 cm2 cm−2, and the mean relative error (MRE) was 6.498%, providing an effective method for UAV-based multispectral monitoring of LAI in winter oilseed rape.

Beyond feature extraction, robust modeling approaches are essential for translating high-dimensional UAV data into accurate LAI estimates. In recent years, machine learning methods have achieved significant results in crop phenotypic parameter estimation [23]. For example, Wang et al. used machine learning to estimate the moisture content of maize leaves and found that the RF model consistently outperformed the Multiple Linear Regression and Ridge Regression models throughout the growing season [24]. Miao et al. employed multi-source datasets, the lasso algorithm, and various machine learning approaches to predict maize yield at the county level in China. Their results indicated that machine learning methods outperformed the lasso algorithm, with RF, GBDT, and SVM emerging as the most effective models for maize yield prediction (R2 ≥ 0.75, RMSE = 824–875 kg ha−1, MAE = 626–651 kg ha−1) [25]. Machine learning algorithms, including SVM, RF, GBDT, and ensemble methods, offer flexible nonlinear mapping capabilities to capture complex relationships among spectral, texture, and multidimensional features [26]. Coupling PLSR with these machine learning models can further improve robustness by extracting latent variables, mitigating multicollinearity, and reducing noise, while the machine learning components model nonlinear residuals. Hybrid approaches, such as PLSR–GBDT or PLSR–Gaussian process models, have consistently outperformed single-model methods in LAI estimation across crops and growth conditions, demonstrating superior predictive accuracy and generalization.

Building on these insights, the present study proposes a comprehensive multi-source feature fusion framework for UAV-based maize LAI estimation, combining VIs, TFs, and TIs within a stacking ensemble. 3D TIs capture fine-scale structural heterogeneity that correlates with LAI even at high planting densities, while stacking integrates PLSR and machine learning models to leverage both spectral and structural synergies. By focusing on maize-specific challenges, this framework advances prior work by explicitly targeting VI saturation and illumination robustness, which are critical for accurate LAI estimation across the growing season. The framework enables dynamic monitoring of maize LAI, provides a reliable basis for precision water and nutrient management, and supports resource-efficient and climate-smart agricultural practices. Moreover, by combining spectral, spatial, and multidimensional information within a robust modeling framework, this study advances methodological capabilities for high-throughput crop phenotyping and contributes to the development of practical UAV-based monitoring solutions for maize production.

2. Materials and Methods

2.1. Study Site

Field experiments were conducted in 2024 at the Dryland Agriculture Experimental Station, located in Yuzhong County (104°09′ E, 35°56′ N; altitude 1749 m), Gansu Province, China. The station lies in a typical semi-arid climate zone, with an annual mean evaporation of approximately 1450 mm and an annual mean precipitation of 327 mm, falling mainly from July to September. The annual mean temperature is 7.6 °C, and annual sunshine duration ranges from 1626 to 2666 h (Figure 1).

Figure 1.

Figure 1

Overview of the study area and experimental design. D1, D2 and D3 represent low, medium and high planting densities, respectively, while N0, N1, N2 and N3 represent nitrogen application rates of 0, 80, 160 and 240 kg N ha−1, respectively.

2.2. Experiment Design

A field experiment was conducted with three planting densities, D1 (42,000 plants ha−1), D2 (63,000 plants ha−1), and D3 (84,000 plants ha−1), and four nitrogen application rates, N0 (0 kg N ha−1), N1 (80 kg N ha−1), N2 (160 kg N ha−1), and N3 (240 kg N ha−1) (Table 1). The three planting densities were achieved by adjusting row spacing and, combined with the four nitrogen levels, produced 12 treatment combinations. Each treatment was replicated three times, totaling 36 plots. Within each block, the 12 treatment plots were randomly assigned positions to minimize systematic biases from plot location. A 2 m buffer zone was set around the experimental area, with 1 m isolation strips between adjacent plots. Maize was sown on 25 April 2024 and harvested on 27 September 2024. Irrigation was applied based on reference crop evapotranspiration (ET0). Other management practices followed local recommendations.

Table 1.

Details of different nitrogen application levels and planting densities.

Treatments Planting Density (Plants ha−1) Nitrogen Levels (kg N ha−1)
D1N0 42,000 0
D1N1 42,000 80
D1N2 42,000 160
D1N3 42,000 240
D2N0 63,000 0
D2N1 63,000 80
D2N2 63,000 160
D2N3 63,000 240
D3N0 84,000 0
D3N1 84,000 80
D3N2 84,000 160
D3N3 84,000 240

Note: D1, D2 and D3 represent low, medium and high planting densities, respectively, while N0, N1, N2 and N3 represent nitrogen application rates of 0, 80, 160 and 240 kg N ha−1, respectively.

2.3. UAV Image Acquisition and Preprocessing

Multispectral images of maize were acquired using a DJI Phantom 4 Multispectral UAV (DJI Innovations Science and Technology Co., Ltd., Shenzhen, Guangdong, China). The UAV is equipped with six CMOS sensors, including one RGB sensor for visible imaging and five monochrome sensors for multispectral imaging, covering blue (450 nm, B), green (560 nm, G), red (650 nm, R), red-edge (730 nm, RE), and near-infrared (840 nm, NIR) bands. To avoid the influence of crop shadow on remote sensing data and ensure the reliability and accuracy of the image data, flights were conducted at 60 m above ground under clear, cloud-free conditions between 11:30 and 13:30 local time. The camera was oriented nadir, with fixed flight paths and 85% forward and side overlap. Whiteboard calibration was conducted prior to image acquisition. UAV multispectral images were acquired at five time points: 25 June, 11 July, 25 July, 13 August, and 22 August 2024. Pix4Dmapper 4.8.6 software was used for image stitching, radiometric correction, and orthomosaic generation. The main workflow included importing raw data, calibrating reflectance using a white reference plate, generating dense point clouds, and producing a five-band multispectral orthomosaic. ArcMap 10.8 was used for Region of Interest (ROI) extraction: for each plot, an ROI was selected (avoiding plot edges) and the mean reflectance of the five bands was extracted. In addition, Python 3.9.12 was used for feature calculation and data preprocessing.

VIs were calculated from the UAV multispectral imagery. The normalized difference vegetation index (NDVI) was employed to minimize soil background effects. Python was used to extract reflectance values from the five bands for all 36 plots, and 11 vegetation indices were calculated for maize LAI estimation (Table 2). TFs, reflecting spatial patterns and homogeneity within images, were computed using the GLCM method. Eight texture metrics were extracted from each band: Mean, Variance, Homogeneity, Contrast, Dissimilarity, Entropy, Energy, and Correlation, yielding a total of 40 texture features per plot. A 4 × 4 pixel window was applied with default spatial offsets (X = 1, Y = 1).

Table 2.

The 11 VIs selected for this study along with their calculation formulas.

Abbreviation Full Name Formula Source
GNDVI Green Normalized Difference Vegetation Index (NIR − G)/(NIR+G) [27]
NDVI Normalized Difference Vegetation Index (NIR − R)/(NIR+R) [28]
DVI Difference Vegetation Index NIR − R [29]
RVI Ratio Vegetation Index NIR/R [30]
OSAVI Optimized Soil Adjusted Vegetation Index 1.16(NIR − R)/(NIR+R+0.16) [31]
SAVI Soil Adjusted Vegetation Index 1.5(NIR − R)/(NIR+R+0.5) [32]
EVI Enhanced Vegetation Index 2.5(NIR − R)/(NIR+6R − 7.5B+1) [33]
VARI Visible Atmospherically Resistant Index (G − R)/(G+R − B) [34]
EXG Excess Green Index 2G − R − B [35]
EXR Excess Red Index 1.4R − G [36]
CIRE Red Edge Chlorophyll Index (NIR/RE) − 1 [37]
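As an illustration, the indices in Table 2 and the GLCM texture metrics described above can be computed per plot from band reflectance. The sketch below is a minimal NumPy implementation under stated assumptions: the function names and the gray-level quantization are illustrative, only a subset of the indices is shown, and the (1, 1) offset follows the X = 1, Y = 1 setting in the text.

```python
import numpy as np

def vegetation_indices(nir, r, g):
    """A subset of the VIs in Table 2, from mean band reflectance."""
    return {"NDVI": (nir - r) / (nir + r),
            "GNDVI": (nir - g) / (nir + g),
            "OSAVI": 1.16 * (nir - r) / (nir + r + 0.16),
            "SAVI": 1.5 * (nir - r) / (nir + r + 0.5)}

def glcm_features(band, levels=32, dx=1, dy=1):
    """GLCM texture metrics for one band of a plot ROI. Reflectance is
    quantized to `levels` gray levels; (dx, dy) = (1, 1) matches the
    X = 1, Y = 1 offset described in the text."""
    q = np.digitize(band, np.linspace(band.min(), band.max(), levels)) - 1
    q = np.clip(q, 0, levels - 1)
    m = np.zeros((levels, levels))
    # count co-occurrences of gray levels at the given offset
    np.add.at(m, (q[:-dy, :-dx].ravel(), q[dy:, dx:].ravel()), 1)
    p = m / m.sum()                       # normalized co-occurrence matrix
    i, j = np.indices(p.shape)
    pi = p.sum(axis=1)                    # marginal distribution of levels
    mean = float(np.sum(np.arange(levels) * pi))
    return {"mean": mean,
            "variance": float(np.sum((np.arange(levels) - mean) ** 2 * pi)),
            "contrast": float(np.sum(p * (i - j) ** 2)),
            "dissimilarity": float(np.sum(p * np.abs(i - j))),
            "homogeneity": float(np.sum(p / (1.0 + (i - j) ** 2))),
            "energy": float(np.sqrt(np.sum(p ** 2))),
            "entropy": float(-np.sum(p[p > 0] * np.log(p[p > 0])))}
```

In practice these functions would be applied to each plot ROI cropped from the orthomosaic; libraries such as scikit-image provide equivalent GLCM routines.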

To enhance stability and information content, the TFs showing high Pearson correlations with LAI were combined across bands to construct 10 multispectral TIs, comprising three types of 2D indices (normalized difference texture index, NDTI; difference texture index, DTI; ratio texture index, RTI) and three types of 3D indices (NDTTI, DTTI, and RTTI). The formulas follow Tang et al. [22].

2D texture indices were defined as:

NDTI = (T1 − T2)/(T1 + T2) (1)
DTI = T1 − T2 (2)
RTI = T1/T2 (3)

3D texture indices were defined as:

NDTTI = (T1 − T2 − T3)/(T1 + T2 + T3) (4)
DTTI = T1 − T2 − T3 (5)
RTTI = T1/(T2 × T3) (6)

where T1, T2, and T3 are texture values from any selected bands.
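The six index families can be expressed compactly in code. In the sketch below, the 2D forms follow Equations (1)-(3); the exact 3D forms are an assumption patterned on the 2D ones (following Tang et al. [22]), since the operators were garbled in the source typesetting, and the function name is illustrative.

```python
import numpy as np

def texture_indices(t1, t2, t3=None):
    """2D texture indices per Equations (1)-(3); the 3D forms are an
    assumed extension of the same pattern (see note above)."""
    out = {"NDTI": (t1 - t2) / (t1 + t2),
           "DTI": t1 - t2,
           "RTI": t1 / t2}
    if t3 is not None:
        out.update({"NDTTI": (t1 - t2 - t3) / (t1 + t2 + t3),
                    "DTTI": t1 - t2 - t3,
                    "RTTI": t1 / (t2 * t3)})
    return out
```

Here t1, t2, and t3 are per-plot arrays of texture values from the selected bands.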

2.4. LAI Measurements

Simultaneously with the acquisition of UAV-based multispectral imagery, maize canopy LAI was measured using an LAI-2200C plant canopy analyzer (LI-COR, Inc., Lincoln, NE, USA). During measurements, direct sunlight was avoided by having the operator rotate 180° to face away from the sun. On the shaded side, one reference reading of sky light was taken first, after which the sensor was placed horizontally at the base of the maize plants to collect four below-canopy target readings; these four readings together yielded one LAI measurement. For each plot, three such replicate measurements were taken, and their average was used to represent the LAI of that plot.

2.5. Model Construction

PLSR is a multivariate statistical method that integrates the strengths of multiple linear regression, canonical correlation analysis, and principal component analysis. It is particularly effective for constructing prediction models from highly collinear variables, as it mitigates multicollinearity while efficiently handling high-dimensional data by reducing dimensionality without losing key information.

SVM, proposed by Vapnik et al. in 1995, is a supervised learning algorithm based on structural risk minimization [38]. SVM excels at modeling nonlinear relationships and small-sample datasets and can be applied to both regression and classification tasks.

RF is an ensemble learning algorithm that constructs multiple decision trees through bootstrap sampling and random feature selection, then aggregates their predictions to enhance accuracy. RF is robust against outliers and noise and is particularly suitable for datasets with nonlinear relationships and complex feature structures.

GBDT, introduced by Friedman in 1999, is an iterative boosting algorithm that sequentially builds decision trees based on the residuals of previous models, gradually improving predictive performance. Its effectiveness depends mainly on the number of base trees and the maximum depth of each tree.

In this study, models combining PLSR with SVM, RF, and GBDT were established for maize LAI estimation. PLSR extracts latent variables that maximize the covariance between features and LAI, mitigating multicollinearity in high-dimensional data. SVM maps data into a high-dimensional space, RF reduces variance through bagging, and GBDT iteratively optimizes residuals. A Lasso regression meta-model integrates the meta-features generated by k-fold cross-validation, forming a robust framework that combines the linear feature extraction of PLSR, the nonlinear fitting ability of machine learning, and the improved generalization of model stacking. This approach provides accurate and consistent LAI prediction across different canopy change patterns. The detailed steps are as follows: First, PLSR is fitted to the training data to generate PLSR predictions. At the same time, SVM/RF/GBDT is fitted to the same training data to generate its predictions. Then, the predictions of PLSR and SVM/RF/GBDT are used as new features (meta-features) to construct a new dataset. Finally, a meta-model (Lasso regression) is trained on the new dataset to fuse the predictions of PLSR and SVM/RF/GBDT. Lasso regression, through its L1 regularization, automatically performs feature selection by shrinking the coefficients of unimportant predictors to zero, which simplifies the final stacked model, prevents overfitting, and enhances interpretability. By learning from the predictions of the two base models, the meta-model integrates their advantages and improves the generalization ability of the overall model.

To ensure the generalizability of the models, this study adopted a rigorous nested cross-validation strategy. The specific implementation procedure is as follows: the outer layer employed repeated K-fold cross-validation to evaluate the final performance (PLSR model: 5-fold × 2 repeats; other models: 10-fold), while the inner layer performed systematic optimization of the hyperparameter space based on grid search. The tuning ranges for each model were as follows: for GBDT, five parameters including learning rate (0.05–0.2), maximum tree depth (3–5), and minimum samples per leaf node (2–5) were evaluated across 108 combinations; for Random Forest (RF), five parameters including the number of trees (100–300) and maximum feature ratio (0.3–0.8) were assessed across 324 combinations; for Support Vector Machine (SVM), four parameters including penalty coefficient C (0.1–100), insensitivity band ε (0.01–1), and kernel parameter γ were optimized across 80 combinations. All hyperparameter tuning aimed to minimize the Root Mean Square Error (RMSE), resulting in over 4500 model fittings. In addition, Pearson correlation-based feature selection for TFs and TIs was performed within each training subset during cross-validation, ensuring that no test data were involved in this step and thus preventing information leakage. This comprehensive framework ensures both the reliability of hyperparameter selection and the robustness of the final model performance.
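The nested cross-validation scheme can be illustrated as below, again assuming scikit-learn. The grid covers only a subset of the reported tuning ranges, the fold counts are parameters rather than the fixed 10-fold/5-fold setup, and the function name is an assumption.

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

def nested_cv_rmse(X, y, inner_splits=5, outer_splits=10):
    """Nested CV: the inner grid search tunes GBDT to minimize RMSE,
    the outer k-fold scores the tuned model on held-out folds. The
    grid is a reduced illustration of the ranges given in the text."""
    grid = {"learning_rate": [0.05, 0.1, 0.2],
            "max_depth": [3, 4, 5],
            "min_samples_leaf": [2, 5]}
    inner = GridSearchCV(GradientBoostingRegressor(random_state=0), grid,
                         cv=KFold(inner_splits, shuffle=True, random_state=0),
                         scoring="neg_root_mean_squared_error")
    outer = KFold(outer_splits, shuffle=True, random_state=0)
    scores = cross_val_score(inner, X, y, cv=outer,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()               # mean RMSE across outer folds
```

Because the grid search runs inside each outer training fold, the outer score is never contaminated by the tuning data, matching the leakage-prevention rationale in the text.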

2.6. Sample Set Division and Model Evaluation

First, the 180 LAI samples were sorted in ascending order; one-third (n = 60, 33.33% of the total) were selected as the validation set and the remaining two-thirds (n = 120, 66.67%) as the modeling set, ensuring that the training and validation sets were distributed consistently across the LAI range and treatment combinations. Hyperparameter optimization was performed using repeated cross-validation within the training set to reliably evaluate model variance and reduce the risk of overfitting. Model performance was evaluated using the coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE). Figure 2 presents the workflow and data processing steps of the study. A higher R2 together with lower RMSE and MAE indicates better model stability and stronger predictive ability.
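One plausible implementation of this rank-ordered split (the exact assignment rule is not specified in the text) is to sort by LAI and send every third sample to the validation set:

```python
import numpy as np

def rank_split(lai):
    """Sort samples by LAI and assign every third sample (by rank) to
    the validation set, so both subsets span the full LAI range."""
    order = np.argsort(lai)
    val_idx = order[1::3]               # 1/3 of samples, spread over ranks
    train_idx = np.setdiff1d(order, val_idx)
    return train_idx, val_idx
```

For 180 samples this yields 120 modeling and 60 validation samples with matching distributions across the LAI domain.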

R2 = 1 − Σi=1..n (xi − yi)2 / Σi=1..n (xi − y¯)2 (7)
RMSE = √( Σi=1..n (xi − yi)2 / n ) (8)
MAE = (1/n) Σi=1..n |xi − yi| (9)

where xi is the measured LAI value, yi is the predicted LAI value, y¯ is the mean of the measured LAI values, and n is the number of samples in the test set.
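Equations (7)-(9) translate directly into code; a minimal NumPy version (function names are illustrative):

```python
import numpy as np

def r2(x, y):
    """Coefficient of determination (Eq. 7): x measured, y predicted."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return 1 - np.sum((x - y) ** 2) / np.sum((x - x.mean()) ** 2)

def rmse(x, y):
    """Root mean square error (Eq. 8)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sqrt(np.mean((x - y) ** 2)))

def mae(x, y):
    """Mean absolute error (Eq. 9)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.mean(np.abs(x - y)))
```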

Figure 2.

Figure 2

Workflow of this study: data collection, indicators extraction, model building and main results.

2.7. Independent Test Set Validation

To validate the aforementioned models, this study established experiments with different irrigation and nitrogen application levels. Three nitrogen application rates were implemented: 0, 160, and 240 kg N ha−1. Three irrigation treatments were designed: full drip irrigation (FI), deficit drip irrigation (75% DI), and rainfed (RF). For the FI and DI treatments, irrigation amounts were determined based on soil properties and maize root depth. Specifically, the irrigation thresholds were set at 60% and 90% of field capacity (0.25 cm3 cm−3), with a wetted soil depth of 60 cm (the primary maize root zone) and a target wetted soil volume ratio of 80%. The calculated available soil water capacity was 30.2 mm, leading to irrigation amounts of 30 mm for FI and 22.5 mm for DI. Irrigation timing was determined by integrating cumulative crop evapotranspiration (ETc = Kc × ET0, where Kc was the field-measured crop coefficient and ET0 was estimated using the Penman–Monteith equation) with 3-day rainfall forecasts. Irrigation was triggered when the accumulated ETc reached the irrigation amount and no rainfall greater than 10 mm was forecast. UAV multispectral image acquisition was carried out at four time points on 25 June, 11 July, 25 July and 22 August 2024. Field sampling dates coincided precisely with the UAV data collection missions.
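The irrigation-trigger rule (irrigate when accumulated ETc reaches the dose and no rainfall greater than 10 mm is forecast over the next 3 days) can be sketched as follows; the daily-series structure and the function name are illustrative assumptions, not the station's actual scheduling code.

```python
def irrigation_schedule(kc, et0, rain_forecast, dose_mm=30.0):
    """Accumulate daily ETc = Kc * ET0 and record an irrigation event
    (resetting the balance) once the accumulated ETc reaches the dose
    and no rain > 10 mm is forecast within the next 3 days."""
    events, balance = [], 0.0
    for day, (k, e) in enumerate(zip(kc, et0)):
        balance += k * e                          # cumulative crop ET (mm)
        rain3 = max(rain_forecast[day + 1:day + 4], default=0.0)
        if balance >= dose_mm and rain3 <= 10.0:
            events.append(day)
            balance = 0.0
    return events
```

With dose_mm = 30.0 this mirrors the FI treatment; dose_mm = 22.5 would mirror DI.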

3. Results

3.1. Field Maize LAI

The dynamic changes in maize LAI over the growing season are shown in Figure 3. Overall, LAI first increased and then decreased as the growth period progressed. There were significant differences in LAI among the three planting densities and four nitrogen application levels, with the maximum LAI reached under the D3N2 treatment.

Figure 3.

Figure 3

Effects of different planting densities and nitrogen application levels on maize LAI. The labels 1, 2, 3, 4, and 5 on the horizontal axis represent the seedling stage, jointing stage, tasseling stage, early grain-filling stage, and late grain-filling stage, respectively.

The statistical characteristics of LAI for both the modeling and validation datasets are presented in Figure 4. In the modeling set, LAI ranged from 0.77 to 5.26, while in the validation set, it ranged from 0.80 to 5.28. Overall, the observed LAI values covered a broad and representative range, providing a robust foundation for evaluating and comparing different LAI inversion approaches.

Figure 4.

Figure 4

Statistical characteristics of measured leaf area index (LAI) for the modeling set (n = 120) and validation set (n = 60) used in the study. The asterisks in the figure represent individual LAI data points of the modeling set and validation set, respectively. The minimum, maximum, and mean values of the modeling set are 0.77, 5.26, and 2.82, respectively; while those of the validation set are 0.80, 5.28, and 2.88, respectively. There are no apparent significant differences in LAI between the modeling set and validation set.

3.2. Correlation Between LAI and VIs

Correlation analysis was conducted between the 180 sampled LAI values and 11 VIs listed in Table 2. The results are shown in Figure 5. Most VIs showed significant correlations with maize LAI (p < 0.05). Among them, the Green Normalized Difference Vegetation Index (GNDVI) exhibited the highest correlation coefficient of 0.712. The high correlation between GNDVI and LAI stems from its unique physiological spectral response mechanism. The green band, located at the chlorophyll reflection peak, is highly sensitive to changes in leaf chlorophyll content, while the near-infrared (NIR) band reflects canopy structural information. The synergistic use of these two bands not only effectively mitigates the saturation issue exhibited by traditional red-light-based indices under high LAI conditions but also amplifies the combined response of chlorophyll and canopy structure through ratio-based calculation. This dual sensitivity to both chlorophyll content and canopy structure enables GNDVI to more reliably quantify maize growth dynamics across scales—from individual leaves to canopy-level—while maintaining a linear response particularly in medium-to-high-density canopies with LAI values between 2 and 5. Consequently, the selected VIs inputs for model construction included OSAVI, SAVI, DVI, GNDVI, RVI, EVI, NDVI, CIRE, EXR, and VARI.

Figure 5.

Figure 5

Correlation analysis between LAI and VIs. The Pearson correlation coefficient (r) between the variables is shown in the figure. The circular sector area in each grid indicates the strength of the correlation (the size of |r|). The direction of the fan represents a positive and negative correlation (e.g., padding clockwise from the right side represents a positive correlation, and padding counterclockwise from the left side represents a negative correlation). A complete circle on the diagonal indicates that the variable is completely positively correlated with itself (r = +1.00).

3.3. Correlation Between LAI and TFs

The correlations between LAI and 40 GLCM-based TFs are shown in Figure 6. Most TFs were significantly correlated with LAI (p < 0.05). The highest correlation was observed for the variance (var) of the B band, with a coefficient of −0.702. This strong correlation may be attributed to the strong absorption of blue light by chlorophyll, which makes blue-band reflectance highly sensitive to leaf density and canopy structure. By quantifying the spatial variability of blue-band reflectance, the var metric effectively captures the heterogeneity of canopy leaf distribution. As LAI increases, canopy leaves become more densely packed and multi-layered, significantly enhancing local fluctuations (variance) in blue-band reflectance. Therefore, the TFs selected for modeling included: B-band var, mean, contrast (con), dissimilarity (dis), and correlation (cor); R-band var, con, and dis; NIR-band cor; and G-band con.

Figure 6.

Figure 6

Pearson correlation coefficients between maize LAI and 40 TFs derived from 5 multispectral bands (B: blue, G: green, R: red, Reg: red-edge, NIR: near-infrared). TFs include mean (mean), variance (var), homogeneity (hom), contrast (con), dissimilarity (dis), entropy (et), energy (sem), and correlation (cor).

3.4. Correlation Between LAI and TIs

Based on random combinations of TFs, three two-dimensional (2D) texture indices and three three-dimensional (3D) texture indices were constructed. Their correlations with LAI are summarized in Table 3. Most combined TIs exhibited significant correlations with LAI (p < 0.05), and 3D indices generally showed higher correlations than 2D indices. Among them, the NDTTI composed of B_con, B_dis, and R_dis exhibited the highest correlation with LAI (−0.789), while the NDTI composed of B_con and R_dis was the strongest 2D index (−0.777). The similar correlation coefficients of NDTTI2–NDTTI6 with LAI may reflect the dominant effect of B_var, which appears in all five of those combinations. Three-dimensional texture indices correlate more strongly with LAI than two-dimensional ones primarily because their multi-band combinations simultaneously capture spatial heterogeneity in both the horizontal and vertical dimensions of the canopy. These 3D indices not only quantify the arrangement patterns of leaves in a two-dimensional plane but also reflect gradient variations in leaf density and geometric structure across vertical canopy layers, thereby providing a more comprehensive characterization of the three-dimensional structural complexity driven by LAI. LAI was negatively correlated with the texture indices mainly because the dense leaf distribution of high-LAI canopies produces a uniform grayscale distribution among pixels, reducing the texture values, whereas low-LAI canopies expose the soil background and leaf gaps, enhancing grayscale differences between pixels and increasing the texture values. Accordingly, 10 TIs were selected as input features for model construction.

Table 3.

Correlation analysis of LAI with TIs.

Type Texture Combination Correlation Coefficient
NDTTI1 B_con, B_dis, R_dis −0.789
NDTI1 B_con, R_dis −0.777
NDTI2 B_con, B_dis −0.765
RTI1 B_con, R_dis −0.749
RTTI1 B_con, NIR_cor, R_dis −0.748
NDTTI2 B_var, NIR_cor, G_con −0.745
NDTTI3 B_var, R_con, NIR_cor −0.745
NDTTI4 B_var, R_var, NIR_cor −0.744
NDTTI5 B_var, B_con, NIR_cor −0.744
NDTTI6 B_var, G_con, B_cor −0.744
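The exact formulas for the indices in Table 3 are not reproduced in this excerpt; the sketch below assumes the commonly used normalized-difference and ratio forms for the 2D indices (NDTI, RTI) and one plausible three-feature extension for the 3D NDTTI, so the specific functional forms are assumptions, not the authors' definitions.

```python
import numpy as np

def ndti(t1, t2):
    """2D normalized-difference texture index: (T1 - T2) / (T1 + T2)."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    return (t1 - t2) / (t1 + t2)

def rti(t1, t2):
    """2D ratio texture index: T1 / T2."""
    return np.asarray(t1, float) / np.asarray(t2, float)

def ndtti(t1, t2, t3):
    """3D texture index over three texture features; one plausible
    normalized-difference form: (2*T1 - T2 - T3) / (2*T1 + T2 + T3)."""
    t1, t2, t3 = (np.asarray(t, float) for t in (t1, t2, t3))
    return (2 * t1 - t2 - t3) / (2 * t1 + t2 + t3)

# e.g. NDTI1-style combination of two texture features
print(ndti(3.0, 1.0))  # -> 0.5
```

The inputs would be per-plot texture feature values (e.g., B_con, B_dis, R_dis) extracted from the corresponding bands.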

3.5. Model Construction and Validation of Maize LAI Estimation

Based on the selected input features, maize LAI estimation models were constructed by coupling PLSR with three widely used machine learning algorithms: SVM, RF, and GBDT. The performance of each model was evaluated using both training and validation datasets to assess predictive capability and generalization.

3.5.1. LAI Estimation of Maize Based on PLSR+SVM Method

When VIs were used as the sole input, the model exhibited only moderate performance, with R2 values of 0.653 (training) and 0.592 (validation), indicating relatively large estimation errors (Figure 7a). This limitation can be attributed to the sensitivity of VIs to growth stages and environmental conditions, as well as their saturation at high LAI levels. The inclusion of TFs markedly enhanced model performance, raising R2 to 0.717 (training) and 0.741 (validation). Compared with VIs alone, RMSE was reduced by 9.69% and 20.47% for the training and validation sets, respectively, while MAE decreased by 13.04% and 27.57% (Figure 7b). These improvements highlight that spatial structural information provided by TFs complements spectral indices in representing canopy characteristics.

Figure 7. Maize LAI estimated by the PLSR+SVM method: (a) VIs as input; (b) VIs and TFs as inputs; (c) VIs, TFs, and multidimensional TIs as inputs.

Further improvement was achieved by integrating VIs, TFs, and TIs. This multi-source fusion achieved the highest predictive accuracy, with R2 values of 0.789 (training) and 0.786 (validation). Relative to the VIs+TFs model, the fusion further reduced RMSE by 13.80% (training) and 9.01% (validation), and MAE by 20.42% (training) and 10.84% (validation) (Figure 7c). These results demonstrate the synergistic effect of combining spectral, structural, and texture-index information in enhancing LAI estimation accuracy.
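The relative improvements quoted throughout this section are plain percentage reductions of an error metric. As a check, the 9.69% training RMSE drop for PLSR+SVM corresponds to the reported RMSE values of 0.650 (VIs only) and 0.587 (VIs+TFs):

```python
def pct_reduction(old, new):
    """Relative reduction of an error metric, in percent."""
    return 100.0 * (old - new) / old

# Training RMSE of PLSR+SVM: 0.650 (VIs only) -> 0.587 (VIs+TFs)
print(round(pct_reduction(0.650, 0.587), 2))  # -> 9.69
```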

3.5.2. LAI Estimation of Maize Based on PLSR+RF Method

When VIs were used exclusively, the model exhibited limited predictive capability, with R2 values of 0.697 (training) and 0.546 (validation) (Figure 8a). Incorporating TFs markedly improved performance, raising R2 to 0.794 (training) and 0.759 (validation). Compared with VIs alone, RMSE decreased by 17.76% and 27.05% and MAE by 19.68% and 34.89% for the training and validation sets (Figure 8b), respectively, underscoring the importance of spatial heterogeneity information in accurate LAI estimation.

Figure 8. Maize LAI estimated by the PLSR+RF method: (a) VIs as input; (b) VIs and TFs as inputs; (c) VIs, TFs, and multidimensional TIs as inputs.

Further improvement was achieved by integrating VIs, TFs, and TIs. This combined model yielded the highest accuracy, with R2 values of 0.804 (training) and 0.802 (validation). Relative to the VIs+TFs model, RMSE was reduced by 2.20% (training) and 4.95% (validation), while MAE decreased by 9.51% (training) and 10.51% (validation) (Figure 8c). These results highlight the synergistic effect of complementary spectral, structural, and texture-index features in enhancing LAI estimation.

3.5.3. LAI Estimation of Maize Based on PLSR+GBDT Method

Models based solely on VIs exhibited the lowest performance, with R2 = 0.634 (training) and 0.530 (validation) (Figure 9a). Incorporating TFs substantially improved performance, increasing R2 to 0.801 (training) and 0.760 (validation). Compared with VIs alone, RMSE decreased by 17.76% and 27.05% and MAE by 19.68% and 34.89% for the training and validation sets (Figure 9b), respectively, highlighting the importance of feature diversity for robust prediction. The fully integrated model combining VIs, TFs, and TIs achieved the highest accuracy, with R2 of 0.844 (training) and 0.812 (validation). Relative to the VIs+TFs model, RMSE was reduced by 2.20% (training) and 4.95% (validation), while MAE decreased by 9.51% (training) and 10.51% (validation), demonstrating that multi-source feature fusion effectively captures the complex nonlinear relationships between canopy spectral–spatial traits and LAI (Figure 9c).

Figure 9. Maize LAI estimated by the PLSR+GBDT method: (a) VIs as input; (b) VIs and TFs as inputs; (c) VIs, TFs, and multidimensional TIs as inputs.

Overall, models based on single or simple features were constrained by limited information dimensionality and could not adequately capture the structural complexity of maize canopies. In contrast, multi-source feature fusion markedly enhanced estimation accuracy, underscoring its potential as a robust framework for UAV-based maize LAI retrieval. Among the tested approaches, the PLSR+GBDT model consistently outperformed the others. Its superior performance can be attributed to the gradient boosting mechanism, which iteratively minimizes residual errors; the dimensionality reduction capability of PLSR, which alleviates multicollinearity; and the meta-model integration, which strengthens generalization. By comparison, SVM relies on kernel functions to handle nonlinear relationships but is prone to overfitting in high-dimensional spaces, while RF reduces variance through bagging but has a more limited ability to dynamically adjust feature importance than GBDT.

Quantitatively, PLSR+GBDT achieved consistent and significant improvements in LAI estimation across all feature combinations. With VIs alone, its performance was slightly inferior to PLSR+SVM and PLSR+RF, reflecting the limitations of single-source input. However, after incorporating TFs (VIs+TFs), PLSR+GBDT exhibited clear advantages, increasing R2 by 11.71% (training) and 2.56% (validation) compared with PLSR+SVM, and by 0.88% and 0.13% compared with PLSR+RF. With the full feature set (VIs+TFs+TIs), PLSR+GBDT further increased R2 by 6.97% (training) and 3.31% (validation) over PLSR+SVM, and by 4.98% and 1.25% over PLSR+RF. These results demonstrate that PLSR+GBDT effectively exploits multi-source feature fusion to capture the complex nonlinear relationships between canopy traits and LAI, delivering superior accuracy and generalization relative to the other methods.

3.5.4. Results of Independent Test Set Validation

To rigorously evaluate the generalizability of the multi-model fusion framework, an independent test set derived from a separate field experiment was used for external validation. This experiment was conducted under distinct environmental conditions and management practices, providing a robust assessment of model transferability. When only VIs were used as inputs, the model achieved an R2 of 0.658 on the training set and 0.569 on the validation set (Figure 10a). Incorporating TFs improved performance, yielding R2 values of 0.778 (training) and 0.619 (validation) (Figure 10b). Using VIs+TFs+TIs as inputs produced the best-performing model, with R2 of 0.859 (training) and 0.794 (validation) (Figure 10c). The test-set results were consistent with those from the modeling set, demonstrating that PLSR+GBDT with multi-source feature fusion can effectively estimate LAI.

Figure 10. Maize LAI in the test set estimated by the PLSR+GBDT method: (a) VIs as input; (b) VIs and TFs as inputs; (c) VIs, TFs, and multidimensional TIs as inputs.

Comparison of model performance indicated that the PLSR+GBDT model was the most effective for maize LAI estimation using UAV multispectral imagery. Based on this model, the distribution map of LAI in the early stage of maize filling was plotted (Figure 11). High LAI values (>4.5) were concentrated in plots with D3 (84,000 plants ha−1) and N2 (160 kg N ha−1), which was consistent with the in situ measurement results. Low LAI values (<2.5) were found in plots with D1 (42,000 plants ha−1) and N0 (0 kg N ha−1). This spatial pattern reflected the combined effects of planting density and nitrogen application rate, demonstrating the utility of the model for field-scale LAI monitoring. Overall, the integration of vegetation indices, texture features, and texture indices derived from UAV imagery enables accurate maize LAI estimation and demonstrates strong potential for applications in crop growth monitoring and agricultural diagnosis.

Figure 11. Spatial distribution map of maize LAI at the early grain-filling stage (13 August 2024) in the study area, predicted by the PLSR+GBDT model (input = VIs + TFs + TIs). The color bar represents LAI values (m2/m2).
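The per-pixel mapping step behind Figure 11 is not detailed in this excerpt; a hypothetical sketch, assuming the fitted regressor exposes a scikit-learn-style `predict` and that per-pixel features are stacked as a (n_features, H, W) array, could look like:

```python
import numpy as np

def predict_lai_map(model, feature_stack):
    """Apply a fitted regressor to every pixel of a (n_features, H, W)
    feature stack and return an (H, W) LAI map."""
    n_feat, h, w = feature_stack.shape
    X = feature_stack.reshape(n_feat, -1).T   # one row of features per pixel
    return model.predict(X).reshape(h, w)

# usage with a stand-in model (any object with a predict method)
class _SumModel:
    def predict(self, X):
        return X.sum(axis=1)

lai_map = predict_lai_map(_SumModel(), np.ones((3, 4, 5)))
```

In practice, masking non-vegetation pixels (soil, shadow) before prediction would be needed to match the plot-level training data.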

4. Discussion

This study developed maize LAI estimation models based on UAV multispectral data by integrating VIs, TFs, and TIs, coupled with PLSR+SVM, PLSR+RF, and PLSR+GBDT algorithms. The results clearly demonstrate that multi-source feature fusion significantly enhances LAI estimation accuracy, providing both theoretical and methodological support for UAV-based crop monitoring.

Canopy spectral information has been widely applied in LAI estimation studies [39]. Different spectral bands extracted from multispectral sensors exhibit distinct responses to LAI. Since LAI largely determines the absorption of photosynthetically active radiation (PAR) by the canopy, VIs, constructed from spectral reflectance, indirectly reflect the relationship between radiation absorption and canopy structure [40]. Therefore, VIs often exhibit significant correlations with LAI [39]. In this study, GNDVI showed the highest correlation (r = 0.712), likely due to its sensitivity to chlorophyll content. During maize growth, increases in LAI are accompanied by chlorophyll accumulation, which GNDVI effectively captures through differences between the NIR and green bands [41]. By contrast, EXG showed a very weak correlation with LAI (−0.10), possibly because the Excess Green Index saturates at high canopy coverage, limiting its ability to track further LAI increases.

By extracting TFs from UAV images, the data dimensionality for maize LAI estimation was enriched, providing a new technical approach for UAV-based crop parameter assessment [18]. While VIs based on spectral reflectance can sensitively capture canopy health and density, they tend to saturate under high LAI conditions [16]; in contrast, TFs capture spatial heterogeneity in canopy structure, such as leaf arrangement and gap distribution. In this study, the incorporation of TFs reduced the validation RMSE by 20.47–27.05% across all models (Figure 7, Figure 8 and Figure 9), consistent with Meng et al. [42], who found that introducing texture features significantly improved maize biomass estimation. The strong correlation between the blue (B)-band variance and LAI (r = −0.702) highlights the importance of blue-light reflectance in characterizing canopy density. In addition, to address the weak correlation between individual texture features and LAI, a method for constructing TIs was proposed, which enhanced the performance of texture information in LAI estimation. TIs integrate multi-band spatial information, reducing sensitivity to soil background and illumination changes [20]. The results indicate that both 2D and 3D TIs exhibit significantly higher correlations with LAI than individual TFs [43]. This improvement may be attributed to the following: normalized texture indices effectively suppress soil background, solar angle, and sensor viewing effects; difference-based indices reduce homogeneous background noise; and ratio-based indices mitigate topographic and shadow effects while enhancing vegetation reflectance characteristics. Multi-dimensional texture indices integrate multi-scale texture information, comprehensively representing canopy spatial heterogeneity and thus overcoming the information limitations of single features [44]. For instance, NDTTI1 (r = −0.789) outperforms individual TFs by combining blue (B)-band contrast, B-band dissimilarity, and red (R)-band dissimilarity to capture both horizontal and vertical canopy structure. This is consistent with the findings of Tang et al., who reported that 3D TIs improved winter oilseed rape LAI estimation by 12.3% compared with 2D TIs [22]. By integrating multi-dimensional features (hyperspectral reflectance, observation geometry, and PROSAIL-simulated priors), Dey et al. achieved R2 > 0.89 in the simultaneous inversion of equivalent water thickness (EWT) and LAI, demonstrating that multi-source information fusion is an effective approach to improving vegetation parameter retrieval [45].
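The GLCM-derived texture features discussed above (e.g., B_con, B_var, NIR_cor) are not re-derived in this excerpt. The sketch below is a simplified, single-offset NumPy stand-in for the GLCM computation; the study's actual gray-level count, window size, offsets, and software are not stated here and are therefore assumptions.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Minimal GLCM texture features (contrast, dissimilarity, variance,
    correlation) for a single pixel offset (dy, dx)."""
    img = np.asarray(img, float)
    # Quantize the band to `levels` gray levels.
    q = np.floor(levels * (img - img.min()) / (np.ptp(img) + 1e-12)).astype(int)
    q = np.clip(q, 0, levels - 1)
    # Accumulate co-occurrence counts for the chosen offset.
    glcm = np.zeros((levels, levels))
    a = q[: q.shape[0] - dy, : q.shape[1] - dx]
    b = q[dy:, dx:]
    np.add.at(glcm, (a, b), 1)
    p = glcm / glcm.sum()                     # normalize to probabilities
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    var_i = ((i - mu_i) ** 2 * p).sum()
    var_j = ((j - mu_j) ** 2 * p).sum()
    return {
        "contrast": ((i - j) ** 2 * p).sum(),
        "dissimilarity": (np.abs(i - j) * p).sum(),
        "variance": var_i,
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum()
                       / (np.sqrt(var_i * var_j) + 1e-12),
    }

# A checkerboard maximizes horizontal gray-level contrast:
feats = glcm_features(np.tile([[0.0, 1.0], [1.0, 0.0]], (4, 4)))
```

In a production pipeline these statistics would be computed per band within a sliding window (e.g., via scikit-image's `graycomatrix`/`graycoprops`) rather than over whole plots.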

Among the three coupled models, PLSR+GBDT outperformed PLSR+SVM and PLSR+RF, achieving a validation R2 of 0.812 and RMSE of 0.464 when multi-source features were integrated. This superior performance likely arises from GBDT’s gradient boosting mechanism, which iteratively optimizes residuals and adapts well to complex nonlinear relationships between LAI and multi-source features [46]; PLSR preprocessing effectively extracts linear principal components, mitigating multicollinearity in high-dimensional data and providing a strong foundation for subsequent nonlinear modeling [47]; the meta-model integration strategy further enhances the generalization capability of base models [48]. In comparison, SVM relies on kernel mapping for nonlinearity, which may lead to overfitting in high-dimensional feature spaces; RF reduces variance via bagging, but its dynamic feature weighting is less flexible than GBDT [49].

Nevertheless, this study has limitations. It was conducted during a single growing season (2024) with only five temporal sampling points and therefore lacks multi-year, multi-climate validation; the generalization of the models under interannual climate variability and across ecological zones remains to be tested. Future work will incorporate multi-year, multi-site datasets for cross-season validation to comprehensively assess the model's robustness and adaptability. In addition, LAI estimation did not use independent models for each phenological stage, although canopy structure changes dramatically from jointing to grain filling, which may affect the stability of feature-LAI relationships. Moreover, environmental factors and multi-source remote sensing data were not fully incorporated. Future work should include multi-season and multi-climate data, develop stage-specific models, integrate UAV hyperspectral, thermal, or LiDAR data to complement spectral and texture information, and incorporate soil physicochemical properties and meteorological variables as covariates to quantify direct and indirect environmental effects on LAI, thus improving the accuracy and reliability of LAI estimation. Finally, the current study validates against traditional machine learning models but lacks direct comparisons with physics-guided baselines (e.g., PROSAIL-PLSR) and more diverse non-stacked ML baselines. Future research should integrate a physics-guided baseline (PROSAIL-PLSR) by combining radiative transfer model simulations with our UAV spectral data, enabling direct comparisons between data-driven and physics-constrained approaches.

In addition to the aforementioned limitations, three critical factors that may influence LAI estimation accuracy require further discussion. First, the reliance on sunlit LAI-2200 measurement protocols may introduce systematic bias, as the instrument’s restricted field-of-view and sensitivity to canopy gap fraction could lead to underestimation in densely vegetated conditions or overestimation in heterogeneous canopies. Second, TFs and TIs derived from GLCM are sensitive to UAV flight altitude and sliding window size. The fixed parameters in this study limit the generalizability of texture-based models across different UAV operation scenarios. Future work should conduct sensitivity analyses on flight altitudes (e.g., 30 m, 60 m, 90 m) and window sizes (e.g., 3 × 3 to 7 × 7) to identify robust parameter combinations for cross-scenario LAI estimation. Third, the presence of reproductive organs—particularly tassels during the silking stage—may introduce confounding effects by altering the canopy’s spectral and structural properties, thereby decoupling the typical relationship between vegetation indices and true leaf area. Future research should therefore prioritize multi-platform data fusion to mitigate measurement biases, establish altitude-invariant texture normalization procedures, and explicitly incorporate phenological stages into the modeling framework to disentangle contributions from leaves and reproductive structures.

5. Conclusions

Based on field experiments and UAV-acquired multispectral data, this study developed a multi-source feature framework for maize LAI estimation by integrating VIs, TFs, and TIs with three machine learning algorithms (PLSR+SVM, PLSR+RF, and PLSR+GBDT). The results demonstrated that most VIs were significantly correlated with maize LAI (p < 0.05), with GNDVI exhibiting the highest correlation (r = 0.712), while most TFs were also significantly correlated (p < 0.05), with the B-band variance showing the strongest relationship (r = −0.702). Furthermore, randomly combined TIs were significantly associated with LAI (p < 0.05), and 3D indices generally outperformed 2D indices, with NDTTI (B_con, B_dis, R_dis) and NDTI (B_con, R_dis) identified as the most strongly correlated 3D and 2D indices (r = −0.789 and −0.777, respectively). Comparative analysis of the three modeling algorithms indicated that all approaches achieved satisfactory predictive performance, whereas the PLSR+GBDT model consistently yielded the highest R2. In addition, incorporating multi-source features (VIs+TFs+TIs) substantially improved model accuracy across all algorithms, highlighting the advantage of spectral-spatial feature fusion for capturing LAI variability. Overall, these findings provide a robust and practical methodology for UAV-based crop growth monitoring and quantitative assessment of canopy structural parameters.

Author Contributions

Conceptualization, J.L.; methodology, S.L. and J.L.; software, H.L. and Y.Z.; validation, H.L.; formal analysis, H.L.; investigation, H.L., Y.Z. and S.L.; resources, C.H.; data curation, H.L., Y.Z., S.L., Y.L. and K.Y.; writing—original draft preparation, H.L.; writing—review and editing, C.H. and J.L.; visualization, H.L.; supervision, C.H. and J.L.; project administration, C.H.; funding acquisition, C.H. and J.L. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflict of interest.

Funding Statement

This study was supported by the National Natural Science Foundation of China (No. 52309053), the Key Program of the Natural Science Foundation of Gansu Province (No. 24JRRA635), the Young Ph.D. Support Program of Colleges and Universities in Gansu Province (No. 2024QB-071), and the Discipline Team Project on Efficient Water Use and Water-Saving Mechanisms in Crops.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

References

  • 1.Qiao L., Gao D., Zhao R., Tang W., An L., Li M., Sun H. Improving estimation of LAI dynamic by fusion of morphological and vegetation indices based on UAV imagery. Comput. Electron. Agric. 2022;192:106603. doi: 10.1016/j.compag.2021.106603. [DOI] [Google Scholar]
  • 2.Zhi X., Chen Q., Han Y., Yang B., Wang Y., Wu F., Xiong S., Jiao Y., Ma Y., Shang S., et al. Multi-modal feature integration from UAV-RGB imagery for high-precision cotton phenotyping: A paradigm shift toward cost-effective agricultural remote sensing. Comput. Electron. Agric. 2025;239:111002. doi: 10.1016/j.compag.2025.111002. [DOI] [Google Scholar]
  • 3.Li Y., Wang B., Zhao X., Zhang Y., Qiao L. Inversion and analysis of leaf area index (LAI) of urban park based on unmanned aerial vehicle (UAV) multispectral remote sensing and random forest (RF) PLoS ONE. 2025;20:e0320608. doi: 10.1371/journal.pone.0320608. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Zong M., Manevski K., Liang Z., Abalos D., Jabloun M., Lærke P.E., Jørgensen U. Diversifying maize rotation with other industrial crops improves biomass yield and nitrogen uptake while showing variable effects on nitrate leaching. Agric. Ecosyst. Environ. 2024;371:109091. doi: 10.1016/j.agee.2024.109091. [DOI] [Google Scholar]
  • 5.Cheng Q., Xu H.G., Fei S.P., Li Z.P., Chen Z. Estimation of maize leaf area index (LAI) using ensemble learning and unmanned aerial vehicle (UAV) multispectral imagery under different water and fertilizer treatments. Agriculture. 2022;12:1267. doi: 10.3390/agriculture12081267. [DOI] [Google Scholar]
  • 6.Shao M., Nie C., Zhang A., Shi L., Zha Y., Xu H., Yang H., Yu X., Bai Y., Liu S., et al. Quantifying effect of maize tassels on LAI estimation based on multispectral imagery and machine learning methods. Comput. Electron. Agric. 2023;211:108029. doi: 10.1016/j.compag.2023.108029. [DOI] [Google Scholar]
  • 7.Dong L., Jiang Y., Luo Y., Cheng X., Ai L. Optimization of leaf area index measurement method and correction of green plot ratio formula based on regional plant characteristics—A study in Chongqing, China. Environ. Sci. Pollut. Res. Int. 2024;31:30914–30942. doi: 10.1007/s11356-024-33125-z. [DOI] [PubMed] [Google Scholar]
  • 8.Yamaguchi T., Tanaka Y., Imachi Y., Yamashita M., Katsura K. Feasibility of combining deep learning and RGB images obtained by unmanned aerial vehicle for leaf area index estimation in rice. Remote Sens. 2021;13:84. [Google Scholar]
  • 9.Chen Y., Zhang Z., Tao F. Improving regional winter wheat yield estimation through assimilation of phenology and leaf area index from remote sensing data. Eur. J. Agron. 2018;101:163–173. doi: 10.1016/j.eja.2018.09.006. [DOI] [Google Scholar]
  • 10.Zhang M., Zhou J., Sudduth K.A., Kitchen N.R. Estimation of maize yield and effects of variable-rate nitrogen application using UAV-based RGB imagery. Biosyst. Eng. 2020;189:24–35. doi: 10.1016/j.biosystemseng.2019.11.001. [DOI] [Google Scholar]
  • 11.Zhang J.J., Cheng T., Guo W., Xu X., Qiao H.B., Xie Y.M., Ma X.M. Leaf area index estimation model for UAV image hyperspectral data based on wavelength variable selection and machine learning methods. Plant Methods. 2021;17:1–14. doi: 10.1186/s13007-021-00750-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Ochiai S., Erika K., Ryo S. Comparative analysis of RGB and multispectral UAV image data for leaf area index estimation of sweet potato. Smart Agric. Technol. 2024;9:100579. doi: 10.1016/j.atech.2024.100579. [DOI] [Google Scholar]
  • 13.Jay S., Baret F., Dutartre D., Malatesta G., Héno S., Comar A., Weiss M., Maupas F. Exploiting the centimeter resolution of UAV multispectral imagery to improve remote-sensing estimates of canopy structure and biochemistry in sugar beet crops. Remote Sens. Environ. 2019;231:110898. doi: 10.1016/j.rse.2018.09.011. [DOI] [Google Scholar]
  • 14.Shu M.Y., Fei S., Zhang B., Yang X., Guo Y., Li B., Ma Y. Application of UAV multisensor data and ensemble approach for high-throughput estimation of maize phenotyping traits. Plant Phenomics. 2022;2022:9802585. doi: 10.34133/2022/9802585. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Hussain S., Teshome F.T., Tulu B.B., Awoke G.W., Hailegnaw N.S., Bayabil H.K. Leaf area index (LAI) prediction using machine learning and UAV based vegetation indices. Eur. J. Agron. 2025;168:127557. doi: 10.1016/j.eja.2025.127557. [DOI] [Google Scholar]
  • 16.Blancon J., Dutartre D., Tixier M.H., Weiss M., Comar A., Praud S., Baret F. A High-Throughput Model-Assisted Method for Phenotyping Maize Green Leaf Area Index Dynamics Using Unmanned Aerial Vehicle Imagery. Front. Plant Sci. 2019;10:685. doi: 10.3389/fpls.2019.00685. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Liu Y., Feng H.K., Yue J.B., Jin X.L., Li Z.H., Yang G.J. Estimation of potato above-ground biomass based on unmanned aerial vehicle red-green-blue images with different texture features and crop height. Front. Plant Sci. 2022;13:938216. doi: 10.3389/fpls.2022.938216. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Ilniyaz O., Du Q.Y., Shen H.F., He W.W., Feng L.W., Azadi H., Kurban A., Chen X. Leaf area index estimation of pergola-trained vineyards in arid regions using classical and deep learning methods based on UAV-based RGB images. Comput. Electron. Agric. 2023;207:107723. doi: 10.1016/j.compag.2023.107723. [DOI] [Google Scholar]
  • 19.Liu Y., Feng H., Yue J., Li Z., Yang G., Song X., Yang X., Zhao Y. Remote-sensing estimation of potato above-ground biomass based on spectral and spatial features extracted from high-definition digital camera images. Comput. Electron. Agric. 2022;198:107089. doi: 10.1016/j.compag.2022.107089. [DOI] [Google Scholar]
  • 20.Yang N., Zhang Z.T., Zhang J.R., Guo Y.H., Yang X.Z., Yu G.D., Bai X., Chen J., Chen Y., Shi L., et al. Improving estimation of maize leaf area index by combining of UAV-based multispectral and thermal infrared data: The potential of new texture index. Comput. Electron. Agric. 2023;214:108294. doi: 10.1016/j.compag.2023.108294. [DOI] [Google Scholar]
  • 21.Sun X.K., Yang Z.Y., Su P.Y., Wei K.X., Wang Z.G., Yang C.B., Feng M.C. Non-destructive monitoring of maize LAI by fusing UAV spectral and textural features. Front. Plant Sci. 2023;14:1158837. doi: 10.3389/fpls.2023.1158837. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Tang Z.J., Lu J.S., Abdelghany A.E., Su P.H., Jin M., Li S.Q., Sun T., Xiang Y.Z., Li Z.J., Zhang F.C. Winter oilseed rape LAI inversion via multi-source UAV fusion: A three-dimensional texture and machine learning approach. Plants. 2025;14:1245. doi: 10.3390/plants14081245. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Ding F., Li C., Zhai W., Fei S., Cheng Q., Chen Z. Estimation of nitrogen content in winter wheat based on multi-source data fusion and machine learning. Agriculture. 2022;12:1752. doi: 10.3390/agriculture12111752. [DOI] [Google Scholar]
  • 24.Wang Y.C., Wang J.L., Li J.Y., Wang J.C., Xu H.Z., Liu T., Wang J. Estimating Maize Leaf Water Content Using Machine Learning with Diverse Multispectral Image Features. Plants. 2025;14:973. doi: 10.3390/plants14060973. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Miao L.J., Zou Y.F., Cui X.F., Kattel G.R., Shang Y., Zhu J.W. Predicting China’s Maize Yield Using Multi-Source Datasets and Machine Learning Algorithms. Remote Sens. 2024;16:2417. doi: 10.3390/rs16132417. [DOI] [Google Scholar]
  • 26.Hu H., Ren Y., Zhou H.K., Lou W.D., Hao P.F., Lin B.G., Zhang G.Z., Gu Q., Hua S.J. Oilseed Rape Yield Prediction from UAVs Using Vegetation Index and Machine Learning: A Case Study in East China. Agriculture. 2024;14:1317. doi: 10.3390/agriculture14081317. [DOI] [Google Scholar]
  • 27.Gitelson A.A., Kaufman Y.J., Merzlyak M.N. Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sens. Environ. 1996;58:289–298. doi: 10.1016/S0034-4257(96)00072-7. [DOI] [Google Scholar]
  • 28.Rouse J.W., Jr., Haas R.H., Deering D.W., Schell J.A., Harlan J.C. Monitoring the Vernal Advancement and Retrogradation (Green Wave Effect) of Natural Vegetation. NASA; Washington, DC, USA: 1974. [Google Scholar]
  • 29.Jordan C.F. Derivation of leaf-area index from quality of light on the forest floor. Ecology. 1969;50:663–666. doi: 10.2307/1936256. [DOI] [Google Scholar]
  • 30.Tucker C.J. Red and Photographic Infrared Linear Combinations for Monitoring Vegetation. NASA; Washington, DC, USA: 1979. [Google Scholar]
  • 31.Li Z.J., Chen G.F., Zhi J.W., Xiang Y.Z., Li D.M., Zhang F.C., Chen J.Y. Estimation model of soybean soil moisture content based on UAV spectral information and texture features. Nongye Jixie Xuebao. 2024;55:347–357. [Google Scholar]
  • 32.Ren H., Feng G. Are soil-adjusted vegetation indices better than soil-unadjusted vegetation indices for above-ground green biomass estimation in arid and semi-arid grasslands? Grass Forage Sci. 2015;70:611–619. doi: 10.1111/gfs.12152. [DOI] [Google Scholar]
  • 33.Liu H.Q., Huete A. A feedback-based modification of the NDVI to minimize canopy background and atmospheric noise. IEEE Trans. Geosci. Remote Sens. 1995;33:457–465. doi: 10.1109/TGRS.1995.8746027. [DOI] [Google Scholar]
  • 34.Han L., Yang G.J., Dai H.Y., Xu B., Yang H., Feng H.K., Li Z.H., Yang X.D. Modeling maize above-ground biomass based on machine learning approaches using UAV remote-sensing data. Plant Methods. 2019;15:10. doi: 10.1186/s13007-019-0394-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Woebbecke D.M., Meyer G.E., Von Bargen K., Mortensen D.A. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE. 1995;38:259–269. doi: 10.13031/2013.27838. [DOI] [Google Scholar]
  • 36.Meyer G.E., Camargo Neto J. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008;63:282–293. doi: 10.1016/j.compag.2008.03.009. [DOI] [Google Scholar]
  • 37.Gitelson A.A., Viña A., Ciganda V., Rundquist D.C., Arkebauer T.J. Remote estimation of canopy chlorophyll content in crops. Geophys. Res. Lett. 2005;32:L08403. doi: 10.1029/2005GL022688. [DOI] [Google Scholar]
  • 38.Hosseini M., McNairn H., Mitchell S., Robertson L.D., Davidson A., Ahmadian N., Bhattacharya A., Borg E., Conrad C., Dabrowska-Zielinska K., et al. A Comparison between Support Vector Machine and Water Cloud Model for Estimating Crop Leaf Area Index. Remote Sens. 2021;13:1348. doi: 10.3390/rs13071348. [DOI] [Google Scholar]
  • 39.Xu W.C., Yang W.G., Chen S.D., Wu C.S., Chen P.C., Lan Y.B. Establishing a model to predict the single boll weight of cotton in northern Xinjiang by using high resolution UAV remote sensing data. Comput. Electron. Agric. 2020;179:105762. doi: 10.1016/j.compag.2020.105762. [DOI] [Google Scholar]
  • 40.Monsi M., Saeki T. On the factor light in plant communities and its importance for matter production. Ann. Bot. 2005;95:549–567. doi: 10.1093/aob/mci052. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Gao X., Yao Y., Chen S.Y., Li Q.W., Zhang X.D., Liu Z., Zeng Y.L., Ma Y.T., Zhao Y.Y., Li S.M. Improved maize leaf area index inversion combining plant height corrected resampling size and random forest model using UAV images at fine scale. Eur. J. Agron. 2024;161:127360. doi: 10.1016/j.eja.2024.127360. [DOI] [Google Scholar]
  • 42.Meng L., Ming B., Liu Y., Nie C.W., Fang L., Zhou L.L., Xin J.F., Xue B.B., Liang Z.Y., Guo H.R., et al. Maize biomass estimation by integrating spectral, structural, and textural features from unmanned aerial vehicle data. Eur. J. Agron. 2025;168:127647. doi: 10.1016/j.eja.2025.127647. [DOI] [Google Scholar]
  • 43.Zhang X.W., Zhang K.F., Sun Y.Q., Zhao Y.D., Zhuang H.F., Ban W., Chen Y., Fu E., Chen S., Liu J., et al. Combining spectral and texture features of UAS-based multispectral images for maize leaf area index estimation. Remote Sens. 2022;14:331. doi: 10.3390/rs14020331. [DOI] [Google Scholar]
  • 44.Pei S.Z., Dai Y.L., Bai Z.T., Li Z.J., Zhang F.C., Yin F.H., Fan J.L. Improved estimation of canopy water status in cotton using vegetation indices along with textural information from UAV-based multispectral images. Comput. Electron. Agric. 2024;224:109176. doi: 10.1016/j.compag.2024.109176. [DOI] [Google Scholar]
  • 45.Dey B., Ahmed R. A comprehensive review of AI-driven plant stress monitoring and embedded sensor technology: Agri-culture 5.0. J. Ind. Inf. Integr. 2025;47:100931. doi: 10.1016/j.jii.2025.100931. [DOI] [Google Scholar]
  • 46.Yao H.M., Huang Y., Wei Y.M., Zhong W.P., Wen K. Retrieval of Chlorophyll-a Concentrations in the Coastal Waters of the Beibu Gulf in Guangxi Using a Gradient-Boosting Decision Tree Model. Appl. Sci. 2021;11:7855. doi: 10.3390/app11177855. [DOI] [Google Scholar]
  • 47.Sun B., Rong R., Cui H.W., Guo Y., Yue W., Yan Z.Y., Wang H., Gao Z.H., Wu Z.T. How can integrated Space–Air–Ground observation contribute in aboveground biomass of shrub plants estimation in shrub-encroached Grasslands? Int. J. Appl. Earth Obs. 2024;130:103856. doi: 10.1016/j.jag.2024.103856. [DOI] [Google Scholar]
  • 48.Wang S.N., Wu Y.J., Li R.P., Wang X.Q. Remote sensing-based retrieval of soil moisture content using stacking ensemble learning models. Land Degrad. Dev. 2023;34:911–925. doi: 10.1002/ldr.4505. [DOI] [Google Scholar]
  • 49.Maurya K.A., Pathak A. Leveraging the use of digital agriculture and machine learning for accurate prediction of Leaf Area Index (LAI) Comput. Electron. Agric. 2025;239:110947. doi: 10.1016/j.compag.2025.110947. [DOI] [Google Scholar]



Articles from Plants are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)
