Graphical abstract
Keywords: Seed phenotype, Optical sensor, Digital technique, Artificial intelligence, Spectroscopy, Image processing
Highlights
• Optical sensors are highly promising for seed phenotype digitization.
• Spectroscopy, digital imaging, and 3D reconstruction are the primary digital techniques used.
• Optical sensors can detect both visible external and invisible internal phenotypes.
• Matched optical sensors can effectively reduce resource loss in seed phenotype evaluation.
• Future research should focus on phenotyping equipment, platforms, and data processing algorithms for automatic, integrated, and intelligent evaluation.
Abstract
Background
The breeding of high-quality, high-yield, and disease-resistant varieties is closely related to food security. The investigation of breeding results relies on the evaluation of seed phenotype, which is a key step in the process of breeding. In the global digitalization trend, digital technology based on optical sensors can perform the digitization of seed phenotype in a non-contact, high throughput way, thus significantly improving breeding efficiency.
Aim of review
This paper provides a comprehensive overview of the principles, characteristics, data processing methods, and bottlenecks associated with three digital technique types based on optical sensors: spectroscopy, digital imaging, and three-dimensional (3D) reconstruction techniques. In addition, the applicability and adaptability of digital techniques based on the optical sensors of maize seed phenotype traits, namely external visible phenotype (EVP) and internal invisible phenotype (IIP), are investigated. Furthermore, trends in future equipment, platform, phenotype data, and processing algorithms are discussed. This review offers conceptual and practical support for seed phenotype digitization based on optical sensors, which will provide reference and guidance for future research.
Key scientific concepts of review
The digital techniques based on optical sensors can perform non-contact and high-throughput seed phenotype evaluation. Due to the distinct characteristics of optical sensors, matching suitable digital techniques according to seed phenotype traits can greatly reduce resource loss, and promote the efficiency of seed evaluation as well as breeding decision-making. Future research in phenotype equipment and platform, phenotype data, and processing algorithms will make digital techniques better meet the demands of seed phenotype evaluation, and promote automatic, integrated, and intelligent evaluation of seed phenotype, further helping to lessen the gap between digital techniques and seed phenotyping.
Introduction
Quality seed development remains the primary goal of today’s breeding programs since it is considered one of agriculture’s most essential and fundamental components [1]. Over time, breeding has evolved drastically from breeding 1.0 to breeding 4.0. Breeding 1.0 is based on farmers’ experience, which is essential for the subjective selection of seeds; breeding 2.0 places a greater emphasis on field statistics and genetic analysis; breeding 3.0 (the current state of the art) is concerned with establishing the relationship between genes and crop traits; and breeding 4.0 emphasizes interdisciplinary research (such as life science and information science) and data resources [2]. Regardless of this evolution, seed phenotypes have consistently been the most direct expressions of breeding [3].
Seed phenotypes are primarily composed of the original apparent traits such as weight, color, size, shape, and number [4]. They also include physiological and biochemical traits, such as protein, moisture, and oil content for seed variety, fungal infection for food safety, and internal structure for seed damage [5]. The evaluation of seed phenotypes (or seed testing) is the main procedure for breeders to obtain accurate seed quality information, and it constantly faces the challenge of becoming more efficient, accurate, and automatic [6]. Numerous external phenotypes are typically evident to the naked eye; thus, traditional seed testing procedures are mainly based on manual measurement techniques and sensory evaluations of color, shape, and quantitative factors. However, these evaluation criteria are frequently inconsistent, and the process requires considerable time and labor.
Over the last decades, with the breakthrough in information and communication technologies, digital techniques such as artificial intelligence (AI), machine vision, and big data have developed tremendously [7], especially in agriculture. Digital techniques have spread throughout the global life cycle of crops by assisting very effectively in automated detection and tracking [8], aiming to establish an entire network of processing information and promoting food security and human health [9]. Optical sensors, as one of the representative digital techniques, have been rapidly adopted in seed phenotype analysis due to their non-contact and high-throughput measurement characteristics [8], [10]. Compared with traditional manual phenotype evaluation, optical sensor-based techniques can decompose complex compound phenotypes through non-destructive testing and eliminate the subjective deviation introduced by naked-eye investigation [2]. Combined with advanced data processing algorithms, these advantages can reduce seed loss and provide high-quality digital phenotype information. Moreover, the significant improvement in seed phenotype data quality reduces the resource expenditure of critical breeding decisions [11]. It facilitates rapid mining of crop trait regulatory genes, providing a basis for ameliorating target traits and accurately predicting phenotypes [12]. Digital techniques based on optical sensors can assist in detecting different seed phenotypes, promoting the efficiency of seed evaluation and breeding decision-making.
Because of the diversity of optical sensors and the complexity of seed phenotypes, it is essential to match suitable techniques to the characteristics of seed phenotype traits. In this review, we first illustrate the principles of optical sensors commonly used in seed phenotype digitization, summarize the data processing methods, and emphasize the bottlenecks of different digital techniques. Secondly, taking the maize ear as a typical case, we explore the applicability of relevant digital techniques according to the phenotype characteristics. Finally, challenges and trends are discussed to promote the development of seed phenotype digitization and to provide guidance and reference for future research.
Review methodology
A thorough literature search was conducted using electronic databases such as Web of Science, ScienceDirect, and IEEE Xplore, as well as Google Scholar, to find studies reporting on digital techniques for seed phenotyping. Since this review focuses on the development and application of these techniques, the year of publication was not limited.
The search included titles, abstracts, and keywords of articles. The search keywords of the study were divided into three groups. The first group included keywords related to the objects being detected, specifically “seed” and “corn or maize”, corresponding to the research in the third and fourth sections of the review. The second group consisted of digital technique keywords, including “spectroscopy”, “imaging or RGB”, and “three-dimensional (3D) reconstruction”. The third group included application object keywords, such as “content”, “disease”, “damage”, “quality”, “viability”, “variety”, “purity”, and “microstructure”. We established multiple combinations of these groups of keywords to complete the comprehensive search of this study. After conducting a general search, research articles were carefully selected based on the following inclusion criteria: (1) relevance to seed phenotype, including traits such as original apparent traits, exogenous infection, biochemical composition, and physiological structure, (2) detailed data processing methods and evaluation performance, and (3) prioritized selection of newly published literature for similar research. Additionally, the impact factor of journals and the number of citations each paper received up to the studied period were considered.
Seed phenotype digitization techniques and algorithms based on optical sensors
The commonly used optical sensors for seed phenotype include spectroscopy for reflecting chemical components, digital imaging for morphology and component visualization, and 3D reconstruction for restoring seed morphology from a 3D level. These techniques have different advantages in digitizing seed phenotypes according to their characteristics. In this section, we introduce the principles and characteristics of different optical sensors, summarize the common processing pipelines or algorithms of the obtained data, and discuss the current development bottlenecks and existing solutions (Fig. 1).
Fig. 1.
Seed phenotype digitization based on optical sensors. The process of seed phenotype digitization based on optical sensors is to record and obtain the seed phenotype through spectroscopy, digital imaging, and 3D reconstruction techniques, and then digitize the seed phenotype through various data processing methods. Various optical sensors have different characteristics and limitations, affecting seed phenotype digitization development. Through multi-source data fusion, AI modeling and data mining, equipment integration, and multi-scale phenotyping platforms, efficient, accurate, and high-throughput evaluation of seed phenotypes can be achieved. Abbreviations: NIR, near-infrared; MIR, mid-infrared; THz, Terahertz; RGB, red–green–blue; HSI, hyperspectral imaging; MRI, magnetic resonance imaging; X-ray micro-CT, X-ray micro-computed tomography; 3D, three-dimensional; MVS, multi-view stereo; PCA, principal component analysis; PLS, partial least squares; SVM, support vector machine; RF, random forest; ROI, region of interest; CNN, convolutional neural network; AI, artificial intelligence. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Spectroscopy techniques and data processing algorithms
Three types of energy transfer occur when light interacts with objects: reflection, absorption, and transmission. However, due to the difference in seed apparent morphology, texture, and internal biochemical components, the optical signal attenuates with different intensities [13]. The signals can be collected by optical sensors and visualized into spectral lines with computers. Integrated with chemometric methods, the internal composition, variety, and microstructure of seeds can be determined [14]. A basic spectral system typically comprises an optical platform and a detecting unit [15]. As the system’s central component, different optical sensors can reflect the optical properties of seeds in various regions of the electromagnetic spectrum. Near-infrared (NIR), mid-infrared (MIR), fluorescence, Raman, and Terahertz (THz) spectroscopy are commonly used optical sensors for seed phenotype digitization.
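The attenuation described above can be sketched with the Beer–Lambert law, which links the measured intensity ratio to analyte concentration. A minimal illustration with made-up numbers (the intensity values and molar absorptivity below are assumptions, not data from the review):

```python
import math

def absorbance(incident: float, transmitted: float) -> float:
    """Beer-Lambert absorbance A = log10(I0 / I)."""
    return math.log10(incident / transmitted)

def concentration(a: float, epsilon: float, path_cm: float) -> float:
    """Invert A = epsilon * l * c to estimate analyte concentration."""
    return a / (epsilon * path_cm)

# Illustrative values: 100 units of incident light, 25 transmitted
a = absorbance(100.0, 25.0)                      # about 0.602
c = concentration(a, epsilon=120.0, path_cm=1.0)  # concentration estimate
```

In practice the chemometric models described below replace this single-analyte inversion, since seed spectra contain many overlapping absorbers.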
Different spectroscopies offer distinct benefits for detecting specific characteristics. NIR spectroscopy mainly reflects the overtone and combination bands of molecular vibrations, i.e., the information of hydrogen-containing groups such as O–H, N–H, C–O, and C–H in organic compounds [16]. MIR spectroscopy measures the fundamental frequency signal, and its spectral peak information is relatively rich, placing higher demands on data processing and model optimization [17]. In fluorescence spectroscopy, the substance is stimulated to generate characteristic lines related to its internal structure and composition, enabling the substance to be identified rapidly and accurately [18]. Compared with other techniques, Raman spectroscopy is more sensitive to the symmetric vibration of covalent bonds in nonpolar molecular groups and is not disturbed by water molecules [19]. THz spectroscopy, which lies between microwave and infrared light in the electromagnetic spectrum, serves as a fingerprint spectrum of substances and can qualitatively and quantitatively reflect the structure and properties of organic biomolecules; many molecules, especially organic molecules, show strong absorption and dispersion characteristics in the THz frequency band [20]. These spectroscopy techniques can be complemented and integrated according to the detection requirements for precise and comprehensive information. Applying spectroscopy techniques to the digitization of seed phenotypes offers the benefits of convenience, speed, precision, and environmental friendliness. Therefore, spectroscopy techniques have become a research hot spot, attracting an increasing number of scholars to related studies. Table 1 summarizes the applications of spectroscopy techniques for distinct phenotypes of diverse seeds.
Table 1.
Application of spectroscopy techniques for seed phenotype digitization. Abbreviations: DA, discriminant analysis; DBN, deep belief network; iPLS, interval partial least squares; LDA, linear discriminant analysis; LOD, limit of detection; MPLS, modified partial least squares; MPLSR, modified partial least squares regression; NIR, near-infrared; PCA-SVM, principal component analysis-support vector machine; PLS-DA, partial least squares discriminant analysis; PLSR, partial least squares regression; SKNIR, single-kernel near-infrared.
Optical sensor | Pros and cons | Seed | Application | Method and performance | Ref |
---|---|---|---|---|---|
NIR | Affordable price; High penetration depth; Poor sensitivity for low concentrations; Broad and overlapping absorption bands |
Soybean | Protein and oil content determination | SAS, NIR analyzer | [21] |
Amino and fatty acid determination | ISI program, PLSR, R2 = 0.06 ∼ 0.85 | [22] | |||
Sunflower | Oleic acid determination | Regression analysis, R2 = 0.983 | [23] | ||
Brassica napus | Tocopherol content determination | MPLS; R2 = 0.74 | [24] | ||
Wheat | Fusarium-damaged kernels and deoxynivalenol identification | SAS, SKNIR, R = 0.72 | [25] | ||
Sesame | Origin discrimination | IBM SPSS Statistics, DA, accuracy = 89.4 % | [26] | ||
Maize | Provitamin A carotenoid content determination | WinISI III Software, Bayesian and MPLSR, R2 = 0.22 ∼ 0.75 | [27] |
Viability evaluation | Matlab, PLS-DA, accuracy > 98 % | [28] | |||
Forage grass | Seed germination and vigor evaluation | R software, PLS-DA, accuracy = 61 ∼ 82 % | [29] | ||
MIR | High specificity; Few overlaps; Low sensitivity for quantitative analysis; High requirements for analysis models |
Soybean | Isoflavones and oligosaccharides determination | Matlab, PLSR, R2 = 0.72, 0.80 | [30] |
Pea | Total protein, starch, fiber, phytic acid, and carotenoid determination | Orange, PLSR, R > 0.71 | [17] | | |
Brown rice | Aflatoxin contamination identification | Matlab, DA, accuracy = 90.6 % | [31] | ||
Peanut | Fungal contamination levels identification | TQ Analyst, PLSR, R2 = 0.9157 | [32] | ||
Fluorescence | High sensitivity; High specificity; Cluttered and weak signal; Background interference |
Pea | Mineral nutrient (K, Ca, Mn, Cu, Zn, and Se) analysis | PyMca, R > 0.85 | [33] |
Maize | Aflatoxin contamination identification | IBM SPSS Statistics, LDA, accuracy = 100 % | [34] | ||
Rice | Seed germination and vigor evaluation | DBN, R = 0.9792 | [35] | ||
Brassica oleracea | Seed maturity and quality evaluation | – | [36] | ||
Raman | High specificity; Good signal-to-noise ratio; Hardly disturbed by water molecules; Expensive experimental materials; Unsuitable for fluorescent samples |
Soybean | Crude protein and oil content determination | Matlab, PLSR, R2 = 0.916, 0.872 | [37] |
Rapeseed | Iodine value determination | OPUS IDENT, R = 0.9904 | [38] | ||
Kidney beans | Deoxynivalenol identification | Gaussian 03 package, LOD = 10−6 M | [39] | | |
Maize | Aflatoxin contamination identification | SAS, PLSR, R2 = 0.941 ∼ 0.957 | [40] | ||
Transgenic maize discrimination | Matlab, LDA, accuracy = 87.5 % | [41] | |||
Viability evaluation | Matlab, PLS-DA, accuracy > 95 % | [42] | |||
THz | Fingerprint spectrum; High sensitivity; Mostly used for solution detection; Difficult to apply nondestructively |
Maize | Moisture content determination | PLSR, R = 0.9969 | [43] |
Viability evaluation | – | [44] | |||
Sugar beet | Seed quality identification | Python, accuracy = 87 % | [45] | | |
Wheat | Seed quality identification | PCA-SVM, accuracy = 95 % | [46] | ||
Variety discrimination | Matlab, iPLS, R = 0.992 | [47] | |||
Rice | Transgenic rice discrimination | TQ Analyst, DA, accuracy = 89.4 % | [48] |
Data processing methods such as preprocessing, feature extraction, and modeling are necessary after obtaining spectral data to perform qualitative or quantitative chemometric analysis of the measured data and extract sufficient information from it [49]. Multiple sampling is often needed for spectral data to improve the signal-to-noise ratio (SNR). Then, preprocessing methods (such as normalization, smoothing algorithm, and wavelet transform) are applied for noise reduction [24]. Due to the large volume of spectral data, in order to further eliminate irrelevant information and noise, feature extraction methods (such as principal component analysis, partial least squares, and linear discriminant analysis) are used to extract essential information to prepare for subsequent modeling [29]. Finally, depending on the phenotype requirements, the models are built based on the extracted features employing pattern recognition (such as K-means clustering, Fisher discriminant, and support vector machine) or regression (such as multiple linear regression, logistic regression, random forest, and artificial neural network) [31], [37].
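The preprocessing and normalization steps above can be sketched in a few lines. A toy illustration using standard normal variate (SNV) scaling and a simple moving-average smoother (a stand-in for the smoothing and wavelet methods named above); the spectral values are illustrative, not data from the review:

```python
from statistics import mean, pstdev

def snv(spectrum):
    """Standard normal variate: per-spectrum mean-centering and unit
    scaling, a common normalization step before chemometric modeling."""
    m, s = mean(spectrum), pstdev(spectrum)
    return [(x - m) / s for x in spectrum]

def moving_average(spectrum, window=3):
    """Simple smoothing for noise reduction (a minimal stand-in for
    Savitzky-Golay or wavelet filtering)."""
    half = window // 2
    out = []
    for i in range(len(spectrum)):
        lo, hi = max(0, i - half), min(len(spectrum), i + half + 1)
        out.append(mean(spectrum[lo:hi]))
    return out

# Illustrative reflectance values for one spectrum
raw = [0.10, 0.12, 0.45, 0.44, 0.46, 0.13, 0.11]
processed = snv(moving_average(raw))  # ready for PCA/PLS feature extraction
```

In a real pipeline the processed spectra would then feed the feature extraction (PCA, PLS, LDA) and modeling stages described above, typically via a chemometrics library rather than hand-rolled code.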
For breeding, seed phenotype evaluation usually consists of many steps, such as weighing, size measurement, purity and quality detection, and biochemical component analysis [50]. Spectroscopy techniques are generally used to obtain the spectral information of seeds, detect biochemical components, and evaluate sample quality. However, most spectral instruments detect only a single or specific trait, so a complete assessment often requires loading the samples into different pieces of equipment, which complicates the evaluation and causes unavoidable seed waste. Therefore, it is necessary to improve the versatility or integration of spectral instruments. Moreover, environmental conditions strongly affect the detection results, and many instruments are confined to laboratory use, which also limits the potential of spectroscopy techniques for field detection.
Digital imaging techniques and data processing algorithms
The fast development of optical sensors has led to the emergence and evolution of digital imaging techniques. To examine the apparent morphology of seeds, red–green–blue (RGB) imaging is frequently substituted for the naked eye. Typically, spectroscopy imaging techniques, including hyperspectral imaging (HSI), thermal imaging, X-ray micro-computed tomography (X-ray micro-CT), and magnetic resonance imaging (MRI), are used to observe the internal composition and structure of seeds. Table 2 lists the applications of digital imaging techniques on various seed phenotypes.
Table 2.
Application of digital imaging techniques for seed phenotype digitization. Abbreviations: BPNN, back propagation neural network; CNN, convolutional neural networks; FNN, feed-forward neural network; ICA-ANN, imperialist competitive algorithm and artificial neural networks; JMBoF, jointly multi-modal bag-of-feature; KNN, k-nearest neighbor; LS-SVM, least squares–support vector machines; MLP, multilayer perceptron; QDA, quadratic discriminant analysis; RBGM, row-by-row gradient based method; SVM, support vector machine.
Optical sensor | Pros and cons | Seed | Application | Method and performance | Ref |
---|---|---|---|---|---|
RGB imaging | Low equipment cost; Wide application; Only apparent analysis; Relies on image processing algorithms |
Maize | Variety discrimination | Matlab, neuro-fuzzy, accuracy = 94 % | [51] |
Kernel counting | Multi-threshold segmentation and RBGM, accuracy = 96.4 % | [52] | |||
Size measurement | – | [53] | |||
Peeling damage identification | Matlab, accuracy = 95.35 % | [54] | |||
Haploid seed sorting | Python, CNN, accuracy = 96.8 % | [55] | |||
Soybean | Quality grading | SVM, accuracy > 97 % | [56] | ||
Appearance quality discrimination | JMBoF and SVM, accuracy = 82.1 % | [57] | |||
Seed germination and vigor evaluation | Matlab, accuracy = 92.6 ∼ 98.8 % | [58] | |||
Rapeseed | Kernel counting | Visual Studio, accuracy = 89.33 % | [59] | ||
Variety discrimination | Python, SVM, accuracy = 99.24 % | [60] | |||
Rice | Quality grading | LabVIEW, image processing algorithm | [61] | ||
Variety discrimination | Visual C++, FNN, accuracy = 86.65 % | [62] | |||
Wheat | Damage identification (mold, black tip, and sprout) | SAS, LDA and KNN, accuracy = 91 ∼ 94 % | [63] | ||
Purity discrimination | ICA-ANN, accuracy = 96.25 % | [64] | |||
Pepper | Variety discrimination | Matlab, MLP, accuracy = 84.94 % | [65] | ||
Areca nuts | Disease identification | Visual C++, BPNN, accuracy = 90.9 % | [66] | ||
HSI | Rich spatial and spectral information; Internal and external inspection; High equipment cost; Complex data processing |
Soybean | Crude protein and fat content determination | PLSR, R > 0.9 | [67] |
Peanut | Fungal contamination identification | Matlab, SVM, accuracy > 94 % | [68] | ||
Seed quality identification | PCA and watershed algorithm, accuracy = 98.73 % | [69] | |||
Rice | Origin discrimination | Matlab and Unscrambler, SVM, accuracy = 91.67 % | [70] | ||
Canola | Fungal contamination identification | Matlab and SAS, quadratic and mahalanobis classifiers, accuracy = 91.7 ∼ 100 % | [71] | ||
Maize | Variety discrimination | LS-SVM, accuracy > 90 % | [72] | ||
Seed vigor and aging degree evaluation | SVM, accuracy = 61 %∼100 % | [73] | |||
Wheat | Viability evaluation | Matlab, PLS-DA, accuracy > 89.5 % | [74] | ||
Cucumber | Viability evaluation | Matlab and Unscrambler, PLS-DA, accuracy > 99 % | [75] | ||
Grape | Seed maturity evaluation | Unscrambler, PLSR, R > 0.95 | [76] | ||
Thermal imaging | Non-contact; No lighting required; Poor spatial resolution and repeatability; Highly affected by environmental factors |
Wheat | Fungal contamination identification | SAS, LDA and QDA, accuracy > 96 % | [77] |
Cryptolestes ferrugineus infestation identification | Matlab and SAS, QDA and LDA, accuracy > 80 % | [78] | |||
Variety discrimination | SAS, QDA, accuracy = 95 % | [79] | |||
X-ray micro-CT | 3D microstructure; Sensitive; Time-consuming imaging; Negative impact on seed germination |
Maize | Internal microstructure analysis | Scan IP, region growing, R > 0.8 | [80] |
Hardness classification | VGStudio MAX and Statistica, PCA, accuracy > 75 % | [81] | |||
Mechanical damage detection | NRecon software | [82] | |||
Kidney bean | Internal crack detection | Matlab, histogram thresholding and morphological operations | [83] | ||
Muskmelon | Seed germination and vigor evaluation | LDA, accuracy = 98.9 % | [84] | ||
Tomato | Seed germination and vigor evaluation | Visual Studio | [85] | ||
MRI | High resolution; Preserves germination vigor; Long imaging time; High cost |
Wheat | Sugar allocation determination | Matlab | [86] |
Soybean | Water distribution observation | Stanford Graphics and PV-Wave programs | [87] | ||
Jatropha curcas | Fungal contamination identification | – | [88] | ||
Rice | Seed germination and vigor evaluation | IBM SPSS Statistics | [89] |
The main flow of extracting external seed phenotypes by machine vision instead of the naked eye is to capture an RGB image with a digital camera and then mine the required phenotype traits from the image. An RGB image records the apparent information of objects, such as color, morphology, and texture [90]. Over the past decades, researchers have invested significant effort in RGB image processing algorithms to extract as much information as possible, making machine vision develop rapidly. Based on the features of different classes, classification algorithms can achieve seed variety recognition [62], obvious damage detection [54], and seed purity analysis [51]. According to the discontinuity and similarity between pixels, segmentation algorithms can complete rapid seed counting [52], quality grading [56], and apparent size measurement [53]. Combined with object recognition and localization functions, detection algorithms can accurately identify each seed’s difference, position, and shape [50], [55], which provides decision support for seed sorting and promotes automation of the whole seed phenotype evaluation process. On this basis, machine vision based on the RGB imaging technique has been relied on to extract seeds’ external information, such as size, area, and quantity, with high throughput and automation, significantly improving the efficiency of seed testing.
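A segmentation-based counting step like the kernel counting cited above can be sketched as connected-component labeling on a thresholded binary mask. A toy pure-Python illustration (real pipelines operate on camera images after thresholding, and the mask below is invented for demonstration):

```python
def count_seeds(mask):
    """Count connected foreground regions (4-connectivity) in a binary
    mask, a minimal segmentation-based kernel-counting sketch."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]  # flood-fill this seed region
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# Toy binary mask with three separate seed blobs
mask = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 1, 0],
]
n = count_seeds(mask)  # 3
```

Touching kernels would merge into one component here, which is why published counting methods add watershed or gradient-based splitting on top of plain labeling.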
HSI is a high-resolution spectroscopy imaging technique that provides data covering a spectral range from 350 to 2,500 nm, containing dozens to hundreds of bands [91]. Compared with spectroscopy and RGB images, HSI technique can fuse spatial information and spectral information simultaneously and create 3D “hypercube” datasets composed of two spatial dimensions and a single spectral dimension [10]. Therefore, HSI data is generally analyzed using a combination of image processing and spectral analysis techniques. A typical HSI system consists of a hyperspectral optical sensor, a light source, and a control unit for measuring and analyzing images [91]. After the system obtains the image of the object in hundreds of wavelengths, the hyperspectral image is processed by several procedures, including calibration, noise reduction, region of interest (ROI) selection, segmentation, and spectral or spatial feature extraction [92]. In addition, the images from some spectra can reflect information that is difficult to observe with RGB images or the naked eye, especially in the infrared range [93]. HSI has excellent potential and performance in seed grading, vigor evaluation, disease identification, composition discrimination, and quality-related detection.
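The ROI-selection step of the HSI pipeline above reduces, at its simplest, to averaging pixel spectra over a region of the hypercube. A toy sketch on a hand-made cube (the values and ROI are illustrative; real cubes have hundreds of bands and come from calibrated sensors):

```python
def roi_mean_spectrum(cube, roi):
    """Average the spectra of pixels inside an ROI of a hypercube.
    cube[y][x] is the spectrum (list of band reflectances) at one pixel;
    roi is a set of (y, x) coordinates."""
    bands = len(cube[0][0])
    total = [0.0] * bands
    for (y, x) in roi:
        for b in range(bands):
            total[b] += cube[y][x][b]
    return [t / len(roi) for t in total]

# Toy 2x2 hypercube with 3 bands per pixel (illustrative reflectances)
cube = [
    [[0.1, 0.2, 0.3], [0.3, 0.4, 0.5]],
    [[0.2, 0.3, 0.4], [0.9, 0.9, 0.9]],
]
spectrum = roi_mean_spectrum(cube, {(0, 0), (1, 0)})  # left column as ROI
```

The resulting mean spectrum is what then enters the spectral feature extraction and modeling stages shared with conventional spectroscopy.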
All objects emit infrared radiation, and differences in the internal structure or composition of seeds lead to changes in heat transfer. The thermal imaging technique records the infrared radiation emitted by seeds using infrared detectors without the need for light sources [94]. Therefore, seed variety and quality can be evaluated intuitively, quickly, and non-destructively from a thermal image. X-ray micro-CT and MRI techniques are gradually being explored for precise evaluation of seed phenotypes, such as distinguishing seed quality and variety based on differences in internal structure and composition. According to the difference in X-ray absorption and transmittance of the object, a highly sensitive sensor can measure the signal and transmit it to a computer for data processing. Afterward, the reconstructed cross-sectional or 3D image of the object can directly reveal internal defects and changes in density and structure [80]. Nuclear magnetic resonance (NMR) is often used for the non-destructive detection of water content in objects. By detecting the vibration signal of hydrogen atoms in objects and imaging through computer processing, MRI can show the change and distribution of water content in seeds based on differences in signal density [95].
Traditional image processing pipelines involve steps such as image preprocessing, image segmentation, image enhancement, and feature mining, requiring professional knowledge and complex parameter tuning [96]. With the growing volume of data, these steps are time-consuming, especially for HSI data. Usually, the processing flow of HSI data includes data preprocessing (correction, noise reduction, ROI segmentation, etc.), feature extraction (features at the spectral scale, image scale, or from fused data), and model establishment for the target task (statistical methods, machine learning, deep learning, etc.) [93], [97]. Commonly used analysis software includes ImageJ (https://imagej.nih.gov/ij/) and Fiji (https://fiji.sc/) for image processing, and ENVI (https://www.nv5geospatialsoftware.com/Products/ENVI) for spectral imagery processing. With the improvement of computer performance, researchers focus on the computer’s understanding and learning of data, using computing power to extract effective information and train robust models with better performance. Deep learning is widely used to automatically extract image features, making the computer more intelligent [98]. Before model building, preprocessing of standardized data is necessary. As deep learning has high data requirements, data augmentation methods are often used to avoid overfitting [99]. Subsequently, the network architecture is defined or selected according to the target task. Researchers then train the model by tuning hyperparameters, including the learning rate, optimizer, and number of training epochs, to obtain optimal results. Beyond model training, deploying the model into practical applications to reach a wide range of users is also involved. Many researchers have paid attention to the practical application of deep learning and proposed a series of lightweight networks designed for mobile devices, such as MobileNet [100], [101], [102] and ShuffleNet [103], [104]. Lightweight networks retain model performance at a fraction of the size, making them portable to mobile terminals and further accelerating the application of deep learning models and the digital extraction of field phenotypes.
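The data augmentation step mentioned above can be sketched with simple geometric transforms. A minimal pure-Python illustration on a tiny 2D array standing in for a seed image (production pipelines use imaging libraries and richer transforms such as random crops and color jitter):

```python
def augment(image):
    """Generate simple augmentations of a 2D image (horizontal flip,
    vertical flip, 180-degree rotation), a common way to enlarge small
    seed datasets and reduce overfitting."""
    h_flip = [row[::-1] for row in image]        # mirror left-right
    v_flip = image[::-1]                         # mirror top-bottom
    rot180 = [row[::-1] for row in image[::-1]]  # rotate 180 degrees
    return [image, h_flip, v_flip, rot180]

img = [
    [1, 2],
    [3, 4],
]
variants = augment(img)  # 4 training samples from 1 original
```

Flips and rotations are safe for seeds imaged from above because a kernel's class does not depend on its orientation; augmentations must be chosen so they do not alter the label.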
The discussion of digital imaging techniques for digitizing seed phenotypes involves two parts: hardware and models. With the decreasing cost of RGB cameras and the development of sensors, acquiring high-quality RGB images has become more convenient, making RGB imaging a widely used method to acquire seed traits. On the other hand, HSI cameras are costly and generally have lower image quality than RGB cameras. In addition, images collected by HSI cameras often need complex preprocessing steps, which demand considerable computing expertise. Therefore, improving the efficiency of HSI cameras for obtaining seed phenotypes in the field is worth further exploration [105], for example by integrating compact devices and developing end-to-end models. Thermal infrared images also have low resolution and are easily disturbed by ambient temperature; thermal infrared cameras can only detect defects or abnormal states that cause noticeable temperature differences, which makes detecting damage in a single tiny seed difficult. As for X-ray micro-CT and MRI instruments, their application to seed phenotypes is mainly in the exploratory stage because of high cost and complex operation processes. Even though deep learning has achieved significant advances in image processing, its excellent performance cannot be separated from the support of rich datasets and powerful computing capabilities, so it faces the challenges of high hardware cost and difficult online field detection when applied in agriculture [106]. In addition, the results of deep learning models often lack interpretability, which makes it difficult to diagnose the root cause of failures.
3D reconstruction techniques and data processing algorithms
The 3D reconstruction technique is used to replicate 3D structures of real objects in computers and is gaining increasing attention [107]. The principles of 3D reconstruction are divided into active vision and passive vision. The active vision method transmits energy sources such as laser and electromagnetic waves to the target object and obtains the depth information of the object through the returned signal, which can be represented by the time-of-flight (TOF) method [108] and structured light method [109]. The passive vision approach collects the multi-directional projection image of the target object using cameras and then reconstructs the object’s 3D information by image measurement and the correlation of feature information, which can be represented by multi-view stereo (MVS) [110]. Based on the above methods and theories, researchers have developed a variety of 3D reconstruction instruments and platforms. Among the four main output and presentation forms (i.e., depth image, point cloud, mesh, and volumetric grid) of 3D models, the point cloud is the preferred representation for extracting model structure parameters from 3D data [111]. In the field of agriculture, 3D laser scanners, MVS, and TOF cameras are commonly employed to obtain 3D point cloud data of plants, extract the plant skeleton, and achieve morphological feature measurement (e.g., stem major axis, stem minor axis, stem height, leaf length, and leaf angle) [112], [113], [114], [115].
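Extracting morphological parameters from a point cloud, as described above, can be sketched at its simplest as computing axis-aligned extents. A toy illustration (real pipelines first denoise, align, and segment the cloud; the points below are invented):

```python
def bounding_dims(points):
    """Axis-aligned extents (x, y, z ranges) of a 3D point cloud,
    a first step toward size and volume traits from 3D data."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

# Toy point cloud for a seed-like object (units arbitrary)
cloud = [(0.0, 0.0, 0.0), (8.0, 3.0, 1.0), (4.0, 4.5, 2.0)]
dims = bounding_dims(cloud)  # (8.0, 4.5, 2.0)
```

Because the axes of an arbitrarily posed object rarely align with the sensor frame, practical tools first rotate the cloud (e.g., via principal axes) before reading off length, width, and height.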
The 3D laser scanner uses an active light source and can quickly obtain high-precision, high-resolution point cloud data without being limited by the lighting environment [116]. 3D laser scanners usually come with matching platforms that support real-time construction and viewing of the model, although calibration steps and repeated scanning are necessary to obtain high-precision point cloud data. The MVS method directly acquires projected images of an object from multiple camera views, followed by several procedures, including camera calibration, feature extraction, and feature matching, to reconstruct the model [117]. Some open-source platforms, such as COLMAP [118], have integrated these steps and support automatic 3D reconstruction. The MVS method has the advantages of low cost, simple image acquisition, and high point cloud precision; however, it is highly dependent on object features and unsuitable for smooth or texture-poor objects. A TOF camera continuously sends laser pulses to the object and measures its distance through the round-trip light time [119]. This method has high anti-interference capability; moreover, it is not affected by the object's surface characteristics and can quickly compute depth information in real time.
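The TOF principle above reduces to simple arithmetic: the pulse travels to the object and back, so depth is half the round-trip distance. A minimal sketch (the function name is illustrative, not from any cited device):

```python
# Sketch: converting a pulsed TOF sensor's measured round-trip time into
# a depth value. The constant and formula are standard physics.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(round_trip_time_s: float) -> float:
    """Depth = c * t / 2, since the pulse travels out and back."""
    return C * round_trip_time_s / 2.0

# A pulse returning after 2 ns corresponds to an object ~0.3 m away.
print(round(tof_depth(2e-9), 3))  # → 0.3
```

The nanosecond scale of these times is why TOF cameras rely on specialized timing circuitry rather than frame-rate image differencing.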
Because of the small size of some seeds, such as rapeseed, rice, and Arabidopsis, 3D reconstruction setups designed for large scenes lack the resolution to depict seed details [120]. As indicated in Table 3, current research on 3D seed reconstruction focuses mainly on the 3D laser scanner and MVS [121], [122], [123], [124]. Compared with an image, a 3D model offers a better understanding of the real object [111] and allows its size, volume, surface area, and other external properties to be obtained more conveniently at the 3D level. 3D models can also be used to construct 3D seed phenotype databases, providing more references for breeding.
Table 3.
Application of 3D reconstruction techniques for seed phenotype digitization.
| Optical sensor | Pros and cons | Seed | Application | Method and performance | Ref |
|---|---|---|---|---|---|
| 3D laser scanner | High precision and resolution; not limited by lighting environment; cumbersome to operate; time-consuming data collection | Rice | 3D geometric characteristic modeling | Geomagic Studio | [122] |
| | | Rice | Surface shape feature measurement | Geomagic Studio, accuracy > 98 % | [125] |
| | | Maize | 3D geometric characteristic modeling | – | [126] |
| | | Legume | 3D modeling and trait measurement | Visual Studio | [121] |
| | | Grape | Variety discrimination | Cluster analysis and PCA | [123] |
| MVS | Convenient image acquisition; high color reproduction; time-consuming computation; not suitable for texture-poor objects | Maize | 3D geometric characteristic modeling | Visual Studio, Matlab and Python, relative error = 0.1 % | [120] |
| | | Arabidopsis, rapeseed, barley | 3D modeling and trait measurement | Matlab, R² = 0.492–0.994 | [124] |
The processing of 3D data should be matched to the types and properties of the data. For example, a point cloud can contain rich information about the object: the primary 3D coordinates of each point, color and normal vector information from photogrammetric measurement, and laser reflection intensity from laser measurement. Noise and outliers are unavoidable in data acquisition and modeling; therefore, removing noise from the raw 3D data is necessary before analysis [127], [128]. 3D data processing software such as CloudCompare (https://www.cloudcompare.org/) and MeshLab (http://www.meshlab.net/) is commonly employed to perform essential processing steps, including noise removal, point cloud segmentation, and apparent information extraction. These steps play a crucial role in enhancing the accuracy and comprehensiveness of 3D data analysis. Driven by the pursuit of evaluation efficiency and automation, the development of deep learning algorithms has also promoted research on point-cloud-based 3D processing algorithms. The goals of processing point cloud data are similar to those for 2D images, including 3D shape classification, point cloud segmentation, and 3D object detection [111]. Given the unstructured and disorderly nature of the 3D point cloud, indirect and direct processing methods have been developed for extracting information. The indirect method first converts the point cloud data into a structured voxel grid [129] or into 2D images from multiple views [130]; the converted data is then fed into a relatively standard and mature 2D convolutional neural network (CNN). The indirect method overcomes CNN's inability to analyze point cloud data, reduces the point cloud's dimensionality, and achieves greater accuracy at a lower computational cost. The direct method retains the complete geometric information by inputting the point cloud directly into the network. Representative frameworks include PointNet++ [131] and DCCNN [132], which can fully learn the characteristics of each point and minimize the loss of spatial information.
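The noise-removal step mentioned above is often implemented as statistical outlier removal: points whose mean distance to their nearest neighbors is unusually large are discarded. A minimal numpy sketch of this classic filter (a brute-force illustration of the idea, not the algorithm of any specific tool):

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    more than `std_ratio` standard deviations above the global mean.
    Brute-force O(n^2) version for clarity; real tools use k-d trees."""
    diff = points[:, None, :] - points[None, :, :]        # pairwise vectors
    dists = np.sqrt((diff ** 2).sum(-1))                  # pairwise distances
    # mean distance to the k nearest neighbours (skip self at column 0)
    knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    thresh = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= thresh]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))                 # dense cluster
cloud = np.vstack([cloud, [[50.0, 50.0, 50.0]]])  # one far-away outlier
clean = remove_statistical_outliers(cloud)
print(len(cloud), len(clean))  # the single outlier is removed
```

Tools such as CloudCompare expose the same two knobs, the neighborhood size and the standard-deviation multiplier, so this sketch maps directly onto their parameter dialogs.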
Compared with the digital imaging technique, 3D reconstruction can present the seed phenotype more accurately, but small seed sizes may require high-resolution 3D reconstruction instruments, which increases the cost. When a point cloud is collected, repeated scanning or multi-view data acquisition is often needed to ensure high-quality reproduction of the object at the 3D level. Such delicate collection makes it challenging to meet the demand for phenotype evaluation of large numbers of seeds, so the approach is more suitable for single-seed testing. The 3D data processing algorithms also have disadvantages: the indirect method cannot avoid losing crucial information during conversion, which may reduce model performance, while the direct method suffers from complex network models and heavy computation, which need continuous optimization. In addition, an automated pipeline from seed 3D modeling to phenotype extraction has not yet been established, which further hinders the application of the 3D reconstruction technique in seed phenotype digitization [107]. In recent years, 3D reconstruction research for seed phenotyping in agriculture has evolved, and additional effort is needed to investigate equipment and algorithms with adequate performance and cost-effectiveness for practical applications.
Generally, optical sensor-based techniques for digitizing seed phenotypes involve two main parts: capturing and recording phenotypes through optical sensors, and analyzing and mining the phenotypes with relevant algorithms. Therefore, improving the efficiency of phenotype acquisition and the accuracy of analysis is the primary research objective of seed phenotype digitization. To date, many studies have used multi-source data fusion to obtain diverse phenotype data [133] and enhanced seed phenotype mining by combining state-of-the-art algorithms [55], [134]. Moreover, sensors and algorithms have been integrated into equipment to achieve automated seed phenotype acquisition [124], and phenotyping platforms have been adopted to achieve multi-scale and high-throughput seed phenotype evaluation [135], [136]. These studies provide valuable solutions to the bottlenecks in the development of seed phenotype digitization and bring more precise references for breeding decision-making.
Seed phenotype digitization – A case of maize ear
As one of the three major food crops in the world, maize (Zea mays L.) is an essential source of food and industrial raw materials. The evaluation of maize ear phenotype plays a vital role in maize breeding and yield determination, leading to numerous studies. In this section, the applications of digital techniques based on optical sensors in seed phenotype are summarized and discussed according to the studies on maize ear and kernel, aiming to provide a reference for the overall development of seed phenotype digitization.
Maize seed testing includes the evaluation of maize ear, maize ear cross-section, and maize kernel/seed phenotype; since maize ear includes maize ear cross-section and maize kernel, their phenotypes are grouped as maize ear phenotype. The seed phenotype traits can be generally divided into external visible phenotype (EVP) and internal invisible phenotype (IIP) according to the phenotype properties of the maize ear.
The EVP traits of maize ear are related to original apparent and exogenous infection traits. Original apparent traits refer to physical morphological indexes of maize ear, such as apparent color, weight, shape (e.g., roundness, flatness), size (e.g., ear length, width, projected area, volume), and number traits (e.g., the number of rows, maize kernels). Exogenous infection traits refer to the obvious external state changes of maize ears affected by diseases, insect pests, and mechanical damage [50]. The digitization of EVP can assist in evaluating apparent size characters, variety classification, quality detection, and disease classification. Since the EVP traits of the maize ear have prominent surface characteristics, which are usually visible to the naked eye, RGB imaging, hyperspectral/multispectral imaging, and 3D reconstruction techniques are popular in analyzing seed EVP traits due to their different imaging/modeling principles.
The IIP traits of maize ear concern the biochemical composition and physiological structure of seeds. Biochemical composition traits include protein, water, starch, oil content, fatty acids, etc., while physiological structure traits refer to the microstructure of the maize seed, including the embryo, endosperm, and cavities [19], [137]. Through the digitization of IIP, the nutrient content, seed vigor, pesticide residues, and internal damage of maize ears can be measured to guarantee maize seed quality, food safety, and breeding efficiency. IIP traits are generally invisible and difficult to infer from appearance. Traditional biochemical analysis of maize ear IIP is slow and requires complex pretreatment; moreover, it destroys the seed, which may leave no seed to sow in the next season, so it is not advisable for quality identification and selection of early-generation materials in breeding [73]. Currently, spectroscopy and spectral imaging techniques are the primary methods for digitizing the IIP traits of maize ears.
By dividing the phenotypes into external visible and internal invisible, we can efficiently match the appropriate phenotype digitization techniques in terms of the characteristics of these two phenotypes and promote the development of the whole process automation of seed testing. Fig. 2 depicts the primary matching digital techniques and applications for different maize ear phenotypes.
Fig. 2.
Techniques and applications for maize ear phenotype digitization. Maize ear phenotype traits can be divided into external visible phenotype (including original apparent and exogenous infection traits) and internal invisible phenotype (including biochemical composition and physiological structure traits). The refined phenotype can be effectively extracted and evaluated by matching digital techniques to promote the whole process of digitizing the seed phenotype. Abbreviations: EVP, external visible phenotype; IIP, internal invisible phenotype.
Original apparent traits digitization of maize ear
Digital imaging and 3D reconstruction techniques can record the color, shape, texture, size, and number information of maize ears. Built around RGB cameras, numerous digital, automatic tools for evaluating the original apparent features of maize ear are on the market, mostly serving laboratories and seed companies. They can detect basic parameters such as maize ear weight, ear length, ear diameter, bald tip length, ear rows, kernel number per row, kernel number, kernel length, kernel width, kernel thickness, and 100-seed weight [53], [138]. Specific indicators related to the evaluation demand can be calculated from these basic parameters, for example, the length-width ratio, volume, area, perimeter, and bald tip ratio of the maize ear [139]. The digitization process for the original apparent traits of a maize ear involves placing the ear into the seed testing instrument, RGB image acquisition, parameter extraction, ear weighing, and ear threshing, followed by RGB image acquisition of the maize kernels, kernel weighing, and packaging [140]. This process obtains most EVP traits of maize ears at high throughput. In addition, specific size traits of a maize ear can be calculated with professional image analysis software, such as ImageJ [50], [141], which can extract statistical information from a region of interest (ROI). The weight of a seed is determined by both its volume and density, yet imaging tools can only estimate its size from pixel counts. Makanza et al. [141] attempted to estimate maize ear weight directly from RGB images, but because of varying moisture content in maize seeds, the weight estimation model was neither universal nor as reliable as an electronic scale.
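Deriving the composite indicators above from the instrument's basic parameters is plain arithmetic. A minimal sketch (the field names and the cylinder approximation for ear volume are illustrative assumptions, not the formulas of any cited instrument):

```python
import math

def derived_indicators(ear_length_cm, ear_diameter_cm, bald_tip_length_cm):
    """Composite ear indicators from basic measured parameters.
    Volume uses a simple cylinder approximation of the ear."""
    ratio = ear_length_cm / ear_diameter_cm                         # length-width ratio
    volume = math.pi * (ear_diameter_cm / 2) ** 2 * ear_length_cm   # cylinder approx.
    bald_tip_ratio = bald_tip_length_cm / ear_length_cm
    return {"length_width_ratio": ratio,
            "volume_cm3": volume,
            "bald_tip_ratio": bald_tip_ratio}

# Example: an 18 cm ear, 4.5 cm in diameter, with a 1.2 cm bald tip.
print(derived_indicators(18.0, 4.5, 1.2))
```

Because these indicators are ratios and products of pixel-derived lengths, a single calibration of pixels to centimeters is enough to report them in physical units.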
Accurately and quickly counting the kernels of a maize ear is crucial for estimating maize yield in the current season [142]. With the improvement of camera image quality, researchers have applied evolving image processing algorithms to perform in-situ counting of maize kernels. Khaki et al. [143] took single-side images of maize ears under uncontrolled lighting, then used a sliding window and a CNN classifier to complete the counting task, achieving an accuracy of 91.84 % within 5.79 s per ear. Shi et al. [144] used a developed network and a row mask generator to generate density maps from a single image and inferred kernels per ear, rows per ear, and kernels per row, achieving mean absolute errors of 7.48, 0.32, and 1.07, respectively. Different maize cultivars and field management practices lead to differences in the appearance of mature maize ears; especially when the kernel arrangement is irregular, estimating the total number of kernels from a single-side image yields large errors. To count kernels accurately and quickly, it would be more practical to use multi-side images covering the whole maize ear surface for predicting the number of maize kernels.
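As a simplified, classical stand-in for the learned counters above, kernels that segment into separate bright regions can be counted with a connected-component flood fill over a binarized image. This pure-Python sketch shows only the counting step on a toy binary mask; real ears need segmentation and, as the cited works show, a learned model to handle touching kernels:

```python
# Count 4-connected blobs in a binary mask (1 = kernel pixel).
# A toy stand-in for CNN-based kernel counting, for illustration only.

def count_blobs(mask):
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]          # flood-fill one blob
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

toy = [[1, 1, 0, 0, 1],
       [1, 0, 0, 0, 1],
       [0, 0, 1, 0, 0]]
print(count_blobs(toy))  # → 3
```

The limitation that motivates CNN and density-map approaches is visible even here: two touching kernels form one connected region and would be counted once.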
The evaluation performance for a maize ear's original traits correlates strongly with imaging resolution, since high-resolution images carry clear texture and shape information. However, compared with RGB images, the spatial resolution of HSI data is low, and the cost, portability, and operability of HSI cameras are also less competitive. These limitations may explain why few studies have applied HSI cameras to digitizing the original apparent features of maize ears.
Research on 3D reconstruction techniques for digitizing the maize ear phenotype has emerged in recent years. Wen et al. [126] used a 3D scanner to obtain high-resolution 3D point cloud data of ears and seeds by rotating the maize ear in three directions. Unlike a dimensionally reduced 2D image, which deviates from the actual object, the 3D model preserves precise details such as object type, morphological parameters, connections, and spatial coordinates [115], and shows the appearance characteristics of the measured object more intuitively and stereoscopically. However, judging from the volume of relevant literature, the application of the 3D reconstruction technique to maize ear phenotyping is still in the exploratory stage. Jahnke et al. [124] introduced a device named phenoSeeder that uses MVS 3D reconstruction to model seeds, determining seed volume and calculating seed length, width, and height; the results showed good repeatability for both rapeseed and barley seeds. The 3D reconstruction technique therefore has great potential and research value for the digitization of EVP.
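The length/width/height measurement reported from a seed model can be approximated, in the simplest case, by the extents of a bounding box aligned to the seed's principal axes. A minimal numpy sketch under that assumption (an illustration of the idea, not phenoSeeder's actual algorithm):

```python
import numpy as np

def bbox_dimensions(points):
    """Length/width/height of a seed point cloud after PCA alignment,
    so the dimensions follow the seed's own principal axes rather than
    the scanner's coordinate frame."""
    centered = points - points.mean(axis=0)
    # rotate into the principal-axis frame via SVD
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    aligned = centered @ vt.T
    extents = aligned.max(axis=0) - aligned.min(axis=0)
    return np.sort(extents)[::-1]   # sorted: length >= width >= height

# Toy cloud: the eight corners of a 4 x 2 x 1 box.
corners = np.array([[x, y, z] for x in (0, 4) for y in (0, 2) for z in (0, 1)], float)
print(bbox_dimensions(corners))  # ≈ [4. 2. 1.]
```

PCA alignment is what makes the measurement pose-invariant: the same seed scanned at a different orientation yields the same three extents.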
Exogenous infection traits digitization of maize ear
The apparent exogenous infection traits reveal the health state of maize ears. Indeed, the original apparent traits, such as color, shape, and texture of maize ear will change in case of infection (diseases and pests), mechanical damage, storage, and marking. These traits can be extracted from maize ear’s RGB images and used as features to detect and sort the obvious aberrant states or differences. For example, variety classification in terms of morphological and color differences of maize seeds [51], detection of defective seeds/ears due to apparent unavoidable damage during mechanical harvest [54], [145], identification of moldy seeds because of color differences caused by mold infection during storage [146], and mutation gene transfer tracking based on visible endosperm markers [147]. According to the differences in EVP traits, combined with the image processing algorithms, the detection and marking of maize variety, quality, and abnormality can be completed rapidly and automatically.
Compared to RGB imaging, which captures only three wavelengths, HSI reflects more information in the spectral dimension, providing additional references for identifying exogenous infection traits [105]. This is particularly useful for the early detection of fungi in maize kernels. Williams et al. [148] assessed the feasibility of using near-infrared hyperspectral imaging (NIR-HSI) and hyperspectral image analysis models to detect fungal contamination and activity in maize kernels before the appearance of visual symptoms. Chu et al. [149] showed that both object-wise and pixel-wise methods based on NIR-HSI can classify fungi-infected maize kernels, and that pixel-wise methods are useful for visualizing the extent of disease infection. Combining sparse auto-encoders and CNN algorithms, Yang et al. [150] distinguished four grades of moldy kernels using hyperspectral imaging and achieved high correct recognition rates of 99.47 % and 98.94 % for the training and testing sets, respectively. Compared with HSI, a multispectral imaging (MSI) camera captures images only at wavelengths selected for specific applications, giving a lower data volume and higher processing efficiency while retaining good detection capability [10]. Wang et al. [151] applied an MSI camera to form RGB and NIR images of maize seeds for defect detection; with a two-pathway CNN model, the average detection accuracy improved by 1.83 % over RGB images alone. Nevertheless, HSI/MSI data inevitably need more processing steps and computing resources to mine principal component features. Considering the low cost-effectiveness of an HSI camera, the balance between cost and benefit must be weighed comprehensively when applying the HSI technique.
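The wavelength selection that separates MSI from HSI can be sketched as a band-ranking problem: score each band by how well it separates two classes (e.g., healthy vs. infected kernels) and keep the top few. The Fisher-style score below is a common chemometric heuristic, not a method from the cited studies, and the data are synthetic:

```python
import numpy as np

def rank_bands(healthy, infected):
    """Rank spectral bands by a Fisher-style separability score:
    (difference of class means)^2 / (sum of class variances).
    `healthy` and `infected` are (samples x bands) reflectance arrays."""
    m1, m2 = healthy.mean(0), infected.mean(0)
    v1, v2 = healthy.var(0), infected.var(0)
    score = (m1 - m2) ** 2 / (v1 + v2 + 1e-12)
    return np.argsort(score)[::-1]          # best band first

rng = np.random.default_rng(1)
healthy = rng.normal(0.5, 0.02, size=(60, 10))
infected = rng.normal(0.5, 0.02, size=(60, 10))
infected[:, 3] += 0.3                       # band 3 carries the infection signal
print(rank_bands(healthy, infected)[0])     # → 3
```

Selecting a handful of top-ranked wavelengths is precisely what lets an MSI camera keep most of the discriminative power of a full hyperspectral cube at a fraction of the data volume.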
When the 3D reconstruction technique is employed to analyze the EVP traits of maize ears, the phenotype results are more detailed and reliable [152]. Compared to RGB-based evaluation of exogenous infection traits, the 3D reconstruction technique can record the arrangement and features of maize ears and seeds, yielding more comprehensive results. Besides, 3D point cloud processing algorithms need to be adapted for evaluating maize ear EVP traits, which will promote the digitization and automation of maize ear evaluation [153]. Developing 3D assessment indexes based on 3D processing methods to replace traditional estimates will also enable more accurate maize ear quality analysis, for example, by using the proportion of diseased surface area instead of a simple disease grade.
Biochemical composition traits digitization of maize ear
Spectroscopy techniques have been widely used in the qualitative and quantitative analysis of various chemical components in maize seeds. NIR spectroscopy is often used to analyze the component concentration or quality parameters of compounds and their mixtures, with many applications in determining nutritional substances such as maize kernel protein [154], starch [155], oil [154], [156], fatty acids [157], and carotenoids [158]. Compared with NIR spectroscopy, compact Raman spectroscopy combined with chemometrics offers fast, low-cost, field-capable detection of maize kernel components (e.g., endosperm, germ, and peel) [19]. The sensitivity and specificity of detection can be further improved by surface-enhanced Raman spectroscopy (SERS), which relies on electromagnetic and chemical enhancement mechanisms [159]. SERS is often used for the rapid determination of trace residues in agricultural products; for example, the detection limit of fenitrothion in maize could reach 0.48 μg·mL−1 with SERS, which is below China's maximum residue limit for crops and demonstrates the high sensitivity of SERS [160]. THz spectroscopy is a highly competitive emerging detection technique that has been studied for determining components, detecting toxins, and analyzing the viability of maize seeds [161]. However, since most current research objects are solutions or powders, non-destructive detection remains difficult [20]. Fluorescence spectroscopy can also detect toxin concentrations in maize seeds, but it is easily disturbed by the background because many substances emit fluorescence; laser-induced fluorescence spectroscopy (LIFS) can enhance the sensitivity of aflatoxin measurements [162], promoting the application of fluorescence spectroscopy. The nuclear magnetic resonance (NMR) technique can infer the water and oil content of maize seeds from the hydrogen proton vibration signal. This technique not only leaves the seed undamaged but also maintains its germination vigor, making it especially suitable for viability detection [51], [163]. The low-field NMR technique can further reflect the binding degree between seed moisture and macromolecular substances to distinguish bound water, semi-bound water, and free water, and has been widely used to study the water phase, distribution, and migration in food and crops [164].
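Before building the chemometric calibration models mentioned above, raw NIR or Raman spectra are usually preprocessed to suppress scatter effects; standard normal variate (SNV) transformation is one widely used step. A generic numpy sketch (common chemometric practice, not a procedure taken from the cited studies):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum by its own
    mean and standard deviation, a common chemometric step to suppress
    scatter effects before building NIR/Raman calibration models."""
    spectra = np.asarray(spectra, float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

raw = np.array([[1.0, 2.0, 3.0, 4.0],
                [10.0, 20.0, 30.0, 40.0]])   # same shape, different scale
corrected = snv(raw)
# After SNV both spectra coincide: the multiplicative scatter is removed.
print(np.allclose(corrected[0], corrected[1]))  # → True
```

Because each spectrum is normalized against itself, SNV needs no reference spectrum, which is part of why it pairs well with portable, field-deployed spectrometers.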
With the added spatial dimension, spectral imaging techniques can reflect not only chemical components but also their distribution and change. For instance, the moisture change of maize seed during drying can be predicted by the NIR-HSI technique [165], the distribution of components in the maize kernel can be visualized by the Raman HSI technique [166], and the distribution and migration of water in maize seeds during storage can be demonstrated by the MRI technique [95]. The THz spectral imaging technique can extract the spectral information of different tissues of maize seeds [161], making up for the limitation of THz spectroscopy. In addition to measuring internal components, HSI and fluorescence imaging techniques also perform well in the early and rapid detection of aflatoxin infections in maize seeds [167], [168], [169]. Based on differences in internal composition and distribution, combined with data processing algorithms, the variety, freshness, hardness, viability, and damage of maize seed can be further detected and classified. Because of the internal component differences between new and old seeds, maize seeds from different years can be distinguished [170], [171]. Heating changes the physical morphology and chemical composition of maize seeds, which can be used for the automatic detection of heat-damaged seeds [172]. The hardness of maize seeds can be classified because of the hardness difference caused by distinct endosperm composition [173], and seed viability can be detected according to changes in starch and protein content after seed aging [174]. Besides, rapid sorting of haploid maize kernels can rely on the significant difference in oil content between diploid and haploid kernels induced by high oil [163].
Physiological structure traits digitization of maize ear
The microscopic 3D structure of the maize kernel is an essential part of the IIP of the maize ear, reflecting its inner characteristics at the tissue level [80]. As a typical 3D tomography technique, X-ray micro-CT reconstructs cross-sections or 3D images of object tissue from the attenuation differences of X-rays passing through plant tissue, allowing the 3D structure and function of maize seed to be identified and analyzed from a microscopic perspective [175]. Guelpa et al. [81] determined the density of vitreous and floury endosperm of maize kernels by micro-CT and achieved non-destructive grading of kernel hardness with an accuracy of 88 %. Kernel density and subcutaneous cavity volume are crucial factors affecting kernel breakage rate; both can be obtained by the X-ray micro-CT technique and then used to predict the breakage rate of maize varieties [176]. Gargiulo et al. [177] suggested that X-ray exposure can lead to abnormal seedlings; consequently, attention to possible side effects is essential in living-seed research with X-ray micro-CT. In addition, since existing micro-CT image processing involves many manual interactions, which are time-consuming and inefficient, it is challenging to detect maize kernels in the field, nor can the technique meet the demand for high-throughput acquisition of internal micro-phenotype characteristics. Even so, micro-CT has great value in supporting research on variety classification, quality detection, and kernel position effects, completing precise identification of the seed phenotype at the micro level through the analysis of maize kernel traits such as the embryo, endosperm, cavity, subcutaneous cavity, embryo cavity, and endosperm cavity [80], [137].
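The attenuation differences that micro-CT reconstructs from follow the Beer-Lambert law, I = I0·exp(-μx). A tiny sketch of why denser vitreous endosperm shows contrast against floury endosperm (the attenuation coefficients are illustrative values, not measured data):

```python
import math

def transmitted_intensity(i0, mu, thickness):
    """Beer-Lambert law: I = I0 * exp(-mu * x). CT reconstruction inverts
    many such line integrals to recover the attenuation map mu(x, y, z)."""
    return i0 * math.exp(-mu * thickness)

# Illustrative (not measured) attenuation coefficients, in 1/mm:
mu_floury, mu_vitreous = 0.05, 0.09
i_floury = transmitted_intensity(100.0, mu_floury, 5.0)
i_vitreous = transmitted_intensity(100.0, mu_vitreous, 5.0)
# Denser vitreous endosperm transmits less; that difference is the
# contrast micro-CT exploits to grade kernel hardness.
print(i_floury > i_vitreous)  # → True
```

Since density grading rests on this contrast, calibrating grey values against materials of known density is what turns a reconstructed volume into the quantitative density maps used in the hardness studies cited above.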
In summary, for evaluating EVP traits, the RGB camera is the main tool for digitizing the EVP traits of maize ears due to its low cost, convenient acquisition, and the rapid development of image processing algorithms. The HSI technique can provide richer information than RGB images by fusing spectral and spatial information, which may improve performance in evaluating exogenous infection traits. The 3D reconstruction technique provides a convenient and promising way for researchers to access, record, and store phenotypes at any time; 3D reconstruction and data processing algorithms are urgently needed to meet the demands of digitizing maize ear morphological traits [153]. The digitization of maize ear IIP traits, meanwhile, is mainly based on spectroscopy and its imaging techniques, which can obtain the internal chemical components of maize ears non-destructively and quickly. The HSI technique is popular for digitizing IIP traits due to its rich information and universality across various IIP traits. SERS, THz spectroscopy/imaging, NMR, and X-ray micro-CT have particular advantages, respectively, in trace pesticide residue detection, component and toxin determination, water and oil content and in vivo detection, and internal structure measurement, according to their detection characteristics. Given the diversity and particular advantages of these techniques, precision, efficiency, and cost-effectiveness should be weighed comprehensively to select suitable detection methods based on the requirements of phenotype digitization.
Concluding remarks and prospects
The richness of crop resources worldwide is reflected in the wide range of seed varieties, resulting in a need to evaluate complex seed phenotypes. Although plentiful phenotype digitization techniques have been employed for the research of seed phenotypes, there is still much work to be done to match the requirements of various seed phenotypes. This review aimed to lessen the gap by providing an overview of standard optical sensors and related data processing algorithms for the digital evaluation of seed phenotype. In addition, it discusses corresponding digital techniques for each set of seed phenotype characteristics, using maize ear as a case study. In order to make the digital techniques better meet the demands of seed phenotype in practical applications, further support and research can be considered in phenotype digitization equipment and platform, phenotype data, and processing algorithms.
To effectively apply seed phenotype digitization techniques in practical settings and derive direct benefits for seed phenotype evaluation, it is essential to develop phenotype digitization devices and platforms that offer high performance and cost-effectiveness. These devices should be easy to operate, have stable performance, support high-throughput acquisition for high efficiency, and be portable for field testing; miniaturization is accordingly one of the development trends of portable spectrometers [178]. At the same time, phenotype devices need to be integrated to support the detection of multiple phenotypes, reducing the seed waste and time cost caused by frequently switching devices or repeating detections. In addition, much attention has been paid to the field detection of seed phenotypes based on unmanned aerial vehicles (UAVs) [135], which can further promote the efficient acquisition and multi-scale analysis of seed phenotypes. Considering the widespread use of mobile phones and the superior recording capability of RGB images for EVP, it is worth considering how to obtain seed appearance, texture, quantity, and status information in the field through mobile phones or other mobile devices, which also involves deploying lightweight models and microspectrometer systems on such devices.
The high-throughput phenotype digitization techniques make the phenotype data expand rapidly. Establishing a standard, high-quality, and shared phenotype database can effectively promote the analysis and mining of phenotype data and maximize the utilization of phenotype data resources. On the other hand, various phenotype digitization techniques produce different formats of phenotype data. Therefore, research on the fusion of heterogeneous data can also enhance the systematic evaluation of seed phenotypes and has the potential to achieve the integrated evaluation of seed external phenotypes, internal components, and structures.
With the emergence of data-driven processing algorithms, it is important to focus on the interpretability and universality of evaluation models in the agricultural field. Taking the seed variety classification as an example, mining the classification features learned by the deep learning algorithms and the attention to the task can promote researchers’ understanding of the inherent differences in seeds of different varieties. Meanwhile, the classification weight and score of various classes can further refine the variety differences [179]. As for universality, an excellent and effective model should be compatible with complex and diverse application scenarios, particularly in the agricultural environment. It is important to note that many models exhibit excellent performance only on the datasets used in related studies. When these models are applied to new locations or different varieties, they often require retraining from scratch, which can be inefficient and lead to a waste of computational resources. To address this issue, dataset construction should consider the possible diverse data in the task to ensure the stability and robustness of the model in practical applications. Additionally, transfer learning can be employed to leverage pre-training parameters from similar tasks, reducing model fitting time and improving training efficiency [180].
It is worth highlighting the generality of dividing seed phenotypes and evaluating them with optical sensor-based digital techniques. These methodologies extend beyond maize ears and can be effectively employed to digitize diverse seed phenotypes in other contexts, including rice seeds, wheat seeds, rapeseeds, and more. We believe that the continuous development of optical sensors and data processing methods will help us understand seeds better and pave the way for the digitization, standardization, and automation of seed phenotype evaluation.
CRediT authorship contribution statement
Fei Liu: Conceptualization, Methodology, Writing – review & editing. Rui Yang: Writing – original draft, Investigation, Visualization. Rongqin Chen: Writing – review & editing, Visualization. Mahamed Lamine Guindo: Writing – review & editing. Yong He: Conceptualization, Resources. Jun Zhou: Investigation, Validation. Xiangyu Lu: Investigation, Validation. Mengyuan Chen: Investigation, Validation. Yinhui Yang: Conceptualization, Writing – review & editing. Wenwen Kong: Supervision, Writing – review & editing.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements
This work was supported by Science and Technology Department of Zhejiang Province (2022C02034, 2021C02023) and Science and Technology Department of Shenzhen (CJGJZD20210408092401004).
Biographies
Fei Liu received his Bachelor degree in Agricultural Mechanization and Automation from China Agricultural University, Beijing, China, in 2006, and Ph.D. degree in Biosystems Engineering from Zhejiang University, Hangzhou, China, in 2011. He is currently a professor at Zhejiang University and the deputy director of the Institute of Agricultural Information Technology, Zhejiang University. He is mainly focused on sensing technology and equipment development for smart agriculture, including agricultural IoT, remote sensing using UAS, and hyperspectral imaging and LIBS for soil-plant-agricultural products information detection.
Rui Yang received the Bachelor degree in Agricultural Architectural Environment and Energy Engineering from China Agricultural University, Beijing, China, in 2020. She is currently a Ph.D. candidate in the College of Biosystems Engineering and Food Science, Zhejiang University. Her research interests include nondestructive detection of plant phenotype and remote sensing in agriculture.
Rongqin Chen received the Bachelor degree in Food Science and Engineering from Fujian Agriculture and Forestry University, Fuzhou, China, in 2019. She is currently a Ph.D. candidate in the College of Biosystems Engineering and Food Science, Zhejiang University. Her research focuses on nondestructive detection of the quality and safety of agricultural products using spectroscopy techniques.
Mahamed Lamine Guindo received the Bachelor degree in Information Technology from Dakar University, Senegal, in 2017, and the Master degree in Mechanical Engineering from Zhejiang University of Science and Technology in 2019. He received the Ph.D. degree in Agricultural Electrification and Automation from Zhejiang University, Hangzhou, China, in 2023. His research interests include machine learning, multi-spectral, and spectral imaging technology for plant and soil information detection.
Yong He received the Bachelor and Master degrees in Agricultural Engineering from Zhejiang Agricultural University, Hangzhou, China, in 1984 and 1987, respectively. He received the Ph.D. degree in Biosystems Engineering from Zhejiang University, Hangzhou, China, in 1998. He is currently a professor at Zhejiang University, the director of the Key Laboratory of Spectroscopy of the Ministry of Agriculture and Rural Affairs, a national prestigious teacher, and a national-level talent. He was selected as a Clarivate Analytics Global Highly Cited Researcher in 2016–2018. He is the Editor-in-Chief of Computers and Electronics in Agriculture and an editorial board member of Food and Bioprocess Technology.
Jun Zhou received the Bachelor degree in 2012 and the Master degree in 2015 from the College of Mechanical Engineering and Traffic, Xinjiang Agricultural University, Urumqi, China. He received the Ph.D. degree in Biosystems Engineering from Zhejiang University, Hangzhou, China, in 2023. His research interests include remote sensing in agriculture and agricultural informatization.
Xiangyu Lu received the Bachelor degree in Agricultural Mechanization and Automation from Northwest A&F University, Yangling, China, in 2020. He is currently a Ph.D. candidate in the College of Biosystems Engineering and Food Science, Zhejiang University. His research interests include image processing and interpretation, direct geolocation of targets in drone images, and automatic data processing workflows.
Mengyuan Chen received the Bachelor degree in Agricultural Engineering from Zhejiang University, Hangzhou, China, in 2021. She is currently a master student in the College of Biosystems Engineering and Food Science, Zhejiang University. Her research interests include forestry remote sensing and detection of forest phenotype.
Yinhui Yang received his Bachelor degree in Electronic Information Science and Technology and Master degree in Biophysics from China Agricultural University, Beijing, China, in 2006 and 2009, respectively. He received the Ph.D. degree in Computer Science and Technology from Zhejiang University, Hangzhou, China, in 2017. He is currently a lecturer at Zhejiang A&F University. He is mainly focused on image-based 3D reconstruction technology and its applications in digital agriculture and forestry.
Wenwen Kong received her Bachelor degree in Agricultural Engineering from China Agricultural University, Beijing, China, in 2009, and Ph.D. degree in Biosystems Engineering from Zhejiang University, Hangzhou, China, in 2015. She is currently an associate professor at Zhejiang A&F University. She is mainly focused on digital information detection and sensing technology of plants.
Contributor Information
Fei Liu, Email: fliu@zju.edu.cn.
Wenwen Kong, Email: wwkong16@zafu.edu.cn.
References
- 1.Zheng Y., Yuan F., Huang Y., Zhao Y., Jia X., Zhu L., et al. Genome-wide association studies of grain quality traits in maize. Sci Rep. 2021;11:9797. doi: 10.1038/s41598-021-89276-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Wang X., Cai Z. Era of maize breeding 4.0. Journal of Maize Sciences. 2019;27:1–9. doi: 10.13597/j.cnki.maize.science.20190101. [DOI] [Google Scholar]
- 3.Wallace J.G., Rodgers-Melnick E., Buckler E.S. On the road to breeding 4.0: unraveling the good, the bad, and the boring of crop quantitative genomics. Annu Rev Genet. 2018;52:421–444. doi: 10.1146/annurev-genet-120116-024846. [DOI] [PubMed] [Google Scholar]
- 4.Wang X., Qiu L., Jing R., Ren G., Li Y., Li C., et al. Evaluation on phenotypic traits of crop germplasm: status and development. Journal of Plant Genetic Resources. 2022;23:12–20. doi: 10.13430/j.cnki.jpgr.20210802001. [DOI] [Google Scholar]
- 5.Rahman A., Cho B.K. Assessment of seed quality using non-destructive measurement techniques: a review. Seed Sci Res. 2016;26:285–305. doi: 10.1017/S0960258516000234. [DOI] [Google Scholar]
- 6.Araus J.L., Cairns J.E. Field high-throughput phenotyping: the new crop breeding frontier. Trends Plant Sci. 2014;19:52–61. doi: 10.1016/j.tplants.2013.09.008. [DOI] [PubMed] [Google Scholar]
- 7.Budd J., Miller B.S., Manning E.M., Lampos V., Zhuang M., Edelstein M., et al. Digital technologies in the public-health response to COVID-19. Nat Med. 2020;26:1183–1192. doi: 10.1038/s41591-020-1011-4. [DOI] [PubMed] [Google Scholar]
- 8.Sun D., Robbins K., Morales N., Shu Q., Cen H. Advances in optical phenotyping of cereal crops. Trends Plant Sci. 2022;27:191–208. doi: 10.1016/j.tplants.2021.07.015. [DOI] [PubMed] [Google Scholar]
- 9.Rotz S., Duncan E., Small M., Botschner J., Dara R., Mosby I., et al. The politics of digital agricultural technologies: a preliminary review. Sociol Rural. 2019;59:203–229. doi: 10.1111/soru.12233. [DOI] [Google Scholar]
- 10.Xia Y., Xu Y., Li J., Zhang C., Fan S. Recent advances in emerging techniques for non-destructive detection of seed viability: a review. Artificial Intelligence in Agriculture. 2019;1:35–47. doi: 10.1016/j.aiia.2019.05.001. [DOI] [Google Scholar]
- 11.Clohessy J.W., Pauli D., Kreher K.M., Buckler V.E.S., Armstrong P.R., Wu T., et al. A low-cost automated system for high-throughput phenotyping of single oat seeds. The Plant Phenome Journal. 2018;1 doi: 10.2135/tppj2018.07.0005. [DOI] [Google Scholar]
- 12.Nordborg M., Weigel D. Next-generation genetics in plants. Nature. 2008;456:720–723. doi: 10.1038/nature07629. [DOI] [PubMed] [Google Scholar]
- 13.Lu R., Van Beers R., Saeys W., Li C., Cen H. Measurement of optical properties of fruits and vegetables: a review. Postharvest Biol Tec. 2020;159 doi: 10.1016/j.postharvbio.2019.111003. [DOI] [Google Scholar]
- 14.Kokot S., Grigg M., Panayiotou H., Phuong T.D. Data interpretation by some common chemometrics methods. Electroanal. 1998;10:1081–1088. doi: 10.1002/(SICI)1521-4109(199811)10:16<1081::AID-ELAN1081>3.0.CO;2-X. [DOI] [Google Scholar]
- 15.Jin X., Zarco-Tejada P.J., Schmidhalter U., Reynolds M.P., Hawkesford M.J., Varshney R.K., et al. High-throughput estimation of crop traits: a review of ground and aerial phenotyping platforms. IEEE Geosc Rem Sen M. 2021;9:200–231. [Google Scholar]
- 16.Zhou L., Zhang C., Qiu Z., He Y. Information fusion of emerging non-destructive analytical techniques for food quality authentication: a survey. TrAC-Trend Anal Chem. 2020;127 doi: 10.1016/j.trac.2020.115901. [DOI] [Google Scholar]
- 17.Karunakaran C., Vijayan P., Stobbs J., Bamrah R.K., Arganosa G., Warkentin T.D. High throughput nutritional profiling of pea seeds using Fourier transform mid-infrared spectroscopy. Food Chem. 2020;309 doi: 10.1016/j.foodchem.2019.125585. [DOI] [PubMed] [Google Scholar]
- 18.Smeesters L., Meulebroeck W., Raeymaekers S., Thienpont H. Optical detection of aflatoxins in maize using one- and two-photon induced fluorescence spectroscopy. Food Control. 2015;51:408–416. doi: 10.1016/j.foodcont.2014.12.003. [DOI] [Google Scholar]
- 19.Ildiz G.O., Kabuk H.N., Kaplan E.S., Halimoglu G., Fausto R. A comparative study of the yellow dent and purple flint maize kernel components by raman spectroscopy and chemometrics. J Mol Struct. 2019;1184:246–253. doi: 10.1016/j.molstruc.2019.02.034. [DOI] [Google Scholar]
- 20.Ge H., Jiang Y., Lian F., Zhang Y., Xia S. Quantitative determination of aflatoxin B1 concentration in acetonitrile by chemometric methods using terahertz spectroscopy. Food Chem. 2016;209:286–292. doi: 10.1016/j.foodchem.2016.04.070. [DOI] [PubMed] [Google Scholar]
- 21.Jiang G. Comparison and application of non-destructive NIR evaluations of seed protein and oil content in soybean breeding. Agronomy. 2020;10:77. doi: 10.3390/agronomy10010077. [DOI] [Google Scholar]
- 22.Pazdernik D.L., Killam A.S., Orf J.H. Analysis of amino and fatty acid composition in soybean seed, using near infrared reflectance spectroscopy. Agron J. 1997;89:679–685. doi: 10.2134/agronj1997.00021962008900040022x. [DOI] [Google Scholar]
- 23.Hutsalo I., Mank V., Kovaleva S. Determination of oleic acid in the samples of sunflower seeds by method of NIR-spectroscopy. Ukr Food j. 2017:6. doi: 10.24263/2304-974X-2017-6-1-6. [DOI] [Google Scholar]
- 24.Xu J., Nwafor C.C., Shah N., Zhou Y., Zhang C. Identification of genetic variation in Brassica napus seeds for tocopherol content and composition using near-infrared spectroscopy technique. Plant Breed. 2019;138:624–634. doi: 10.1111/pbr.12708. [DOI] [Google Scholar]
- 25.Jin F., Bai G., Zhang D., Dong Y., Ma L., Bockus W., et al. Fusarium-damaged kernels and deoxynivalenol in Fusarium-infected U.S. winter wheat. Phytopathology. 2014;104:472–478. doi: 10.1094/PHYTO-07-13-0187-R. [DOI] [PubMed] [Google Scholar]
- 26.Choi Y.H., Hong C.K., Park G.Y., Kim C.K., Kim J.H., Jung K., et al. A nondestructive approach for discrimination of the origin of sesame seeds using ED-XRF and NIR spectrometry with chemometrics. Food Sci Biotechnol. 2016;25:433–438. doi: 10.1007/s10068-016-0059-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Rosales A., Crossa J., Cuevas J., Cabrera-Soto L., Dhliwayo T., Ndhlela T., et al. Near-infrared spectroscopy to predict provitamin a carotenoids content in maize. Agronomy. 2022;12:1027. doi: 10.3390/agronomy12051027. [DOI] [Google Scholar]
- 28.Qiu G., Lv E., Lu H., Xu S., Zeng F., Shui Q. Single-kernel FT-NIR spectroscopy for detecting supersweet corn (Zea mays L. saccharata sturt) seed viability with multivariate data analysis. Sensors. 2018;18:1010. doi: 10.3390/s18041010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.de Medeiros A.D., da Silva L.J., Ribeiro J.P.O., Ferreira K.C., Rosas J.T.F., Santos A.A., et al. Machine learning for seed quality classification: an advanced approach using merger data from FT-NIR spectroscopy and X-ray imaging. Sensors. 2020;20:4319. doi: 10.3390/s20154319. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Amanah H.Z., Tunny S.S., Masithoh R.E., Choung M.-G., Kim K.-H., Kim M.S., et al. Nondestructive prediction of isoflavones and oligosaccharides in intact soybean seed using Fourier transform near-infrared (FT-NIR) and Fourier transform infrared (FT-IR) spectroscopic techniques. Foods. 2022;11:232. doi: 10.3390/foods11020232. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Shen F., Wu Q., Shao X., Zhang Q. Non-destructive and rapid evaluation of aflatoxins in brown rice by using near-infrared and mid-infrared spectroscopic techniques. J Food Sci Technol. 2018;55:1175–1184. doi: 10.1007/s13197-018-3033-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Jiang X., Liu P., Shen F., Zhou H., Chen Q. Analysis of moldy peanut kernel by attenuated total reflectance-Fourier transform infrared spectroscopy. Food Sci. 2017;38:315–320. doi: 10.7506/spkx1002-6630-201712049. [DOI] [Google Scholar]
- 33.Bamrah R.K., Vijayan P., Karunakaran C., Muir D., Hallin E., Stobbs J., et al. Evaluation of X-ray fluorescence spectroscopy as a tool for nutrient analysis of pea seeds. Crop Sci. 2019;59:2689–2700. doi: 10.2135/cropsci2019.01.0004. [DOI] [Google Scholar]
- 34.Bartolić D., Mutavdžić D., Carstensen J.M., Stanković S., Nikolić M., Krstović S., et al. Fluorescence spectroscopy and multispectral imaging for fingerprinting of aflatoxin-b1 contaminated (Zea mays L.) seeds: a preliminary study. Sci Rep. 2022;12:4849. doi: 10.1038/s41598-022-08352-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Lu W., Guo Y., Dai D., Zhang C., Wang X. Rice germination rate detection based on fluorescent spectrometry and deep belief network. Spectrosc Spect Anal. 2018;38:1303–1312. doi: 10.3964/j.issn.1000-0593(2018)04-1303-10. [DOI] [Google Scholar]
- 36.Jalink H., Frandas A., van der Schoor R., Bino J.B. Chlorophyll fluorescence of the testa of Brassica oleracea seeds as an indicator of seed maturity and seed quality. Sci Agr. 1998;55:88–93. doi: 10.1590/S0103-90161998000500016. [DOI] [Google Scholar]
- 37.Lee H., Cho B.-K., Kim M.S., Lee W.-H., Tewari J., Bae H., et al. Prediction of crude protein and oil content of soybeans using Raman spectroscopy. Sensor Actuat B-Chem. 2013;185:694–700. doi: 10.1016/j.snb.2013.04.103. [DOI] [Google Scholar]
- 38.Reitzenstein S., Rösch P., Strehle M.A., Berg D., Baranska M., Schulz H., et al. Nondestructive analysis of single rapeseeds by means of Raman spectroscopy. J Raman Spectrosc. 2007;38:301–308. doi: 10.1002/jrs.1643. [DOI] [Google Scholar]
- 39.Yuan J., Sun C., Guo X., Yang T., Wang H., Fu S., et al. A rapid raman detection of deoxynivalenol in agricultural products. Food Chem. 2017;221:797–802. doi: 10.1016/j.foodchem.2016.11.101. [DOI] [PubMed] [Google Scholar]
- 40.Lee K.-M., Herrman T.J., Yun U. Application of raman spectroscopy for qualitative and quantitative analysis of aflatoxins in ground maize samples. J Cereal Sci. 2014;59:70–78. doi: 10.1016/j.jcs.2013.10.004. [DOI] [Google Scholar]
- 41.Dib S.R., Silva T.V., Neto J.A.G., Guimarães L.J.M., Ferreira E.J., Ferreira E.C. Raman spectroscopy for discriminating transgenic corns. Vib Spectrosc. 2021;112 doi: 10.1016/j.vibspec.2020.103183. [DOI] [Google Scholar]
- 42.Ambrose A., Lohumi S., Lee W.-H., Cho B.K. Comparative nondestructive measurement of corn seed viability using Fourier transform near-infrared (FT-NIR) and Raman spectroscopy. Sensor Actuat B-Chem. 2016;224:500–506. doi: 10.1016/j.snb.2015.10.082. [DOI] [Google Scholar]
- 43.Wu J., Li X., Sun L., Liu C., Sun X., Sun M., et al. Study on the optimization method of maize seed moisture quantification model based on THz-ATR spectroscopy. Spectrosc Spect Anal. 2021;41:2005–2011. doi: 10.3964/j.issn.1000-0593(2021)07-2005-07. [DOI] [Google Scholar]
- 44.Wu J., Li X., Liu C., Sun X., Yu L., Sun L. Screening method of characteristic THz region to corn seed vigor based on ATR. Transactions of the Chinese Society for Agricultural. Machinery. 2020;51:382. doi: 10.6041/j.issn.1000-1298.2020.04.044. [DOI] [Google Scholar]
- 45.Gente R., Busch S.F., Stübling E.-M., Schneider L.M., Hirschmann C.B., Balzer J.C., et al. Quality control of sugar beet seeds with THz time-domain spectroscopy. Ieee T Thz Sci Techn. 2016;6:754–756. doi: 10.1109/TTHZ.2016.2593985. [DOI] [Google Scholar]
- 46.Ge H., Jiang Y., Xu Z., Lian F., Zhang Y., Xia S. Identification of wheat quality using THz spectrum. Opt Express. 2014;22:12533–12544. doi: 10.1364/OE.22.012533. [DOI] [PubMed] [Google Scholar]
- 47.Ge H., Jiang Y., Lian F., Zhang Y., Xia S. Characterization of wheat varieties using terahertz time-domain spectroscopy. Sensors. 2015;15:12560–12572. doi: 10.3390/s150612560. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Xu W., Xie L., Ye Z., Gao W., Yao Y., Chen M., et al. Discrimination of transgenic rice containing the Cry1Ab protein using terahertz spectroscopy and chemometrics. Sci Rep. 2015;5:11115. doi: 10.1038/srep11115. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Wakholi C., Kandpal L.M., Lee H., Bae H., Park E., Kim M.S., et al. Rapid assessment of corn seed viability using short wave infrared line-scan hyperspectral imaging and chemometrics. Sensor Actuat B-Chem. 2018;255:498–507. doi: 10.1016/j.snb.2017.08.036. [DOI] [Google Scholar]
- 50.Liang X., Wang K., Huang C., Zhang X., Yan J., Yang W. A high-throughput maize kernel traits scorer based on line-scan imaging. Measurement. 2016;90:453–460. [Google Scholar]
- 51.Pazoki A., Farokhi F., Pazoki Z. Corn seed varieties classification based on mixed morphological and color features using artificial neural networks. RJASET. 2013;6:3506–3513. doi: 10.19026/rjaset.6.3553. [DOI] [Google Scholar]
- 52.Zhao M., Qin J., Li S., Liu Z., Cao J., Yao X., et al. In: Computer and Computing Technologies in Agriculture VIII. Li D., Chen Y., editors. Springer International Publishing; Cham: 2015. An automatic counting method of maize ear grain based on image processing; pp. 521–533. [DOI] [Google Scholar]
- 53.Miller N.D., Haase N.J., Lee J., Kaeppler S.M., de Leon N., Spalding E.P. A robust, high-throughput method for computing maize ear, cob, and kernel attributes automatically from images. Plant J. 2017;89:169–178. doi: 10.1111/tpj.13320. [DOI] [PubMed] [Google Scholar]
- 54.Fu J., Yuan H., Zhao R., Chen Z., Ren L. Peeling damage recognition method for corn ear harvest using RGB image. Appl Sci. 2020;10:3371. doi: 10.3390/app10103371. [DOI] [Google Scholar]
- 55.Veeramani B., Raymond J.W., Chanda P. DeepSort: deep convolutional networks for sorting haploid maize seeds. BMC Bioinf. 2018;19:289. doi: 10.1186/s12859-018-2267-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56.Jitanan S., Chimlek P. Quality grading of soybean seeds using image analysis. IJECE. 2019;9:3495. doi: 10.11591/ijece.v9i5.pp3495-3503. [DOI] [Google Scholar]
- 57.Lin P., Xiaoli L., Li D., Jiang S., Zou Z., Lu Q., et al. Rapidly and exactly determining postharvest dry soybean seed quality based on machine vision technology. Sci Rep. 2019;9:17143. doi: 10.1038/s41598-019-53796-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 58.Lee H., Huy T.Q., Park E., Bae H.-J., Baek I., Kim M.S., et al. Machine vision technique for rapid measurement of soybean seed vigor. J of Biosystems Eng. 2017;42:227–233. doi: 10.5307/JBE.2017.42.3.227. [DOI] [Google Scholar]
- 59.Peng S., Zhao Z., Xiaobo W., Yue Y., Li L., Weng Z., et al. Research on rapeseed counting based on machine vision. J Phys: Conf Ser. 2021;1757 doi: 10.1088/1742-6596/1757/1/012028. [DOI] [Google Scholar]
- 60.Kurtulmuş F., Ünal H. Discriminating rapeseed varieties using computer vision and machine learning. Expert Syst Appl. 2015;42:1880–1891. doi: 10.1016/j.eswa.2014.10.003. [DOI] [Google Scholar]
- 61.Birla R., Chauhan A.P.S. An efficient method for quality analysis of rice using machine vision system. JAIT. 2015:140–145. doi: 10.12720/jait.6.3.140-145. [DOI] [Google Scholar]
- 62.OuYang A-G, Gao R, Liu Y, Sun X, Pan Y, Dong X. An automatic method for identifying different variety of rice seeds using machine vision technology. 2010 Sixth International Conference on Natural Computation, vol. 1, 2010, p. 84–8. https://doi.org/10.1109/ICNC.2010.5583370.
- 63.Delwiche S.R., Yang I.-C., Graybosch R.A. Multiple view image analysis of freefalling U.S. wheat grains for damage assessment. Comput Electron Agr. 2013;98:62–73. doi: 10.1016/j.compag.2013.07.002. [DOI] [Google Scholar]
- 64.Ebrahimi E., Mollazade K., Babaei S. Toward an automatic wheat purity measuring device: a machine vision-based neural networks-assisted imperialist competitive algorithm approach. Measurement. 2014;55:196–205. doi: 10.1016/j.measurement.2014.05.003. [DOI] [Google Scholar]
- 65.Kurtulmuş F., Alibaş İ., Kavdır I. Classification of pepper seeds using machine vision based on neural network. Int J Agr Biol Eng. 2016;9:51–62. doi: 10.25165/ijabe.v9i1.1790. [DOI] [Google Scholar]
- 66.Huang K.-Y. Detection and classification of areca nuts with machine vision. Comput Math Appl. 2012;64:739–746. doi: 10.1016/j.camwa.2011.11.041. [DOI] [Google Scholar]
- 67.Zhu D., Wang K., Zhang D., Huang W., Yang G., Ma Z., et al. Quality assessment of crop seeds by near-infrared hyperspectral imaging. Sensor Lett. 2011;9:1144–1150. doi: 10.1166/sl.2011.1377. [DOI] [Google Scholar]
- 68.Qiao X., Jiang J., Qi X., Guo H., Yuan D. Utilization of spectral-spatial characteristics in shortwave infrared hyperspectral images to classify and identify fungi-contaminated peanuts. Food Chem. 2017;220:393–399. doi: 10.1016/j.foodchem.2016.09.119. [DOI] [PubMed] [Google Scholar]
- 69.Jiang J., Qiao X., He R. Use of near-infrared hyperspectral images to identify moldy peanuts. J Food Eng. 2016;169:284–290. doi: 10.1016/j.jfoodeng.2015.09.013. [DOI] [Google Scholar]
- 70.Sun J., Lu X., Mao H., Jin X., Wu X. A method for rapid identification of rice origin by hyperspectral imaging technology. J Food Process Eng. 2017;40:e12297. [Google Scholar]
- 71.Senthilkumar T., Jayas D.S., White N.D.G. Detection of different stages of fungal infection in stored canola using near-infrared hyperspectral imaging. J Stored Prod Res. 2015;63:80–88. doi: 10.1016/j.jspr.2015.07.005. [DOI] [Google Scholar]
- 72.Huang M., He C., Zhu Q., Qin J. Maize seed variety classification using the integration of spectral and image features combined with feature transformation based on hyperspectral imaging. Appl Sci. 2016;6:183. doi: 10.3390/app6060183. [DOI] [Google Scholar]
- 73.Feng L., Zhu S., Zhang C., Bao Y., Feng X., He Y. Identification of maize kernel vigor under different accelerated aging times using hyperspectral imaging. Molecules. 2018;23:3078. doi: 10.3390/molecules23123078. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 74.Zhang T., Wei W., Zhao B., Wang R., Li M., Yang L., et al. A reliable methodology for determining seed viability by using hyperspectral data from two sides of wheat seeds. Sensors. 2018;18:813. doi: 10.3390/s18030813. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 75.Mo C., Lim J., Lee K., Kang S., Kim M.S., Kim G., et al. Determination of germination quality of cucumber (Cucumis sativus) seed by LED-induced hyperspectral reflectance imaging. J of Biosystems Eng. 2013;38:318–326. doi: 10.5307/JBE.2013.38.4.318. [DOI] [Google Scholar]
- 76.Rodríguez-Pulido F.J., Barbin D.F., Sun D.-W., Gordillo B., González-Miret M.L., Heredia F.J. Grape seed characterization by NIR hyperspectral imaging. Postharvest Biol Tec. 2013;76:74–82. doi: 10.1016/j.postharvbio.2012.09.007. [DOI] [Google Scholar]
- 77.Chelladurai V., Jayas D.S., White N.D.G. Thermal imaging for detecting fungal infection in stored wheat. J Stored Prod Res. 2010;46:174–179. doi: 10.1016/j.jspr.2010.04.002. [DOI] [Google Scholar]
- 78.Manickavasagan A., Jayas D.S., White N.D.G. Thermal imaging to detect infestation by cryptolestes ferrugineus inside wheat kernels. J Stored Prod Res. 2008;44:186–192. doi: 10.1016/j.jspr.2007.10.006. [DOI] [Google Scholar]
- 79.Manickavasagan A., Jayas D.S., White N.D.G., Paliwal J. Wheat class identification using thermal imaging. Food Bioprocess Tech. 2010;3:450–460. doi: 10.1007/s11947-008-0110-x. [DOI] [Google Scholar]
- 80.Zhao H., Wang W., Liao S., Zhang Y., Lu X., Guo X., et al. Study on the micro-phenotype of different types of maize kernels based on micro-CT. Smart Agriculture. 2021;3:16. doi: 10.12133/j.smartag.2021.3.1.202103-SA004. [DOI] [Google Scholar]
- 81.Guelpa A., du Plessis A., Kidd M., Manley M. Non-destructive estimation of maize (Zea mays L.) kernel hardness by means of an X-ray micro-computed tomography (μCT) density calibration. Food Bioprocess Tech. 2015;8:1419–1429. doi: 10.1007/s11947-015-1502-3. [DOI] [Google Scholar]
- 82.Junior F.G.G., Cícero S.M., Vaz C.M.P., Lasso P.R.O. X-ray microtomography in comparison to radiographic analysis of mechanically damaged maize seeds and its effect on seed germination. Acta Sci-Agron. 2019;41:e42608. doi: 10.4025/actasciagron.v41i1.42608. [DOI] [Google Scholar]
- 83.Sood S, Mahajan S, Doegar A, Das A. Internal crack detection in kidney bean seeds using X-ray imaging technique. 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI), 2016, p. 2258–61. https://doi.org/10.1109/ICACCI.2016.7732388.
- 84.Ahmed M.R., Yasmin J., Collins W., Cho B.-K. X-ray CT image analysis for morphology of muskmelon seed in relation to germination. Biosyst Eng. 2018;175:183–193. doi: 10.1016/j.biosystemseng.2018.09.015. [DOI] [Google Scholar]
- 85.Zhao X, Gao Y, Wang X, Li C, Wang S, Feng Q. Research on tomato seed vigor based on X-ray digital image. Optoelectronic Imaging and Multimedia Technology IV, vol. 10020, SPIE; 2016, p. 96–106. https://doi.org/10.1117/12.2246145.
- 86.Melkus G., Rolletschek H., Fuchs J., Radchuk V., Grafahrend-Belau E., Sreenivasulu N., et al. Dynamic 13C/1H NMR imaging uncovers sugar allocation in the living seed. Plant Biotechnol J. 2011;9:1022–1037. doi: 10.1111/j.1467-7652.2011.00618.x. [DOI] [PubMed] [Google Scholar]
- 87.Pietrzak L.N., Frégeau-Reid J., Chatson B., Blackwell B. Observations on water distribution in soybean seed during hydration processes using nuclear magnetic resonance imaging. Can J Plant Sci. 2002;82:513–519. doi: 10.4141/P01-150. [DOI] [Google Scholar]
- 88.Barboza da Silva C, Bianchini V de JM, Medeiros AD de, Moraes MHD de, Marassi AG, Tannús A. A novel approach for jatropha curcas seed health analysis based on multispectral and resonance imaging techniques. Ind Crop Prod 2021;161:113186. https://doi.org/10.1016/j.indcrop.2020.113186.
- 89.Song P., Song P., Yang H., Yang T., Xu J., Wang K. Detection of rice seed vigor by low-field nuclear magnetic resonance. Int J Agr Biol Eng. 2018;11:195–200. doi: 10.25165/ijabe.v11i6.4323. [DOI] [Google Scholar]
- 90.Gong Z., Cheng F., Liu Z., Yang X., Zhai B., You Z. 2015 ASABE International Meeting. American Society of Agricultural and Biological Engineers; 2015. Recent developments of seeds quality inspection and grading based on machine vision. [Google Scholar]
- 91.Mahlein A.-K., Kuska M.T., Behmann J., Polder G., Walter A. Hyperspectral sensors and imaging technologies in phytopathology: state of the art. Annu Rev Phytopathol. 2018;56:535–558. doi: 10.1146/annurev-phyto-080417-050100. [DOI] [PubMed] [Google Scholar]
- 92.Jia B., Wang W., Ni X.Z., Chu X., Yoon S.C., Lawrence K.C. Detection of mycotoxins and toxigenic fungi in cereal grains using vibrational spectroscopic techniques: a review. World Mycotoxin J. 2020;13:163–178. doi: 10.3920/WMJ2019.2510. [DOI] [Google Scholar]
- 93.Sendin K., Manley M., Baeten V., Fernández Pierna J.A., Williams P.J. Near infrared hyperspectral imaging for white maize classification according to grading regulations. Food Anal Method. 2019;12:1612–1624. doi: 10.1007/s12161-019-01464-0. [DOI] [Google Scholar]
- 94.ElMasry G., ElGamal R., Mandour N., Gou P., Al-Rejaie S., Belin E., et al. Emerging thermal imaging techniques for seed quality evaluation: principles and applications. Food Res Int. 2020;131 doi: 10.1016/j.foodres.2020.109025. [DOI] [PubMed] [Google Scholar]
- 95.Wang H., Liu J., Min W., Zheng M., Li H. Changes of moisture distribution and migration in fresh ear corn during storage. J Integr Agr. 2019;18:2644–2651. doi: 10.1016/S2095-3119(19)62715-2. [DOI] [Google Scholar]
- 96.Gavhale K.R., Gawande U. An overview of the research on plant leaves disease detection using image processing techniques. IOSRJCE. 2014;16:10–16. doi: 10.9790/0661-16151016. [DOI] [Google Scholar]
- 97.Wang L., Sun D.-W., Pu H., Zhu Z. Application of hyperspectral imaging to discriminate the variety of maize seeds. Food Anal Method. 2016;9:225–234. [Google Scholar]
- 98.LeCun Y., Bengio Y., Hinton G. Deep learning. Nature. 2015;521:436–444. doi: 10.1038/nature14539. [DOI] [PubMed] [Google Scholar]
- 99.Shorten C., Khoshgoftaar T.M. A survey on image data augmentation for deep learning. J Big Data. 2019;6:60. doi: 10.1186/s40537-019-0197-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 100.Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications. ArXiv Preprint ArXiv:170404861 2017. https://doi.org/10.48550/arXiv.1704.04861.
- 101.Sandler M., Howard A., Zhu M., Zhmoginov A., Chen L.-C. Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. MobileNetV2: Inverted residuals and linear bottlenecks; pp. 4510–4520. [Google Scholar]
- 102.Howard A., Sandler M., Chu G., Chen L.-C., Chen B., Tan M., et al. Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019. Searching for MobileNetV3; pp. 1314–1324. [Google Scholar]
- 103.Zhang X., Zhou X., Lin M., Sun J. Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. ShuffleNet: an extremely efficient convolutional neural network for mobile devices; pp. 6848–6856. [Google Scholar]
- 104.Ma N., Zhang X., Zheng H.-T., Sun J. Proceedings of the European conference on computer vision (ECCV) 2018. ShuffleNet v2: practical guidelines for efficient cnn architecture design; pp. 116–131. [Google Scholar]
- 105.Feng L., Zhu S., Liu F., He Y., Bao Y., Zhang C. Hyperspectral imaging for seed quality and safety inspection: a review. Plant Methods. 2019;15:91. doi: 10.1186/s13007-019-0476-y.
- 106.Zhou L., Zhang C., Liu F., Qiu Z., He Y. Application of deep learning in food: a review. Compr Rev Food Sci F. 2019;18:1793–1811. doi: 10.1111/1541-4337.12492.
- 107.Zhu R., Li S., Sun Y., Cao Y., Sun K., Guo Y., et al. Research advances and prospects of crop 3D reconstruction technology. Smart Agriculture. 2021;3:94. doi: 10.12133/j.smartag.2021.3.3.202102-SA002.
- 108.Foix S., Alenya G., Torras C. Lock-in time-of-flight (ToF) cameras: a survey. IEEE Sens J. 2011;11:1917–1926. doi: 10.1109/JSEN.2010.2101060.
- 109.Bell T., Li B., Zhang S. Structured light techniques and applications. In: Wiley Encyclopedia of Electrical and Electronics Engineering. John Wiley & Sons, Ltd; 2016. pp. 1–24. doi: 10.1002/047134608X.W8298.
- 110.Schönberger J.L., Zheng E., Frahm J.-M., Pollefeys M. Pixelwise view selection for unstructured multi-view stereo. In: Leibe B., Matas J., Sebe N., Welling M., editors. Computer Vision – ECCV 2016. Cham: Springer International Publishing; 2016. pp. 501–518. doi: 10.1007/978-3-319-46487-9_31.
- 111.Guo Y., Wang H., Hu Q., Liu H., Liu L., Bennamoun M. Deep learning for 3D point clouds: a survey. IEEE T Pattern Anal. 2021;43:4338–4364. doi: 10.1109/TPAMI.2020.3005434.
- 112.Chaivivatrakul S., Tang L., Dailey M.N., Nakarmi A.D. Automatic morphological trait characterization for corn plants via 3D holographic reconstruction. Comput Electron Agr. 2014;109:109–123.
- 113.Thapa S., Zhu F., Walia H., Yu H., Ge Y. A novel LiDAR-based instrument for high-throughput, 3D measurement of morphological traits in maize and sorghum. Sensors. 2018;18:1187. doi: 10.3390/s18041187.
- 114.Bao Y., Tang L., Srinivasan S., Schnable P.S. Field-based architectural traits characterisation of maize plant using time-of-flight 3D imaging. Biosyst Eng. 2019;178:86–101.
- 115.Wen W., Wang Y., Wu S., Liu K., Gu S., Guo X. 3D phytomer-based geometric modelling method for plants – the case of maize. AoB Plants. 2021;13:plab055. doi: 10/gnzfcm.
- 116.Su W., Zhu D., Huang J., Guo H. Estimation of the vertical leaf area profile of corn (Zea mays) plants using terrestrial laser scanning (TLS). Comput Electron Agr. 2018;150:5–13. doi: 10.1016/j.compag.2018.03.037.
- 117.Elnashef B., Filin S., Lati R.N. Tensor-based classification and segmentation of three-dimensional point clouds for organ-level plant phenotyping and growth analysis. Comput Electron Agr. 2019;156:51–61. doi: 10.1016/j.compag.2018.10.036.
- 118.Schönberger J.L., Frahm J.-M. Structure-from-motion revisited. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA: IEEE; 2016. pp. 4104–4113. doi: 10.1109/CVPR.2016.445.
- 119.Teng X., Zhou G., Wu Y., Huang C., Dong W., Xu S. Three-dimensional reconstruction method of rapeseed plants in the whole growth period using RGB-D camera. Sensors. 2021;21:4628. doi: 10.3390/s21144628.
- 120.Roussel J., Geiger F., Fischbach A., Jahnke S., Scharr H. 3D surface reconstruction of plant seeds by volume carving: performance and accuracies. Front Plant Sci. 2016;7. doi: 10.3389/fpls.2016.00745.
- 121.Huang X., Zheng S., Zhu N. High-throughput legume seed phenotyping using a handheld 3D laser scanner. Remote Sens. 2022;14:431. doi: 10.3390/rs14020431.
- 122.Liu C., Wang Y., Song J., Li Y., Ma T. Experiment and discrete element model of rice seed based on 3D laser scanning. Transactions of the Chinese Society of Agricultural Engineering. 2016;32:294–300. doi: 10.11975/j.issn.1002-6819.2016.15.041.
- 123.Karasik A., Rahimi O., David M., Weiss E., Drori E. Development of a 3D seed morphological tool for grapevine variety identification, and its comparison with SSR analysis. Sci Rep. 2018;8:6545. doi: 10.1038/s41598-018-24738-9.
- 124.Jahnke S., Roussel J., Hombach T., Kochs J., Fischbach A., Huber G., et al. PhenoSeeder – a robot system for automated handling and phenotyping of individual seeds. Plant Physiol. 2016;172:1358–1370. doi: 10.1104/pp.16.01122.
- 125.Li H., Qian Y., Cao P., Yin W., Dai F., Hu F., et al. Calculation method of surface shape feature of rice seed based on point cloud. Comput Electron Agr. 2017;142:416–423. doi: 10.1016/j.compag.2017.09.009.
- 126.Wen W., Guo X., Lu X., Wang Y., Yu Z. Multi-scale 3D data acquisition of maize. In: Li D., Zhao C., editors. Computer and Computing Technologies in Agriculture XI, vol. 545. Cham: Springer International Publishing; 2019. pp. 108–115.
- 127.Ren Y., Li T., Xu J., Hong W., Zheng Y., Fu B. Overall filtering algorithm for multiscale noise removal from point cloud data. IEEE Access. 2021;9:110723–110734. doi: 10.1109/ACCESS.2021.3097185.
- 128.Wolff K., Kim C., Zimmer H., Schroers C., Botsch M., Sorkine-Hornung O., et al. Point cloud noise and outlier removal for image-based 3D reconstruction. In: 2016 Fourth International Conference on 3D Vision (3DV); 2016. pp. 118–127.
- 129.Maturana D., Scherer S. VoxNet: a 3D convolutional neural network for real-time object recognition. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2015. pp. 922–928. doi: 10.1109/IROS.2015.7353481.
- 130.Su H., Maji S., Kalogerakis E., Learned-Miller E. Multi-view convolutional neural networks for 3D shape recognition. In: Proceedings of the IEEE International Conference on Computer Vision; 2015. pp. 945–953.
- 131.Qi C.R., Yi L., Su H., Guibas L.J. PointNet++: deep hierarchical feature learning on point sets in a metric space. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook, NY, USA: Curran Associates Inc.; 2017. pp. 5105–5114. doi: 10.48550/arXiv.1706.02413.
- 132.Wang Y., Sun Y., Liu Z., Sarma S.E., Bronstein M.M., Solomon J.M. Dynamic graph CNN for learning on point clouds. ACM T Graphic. 2019;38:146:1–146:12. doi: 10.1145/3326362.
- 133.Mahlein A.-K., Alisaac E., Al Masri A., Behmann J., Dehne H.-W., Oerke E.-C. Comparison and combination of thermal, fluorescence, and hyperspectral imaging for monitoring Fusarium head blight of wheat on spikelet scale. Sensors. 2019;19:2281. doi: 10.3390/s19102281.
- 134.Khaki S., Pham H., Han Y., Kuhl A., Kent W., Wang L. DeepCorn: a semi-supervised deep learning method for high-throughput image-based corn kernel counting and yield estimation. Knowl-Based Syst. 2021;218.
- 135.Chen Y., Yang W., Li M., Hao Z., Zhou P., Sun H. Research on pest image processing method based on Android thermal infrared lens. IFAC-PapersOnLine. 2018;51:173–178. doi: 10.1016/j.ifacol.2018.08.083.
- 136.Colmer J., O’Neill C.M., Wells R., Bostrom A., Reynolds D., Websdale D., et al. SeedGerm: a cost-effective phenotyping platform for automated seed imaging and machine-learning based phenotypic analysis of crop seed germination. New Phytol. 2020;228:778–793. doi: 10.1111/nph.16736.
- 137.Yin X., Hou J., Ming B., Zhang Y., Guo X., Gao S., et al. Kernel position effects of grain morphological characteristics by X-ray micro-computed tomography (μCT). Int J Agr Biol Eng. 2021;14:159–166. doi: 10.25165/ijabe.v14i2.6039.
- 138.Song P., Zhang H., Wang C., Luo B., Lu W., Hou P. Design and experiment of high throughput automatic measuring device for corn. Transactions of the Chinese Society of Agricultural Engineering. 2017;33:41–47. doi: 10.11975/j.issn.1002-6819.2017.16.006.
- 139.De Camargo V.R., Dos Santos L.J., Pereira F.M.V. A proof of concept study for the parameters of corn grains using digital images and a multivariate regression model. Food Anal Method. 2018;11:1852–1856. doi: 10.1007/s12161-017-1028-6.
- 140.Wu G., Chen X., Xie J., Zheng Y., Li L., Tan J. Design and experiment of automatic variety test system for corn ear. Transactions of the Chinese Society for Agricultural Machinery. 2016:433–441. doi: 10.6041/j.issn.1000-1298.2016.S0.066.
- 141.Makanza R., Zaman-Allah M., Cairns J.E., Eyre J., Burgueño J., Pacheco Á., et al. High-throughput method for ear phenotyping and kernel weight estimation in maize using ear digital imaging. Plant Methods. 2018;14:49. doi: 10.1186/s13007-018-0317-4.
- 142.Wu D., Cai Z., Han J., Qin H. Automatic kernel counting on maize ear using RGB images. Plant Methods. 2020;16:79. doi: 10.1186/s13007-020-00619-z.
- 143.Khaki S., Pham H., Han Y., Kuhl A., Kent W., Wang L. Convolutional neural networks for image-based corn kernel detection and counting. Sensors. 2020;20:2721. doi: 10.3390/s20092721.
- 144.Shi M., Zhang S., Lu H., Zhao X., Wang X., Cao Z. Phenotyping multiple maize ear traits from a single image: kernels per ear, rows per ear, and kernels per row. Comput Electron Agr. 2022;193. doi: 10.1016/j.compag.2021.106681.
- 145.Valiente-González J.M., Andreu-García G., Potter P., Rodas-Jordá Á. Automatic corn (Zea mays) kernel inspection system using novelty detection based on principal component analysis. Biosyst Eng. 2014;117:94–103. doi: 10.1016/j.biosystemseng.2013.09.003.
- 146.Chu X., Tao Y., Wang W., Yuan Y., Xi M. Rapid detection method of moldy maize kernels based on color feature. Adv Mech Eng. 2014;6. doi: 10.1155/2014/625090.
- 147.Warman C., Sullivan C.M., Preece J., Buchanan M.E., Vejlupkova Z., Jaiswal P., et al. A cost-effective maize ear phenotyping platform enables rapid categorization and quantification of kernels. Plant J. 2021;106:566–579. doi: 10.1111/tpj.15166.
- 148.Williams P.J., Geladi P., Britz T.J., Manley M. Investigation of fungal development in maize kernels using NIR hyperspectral imaging and multivariate data analysis. J Cereal Sci. 2012;55:272–278. doi: 10.1016/j.jcs.2011.12.003.
- 149.Chu X., Wang W., Ni X., Li C., Li Y. Classifying maize kernels naturally infected by fungi using near-infrared hyperspectral imaging. Infrared Phys Techn. 2020;105. doi: 10.1016/j.infrared.2020.103242.
- 150.Yang D., Jiang J., Jie Y., Li Q., Shi T. Detection of the moldy status of the stored maize kernels using hyperspectral imaging and deep learning algorithms. Int J Food Prop. 2022;25:170–186. doi: 10.1080/10942912.2022.2027963.
- 151.Wang L., Liu J., Zhang J., Wang J., Fan X. Corn seed defect detection based on watershed algorithm and two-pathway convolutional neural networks. Front Plant Sci. 2022;13. doi: 10.3389/fpls.2022.730190.
- 152.Wang Y., Wen W., Wu S., Wang C., Yu Z., Guo X., et al. Maize plant phenotyping: comparing 3D laser scanning, multi-view stereo reconstruction, and 3D digitizing estimates. Remote Sens. 2019;11:63.
- 153.Li J., Tang L. Developing a low-cost 3D plant morphological traits characterization system. Comput Electron Agr. 2017;143:1–13. doi: 10.1016/j.compag.2017.09.025.
- 154.Chen H., Ai W., Feng Q., Jia Z., Song Q. FT-NIR spectroscopy and Whittaker smoother applied to joint analysis of dual-components for corn. Spectrochim Acta A. 2014;118:752–759. doi: 10.1016/j.saa.2013.09.065.
- 155.Egesel C., Kahriman F. Determination of quality parameters in maize grain by NIR reflectance spectroscopy. Journal of Agricultural Sciences. 2012;18:31–42. doi: 10.1501/Tarimbil_0000001190.
- 156.Fassio A.S., Restaino E.A., Cozzolino D. Determination of oil content in whole corn (Zea mays L.) seeds by means of near infrared reflectance spectroscopy. Comput Electron Agr. 2015;110:171–175. doi: 10.1016/j.compag.2014.11.015.
- 157.Egesel C.Ö., Kahrıman F., Ekinci N., Kavdır İ., Büyükcan M.B. Analysis of fatty acids in kernel, flour, and oil samples of maize by NIR spectroscopy using conventional regression methods. Cereal Chem. 2016;93:487–492. doi: 10.1094/CCHEM-12-15-0247-R.
- 158.Brenna O.V., Berardo N. Application of near-infrared reflectance spectroscopy (NIRS) to the evaluation of carotenoids content in maize. J Agr Food Chem. 2004;52:5577–5582. doi: 10.1021/jf0495082.
- 159.Sharma B., Frontiera R.R., Henry A.-I., Ringe E., Van Duyne R.P. SERS: materials, applications, and the future. Mater Today. 2012;15:16–25. doi: 10.1016/S1369-7021(12)70017-2.
- 160.Huang L., Wang F., Weng S., Pan F., Liang D. Surface-enhanced Raman spectroscopy for rapid and accurate detection of fenitrothion residue in maize. Spectrosc Spect Anal. 2018;38:2782–2787. doi: 10.3964/j.issn.1000-0593(2018)09-2782-06.
- 161.Hui L., Jingzhu W., Cuiling L., Xiaorong S., Le Y. Study on pretreatment methods of terahertz time domain spectral image for maize seeds. IFAC-PapersOnLine. 2018;51:206–210. doi: 10.1016/j.ifacol.2018.08.142.
- 162.Paghaleh S.J., Askari H.R., Marashi S.M.B., Rahimi M., Bahrampour A.R. A method for the measurement of in line pistachio aflatoxin concentration based on the laser induced fluorescence spectroscopy. J Lumin. 2015;161:135–141. doi: 10.1016/j.jlumin.2014.12.057.
- 163.Wang H., Liu J., Xu X., Huang Q., Chen S., Yang P., et al. Fully-automated high-throughput NMR system for screening of haploid kernels of maize (corn) by measurement of oil content. PLoS One. 2016;11:e0159444. doi: 10.1371/journal.pone.0159444.
- 164.Yue X., Bai Y., Wang Z., Song P. Low-field nuclear magnetic resonance of maize seed germination process under salt stress. Transactions of the Chinese Society of Agricultural Engineering. 2020;36:292–300. doi: 10.11975/j.issn.1002-6819.2020.24.034.
- 165.Huang M., Zhao W., Wang Q., Zhang M., Zhu Q. Prediction of moisture content uniformity using hyperspectral imaging technology during the drying of maize kernel. Int Agrophys. 2015;29:39–46. doi: 10.1515/intag-2015-0012.
- 166.Yang G., Wang Q., Liu C., Wang X., Fan S., Huang W. Rapid and visual detection of the main chemical compositions in maize seeds based on Raman hyperspectral imaging. Spectrochim Acta A. 2018;200:186–194. doi: 10.1016/j.saa.2018.04.026.
- 167.Zhao Y. Research on nondestructive detection methods of crop seed quality based on hyperspectral imaging technique [dissertation]. Zhejiang University; 2021. doi: 10.27461/d.cnki.gzjdx.2021.003818.
- 168.Hruska Z., Yao H., Kincaid R., Darlington D., Brown R.L., Bhatnagar D., et al. Fluorescence imaging spectroscopy (FIS) for comparing spectra from corn ears naturally and artificially infected with aflatoxin producing fungus. J Food Sci. 2013;78:T1313–T1320. doi: 10.1111/1750-3841.12202.
- 169.Kandpal L.M., Lee S., Kim M.S., Bae H., Cho B.-K. Short wave infrared (SWIR) hyperspectral imaging technique for examination of aflatoxin B1 (AFB1) on corn kernels. Food Control. 2015;51:171–176. doi: 10.1016/j.foodcont.2014.11.020.
- 170.Huang Y., Zhang L., Wang R. Rapid discrimination of fresh and stale corn using Raman spectroscopy. Modern Food Science and Technology. 2014;30:149–152. doi: 10.13982/j.mfst.1673-9078.2014.12.025.
- 171.Huang M., Tang J., Yang B., Zhu Q. Classification of maize seeds of different years based on hyperspectral imaging and model updating. Comput Electron Agr. 2016;122:139–145. doi: 10.1016/j.compag.2016.01.029.
- 172.Esteve Agelet L., Ellis D.D., Duvick S., Goggi A.S., Hurburgh C.R., Gardner C.A. Feasibility of near infrared spectroscopy for analyzing corn kernel damage and viability of soybean and corn kernels. J Cereal Sci. 2012;55:160–165. doi: 10.1016/j.jcs.2011.11.002.
- 173.Williams P.J., Kucheryavskiy S. Classification of maize kernels using NIR hyperspectral imaging. Food Chem. 2016;209:131–138. doi: 10.1016/j.foodchem.2016.04.044.
- 174.Ambrose A., Kandpal L.M., Kim M.S., Lee W.-H., Cho B.-K. High speed measurement of corn seed viability using hyperspectral imaging. Infrared Phys Techn. 2016;75:173–179. doi: 10.1016/j.infrared.2015.12.008.
- 175.Du Z., Hu Y., Ali Buttar N., Mahmood A. X-ray computed tomography for quality inspection of agricultural products: a review. Food Sci Nutr. 2019;7:3146–3160. doi: 10.1002/fsn3.1179.
- 176.Hou J., Zhang Y., Jin X., Dong P., Guo Y., Wang K., et al. Structural parameters for X-ray micro-computed tomography (μCT) and their relationship with the breakage rate of maize varieties. Plant Methods. 2019;15:161. doi: 10.1186/s13007-019-0538-1.
- 177.Gargiulo L., Leonarduzzi C., Mele G. Micro-CT imaging of tomato seeds: predictive potential of 3D morphometry on germination. Biosyst Eng. 2020;200:112–122. doi: 10.1016/j.biosystemseng.2020.09.003.
- 178.Yang Z., Albrow-Owen T., Cai W., Hasan T. Miniaturization of optical spectrometers. Science. 2021;371:eabe0722. doi: 10.1126/science.abe0722.
- 179.Yang R., Lu X., Huang J., Zhou J., Jiao J., Liu Y., et al. A multi-source data fusion decision-making method for disease and pest detection of grape foliage based on ShuffleNet V2. Remote Sens. 2021;13:5102. doi: 10.3390/rs13245102.
- 180.Xu P., Tan Q., Zhang Y., Zha X., Yang S., Yang R. Research on maize seed classification and recognition based on machine vision and deep learning. Agriculture. 2022;12:232. doi: 10.3390/agriculture12020232.