Plant Direct. 2025 Dec 11;9(12):e70124. doi: 10.1002/pld3.70124

RGB‐Based Deep Learning for Freeze Damage Detection in Strawberry: Comparing Scratch and Transfer Learning Approaches on Custom Data

Nijhum Paul 1,2, G C Sunil 1, Amin Khan 3, Samriddha Das 1, Harlene Hatterman‐Valenti 3, James V Anderson 4, Jinita Stapit Kandel 4, David Horvath 4, Xin Sun 1
PMCID: PMC12696421  PMID: 41393171

ABSTRACT

Freeze damage presents a critical threat to agricultural productivity, resulting in substantial economic losses, especially in sensitive crops such as strawberries. Traditional methods for assessing freeze damage, including manual inspection, are time‐consuming, subjective, and labor‐intensive. In this study, a deep learning (DL) and computer vision‐based approach was proposed to automate freeze damage classification in strawberry plants using RGB images. The performance of four convolutional neural network (CNN) architectures was evaluated: DenseNet‐121, Inception V3, ResNet‐50, and Xception. Two training methods were compared: transfer learning (TL) using pretrained ImageNet weights and training models from scratch. The models were assessed based on classification accuracy, precision, recall, F1‐score, and inference time. The results indicate that models trained from scratch outperform TL models, achieving up to 97% accuracy with ResNet‐50, whereas TL models attained a maximum accuracy of 84%. The ResNet‐50 model also achieved the fastest inference time (3.0 s), while DenseNet‐121 was the smallest (26.86 MB). Furthermore, the models were most effective at identifying severely damaged plants but struggled to differentiate mild damage from minimal or no damage. The findings suggest that scratch‐trained models deliver more accurate solutions for freeze damage classification in strawberry plants. Additionally, DenseNet‐121 was the best choice for memory‐limited applications, while ResNet‐50 excelled in speed‐sensitive tasks. This study underscores the potential of deep learning and computer vision to automate freeze damage assessment in strawberry plants, providing a more accurate, rapid, and nondestructive alternative to traditional methods.

Keywords: CNN, computer vision, deep learning, freeze damage, plant stress, RGB, transfer learning

1. Introduction

Computer vision‐based deep learning (DL) approaches in precision agriculture are transforming how farmers manage crops and maximize yields. By utilizing advanced image processing and DL techniques, farmers can conduct nondestructive, rapid, and precise analyses of a variety of agricultural phenomena, including the detection of freeze damage in plants. Freeze damage presents a significant threat to agricultural crops globally, leading to considerable economic losses and undermining food security (Mirzabaev et al. 2023). It can result from sudden temperature drops, which affect the structural integrity and physiological functions of plants. Accurate and timely assessment of freeze damage is increasingly vital for reducing economic losses, enhancing agricultural management, and safeguarding food security as extreme weather events occur more frequently (Grigorieva et al. 2023; Shankar and Shikha 2018). In parallel, the development of cold‐tolerant or resilient plant varieties is gaining significant attention. Researchers and breeders are actively assessing the cold damage tolerance of various plants to identify and cultivate more resilient species. Integrating these breeding efforts with advanced DL‐based monitoring systems can aid researchers in evaluating plant responses to cold stress more efficiently and accurately.

Traditional methods for assessing freeze damage often rely on manual inspection and evaluation, such as wilting, discoloration, or tissue damage, which are labor‐intensive, time‐consuming, subjective, and prone to human error (Sun et al. 2025). Chlorophyll fluorescence imaging is another approach to determining the freezing tolerance of leaves, as it indicates stress and damage to the photosynthetic machinery of plants caused by freezing temperatures (Ehlert and Hincha 2008). However, this method requires controlled temperature and lighting conditions, unlike DL‐based methods. Measuring electrolyte leakage from plant tissues is also utilized to assess cellular damage caused by freezing temperatures (Hatsugai and Katagiri 2018). Nevertheless, this process is destructive and necessitates careful sampling, preparation, and handling of plant tissues to avoid additional damage that could influence results. In contrast, DL and computer vision‐based freeze damage detection systems are nondestructive, offering consistent and rapid analysis of freeze damage.

Imaging sensors have gained popularity in agriculture due to their ability to provide nondestructive and timely information about crops, soil, and environmental conditions. Freezing damage in plants can be identified through phenotypic markers such as changes in leaf coloration, wilting, and the appearance of necrotic spots. These visual indicators are effectively detectable using imaging sensors. Some popular imaging sensors used for freeze damage detection in plants are RGB (Enders et al. 2018; Macedo‐Cruz et al. 2011), multispectral (Chen et al. 2019), hyperspectral (Perry et al. 2017), and thermal (Kokin et al. 2018). Furthermore, imaging sensors mounted on machines (i.e., unmanned ground vehicles [UGV], unmanned aerial vehicles [UAV], satellites, and airplanes) provide rapid freeze damage detection over large areas without significant labor costs. Yuan and Choi (2021) utilized UAVs mounted with RGB and thermal sensors to propose a frost management system in an apple orchard. Li et al. (2022) used UAV images captured by a DJI Phantom 4 Pro V2.0 to propose a low‐cost freezing‐tolerant rapeseed material recognition system.

Several studies have conducted various statistical analyses on image data for assessing agricultural damage due to cold weather. Wu et al. (2021) applied Partial Least Squares Regression (PLSR) and Support Vector Regression (SVR) to predict yield differences in frost‐damaged crops. Asante et al. (2021) employed the Lagrange Multiplier (LM) method to evaluate the impact of nitrogen on cold tolerance in tea plants. Lu et al. (2021) used Partial Least Squares Path Analysis (SPA‐PLS) for assessment of freeze‐induced damage and prediction of Minimum Winter Temperature (MWT) at the seed source origin of loblolly pine seedlings. Gobbett et al. (2020) analyzed the relationship between nighttime temperatures and frost risk in grapevines using a Multivariate Adaptive Regression Spline (MARS) model. Similarly, López‐Granados et al. (2019) used Tukey HSD for classifying cold‐stress responses of inbred maize seedlings using RGB imaging, while Kimball et al. (2017) employed ANOVA to demonstrate that digital imaging analysis provides a reliable alternative to visual ratings of freezing injury. Murphy et al. (2020) used a Student's t‐test to evaluate the effect of frost stress on the spectral response of wheat plant components, including heads and flag leaves. Beyond statistical approaches, traditional machine learning (ML) methods have also been employed in freeze damage studies. Goswami et al. (2019) applied Random Forest (RF) to distinguish between stress‐free and frost‐stressed crops. L. Zhang et al. (2020) used Support Vector Machines (SVM), Decision Tree (DT), and K‐nearest Neighbor (K‐NN) to classify rice seeds with different degrees of frost damage. J. Zhang et al. (2021) implemented an Extreme Learning Machine (ELM) model to categorize different levels of freeze damage in corn seeds. Gao et al. (2019) utilized Partial Least Squares Discriminant Analysis (PLS‐DA) to differentiate between normal and injured bud samples after lab‐simulated freezing events. Jia et al. (2016) employed biomimetic pattern recognition (BPR) to classify normal and frost‐damaged maize seeds. However, traditional statistical and machine learning models often face limitations when handling high‐dimensional, complex datasets or capturing nonlinear relationships. Many of these models require manual feature selection or transformation, making them less efficient for large‐scale and diverse agricultural applications. Additionally, their fixed structures limit adaptability to new data types or evolving tasks, necessitating more advanced and flexible approaches for freeze damage assessment.

DL provides a number of advantages over traditional ML and statistical analysis due to its capacity for automatic feature extraction and its ability to manage complex, large‐scale agricultural data (Miraei Ashtiani et al. 2021). L. Zhang et al. (2020) compared the performance of a deep forest (DF) model with various traditional ML models, such as decision trees (DT), K‐nearest neighbors (K‐NN), and support vector machines (SVM), for classifying frost‐damaged rice seeds. Their results showed that the DF model outperformed the traditional models on a small‐scale sample set. Furthermore, J. Zhang et al. (2021) compared the performance of three ML models (K‐NN, SVM, ELM) against a deep convolutional neural network (DCNN) for classifying different levels of freeze damage in corn seeds, and the DCNN surpassed all three methods.

Nonetheless, there are research limitations in identifying freeze damage in plants using deep learning, specifically with subtle degrees of freeze damage, particularly in the early stages. One of the primary limitations is the scarcity of high‐quality labeled data specific to freeze‐damaged plants. To develop a robust and generalizable deep learning model, it is essential to curate a diverse dataset that captures variations in environmental conditions, including lighting, shadows, plant height differences, and growth environments. Evaluating the necessity of constructing a custom dataset is crucial when training a DL‐based model, as existing datasets may not sufficiently represent the nuanced and subtle phenotypic indicators of freezing damage. As a result, models pretrained on existing datasets can perform poorly compared to a model trained on a custom dataset. A custom freeze‐damage dataset possesses unique characteristics, such as subtle phenotypic indicators of various levels of freezing damage, which help a DL model learn distinctive features for better prediction. Furthermore, previous research on developing DL‐based models for freeze damage detection in strawberry plants has been limited to multispectral imaging (Sunil et al. 2025); no studies have been published that utilize RGB or hyperspectral imagery for this purpose. Therefore, the practical application of robust deep learning models trained on expert‐labeled training data is vital for agricultural productivity, as it enables informed decision‐making for farmers to mitigate losses and optimize resource use.

The objectives of this paper are (1) to develop a freeze damage classification model for strawberry plants using RGB images, (2) to evaluate various deep learning models to identify the one that performs best, and (3) to compare two training approaches: models trained using pretrained weights (transfer learning [TL]) on the ImageNet dataset and models trained from scratch, regarding prediction performance and inference time.

2. Materials and Methods

2.1. Experimental Design

The dataset was collected from two separate experiments to increase the generalizability of the model. In experiment 1, the dataset was prepared by subjecting strawberry plants (cv. Honeoye) to various levels of freezing stress under controlled growth chamber conditions. Strawberry runners were planted in square 10 cm pots filled with horticulture potting media and initially grown in a greenhouse under controlled conditions at 20°C ± 2°C with a 12:12 light/dark photoperiod. When the plants had reached the 4‐leaf stage, the treatment groups were transferred to an acclimation chamber set at 5°C with a 12‐h photoperiod. A control set of plants was maintained at room temperature in the greenhouse. At the completion of each acclimation period (1, 2, and 3 weeks), the corresponding set of plants was transferred to a freezing chamber, where they were exposed to different freezing temperatures (−4°C, −8°C, or −12°C) for 4 h. Following freezing and a brief recovery period at 15°C, the plants were transferred back to the greenhouse and maintained under the controlled conditions (20°C ± 2°C with a 12:12 light/dark photoperiod). RGB images of plants were collected 3–7 days after the freezing treatments, once the plants had been returned to the greenhouse. In experiment 1, the plants were thus frozen at three temperature levels: slightly low (−4°C), moderately low (−8°C), and very low (−12°C) temperatures. To address the gap in freezing temperature treatments of experiment 1 and obtain additional damage level images, another experiment was conducted using the same strawberry variety. In the second experiment, the protocols remained the same except for plants being exposed to a single freezing temperature of −10°C, a temperature determined to provide the greatest range of damage following freezing.

2.2. Data Acquisition

The RGB images of strawberry plants were collected using Canon EOS 90D digital cameras (Canon Inc., Tokyo, Japan) under natural lighting conditions in the greenhouse (Figure 1). In addition to these RGB images, images captured with a hyperspectral sensor (Specim FX10, Specim, Oulu, Finland) were added to increase the dataset size; a larger dataset helps DL models generalize and reduces the risk of overfitting. There were a total of 759 image samples, which were labeled by experts into three categories: severely damaged or dead, mild damage, and minimal or no damage. The pie chart in Figure 2 illustrates the number of images in each category, and Figure 3 shows examples of plant samples in each category.

FIGURE 1.

An illustration of the data acquisition setup in the greenhouse.

FIGURE 2.


Pie chart showing sample size in each category.

FIGURE 3.


Images of strawberry plant samples with different damage conditions.

2.3. Data Preprocessing

Data preprocessing started with cropping the images to minimize background content. Afterward, they were rescaled by dividing each pixel value by 255, normalizing the pixel values to a range between 0 and 1. Additionally, the images were resized to either 224 × 224 or 299 × 299 to ensure compatibility with certain DL architectures. These dimensions were chosen based on the standard input sizes for the selected convolutional neural network (CNN) architectures, ensuring optimal feature extraction and computational efficiency. The dataset was then split into training, validation, and test sets in a 70:15:15 ratio using a Python script. This resulted in 530 samples for training, 114 for validation, and 115 for testing (Table 1). To enhance the training data, augmentation techniques such as vertical and horizontal flipping, zooming, height and width shifting, shearing, and rotation were applied, effectively increasing the number of training samples. Data augmentation was performed using the Keras ImageDataGenerator. Although the number of stored training images remained constant (530), random augmentation produced a newly transformed version of each image at every epoch, so the model encountered approximately epochs × training samples augmented variants during training.
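The rescaling and 70:15:15 split described above can be sketched in plain Python. This is an illustrative sketch rather than the authors' actual script, and the exact subset counts (530/114/115 in Table 1) depend on rounding and any per‐class handling:

```python
import random

def split_indices(n_samples, ratios=(0.70, 0.15, 0.15), seed=42):
    """Shuffle sample indices and cut them into train/val/test subsets."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    n_train = round(n_samples * ratios[0])
    n_val = round(n_samples * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def rescale(pixels):
    """Normalize 8-bit pixel values to the [0, 1] range."""
    return [p / 255.0 for p in pixels]

train, val, test = split_indices(759)
print(len(train), len(val), len(test))  # 531 114 114 with this rounding
```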

TABLE 1.

Summary of training, validation, and test dataset.

Training Validation Test
Minimal or no damage 187 45 46
Mild damage 111 19 19
Severely damaged or dead 232 50 50
Total 530 114 115

2.4. Deep Learning Models

For model development, four of the most popular state‐of‐the‐art CNN architectures were selected: DenseNet‐121, Xception, ResNet‐50, and Inception v3. These models have been extensively utilized for plant stress detection tasks and have proven to perform effectively. For instance, Rezaei et al. (2024) employed all four models for disease detection in barley. Malvade et al. (2022) utilized Inception, ResNet‐50, and DenseNet for disease detection in paddy. Chandel et al. (2024) applied Inception and ResNet‐50 for water stress detection in wheat and maize crops. Sujatha et al. (2023) used DenseNet and ResNet for disease detection in paddy crops. These models have shown consistent performance across various plant stress detection tasks and agricultural applications. Their extensive use in previous research underscores their reliability and robustness, making them suitable choices for the freeze damage classification task in this study.

2.4.1. DenseNet‐121 Architecture

DenseNet‐121 is a popular CNN architecture that was introduced in the paper “Densely Connected Convolutional Networks” by Huang et al. (2017). It has been previously used for plant stress classification by Tassis and Krohling (2022). Instead of drawing representational power from extremely deep or wide architectures, DenseNets exploit the potential of the network through feature reuse (Ruiz 2018). They concatenate the output feature maps of a layer with the incoming feature maps instead of summing them as done in the ResNet architecture. This design improves information flow between layers, reduces the number of parameters, and enhances feature reuse. DenseNets are organized into DenseBlocks, where the dimensions of the feature maps remain consistent within a block, but the number of filters varies between them. The layers between the blocks are known as transition layers, which reduce the spatial dimensions of the feature maps and manage the number of channels. A transition layer consists of a batch normalization layer, a convolutional layer, and a pooling layer (typically average pooling). Following the dense blocks and transition layers, the model includes a global average pooling layer, which is succeeded by a fully connected (dense) layer with three output classes.
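The concatenation‐based feature reuse can be illustrated with a short channel‐count calculation, assuming DenseNet‐121's standard growth rate of 32 and its 64‐channel stem output:

```python
def dense_block_channels(in_channels, num_layers, growth_rate=32):
    """Channel count after each layer of a dense block: every layer adds
    `growth_rate` new feature maps by concatenation (not summation)."""
    history = [in_channels]
    for _ in range(num_layers):
        history.append(history[-1] + growth_rate)
    return history

# DenseNet-121's first dense block: 64-channel stem input, 6 layers
print(dense_block_channels(64, 6))  # [64, 96, 128, 160, 192, 224, 256]
```

Because each layer sees all preceding feature maps, it only needs to contribute a small number of new filters, which is why DenseNets remain parameter‐efficient despite their depth.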

2.4.2. Inception v3 Architecture

The Inception architecture, commonly referred to as GoogLeNet, is a CNN recognized for its innovative use of “Inception modules” to enhance the network's efficiency and performance. It was developed by researchers at Google (Szegedy et al. 2014). Inception modules consist of several parallel convolutional layers with varying filter sizes (e.g., 1 × 1, 3 × 3, 5 × 5) and a max‐pooling layer, enabling the network to learn features at different scales simultaneously. The outputs from these layers are concatenated, offering a rich representation of the input data. Inception also integrates auxiliary classifiers in the intermediate layers to provide additional gradient signals for more stable training. The architecture has achieved high accuracy on benchmark datasets such as ImageNet and is noted for its efficiency, especially regarding parameters and computational cost.
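Since the parallel branches are concatenated along the channel axis, a module's output depth is simply the sum of its branches' filter counts; the branch sizes below are illustrative, not taken from any specific Inception v3 layer:

```python
def inception_output_channels(branch_filters):
    """Output depth of an Inception module: parallel branches are
    concatenated channel-wise, so depths add up."""
    return sum(branch_filters)

# Illustrative filter counts for 1x1, 3x3, 5x5, and pool-projection branches
print(inception_output_channels([64, 128, 32, 32]))  # 256
```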

2.4.3. Xception Architecture

Xception was introduced by Chollet (2016) as an extension of the Inception architecture. This neural network architecture relies solely on depthwise separable convolution layers, which reduce the number of parameters and computational costs, resulting in a more efficient model. This design permits the model to be deeper without a significant increase in computational resources. Xception consists of a series of blocks, each containing multiple depthwise separable convolutional layers. The blocks are interconnected by residual (skip) connections that enhance gradient flow and facilitate efficient learning. Similar to ResNet, Xception employs residual connections to combine the output of a block with its input. After passing through the convolutional blocks, the network utilizes a global average pooling layer to condense the spatial dimensions of the feature maps into a single vector. The network concludes with a fully connected layer (dense layer) and a softmax activation function to yield the final classification output.
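The parameter savings of depthwise separable convolution can be seen by comparing weight counts; this is a back‐of‐the‐envelope sketch that ignores biases and batch‐norm parameters:

```python
def standard_conv_params(k, c_in, c_out):
    """Weights of a standard k x k convolution mapping c_in -> c_out."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then a 1x1 pointwise
    convolution to mix channels."""
    return k * k * c_in + c_in * c_out

# A 3x3 layer mapping 128 -> 256 channels
print(standard_conv_params(3, 128, 256))   # 294912
print(separable_conv_params(3, 128, 256))  # 33920
```

Here the separable version needs roughly one ninth of the weights, which is what lets Xception go deeper at comparable computational cost.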

2.4.4. ResNet‐50 Architecture

ResNet‐50 is a 50‐layer CNN comprising 48 convolutional layers, one MaxPool layer, and one average pool layer, introduced by He et al. (2015). It enables the training of deep networks without performance degradation due to vanishing gradients. The architecture features a 7 × 7 convolutional layer with a stride of 2, followed by four main stages of convolutional blocks with different filter sizes and strides. Within each stage, there are also identity blocks that maintain the same input and output dimensions and consist of two convolutional layers. ReLU activation functions are utilized throughout the network, which ends with a global average pooling layer and a fully connected layer for classification.

2.5. Model Training Setup and Hyperparameter Selection

In this study, two approaches were used for model training: TL and training a model from scratch. Both methods were applied to assess whether a pretrained ImageNet model or a model trained on a custom strawberry dataset would provide better performance. The computational environment used for deep learning model training consisted of an AMD Ryzen 5 7530U processor with Radeon Graphics and 32 GB of RAM (Advanced Micro Devices Inc., Santa Clara, California, USA).

To classify freeze damage using TL, four models employed pretrained weights from the ImageNet dataset. Initially, the top fully connected layers of these models were removed, retaining only the convolutional base. The pretrained ImageNet weights were loaded, and the layers were frozen to preserve the features learned from ImageNet without updating them during training. The convolutional output was then flattened to convert the 2D feature maps into a 1D feature vector, a necessary step before adding fully connected (dense) layers. A dense layer with three neurons (corresponding to the three classes) and a softmax activation function was subsequently added. Unlike TL models, scratch models started with randomly initialized weights, learning features and patterns directly from the training data. While TL offers advantages such as reduced computational cost and lower data requirements, it also has potential drawbacks. For example, if the pretrained model was trained on a dataset significantly different from the target task's dataset, its features may not transfer effectively, leading to poor performance.
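Under this setup, the only trainable weights belong to the final dense layer. A quick arithmetic check, assuming the standard final feature‐map shapes of these backbones at the stated input sizes, reproduces the trainable‐parameter counts reported in Table 4:

```python
def head_trainable_params(h, w, c, num_classes=3):
    """Trainable weights when a frozen backbone's final h x w x c feature
    map is flattened and fed into a single softmax dense layer."""
    flat_features = h * w * c
    return flat_features * num_classes + num_classes  # weights + biases

# Standard final feature-map shapes at the input sizes listed in Table 2
print(head_trainable_params(7, 7, 1024))    # DenseNet-121: 150531
print(head_trainable_params(8, 8, 2048))    # Inception V3: 393219
print(head_trainable_params(7, 7, 2048))    # ResNet-50:    301059
print(head_trainable_params(10, 10, 2048))  # Xception:     614403
```

These values match the trainable‐parameter column of Table 4, consistent with only the classification head being updated during TL.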

There were differences in choosing model parameters when training the model using these two approaches. In TL, the model has already been trained on a large dataset (such as ImageNet) and has learned meaningful features. In contrast, when training a model from scratch, the parameters are initialized randomly, requiring the model to learn all features and representations directly from the training data, which demands more data and longer training time. Table 2 presents the model training parameters for all models during both TL and scratch learning. The number of epochs is set to 500 for TL and 700 for scratch learning. These different epoch values were chosen to ensure optimal model performance during training. The higher number of epochs for scratch learning reflects the additional time required for training and optimization, which is expected since, in TL, all the backbone layers are pretrained and optimized.

TABLE 2.

Model training parameters for each individual model.

Parameters Value
Image size DenseNet‐121, ResNet‐50: 224 × 224 × 3; Inception V3, Xception: 299 × 299 × 3
Learning rate 0.001
Batch size 16
Epochs Transfer learning: 500; scratch learning: 700

2.6. Model Evaluation

Several evaluation metrics were employed to assess the classification models, including accuracy, precision, recall, and F1‐score (Sokolova and Lapalme 2009). Accuracy measures the proportion of correct predictions made by the model out of the total number of samples. Precision reflects the proportion of correctly predicted positive cases that were truly positive, calculated by dividing the number of true positives by the total number of predicted positives. Recall, on the other hand, indicates how well the model identified actual positive cases, determined by the ratio of correctly predicted positives to all actual positive cases. The F1‐score provides a balance between precision and recall, reaching its maximum value when precision equals recall.
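These metrics can be computed per class directly from predicted and true labels. The sketch below uses short hypothetical label lists, not the study's data:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def per_class_metrics(y_true, y_pred, label):
    """Precision, recall, and F1-score for one class (one-vs-rest)."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical labels for illustration only
y_true = ["severe", "severe", "mild", "none", "none"]
y_pred = ["severe", "mild", "mild", "none", "mild"]
print(accuracy(y_true, y_pred))  # 0.6
print(per_class_metrics(y_true, y_pred, "mild"))
```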

Furthermore, the size of each model, their trainable parameters, and inference time were compared to assess model performance. Inference time refers to the amount of time it takes for a trained DL model to process new input data and make predictions. Inference time was measured on a 12th Gen Intel Core i7‐12650H (16 CPUs), ~2.7 GHz, with 16 GB of RAM and an NVIDIA GeForce RTX 4050 graphics processor (NVIDIA Corporation, Santa Clara, California, USA). To ensure a reliable measurement, the prediction was run 500 times, and the average was taken as the inference time of a model.
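The averaging procedure can be sketched as follows; `predict_fn` is a hypothetical stand‐in for the trained model's prediction call, since the study's models are not reproduced here:

```python
import time

def mean_inference_time(predict_fn, sample, runs=500):
    """Average wall-clock time per prediction over repeated runs."""
    predict_fn(sample)  # warm-up call, excluded from the timed loop
    start = time.perf_counter()
    for _ in range(runs):
        predict_fn(sample)
    return (time.perf_counter() - start) / runs

# Any callable works as a stand-in for model.predict
avg = mean_inference_time(lambda x: sum(v * v for v in x), list(range(1000)))
print(f"{avg:.6f} s per prediction")
```

Averaging over many runs (and discarding a warm‐up call) smooths out caching and scheduling noise, giving a more stable estimate than a single timed prediction.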

3. Results

In this study, freeze damage classification analysis was conducted for strawberry plants using DL algorithms. Four DL models were evaluated using two different model training processes: TL and training models from scratch. The models were evaluated using classification metrics such as accuracy, precision, recall, and F1‐score. Additionally, the models' performances were assessed by their model size and inference time.

3.1. Freeze Damage Classification Using TL Models

In this training process, the models (DenseNet‐121, Inception V3, ResNet‐50, and Xception) were pretrained on the ImageNet dataset and retained the weights and biases learned from ImageNet. Table 3 shows that Inception V3 performs better than the other models, with an accuracy of 84%. Xception also provides good performance, with the best precision, recall, and F1‐score, and its accuracy is close to that of Inception V3. However, ResNet‐50 and DenseNet‐121 performed poorly compared to the other models, each with an accuracy of 71%.

TABLE 3.

Classification performance of TL models.

Accuracy Precision Recall F1‐score
DenseNet‐121 71 73 74 67
Inception V3 84 78 77 77
ResNet‐50 71 74 73 68
Xception 83 79 82 79

Figure 4 and Figure 5 show the confusion matrices for all TL models. DenseNet‐121 predicted 18 minimal or no damage samples as mild damage, more than the number it predicted correctly (16) for this category (Figure 4a). Inception V3 predicted seven minimal or no damage samples as mild damage (Figure 4b). The ResNet‐50 model predicted 16 minimal or no damage samples as mild damage (Figure 5a), and Xception predicted 14 (Figure 5b). Overall, the TL models erred most frequently on the minimal or no damage category, misclassifying those samples as mild damage.

FIGURE 4.


Confusion matrix for TL models (a) DenseNet‐121, (b) Inception V3.

FIGURE 5.


Confusion matrix for TL models (a) ResNet‐50, (b) Xception.

Figure 6 and Figure 7 show the per‐class classification performance of the TL models: precision, recall, and F1‐score for each category. Precision indicates how many of the positive predictions were actually correct; high precision reflects a low false positive rate. For the severely damaged category, Xception achieved 100% precision, meaning all of its positive predictions were truly positive, whereas ResNet‐50 had a lower precision (79%) for this category. For the minimal or no damage category, ResNet‐50 achieved a 100% precision rate. The precision for the mild damage category was comparatively lower than for the other two classes, ranging from 43% to 56% across all TL models, indicating that a large share of predicted mild damage cases were not actually mild damage plants. Recall shows how well the model identifies all positive instances. Figure 6 and Figure 7 show that all models have good recall (≥ 90%) for the severely damaged category, with Inception V3 capturing all positive instances (100%) correctly. For the mild damage class, however, Inception V3 achieved only a 47% recall score. The F1‐score balances precision and recall; a high F1‐score indicates that the model performs well on both. As observed in Figure 6 and Figure 7, Xception has the highest F1‐score (98%) for the severely damaged category among all TL models, DenseNet‐121 has the highest F1‐score (68%) for the mild damage class, and Inception V3 has the highest F1‐score (83%) for the minimal or no damage class.

FIGURE 6.


Precision, Recall and F1‐score for each class for TL models (a) DenseNet‐121, (b) Inception V3.

FIGURE 7.


Precision, Recall and F1‐score for each class for TL models (a) ResNet‐50 and (b) Xception.

Table 4 shows the characteristics of the TL models: model size, total parameters, trainable parameters, and inference time. DenseNet‐121 is the smallest model at 27.42 MB, with the lowest number of parameters, making it well suited for practical deployment on edge devices. ResNet‐50 has the largest size (91.13 MB), with Inception V3 and Xception slightly smaller at 84.67 and 81.92 MB, respectively. All TL models have far fewer trainable parameters than total parameters, since most of each architecture is frozen for training efficiency. Xception achieved the fastest inference time, making it ideal for speed‐critical tasks.

TABLE 4.

Summary of model size, number of total parameters, number of trainable parameters, and inference time of TL models.

Model size (MB) Total parameters Trainable parameters Inference time (s)
DenseNet‐121 27.42 7,188,035 150,531 3.6142
Inception V3 84.67 22,196,003 393,219 3.1444
ResNet‐50 91.13 23,888,771 301,059 3.0906
Xception 81.92 21,475,883 614,403 3.0054

3.2. Freeze Damage Classification Using Scratch Models

In this training process, the models (DenseNet‐121, Inception V3, ResNet‐50, and Xception) started with randomly initialized weights and learned features and patterns directly from the training data. ResNet‐50 outperformed the other models with an accuracy of 97% on test data, as shown in Table 5. However, DenseNet‐121 provided better recall than the other models (96%).

TABLE 5.

Classification performance of models trained from scratch.

Accuracy Precision Recall F1‐score
DenseNet‐121 95 92 96 94
Inception V3 94 93 94 93
ResNet‐50 97 96 95 96
Xception 94 91 94 92

Figure 8 and Figure 9 show the confusion matrices for the four models trained from scratch. DenseNet‐121 predicted four minimal or no damage samples as mild damage and one as severely damaged, as shown in Figure 8a. It predicted all 19 mild damage samples correctly and also performed very well on the severely damaged class, with only one sample wrongly predicted as mild damage. For the Inception V3 model, however, four minimal or no damage samples were predicted as severely damaged and two as mild damage (Figure 8b).

FIGURE 8.


Confusion matrix for models trained from scratch (a) DenseNet‐121, (b) Inception V3.

FIGURE 9.


Confusion matrix for models trained from scratch (a) ResNet‐50, (b) Xception.

ResNet‐50 predicted one minimal or no damage class as mild damage, and one sample was predicted as severely damaged. However, it could predict all samples in the severely damaged class correctly (Figure 9a). Xception predicted five minimal or no damage samples as mild damage (Figure 9b).

Figure 10 and Figure 11 show the per‐class classification performance of the scratch models: precision, recall, and F1‐score for each category. For the severely damaged class, the models have good precision, ranging from 93% to 100%. For the mild damage class, precision ranges from 78% to 94%, whereas for the minimal or no damage class it ranges from 95% to 100% among the scratch models. Additionally, Inception V3 and ResNet‐50 have 100% recall for the severely damaged class, and DenseNet‐121 has 100% recall for the mild damage class. For the F1‐score, the severely damaged class performed better than the other categories, ranging from 96% to 99%, while the mild damage class shows a comparatively lower F1‐score across all the scratch models.

FIGURE 10.

Precision, recall, and F1‐score for each class for models trained from scratch: (a) DenseNet‐121 and (b) Inception V3.

FIGURE 11.

Precision, recall, and F1‐score for each class for models trained from scratch: (a) ResNet‐50 and (b) Xception.
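The confusion matrices and per‐class metrics reported above can be reproduced from model predictions with scikit‐learn. The labels below are small hypothetical examples for illustration, not the study's actual test set:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

CLASSES = ["minimal/no damage", "mild damage", "severely damaged"]

# Hypothetical true labels and predictions (class indices 0, 1, 2).
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 1, 1, 0, 2, 2, 2])

# Rows = true class, columns = predicted class.
cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])
print(cm)

# Per-class precision, recall, and F1, as plotted in Figures 10 and 11.
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=[0, 1, 2], zero_division=0)
for name, p, r, f in zip(CLASSES, prec, rec, f1):
    print(f"{name}: precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

Reading row 0 of the matrix shows the pattern discussed above: minimal or no damage samples leaking into the mild damage column, while the severely damaged row stays clean.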

Table 6 summarizes the parameters of the scratch models: model size, total parameters, trainable parameters, and inference time. DenseNet‐121 is the smallest model at 26.86 MB with the fewest parameters, which makes it highly efficient for deployment in storage‐limited environments. ResNet‐50 is the largest at 90 MB, followed closely by Inception V3 (83.19 MB). ResNet‐50 also has the fastest inference time at 3.0016 s, making it well suited to real‐time or time‐sensitive tasks.

TABLE 6.

Summary of model size, number of total parameters, number of trainable parameters, and inference time of scratch models.

Model          Model size (MB)   Total parameters   Trainable parameters   Inference time (s)
DenseNet‐121   26.86             7,040,579          6,956,931              5.6257
Inception V3   83.19             21,808,931         21,774,499             3.1217
ResNet‐50      90.00             23,593,859         23,540,739             3.0016
Xception       79.60             20,867,627         20,813,099             3.1160
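The model sizes in Table 6 are consistent with storing each parameter as a 32‐bit float (4 bytes), and inference time is simply the wall‐clock cost of a forward pass. A minimal sketch of both measurements (the `predict_fn` below is a stand‐in for a trained model's predict call, not the paper's code):

```python
import time


def size_mb(n_params: int, bytes_per_param: int = 4) -> float:
    """Approximate size of a float32 model from its parameter count."""
    return n_params * bytes_per_param / 1024**2


# DenseNet-121's 7,040,579 parameters -> ~26.86 MB, matching Table 6.
print(round(size_mb(7_040_579), 2))


def time_inference(predict_fn, batch, n_runs: int = 5) -> float:
    """Average wall-clock seconds per call to predict_fn(batch)."""
    start = time.perf_counter()
    for _ in range(n_runs):
        predict_fn(batch)
    return (time.perf_counter() - start) / n_runs


# Stand-in predict function used only for illustration.
elapsed = time_inference(lambda b: [x * 2 for x in b], list(range(1000)))
print(f"{elapsed:.6f} s per inference")
```

The same 4‐bytes‐per‐parameter estimate also recovers ResNet‐50's size: 23,593,859 parameters give roughly 90 MB, as reported.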

4. Discussion

To date, no published article has addressed classifying freeze damage in strawberry plants using computer vision and deep learning. Damage assessment is typically done manually, which is subjective and time‐consuming; moreover, hiring agricultural experts can be expensive, and they may be unavailable in some areas. Computer vision and deep learning‐based systems can assess freeze damage promptly and provide a consistent, cost‐effective solution. This study proposes a deep learning‐based model for classifying freeze damage in strawberry plants. Using images from two experiments conducted under natural conditions (with inconsistent light and shadow) enhances the model's generalizability and reliability.

4.1. Performance Comparison of Different Training Approaches

In the field of precision agriculture, several studies have compared the performance of models trained from scratch with those utilizing TL to evaluate and understand their relative strengths, weaknesses, and suitability for a specific task or dataset. Gulzar (2024) introduced a novel deep learning model for seed classification, comparing training the Xception model from scratch versus using TL with a pretrained Xception model. Xu et al. (2022) compared both TL and models trained from scratch while developing models for versatile plant disease recognition with limited data.

In this work, four deep learning (DL) models (DenseNet‐121, Inception V3, ResNet‐50, and Xception) were evaluated using two training approaches: TL with ImageNet weights and training from scratch. The comparison indicates that the models trained from scratch on the strawberry dataset outperformed the TL models initialized with ImageNet weights. This suggests that the features learned from the ImageNet dataset are not sufficiently relevant for classifying freeze damage conditions in strawberry plants, and that the training data were sufficiently large and of high enough quality for the scratch models to prevail. A larger and more diverse dataset, encompassing a broader range of scenarios and variations, is crucial for training robust models. For example, a plant's appearance can change significantly within a few days after a freezing event: as the plant begins to recover, new tissue may develop, leading to variations in image appearance over time.

These results are consistent with the findings of Gupta et al. (2024), who trained a transformer‐based model both with TL and from scratch. For TL, they pretrained on ModelNet10 data and then fine‐tuned the pretrained model to classify the MNIST dataset; for the scratch setting, they trained directly on MNIST. They observed that the transfer‐learned models did not outperform the scratch models because the two datasets differ substantially in their distributions. Similarly, He et al. (2018) stated that ImageNet pretraining does not necessarily help reduce overfitting unless the dataset is very small. Additionally, Zhao et al. (2024) indicated that TL models may encounter a domain mismatch between the pretraining and target domains, leading to poor performance.

4.2. Performance Comparison of Different Models

It is noteworthy that the ResNet‐50 model trained from scratch achieved the highest accuracy (97%), outperforming the other models in this study. This superior performance can be attributed to its residual learning structure, which facilitates efficient gradient propagation and feature reuse across layers. Unlike Inception or Xception, which rely on multiscale or depthwise separable convolutions, ResNet's skip connections allow deeper feature learning without vanishing gradients, so the model can effectively capture the complex texture variations associated with freeze damage symptoms in strawberry leaves. These findings also align with prior research comparing deep learning algorithms. For instance, Shin et al. (2021) evaluated AlexNet, SqueezeNet, GoogLeNet, ResNet‐50, SqueezeNet‐MOD1, and SqueezeNet‐MOD2 for detecting powdery mildew disease on strawberry plants and reported that ResNet‐50 achieved the highest classification accuracy of 98.11%. Similarly, Rajwade et al. (2024) compared ResNet‐50 and MobileNetV2 for water stress detection in okra plants, with ResNet‐50 outperforming MobileNetV2. Malvade et al. (2022) compared Inception‐V3, VGG‐16, ResNet‐50, DenseNet‐121, and MobileNet‐28 and found that ResNet‐50 achieved the highest accuracy of 92.16%. Additionally, Khan et al. (2024) evaluated VGGNet, InceptionV3, ResNet‐50, and InceptionResNetV2, with ResNet‐50 demonstrating superior accuracy of 87.51%. These consistent findings highlight ResNet‐50's effectiveness across diverse plant stress detection tasks.
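The skip connections credited here can be sketched with Keras's functional API. This is a generic residual block of the form y = x + F(x), given for illustration, not the paper's exact configuration:

```python
import tensorflow as tf


def residual_block(x, filters: int):
    """y = x + F(x): two convolutions whose output is added back to the input,
    giving gradients a direct path around the transformation."""
    shortcut = x
    y = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(y)
    y = tf.keras.layers.Add()([shortcut, y])  # identity skip connection
    return tf.keras.layers.ReLU()(y)


inputs = tf.keras.Input(shape=(32, 32, 16))
outputs = residual_block(inputs, filters=16)
model = tf.keras.Model(inputs, outputs)
print(model.output_shape)  # shape preserved by the skip path
```

Because the addition requires matching shapes, the skip path leaves the spatial dimensions and channel count unchanged; in the full ResNet‐50, a 1 × 1 convolution on the shortcut handles blocks where they differ.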

4.3. Performance Comparison Across Different Categories

A careful evaluation of the classification models for each category shows that the models predict the severely damaged class more accurately than the other two. However, they have difficulty identifying the mild damage class, sometimes mistaking a mildly damaged plant for one with minimal or no damage. One reason may be the smaller number of mild damage samples relative to the other two categories. Furthermore, the symptoms of some mild damage plants differ only subtly from those of minimal damage plants, making the two hard to distinguish. This issue is most pronounced for the TL models, as the features learned from the ImageNet dataset are inadequate for capturing these subtle differences; the scratch models classified the mild damage category far better than the TL models.

4.4. Model Parameters Comparison

Model parameters, such as size and inference time, are critical metrics in evaluating the performance of a DL model, especially for practical deployment in real‐world applications. It was observed that DenseNet‐121 is the smallest model with the fewest parameters, making it ideal for memory‐constrained applications. However, it has the highest inference time, which makes it unsuitable for low‐latency tasks. This observation aligns with Jain (2024), who noted that the concatenation operations in DenseNet can slow down the forward pass during inference, increasing prediction latency. Moreover, Chao et al. (2019) reported that the memory traffic generated when extracting feature maps from the middle of DenseNet affects the network's inference speed; their results showed DenseNet‐121 (51.5 ms) having a higher inference time than ResNet‐50 (46.5 ms) despite a significantly lower parameter count. In this study, ResNet‐50 trained from scratch had the lowest inference time (3.0016 s), making it the fastest model overall.

4.5. Potential Benefits to Research Community and Practical Application in Precision Agriculture

The findings of this study have significant implications for both scientific research and practical applications in precision agriculture. By employing computer vision and DL for freeze damage detection, this work highlights the potential of automated systems to address the challenges of traditional plant stress assessment, which is often subjective, time‐consuming, and labor‐intensive. The adoption of DL‐based systems, as demonstrated in this study, can facilitate timely and accurate stress detection, enabling farmers to make informed decisions that minimize crop losses. Additionally, the comparison between TL and scratch training demonstrates the value of developing custom datasets for agricultural applications, reinforcing the need for researchers to generate high‐quality, domain‐specific data to enhance model performance and generalizability. The evaluation of multiple state‐of‐the‐art CNN architectures offers valuable insights into model selection, balancing tradeoffs among accuracy, inference speed, and hardware constraints. Furthermore, because the models are trained on RGB images, which can be captured with standard digital cameras or smartphones, the system offers a cost‐effective solution that can be readily adopted in both field and greenhouse environments. For practical deployment, cameras can be mounted on fixed stations, mobile robots, or drones to enable rapid, nondestructive assessment of freeze damage after cold events. The methods and findings presented in this study can serve as a reference for similar applications across other crops and stress types, enabling the research community to replicate and extend this work.

4.6. Limitations and Future Directions

The dataset used in this study was imbalanced, with fewer samples in the mild damage category than in the minimal or no damage and severely damaged categories. This imbalance may have contributed to the models' difficulty in accurately classifying mild damage cases. Additionally, subtle differences between the phenotypic markers of the mild damage and minimal or no damage categories posed classification challenges, especially for the models trained with TL. Future studies should concentrate on increasing the number of samples in underrepresented categories such as mild damage; techniques like the Synthetic Minority Oversampling Technique (SMOTE) or Adaptive Synthetic (ADASYN) sampling can help balance the dataset. Collecting images under diverse environmental conditions, including outdoor fields and various growth stages, can enhance the generalizability and robustness of the models. Incorporating additional data modalities, such as thermal or multispectral imaging alongside RGB images, may help capture subtle differences in freeze damage symptoms and boost model performance. Future work should also aim to optimize model architectures for faster inference without sacrificing accuracy, enabling real‐time applications in precision agriculture, and could explore emerging architectures such as Vision Transformers (ViT) or hybrid models that combine CNNs with attention mechanisms.
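SMOTE, mentioned above, oversamples a minority class by interpolating between a minority sample and one of its nearest minority‐class neighbors. A minimal NumPy sketch of that core idea follows; in practice one would use a library such as imbalanced‐learn and would typically operate on extracted feature vectors rather than raw image pixels:

```python
import numpy as np


def smote_like(X_min: np.ndarray, n_new: int, k: int = 3,
               seed: int = 0) -> np.ndarray:
    """Generate n_new synthetic minority samples by linear interpolation
    between a random minority sample and one of its k nearest neighbors."""
    rng = np.random.default_rng(seed)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a sample is not its own neighbor
    neighbors = np.argsort(d, axis=1)[:, :k]

    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))   # random minority sample
        j = rng.choice(neighbors[i])   # one of its nearest neighbors
        lam = rng.random()             # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)


# Hypothetical 2-D feature vectors for a small "mild damage" class.
X_mild = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = smote_like(X_mild, n_new=6)
print(X_new.shape)  # (6, 2)
```

Each synthetic point lies on a segment between two real minority samples, so the oversampled class stays within the region the minority data already occupies.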

5. Conclusions

In this study, deep learning (DL) models were evaluated for classifying freeze damage in strawberry plants using both TL and training from scratch. Four models (DenseNet‐121, Inception V3, ResNet‐50, and Xception) were assessed under the two training approaches. The results demonstrated that the scratch models outperformed the TL models in accuracy, precision, recall, and F1‐score. Additionally, the DL models excelled at identifying severely damaged plants relative to the other two categories. TL models had more difficulty distinguishing mild damage plants from those with minimal or no damage, owing to the smaller number of mild damage samples and the subtle differences in their symptoms; the scratch models identified mild damage plants more reliably. The ResNet‐50 scratch model surpassed the others with 97% accuracy and the lowest inference time, while DenseNet‐121 proved ideal for memory‐constrained applications. Nonetheless, this study has limitations, such as misclassifications by the models, particularly within the mild damage class; adding more training data and further optimizing the models should be pursued in future research to improve classification. Overall, this work establishes a strong foundation for applying DL in agricultural freeze damage assessment, contributing to more preventive and sustainable farming practices.

Author Contributions

Nijhum Paul: writing – original draft, investigation, methodology, formal analysis, validation, visualization, data curation. G. C. Sunil: writing – review and editing, investigation, data curation. Amin Khan: writing – review and editing, data curation. Samriddha Das: writing – review and editing. Harlene Hatterman‐Valenti: writing – review and editing, supervision, resources. James V. Anderson: writing – review and editing, supervision, resources. Jinita Stapit Kandel: writing – review and editing, supervision, resources. David Horvath: writing – review and editing, supervision, resources, funding acquisition, project administration. Xin Sun: writing – review and editing, supervision, resources, project administration, funding acquisition, data curation.

Funding

This work was supported by the US Department of Agriculture (FAR0037147, FAR0037111) and National Institute of Food and Agriculture (7008233, 7008910).

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This work was supported by the USDA Agricultural Research Service, project number FAR0037147 and FAR0037111. This work was supported in part by the USDA National Institute of Food and Agriculture (NIFA), Hatch project number ND01487. This research was supported, in part, by the intramural research program of the US Department of Agriculture, National Institute of Food and Agriculture, Hatch Multistate project accession number, 7008233 and 7008910.

Paul, N. , Sunil G. C., Khan A., et al. 2025. “RGB‐Based Deep Learning for Freeze Damage Detection in Strawberry: Comparing Scratch and Transfer Learning Approaches on Custom Data.” Plant Direct 9, no. 12: e70124. 10.1002/pld3.70124.

Data Availability Statement

The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

References

  1. Asante, E. A. , Du Z., Lu Y., and Hu Y.. 2021. “Detection and Assessment of Nitrogen Effect on Cold Tolerance for Tea by Hyperspectral Reflectance With PLSR, PCR, and LM Models.” Information Processing in Agriculture 8, no. 1: 96–104. 10.1016/j.inpa.2020.03.001. [DOI] [Google Scholar]
  2. Chandel, N. S. , Chakraborty S. K., Chandel A. K., et al. 2024. “State‐of‐the‐Art AI‐Enabled Mobile Device for Real‐Time Water Stress Detection of Field Crops.” Engineering Applications of Artificial Intelligence 131: 107863. 10.1016/j.engappai.2024.107863. [DOI] [Google Scholar]
  3. Chao, P. , Kao C.‐Y., Ruan Y.‐S., Huang C.‐H., and Lin Y.‐L. 2019. “HarDNet: A Low Memory Traffic Network.” http://arxiv.org/abs/1909.00948.
  4. Chen, Y. , Sidhu H. S., Kaviani M., McElroy M. S., Pozniak C. J., and Navabi A.. 2019. “Application of Image‐Based Phenotyping Tools to Identify QTL for In‐Field Winter Survival of Winter Wheat (Triticum aestivum L.).” Theoretical and Applied Genetics 132, no. 9: 2591–2604. 10.1007/s00122-019-03373-6. [DOI] [PubMed] [Google Scholar]
  5. Chollet, F. 2016. “Xception: Deep Learning With Depthwise Separable Convolutions.” http://arxiv.org/abs/1610.02357.
  6. Ehlert, B. , and Hincha D. K.. 2008. “Chlorophyll Fluorescence Imaging Accurately Quantifies Freezing Damage and Cold Acclimation Responses in Arabidopsis Leaves.” Plant Methods 4, no. 1: 12. 10.1186/1746-4811-4-12. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Enders, T. A. , St Dennis S., Oakland J., et al. 2018. “Classifying Cold‐Stress Responses of Inbred Maize Seedlings Using RGB Imaging.” Plant Direct 3: e00104. 10.1101/432039. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Gao, Z. , Zhao Y., Khot L. R., Hoheisel G. A., and Zhang Q.. 2019. “Optical Sensing for Early Spring Freeze Related Blueberry Bud Damage Detection: Hyperspectral Imaging for Salient Spectral Wavelengths Identification.” Computers and Electronics in Agriculture 167: 105025. 10.1016/j.compag.2019.105025. [DOI] [Google Scholar]
  9. Gobbett, D. L. , Nidumolu U., and Crimp S.. 2020. “Modelling Frost Generates Insights for Managing Risk of Minimum Temperature Extremes.” Weather and Climate Extremes 27: 100176. 10.1016/j.wace.2018.06.003. [DOI] [Google Scholar]
  10. Goswami, J. , Sharma V., Chaudhury B. U., and Raju P. L. N.. 2019. “Rapid Identification of Abiotic Stress (FROST) in In‐Filed Maize Crop Using UAV Remote Sensing.” International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences—ISPRS Archives 42, no. 3/W6: 467–471. 10.5194/isprs-archives-XLII-3-W6-467-2019. [DOI] [Google Scholar]
  11. Grigorieva, E. , Livenets A., and Stelmakh E.. 2023. “Adaptation of Agriculture to Climate Change: A Scoping Review.” Climate 11, no. 10: 202. 10.3390/cli11100202. [DOI] [Google Scholar]
  12. Gulzar, Y. 2024. “Enhancing Soybean Classification With Modified Inception Model: A Transfer Learning Approach.” Emirates Journal of Food and Agriculture 36: 1–9. 10.3897/ejfa.2024.122928. [DOI] [Google Scholar]
  13. Gupta, K. , Vippala R., and Srivastava S.. 2024. “Transfer Learning With Point Transformers”. http://arxiv.org/abs/2404.00846.
  14. Hatsugai, N. , and Katagiri F.. 2018. “Quantification of Plant Cell Death by Electrolyte Leakage Assay.” Bio‐Protocol 8, no. 5: e2758. 10.21769/bioprotoc.2758. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. He, K. , Girshick R., and Dollár P.. 2018. “Rethinking ImageNet Pre‐Training.” http://arxiv.org/abs/1811.08883.
  16. He, K. , Zhang X., Ren S., and Sun J.. 2015. “Deep Residual Learning for Image Recognition.” http://arxiv.org/abs/1512.03385.
  17. Huang, G. , Liu Z., Van Der Maaten L., and Weinberger K. Q.. 2017. “Densely Connected Convolutional Networks.” In Proceedings—30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017‐January, 2261–2269. 10.1109/CVPR.2017.243. [DOI]
  18. Jain, A. 2024. “Deep Learning Architecture 7: DenseNet.” Medium. December 17. https://medium.com/@abhishekjainindore24/deep‐learning‐architecture‐7‐densenet‐feee44d57f89.
  19. Jia, S. , Yang L., An D., et al. 2016. “Feasibility of Analyzing Frost‐Damaged and Non‐Viable Maize Kernels Based on Near Infrared Spectroscopy and Chemometrics.” Journal of Cereal Science 69: 145–150. 10.1016/j.jcs.2016.02.018. [DOI] [Google Scholar]
  20. Khan, I. , Sohail S. S., Madsen D. Ø., and Khare B. K.. 2024. “Deep Transfer Learning for Fine‐Grained Maize Leaf Disease Classification.” Journal of Agriculture and Food Research 16: 101148. 10.1016/j.jafr.2024.101148. [DOI] [Google Scholar]
  21. Kimball, J. A. , Tuong T. D., Arellano C., Livingston D. P., and Milla‐Lewis S. R.. 2017. “Assessing Freeze‐Tolerance in St. Augustine Grass: Temperature Response and Evaluation Methods.” Euphytica 213, no. 5: 100. 10.1007/s10681-017-1899-z. [DOI] [Google Scholar]
  22. Kokin, E. , Pennar M., Palge V., and Jürjenson K.. 2018. “Strawberry Leaf Surface Temperature Dynamics Measured by Thermal Camera in Night Frost Conditions.” Agronomy Research 16, no. 1: 122–133. 10.15159/AR.18.010. [DOI] [Google Scholar]
  23. Li, L. , Qiao J., Yao J., Li J., and Li L.. 2022. “Automatic Freezing‐Tolerant Rapeseed Material Recognition Using UAV Images and Deep Learning.” Plant Methods 18, no. 1: 5. 10.1186/s13007-022-00838-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. López‐Granados, F. , Torres‐Sánchez J., Jiménez‐Brenes F. M., Arquero O., Lovera M., and De Castro A. I.. 2019. “An Efficient RGB‐UAV‐Based Platform for Field Almond Tree Phenotyping: 3‐D Architecture and Flowering Traits.” Plant Methods 15, no. 1: 160. 10.1186/s13007-019-0547-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Lu, Y. , Walker T. D., Acosta J. J., et al. 2021. “Prediction of Freeze Damage and Minimum Winter Temperature of the Seed Source of Loblolly Pine Seedlings Using Hyperspectral Imaging.” Forest Science 67, no. 3: 321–334. 10.1093/forsci/fxab003. [DOI] [Google Scholar]
  26. Macedo‐Cruz, A. , Pajares G., Santos M., and Villegas‐Romero I.. 2011. “Digital Image Sensor‐Based Assessment of the Status of Oat (Avena sativa L.) Crops After Frost Damage.” Sensors 11, no. 6: 6015–6036. 10.3390/s110606015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Malvade, N. N. , Yakkundimath R., Saunshi G., Elemmi M. C., and Baraki P.. 2022. “A Comparative Analysis of Paddy Crop Biotic Stress Classification Using Pre‐Trained Deep Neural Networks.” Artificial Intelligence in Agriculture 6: 167–175. 10.1016/j.aiia.2022.09.001. [DOI] [Google Scholar]
  28. Miraei Ashtiani, S. H. , Javanmardi S., Jahanbanifard M., Martynenko A., and Verbeek F. J.. 2021. “Detection of Mulberry Ripeness Stages Using Deep Learning Models.” IEEE Access 9: 100380–100394. 10.1109/ACCESS.2021.3096550. [DOI] [Google Scholar]
  29. Mirzabaev, A. , Bezner Kerr R., Hasegawa T., et al. 2023. “Severe Climate Change Risks to Food Security and Nutrition.” Climate Risk Management 39: 100473. 10.1016/j.crm.2022.100473. [DOI] [Google Scholar]
  30. Murphy, M. E. , Boruff B., Callow J. N., and Flower K. C.. 2020. “Detecting Frost Stress in Wheat: A Controlled Environment Hyperspectral Study on Wheat Plant Components and Implications for Multispectral Field Sensing.” Remote Sensing 12, no. 3: 477. 10.3390/rs12030477. [DOI] [Google Scholar]
  31. Perry, E. M. , Nuttall J. G., Wallace A. J., and Fitzgerald G. J.. 2017. “In‐Field Methods for Rapid Detection of Frost Damage in Australian Dryland Wheat During the Reproductive and Grain‐Filling Phase.” Crop and Pasture Science 68, no. 6: 516–526. 10.1071/CP17135. [DOI] [Google Scholar]
  32. Rajwade, Y. A. , Chandel N. S., Chandel A. K., et al. 2024. “Thermal–RGB Imagery and Computer Vision for Water Stress Identification of Okra (Abelmoschus esculentus L.).” Applied Sciences (Switzerland) 14, no. 13: 5623. 10.3390/app14135623. [DOI] [Google Scholar]
  33. Rezaei, M. , Gupta S., Diepeveen D., Laga H., Jones M. G. K., and Sohel F.. 2024. “Barley Disease Recognition Using Deep Neural Networks.” European Journal of Agronomy 161: 127359. 10.1016/j.eja.2024.127359. [DOI] [Google Scholar]
  34. Ruiz, P. 2018. “Understanding and Visualizing DenseNets.” TDS Archive on Medium. October 10. https://medium.com/data‐science/understanding‐and‐visualizing‐densenets‐7f688092391a.
  35. Shankar, S. , and Shikha. 2018. “Chapter 7—Impacts of Climate Change on Agriculture and Food Security.” In Biotechnology for Sustainable Agriculture, edited by Singh R. L. and Mondal S., 207–234. Woodhead Publishing. 10.1016/B978-0-12-812160-3.00007-6. [DOI] [Google Scholar]
  36. Shin, J. , Chang Y. K., Heung B., Nguyen‐Quang T., Price G. W., and Al‐Mallahi A.. 2021. “A Deep Learning Approach for RGB Image‐Based Powdery Mildew Disease Detection on Strawberry Leaves.” Computers and Electronics in Agriculture 183: 106042. 10.1016/j.compag.2021.106042. [DOI] [Google Scholar]
  37. Sokolova, M. , and Lapalme G.. 2009. “A Systematic Analysis of Performance Measures for Classification Tasks.” Information Processing & Management 45, no. 4: 427–437. 10.1016/j.ipm.2009.03.002. [DOI] [Google Scholar]
  38. Sujatha, K. , Reddy T. K., Bhavani N. P. G., Ponmagal R. S., Srividhya V., and Janaki N.. 2023. “UGVs for Agri Spray With AI Assisted Paddy Crop Disease Identification.” Procedia Computer Science 230: 70–81. 10.1016/j.procs.2023.12.062. [DOI] [Google Scholar]
  39. Sun, F. , Yin M., Zheng S., et al. 2025. “FreezeNet: A Lightweight Model for Enhancing Freeze Tolerance Assessment and Genetic Analysis in Wheat.” Plant Phenomics 7, no. 2: 100061. 10.1016/j.plaphe.2025.100061. [DOI] [Google Scholar]
  40. Sunil, G. C. , Khan A., Horvath D., and Sun X.. 2025. “Evaluation of Multispectral Imaging for Freeze Damage Assessment in Strawberries Using AI‐Based Computer Vision Technology.” Smart Agricultural Technology 10: 100851. 10.1016/j.atech.2025.100851. [DOI] [Google Scholar]
  41. Szegedy, C. , Liu W., Jia Y., Sermanet P., Reed S., Anguelov D., Erhan D., Vanhoucke V., and Rabinovich A. 2014. “Going Deeper With Convolutions”. http://arxiv.org/abs/1409.4842.
  42. Tassis, L. M. , and Krohling R. A.. 2022. “Few‐Shot Learning for Biotic Stress Classification of Coffee Leaves.” Artificial Intelligence in Agriculture 6: 55–67. 10.1016/j.aiia.2022.04.001. [DOI] [Google Scholar]
  43. Wu, Y. , Ma Y., Hu X., Ma J., Zhao H., and Ren D.. 2021. “Narrow‐Waveband Spectral Indices for Prediction of Yield Loss in Frost‐Damaged Winter Wheat During Stem Elongation.” European Journal of Agronomy 124: 126240. 10.1016/j.eja.2021.126240. [DOI] [Google Scholar]
  44. Xu, M. , Yoon S., Jeong Y., and Park D. S.. 2022. “Transfer Learning for Versatile Plant Disease Recognition With Limited Data.” Frontiers in Plant Science 13: 1010981. 10.3389/fpls.2022.1010981. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Yuan, W. , and Choi D.. 2021. “UAV‐Based Heating Requirement Determination for Frost Management in Apple Orchard.” Remote Sensing 13: 273. 10.3390/rs13020273. [DOI] [Google Scholar]
  46. Zhang, J. , Dai L., and Cheng F.. 2021. “Identification of Corn Seeds With Different Freezing Damage Degree Based on Hyperspectral Reflectance Imaging and Deep Learning Method.” Food Analytical Methods 14, no. 2: 389–400. 10.1007/s12161-020-01871-8. [DOI] [Google Scholar]
  47. Zhang, L. , Sun H., Rao Z., and Ji H.. 2020. “Hyperspectral Imaging Technology Combined With Deep Forest Model to Identify Frost‐Damaged Rice Seeds.” Spectrochimica Acta—Part A: Molecular and Biomolecular Spectroscopy 229: 117973. 10.1016/j.saa.2019.117973. [DOI] [PubMed] [Google Scholar]
  48. Zhao, Z. , Alzubaidi L., Zhang J., Duan Y., and Gu Y.. 2024. “A Comparison Review of Transfer Learning and Self‐Supervised Learning: Definitions, Applications, Advantages and Limitations.” In Expert Systems With Applications, vol. 242. Elsevier Ltd. 10.1016/j.eswa.2023.122807. [DOI] [Google Scholar]
