Abstract
Food waste is a global challenge, primarily driven by inaccurate prediction of food quality and remaining shelf life. Approximately 30% of food waste occurs at retail and household levels, where traditional assessment methods such as subjective visual inspection or destructive instrumental analysis are inefficient. In this study, we integrated smartphone imaging and deep learning to non-destructively predict avocado firmness and internal quality, aiming to support smarter consumption and distribution decisions. Avocados were selected as a representative model food due to their high market value and high waste rate (∼40%). Over an eight-day storage period at room temperature, a dataset of 1400 avocado images was collected using a smartphone. Firmness was measured using a texture analyzer and served as the ripeness metric and ground truth for model training. To predict ripeness, a convolutional neural network residual regression (CNN ResNetR) model achieved the highest accuracy (R2 = 0.919), outperforming support vector machine regression (R2 = 0.818) and random forest (R2 = 0.860) models. Predicted firmness values were further mapped to recommend remaining shelf life using industrial guidelines. To assess internal quality, state-of-the-art CNN and vision transformer models were developed to classify avocados as fresh or rotten, achieving an accuracy above 84%. Model interpretability was obtained using the local interpretable model-agnostic explanations (LIME) technique, which identified key image features influencing classification. This deep learning-enabled framework offers a rapid, scalable, and non-destructive solution to evaluate avocado ripeness and internal quality, with the potential to reduce food waste and improve decision-making across the supply chain.
Keywords: Food quality, Machine learning, Explainable artificial intelligence, Smartphone imaging, Shelf life
Highlights
• Deep learning-enabled imaging approach was used to reduce food waste.
• Avocado quality was assessed non-destructively by deep learning and smartphone imaging.
• CNN ResNet-18 regression model predicted avocado firmness with an R2 of 0.919.
• CNN and ViT models classified internally fresh and rotten avocados with an accuracy of 84%.
• Explainable AI provided interpretability for deep learning classifiers.
1. Introduction
Food waste has become a global challenge, accounting for approximately 30% of the world's food production (Kibler et al., 2018, Xue et al., 2017). Among all commodities, fruits and vegetables dominate food waste, with over 50% left uneaten annually in the United States (Xue et al., 2017; Gunders, 2012). Most of the food waste occurs during the post-harvest stage (Shafiee-Jood and Cai, 2016). According to the United States Department of Agriculture (USDA) Economic Research Service, 31% of the edible food supply at the retail and consumer levels went uneaten, amounting to around 133 billion pounds of food and a financial loss of $161 billion (Buzby et al., 2014). This level of waste is also associated with about 25% of the land and water resources used for crop and animal production (Shafiee-Jood and Cai, 2016). In response, the USDA and the Environmental Protection Agency (EPA) have set a national goal to reduce food waste by 50% by 2030 (U.S. Department of Agriculture USDA, 2023).
Climacteric fruits, such as apples, avocados, and tomatoes, continue to ripen after harvest, often leading to undesirable food waste when their firmness declines to levels unacceptable to consumers (Payasi and Sanwal, 2010). Notably, overripe avocados account for about 40% of global avocado production, a problem that may worsen as avocado consumption continues to rise (Salazar-López et al., 2020; Sandoval-Contreras et al., 2023). During ripening, Hass avocados transition in peel color from light green to dark purple, and gradually decrease firmness (Cox et al., 2004). While visual inspection of color and surface defects is common, particularly in retail and consumer settings, it is subjective and can lead to premature product disposal. Alternatively, food industries and analytical laboratories implement colorimeters and penetrometers to evaluate avocado ripeness (Mishra et al., 2021, Osuna-García et al., 2011). Yet, these methods require specialized instruments and may damage the fruits, making them unsellable. Together, these challenges highlight the urgent need for a non-destructive, accurate, affordable, and rapid tool to assess food quality and ripeness, thereby minimizing waste across the supply chain.
Recent studies have advanced non-destructive imaging and machine learning techniques for food quality assessment (Ma et al., 2022; Mehdizadeh et al., 2025; Bhole and Kumar, 2020; Xu et al., 2024). For example, an integrated photography imaging system was used to capture the RGB and thermal images of mangoes, followed by grade classification into extra class, class I, and class II categories based on parameters such as mango weight and maturity (Bhole and Kumar, 2020). By integrating with a convolutional neural network (CNN)-based SqueezeNet model, the RGB images and thermal images resulted in classification accuracies of 93.33% and 92.27%, respectively (Bhole and Kumar, 2020). Similarly, visible/near-infrared hyperspectral imaging has been used to predict the shelf life of grapes (Xu et al., 2024). A stacked denoising autoencoder model was used to extract spectral features, followed by a support vector machine (SVM) classifier, which achieved an accuracy of 98.13% in predicting the shelf life of grapes at storage intervals of 1, 3, 5, and 7 days (Xu et al., 2024). While these methods are rapid and non-destructive, they rely on expensive bench-top equipment, making them impractical for on-site use by food retailers and consumers.
Alternatively, smartphones offer a user-friendly and widely accessible tool for collecting images, requiring no additional instruments for quality assessment. For instance, the ripeness level of tomatoes was predicted from smartphone images based on the measured color, texture, and shape features (Goyal et al., 2024). After selecting 63 features, tomato ripeness levels were predicted, achieving an R2 value of 0.73 with SVM, 0.64 with decision tree, and 0.73 with random forest (RF) (Goyal et al., 2024). Very few studies have predicted avocado firmness using smartphones (Cho et al., 2020, 2021). For example, color features in CIELAB color space were manually extracted from smartphone RGB images of avocados (Cho et al., 2020). SVM, k-nearest neighbor, Ridge, and Lasso regression models estimated the firmness values with R2 of 0.92, 0.89, 0.88, and 0.88, respectively (Cho et al., 2020). However, no studies have developed a smartphone-based approach to assess the internal quality of avocados. Avocados may appear similar at the ripe and overripe stages, making it difficult to distinguish them based on surface features. In addition, previous smartphone-based studies primarily relied on manual feature selection and traditional machine learning algorithms, which limited prediction performance through dependence on manually extracted color features. To address this challenge, deep learning approaches can be used to automatically capture a broader range of information, including but not limited to shape, texture, and spatial patterns, potentially enhancing the accuracy and robustness of quality prediction.
Deep learning is a subset of machine learning that uses artificial neural networks with multiple layers to automatically learn complex patterns from large amounts of data. Although deep learning techniques have demonstrated competitive prediction performance, their “black box” nature makes it difficult to interpret the underlying decision-making process (von Eschenbach, 2021). To address this challenge, interpretable models such as gradient class activation mapping (Grad-CAM) and local interpretable model-agnostic explanations (LIME) highlight the weighted pixels or features that contribute most to deep learning model prediction (Cian et al., 2020). A recent study on automated broiler chicken meat sorting applied LIME to interpret the decisions by the CNN InceptionV3 model, using images of chicken meat parts (Hasan et al., 2024). The resulting heatmaps indicated areas with different impact weights on the prediction of whether the chicken meat was fresh or rotten. To our knowledge, no studies have yet applied these explainability techniques to explain the deep learning predictions for fresh produce quality assessment. Understanding the key features driving the model decisions is essential for improving model transparency and trustworthiness in food applications.
This study aimed to develop a non-destructive, portable, and user-friendly method to predict the firmness and internal quality of avocados using smartphone imaging and deep learning. We investigated the relationship between color and firmness changes in avocados during storage. A deep learning model (CNN ResNet18) was applied to predict avocado firmness and remaining shelf life at room temperature, with two traditional machine learning models (SVM and RF) being used as benchmarks. To evaluate internal quality by distinguishing between fresh and rotten avocados, multiple deep learning classifiers were developed based on CNN ResNet and vision transformer (ViT) architectures. To assess the reliability and interpretability of the classification results, LIME, an interpretable artificial intelligence technique, was used to highlight the most influential regions in avocado images. This study demonstrates the potential of integrating smartphone imaging with advanced deep learning models to non-destructively assess avocado quality, offering a practical and scalable solution for reducing postharvest losses and improving consumer satisfaction.
2. Materials and methods
2.1. Avocado selection and storage conditions
Hass avocados were purchased from local grocery stores. About 90% of all avocados consumed globally are the Hass variety (Rendón et al., 2019). Two batches of avocados (n = 70 avocados per batch; total n = 140) were selected and screened by visual inspection to ensure they were free of any surface damage and sunburn. Batch 1 was purchased in February 2024 and Batch 2 in April 2024 to account for potential seasonal variation and to validate the generalization of our deep learning-based smartphone approach. For each batch, the 70 avocados were randomly assigned to seven groups corresponding to storage periods of 1, 2, 3, 4, 5, 6, and 8 days at room temperature, with 10 avocados per storage period. The data and images from both batches were combined into a single dataset for data analysis and deep learning model development. Avocados were placed in plastic bags to prevent potential moisture loss during storage. Storage temperature and relative humidity were monitored daily using a thermo-hygrometer, with mean values of 22.7 ± 0.5°C and 33.3 ± 10.9% RH. The weight of each avocado was recorded daily using an analytical balance. The original weight of the avocados was 185.17 ± 12.91 g (n = 70) for Batch 1 and 173.52 ± 11.57 g (n = 70) for Batch 2. Weight loss over the 8-day storage period was negligible: 1.64 ± 0.49% for Batch 1 and 2.25 ± 1.02% for Batch 2 (Fig. S1).
2.2. Determination of avocado surface color by colorimeter
The surface color of avocados was measured using a colorimeter (LabScan XE, HunterLab, VA, USA) to evaluate the correlation between firmness and surface color during ripening. Color values were recorded in the CIELAB color space using EasyMatch QC software, where L∗ indicates lightness, a∗ indicates the red/green component, and b∗ indicates the yellow/blue component (Jiang et al., 2020). Prior to avocado measurement, the colorimeter was calibrated with standard black and white tiles. Delta E (ΔE) was calculated to quantify the color difference between the avocado and the reference (i.e., standard white tile) according to Eq. (1). To measure the surface color, a sample holding port with a 3-cm diameter opening was used to position the avocados. Measurements were taken at the bottom, center, and top sections of the avocado at 90° intervals to record the potential surface color variation of individual fruit (i.e., a total of 12 sampling points per avocado). The average value of L∗, a∗, b∗, and ΔE from twelve points was calculated to represent the color of each avocado sample.
ΔE = √[(Ls − L∗)² + (as − a∗)² + (bs − b∗)²]   Eq. (1)
where Ls, as, and bs represent the lightness, red/green, and yellow/blue values of the standard white tile, respectively. L∗, a∗, and b∗ represent the lightness, red/green, and yellow/blue values of an individual avocado, respectively.
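As a concrete illustration, the CIE76 color difference of Eq. (1) can be computed directly from two CIELAB triplets. The function name and example values below are ours, not from the study:

```python
import math

def delta_e(lab_sample, lab_reference):
    """CIE76 color difference (Eq. 1) between a sample and a reference
    (here, the standard white tile). Each argument is an (L*, a*, b*) triplet."""
    dL = lab_reference[0] - lab_sample[0]
    da = lab_reference[1] - lab_sample[1]
    db = lab_reference[2] - lab_sample[2]
    return math.sqrt(dL**2 + da**2 + db**2)
```

Because each difference is squared, the ordering of sample and reference does not change the result.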
2.3. Firmness measurements of avocados
Firmness is a standard metric for determining avocado ripeness and predicting remaining shelf life, according to the United States Department of Agriculture and the California Avocado Commission (USDA, 2000; California Avocado Commission (CAC), 2020). The firmness of avocados was measured using a texture analyzer (TA.XTplus, Texture Technologies, MA, USA) and recorded with Exponent software. A 50-kg load cell and a Magness Taylor probe (length: 7 cm, diameter: 7.94 mm, and domed tip) were used to apply force to the avocados. Prior to measurement, the texture analyzer was calibrated by placing a 2-kg weight on the calibration platform for force calibration, and the height was set to 10 cm so that avocados could be properly placed between the probe and the measurement platform. The maximum force was reported as the firmness value (Sierra et al., 2019). The pre-test, test, and post-test speeds were 10, 3, and 10 mm/s, respectively, and the test was terminated when the probe penetrated the avocado exocarp and travelled an additional 5 mm. For each avocado, firmness was measured at four points along the equator at 90° intervals, and the average of these measurements was used to represent the sample.
2.4. Statistical analysis of avocado surface color and firmness
Color and firmness data were analyzed using SPSS v. 29.0.1.1 (IBM SPSS Statistics, NY, USA). A logistic function was fitted to model the relationship between firmness and each individual color parameter, namely L∗, a∗, b∗, and ΔE. The model performance was evaluated using the coefficient of determination (R2). Spearman's correlation was performed to evaluate the non-linear relationship between avocado surface color and firmness. A P-value < 0.05 was considered statistically significant.
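The Spearman analysis described above can be reproduced outside SPSS with `scipy.stats.spearmanr`. The firmness and a∗ values below are illustrative placeholders, not measured data:

```python
from scipy.stats import spearmanr

# Hypothetical paired observations: firmness (N) falls while a* rises during ripening.
firmness = [113.0, 85.2, 60.1, 41.5, 22.3, 9.8]
a_star = [-8.1, -5.4, -1.2, 2.3, 5.6, 7.9]

# rho is the rank correlation; p_value tests the null of no monotonic association.
rho, p_value = spearmanr(firmness, a_star)
```

With perfectly monotonic toy data, rho is exactly -1; the real dataset (Table 1) yielded rho = -0.605 for a∗.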
2.5. Collecting avocado image dataset using a smartphone camera
Images of the avocados were captured using an iPhone 14 Pro Max. The camera configuration and setup details are summarized in Table S1. To ensure consistent imaging conditions, a photo box (12 × 16 × 12 inches) was used to control the light temperature, light intensity, and the distance and angle between the avocados and the smartphone camera (Fig. S2). Two linear LED strips (1/2 × 10 inches) were mounted on the ceiling of the photo box to provide controlled illumination at a color temperature of 5,500K and a total light intensity of 10 W. Each avocado was placed at the center of the photo box, where the smartphone camera was positioned directly above the fruit, outside the photo box, at a distance of 30 cm. A sheet of black-printed paper was placed beneath the avocados to minimize light reflection from the background. For each avocado, ten images were taken, rotating approximately 30° between each shot to capture surface variation within the same fruit and to evaluate whether deep learning models could accurately predict ripeness based on image data.
2.6. Avocado image preprocessing
Data preprocessing is essential for improving the predictive performance of machine learning models (Cui et al., 2024). The original images with a resolution of 6048 × 8064 pixels were first center cropped to a square region to exclude non-avocado regions and then resized to 256 × 256 pixels. The black backgrounds within the avocado images were removed and replaced with a white background using Adobe Photoshop (Adobe Inc., CA, USA) to eliminate potential background effects on feature extraction by deep learning algorithms.
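The center crop and resize can be sketched with Pillow; the Photoshop background replacement used in the study is omitted here, and `preprocess` is a hypothetical helper name:

```python
from PIL import Image

def preprocess(img, size=256):
    """Center-crop an image to a square, then resize to size x size pixels
    (a sketch of the cropping/resizing step in Section 2.6)."""
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side))  # drop non-avocado margins
    return img.resize((size, size))
```

For a portrait photo, `min(w, h)` keeps the full width and trims equal strips from top and bottom.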
2.7. Training and testing machine learning regression models to predict avocado firmness and remaining shelf life based on smartphone images
The smartphone image dataset consisted of 1400 preprocessed avocado images at various ripeness levels. Each image was annotated with its corresponding firmness value, which served as the ground truth for machine learning model training. All programming was conducted using Python 3.12, with key libraries including PyTorch, torchvision, NumPy, pandas, scikit-learn, OpenCV, matplotlib, and wandb. The dataset was randomly shuffled and split into training, validation, and test sets in a ratio of 8:1:1, yielding 1120 images for training and 140 images each for validation and testing. To improve model generalization and incorporate additional image variation, data augmentation techniques, including rotation and translation, were applied to the training set, doubling its size to 2240 images.
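The 8:1:1 shuffled split can be sketched in plain Python; the seed is an arbitrary assumption added for reproducibility:

```python
import random

def split_indices(n, seed=0):
    """Shuffle n sample indices and split them 8:1:1 into train/val/test."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # fixed seed so the split is reproducible
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

Applied to the 1400-image dataset, this yields the 1120/140/140 partition described above; augmentation is then applied only to the training indices.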
Multiple machine learning regression algorithms were selected to represent different methodological approaches and complexities: (i) convolutional neural network ResNet-18 regression (CNN ResNet-18R) is a deep learning, neural network-based method; (ii) random forest regression (RFR) is a tree-based ensemble method, and (iii) support vector machine regression (SVMR) is a margin-based method. RFR and SVMR were used as benchmark models to assess whether deep learning models could achieve superior prediction performance. This comparison considers the trade-off between prediction accuracy and computational cost, as more complex model architectures typically demand greater computational resources.
To train CNN ResNet-18R model, we utilized its pretrained feature extraction layers and replaced the final classification layer with a regression head to predict continuous values. Specifically, the number of output nodes in the pre-trained ResNet-18 model was set to 1 (corresponding to predicted firmness), and no activation function was applied in the output layer. The mean squared error (MSE) loss function was used to quantify prediction error, and the Adam optimizer was employed to update model parameters.
For SVMR and RFR model training, ResNet-18 was used as a feature extractor to generate a 512 × 1-dimensional feature vector from each resized image (i.e., 256 × 256 pixels). The ResNet-18 model was pretrained on the ImageNet database, and used solely for inference, without any additional training or fine-tuning. After feature extraction, hyperparameters of the SVMR and RFR models were tuned using GridSearchCV, which exhaustively explored the specified hyperparameter space and evaluated performance via 5-fold cross-validation with the training and validation sets.
Model performance was evaluated using root mean square error (RMSE) and R2. Predictions were generated on the test set via forward propagation, and the predicted and ground-truth firmness values were collected to compute RMSE and R2. Based on these evaluation metrics, the key optimal hyperparameter combinations for CNN ResNet-18R, SVMR, and RFR models are reported in Table S2. To assess whether the CNN ResNet-18R model was underfitting or overfitting, loss function curves were generated by plotting the model loss (i.e., MSE) against the number of epochs.
Based on the fact sheet on remaining shelf life at room temperature published by the California Avocado Commission (California Avocado Commission (CAC), 2020), the predicted firmness values obtained from machine learning regression models were converted into estimates of remaining shelf life. Specifically, the fact sheet defines five stages of ripeness, each associated with a recommended remaining shelf life and characteristic firmness:
• Stage 1 (firm): firmness ≥111 N (or ≥25 lbs of pressure; 1 lb of pressure = 4.448 N) (Lorenzini et al., 2011), indicating unripe avocados.
• Stage 2 (pre-conditioned): firmness between 67 and 111 N (15–25 lbs of pressure); avocados will be ready to eat in about 3 days at room temperature.
• Stage 3 (breaking): firmness between 44 and 67 N (10–15 lbs of pressure); avocados are pre-ripened and ready to eat in about 2 days at room temperature.
• Stage 4 (firm ripe): firmness between 22 and 44 N (5–10 lbs of pressure); avocados are suitable for slicing and will become fully ripe the next day at room temperature.
• Stage 5 (ripe): firmness ≤22 N (≤5 lbs of pressure); avocados are fully ripe, suitable for all uses, and will remain in this condition for 2–3 days at room temperature.
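The firmness-to-shelf-life conversion amounts to a threshold lookup over the five CAC stages described above; the function name and wording of the notes below are ours:

```python
def ripeness_stage(firmness_n):
    """Map a predicted firmness value (in newtons) to the CAC ripeness stage
    and its recommended remaining shelf life at room temperature."""
    if firmness_n >= 111:
        return 1, "firm (unripe)"
    if firmness_n >= 67:
        return 2, "pre-conditioned: ready to eat in ~3 days"
    if firmness_n >= 44:
        return 3, "breaking: ready to eat in ~2 days"
    if firmness_n >= 22:
        return 4, "firm ripe: fully ripe the next day"
    return 5, "ripe: suitable for all uses, holds 2-3 days"
```

Each firmness prediction from the regression models can thus be translated into a consumer-facing recommendation.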
2.8. Deep learning classification models for predicting avocado internal quality based on smartphone images
To predict avocado internal quality, two types of deep learning classification models were applied, including vision transformers (ViTs) and CNN models. The dataset (n = 1400) for internal quality classification was the same as that used for the regression task in Section 2.7, but with different labels. After capturing images of avocados with the smartphone camera, the fruits were cut open to document their internal quality condition. That is, each image was labeled as either fresh (class: 0) or rotten (class: 1) for training binary classifiers. This dataset was split into training, validation, and test sets in a ratio of 8:1:1. Data augmentation techniques, including rotation and translation, were applied to the training set, doubling its size to 2240 images.
The ViT algorithm was proposed for natural image classification tasks using large-scale image datasets, such as ImageNet and CIFAR (Dosovitskiy et al., 2020). Its architecture processes an input image as a sequence of patches, applies positional embeddings, and passes the sequence through a transformer encoder composed of multi-head attention and multi-layer perceptron blocks, culminating in a classification head. In this study, the ViT-Large model was pre-trained on the ImageNet-21k database, which contains millions of images and over 21,000 classes of objects. The preprocessed avocado images were resized from 256 × 256 pixels to 224 × 224 pixels to meet the input dimension requirements of the ViT-Large model. Model depth was set to 24, the embedding dimension to 1024, and the number of attention heads to 16. We compared two patch size settings, 16 × 16 (ViT-patch16) and 32 × 32 (ViT-patch32), to determine the optimal configuration. Following previous protocols (Feurer and Hutter, 2019), we trained the model for 500 epochs with a batch size of 64 and an SGD optimizer. The details of optimal hyperparameters are listed in Table S2.
Additionally, CNN ResNet-18, CNN ResNet-50, and CNN ResNet-152 models were developed and compared to identify the best-performing model. The numbers after ResNet indicate the number of layers in each architecture, which reflect their depth and capacity to learn complex features (He et al., 2016). Evaluating models with different depths helps balance predictive performance against computational cost and the risk of overfitting. During hyperparameter fine-tuning for CNN training, the optimal settings were identified as follows: batch size of 64, 500 training epochs, Adam optimizer, and StepLR learning rate scheduler. For StepLR, the step size was set to 5 and gamma to 0.5 to decay the learning rate during training. Table S2 shows the summary of optimal hyperparameters.
Model performance was evaluated using several metrics. The receiver operating characteristic (ROC) curve and the area under the curve (AUC) were generated to assess classification performance across all possible classification thresholds. Additionally, accuracy, recall, precision, and F1-score were computed on the test set according to Eqs. (2A)–(2D). These metrics are based on the predicted class labels obtained via ŷ = arg max(softmax(z)), where z denotes the model's output logits over K classes. A confusion matrix was also constructed using sklearn.metrics.confusion_matrix. Based on these metrics, the best-performing ViT and CNN ResNet models were selected.
Accuracy = (TP + TN) / (TP + TN + FP + FN)   Eq. (2A)
Recall = TP / (TP + FN)   Eq. (2B)
Precision = TP / (TP + FP)   Eq. (2C)
F1-score = 2 × (Precision × Recall) / (Precision + Recall)   Eq. (2D)
Where TP, TN, FP, and FN indicate true positive, true negative, false positive, and false negative, respectively.
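Eqs. (2A)–(2D) can be computed directly from the confusion-matrix counts; the counts in the usage note below are hypothetical:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, recall, precision, and F1-score (Eqs. 2A-2D) from
    true/false positive/negative counts of a binary classifier."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f1
```

For example, counts of TP = 40, TN = 45, FP = 5, FN = 10 give an accuracy of 0.85, matching sklearn's `accuracy_score` on the same predictions.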
2.9. Model interpretation using local interpretable model-agnostic explanations (LIME)
In the present study, LIME was used to explain how the image classification models made their decisions about internal avocado quality. LIME is an interpretability method designed to help users understand the predictions of complex machine learning models, such as deep learning classifiers used in sentiment analysis (Ribeiro et al., 2016). LIME divides the input image into superpixels (groups of adjacent pixels with similar properties), perturbs these superpixels to generate altered versions of the original image, and assigns a weight to each superpixel reflecting its contribution to the classification. Positive weights indicate regions that support the predicted class, while negative weights indicate regions that contradict it. We highlighted the twenty superpixels with the highest positive weights in green and those with the highest negative weights in red, visualizing the most influential regions in the model's decision. The test set was analyzed with LIME to verify that the classifications from the ViT and CNN ResNet models were reasonable and aligned with visible image features.
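LIME's perturb-and-fit principle can be illustrated on a toy problem: binary masks switch a handful of "superpixels" on or off, a stand-in classifier scores each perturbation, and a local linear surrogate recovers each region's weight. Everything below (the classifier, the superpixel count, the sample size) is a didactic assumption, not the study's pipeline, which applied LIME to real avocado images:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_superpixels, n_perturbations = 4, 200

def classifier(masks):
    """Stand-in model: 'rotten' score driven almost entirely by superpixel 2,
    so LIME should assign that region the largest positive weight."""
    return 0.9 * masks[:, 2] + 0.05 * masks[:, 0]

# Random on/off perturbations of the superpixels (LIME's interpretable space).
masks = rng.integers(0, 2, size=(n_perturbations, n_superpixels))
scores = classifier(masks)

# Fit a local linear surrogate; its coefficients are the superpixel weights.
surrogate = Ridge(alpha=1.0).fit(masks, scores)
weights = surrogate.coef_
```

In the real workflow, positive-weight superpixels would be tinted green and negative ones red on the avocado image.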
3. Results and discussion
3.1. Correlation of avocado surface color and firmness
Firmness is a standard quality attribute used by regulatory authorities to determine the ripeness of avocados (USDA, 2000), whereas surface color is a more subjective indicator commonly relied upon by consumers and retailers. In this study, we first investigated the relationship between avocado surface color and firmness. To achieve this goal, Hass avocados were stored at room temperature to simulate typical conditions found on grocery store shelves and in households, where food waste and loss occur most frequently in the United States (Buzby et al., 2011). Fig. 1 illustrates the changes in avocado color and firmness over the storage period. During storage, the avocado surface color changed from green to brown (Fig. 1A). Statistically, L∗ and b∗ values decreased, while a∗ and ΔE increased (Fig. 1B). These color changes are attributed to alterations in skin pigment concentrations, such as anthocyanin and chlorophyll (Cox et al., 2004). In particular, the concentration of cyanidin-3-glucoside increased during ripening, whereas chlorophyll content decreased. Firmness exhibited a sharp decline on storage days 1 and 2, followed by a slower rate of decline throughout the remaining storage period (Fig. 1C). The relationship between firmness and color followed a logistic function, with the a∗ value explaining firmness best (R2 = 0.772), outperforming b∗ (R2 = 0.746), L∗ (R2 = 0.694), and ΔE (R2 = 0.606).
Fig. 1.
Dynamic changes of avocado properties during storage at room temperature. (A) Avocado images of different storage dates (1–6 and 8 days). (B) Change of lightness (L∗), red/green component (a∗), yellow/blue component (b∗), and delta E (ΔE) during storage (1–6 and 8 days). (C) Change of firmness during storage (1–6 and 8 days). Logistic fitting between firmness and surface color indexes, including (D) L∗, (E) a∗, (F) b∗, and (G) ΔE values. The logistic fitting was evaluated by the coefficient of determination (R2).
Table 1 shows the Spearman's coefficients (ρ) and P-values between color and firmness for each avocado sample (n = 140). Overall, L∗ and b∗ values were positively correlated with firmness, with ρ values exceeding 0.74. In contrast, a∗ and ΔE values showed negative correlations with firmness, with ρ values of -0.605 and -0.733, respectively. In other words, less firm avocados had lower lightness (L∗), lower b∗ (indicating a less yellowish color), higher a∗ (indicating a more reddish color), and higher ΔE (indicating a greater color change), and vice versa. All P-values were below 0.0001, indicating statistically significant relationships between color and firmness. A previous study revealed a similar relationship between firmness and color values, reporting correlation coefficients of 0.69, -0.83, and 0.74 for firmness versus L∗, a∗, and b∗ values, respectively (Cho et al., 2021).
Table 1.
Non-parametric Spearman correlation between each CIELAB color parameter and firmness of avocado (n = 140).
| Firmness vs | L∗ | a∗ | b∗ | ΔE |
|---|---|---|---|---|
| Spearman's ρ | 0.741 | -0.605 | 0.786 | -0.733 |
| P-value | <0.0001 | <0.0001 | <0.0001 | <0.0001 |
Table 1 indicates that the color change of avocados exhibits a non-linear relationship with firmness, as indicated by the non-parametric Spearman's correlation coefficient (ρ). However, logistic fitting was insufficient to accurately predict firmness from the L∗, a∗, b∗, and ΔE values, given the relatively low R2 values (0.606–0.772) (Fig. 1D–G). In a previous study, a logistic model based on the Arrhenius equation successfully predicted the shelf life of avocados using CIELAB color parameters and firmness, achieving an R2 greater than 0.95 (Sierra et al., 2019). Despite its accuracy, that approach required specialized equipment, such as a colorimeter, which limits accessibility and is time-consuming to implement. Taken together, avocado firmness can potentially be predicted from surface color. To simplify the assessment process, we developed a smartphone-based imaging approach to capture surface color and use this information to predict firmness. This approach is user-friendly, non-destructive, and portable, making it suitable for household use and industrial on-site applications.
3.2. Prediction of avocado firmness and remaining shelf life using machine learning regression models
Three machine learning regression models were trained on the preprocessed avocado images covering a wide firmness range (4.79–113.63 N) (Fig. 1C). The models included CNN ResNet-18R (He et al., 2016), SVMR (Brereton and Lloyd, 2010), and RFR (Rodriguez et al., 2015), which differ in architecture, methodological principles, complexity, and computational cost. Specifically, CNN ResNet-18R was optimized iteratively by minimizing the MSE loss using gradient-based methods. As shown in Fig. S3, the trained CNN ResNet-18R model exhibited no signs of underfitting or overfitting after 100 epochs. For SVMR and RFR, there was no iterative loss minimization, and 5-fold cross-validation was applied to assess and mitigate potential overfitting or underfitting. SVMR training solves a convex quadratic programming problem to find the optimal solution in a single step, while RFR training constructs an ensemble of decision trees independently.
Fig. 2 shows the scatter plots of observed firmness, measured using a standard texture analyzer, versus predicted firmness obtained from smartphone imaging combined with machine learning regression models. The best-performing model was CNN ResNet-18R, achieving the highest R2 value of 0.919 and the lowest RMSE of 5.951 (Fig. 2C). The prediction performance of SVMR and RFR was slightly lower but still acceptable, with R2 values of 0.818 and 0.860, respectively (Fig. 2A–B). These findings are comparable with previous research on smartphone images and machine learning. For example, Cho and coauthors extracted CIELAB and YUV color values from smartphone images of avocados to estimate their ripeness (Cho et al., 2021). The artificial neural network model, with an R2 of 0.937, outperformed the SVM model (R2 = 0.909) (Cho et al., 2021). In the present study, we applied CNN-based deep learning models capable of extracting not only color information but also spatial patterns and texture features from avocado images. By incorporating multiple factors beyond color alone, these models can generate more comprehensive predictions of firmness. The integration of diverse image features provides an advantage over traditional color-based prediction methods, enhancing the feasibility of real-world applications.
Fig. 2.
Scatter plot of true and predicted firmness values of the test dataset using machine learning models. The remaining shelf life is marked in a different color code (from green to brown) according to California Avocado Commission guidelines. (A) Support vector machine regression. (B) Random forest regression. (C) Convolutional neural network ResNet18 regression. The performance of models was evaluated by the coefficient of determination (R2) and root mean square error (RMSE) on the test dataset (n = 140). (D) Industrial instruction of avocado remaining shelf life based on the firmness value.
To reduce food waste, one strategy is to inform food industries and consumers about the remaining shelf life so they can plan distribution and consumption wisely. While firmness values are informative for regulatory or analytical professionals, they may not provide direct, easy-to-understand information for consumers. Hence, we aligned the firmness predicted by machine learning-enabled smartphone imaging with the recommended remaining shelf life based on publicly available data from the California Avocado Commission (Fig. 2D). All models were able to provide the recommended remaining shelf life for underripe, pre-conditioned, breaking, firm ripe, and ripe avocados (Fig. 2A–C, color codes).
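Translating a predicted firmness value into a shelf-life recommendation amounts to a threshold lookup. The breakpoints below are placeholders only; the actual firmness ranges for each stage come from the California Avocado Commission chart (Fig. 2D) and are not reproduced here.

```python
# Hypothetical mapping from predicted firmness (N) to ripeness stage.
# The numeric thresholds are placeholders, NOT the CAC guideline values.
STAGES = [
    (80.0, "underripe"),
    (60.0, "pre-conditioned"),
    (40.0, "breaking"),
    (20.0, "firm ripe"),
    (0.0,  "ripe"),
]

def ripeness_stage(firmness_n: float) -> str:
    """Return the first stage whose lower firmness bound is met."""
    for lower_bound, stage in STAGES:
        if firmness_n >= lower_bound:
            return stage
    return "ripe"

print(ripeness_stage(95.0))  # "underripe" under the placeholder thresholds
```

Each stage can then be paired with the commission's recommended remaining shelf-life window and a color code, as in Fig. 2A–C.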
Several studies have sought to classify the ripeness stage or evaluate fruit firmness using other imaging techniques (e.g., hyperspectral imaging) and acoustic techniques (e.g., tapping and audio signal processing). For example, Han and coauthors predicted the days to ripen for Hass and Shepard avocados using hyperspectral imaging combined with CNN-based regression and classification models (Han et al., 2023). For Hass avocados, they achieved an R2 value of up to 0.77 and a classification accuracy of 67.28% across 15 ripeness classes, equivalent to storage periods from 2 to 16 days. In a follow-up study, hyperspectral imaging was combined with a spectral-spatial residual network to classify the remaining days to ripen (from 0 to 11 days) for Hass avocados; however, the classification accuracy remained at 51.43% (Davur et al., 2023). In another study, acoustic-based techniques were used to evaluate the condition of avocados: four ripeness stages were classified using spectrograms derived from the acoustic signal produced by a tapping machine, and CNN models using different spectrogram coefficients as input features achieved classification accuracies ranging from 62% to 90% (Becerra-Sanchez et al., 2024). Given that both hyperspectral imaging and acoustic-based methods require laboratory equipment and specialized training to operate, smartphone imaging presents a more attractive alternative for end users due to its accessibility, ease of use, and competitive performance.
3.3. Classification of avocado internal quality using deep learning
Furthermore, we classified the internal quality of ripe avocados (i.e., fresh or rotten) using smartphone imaging and deep learning classification models. During postharvest handling, avocados are exposed to various mechanical and environmental stresses that can lead to bruising and decay. Mechanical forces, such as dropping, compression, and vibration during harvesting, storage, transportation, and handling, can cause internal tissue damage and bruising. These damages may not be externally visible at the time of injury but manifest as internal browning during storage. Prolonged storage or delays at distribution and retail stages may cause the fruit to exceed its optimal ripeness window, resulting in softening and spoilage. Additionally, improper handling at the point of sale, such as repeated squeezing by consumers, can exacerbate internal bruising and further compromise fruit quality and shelf life. Distinguishing fresh, ripe avocados from bruised or spoiled ones based on surface color alone remains challenging, as their external appearance can be similar and subject to individual visual perception (Fig. 3A). To provide a more reliable decision-making tool, we selected CNN ResNet and ViT architectures because they represent two state-of-the-art approaches for image classification. CNN ResNet leverages deep convolutional layers with residual connections to effectively capture hierarchical spatial features (Fig. 3B), while ViT applies self-attention mechanisms to model global relationships in the image (Fig. 3C) (Dosovitskiy et al., 2020; He et al., 2016). To balance model complexity, learning capacity, and overfitting risk, we evaluated CNN ResNet architectures with 18, 50, and 152 layers and ViT models with patch sizes of 16 and 32 pixels.
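The patch-based tokenization that distinguishes ViT from CNNs can be illustrated with a single reshape: a 224 × 224 RGB image split into 16 × 16 patches (as the ViT-patch16 naming suggests) yields 196 tokens, over which self-attention then models global relationships. This is a dimensional sketch under assumed input sizes, not the study's code.

```python
import numpy as np

image = np.zeros((224, 224, 3))   # assumed ViT input resolution
p = 16                            # patch size, as in ViT-patch16
n = 224 // p                      # patches per side -> 14

# Split into non-overlapping patches, then flatten each patch to a token.
patches = image.reshape(n, p, n, p, 3).swapaxes(1, 2).reshape(n * n, p * p * 3)
print(patches.shape)              # (196, 768): 196 tokens of length 768
```

Each flattened patch is then linearly projected to an embedding, and self-attention relates every token to every other token, which is why ViT can capture image-wide context in a single layer.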
Fig. 3.
Performance evaluation of deep learning classification models using the avocado image test dataset (n = 140). This study developed and compared five different deep learning classification models, including a convolutional neural network (CNN) ResNet-18, CNN ResNet-50, CNN ResNet-152, vision transformer 16 patches (ViT-patch16), and ViT-patch32. (A) Representative images of fresh and rotten avocados, shown whole and cut. (B) Architecture of CNN ResNet-18. (C) Architecture of ViT-patch16 model. (D) Receiver operating characteristic (ROC) curves of classification models. (E) Area under the curve (AUC) values of classification models.
Fig. 3D–E shows the ROC curves and AUC values of different deep learning classification models on the test set (n = 140). In Fig. 3D, the true positive rate (sensitivity) and the false positive rate (1 - specificity) are plotted across all possible classification thresholds. A better model is visually identified by an ROC curve that rises more steeply toward the top-left corner, indicating higher sensitivity at lower false positive rates. All models performed well, with ROC curves showing a steep rise (Fig. 3D). To quantify model performance, AUC values were calculated. A larger AUC value reflects better overall classification ability. All models achieved AUC values greater than 0.86 (Fig. 3E). The ViT-patch16 model had the highest AUC value of 0.925. Among the CNN ResNet models, the deeper ResNet-152 outperformed the shallower versions, with AUC values of 0.915, 0.890, and 0.865 for ResNet-152, ResNet-50, and ResNet-18, respectively. All models were confirmed to be free from overfitting or underfitting based on the training and validation loss function curves (Fig. S4).
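The threshold-sweeping ROC/AUC evaluation described above can be sketched with scikit-learn on mock scores (not the study's predictions): `roc_curve` sweeps all classification thresholds, and `roc_auc_score` integrates the resulting curve.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Mock data: 1 = rotten (positive class), 0 = fresh; scores are the model's
# predicted probability of "rotten". These are NOT the paper's predictions.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=140)                     # mock labels, n = 140
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 140), 0, 1)

fpr, tpr, thresholds = roc_curve(y_true, y_score)         # all thresholds
auc = roc_auc_score(y_true, y_score)                      # area under the curve
print(f"AUC = {auc:.3f}")                                 # 0.5 = chance, 1.0 = perfect
```

Plotting `tpr` against `fpr` reproduces a curve like those in Fig. 3D; a curve hugging the top-left corner corresponds to an AUC near 1.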
To further evaluate the classification performance, confusion matrices were constructed for each model to show the numbers of true positives, false positives, true negatives, and false negatives on the test set (n = 140) (Fig. 4). Among the CNN ResNet models, increasing the number of model layers resulted in fewer rotten avocados being misclassified as fresh (Fig. 4A–C). For the ViT models, the ViT-patch32 model showed fewer fresh avocados misclassified as rotten but more rotten avocados misclassified as fresh compared to the ViT-patch16 model (Fig. 4D–E). The highest accuracy (84.3%) was achieved by the CNN ResNet-50 and CNN ResNet-152 models, followed by the ViT-patch16 (83.6%), ViT-patch32 (82.1%), and CNN ResNet-18 (76.4%) (Fig. 4F). For all models except CNN ResNet-18, precision, recall, and F1-scores exceeded 81%, ranging from 81.6% to 88.6% (Fig. 4F). Acceptable performance was also obtained by cross-validation of deep learning models, using one batch as training set and the other as test set (Table S3). Higher precision reflects fewer false positives (i.e., fewer fresh avocados mistakenly predicted as rotten), higher recall indicates fewer false negatives (i.e., fewer rotten avocados mistakenly predicted as fresh), and a higher F1-score represents a good balance between false positives and false negatives. Minimizing false positives helps prevent unnecessary food waste by avoiding the discard of still-edible avocados, while minimizing false negatives helps reduce potential food safety risks for consumers.
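The metrics reported in Fig. 4F follow directly from the confusion-matrix counts. The counts below are hypothetical, chosen only to show the arithmetic on a test set of 140 images.

```python
# Hypothetical confusion-matrix counts (positive class = rotten), NOT Fig. 4's.
tp, fp, fn, tn = 60, 10, 12, 58   # sums to 140, matching the test-set size

accuracy  = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)        # fewer false positives -> fewer edible fruits discarded
recall    = tp / (tp + fn)        # fewer false negatives -> fewer rotten fruits sold
f1        = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```

The comments mirror the trade-off discussed above: precision guards against discarding edible avocados, while recall guards against passing rotten ones to consumers.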
Fig. 4.
Performance of deep learning classification models in predicting avocado internal quality (fresh or rotten). The avocado image test dataset (n = 140) was used to evaluate the model's performance. The evaluation metrics included confusion matrix, accuracy, precision, recall, and F1-score. (A) Confusion matrix of the convolutional neural network (CNN) ResNet-18. (B) Confusion matrix of CNN ResNet-50. (C) Confusion matrix of CNN ResNet-152. (D) Confusion matrix of vision transformer 16 patches (ViT-patch16). (E) Confusion matrix of ViT-patch32. (F) Radar plot for displaying the accuracy, precision, recall, and F1-score of the deep learning classification models.
Previous studies have shown that the self-attention mechanism of ViT often yields better performance than CNN models in image classification tasks (Alshammari et al., 2022). Moreover, model performance improves as the dataset size increases. In a study on butterfly image classification, researchers showed that doubling the dataset from 10,000 to 20,000 images improved ViT accuracy from 45.57% to 75.46%, whereas CNN accuracy increased from 68.99% to 75.71% (Lu et al., 2021). Given the relatively small size of our avocado image dataset (n = 1400), it is reasonable that the CNN ResNet models slightly outperformed the ViT models in our study (Fig. 4F). Collecting more avocado images in future work may further improve the accuracy of the deep learning models.
Laboratory bench-top equipment was previously employed to assess internal avocado quality. For example, Fourier transform near-infrared spectroscopy (FT-NIRS) was used to detect bruises in avocado flesh (Wedding et al., 2019). Principal component-linear discriminant analysis achieved a classification accuracy of 73.3% between bruised and unbruised avocados, whereas partial least squares discriminant analysis reached classification accuracies of 53.2–87.7% (Wedding et al., 2019). Additionally, X-ray imaging combined with a UNet++-based semantic segmentation model achieved an average accuracy of 98% in identifying internally damaged avocados (Matsui et al., 2023). However, such expensive bench-top equipment is impractical for use in retail environments and by consumers. In contrast, our study developed a non-destructive, portable, and user-friendly technique using smartphone cameras, achieving competitive performance with over 84% accuracy without requiring specialized workforce training. To further enhance the generalization of this AI-based imaging approach, future studies will focus on training models that account for real-world variability, including differences in lighting, background, and imaging devices. Techniques such as object detection could be applied to isolate target fruits from complex backgrounds, while domain adaptation strategies could leverage images collected under controlled imaging conditions (source domain) to improve performance on smartphone images captured under real-life conditions (target domain) (Thota et al., 2020). Transferring this method to other fruits and vegetables is also critical to broaden its deployment.
3.4. Interpretation of deep learning models for avocado internal quality classification using LIME
It remains a challenge to understand how deep learning models reach their decisions (Shwartz-Ziv and Tishby, 2017). Nevertheless, it is necessary to reveal which features or regions the models focus on. For example, in cancer classification using tissue images, deep learning models were shown to rely on artifacts, such as marks indicating a cancer patient, rather than the tissue itself (Castelvecchi, 2016). To address this “black box” issue in computer vision tasks, explanation tools such as LIME and Grad-CAM have been developed. In this study, we used LIME because it is model-agnostic, meaning it does not depend on the internal architecture of the model and is therefore compatible with both CNN and ViT architectures. LIME perturbs parts of the input (e.g., superpixels of the images) and observes the resulting changes in the model's output to identify the regions that most influence the prediction. In contrast, Grad-CAM relies on the spatial feature maps produced by convolutional layers in CNNs (Selvaraju et al., 2017).
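The perturbation idea behind LIME can be sketched from scratch in NumPy. As simplifying assumptions, superpixels are approximated by a coarse grid (real LIME typically uses an image segmentation such as quickshift), the classifier is a stand-in function, and the proximity-kernel sample weighting of the production `lime` package is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def mock_model(images):
    # Stand-in classifier: "rotten" score grows with mean darkness of the image.
    return 1.0 - images.mean(axis=(1, 2))

def lime_weights(image, n_segments=16, n_samples=200):
    h, w = image.shape
    side = int(np.sqrt(n_segments))
    # Coarse grid stands in for superpixel segmentation: label each pixel 0..15.
    seg = (np.arange(h)[:, None] * side // h) * side + (np.arange(w)[None, :] * side // w)
    # Random on/off patterns over segments; "off" segments are grayed out.
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    perturbed = np.stack([
        np.where(np.isin(seg, np.flatnonzero(m)), image, 0.5)
        for m in masks
    ])
    scores = mock_model(perturbed)
    # Linear surrogate: least-squares fit of model scores on the binary masks.
    coef, *_ = np.linalg.lstsq(
        np.column_stack([masks, np.ones(n_samples)]), scores, rcond=None)
    return coef[:n_segments]       # one importance weight per superpixel

image = rng.random((32, 32))       # dummy grayscale "avocado" image
w = lime_weights(image)
print(w.round(3))                  # positive weight -> segment pushes toward "rotten"
```

Coloring each superpixel by the sign and magnitude of its weight yields explanation maps analogous to the green/red overlays in Fig. 5.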
Fig. 5 presents the LIME-generated explanation maps. The superpixel maps illustrate which regions of the images the CNN ResNet and ViT models used to classify internal avocado freshness on the test dataset. As shown in Fig. 5, the yellow lines mark the boundaries of superpixels, and the color of a superpixel indicates an area carrying greater weight in the classification. Green and red areas indicate the regions most influential for predicting fresh and rotten avocados, respectively, while non-colored areas had less substantial influence. The LIME maps highlighted predominantly green areas in avocados classified as fresh by both ground truth and prediction, whereas red areas appeared in avocados labeled as rotten in the ground truth and correctly predicted as rotten by the deep learning models. In contrast, avocados misclassified by the models showed a mix of green and red areas, indicating conflicting feature importance between the ground truth and prediction (Fig. 5). Notably, the ViT-patch16 model generally identified larger areas of the avocados than the CNN ResNet-18 model. This difference might be due to the ability of the ViT model to attend to global relationships across the image, whereas CNN ResNet models extract local features through their convolutional structure. The importance values of superpixels per class are shown in Fig. S5.
Fig. 5.
Visualization of classification results for the test dataset, generated and interpreted by the local interpretable model-agnostic explanations (LIME) technique. Yellow lines are the boundaries of superpixels. The superpixels in green indicate a critical region for classification as positive (fresh avocados) by the classification models. The superpixels in red indicate a critical region for classification as negative (rotten avocados). Non-colored areas indicate less substantial influence. (A) Representative explanation images for vision transformer 16 patches (ViT-patch16). (B) Representative explanation images for convolutional neural network (CNN) ResNet-18.
4. Conclusion
This study developed deep learning models to evaluate the firmness and internal quality of avocados with the goal of reducing food waste. The non-destructive smartphone imaging method avoids the relatively expensive, hard-to-access laboratory equipment required by conventional methods. The CNN ResNet-18R model assessed firmness with an R2 of 0.919, surpassing conventional machine learning models (i.e., SVMR and RFR). For avocado internal quality, the CNN ResNet and ViT classification models identified rotten avocados with accuracies of up to 84.3%. The artificial intelligence interpretation technique, LIME, revealed which regions of the avocado smartphone images the deep learning models focused on, explaining the basis for their classifications. This study established a decision-support tool to predict and assess the quality of avocados. To develop a practical, smartphone-based avocado ripeness prediction technology, further studies should incorporate images captured under diverse and natural backgrounds. This technology has the potential to be extended to other varieties of avocados and even other types of food. Once food industries and consumers have access to predicted firmness, remaining shelf life, and internal quality, they can make data-driven, evidence-based decisions for smarter consumption and distribution planning. This, in turn, can help reduce food waste and loss caused by over-ripeness.
CRediT authorship contribution statement
The authors confirm their contribution to the paper as follows: Lee IH: Methodology, Data Collection, Programming, Formal Analysis, Validation, Visualization, and Writing – Original Draft, Review & Editing. Li Z: Programming, Writing – Review & Editing. Ma L: Conceptualization, Methodology, Visualization, Supervision, Project Administration, Funding Acquisition, and Writing – Original Draft, Review & Editing. All authors reviewed and approved the final version of the manuscript.
Data availability statement
The data are available upon reasonable request.
Funding sources
This work was supported by the U.S. Department of Agriculture's (USDA) National Institute of Food and Agriculture (NIFA) Capacity Building Grants for Non-Land-Grant Colleges of Agriculture Program (Grant No. 2024-70001-43485) and Oregon State University Startup Grant.
Declaration of competing interest
The authors declare that they have no conflict of interest.
Acknowledgements
We acknowledge Jiacheng Zhang, Ojasvini Sharman, and Samuel Willert for their assistance with partial data collection.
Footnotes
Supplementary data to this article can be found online at https://doi.org/10.1016/j.crfs.2025.101196.
Appendix A. Supplementary data
The following are the supplementary data to this article.
References
- Becerra-Sanchez F.J., Pérez-Espinosa H., Meza-Aguilar M.A. Development of non-destructive system for estimating avocado quality parameters. Postharvest Biol. Technol. 2024;212 [Google Scholar]
- Bhole V., Kumar A., editors. Proceedings of the 21st Annual Conference on Information Technology Education. 2020. Mango quality grading using deep learning technique: perspectives from agriculture and food industry. [DOI] [Google Scholar]
- Brereton R.G., Lloyd G.R. Support vector machines for classification and regression. Analyst. 2010;135(2):230–267. doi: 10.1039/B918972F. [DOI] [PubMed] [Google Scholar]
- Buzby J.C., Farah-Wells H., Hyman J. U.S. Department of Agriculture Economic Research Service. 2014. The estimated amount, value, and calories of postharvest food losses at the retail and consumer levels in the United States. [Google Scholar]
- Buzby J.C., Hyman J., Stewart H., Hodan F.W. The value of retail‐and consumer‐level fruit and vegetable losses in the United States. J. Consum. Aff. 2011;45(3):492–515. doi: 10.1111/j.1745-6606.2011.01214.x. [DOI] [Google Scholar]
- California Avocado Commission (CAC) 2020. Avocado stages of ripe sheet by California Avocado Commission.https://californiaavocado.com/wp-content/uploads/2020/09/CAC-Stages-of-Ripe-Sheet-5-18-20-scaled.jpg Accessed 2025 July 19. [Google Scholar]
- Castelvecchi D. Can we open the black box of AI? Nature. 2016;538(7623):20. doi: 10.1038/538020a. [DOI] [PubMed] [Google Scholar]
- Cho B.-H., Koyama K., Olivares Díaz E., Koseki S. Determination of “Hass” avocado ripeness during storage based on smartphone image and machine learning model. Food Bioproc. Technol. 2020;13(9):1579–1587. doi: 10.1007/s11947-020-02494-x. [DOI] [Google Scholar]
- Cho B.-H., Koyama K., Koseki S. Determination of ‘Hass’ avocado ripeness during storage by a smartphone camera using artificial neural network and support vector regression. J. Food Meas. Char. 2021;15(2):2021–2030. doi: 10.1007/s11694-020-00793-7. [DOI] [Google Scholar]
- Cian D., van Gemert J., Lengyel A. Evaluating the performance of the LIME and Grad-CAM explanation methods on a LEGO multi-label image classification task. arXiv preprint arXiv:2008.01584. 2020 doi: 10.48550/arXiv.2008.01584. [DOI] [Google Scholar]
- Cox K.A., McGhie T.K., White A., Woolf A.B. Skin colour and pigment changes during ripening of ‘Hass’ avocado fruit. Postharvest Biol. Technol. 2004;31(3):287–294. doi: 10.1016/j.postharvbio.2003.09.008. [DOI] [Google Scholar]
- Cui F., Zheng S., Wang D., Ren L., Meng Y., Ma R., et al. Development of machine learning-based shelf-life prediction models for multiple marine fish species and construction of a real-time prediction platform. Food Chem. 2024 doi: 10.1016/j.foodchem.2024.139230. [DOI] [PubMed] [Google Scholar]
- Davur Y.J., Kämper W., Khoshelham K., Trueman S.J., Bai S.H. Estimating the ripeness of Hass avocado fruit using deep learning with hyperspectral imaging. Horticulturae. 2023;9(5):599. doi: 10.3390/horticulturae9050599. [DOI] [Google Scholar]
- Dosovitskiy A., Beyer L., Kolesnikov A., Weissenborn D., Zhai X., Unterthiner T., et al. An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. 2020 https://arxiv.org/pdf/2010.11929/1000 [Google Scholar]
- Feurer M., Hutter F. Springer International Publishing Cham; 2019. Hyperparameter Optimization. Automated Machine Learning: Methods, Systems, Challenges; pp. 3–33. [DOI] [Google Scholar]
- Goyal K., Kumar P., Verma K. Tomato ripeness and shelf-life prediction system using machine learning. J. Food Meas. Char. 2024:1–16. doi: 10.1007/s11694-023-02349-x. [DOI] [Google Scholar]
- Gunders D. 2012. Wasted: How America is losing up to 40 percent of its food from farm to fork to landfill. Natural Resources Defense Council (NRDC)https://www.nrdc.org/resources/wasted-how-america-losing-40-percent-its-food-farm-fork-landfill Accessed 2025 July 19. [Google Scholar]
- Han Y., Bai S.H., Trueman S.J., Khoshelham K., Kämper W. Predicting the ripening time of ‘Hass’ and ‘Shepard’ avocado fruit by hyperspectral imaging. Precis. Agric. 2023;24(5):1889–1905. doi: 10.1007/s11119-023-10022-y. [DOI] [Google Scholar]
- Hasan M., Vasker N., Khan M.S.H. Real-time sorting of broiler chicken meat with robotic arm: XAI-enhanced deep learning and LIME framework for freshness detection. J. Agric. Food Res. 2024;18 doi: 10.1016/j.jafr.2024.101372. [DOI] [Google Scholar]
- He K., Zhang X., Ren S., Sun J., editors. Deep Residual Learning for Image Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016. [Google Scholar]
- Jiang X., Wu M., Dong W., Rao Q., Huo H., Han Q. Monoclonal antibody-based sandwich enzyme-linked immunosorbent assay for porcine hemoglobin quantification. Food Chem. 2020;324 doi: 10.1016/j.foodchem.2020.126880. [DOI] [PubMed] [Google Scholar]
- Kibler K.M., Reinhart D., Hawkins C., Motlagh A.M., Wright J. Food waste and the food-energy-water nexus: a review of food waste management alternatives. Waste Manag. 2018;74:52–62. doi: 10.1016/j.wasman.2018.01.014. [DOI] [PubMed] [Google Scholar]
- Lorenzini G., Moretti S., Conti A. Springer; 2011. Units and Conversion Factors. Fin Shape Thermal Optimization Using Bejan's Constructal Theory; pp. 41–46. [Google Scholar]
- Lu K., Xu Y., Yang Y., editors. ICMLCA 2021; 2nd International Conference on Machine Learning and Computer Application. VDE; 2021. Comparison of the potential between transformer and CNN in image classification. [Google Scholar]
- Ma L., Liang C., Cui Y., Du H., Liu H., Zhu L., et al. Prediction of banana maturity based on the sweetness and color values of different segments during ripening. Curr. Res. Food Sci. 2022;5:1808–1817. doi: 10.1016/j.crfs.2022.08.024. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Matsui T., Sugimori H., Koseki S., Koyama K. Automated detection of internal fruit rot in Hass avocado via deep learning-based semantic segmentation of X-ray images. Postharvest Biol. Technol. 2023;203 doi: 10.1016/j.postharvbio.2023.112390. [DOI] [Google Scholar]
- Mehdizadeh S.A., Noshad M., Chaharlangi M., Ampatzidis Y. AI-driven non-destructive detection of meat freshness using a multi-indicator sensor array and smartphone technology. Smart Agricultural Technology. 2025;10 doi: 10.1016/j.atech.2025.100822. [DOI] [Google Scholar]
- Mishra P., Paillart M., Meesters L., Woltering E., Chauhan A., Polder G. Assessing avocado firmness at different dehydration levels in a multi-sensor framework. Infrared Phys. Technol. 2021;118 doi: 10.1016/j.infrared.2021.103901. [DOI] [Google Scholar]
- Alshammari H., Gasmi K., Ben Ltaifa I., Krichen M., Ben Ammar L., Mahmood M.A. Olive disease classification based on vision transformer and CNN models. Comput. Intell. Neurosci. 2022;2022(1):3998193. doi: 10.1155/2022/3998193. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Osuna-García J.A., Doyon G., Salazar-García S., Goenaga R., González-Durán I.J. Relationship between skin color and some fruit quality characteristics of ‘Hass’ avocado. J. of Agric. of the University of Puerto Rico. 2011;95(1–2):15–23. [Google Scholar]
- Payasi A., Sanwal G. Ripening of climacteric fruits and their control. J. Food Biochem. 2010;34(4):679–710. doi: 10.1111/j.1745-4514.2009.00307.x. [DOI] [Google Scholar]
- Rendón-Anaya M., Ibarra-Laclette E., Méndez-Bravo A., Lan T., Zheng C., Carretero-Paulet L., et al. The avocado genome informs deep angiosperm phylogeny, highlights introgressive hybridization, and reveals pathogen-influenced gene space adaptation. Proc. Natl. Acad. Sci. 2019;116(34):17081–17089. doi: 10.1073/pnas.1822129116. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ribeiro M.T., Singh S., Guestrin C. Model-agnostic interpretability of machine learning. arXiv preprint arXiv:1606.05386. 2016 doi: 10.48550/arXiv.1606.05386. [DOI] [Google Scholar]
- Rodriguez-Galiano V., Sanchez-Castillo M., Chica-Olmo M., Chica-Rivas M. Machine learning predictive models for mineral prospectivity: an evaluation of neural networks, random forest, regression trees and support vector machines. Ore Geol. Rev. 2015;71:804–818. doi: 10.1016/j.oregeorev.2015.01.001. [DOI] [Google Scholar]
- Salazar-López N.J., Domínguez-Avila J.A., Yahia E.M., Belmonte-Herrera B.H., Wall-Medrano A., Montalvo-González E., González-Aguilar G.A. Avocado fruit and by-products as potential sources of bioactive compounds. Food Res. Int. 2020;138:109774. doi: 10.1016/j.foodres.2020.109774. [DOI] [PubMed] [Google Scholar]
- Sandoval-Contreras T., González Chávez F., Poonia A., Iñiguez-Moreno M., Aguirre-Güitrón L. Avocado waste biorefinery: towards sustainable development. Recycling. 2023;8(5):81. doi: 10.3390/recycling8050081. [DOI] [Google Scholar]
- Selvaraju R.R., Cogswell M., Das A., Vedantam R., Parikh D., Batra D., editors. Proceedings of the IEEE International Conference on Computer Vision. 2017. Grad-cam: visual explanations from deep networks via gradient-based localization. [Google Scholar]
- Shafiee-Jood M., Cai X. Reducing food loss and waste to enhance food security and environmental sustainability. Environ. Sci. Technol. 2016;50(16):8432–8443. doi: 10.1021/acs.est.6b01993. https://pubs.acs.org/doi/abs/10.1021/acs.est.6b01993 [DOI] [PubMed] [Google Scholar]
- Shwartz-Ziv R., Tishby N. Opening the black box of deep neural networks via information. arXiv preprint arXiv:1703.00810. 2017 doi: 10.48550/arXiv.1703.00810. [DOI] [Google Scholar]
- Sierra N.M., Londoño A., Gómez J.M., Herrera A.O., Castellanos D.A. Evaluation and modeling of changes in shelf life, firmness and color of ‘Hass’ avocado depending on storage temperature. Food Sci. Technol. Int. 2019;25(5):370–384. doi: 10.1177/1082013219826825. [DOI] [PubMed] [Google Scholar]
- Thota M., Kollias S., Swainson M., Leontidis G. Multi-source domain adaptation for quality control in retail food packaging. Comput. Ind. 2020;123 doi: 10.1016/j.compind.2020.103293. [DOI] [Google Scholar]
- U.S. Department of Agriculture (USDA) 2023. U.S. Food Loss and Waste 2030 Champions: 2023 Milestones Report.https://www.usda.gov/about-food/food-safety/food-loss-and-waste/us-food-loss-and-waste-2030-champions Accessed 2025 July 19. [Google Scholar]
- USDA Shipping point and market inspection instructions for Florida avocados. 2000. https://www.ams.usda.gov/sites/default/files/media/AvocadoFloridaInspectionInstructions.pdf Accessed 2025 July 19.
- Von Eschenbach W.J. Transparency and the black box problem: why we do not trust AI. Philos. & Technol. 2021;34(4):1607–1622. doi: 10.1007/s13347-021-00477-0. [DOI] [Google Scholar]
- Wedding B.B., Wright C., Grauf S., Gadek P., White R.D. The application of FT‐NIRS for the detection of bruises and the prediction of rot susceptibility of ‘Hass’ avocado fruit. J. Sci. Food Agric. 2019;99(4):1880–1887. doi: 10.1002/jsfa.9383. [DOI] [PubMed] [Google Scholar]
- Xu M., Sun J., Cheng J., Yao K., Shi L., Zhou X. Non-destructive estimation for Kyoho grape shelf-life using Vis/NIR hyperspectral imaging and deep learning algorithm. Infrared Phys. Technol. 2024;142 doi: 10.1016/j.infrared.2024.105532. [DOI] [Google Scholar]
- Xue L., Liu G., Parfitt J., Liu X., Van Herpen E., Stenmarck Å., et al. Missing food, missing data? A critical review of global food losses and food waste data. Environ. Sci. Technol. 2017;51(12):6618–6633. doi: 10.1021/acs.est.7b00401. [DOI] [PubMed] [Google Scholar]