Scientific Reports. 2025 Aug 25;15:31259. doi: 10.1038/s41598-025-15940-7

Bayesian optimized CNN ensemble for efficient potato blight detection using fuzzy image enhancement

Achin Jain 1, Arun Kumar Dubey 1, Vincent Shin-Hung Pan 2,10,, Saoucene Mahfoudh 3, Turki A Althaqafi 4, Varsha Arya 5,15,16, Wadee Alhalabi 6,17, Sunil K Singh 7, Vanita Jain 8, Arvind Panwar 9, Sudhakar Kumar 7, Ching-Hsien Hsu 10, Brij B Gupta 10,11,12,13,14,
PMCID: PMC12378213  PMID: 40854933

Abstract

Potato blight is a serious disease that affects potato crops and leads to substantial agricultural and economic losses. To enhance detection accuracy, we propose Bayesian Optimized CNN Weighted Ensemble Potato Blight Detection, a deep learning-based approach that optimizes CNN models through Bayesian optimization and ensemble learning. In the proposed study, extensive experiments were conducted to develop an optimized Bayesian Weighted Ensemble CNN model for the detection of potato leaf blight. First, multiple CNN architectures were trained using different optimizers: ADAM (DL1), SGD (DL2), RMSProp (DL3), and ADAMAX (DL4), evaluating their individual performance. To mitigate class imbalance, data augmentation techniques were applied, increasing the number of healthy leaves by 6 times. In addition, fuzzy image enhancement was implemented to improve feature extraction and classification accuracy. Bayesian optimization was then used to determine the optimal weights for a deep ensemble model, exploring 11 possible model combinations. The final EDL7 ensemble model (DL1 + DL2 + DL3), optimized through Bayesian optimization, achieved the highest accuracy of 97.94%, outperforming individual models. Furthermore, the ensemble model achieved a precision of 0.981, recall of 0.983, and an F1 score of 0.982, ensuring a well-balanced trade-off between precision and recall. These results highlight the effectiveness of Bayesian-optimized ensemble learning in improving potato blight detection, making it a robust and reliable solution for agricultural disease classification.

Keywords: Potato blight detection, CNN, Optimizer, Bayesian optimization, Ensemble learning

Subject terms: Energy science and technology, Engineering

Introduction

Potatoes (Solanum tuberosum) are the most widely consumed non-cereal food crop in the world. Their cultivation has expanded worldwide and flourishes under a wide range of climatic conditions. Their high nutritional value and economic importance make them crucial for food security, but these qualities also expose them to various diseases that severely affect yield and quality, leading to substantial economic losses1–3. Early and late blight are particularly devastating. Early blight, caused by Alternaria solani, manifests as small dark brown lesions on leaves, disrupting photosynthesis and accelerating plant senescence. Late blight, caused by Phytophthora infestans, is infamous for triggering the Irish Potato Famine of the 1840s and spreads rapidly as wet rot, capable of destroying entire fields under favorable environmental conditions4,5. These diseases not only threaten potato production but also compromise food security by reducing market quality. Effective and timely disease monitoring is essential, but traditional methods, such as visual inspection by farmers and laboratory tests, are labor intensive, time consuming, and prone to human error. The development of automated and efficient detection techniques is therefore critical to protecting potato crops and securing global food stability.

Traditional methods for managing potato diseases, such as routine visual surveys and chemical applications, are labor intensive and can contribute to environmental degradation. Recent advances have integrated artificial intelligence (AI) and machine learning (ML) into agricultural practice to improve disease management, with models used to predict outbreaks, optimize treatment plans, and minimize chemical use6,7. AI draws on datasets from satellite images, aerial drone data, and ground-level sensors to detect early signs of infection, although the correlation between weather variables, such as temperature, humidity, wind speed, and atmospheric pressure, and disease prevalence remains underexplored4. A particularly promising development in ML is the use of convolutional neural networks (CNNs) to automate plant disease diagnosis, offering a faster and more cost-effective alternative to traditional methods. CNN-based models have been applied to datasets such as PlantVillage, where researchers optimize hyperparameters to enhance classification accuracy while minimizing computational cost and overfitting. Given the critical role of the agricultural sector in the global economy, providing food, employment, and income, AI-driven disease detection is vital to ensuring crop productivity. In India, for example, agriculture employs 53.3% of the workforce and contributes 18% to GDP, with recent trends showing an increase in its economic importance8–11. However, plant diseases and pests pose a major threat to agricultural productivity, often requiring early detection and precise monitoring to prevent widespread outbreaks. Traditional leaf inspection methods for diagnosing late blight, caused by Phytophthora infestans, are time consuming and subjective12–14. In countries like Colombia, where late blight affects high-density potato crops, excessive use of pesticides is common. Advances in computer vision have improved plant disease diagnosis, using various imaging techniques and ML models, such as SVMs, K-means clustering, and backpropagation neural networks, for classification15–17. Studies have demonstrated the effectiveness of CNN architectures such as GoogLeNet, AlexNet, and ResNet in classifying plant diseases, with ResNet proving superior for tomato leaf samples18,19. LeNet has also been evaluated for banana disease detection, while models such as Inception-v3 have been applied to banana leaf disease recognition20, pest detection21, and cassava disease identification ?. Researchers have found that reducing the number of pooling operations relative to convolutional layers enhances CNN performance by preserving essential features, ultimately improving potato blight detection and optimizing feature extraction.

Key contributions

In this paper, we propose the Bayesian-Optimized CNN Weighted Ensemble for Potato Blight Detection, a novel deep learning framework designed to address key challenges in agricultural disease diagnosis, such as class imbalance, feature degradation, and lack of ensemble optimization. The proposed approach introduces key innovations that advance the state of the art in the field.

  • Data augmentation: To mitigate class imbalance, data augmentation techniques were used. Specifically, the number of healthy leaf images was increased six times, ensuring a balanced distribution across all classes in the dataset.

  • Fuzzy-based image enhancement: A fuzzy image enhancement technique was applied to improve contrast and preserve texture details, thereby enhancing the discriminative power of the CNN models. Unlike traditional image enhancement (for example, histogram equalization), the fuzzy approach improves edge clarity and lesion visibility in potato leaves. An ablation study demonstrated that CNNs trained on fuzzy-enhanced images outperformed those trained on CLAHE-enhanced images.

  • CNN model training: Multiple CNN models were trained using diverse optimizers (Adam, SGD, RMSProp, and Adamax) to promote architectural and learning diversity, which is critical for building an effective ensemble.

  • Bayesian optimization for weighted ensemble fusion: Unlike standard ensembles that use equal weight averaging or simple voting, we applied Bayesian optimization to assign dynamic weights to each CNN model based on accuracy.

  • Performance comparison and validation: A thorough comparative analysis was conducted across individual models and the proposed Bayesian ensemble. We also employed TOPSIS multi-criteria decision-making to validate ensemble performance from a robustness perspective rather than relying on accuracy alone.

Paper organization

This paper is structured as follows. Section “Related work” covers related work. Section “Materials and methods” discusses the methods, materials, and data preprocessing techniques. Section “System design: Bayesian optimized CNN ensemble for potato blight detection” introduces the proposed architecture. Section “Experimental results and discussion” presents the results along with the discussion. Finally, Section “Conclusion and future work” concludes the study and explores future research directions.

Related work

The integration of AI and machine learning (ML) in agricultural disease management has transformed traditional empirical and manual methods, offering precise and data-driven solutions that improve disease forecasting and control22. Historically, plant disease prediction was based on outbreak records and weather data, which, while helpful, lacked the flexibility to accommodate variable and unpredictable conditions23. AI introduces advanced tools capable of analyzing complex datasets to predict disease outbreaks, using satellite imagery, high-resolution drone images, and ground-based real-time sensors that monitor microclimatic conditions24–26. Machine learning algorithms process these data to detect patterns and anomalies that can signal early signs of disease, significantly improving precision and efficiency in agricultural disease management27,28.

Among the most critical challenges in potato farming are diseases such as early and late blight, which, if left unchecked, can cause severe yield losses. Traditional crop monitoring by agronomists is prone to errors, which motivates the adoption of automated systems. In recent years, convolutional neural networks (CNNs) have emerged as powerful tools for detecting and classifying plant diseases, offering a fast and cost-effective alternative to manual diagnosis. Various studies have explored CNN-based methods for identifying disease patterns in potato leaves. Rayhan Asif et al.29 developed a CNN-based model to accurately distinguish between healthy and infected potato leaves, ensuring reliable disease detection. Similarly, Rozaqi and Sunyoto30 examined how CNN algorithms automate disease diagnosis, demonstrating the advantages of deep learning in agriculture. Bangari et al.31 reviewed CNN-based disease detection techniques, emphasizing their potential to improve agricultural productivity. Liao et al.32 proposed an image segmentation method. Lee et al.33 proposed a deep learning approach to detect defects in potato leaves, improving accuracy. Agarwal et al.34 focused on optimizing CNN parameters and architectures to enable efficient and rapid disease diagnosis. Rashid et al.35 introduced a multilayer deep learning system for the comprehensive management of potato disease, highlighting the effectiveness of AI-driven solutions in the control of plant diseases. Wang et al.36 proposed an image retrieval method.

With AI-driven disease prediction and CNN-based classification methods, agricultural disease management is becoming increasingly efficient. By using diverse data sources and optimizing CNN architectures, researchers are developing robust and scalable solutions to combat potato diseases, ultimately enhancing crop protection, reducing chemical dependence, and improving global food security. CNNs have emerged as powerful tools for the autonomous identification of plant diseases because of their superior image recognition and classification capabilities. Researchers such as Khobragade et al.37 have demonstrated the effectiveness of CNN-based methods in the detection and classification of potato diseases, highlighting the need for accurate diagnosis to improve crop yield. Mahum et al.38 further validated the potential of deep learning in precision agriculture, while Islam et al. explored alternative classification techniques using image segmentation and multiclass support vector machines (PFSVM). Lee et al. and Joseph et al.39 reinforced the success of CNN-based models in agricultural applications, laying the foundation for early and late blight detection (Sharma et al.40). Furthermore, Asfaw41 highlighted the significance of parameter tuning in improving model performance. As AI-driven disease detection advances, the development of user-friendly interfaces will be crucial to wider adoption by non-technical farmers42. These interfaces should integrate real-time forecasting with actionable recommendations, enhancing accessibility and usability. In addition, AI applications are expanding to genetic data analysis, predicting how different potato varieties respond to diseases under varying climatic conditions43,44. This fusion of genetic and environmental insights holds promise for revolutionizing disease prevention strategies, ensuring healthier crops and sustainable agricultural practices45.

Materials and methods

This study presents the implementation of a convolutional neural network (CNN) model, developed with TensorFlow 2, to classify potato blight. The dataset, sourced from the Kaggle PlantVillage dataset46, includes 1000 images each of early and late blight and 152 images of healthy potato leaves, highlighting a significant class imbalance. The subsequent subsections discuss the methods applied for data balancing and augmentation in this work.

Data preparation: fuzzy enhancement and data augmentation

For experimentation, 80% of the dataset was used for training (of which 10% was reserved for validation) and 20% for testing. To enable more efficient processing, the images were resized to 128 × 128 pixels. In this paper, we implemented a fuzzy-based contrast enhancement technique47, shown in Algorithm 1, to improve the visual quality of the potato leaf images. This method applies fuzzy logic to adjust pixel intensities adaptively, enhancing details and contrast in the image. The algorithm normalizes the image pixel values to the range [0, 1], processes them using a fuzzy enhancement formula, and then scales them back to the original intensity range. This enhancement amplifies the contrast of pixels around the midpoint, making subtle variations in image detail more pronounced while ensuring the values remain within valid intensity limits. Figure 1 shows the fuzzy enhancement process and samples of images after processing. Before performing fuzzy enhancement, we also carried out data augmentation to balance the dataset by increasing the number of healthy potato leaf images six-fold. Figure 2 shows the class-wise distribution before and after data augmentation.
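Since the exact membership and enhancement functions of ref. 47 are not reproduced here, the sketch below only illustrates the general normalize-enhance-rescale flow of Algorithm 1, assuming the classic fuzzy intensification operator; the function name and the operator choice are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def fuzzy_contrast_enhance(image: np.ndarray) -> np.ndarray:
    """Sketch of fuzzy contrast enhancement in the spirit of Algorithm 1.

    Assumes the classic fuzzy intensification (INT) operator, which amplifies
    contrast around the 0.5 membership midpoint; the membership function used
    in ref. 47 may differ.
    """
    img = image.astype(np.float32)
    lo, hi = img.min(), img.max()
    # Step 1: fuzzify -- normalize pixel intensities to [0, 1]
    mu = (img - lo) / (hi - lo + 1e-8)
    # Step 2: intensify -- stretch memberships away from the 0.5 midpoint
    enhanced = np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)
    # Step 3: defuzzify -- scale back to the original intensity range
    return (enhanced * (hi - lo) + lo).clip(0, 255).astype(image.dtype)

# Example: enhance a leaf image loaded as a uint8 array of shape (H, W, 3)
# enhanced_leaf = fuzzy_contrast_enhance(leaf_image)
```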

Fig. 1. Fuzzy enhancement of potato leaves.

Fig. 2. Class-wise distribution before and after augmentation.

Algorithm 1. Fuzzy contrast enhancement.

CNN

Convolutional Neural Networks (CNNs) are a popular deep learning approach, particularly effective in image-based applications48. CNNs consist of several layers, including input, convolution, activation, pooling, and fully-connected layers. The convolution layers extract features from the input data, with ReLU as the activation function. The pooling layers reduce parameters and mitigate overfitting, while the fully connected layers make predictions, with classification finalized using a softmax classifier.

Input layer

The fuzzy-enhanced potato blight images enter the neural network through this layer, which specifies the height, width, and number of color channels of each image:

$$I \in \mathbb{R}^{H \times W \times C} \tag{1}$$

where $H$ is the height of the input image, $W$ is the width, and $C$ is the number of channels (e.g., 1 for grayscale, 3 for RGB).

Preprocessing layer

This layer normalizes the pixel values of the images to the range [0, 1] by applying a rescaling factor of 1/255, which facilitates pattern recognition in the model:

$$\hat{I}(x, y, c) = \frac{I(x, y, c)}{255} \tag{2}$$

where $I(x, y, c)$ represents the original pixel intensity at position $(x, y)$ for channel $c$, and $\hat{I}(x, y, c)$ is the normalized pixel value.

Convolutional layer

This layer extracts features after preprocessing by performing operations such as convolutions on the input using a set of learnable filters (kernels), which aid in the detection of patterns in images such as edges, textures, or shapes. Then the ReLU activation function is applied to introduce nonlinearity to the model, and the Max pooling operation is used to reduce the spatial dimensions of the feature map. Each filter (kernel) is represented as:

$$K \in \mathbb{R}^{k \times k \times C \times F} \tag{3}$$

where $k$ is the kernel size, $C$ is the number of input channels, and $F$ is the number of filters.

The convolution operation at position $(x, y)$ in the output feature map is defined as:

$$O(x, y, f) = \sum_{i=1}^{k} \sum_{j=1}^{k} \sum_{c=1}^{C} \hat{I}(x + i - 1,\, y + j - 1,\, c)\, K(i, j, c, f) + b_f \tag{4}$$

where $O(x, y, f)$ is the output at position $(x, y)$ for filter $f$, $K(i, j, c, f)$ is the kernel value at position $(i, j)$ for channel $c$ of filter $f$, and $b_f$ is the bias term for filter $f$.

After the convolution operation, an activation function $\phi$ is applied:

$$A(x, y, f) = \phi\big(O(x, y, f)\big) \tag{5}$$

where $\phi$ is the nonlinear ReLU function:

$$\phi(z) = \max(0, z) \tag{6}$$

Flatten layer

The multidimensional feature maps produced by the convolutional layers are converted into a one-dimensional vector. The flatten operation reshapes a feature tensor $I \in \mathbb{R}^{H \times W \times C}$ into

$$F \in \mathbb{R}^{H \cdot W \cdot C} \tag{7}$$

Each element in F is mapped from the original tensor using:

$$F(k) = I(x, y, c), \qquad k = (x - 1) \cdot W \cdot C + (y - 1) \cdot C + c \tag{8}$$

for $k = 1, \dots, H \cdot W \cdot C$.

This operation ensures that the spatial dimensions are serialized into a continuous vector for input into fully connected layers.

Fully connected layer

Also called a Dense Layer, it is used to classify blight or healthy potato leaves in our work. Dropout regularization is used, which inhibits neuronal co-adaptation and reduces overfitting.

$$\mathbf{z} = W \mathbf{F} + \mathbf{b} \tag{9}$$

where $W \in \mathbb{R}^{M \times N}$ is the weight matrix connecting the $N$ inputs to $M$ neurons, $\mathbf{b} \in \mathbb{R}^{M}$ is the bias vector, and $\mathbf{z}$ is the pre-activation output.

Applying a nonlinear activation function $\phi$:

$$\mathbf{y} = \phi(\mathbf{z}) \tag{10}$$

where $\mathbf{y}$ is the final output of the fully connected layer.

Optimizers used

In this paper, we used the following optimizer-based CNN models for comparison and for constructing the ensemble model for potato blight detection; a minimal sketch showing how these optimizers can be instantiated is given after the list.

  1. Adam optimizer: It integrates the advantages of two other extensions of stochastic gradient descent: AdaGrad and RMSprop. It incorporates momentum to address sparse gradients and enhance convergence speed while also computing adaptive learning rates for each parameter.

  2. RMSprop optimizer: It modifies the learning rate for each parameter, helping stabilize the training process. This approach is less affected by initial learning rates and is effective in non-stationary objective problems. It is particularly beneficial for recurrent neural networks (RNNs), especially in tackling the vanishing-gradient problem.

  3. SGD optimizer: It is user-friendly and efficient, especially when combined with momentum. However, careful tuning of the learning rate and other hyperparameters is essential. Compared to adaptive optimizers, a basic gradient descent method may require more adjustments to achieve optimal performance.

  4. Adamax optimizer: It is specifically designed to manage situations where gradient magnitudes may become excessively large. This approach is especially beneficial when working with sparse or noisy gradients.
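As noted above, the four optimizers can be instantiated directly in Keras. The sketch below uses the learning rate from Table 1; any argument the paper does not state (for example, SGD momentum) is left at its Keras default, so this is an illustrative configuration rather than the authors' exact setup.

```python
import tensorflow as tf

# The four optimizers compared in this work (DL1-DL4), instantiated with the
# learning rate used for training (0.0001, see Table 1).
optimizers = {
    "DL1": tf.keras.optimizers.Adam(learning_rate=1e-4),
    "DL2": tf.keras.optimizers.SGD(learning_rate=1e-4),
    "DL3": tf.keras.optimizers.RMSprop(learning_rate=1e-4),
    "DL4": tf.keras.optimizers.Adamax(learning_rate=1e-4),
}
```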

Bayesian optimization

Bayesian optimization is a powerful strategy for optimizing objective functions that are expensive to evaluate and lack analytical expressions. It builds a probabilistic model, typically using Gaussian processes, to approximate the unknown function and systematically explore the search space49. By leveraging prior knowledge and incorporating uncertainty in the predictions, Bayesian optimization focuses on evaluating the most promising areas of the space, balancing exploration and exploitation. This approach is particularly useful in applications like hyperparameter tuning in machine learning, where each evaluation can be computationally costly. Its iterative process ensures that optimal solutions are identified efficiently with minimal evaluations. In this work, Bayesian optimization is used to optimize the weights of the models in an ensemble. Our goal is to find the optimal set of weights $\mathbf{w}$ that maximizes the accuracy of the ensemble model.

Phase 1: Defining the objective function

Let $A(\mathbf{w})$ denote the accuracy of the ensemble model with weights $\mathbf{w} = (w_1, \dots, w_N)$, where $w_i$ represents the weight assigned to model $i$. The objective is to maximize $A(\mathbf{w})$:

$$\mathbf{w}^{*} = \arg\max_{\mathbf{w}} A(\mathbf{w}) \tag{11}$$

The ensemble model's accuracy $A(\mathbf{w})$ depends on the weighted combination of the predictions of the $N$ individual models and can be represented as:

$$A(\mathbf{w}) = \sum_{i=1}^{N} w_i A_i \tag{12}$$

where $w_i$ is the weight assigned to model $i$, $A_i$ is the accuracy of model $i$ on the validation set, and $N$ is the number of models in the ensemble.

Phase 2: Probabilistic model (surrogate model)

To approximate the unknown accuracy function $A(\mathbf{w})$ and handle the computational cost of evaluating it, we use a Gaussian process (GP) model. This model gives a probabilistic distribution over the possible values of $A(\mathbf{w})$, with mean and variance predictions.

The Gaussian process is defined as:

$$A(\mathbf{w}) \sim \mathcal{GP}\big(m(\mathbf{w}),\, k(\mathbf{w}, \mathbf{w}')\big) \tag{13}$$

where $m(\mathbf{w})$ is the mean function and $k(\mathbf{w}, \mathbf{w}')$ is the covariance function (kernel) for the weight vectors $\mathbf{w}$ and $\mathbf{w}'$.

For a new weight vector $\mathbf{w}_{*}$, the predicted mean and variance are:

$$A(\mathbf{w}_{*}) \mid \mathcal{D} \sim \mathcal{N}\big(\mu(\mathbf{w}_{*}),\, \sigma^{2}(\mathbf{w}_{*})\big) \tag{14}$$

Phase 3: Acquisition function

An acquisition function is used to guide the search for the optimal weights. A common acquisition function for maximizing $A(\mathbf{w})$ is expected improvement (EI), which quantifies the expected improvement over the best currently observed accuracy $A_{\text{best}}$:

$$\mathrm{EI}(\mathbf{w}) = \mathbb{E}\big[\max\big(0,\, A(\mathbf{w}) - A_{\text{best}}\big)\big] \tag{15}$$

where $A_{\text{best}}$ is the best accuracy observed so far. The acquisition function balances the exploration of uncertain regions of the weight space and the exploitation of regions with high predicted accuracy.

Phase 4: Iterative optimization process

The optimization process proceeds iteratively:

  1. Select the next evaluation point: Based on the acquisition function, select the next set of weights $\mathbf{w}_{t+1}$:
    $$\mathbf{w}_{t+1} = \arg\max_{\mathbf{w}} \mathrm{EI}(\mathbf{w}) \tag{16}$$
  2. Evaluate the objective function: Evaluate the ensemble accuracy $A(\mathbf{w}_{t+1})$ at the selected weight set $\mathbf{w}_{t+1}$, using the weighted ensemble of models.

  3. Update the surrogate model: Update the Gaussian process with the new evaluation $\big(\mathbf{w}_{t+1},\, A(\mathbf{w}_{t+1})\big)$, which improves the GP's predictions for future evaluations:
    $$\mathcal{D}_{t+1} = \mathcal{D}_{t} \cup \big\{\big(\mathbf{w}_{t+1},\, A(\mathbf{w}_{t+1})\big)\big\} \tag{17}$$
  4. Repeat: Repeat the process until a stopping criterion is met (e.g., convergence or maximum number of evaluations).

Phase 5: Convergence

The optimization process converges when no further significant improvement in accuracy is observed. The final result is the optimal weight set $\mathbf{w}^{*}$ and the corresponding maximum accuracy $A(\mathbf{w}^{*})$.
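The paper does not name the Bayesian optimization library used for the weight search, so the sketch below assumes scikit-optimize's gp_minimize, with the ensemble weights as the search space, validation accuracy as the objective, and expected improvement (Eq. 15) as the acquisition function. The helper names (negative_ensemble_accuracy, optimize_ensemble_weights) and the renormalization of the raw weights are illustrative choices, not the authors' code.

```python
import numpy as np
from skopt import gp_minimize          # GP-based Bayesian optimization
from skopt.space import Real

# val_probs: list of per-model class-probability arrays on the validation set,
# each of shape (n_samples, n_classes); val_labels: integer ground-truth labels.

def negative_ensemble_accuracy(raw_weights, val_probs, val_labels):
    w = np.asarray(raw_weights, dtype=float)
    w = w / w.sum()                                     # normalize so weights sum to 1
    fused = sum(wi * p for wi, p in zip(w, val_probs))  # weighted average of predictions
    acc = np.mean(np.argmax(fused, axis=1) == val_labels)
    return -acc                                         # gp_minimize minimizes, so negate

def optimize_ensemble_weights(val_probs, val_labels, n_calls=30, seed=42):
    # Lower bound kept slightly above 0 so the normalization never divides by zero.
    space = [Real(1e-3, 1.0, name=f"w{i}") for i in range(len(val_probs))]
    result = gp_minimize(
        lambda w: negative_ensemble_accuracy(w, val_probs, val_labels),
        dimensions=space,
        acq_func="EI",            # expected improvement acquisition (Eq. 15)
        n_calls=n_calls,
        random_state=seed,
    )
    best = np.asarray(result.x)
    return best / best.sum(), -result.fun               # optimal weights, best accuracy
```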

Ensemble of CNN models

Ensemble learning seeks to improve predictive accuracy by merging the outputs of multiple models trained on the same dataset. The fundamental concept is to intelligently combine base models to construct a more reliable composite model. This method is effective in reducing model variance and error, often outperforming individual models. When applied to deep CNN architectures, ensemble techniques blend the feature extraction strengths of each model, resulting in superior generalization capability50. Popular ensemble methods include bagging, stacking, voting, and averaging predictions, with the average ensemble being particularly common for classification problems. Instead of the conventional average ensemble, our strategy boosts model significance via a weighted ensemble technique, where the optimal weights are derived using Bayesian optimization. The weighted average ensemble strategy, as illustrated in Fig. 3, integrates the results of multiple models by assigning differential weights derived from Bayesian optimization. The following steps are involved in this process:

  1. Base CNN models based on different Optimizers are trained on a Potato Blight dataset.

  2. Every model produces its prediction score for the test data.

  3. Allocate weights to models derived from Bayesian optimization to enhance accuracy.

  4. Calculate the final prediction by combining the separate prediction models in a weighted manner.

Mathematically, the final prediction of the ensemble $\hat{y}$ is given by:

$$\hat{y} = \sum_{i=1}^{N} w_i\, p_i \tag{18}$$

where $N$ is the number of models, $w_i$ is the weight assigned to the $i$-th model (ensuring $\sum_{i=1}^{N} w_i = 1$), and $p_i$ is the prediction of the $i$-th model.

Fig. 3. Bayesian optimized weighted ensemble model.

This approach ensures that models with higher reliability contribute more to the final decision, improving overall accuracy.
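A minimal sketch of the weighted-average fusion in Eq. (18) is shown below. Fusing softmax class probabilities (rather than hard votes) and the helper name weighted_ensemble_predict are assumptions made for illustration.

```python
import numpy as np

def weighted_ensemble_predict(models, images, weights):
    """Weighted-average ensemble prediction in the spirit of Eq. (18).

    models  : list of trained Keras models (e.g., DL1-DL3 for EDL7)
    images  : preprocessed image batch, shape (n, 128, 128, 3)
    weights : Bayesian-optimized weights summing to 1
    """
    weights = np.asarray(weights, dtype=np.float32)
    probs = [m.predict(images, verbose=0) for m in models]   # per-model class probabilities
    fused = sum(w * p for w, p in zip(weights, probs))        # weighted average (Eq. 18)
    return np.argmax(fused, axis=1)                           # final class label per image

# Example with the EDL7 weights reported in the results section:
# labels = weighted_ensemble_predict([dl1, dl2, dl3], x_test, [0.3525, 0.3247, 0.3228])
```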

Performance measurement metrics

To assess the deep learning models, various performance metrics are employed. The confusion matrix serves as a fundamental tool for computing accuracy, precision, recall, and the F1 score, which are defined as:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

$$\text{Precision} = \frac{TP}{TP + FP}$$

$$\text{Recall} = \frac{TP}{TP + FN}$$

$$F1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$

where $TP$ stands for true positives, $TN$ for true negatives, $FP$ for false positives, and $FN$ for false negatives.
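These metrics can be computed directly from the predicted and true labels. The sketch below uses scikit-learn with macro averaging over the three classes; both the library choice and the averaging mode are assumptions, since the paper does not name its metrics implementation.

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

# y_true / y_pred: integer labels (0 = Early Blight, 1 = Healthy, 2 = Late Blight).
def classification_summary(y_true, y_pred):
    return {
        "confusion_matrix": confusion_matrix(y_true, y_pred),
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall":    recall_score(y_true, y_pred, average="macro"),
        "f1":        f1_score(y_true, y_pred, average="macro"),
    }
```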

System design: Bayesian optimized CNN ensemble for potato blight detection

Several convolutional and pooling layers were used in the CNN architecture to extract features, and fully connected layers were used for classification, as shown in Fig. 4. The architecture included input, rescaling, convolutional, max-pooling, flatten, dense, and dropout layers. Specifically, the CNN comprised three convolutional layers with 16, 32, and 64 filters, each followed by ReLU activation and a max-pooling layer to reduce spatial dimensions. A flatten layer was followed by one dense layer with 128 neurons and a softmax output layer for classification. The sparse categorical cross-entropy loss function (for integer labels) was used for training, with early stopping implemented to prevent overfitting. The model was trained using four different optimizers (ADAM, SGD, RMSProp, and ADAMAX) to compare performance, and each optimizer was paired with an identical CNN structure to ensure a fair comparison and capture learning diversity. Subsequently, a Bayesian optimization-based weighted ensemble strategy was employed, assigning dynamic weights to each model based on multi-metric performance (accuracy, F1 score, sensitivity, and precision). This led to the construction of the final blight detection model. Unlike existing ensemble approaches that apply fixed or accuracy-only weighting, our method adapts the weights to maximize generalization and robustness. Table 1 lists the hyperparameters used to train and test the potato blight detection model. In this paper, we propose a novel Bayesian optimized weighted average ensemble CNN model, which uniquely integrates model and optimizer diversity within a structured deep learning pipeline, enhancing the accuracy and robustness of blight detection in potato leaves. The classification process involves data collection, data balancing through augmentation, image enhancement using fuzzy techniques, splitting (80:20), training, and testing. Data were normalized between 0 and 1, with 80% used for training (including 10% for validation) and 20% for testing. To address class imbalance, targeted data augmentation was applied specifically to healthy leaf images, improving minority-class representation during training.

Fig. 4. Flowchart of potato blight detection using fuzzy enhancement.

Table 1.

Hyperparameters used to train CNN models.

Hyperparameter Value
Learning rate 0.0001
Batch size 64
Kernel/filter size 3 × 3
Dropout rate 0.5
Activation function ReLU
Number of epochs 20
Number of dense layers/Neurons 1 dense layer with 128 neurons
Color channels 3 (RGB)
Min delta 0.0001 with Patience 5
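For reference, a sketch of the CNN described above, configured with the hyperparameters in Table 1, is given below. Layer-ordering details not stated in the paper (for example, dropout placed after the dense layer and restore_best_weights for early stopping) are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # Early Blight, Late Blight, Healthy

def build_cnn() -> tf.keras.Model:
    """Rescaling, three Conv2D blocks with 16/32/64 filters (3x3 kernels, ReLU)
    each followed by max pooling, then flatten, a 128-neuron dense layer with
    dropout 0.5, and a softmax output."""
    return models.Sequential([
        layers.Input(shape=(128, 128, 3)),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(16, (3, 3), activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_cnn()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # DL1 variant
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(min_delta=1e-4, patience=5,
                                               restore_best_weights=True)
# model.fit(train_ds, validation_data=val_ds, epochs=20,
#           batch_size=64, callbacks=[early_stop])
```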

Bayesian optimized CNN ensemble classifier construction

In the proposed study, a Bayesian optimized weighted ensemble deep learning model was developed for the detection of potato leaf blight. Figure 5 and Algorithm 2 illustrate the architecture of the proposed optimized ensemble model. The ensemble classifier is built by combining multiple CNN models trained with different optimizers, which improves accuracy and robustness. The process includes the following key steps, explained below: selecting the base models, training them, constructing the ensemble, and evaluating performance.

  1. Base model training: Four CNN models with different optimizers are trained on the training dataset, employing sparse categorical cross-entropy loss and early stopping to mitigate overfitting.

  2. Ensemble construction: The final prediction of the ensemble was derived by calculating the Bayesian optimized weighted outputs of the CNN models.

  3. Ensemble evaluation: The ensemble's performance was assessed on the test dataset using key classification metrics such as accuracy, precision, recall, and F1 score.

Fig. 5. Proposed model for potato blight detection.

Algorithm 2. Weighted average ensemble with Bayesian optimization.

Experimental results and discussion

In the proposed study, extensive experiments were conducted to develop an optimized Bayesian weighted ensemble CNN model with fuzzy image enhancement for the detection of potato leaf blight. First, multiple CNN models were trained using different optimizers (ADAM (DL1), SGD (DL2), RMSProp (DL3), and ADAMAX (DL4)) on images enhanced with a fuzzy rule-based system to improve edge contrast and texture clarity. A Bayesian optimized weighted average method was then applied to determine the optimal ensemble weights based on accuracy. The effectiveness of the proposed framework was validated using confusion matrices and performance graphs, demonstrating superior classification accuracy. To avoid overfitting, early stopping with patience 5 and dropout regularization were applied. In the experimental setup, the initial learning rate and mini-batch size were set to 0.0001 and 64, respectively. The models were trained using sparse categorical cross-entropy loss, and training was stopped after at most 20 epochs based on early stopping. The final Bayesian optimized weighted ensemble was constructed from the selected CNN models. All experiments were carried out on the Kaggle platform using an NVIDIA Tesla P100 GPU with 16 GB VRAM, the Keras® 2.9.0 API, the TensorFlow® 2.9.2 backend, and Python® 3.8.10. Bayesian optimization was used solely for tuning the ensemble weights, while the other hyperparameters were chosen manually based on prior work and initial testing.

Results

Results of CNN models

The comparison of evaluation metrics across the imbalanced, balanced, CLAHE-enhanced, and fuzzy-enhanced datasets, shown in Table 2, highlights the progressive improvement in model performance with each enhancement stage. While balancing the dataset significantly improves precision, recall, and F1 score compared to the imbalanced dataset (mean F1 increased from 0.83 to 0.85), the CLAHE-enhanced dataset further elevates these metrics (mean F1: 0.94). This demonstrates that enhancement techniques other than fuzzy enhancement also contribute to improved model generalization. However, the fuzzy-enhanced dataset consistently achieves the highest precision, recall, and F1 scores across all models, with a mean F1 score of 0.95. Notably, models DL2 and DL4 exhibit substantial gains in F1 score, indicating that fuzzy-based image enhancement improves edge clarity and texture contrast, strengthening feature extraction and reducing misclassifications. The steady progression of metrics from the imbalanced to the fuzzy-enhanced dataset confirms the cumulative benefit of each preprocessing stage. Figure 6 illustrates the mean precision, recall, and F1 values across the four datasets, clearly validating the superior efficiency and robustness of the fuzzy-enhanced dataset. These findings reinforce fuzzy enhancement as a critical preprocessing step in achieving optimal model performance for plant disease classification tasks.

Table 2.

Evaluation metrics comparison on imbalanced, balanced, preprocessed, and fuzzy enhanced datasets.

Model Imbalanced dataset Balanced dataset CLAHE enhanced dataset Fuzzy enhanced dataset
P R F1 P R F1 P R F1 P R F1
DL1 0.81 0.90 0.84 0.96 0.96 0.96 0.95 0.93 0.94 0.95 0.95 0.95
DL2 0.92 0.67 0.69 0.74 0.55 0.52 0.91 0.91 0.91 0.96 0.96 0.96
DL3 0.87 0.89 0.88 0.95 0.95 0.95 0.94 0.95 0.95 0.95 0.94 0.94
DL4 0.92 0.89 0.91 0.95 0.95 0.95 0.95 0.95 0.95 0.96 0.96 0.96
Fig. 6. Comparison of mean precision, recall, and F1 score values.

For further experimentation, we used the fuzzy-enhanced dataset. Table 3 compares the performance of the CNN model with different optimizers. DL2 (SGD) and DL4 (Adamax) achieve the highest accuracy of 0.96. The controlled learning rate and momentum term of SGD allow it to generalize well, while Adamax provides stable updates even with sparse gradients owing to its use of the infinity norm. DL4 also demonstrates high recall, particularly for the blight classes, making it sensitive to disease detection. DL3 (RMSProp) performs slightly lower, likely due to its aggressive learning-rate adaptation. DL1 (Adam) is strong overall, though its recall for Class 2 is slightly reduced. In addition to accuracy metrics, training time per epoch was recorded to assess computational efficiency. Among the models, DL2 (SGD) achieved high performance with the lowest training time (11.31 s/epoch), making it both effective and resource-efficient. Conversely, DL1 (Adam) required the most time (21.76 s/epoch), highlighting a trade-off between optimization quality and speed. Figures 7 and 8 present the training convergence curves and confusion matrices, respectively, validating the observed trends in optimizer effectiveness.

Table 3.

Performance comparison of CNN models with different optimizers (0: Early Blight, 1: Healthy, 2: Late Blight).

Model Class Accuracy Precision Recall F1 score Training time (s)
DL1 0 0.95 0.99 0.97 0.98 21.76
1 0.88 1.00 0.94
2 0.97 0.88 0.92
DL2 0 0.96 0.96 0.97 0.96 11.31
1 0.96 0.99 0.98
2 0.96 0.93 0.94
DL3 0 0.94 0.97 0.97 0.97 12.47
1 0.99 0.88 0.93
2 0.88 0.97 0.92
DL4 0 0.96 0.97 0.99 0.98 12.82
1 0.93 1.00 0.96
2 0.99 0.90 0.94
Fig. 7. Accuracy-loss curves per epoch for (a) DL1, (b) DL2, (c) DL3, and (d) DL4.

Fig. 8. Confusion matrices on the fuzzy enhanced dataset (0: Early Blight, 1: Healthy, 2: Late Blight).

Results of proposed Bayesian optimized CNN ensemble

Table 4 and Fig. 10 present a performance comparison of the ensemble deep learning models (EDL1-EDL11), each combining different base models (DL1, DL2, DL3, DL4). Among all ensembles, EDL7 (DL1, DL2, DL3) achieves the highest accuracy of 97.94%, followed closely by EDL10 (DL2, DL3, DL4) at 97.89%. Most ensembles perform well, with accuracy ranging from 96.22% to 97.94%, and demonstrate high precision, recall, and F1 scores across all classes. These results highlight the effectiveness of ensemble learning in improving classification performance. Figure 9 shows the confusion matrix of the proposed EDL7 ensemble classifier.

Table 4.

Performance comparison of ensemble models (0: Early Blight, 1: Healthy, 2: Late Blight).

Ensemble models Description Class Accuracy Precision Recall F1 score
EDL1 Ensemble of DL1, DL2 0 96.74% 0.97 1.00 0.98
1 0.94 1.00 0.97
2 0.99 0.91 0.95
EDL2 Ensemble of DL1, DL3 0 97.77% 0.98 0.99 0.98
1 0.98 0.98 0.98
2 0.97 0.96 0.97
EDL3 Ensemble of DL1, DL4 0 96.22% 0.97 0.99 0.98
1 0.93 1.00 0.96
2 0.99 0.90 0.94
EDL4 Ensemble of DL2, DL3 0 97.25% 0.98 0.97 0.97
1 0.99 0.98 0.99
2 0.95 0.97 0.96
EDL5 Ensemble of DL2, DL4 0 96.74% 0.97 1.00 0.98
1 0.94 1.00 0.97
2 0.99 0.91 0.95
EDL6 Ensemble of DL3, DL4 0 97.77% 0.98 0.99 0.98
1 0.98 0.98 0.98
2 0.97 0.96 0.97
EDL7 Ensemble of DL1, DL2, DL3 0 97.94% 0.98 0.99 0.98
1 0.97 1.00 0.99
2 0.99 0.95 0.97
EDL8 Ensemble of DL1, DL2, DL4 0 96.22% 0.97 0.99 0.98
1 0.93 1.00 0.96
2 0.99 0.90 0.94
EDL9 Ensemble of DL1, DL3, DL4 0 97.08% 0.97 0.99 0.98
1 0.96 1.00 0.98
2 0.98 0.93 0.96
EDL10 Ensemble of DL2, DL3, DL4 0 97.89% 0.99 0.99 0.98
1 0.98 0.99 0.98
2 0.99 0.98 0.98
EDL11 Ensemble of DL1, DL2, DL3, DL4 0 97.25% 0.97 1.00 0.98
1 0.95 1.00 0.98
2 0.99 0.93 0.96

Significance values are in bold.

Fig. 10. Evaluation metric comparison of ensemble CNN models (0: Early Blight, 1: Healthy, 2: Late Blight).

Fig. 9. Confusion matrix of the proposed EDL7 ensemble.

During training, different deep learning models (DL1, DL2, DL3, DL4) capture distinct features of the dataset. Combining predictions from multiple models in an ensemble improves overall classification performance compared to individual models, as the ensemble leverages complementary strengths and improves generalization. Figure 11 presents a comparison of the test results for the individual deep learning models (used in EDL7) and the proposed Ensemble Model. The following key observations can be made:

  1. The proposed ensemble model EDL7 (DL1, DL2, DL3) outperforms the individual models, achieving the highest accuracy of 97.94%. The weights obtained from Bayesian optimization are 0.3525 for DL1, 0.3247 for DL2, and 0.3228 for DL3. This demonstrates the superior predictive capability of ensemble learning.

  2. The ensemble model EDL7 achieves a precision of 98.1%, which is significantly higher than the best performing individual deep learning model. Higher precision indicates a lower false-positive rate, making the model more reliable.

  3. The recall of the proposed ensemble model is 98.3%, outperforming individual models. This high recall suggests that the ensemble effectively captures positive instances, reducing false negatives.

  4. The proposed ensemble model has an F1 score of 98.2%, indicating a well balanced trade-off between precision and recall, leading to robust classification performance.

In summary, the ensemble learning approach significantly improves classification performance, achieving higher accuracy, precision, recall, and F1 score compared to individual deep learning models. The combination of multiple classifiers allows the ensemble to generalize better to unseen data, leading to more reliable and accurate predictions.

Fig. 11. Comparison between EDL7 and the individual CNN models in EDL7.

The proposed ensemble model (EDL7) is shown to surpass both individual CNN classifiers and conventional methods in a comparative analysis that covers experimental results and literature (Table 5). It achieves the highest accuracy of 97.94% and an F1-score of 0.982, consistently delivering better outcomes across multiple metrics, demonstrating its robustness, generalizability through Bayesian optimization, and suitability for real-time deployment.

Table 5.

Comparison with existing approaches.

Study Model Accuracy (%)
Singh et al.51 GLCM + SVM 95.99
Islam et al.52 SVM 95.00
Chakraborty et al.53 VGG16 92.69
VGG19 80.39
ResNet50 73.75
MobileNet 78.84
Proposed model EDL7 97.94

TOPSIS analysis

The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a popular multicriteria decision-making (MCDM) method used to rank alternatives by measuring their closeness to an ideal solution. In this research, TOPSIS is used to assess the ensemble CNN models using four primary evaluation metrics: classification accuracy, precision, sensitivity, and F1 score.

Step 1: Formulating the decision matrix

The decision matrix $X$ represents the CNN ensemble models as alternatives and their respective performance metrics as evaluation criteria:

$$X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{bmatrix}$$

where $x_{ij}$ denotes the performance score of model $i$ according to criterion $j$.

Step 2: Normalization of the decision matrix

To standardize the evaluation criteria, the matrix is normalized using vector normalization:

$$r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^{2}}} \tag{19}$$

This transformation ensures that all the criteria are on a uniform scale.

Step 3: Defining the optimal and least desirable values

The most desirable ($A^{+}$) and least desirable ($A^{-}$) values for each criterion are identified as:

$$A^{+} = \Big\{\max_{i} r_{ij} \mid j \in J^{+}\Big\} \cup \Big\{\min_{i} r_{ij} \mid j \in J^{-}\Big\} \tag{20}$$

$$A^{-} = \Big\{\min_{i} r_{ij} \mid j \in J^{+}\Big\} \cup \Big\{\max_{i} r_{ij} \mid j \in J^{-}\Big\} \tag{21}$$

where $J^{+}$ represents the benefit-oriented criteria (higher values are preferable) and $J^{-}$ represents the cost-oriented criteria (lower values are preferable).

Step 4: Calculating separation distances

The Euclidean distance ($D$) between each model and the optimal and least favorable solutions is calculated as:

$$D_{i}^{+} = \sqrt{\sum_{j=1}^{n} \big(r_{ij} - r_{j}^{+}\big)^{2}} \tag{22}$$

$$D_{i}^{-} = \sqrt{\sum_{j=1}^{n} \big(r_{ij} - r_{j}^{-}\big)^{2}} \tag{23}$$

where $D_{i}^{+}$ measures how far model $i$ is from the ideal solution, while $D_{i}^{-}$ indicates its proximity to the least favorable solution.

Step 5: Determining the relative closeness score

The relative closeness index ($C$) for each model is calculated as:

$$C_{i} = \frac{D_{i}^{-}}{D_{i}^{+} + D_{i}^{-}} \tag{24}$$

where $0 \le C_{i} \le 1$. A higher value of $C_{i}$ indicates superior model performance.

Step 6: Ranking CNN models

Finally, the CNN ensemble models are ranked in descending order of their $C_{i}$ values. The model with the highest $C_{i}$ score is considered the most effective according to the selected evaluation criteria.
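A compact NumPy sketch of Steps 1 through 6 follows. The function name topsis_rank and the commented example scores are illustrative only; all four metrics used here are treated as benefit criteria, matching the setup described above.

```python
import numpy as np

def topsis_rank(decision_matrix, benefit=None):
    """Minimal TOPSIS sketch following Steps 1-6 above.

    decision_matrix: (m models) x (n criteria) array of metric values.
    benefit: boolean mask per criterion; here all four metrics
             (accuracy, precision, sensitivity, F1) are benefit criteria.
    """
    X = np.asarray(decision_matrix, dtype=float)
    if benefit is None:
        benefit = np.ones(X.shape[1], dtype=bool)

    R = X / np.sqrt((X ** 2).sum(axis=0))                     # Step 2: vector normalization
    ideal = np.where(benefit, R.max(axis=0), R.min(axis=0))   # Step 3: A+
    worst = np.where(benefit, R.min(axis=0), R.max(axis=0))   #         A-
    d_plus = np.sqrt(((R - ideal) ** 2).sum(axis=1))          # Step 4: distance to ideal
    d_minus = np.sqrt(((R - worst) ** 2).sum(axis=1))         #          and to worst
    closeness = d_minus / (d_plus + d_minus)                  # Step 5: relative closeness C
    ranking = np.argsort(-closeness)                          # Step 6: descending order of C
    return closeness, ranking

# Hypothetical example with three ensembles scored on
# [accuracy, precision, sensitivity, F1]:
# scores, order = topsis_rank([[0.9794, 0.981, 0.983, 0.982],
#                              [0.9789, 0.985, 0.982, 0.983],
#                              [0.9622, 0.963, 0.963, 0.960]])
```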

Table 6 presents the TOPSIS evaluation results for different models (EDL1 to EDL11). From the results, the model EDL10 achieves the highest score (0.988260), making it the most preferred model. In contrast, models EDL3 and EDL8 have the lowest scores (0.000000), indicating poor performance. The results suggest that EDL10 is closest to the ideal solution, while EDL3 and EDL8 perform worst according to the TOPSIS methodology. Figure 12 shows the ranking plot of the EDL models.

Table 6.

TOPSIS evaluation results for different models.

Model D+ D− Relative closeness
EDL1 0.009804 0.003513 0.263813
EDL2 0.004532 0.009191 0.669780
EDL3 0.013122 0.000000 0.000000
EDL4 0.006588 0.006816 0.508519
EDL5 0.009804 0.003513 0.263813
EDL6 0.004532 0.009191 0.669780
EDL7 0.002936 0.010981 0.789007
EDL8 0.013122 0.000000 0.000000
EDL9 0.007449 0.006170 0.453025
EDL10 0.000155 0.013060 0.988260
EDL11 0.006732 0.006985 0.509205
Fig. 12. TOPSIS analysis of ensemble CNN models.

Conclusion and future work

In this study, we proposed the Bayesian-Optimized CNN Weighted Ensemble for Potato Blight Detection, a deep learning-based approach that integrates multiple CNN models trained with different optimizers (ADAM, SGD, RMSProp, and ADAMAX) and employs Bayesian optimization for weighted ensemble learning. The proposed EDL7 ensemble model achieved the highest classification accuracy (97.94%), along with strong precision, recall, and F1 score, making it the best-performing model based on traditional machine learning evaluation metrics. However, TOPSIS analysis ranked EDL10 as the most preferred model (0.988260), indicating that it offered a better trade-off across multiple evaluation criteria. These findings underscore the importance of multicriteria decision-making in the evaluation of deep learning models, emphasizing that the highest accuracy alone does not always identify the optimal model. Our proposed methodology provides a robust and efficient solution for automated potato blight classification, which is beneficial for precision agriculture and early disease management. However, a limitation of the current study is that it has not yet been validated in real-world deployment scenarios such as mobile or edge environments, nor tested under field conditions involving environmental variability (e.g., lighting, background noise, or occlusion). Future research can explore the integration of attention mechanisms and transformer-based architectures to further enhance detection performance in a Linux environment54. In addition, deploying the model on mobile or edge devices, performing real-time inference tests, and validating performance on field-acquired data will be prioritized to assess the practical applicability and robustness of the system in real agricultural environments. We also acknowledge that test-time augmentation, pruning, and uncertainty estimation were not explored in this study and will be considered in future work to further improve model generalization and robustness.

Acknowledgements

This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under grant No. IFPIP-639-611-1443; the authors therefore acknowledge with thanks the technical and financial support of DSR. This work is also supported by the Division of Graduate Studies, Research, and Business at Dar Al-Hekma University, Jeddah, Saudi Arabia.

Author contributions

Final manuscript revision, funding, supervision: B.B.G., C.H.H., V.A., V.J.; study conception and design, analysis and interpretation of results, methodology development: A.J., A.K.D., V.S.H.P., A.P.; draft manuscript preparation, figures and tables: S.M., T.A.A., W.A., S.K.S., S.K.

Data availability

The datasets generated and analyzed during the current study are available in the Kaggle repository https://www.kaggle.com/datasets/muhammadardiputra/potato-leaf-disease-dataset

Declarations

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Vincent Shin-Hung Pan, Email: vincentpan@cyut.edu.tw.

Brij B. Gupta, Email: gupta.brij@gmail.com

References

  • 1. Dolničar, P. Importance of potato as a crop and practical approaches to potato breeding. In Solanum Tuberosum: Methods and Protocols 3–20 (Springer, 2021).
  • 2. Singh, R., Kaur, S. & Aggarwal, P. Exploration of potato starches from non-commercial cultivars in ready to cook instant non cereal, non glutinous pudding mix. LWT 150, 111966 (2021).
  • 3. Liu, R. W., Guo, Y., Lu, Y., Chui, K. T. & Gupta, B. B. Deep network-enabled haze visibility enhancement for visual IoT-driven intelligent transportation systems. IEEE Trans. Ind. Inf. 19, 1581–1591 (2022).
  • 4. Gold, K. M. et al. Hyperspectral measurements enable pre-symptomatic detection and differentiation of contrasting physiological effects of late blight and early blight in potato. Remote Sensing 12, 286 (2020).
  • 5. Mandle, A. K., Gupta, G. P., Sahu, S. P., Bansal, S. & Alhalabi, W. A. Semantic-aware hybrid deep learning model for brain tumor detection and classification using adaptive feature extraction and mask-rcnn. Int. J. Semant. Web Inf. Syst. (IJSWIS) 21, 1–23 (2025).
  • 6. Kang, F., Li, J., Wang, C. & Wang, F. A lightweight neural network-based method for identifying early-blight and late-blight leaves of potato. Appl. Sci. 13, 1487 (2023).
  • 7. Chu, H. et al. Artificial intelligence in tongue image recognition. Int. J. Softw. Sci. Comput. Intell. (IJSSCI) 15, 1–25 (2023).
  • 8. Alston, J. M. & Pardey, P. G. Agriculture in the global economy. J. Econ. Perspect. 28, 121–146 (2014).
  • 9. Government of India. Contribution of agriculture sector towards gdp: Agriculture has been the bright spot in the economy despite covid-19 (2024).
  • 10. Kassanuk, T. & Phasinam, K. A hybrid binary bird swarm optimization (bso) and dragonfly algorithm (da) for vm allocation and load balancing in cloud. Int. J. Cloud Appl. Comput. (IJCAC) 13, 1–21 (2023).
  • 11. Li, L., Zhang, S. & Wang, B. Plant disease detection and classification by deep learning—A review. IEEE Access 9, 56683–56698 (2021).
  • 12. Forbes, G. et al. Stability of resistance to phytophthora infestans in potato: An international evaluation. Plant Pathol. 54, 364–372 (2005).
  • 13. Forbes, G., Pérez, W. & Andrade-Piedra, J. Evaluación de la resistencia en genotipos de papa a Phytophthora infestans bajo condiciones de campo: Guía para colaboradores internacionales (International Potato Center, 2014).
  • 14. Guan, X. et al. Semantic web-based AI system for neuroimmune-gastrointestinal medical image processing. Int. J. Semant. Web Inf. Syst. (IJSWIS) 21, 1–36 (2025).
  • 15. Dawod, R. G. & Dobre, C. Upper and lower leaf side detection with machine learning methods. Sensors 22, 2696 (2022).
  • 16. Khan, M. A. et al. An automated system for cucumber leaf diseased spot detection and classification using improved saliency method and deep features selection. Multimedia Tools Appl. 79, 18627–18656 (2020).
  • 17. Poornappriya, T. & Gopinath, R. Rice plant disease identification using artificial intelligence approaches. Int. J. Electr. Eng. Technol. 11, 392–402 (2022).
  • 18. Zhang, K., Wu, Q., Liu, A. & Meng, X. Can deep learning identify tomato leaf disease?. Adv. Multimedia 2018, 6710865 (2018).
  • 19. Kumari, P. & Kaur, P. An adaptable approach to fault tolerance in cloud computing. Int. J. Cloud Appl. Comput. (IJCAC) 13, 1–24 (2023).
  • 20. Amara, J., Bouaziz, B. & Algergawy, A. A deep learning-based approach for banana leaf diseases classification. In Datenbanksysteme für Business, Technologie und Web (BTW 2017)-Workshopband, 79–88 (Gesellschaft für Informatik eV, 2017).
  • 21. Türkoğlu, M. & Hanbay, D. Plant disease and pest detection using deep learning-based features. Turk. J. Electr. Eng. Comput. Sci. 27, 1636–1651 (2019).
  • 22. Sharma, A., Jain, A., Gupta, P. & Chowdary, V. Machine learning applications for precision agriculture: A comprehensive review. IEEE Access 9, 4843–4873 (2020).
  • 23. Viana, C. M., Santos, M., Freire, D., Abrantes, P. & Rocha, J. Evaluation of the factors explaining the use of agricultural land: A machine learning and model-agnostic approach. Ecol. Ind. 131, 108200 (2021).
  • 24. Hamrani, A., Akbarzadeh, A. & Madramootoo, C. A. Machine learning for predicting greenhouse gas emissions from agricultural soils. Sci. Total Environ. 741, 140338 (2020).
  • 25. Shin, J.-Y., Kim, K. R. & Ha, J.-C. Seasonal forecasting of daily mean air temperatures using a coupled global climate model and machine learning algorithm for field-scale agricultural management. Agric. For. Meteorol. 281, 107858 (2020).
  • 26. Yu, C., Li, J., Li, X., Ren, X. & Gupta, B. B. Four-image encryption scheme based on quaternion fresnel transform, chaos and computer generated hologram. Multimedia Tools Appl. 77, 4585–4608 (2018).
  • 27. Cravero, A., Pardo, S., Sepúlveda, S. & Muñoz, L. Challenges to use machine learning in agricultural big data: A systematic literature review. Agronomy 12, 748 (2022).
  • 28. Zaidi, S. M. H. et al. Intelligent process automation using artificial intelligence to create human assistant. Int. J. Softw. Sci. Comput. Intell. (IJSSCI) 17, 1–19 (2025).
  • 29. Asif, M. K. R., Rahman, M. A. & Hena, M. H. Cnn based disease detection approach on potato leaves. In 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS) 428–432 (IEEE, 2020).
  • 30. Rozaqi, A. J. & Sunyoto, A. Identification of disease in potato leaves using convolutional neural network (cnn) algorithm. In 2020 3rd International Conference on Information and Communications Technology (ICOIACT) 72–76 (IEEE, 2020).
  • 31. Bangari, S., Rachana, P., Gupta, N., Sudi, P. S. & Baniya, K. K. A survey on disease detection of a potato leaf using cnn. In 2022 Second International Conference on Artificial Intelligence and Smart Energy (ICAIS) 144–149 (IEEE, 2022).
  • 32. Liao, M. et al. A lightweight network for abdominal multi-organ segmentation based on multi-scale context fusion and dual self-attention. Inf. Fusion 108, 102401 (2024).
  • 33. Lee, T.-Y., Yu, J.-Y., Chang, Y.-C. & Yang, J.-M. Health detection for potato leaf with convolutional neural network. In 2020 Indo–Taiwan 2nd International Conference on Computing, Analytics and Networks (Indo-Taiwan ICAN) 289–293 (IEEE, 2020).
  • 34. Agarwal, M., Sinha, A., Gupta, S. K., Mishra, D. & Mishra, R. Potato crop disease classification using convolutional neural network. In Smart Systems and IoT: Innovations in Computing: Proceeding of SSIC 2019 391–400 (Springer, 2019).
  • 35. Rashid, J. et al. Multi-level deep learning model for potato leaf disease recognition. Electronics 10, 2064 (2021).
  • 36. Wang, H., Li, Z., Li, Y., Gupta, B. B. & Choi, C. Visual saliency guided complex image retrieval. Pattern Recogn. Lett. 130, 64–72 (2020).
  • 37. Khobragade, P., Shriwas, A., Shinde, S., Mane, A. & Padole, A. Potato leaf disease detection using cnn. In 2022 International Conference on Smart Generation Computing, Communication and Networking (SMART GENCON) 1–5 (IEEE, 2022).
  • 38. Mahum, R. et al. A novel framework for potato leaf disease detection using an efficient deep learning model. Hum. Ecol. Risk Assess. Int. J. 29, 303–326 (2023).
  • 39. Joseph, S. G. et al. Cnn-based early blight and late blight disease detection on potato leaves. In 2022 2nd International Conference on Technological Advancements in Computational Sciences (ICTACS) 923–928 (IEEE, 2022).
  • 40. Sharma, R. et al. Plant disease diagnosis and image classification using deep learning. Comput. Mater. Continua 71 (2022).
  • 41. Asfaw, T. A. Deep learning hyperparameter's impact on potato disease detection. Sci. Temper 14, 582–590 (2023).
  • 42. Bhat, S. A. & Huang, N.-F. Big data and AI revolution in precision agriculture: Survey and challenges. IEEE Access 9, 110209–110222 (2021).
  • 43. Khan, N. et al. Current progress and future prospects of agriculture technology: Gateway to sustainable agriculture. Sustainability 13, 4883 (2021).
  • 44. Tan, C. S. A smart helmet framework based on visual-inertial slam and multi-sensor fusion to improve situational awareness and reduce hazards in mountaineering. Int. J. Softw. Sci. Comput. Intell. (IJSSCI) 15, 1–19 (2023).
  • 45. Shaikh, F. K., Memon, M. A., Mahoto, N. A., Zeadally, S. & Nebhen, J. Artificial intelligence best practices in smart agriculture. IEEE Micro 42, 17–24 (2021).
  • 46. Putra, M. A. Potato leaf disease dataset (2020).
  • 47. Selvam, C., Jebadass, R. J. J., Sundaram, D. & Shanmugam, L. A novel intuitionistic fuzzy generator for low-contrast color image enhancement technique. Inf. Fusion 108, 102365 (2024).
  • 48. Biswas, S., Saha, I. & Deb, A. Plant disease identification using a novel time-effective cnn architecture. Multimedia Tools Appl. 83, 82199–82221 (2024).
  • 49. Daniel, C. et al. Bayesian optimization-enhanced ensemble learning for the uniaxial compressive strength prediction of natural rock and its application. Geohazard Mech. 2, 197–215 (2024).
  • 50. Ali, A. H., Youssef, A., Abdelal, M. & Raja, M. A. An ensemble of deep learning architectures for accurate plant disease classification. Eco. Inform. 81, 102618 (2024).
  • 51. Singh, A. & Kaur, H. Potato plant leaves disease detection and classification using machine learning methodologies. In IOP Conference Series: Materials Science and Engineering vol. 1022, p. 012121 (IOP Publishing, 2021).
  • 52. Islam, M., Dinh, A., Wahid, K. & Bhowmik, P. Detection of potato diseases using image segmentation and multiclass support vector machine. In 2017 IEEE 30th Canadian Conference on Electrical and Computer Engineering (CCECE) 1–4 (IEEE, 2017).
  • 53. Chakraborty, K. K., Mukherjee, R., Chakroborty, C. & Bora, K. Automated recognition of optical image based potato leaf blight diseases using deep learning. Physiol. Mol. Plant Pathol. 117, 101781 (2022).
  • 54. Singh, S. K. Linux Yourself: Concept and Programming (Chapman and Hall/CRC, 2021).
