PLoS One. 2024 Jul 25;19(7):e0307747. doi: 10.1371/journal.pone.0307747

Field pea leaf disease classification using a deep learning approach

Dagne Walle Girmaw 1,*, Tsehay Wasihun Muluneh 2
Editor: Valentine Otang Ntui 3
PMCID: PMC11271925  PMID: 39052602

Abstract

Field peas are grown by smallholder farmers in Ethiopia for food, fodder, income, and soil fertility. However, leaf diseases such as ascochyta blight, powdery mildew, and leaf spot reduce the quantity and quality of this crop and impair its growth. Experts currently rely on visual observation to detect field pea disease; however, this approach is expensive, labor-intensive, and imprecise. Therefore, in this study, we present a transfer learning approach for the automatic diagnosis of field pea leaf diseases, classifying three diseases: ascochyta blight, leaf spot, and powdery mildew. A softmax classifier was used to classify the diseases. A total of 1600 images of both healthy and diseased leaves were used to train, validate, and test the pretrained models. According to the experimental results, DenseNet121 achieved 99.73% training accuracy, 99.16% validation accuracy, and 98.33% testing accuracy after 100 epochs. We expect this work to benefit farmers and farm experts by providing the fast, automated, inexpensive, and accurate detection and classification of field pea leaf diseases that visual inspection cannot.

1. Introduction

Field pea (Pisum sativum L.) is one of the most significant crops grown by smallholder farmers in Ethiopia [1, 2]. It is the second most produced legume in Ethiopia by volume after faba beans [1], and it is crucial to the farming community in the highlands of Bale, southeastern Ethiopia, where it provides revenue and acts as a rotational crop. However, the crop is affected by several diseases, such as ascochyta blight (Ascochyta pisi), powdery mildew (Erysiphe polygoni), downy mildew, and leaf spot, which cause substantial yield losses in production. Ascochyta blight and powdery mildew are the two most prevalent field pea diseases at mid-altitudes, and they can reduce yields by 20–30% [1, 3]. Worldwide, ascochyta blights (Ascochyta spp.) significantly reduce field pea yields and degrade seed quality; in Dembi, Ethiopia, where the disease is highly prevalent, ascochyta blight often results in total loss of crop production. The Kulumsa Agricultural Research Centre provides ideal conditions for this disease because the days are frequently dry and hot [1, 4]. Field peas are grown around the world, and powdery mildew can cause yield losses of up to 86%; in Ethiopia, this disease is a main cause of yield losses of 21.09% [2, 4].

Therefore, proper detection of field pea leaf diseases is crucial to improving the quality and quantity of crop production [5]. Currently, experts rely on visual observation to detect field pea disease. Nevertheless, this method has drawbacks: farmers might lack the resources, or even the awareness, to consult experts, and expert consultation is costly and time-consuming. On large farms, visual observation and detection of leaf diseases are expensive, inaccurate, and difficult [6, 7]. Therefore, we propose a deep learning approach to classify diseases in field pea leaves, which overcomes these issues with traditional image processing techniques. A deep learning method uses multiple layers to process the data and extract information from the image [5, 8].

The structure of the paper is as follows: related works are explored in Section 2, methods and materials are presented in Section 3, the results and discussion are presented in Section 4, and a conclusion is stated in Section 5.

2. Related works

Several studies have been performed on plant disease detection. The authors used different approaches, such as image processing, machine learning, and deep learning.

[9] presented convolutional neural networks (CNNs) to identify diseases in rice and potato plants. The authors identified rice diseases such as brown spot, blast, bacterial blight, and tungro, while images of potato leaves were divided into three categories: healthy, early blight, and late blight. The study used 1500 images of potato leaves and 5932 images of rice leaves. The suggested CNN model achieved 99.58% accuracy in classifying rice images and 97.66% accuracy in classifying potato leaves.

In [10], deep convolutional networks based on deep transfer learning were employed for plant disease detection. The suggested approach is light enough to significantly lower processing costs and shows a considerable boost in efficiency with reduced complexity. The PlantVillage dataset (https://www.kaggle.com/datasets/mohitsingh1804/plantvillage) was used for the experiment, with 18453 diseased leaf images divided into 3 categories by species and 15 classification types. The suggested method achieved a 99.28% accuracy rate on the dataset.

[11] suggested a model that uses Otsu’s global thresholding technique for image binarization to eliminate background noise from the images. The method is based on a fully connected CNN that identifies three rice diseases. The model was trained using 4,000 image samples of each damaged leaf class and 4,000 image samples of healthy rice leaves, and it achieved a 99.7% accuracy rate on the dataset.

In [12], a hybrid approach combining image processing with the Faster R-CNN, SSD, VGG16, and YOLOv4 deep learning models was employed to automatically determine the severity of leaf spot disease on sugar beet and classify it. A total of 1040 images were used for training and testing the hybrid method and determining disease severity, and a classification accuracy rate of 96.47% was achieved. The suggested hybrid method produces better results than analyzing the data using any of these models alone.

In [13], a convolutional neural network (CNN) approach was proposed to classify potato diseases (healthy, black scurf, common scab, black leg, and pink rot). A database with 5,000 images of potatoes (https://www.kaggle.com/datasets/mohitsingh1804/plantvillage) was employed. The proposed model was compared with R-CNN, VGG, AlexNet, and GoogLeNet through transfer learning. The proposed deep learning method achieved accuracy rates of 100% and 99%, higher than those of previous works.

[14] employed a transfer learning approach using InceptionV3, ResNet50, VGG16, MobileNet, Xception, and DenseNet121 to identify plant leaf disease using the PlantVillage dataset. A total of 11,370 images of healthy and unhealthy tomato and potato leaves were included in the dataset. The method achieved an accuracy of 98.83% using the MobileNet architecture.

In [15], ConvNets were employed to classify and identify plant diseases. The data were collected from the PlantVillage dataset, which includes plant classes such as potato, pepper, and tomato. The model achieved accuracy rates of 98.3%, 98.5%, and 95% in detecting tomato, pepper, and potato leaf diseases, respectively.

In [16], the proposed ResNet-9 model was used to identify blight disease in images of potato and tomato leaves. A total of 3,990 initial training samples from the PlantVillage dataset were used, and the model was evaluated on the 133 images of the test set. It achieved a 99.25% test accuracy, an overall precision of 99.67%, an overall recall of 99.33%, and an overall F1-score of 99.33%.

[17] presented an autonomous method for detecting plant leaf damage and identifying disease. The suggested method used DenseNet to classify the disease from an image of a plant leaf and yielded 100% classification accuracy. Deep learning-based semantic segmentation was used to identify leaf damage, obtaining a 97% accuracy rate. The experimental analysis covered apple, grape, potato, and strawberry plants.

In [18] a deep learning method was proposed to identify different plant diseases. The proposed model implementation processes include acquiring datasets, training, testing, and classification to categorize leaves as healthy or diseased. The work identified potato leaf disease using the KNN and CNN methods. The developed method achieved an accuracy of ~ 90% using CNN-based classification.

In [19], a deep learning-based approach to crop disease detection was proposed. The detection and classification of diseases is performed using a convolutional neural network-based method with two convolutional and two pooling layers. According to the experimental findings, the suggested model achieved 98% training accuracy and 88.17% testing accuracy compared with pretrained models (InceptionV3, ResNet152, and VGG19).

In [20], a lightweight convolutional neural network called VGG-ICNN was proposed for plant leaf disease identification. VGG-ICNN has a significantly smaller number of parameters than most other high-performing deep learning models. The PlantVillage and Embrapa datasets contain 38 and 93 categories, respectively, and the proposed work achieved 99.16% accuracy on the PlantVillage dataset.

In [21], a computer vision-based system for identifying rice plant diseases was suggested, combining deep learning, machine learning, and image processing techniques. The approach detects rice diseases such as sheath rot, brown leaf spot, rice blast, bacterial leaf blight, and false smut. After image preprocessing, the diseased region of the rice plant is recognized using image segmentation. Convolutional neural networks and a support vector machine classifier are employed to identify and categorize distinct types of rice plant diseases, and the proposed deep learning-based approach achieved its greatest validation accuracy of 91.45% using ReLU and softmax activation functions.

[22] proposed a transfer learning approach for the identification of maize leaf diseases. A dataset of 18,888 images of both healthy and diseased leaves was classified using pretrained VGG16, ResNet50, InceptionV3, and Xception models. The findings show that all trained models can classify maize leaf diseases with an accuracy of greater than 93%. Specifically, Xception, InceptionV3, and VGG16 all attained accuracies greater than 99%. Finally, we summarized all the related work as presented in Table 1.

Table 1. Summary of the related works.

| Authors | Title | Method | Accuracy | Observed weakness |
|---|---|---|---|---|
| [9] | Plant disease diagnosis and image classification using deep learning | Convolutional neural networks (CNNs) | 99.58% and 97.66% | The CNN model requires extensive training to achieve acceptable results |
| [10] | SK-MobileNet: A Lightweight Adaptive Network Based on Complex Deep Transfer Learning for Plant Disease Recognition | Deep transfer learning | 99.28% | The complex model requires a large GPU and longer training times |
| [11] | A novel approach for rice plant disease classification with deep convolutional neural network | Deep learning with Otsu’s global thresholding | 99.7% | Otsu’s global thresholding requires a significant amount of computational work |
| [12] | A sugar beet leaf disease classification method based on image processing and deep learning | Hybrid model | 96.47% | Hybrid models have an overfitting problem |
| [13] | Potato disease detection and classification using deep learning methods | Convolutional neural network (CNN) | 100% and 99% | The CNN model requires extensive training to achieve acceptable results |
| [14] | Performance Analysis of Deep Learning Algorithms Toward Disease Detection: Tomato and Potato Plant as Use-Cases | Transfer learning | 98.83% | The InceptionV3 model is not appropriate for diseases exhibiting numerous lesions |
| [15] | Plant Disease Prediction and Classification using Deep Learning ConvNets | Deep learning (ConvNets) | 98.3%, 98.5%, and 95% | Such extensive architectures may result in poor convergence and overfitting |
| [16] | Automatic blight disease detection in potato and tomato plants using deep learning | Transfer learning (ResNet-9) | 99.25% | The model requires a large GPU and longer training times |
| [17] | Plant leaf disease classification and damage detection system using deep learning models | Transfer learning (DenseNet) | 97% | The model can cause significant features to be skipped or lost |
| [18] | Deep Learning-Based Approach to Identify the Potato Leaf Disease and Help in Mitigation Using IOT | Convolutional neural networks and KNN | 90% | The CNN model requires extensive training to achieve acceptable results |
| [19] | Detection and Classification of Tomato Crop Disease Using Convolutional Neural Network | Convolutional neural network | 88.17% | The CNN model requires extensive training to achieve acceptable results |
| [20] | VGG-ICNN: A Lightweight CNN model for crop disease identification | Lightweight CNN model | 99.16% | The model requires a large GPU and longer training times |
| [21] | Deep learning system for paddy plant disease detection and classification | Deep learning and SVM | 91.45% | The support vector machine approach is inappropriate for noisy image data |
| [22] | On fine-tuning deep learning models using transfer learning and hyperparameters optimization for disease identification in maize leaves | Transfer learning | 99% | When the FTNN technique is fed new weights, the old weights are forgotten, which could affect the result |

3. Methods and materials

The methods and materials used in the study are described below:

Image pre-processing: Image preprocessing is necessary to resize and rescale the input images; before being fed into the model, each image should be appropriately sized. We resized and rescaled the images using the Keras ImageDataGenerator. We used data augmentation techniques such as rotation, horizontal flip, zoom, and shear to diversify the training images [23] and enhance the model’s performance [24]. Images were resized to a common size of 224x224 pixels and normalized to the range [0, 1] to scale pixel values [25, 26], as sketched below.
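A minimal sketch of this preprocessing step using the Keras ImageDataGenerator; the directory layout and the augmentation parameter values are illustrative assumptions, not the exact values used in the study.

```python
# Preprocessing sketch: rescale to [0, 1], augment, and resize to 224x224.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,     # normalize pixel values to [0, 1]
    rotation_range=20,     # random rotation in degrees (assumed value)
    horizontal_flip=True,  # random horizontal flip
    zoom_range=0.2,        # random zoom (assumed value)
    shear_range=0.2,       # random shear (assumed value)
)

train_generator = train_datagen.flow_from_directory(
    "dataset/train",          # assumed layout: one subfolder per class
    target_size=(224, 224),   # resize all images to 224x224
    batch_size=32,
    class_mode="categorical", # one-hot labels for the softmax classifier
)
```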

Model Training: We found the optimal hyperparameters, such as the optimizer, learning rate, and batch size, through experimentation, and we used dropout and batch normalization to prevent overfitting. To train the model, we specified an optimizer, a loss function, and a number of epochs, and used early stopping to halt training when performance on the validation set stops improving (see the sketch below).
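A minimal training sketch; the patience value for early stopping is an illustrative assumption, and `model`, `train_generator`, and `val_generator` are assumed to be built as in the neighboring sketches (the model itself is constructed in the transfer-learning sketch of Section 3.4).

```python
# Training sketch: fit with a validation split and early stopping.
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_loss",        # watch validation loss
                           patience=5,                # assumed patience value
                           restore_best_weights=True) # keep the best epoch

history = model.fit(train_generator,
                    validation_data=val_generator,
                    epochs=100,              # epoch budget used in the experiments
                    callbacks=[early_stop])
```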

Evaluation: We used evaluation metrics such as accuracy, precision, recall, and F1 score to assess model performance.

Deployment: We saved the trained models and deployed them as a web application using the Flask framework. This allows the user to access a live field pea leaf disease classification service; a minimal serving sketch follows.
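A minimal sketch of such a Flask service; the route, file names, and class order are illustrative assumptions rather than the deployed application’s actual code.

```python
# Flask serving sketch: load the saved model and classify uploaded images.
import numpy as np
from flask import Flask, request, jsonify
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

app = Flask(__name__)
model = load_model("field_pea_densenet121.h5")  # assumed file name
CLASSES = ["ascochyta_blight", "healthy", "leaf_spot", "powdery_mildew"]  # assumed order

@app.route("/predict", methods=["POST"])
def predict():
    img_file = request.files["image"]           # uploaded leaf image
    img_file.save("upload.jpg")
    img = image.load_img("upload.jpg", target_size=(224, 224))
    x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)  # match training preprocessing
    probs = model.predict(x)[0]
    return jsonify({"class": CLASSES[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run()
```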

The suggested work contains three phases: training, validation, and testing.

3.1 Training phase

To train the pretrained models, we labeled 1600 field pea images with the appropriate classes and trained the models on these labeled images. The images were captured with a smartphone camera at the Kulumsa Agricultural Research Centre in Ethiopia, where the study was conducted.

3.2 Validation phase

Before the input images were fed into the model, we applied image pre-processing techniques such as resizing, normalization, and augmentation. The same pre-processing was applied to the images used to validate the pretrained models.

3.3 Testing phase

The models were tested on 240 field pea images to evaluate their performance. The test images were drawn from the same dataset used for training and validation.

3.4 Transfer learning

We used transfer learning to retrain a previously trained model for a new problem. Transfer learning offers numerous advantages, including the ability to fine-tune parameters quickly and easily to learn new tasks without defining a new network. We adopted a transfer learning technique for the classification of pea leaf diseases as follows (a minimal end-to-end sketch follows the steps below):

Choosing a Pre-trained Model: We selected the pre-trained EfficientNetB7, MobileNetV2, and DenseNet121 models for the classification of pea leaf diseases.

Loading the Pre-trained Model: We loaded the selected pre-trained model weights and excluded the top classification layers because they are specific to the original task for which the model was trained.

Adding Custom Classification Layers: We added custom layers for our pea leaf disease classification task; these layers are followed by a softmax classifier.

Freezing Pre-trained Layers: We froze the pre-trained layers to stop their weights from being updated during training, so that only the weights of the newly added layers were trained for our classification task.

Compiling the Model: The model was compiled using the Adam optimizer and categorical cross-entropy as the loss function.

Data Augmentation: Data augmentation techniques such as rotation, horizontal flip, zoom, and shear are employed to diversify training images and enhance the model’s performance.

Training the Model: The model was trained using the training data, and the weights of the newly added classification layers were updated via backpropagation.

Evaluation: After training finished, the model’s performance was assessed on the test dataset using accuracy, precision, recall, and F1-score.
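The sketch below illustrates these steps end to end for DenseNet121; the pooling layer, dropout rate, and head layout are assumptions, since the paper does not specify the custom classification head exactly.

```python
# Transfer-learning sketch: frozen DenseNet121 base + custom softmax head.
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.layers import GlobalAveragePooling2D, Dropout, Dense
from tensorflow.keras.models import Model

base = DenseNet121(weights="imagenet",        # load pre-trained ImageNet weights
                   include_top=False,         # exclude the original top layers
                   input_shape=(224, 224, 3))
base.trainable = False                        # freeze the pre-trained layers

x = GlobalAveragePooling2D()(base.output)     # assumed pooling layer
x = Dropout(0.5)(x)                           # regularization (assumed rate)
outputs = Dense(4, activation="softmax")(x)   # 4 classes, softmax classifier

model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```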

In this study, we employed a transfer learning approach to classify field pea disease, which enabled us to quickly and readily adjust parameters to learn new tasks without creating a new network [27–29]. The pretrained models were originally trained on more than a million images to classify images into 1000 classes [30]. For our classification task, we reused the pretrained models (EfficientNetB7, MobileNetV2, and DenseNet121): all pretrained layers were maintained, and only the final layer of each model was retrained for our detection task.

3.4.1 MobileNet

The goal of the MobileNet model is to improve deep learning’s real-time performance under limited resources. It uses fewer computational resources than other CNN models, making it well suited to mobile devices and computers with low processing power [31]. The architecture is built on depthwise separable convolutions, a form of factorized convolution that splits a standard convolution into a depthwise convolution and a 1x1 pointwise convolution. The depthwise convolution applies a single filter to each input channel, and the pointwise convolution then combines the depthwise outputs using 1x1 convolutions [10], as sketched below.
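A toy sketch of such a depthwise separable block in Keras; the input shape and filter counts are illustrative, not MobileNetV2’s actual configuration.

```python
# Depthwise separable convolution: per-channel 3x3 filtering, then a
# 1x1 pointwise convolution that mixes channels.
from tensorflow.keras import layers, models

block = models.Sequential([
    layers.Input(shape=(112, 112, 32)),
    layers.DepthwiseConv2D(kernel_size=3, padding="same"),  # one 3x3 filter per input channel
    layers.BatchNormalization(),
    layers.ReLU(),
    layers.Conv2D(64, kernel_size=1),  # 1x1 pointwise convolution combines the channels
    layers.BatchNormalization(),
    layers.ReLU(),
])
```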

3.4.2 DenseNet

DenseNet uses a densely connected design in which each layer receives direct connections from all preceding layers. In comparison to traditional convolutional neural networks, it uses fewer parameters [32, 33]. It is employed in CNN networks to streamline the layer-to-layer connectivity structure, alleviates the vanishing-gradient problem, and gives each layer direct access to the feature maps of earlier layers [34]. A toy sketch of this connectivity pattern follows.
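A toy sketch of DenseNet-style connectivity; the layer count, growth rate, and input shape are illustrative, not DenseNet121’s actual configuration.

```python
# Dense block sketch: each layer's output is concatenated with all
# previous feature maps before feeding the next layer.
from tensorflow.keras import layers, Model, Input

inputs = Input(shape=(56, 56, 24))
features = [inputs]
x = inputs
for _ in range(3):  # 3 layers in this toy dense block
    y = layers.Conv2D(12, 3, padding="same", activation="relu")(x)
    features.append(y)
    x = layers.Concatenate()(features)  # direct connections to every earlier layer

dense_block = Model(inputs, x)
```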

3.4.3 EfficientNet

The EfficientNet model scales the network and uses an inverted bottleneck convolution with a swish activation function. It reduces the computation by roughly the square of the filter size [30, 35], and it offers better classification accuracy than other deep CNN models [36]. The proposed work is presented in Fig 1.

Fig 1. The proposed methodology.


3.5 Performance metrics

We assessed the model performance using the ROC curve, support, precision, recall, F1-score, and accuracy. The metrics are defined mathematically in Table 2.

Table 2. Performance metrics.

| Metric | Definition | Symbol | Reference |
|---|---|---|---|
| Accuracy | A = (TP + TN) / (TP + TN + FP + FN) | A | [37] |
| Precision | P = TP / (TP + FP) | P | [37] |
| Recall | R = TP / (TP + FN) | R | [37] |
| F1-score | F1 = (2 × P × R) / (P + R) | F1 | [37] |

where TP = true positive, TN = true negative, FP = false positive, and FN = false negative.
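The following sketch computes the metrics of Table 2 from raw confusion counts; the example counts are illustrative, not results from the paper.

```python
# Metrics from Table 2, computed from confusion-matrix counts.
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative per-class counts on a 240-image test set (assumed numbers).
print(metrics(tp=58, tn=176, fp=2, fn=4))
```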

4. Results and discussion

4.1 Dataset acquisition

We collected 1600 images from the Kulumsa Agricultural Research Centre with the help of agricultural experts. All images were resized to a common size of 224x224 pixels and normalized to the range [0, 1] to scale pixel values; 70% of the data was used for training, 15% for validation, and 15% for testing. The dataset is divided into four classes that correspond to the diseases: the first class consists of 400 images of ascochyta blight disease, the second class consists of 400 healthy images, the third class consists of 400 images of leaf spot disease, and the fourth class consists of 400 images of powdery mildew leaves. The summary of the dataset is presented in Table 3, and a folder-split sketch follows the table.

Table 3. Summary of the dataset.

| No | Class | Total number of images | Source of images |
|---|---|---|---|
| 1 | Ascochyta blight | 400 | Kulumsa Agricultural Research Centre |
| 2 | Powdery mildew | 400 | Kulumsa Agricultural Research Centre |
| 3 | Leaf spots | 400 | Kulumsa Agricultural Research Centre |
| 4 | Healthy | 400 | Kulumsa Agricultural Research Centre |
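The folder-split sketch referenced above; the paths, class-folder names, and random seed are assumptions. With 400 images per class, this yields 280/60/60 per class, i.e., the 240 test images reported.

```python
# Split each class folder 70/15/15 into train/val/test trees by copying files.
import os, random, shutil

random.seed(42)  # assumed seed for reproducibility
classes = ["ascochyta_blight", "healthy", "leaf_spot", "powdery_mildew"]

for cls in classes:
    files = sorted(os.listdir(os.path.join("raw", cls)))
    random.shuffle(files)
    n = len(files)
    splits = {"train": files[: int(0.70 * n)],
              "val":   files[int(0.70 * n): int(0.85 * n)],
              "test":  files[int(0.85 * n):]}
    for split, names in splits.items():
        out_dir = os.path.join("dataset", split, cls)
        os.makedirs(out_dir, exist_ok=True)
        for name in names:
            shutil.copy(os.path.join("raw", cls, name), out_dir)
```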

4.2 Environment setup

Models were trained and tested on Google Colab, which provides free GPU and RAM resources. The code was written in Jupyter Notebook on a computer with a 64-bit operating system, an Intel(R) Core i3 processor, and 4 GB of RAM. We used GPUs to accelerate training; TensorFlow and Keras for building and training the deep learning models; Jupyter Notebook for writing, debugging, and running the code; and Matplotlib, Seaborn, and TensorBoard for visualizing training metrics, model architectures, and data distributions.

4.3 Pretrained model training

Initially, we downloaded the MobileNetV2 model from Keras. We retrained the last layer of the model for our disease classification task, using a batch size of 32 for 100 epochs. The model achieved 99.64% training accuracy, 98.33% validation accuracy, and a testing accuracy of 96.09%. The training and validation accuracies and losses are displayed in Figs 2 and 3.

Fig 2. Training and validation accuracy.

Fig 3. Training and validation loss of the model.

In addition, we downloaded the DenseNet121 model from Keras. We retrained the last layer of the model with a batch size of 32 for 100 epochs. The model achieved a 99.73% training accuracy, a 99.16% validation accuracy, and a testing accuracy of 98.33% after 100 epochs. Figs 4 and 5 display its training and validation accuracies and losses.

Fig 4. Training accuracy and validation accuracy.

Fig 5. Training and validation losses.

In the third experiment, we downloaded the EfficientNetB7 model from Keras. We retrained the last layer of the model using a batch size of 32 for 100 epochs. The model achieved a training accuracy of 99.82%, a validation accuracy of 99.16%, and a testing accuracy of 97.92%. Figs 6 and 7 display its training and validation accuracies and losses.

Fig 6. Training accuracy and validation accuracy.

Fig 7. Training and validation losses.

We compared the models in terms of accuracy, loss, and receiver operating characteristic (ROC) curves for the classification of field pea leaf diseases, as presented in Figs 8 and 9; a plotting sketch follows the figures.

Fig 8. ROC curves of the three models.

Fig 9. Model comparison.
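The ROC plotting sketch referenced above, using scikit-learn’s one-vs-rest binarization; `y_true` (integer labels) and `y_prob` (softmax scores) are assumed to come from evaluating a model on the test set.

```python
# One-vs-rest ROC curves for a multi-class classifier.
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

def plot_roc(y_true, y_prob, class_names):
    y_bin = label_binarize(y_true, classes=list(range(len(class_names))))
    for i, name in enumerate(class_names):
        fpr, tpr, _ = roc_curve(y_bin[:, i], y_prob[:, i])
        plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.3f})")
    plt.plot([0, 1], [0, 1], "k--")  # chance line
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    plt.show()
```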

4.4 Visualization of channel activations

Channel visualization provides an overview of how a deep network decomposes its input across discrete filters. An activation model is instantiated from two arguments: an input tensor and an output tensor. To visualize a layer, an input image is fed to this model, which returns the layer’s activation values. The first layers of the network contain multiple detectors, including edge, bright-dot, and brightness detectors, and their feature maps retain most of the information in the input image; unimportant characteristics are discarded in these first few layers. As we go deeper into the network, the features become more abstract, and it becomes harder to interpret what each filter is doing, as presented in Fig 10; a short visualization sketch follows the figure.

Fig 10. Visualizations of the activation of channels.

(A) Visualization in the 1st channel (B) Visualization in the 2nd channel (C) Visualization of activation in the 3rd channel.
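The visualization sketch referenced above; the layer index and number of channels shown are illustrative assumptions.

```python
# Activation visualization: instantiate a model from an input tensor and a
# layer's output tensor, then plot the activations for one image per channel.
import matplotlib.pyplot as plt
from tensorflow.keras.models import Model

def show_activations(model, img_batch, layer_index=2, n_channels=8):
    layer = model.layers[layer_index]               # an early layer (assumed index)
    activation_model = Model(inputs=model.input,    # input tensor
                             outputs=layer.output)  # output tensor of the chosen layer
    acts = activation_model.predict(img_batch)      # img_batch: shape (1, 224, 224, 3)
    for ch in range(n_channels):                    # first few channels
        plt.subplot(2, n_channels // 2, ch + 1)
        plt.imshow(acts[0, :, :, ch], cmap="viridis")
        plt.axis("off")
    plt.show()
```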

4.5 Model deployment using Flask

A Flask application was used to deploy the suggested model as a web application, enabling a real-time field pea leaf disease classification service for the user, as presented in Figs 11–13.

Fig 11. Predicting field pea blight.

Fig 12. Predicting field pea powdery mildew.

Fig 13. Predicting field pea normal.

4.6 Discussion

Among the state-of-the-art models, DenseNet121 achieved a training accuracy of 99.73%, a validation accuracy of 99.16%, and a testing accuracy of 98.33%; MobileNetV2 scored a training accuracy of 99.64%, a validation accuracy of 98.33%, and a testing accuracy of 96.09%; and EfficientNetB7 scored a training accuracy of 99.82%, a validation accuracy of 99.16%, and a testing accuracy of 97.92%. The experimental results showed that DenseNet121 outperformed MobileNetV2 and EfficientNetB7 for field pea leaf disease detection, and they also validated the use of transfer learning for field pea leaf disease classification. Overfitting was one challenge in adopting transfer learning when the target dataset is small, since the model may memorize the training data rather than learning general features; we mitigated this issue using data augmentation and dropout techniques. Fine-tuning the hyperparameters of the pre-trained models was another challenge, requiring experimentation to determine optimal values such as the learning rate, optimizer, and dropout rate.

5. Conclusion

In this study, we proposed a transfer learning approach for field pea leaf disease classification. The main contributions of this study are as follows. Performance improvement: we improved the performance of the proposed work using recent deep learning models, and experimental results show that DenseNet121 achieves 98.33% testing accuracy. Data collection: we collected a dataset of field pea leaf diseases from the Kulumsa Agricultural Research Centre with the help of farm-area experts. Treatment recommendation: we deployed the model as a Flask application that detects the disease and suggests a treatment. Visualization: channel visualization enables new users to understand how deep learning models work internally to classify the disease. Future work includes exploring the performance of other pre-trained models, such as AlexNet, GoogLeNet, and VGG-19, on the field pea leaf disease classification task; incorporating the textural features of symptoms on field pea leaves to enhance classification accuracy; and applying domain-specific data augmentation techniques to replicate changes in disease severity and leaf orientation.

Supporting information

S1 Dataset

(ZIP)

pone.0307747.s001.zip (20.3MB, zip)

Data Availability

The dataset that supports the findings of this study has been uploaded as supplementary information. The dataset is also available from the corresponding author upon reasonable request.

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1.Fikere M., Tadesse T., Gebeyehu S., and Hundie B., “Agronomic performances, disease reaction and yield stability of field pea (Pisum sativum L.) genotypes in Bale Highlands, Ethiopia,” 2010. [Google Scholar]
  • 2.Kindie Y. et al., “Field Pea (Pisum sativum L.) Variety Development for Moisture Deficit Areas of Eastern Amhara, Ethiopia,” Advances in Agriculture, vol. 2019, 2019, doi: 10.1155/2019/1398612 [DOI] [Google Scholar]
  • 3.Tadesse Y., Kesho A., Amareand D., and Tadele M., “Screening of Field Pea Genotypes for Aschochyta Blight,” World Journal of Agricultural Sciences, vol. 17, no. 4, pp. 351–356, 2021, doi: 10.5829/idosi.wjas.2021.351.356 [DOI] [Google Scholar]
  • 4.Kindie Y. et al. , “Erratum: Field pea (Pisum sativum L.) variety development for moisture deficit areas of Eastern Amhara, Ethiopia (Advances in Agriculture (2019) 2019 (1398612) 10.1155/2019/1398612),” Advances in Agriculture, vol. 2019. 2019. doi: 10.1155/2019/1408408 [DOI] [Google Scholar]
  • 5.Kumari T., Kannan M. K. J., and V. N., “A Survey on Plant Leaf Disease Detection,” Int J Comput Appl, vol. 184, no. 17, 2022, doi: 10.5120/ijca2022922170 [DOI] [Google Scholar]
  • 6.Azmeraw Y. and Hussien T., “Management of Common Bean Rust (Uromyces appendiculatus) through Host Resistance and Fungicide Sprays in Hirna District, Eastern Ethiopia,” Advances in Crop Science and Technology, vol. 05, no. 06, 2017, doi: 10.4172/2329-8863.1000314 [DOI] [Google Scholar]
  • 7.Al Hiary H., Bani Ahmad S., Reyalat M., Braik M., and ALRahamneh Z., “Fast and Accurate Detection and Classification of Plant Diseases,” Int J Comput Appl, vol. 17, no. 1, 2011, doi: 10.5120/2183-2754 [DOI] [Google Scholar]
  • 8.A. A. M. Al-Saffar, H. Tao, and M. A. Talab, “Review of deep convolution neural network in image classification,” in Proceeding—2017 International Conference on Radar, Antenna, Microwave, Electronics, and Telecommunications, ICRAMET 2017, 2017. doi: 10.1109/ICRAMET.2017.8253139 [DOI]
  • 9.Sharma R. et al. , “Plant disease diagnosis and image classification using deep learning,” Computers, Materials and Continua, vol. 71, no. 2, pp. 2125–2140, 2022, doi: 10.32604/cmc.2022.020017 [DOI] [Google Scholar]
  • 10.Liu G., Peng J., and El-Latif A. A. A., “SK-MobileNet: A Lightweight Adaptive Network Based on Complex Deep Transfer Learning for Plant Disease Recognition,” Arab J Sci Eng, vol. 48, no. 2, pp. 1661–1675, Feb. 2023, doi: 10.1007/s13369-022-06987-z [DOI] [Google Scholar]
  • 11.Upadhyay S. K. and Kumar A., “A novel approach for rice plant diseases classification with deep convolutional neural network,” International Journal of Information Technology (Singapore), vol. 14, no. 1, pp. 185–199, Feb. 2022, doi: 10.1007/s41870-021-00817-5 [DOI] [Google Scholar]
  • 12.Adem K., Ozguven M. M., and Altas Z., “A sugar beet leaf disease classification method based on image processing and deep learning,” Multimed Tools Appl, vol. 82, no. 8, 2023, doi: 10.1007/s11042-022-13925-6 [DOI] [Google Scholar]
  • 13.Arshaghi A., Ashourian M., and Ghabeli L., “Potato diseases detection and classification using deep learning methods,” Multimed Tools Appl, vol. 82, no. 4, pp. 5725–5742, Feb. 2023, doi: 10.1007/s11042-022-13390-1 [DOI] [Google Scholar]
  • 14.Eligar V., Patil U., and Mudenagudi U., “Performance Analysis of Deep Learning Algorithms Toward Disease Detection: Tomato and Potato Plant as Use-Cases,” in Smart Innovation, Systems and Technologies, 2022. doi: 10.1007/978-981-16-9873-6_54 [DOI] [Google Scholar]
  • 15.A. Lakshmanarao, M. R. Babu, and T. S. R. Kiran, “Plant Disease Prediction and Classification using Deep Learning ConvNets,” in Proceedings—2021 1st IEEE International Conference on Artificial Intelligence and Machine Vision, AIMV 2021, 2021. doi: 10.1109/AIMV53313.2021.9670918 [DOI]
  • 16.Anim-Ayeko A. O., Schillaci C., and Lipani A., “Automatic blight disease detection in potato (Solanum tuberosum L.) and tomato (Solanum lycopersicum, L. 1753) plants using deep learning,” Smart Agricultural Technology, vol. 4, 2023, doi: 10.1016/j.atech.2023.100178 [DOI] [Google Scholar]
  • 17.Sai Reddy B. and Neeraja S., “Plant leaf disease classification and damage detection system using deep learning models,” Multimed Tools Appl, vol. 81, no. 17, 2022, doi: 10.1007/s11042-022-12147-0 [DOI] [Google Scholar]
  • 18.Gupta H. K. and Shah H. R., “Deep Learning-Based Approach to Identify the Potato Leaf Disease and Help in Mitigation Using IOT,” SN Comput Sci, vol. 4, no. 4, 2023, doi: 10.1007/s42979-023-01758-5 [DOI] [Google Scholar]
  • 19.Sakkarvarthi G., Sathianesan G. W., Murugan V. S., Reddy A. J., Jayagopal P., and Elsisi M., “Detection and Classification of Tomato Crop Disease Using Convolutional Neural Network,” Electronics (Switzerland), vol. 11, no. 21, 2022, doi: 10.3390/electronics11213618 [DOI] [Google Scholar]
  • 20.Thakur P. S., Sheorey T., and Ojha A., “VGG-ICNN: A Lightweight CNN model for crop disease identification,” Multimed Tools Appl, vol. 82, no. 1, 2023, doi: 10.1007/s11042-022-13144-z [DOI] [Google Scholar]
  • 21.Haridasan A., Thomas J., and Raj E. D., “Deep learning system for paddy plant disease detection and classification,” Environ Monit Assess, vol. 195, no. 1, 2023, doi: 10.1007/s10661-022-10656-x [DOI] [PubMed] [Google Scholar]
  • 22.Subramanian M., Shanmugavadivel K., and Nandhini P. S., “On fine-tuning deep learning models using transfer learning and hyperparameters optimization for disease identification in maize leaves,” Neural Comput Appl, vol. 34, no. 16, pp. 13951–13968, Aug. 2022, doi: 10.1007/s00521-022-07246-w [DOI] [Google Scholar]
  • 23.Arshaghi A., Ashourin M., and Ghabeli L., “Detection and Classification of Potato Diseases Potato Using a New Convolution Neural Network Architecture,” Traitement du Signal, vol. 38, no. 6, 2021, doi: 10.18280/ts.380622 [DOI] [Google Scholar]
  • 24.Shantkumari M. and Uma S. V., “Grape leaf image classification based on machine learning technique for accurate leaf disease detection,” Multimed Tools Appl, vol. 82, no. 1, pp. 1477–1487, Jan. 2023, doi: 10.1007/s11042-022-12976-z [DOI] [Google Scholar]
  • 25.Kundu N. et al. , “Disease detection, severity prediction, and crop loss estimation in MaizeCrop using deep learning,” Artificial Intelligence in Agriculture, vol. 6, 2022, doi: 10.1016/j.aiia.2022.11.002 [DOI] [Google Scholar]
  • 26.Yang S. et al. , “Classification and localization of maize leaf spot disease based on weakly supervised learning,” Front Plant Sci, vol. 14, 2023, doi: 10.3389/fpls.2023.1128399 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Khasawneh N., Faouri E., and Fraiwan M., “Automatic Detection of Tomato Diseases Using Deep Transfer Learning,” Applied Sciences (Switzerland), vol. 12, no. 17, 2022, doi: 10.3390/app12178467 [DOI] [Google Scholar]
  • 28.Chen J., Chen J., Zhang D., Sun Y., and Nanehkaran Y. A., “Using deep transfer learning for image-based plant disease identification,” Comput Electron Agric, vol. 173, 2020, doi: 10.1016/j.compag.2020.105393 [DOI] [Google Scholar]
  • 29.R. H. Hridoy, F. Akter, and A. Rakshit, “Computer Vision Based Skin Disorder Recognition using EfficientNet: A Transfer Learning Approach,” in 2021 International Conference on Information Technology, ICIT 2021—Proceedings, 2021. doi: 10.1109/ICIT52682.2021.9491776 [DOI]
  • 30.Farman H., Ahmad J., Jan B., Shahzad Y., Abdullah M., and Ullah A., “Efficientnet-based robust recognition of peach plant diseases in field images,” Computers, Materials and Continua, vol. 71, no. 1, 2022, doi: 10.32604/cmc.2022.018961 [DOI] [Google Scholar]
  • 31.Srinivasu P. N., Sivasai J. G., Ijaz M. F., Bhoi A. K., Kim W., and Kang J. J., “Classification of skin disease using deep learning neural networks with mobilenet v2 and lstm,” Sensors, vol. 21, no. 8, Apr. 2021, doi: 10.3390/s21082852 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Albahli S. and Nawaz M., “DCNet: DenseNet-77-based CornerNet model for the tomato plant leaf disease detection and classification,” Front Plant Sci, vol. 13, 2022, doi: 10.3389/fpls.2022.957961 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Nandhini S. and Ashokkumar K., “An automatic plant leaf disease identification using DenseNet-121 architecture with a mutation-based henry gas solubility optimization algorithm,” Neural Comput Appl, vol. 34, no. 7, 2022, doi: 10.1007/s00521-021-06714-z [DOI] [Google Scholar]
  • 34.Vallabhajosyula S., Sistla V., and Kolli V. K. K., “Transfer learning-based deep ensemble neural network for plant leaf disease detection,” Journal of Plant Diseases and Protection, vol. 129, no. 3, 2022, doi: 10.1007/s41348-021-00465-8 [DOI] [Google Scholar]
  • 35.Liu J., Wang M., Bao L., and Li X., “EfficientNet based recognition of maize diseases by leaf image classification,” in Journal of Physics: Conference Series, 2020. doi: 10.1088/1742-6596/1693/1/012148 [DOI] [Google Scholar]
  • 36.Hanh B. T., Van Manh H., and Nguyen N. V., “Enhancing the performance of transferred efficientnet models in leaf image-based plant disease classification,” Journal of Plant Diseases and Protection, vol. 129, no. 3, 2022, doi: 10.1007/s41348-022-00601-y [DOI] [Google Scholar]
  • 37.Sahu P., Chug A., Singh A. P., and Singh D., “Deep Learning Models for Beans Crop Diseases: Classification and Visualization Techniques Deep Learning Models for Beans Crop Diseases: Classification and Visualization Techniques,” no. March, 2021. [Google Scholar]

Decision Letter 0

Valentine Otang Ntui

9 Jun 2024

PONE-D-24-10799Field Pea Leaf Disease Classification Using a Deep Learning ApproachPLOS ONE

Dear Dr. Girmaw,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 24 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript: 

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Valentine Otang Ntui, Ph.D

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, all author-generated code must be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. In the online submission form, you indicated that [Insert text from online submission form here].

All PLOS journals now require all data underlying the findings described in their manuscript to be freely available to other researchers, either 1. In a public repository, 2. Within the manuscript itself, or 3. Uploaded as supplementary information.

This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If your data cannot be made publicly available for ethical or legal reasons (e.g., public availability would compromise patient privacy), please explain your reasons on resubmission and your exemption request will be escalated for approval.

4. Please include a separate caption for each figure in your manuscript.

5. Please include your tables as part of your main manuscript and remove the individual files. Please note that supplementary tables (should remain/ be uploaded) as separate "supporting information" files.

6. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The manuscript by the author addresses detection of three pea plant diseases and classification using deep learning approach. In my view, I recommended this article with some revisions.

The author can revise the manuscript including details on the following:

1. The images quality can be improved. Some sections of the paper could be revised for clarity and conciseness. Additionally, attention should be paid to grammar, spelling, and formatting of text to enhance the overall readability of the paper.

2. Detailed explanation of the methodology adopted for transfer learning can be included. It would be beneficial to provide a step-by-step explanation of how the pre-trained model was adapted and fine-tuned for pea leaf disease classification.

3. More details are needed regarding the dataset used for training, validation, and testing. It would be helpful to include information on sample images of diseased pea leaf, distribution of classes, and any preprocessing techniques applied.

4. The paper would be strengthened by including comparisons with other classification approaches or baselines. This could demonstrate the effectiveness of the transfer learning approach in comparison to traditional methods or other deep learning architectures.

5. The results and discussion should include interpretation and discussion of the findings. It would be beneficial to analyze the performance of the model in classifying different types of pea leaf diseases and discuss any challenges or limitations encountered.

6. Suggestions for future work could be provided to guide further research in this area. This could include exploring different pre-trained models, incorporating additional features or data augmentation techniques.

Reviewer #2: Generally, the manuscript is well-written and provides a novel method for diagnosing pea leaf diseases. The authors have provided all the data underlying the findings of the study. The paper is also presented in an intelligible fashion using standard English. However, the materials and methods section has only provided limited information on the protocols undertaken in the study.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Duncan Njora Waweru

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: PONE-D-24-10799.docx

pone.0307747.s002.docx (13.2KB, docx)
PLoS One. 2024 Jul 25;19(7):e0307747. doi: 10.1371/journal.pone.0307747.r002

Author response to Decision Letter 0


17 Jun 2024

REVIEWED PAPER TITLE:

Ms. Ref. No.: PONE-D-24-10799

Paper Title: Field Pea Leaf Disease Classification Using a Deep Learning Approach

Journal: PLOS ONE.

Responses to Reviewer’s/Editor’s Comments

Acknowledgment from the Authors: We are sincerely grateful to the

reviewers and Editorial Board for the constructive comments tailored to improve the

quality of the manuscript. We are also thankful for the comments/suggestions for further improvement/reconsideration of our manuscript. We deeply acknowledge the efforts

made by the reviewers for devoting valuable time to improving our

manuscript. The changes made concerning the reviewer’s suggestions are highlighted in color red for Reviewers in the revised manuscript with track changes comments from the Editors and Reviewers:

Reviewer’s comments:

1. Is the manuscript technically sound, and does the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented

Response from authors: Yes, we provide a novel method for diagnosing pea leaf disease, and achieve good performance on the dataset. The research follows design science research with five basic steps such as problem identification, design solution, development, demonstration, evaluation, and communication.

2. Has the statistical analysis been performed appropriately and rigorously?

Response from authors: Yes, the study design is appropriate for the research question and objectives. The results of the statistical analysis are interpreted by using performance evaluation metrics such as accuracy, precision, recall, support, macro average, and F1 score.

3. Have the authors made all data underlying the findings in their manuscript fully available?

Response from authors: Yes.

4. Is the manuscript presented in an intelligible fashion and written in standard English?

Response from authors: Yes, we corrected some irregular format, inconsistent statements, absence of content, and typos, in the entire section of the manuscript. Right now, the manuscript is written in clear, and brief language.

Review Comments to the Author:

Reviewer #1. The manuscript by the author addresses the detection of three pea plant diseases and classification using a deep learning approach. In my view, I recommend this article with some revision. The author can revise the manuscript including details on the following:

1. The image quality can be improved. Some sections of the paper could be revised for clarity and conciseness. Additionally, attention should be paid to grammar, spelling, and formatting of text to enhance the overall readability of the paper.

Response from authors: Yes, the quality of images is improved to a higher resolution. The manuscript is written in clear, brief language to enhance the overall readability of the paper.

2. A detailed explanation of the methodology adopted for transfer learning can be included. It would be beneficial to provide a step-by-step explanation of how the pre-trained model was adapted and fine-tuned for pea leaf disease classification.

Response from authors: Yes, we adopted a transfer learning technique for the classification of pea leaf diseases as follows:

Choosing a Pre-trained Model: We selected novel pre-trained models (EfficentNetB7, MobileNetV2, and DenseNet121) for the classification of pea leaf diseases.

Loading the Pre-trained Model: We loaded the selected pre-trained model weights and excluded the top classification layers because they are specific to the original task for which the model was trained.

Adding Custom Classification Layers: We added custom layers for our classification task of pea leaf disease and these layers are followed by a SoftMax classifier.

Freezing Pre-trained Layers: We freeze these layers to stop the weights of the pre-trained layers from being updated during training and to train only the weights of the newly added layers for our classification task.

Compiling the Model: We compiled the model using Adam optimizer and categorical cross-entropy as a loss function.

Data Augmentation: We used data augmentation techniques such as rotation, horizontal flip, zoom, and shear to diversify training images and enhance the model's performance.

Training the Model: We trained the model using training data and the weights of the recently added classification layers are updated by the backpropagation method.

Evaluation: After training is finished, we assess the model's performance using a test dataset and we use accuracy, precision, recall, and F1-score to evaluate the model's performance.

3. More details are needed regarding the dataset used for training, validation, and testing. It would be helpful to include information on sample images of diseased pea leaves, distribution of classes, and any pre-processing techniques applied.

Response from authors: The dataset is divided into four classes that correspond to the diseases. The distribution of classes is as follows: the first class consists of 400 images of ascochyta blight disease. The second class consists of 400 Healthy images. The third class consists of 400 images of leaf spot disease. The fourth class consists of 400 images of powdery mildew leaves. Images are resized to a common size of 224x224 pixels and normalized in the range [0, 1] to scale pixel values. We applied data augmentation to improve model generalization and increase the diversity of training data.

4. The paper would be strengthened by including comparisons with other classification approaches or baselines. This could demonstrate the effectiveness of the transfer learning approach in comparison to traditional methods or other deep learning architectures.

Response from authors: we incorporated a comprehensive comparison under related works to evaluate our approach against traditional methods and other deep learning architectures.

5. The results and discussion should include interpretation and discussion of the findings. It would be beneficial to analyze the performance of the model in classifying different types of pea leaf diseases and discuss any challenges or limitations encountered.

Response from authors: Overfitting was one challenge to adopting transfer learning for leaf disease classification when the target dataset is small. This means the model may memorize the training data rather than learning features. we mitigated this issue using data augmentation and dropout techniques. fine-tuning the hyperparameters of the pre-trained model was another challenge to determine the optimal parameters such as learning rate, optimizer, and dropout value

6. Suggestions for future work could be provided to guide further research in this area. This could include exploring different pre-trained models and incorporating additional features or data augmentation techniques.

Response from authors: Yes, here are some recommendations:

Exploring Different Pre-Trained Models: Exploring the performance of other pre-trained models such as AlexNet, GoogleNet, and VGG-19 for filed pea leaf diseases classification task.

Incorporating Additional Features: Incorporating the textual features of symptoms on field pea leaves to enhance classification accuracy.

Data Augmentation Techniques: Applying domain-specific augmentation techniques to replicate changes in disease severity and leaf orientation.

Reviewer #2: Generally, the manuscript is well-written and provides a novel method for diagnosing pea leaf diseases. The authors have provided all the data underlying the findings of the study. The paper is also presented in an intelligible fashion using standard English. However, the materials and methods section has only provided limited information on the protocols undertaken in the study:

Response from authors: Yes, these are the descriptions of protocols undertaken in the study

Data Pre-processing Protocol: We used data augmentation techniques such as rotation, horizontal flip, zoom, and shear to diversify training images and enhance the model's performance. Images are resized to a common size of 224x224 pixels and normalized in the range [0, 1] to scale pixel values. The dataset is divided into training, validation, and test sets to assess model performance.

Model Training Protocol: We found the optimal hyperparameters such as optimizer, learning rate, and batch size through experimentation. We used dropout and batch normalization to prevent overfitting problems. To train the model we used optimizer, loss function, number of epochs, and early stopping to stop the training when performance on the validation set starts failing.

Evaluation Protocol: We used evaluation metrics such as accuracy, precision, recall, F1 score, assess model performance.

Deployment Protocol: We saved trained models and deployed them as web applications using the Flask application. This allows the user to perform a live field pea leaf disease classification service.

Comments

Abstract:

You have not indicated the significance of your study, which is usually the last statement in the abstract section.

Response from authors: Yes. We expect this research to offer various benefits to farmers and agricultural experts, as it reduces the cost and time needed for the detection and classification of field pea leaf diseases. A fast, automated, less costly, and accurate detection method is thus necessary to overcome the detection problem.

Introduction:

Line 35: ‘Field peas are grown around the world….’ This has already been indicated, so there is no need to repeat it. The next statement ‘…and have 86% loss due to powdery mildew disease’ is contradictory to what you alluded to earlier that ‘…...these diseases can reduce yields by 20–30%’ (line 31). I suggest that you amalgamate both sentences and have a unified statement offering the specific statistics with a relevant citation.

Response from authors: Field peas, grown globally, are highly vulnerable to powdery mildew, which causes substantial losses [3], [5]. Although diseases like powdery mildew are commonly estimated to reduce yields by 20–30%, actual losses can be even higher [1], [3].

Related works:

Line 63: ‘The Plant-Village dataset was used…’ Provide a link to indicate the dataset used. The same should be applied to line 78: ‘A database with 5,000 images of potatoes was employed.’ Line 93: ‘…on the 1,33 images of the test set.’ Is it 133 images? I don’t understand why you indicated the contributions of your work in the related works section before providing the methods and results obtained. This would have been appropriate for the conclusions of your study.

Response from authors:

Line 63: The Plant-Village dataset (https://www.kaggle.com/datasets/mohitsingh1804/plantvillage) was used.

Line 78: A database with 5,000 images of potatoes (https://www.kaggle.com/datasets/mohitsingh1804/plantvillage) was employed.

Line 93: on the 133 images of the test set.

Contributions: We moved the discussion of contributions to the conclusions section for better organization.

Methods and materials:

Shouldn’t it be materials and methods? You need to indicate the materials used in the study. Generally, this section has provided limited information on the protocol you used. For example, in line 148: ‘We labeled the images of field peas…’ How many images? How were they obtained? Which camera was used to capture the images? Where was the study conducted? Line 151: ‘We employed several approaches, such as…’ Please specify the precise approaches used in the study. Line 154: ‘Models were tested using the test images…’ How many images? Where were they obtained? Did you use a specific field pea cultivar?

Response from authors:

Labelling of Field Pea Images (Line 148):

Number of Images: 1600 field pea images were labeled for this study.

Acquisition: Field pea images were obtained from the Kulumsa Agricultural Research Centre in Ethiopia.

Camera: A smartphone camera was used to capture the images.

Study Location: The study was conducted at Kulumsa Agricultural Research Centre in Ethiopia.

Description of Approaches Used (Line 151):

Before the input images were fed into the model, we applied image pre-processing techniques such as resizing, normalization, and augmentation.

Testing Data Information (Line 154):

Number of Test Images: 240 field pea images were used for testing.

Source of Test Images: Test images were obtained from the same dataset used for training and validation.

Results and discussion:

The information lacking in the methods section is provided in the results section, which is not appropriate. This section should only provide the results you obtained from the study and not how the study was conducted. I have also not seen any discussion of your results in comparison to previous studies. Generally, you have provided limited/incomplete information in the methods, results, and discussion sections.

Response from authors:

Methods Section: This research follows a design-science approach, so we built artifacts to detect and classify field pea leaf diseases. We used GPUs to accelerate training; TensorFlow and Keras to build and train the deep learning models; Jupyter Notebook to write, debug, and run the code; and Matplotlib, Seaborn, and TensorBoard to visualize training metrics, model architectures, and data distributions. We used transfer learning to retrain a previously trained model for a new problem. Transfer learning offers numerous advantages, including the ability to fine-tune parameters quickly and easily to learn new tasks without defining a new network.
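As a hedged illustration of the fine-tuning this enables, the sketch below unfreezes the top of the pre-trained backbone after the new head has converged; the cut-off index, learning rate, and epoch count are assumptions:

```python
import tensorflow as tf

# Continuing from the frozen-backbone sketch above: unfreeze only the
# top layers and fine-tune at a low learning rate to avoid catastrophic
# forgetting of the pre-trained features.
base.trainable = True
for layer in base.layers[:-30]:   # illustrative cut-off
    layer.trainable = False

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(train_data, validation_data=val_data, epochs=20)
```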

Results Section: Among the state-of-the-art models, DenseNet121 achieved a training accuracy of 99.73%, a validation accuracy of 99.16%, and a testing accuracy of 98.33%; MobileNetV2 scored a training accuracy of 99.64%, a validation accuracy of 98.33%, and a testing accuracy of 96.09%; and EfficientNetB7 scored a training accuracy of 99.82%, a validation accuracy of 99.16%, and a testing accuracy of 97.92%.

Discussion Section: The experimental results showed that DenseNet121 performed better than MobileNetV2 and EfficientNetB7 for field pea leaf disease detection. The experimental results also validated the use of transfer learning for field pea leaf disease classification.

Attachment

Submitted filename: Response to Reviewers.docx

pone.0307747.s003.docx (36KB, docx)

Decision Letter 1

Valentine Otang Ntui

11 Jul 2024

Field Pea Leaf Disease Classification Using a Deep Learning Approach

PONE-D-24-10799R1

Dear Dr. Girmaw,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Valentine Otang Ntui, Ph.D

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Valentine Otang Ntui

16 Jul 2024

PONE-D-24-10799R1

PLOS ONE

Dear Dr. Girmaw,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Valentine Otang Ntui

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Dataset

    (ZIP)

    pone.0307747.s001.zip (20.3MB, zip)
    Attachment

    Submitted filename: PONE-D-24-10799.docx

    pone.0307747.s002.docx (13.2KB, docx)
    Attachment

    Submitted filename: Response to Reviewers.docx

    pone.0307747.s003.docx (36KB, docx)

    Data Availability Statement

    The dataset that supports the findings of this study has been uploaded as supplementary information. Access to this dataset can be requested from the corresponding author upon reasonable request.

