Scientific Reports. 2025 Aug 21;15:30708. doi: 10.1038/s41598-025-15380-3

Detection of weeds in teff crops using deep learning and UAV imagery for precision herbicide application

Alemu Setargew Kebede 1, Tsehay Wasihun Muluneh 1, Abebe Belay Adege 2,3
PMCID: PMC12371000  PMID: 40841734

Abstract

In Ethiopia, Teff is a vital staple crop, yet its productivity is significantly challenged by inefficient weed and fertilizer management, threatening food security. Traditional weed control methods rely on manual labor and the indiscriminate application of herbicides, resulting in inaccurate targeting, inefficient distribution, excessive labor, and reduced yields. Non-selective weed management often leads to herbicide misuse, compounded by the difficulty of distinguishing Teff from visually similar weeds, particularly on large farms. This study introduces an optimized deep learning model for weed detection in Teff fields, enabling selective and efficient herbicide application through unmanned aerial vehicles (UAVs). A dataset of 1308 high-resolution drone-captured images was collected across various growth stages and weather conditions from the University of Gondar Agricultural Research Farm and surrounding farms. Key shape-based features such as aspect ratio (AR) and solidity were utilized to enhance model performance. Further, we applied data augmentation at different ratios of the original dataset and experimented with various optimizers to enhance the model's adaptability to different data characteristics and minimize overfitting. Deep learning models, such as MobileNetV2, InceptionResNetV2, DenseNet201, VGG16, ResNet50, Fast R-CNN, and YOLOv8, were evaluated with and without fine-tuning. Among the models, fine-tuned MobileNetV2 achieved the highest accuracy (96.40%), demonstrating its potential for practical implementation in UAV-assisted precision agriculture. This work highlights the transformative role of AI-driven solutions in enhancing weed management and improving Teff crop productivity.

Keywords: Deep learning, Weed detection, Precision agriculture, Drone technology

Subject terms: Computational biology and bioinformatics, Engineering, Mathematics and computing

Introduction

Teff, a staple crop of immense cultural and economic importance in Ethiopia, sustains over 57.2 million people, more than 64% of the country's population1. It plays a crucial role in Ethiopian traditions and food security. Despite its resilience to diseases and insect pests compared to other Ethiopian crops, Teff productivity is significantly threatened by weeds, particularly during the V3–V4 growth stages. The V3 stage refers to the phase in which the Teff plant has developed three fully formed leaves, while the V4 stage is characterized by the presence of four fully formed leaves. These stages are critical for weed management, as weeds compete most aggressively with Teff for nutrients and moisture during this time, leading to potential yield reduction2. Uncontrolled weed growth remains a primary challenge, causing major yield losses. According to Gebretsadik, Haile, and Yamoah3, Teff crops can suffer biomass reductions of up to 36% and grain yield losses of up to 38% due to infestations from weed species such as Cyperus esculentus, Setaria pumila, Avena abyssinica, and Galinsoga parviflora.

Ethiopian farmers traditionally rely on manual labor and, more recently, chemical herbicides like glyphosate for weed control. These methods, while commonly used, are labor-intensive, environmentally harmful, and costly, particularly for larger farms4. Current herbicide application techniques are non-selective, posing risks to both the environment and human health5. The study in6 emphasizes the limitations of conventional spraying methods, advocating for more precise and cost-effective solutions, such as UAV-based technologies. Gebretsadik, Haile, and Yamoah3 argue that traditional approaches fail to effectively manage weed populations, resulting in substantial yield losses. Alzubaidi et al.7 highlight that these challenges can be mitigated through the development of effective deep learning models for weed detection, which are critical for advancing modern, automated solutions. However, Murad et al.8 identify a significant research gap in addressing weed detection in Teff crops and developing precise weed control strategies. Moreover, Yared, Yibekal, and Kiya1 stress the difficulty of distinguishing Teff leaves from weed leaves due to their similar morphology, which presents a key challenge in accurately detecting and quantifying weed infestations.

Recent advancements in computer vision and deep learning, particularly Convolutional Neural Networks (CNNs), offer promising solutions for weed detection and selective herbicide spraying using Unmanned Aerial Vehicles (UAVs). High-performance drones equipped with high-resolution cameras can capture detailed visual data, enabling CNNs to effectively distinguish between weeds and crops9. Once weeds are identified, this technology can be integrated with UAVs to facilitate precise herbicide application, a process that will be further explored in the second phase of this study. However, challenges persist in accurately detecting weeds amidst the complex backgrounds and densely overlapping Teff leaves10. The dense growth of Teff crops often obscures ground conditions, making accurate weed identification more difficult. These challenges highlight the critical need for advanced technological solutions to improve weed management in Teff cultivation, ensuring greater productivity and sustainability.

Existing works focus on weed detection for commonly cultivated crops, such as maize11, cabbage12, beans13, rice14, and fruit crops15. Most of these studies rely on publicly available or satellite imagery datasets rather than high-resolution drone imagery. Moreover, none of the aforementioned studies evaluated weed detection in Teff crops using combined optimization strategies. Weed detection in Teff is particularly challenging due to its dense growth pattern, the morphological similarity between the crop and weeds, and the narrowness of its leaves; as a result, no prior study has addressed it. Despite the fact that uncontrolled weed infestations can cause yield losses of 5–39% in Teff cultivation16, current weed control practices remain largely manual, mechanical, or dependent on backpack sprayers. This gap highlights the need for targeted research into automated Teff weed detection and the application of state-of-the-art technologies such as drone-based spraying systems. Focusing on Teff weed detection offers a significant opportunity to enhance precision agriculture in countries with high Teff production, such as Ethiopia and the USA.

This study addresses the challenge of weed detection in Teff crops by introducing a novel approach that leverages a fine-tuned CNN model, trained on datasets of drone-captured images of Teff plants and various weed species. The research aims to enhance detection accuracy and efficiency in herbicide spraying through drone technology. By utilizing a refined MobileNetV2 model, the system achieves improved precision in identifying weeds, enabling targeted herbicide application that spares Teff crops while minimizing the use of chemicals. This approach not only reduces environmental impact but also promotes sustainable farming practices. The contributions of this work are summarized as follows:

  • Development of a fine-tuned deep learning model: Designed a refined deep learning model tailored for accurate weed detection in Teff crops, achieving superior performance in challenging agricultural scenarios.

  • Comprehensive image dataset collection and preprocessing: Assembled and preprocessed a diverse dataset of Teff crops and weeds, captured under varying environmental conditions using drone cameras, ensuring robust training and testing of the model.

  • Innovative integration of shape-based features for enhanced detection: Incorporated shape-based features, such as aspect ratio (AR) and solidity, to improve the model's ability to differentiate between Teff plants and weeds, significantly boosting detection accuracy and overall performance.

The remaining sections of this paper are organized as follows: Sect. “Related works” reviews related literature. Sect. “Methodology” describes the proposed method and data collection approach. Sect. “Experimental setups, results and discussion” details the experimental setup, results, and discussion. Finally, Sect. “Conclusion” concludes the paper and suggests directions for future research.

Related works

This section reviews recent studies that have advanced weed detection and UAV-based herbicide application using deep learning techniques. Sa et al.17 employed multispectral image datasets to explore dense semantic classification for vegetation detection in crops and weeds. Their research utilized 465 data points divided into three categories: 132 images of crops, 243 images of weeds, and 90 images containing both crops and weeds. This well-structured dataset facilitated the fine-tuning of deep learning models for precise vegetation classification. The study also emphasized the importance of sensor calibration prior to data acquisition, which significantly enhanced model performance.

Giselsson et al.18 focused on identifying plant species and detecting weeds during the early growth stages of 12 different weed and crop species. Their dataset featured high-resolution imagery (5184 × 3456 pixels), including full images, segmented plants, and unsegmented plants, each tagged with species-specific identifiers. This research highlighted the utility of segmentation techniques and high-resolution imagery in detecting weeds early, enabling timely crop management interventions. Similarly, dos Santos Ferreira et al.19 targeted weed detection using CNNs to differentiate between soil, soybean, broadleaf, and grass weeds. Their dataset, captured via UAVs, consisted of 15,336 images with a resolution of 4000 × 3000 pixels, which were manually segmented and annotated. The model achieved over 98% accuracy in detecting broadleaf and grass weeds and an average accuracy of 99%, demonstrating the efficacy of CNNs in weed detection. These findings underscore the potential for developing UAV-based herbicide spraying systems with high precision and efficiency.

Yu et al.20 explored weed detection for species such as dandelion, ground ivy, spotted spurge, and ryegrass, utilizing a dataset comprising 33,086 images with a resolution of 1920 × 1080 pixels, including 17,600 positive and 15,486 negative samples. The study employed VGGNet, achieving a recall value of 0.9952, which highlights the model’s effectiveness in accurately identifying weeds. The researchers proposed integrating Deep CNN, VGGNet, and DetectNet to develop machine vision-based smart sprayers for precision weed control. This approach demonstrates the potential of high-recall deep learning models for selective herbicide spraying, offering a path toward more efficient and accurate weed management in agriculture. Ma et al.21 focused on image segmentation of rice seedlings and weeds, analyzing 224 data points collected from paddy fields. Their research targeted weeds during early growth stages, training six models under varying conditions. The study achieved an 8% improvement in AUC classification metrics, emphasizing the importance of fine-tuning deep learning models for precise vegetation classification.

Olsen et al.22 focused on classifying multiple weed species using deep learning, targeting eight nationally significant species from eight locations across northern Australia. Their dataset comprised 17,509 images with a resolution of 256 × 256 pixels, with each class containing between 1009 and 1125 images. Using benchmark models Inception-v3 and ResNet-50, they achieved classification accuracies of 95.1% and 95.7%, respectively, establishing a strong baseline for weed species classification. Madsen et al.23 developed detection and classification algorithms for 47 common weed species in Denmark. Their dataset of 7590 records included both monocotyledonous and dicotyledonous weeds grown in semi-field settings to mimic natural growth conditions. This work highlighted the importance of training robust deep learning models on diverse datasets to ensure accurate weed detection and classification across various agricultural environments, thereby supporting more precise weed management strategies. Sudars et al.24 proposed identifying six food crops and eight weed species, using a total of 1,118 manually annotated images captured under controlled and field conditions at various growth stages. The study employed three RGB digital cameras, demonstrating the potential of multi-camera systems to enhance detection accuracy. This innovative approach underscores the value of integrating advanced imaging techniques with deep learning for improved crop and weed classification.

Previous studies have highlighted the effectiveness of shape-based features, such as AR and Solidity, in various agricultural applications, particularly for weed detection and classification. For example, Kanda et al.25 incorporated AR and Solidity into a machine learning framework for weed detection in wheat fields, demonstrating improved performance compared to models relying solely on color features. In remote sensing, Wang et al.26 emphasized the robustness of shape features under diverse environmental conditions, showcasing their adaptability. Furthermore, Li, Zhang, and Wang27 successfully integrated AR and Solidity into a deep learning model for plant disease detection, significantly enhancing the model's discriminative capabilities.

More recently, researchers have focused on the application of drone technology to smart agriculture tasks such as weed detection and herbicide spraying28,29. Agricultural spraying using drone technology has been reported to improve efficiency by as much as 60 times29. Shahi et al.30, for instance, applied drone imagery to cotton field weed detection using a U-Net model trained on 201 images. The model achieved an 88.20% precision rate, 88.97% recall rate, 88.24% F1-score, and a mean Intersection over Union (IoU) of 56.21%. That study did not address thin-leaf imagery and used a small dataset. In28, UAV images were applied to maize weed detection, where a CNN recorded an F1-score of 82%, a 75% recall rate, and a 90% precision rate. The main purpose of that research was to compare the performance of various multispectral sensors rather than to apply enhancement techniques. In31, the YOLOv5 algorithm was employed to resolve the conflicting requirements of high accuracy and low computational overhead in weed detection in grass fields. The model recorded a mean Average Precision (mAP) of 86.80%. Ong, Teo, and Sia32 employed a CNN to detect weeds in Chinese cabbage fields from UAV images and compared its performance with that of a Random Forest (RF) model. They revealed that the CNN and RF achieved accuracies of 92.41% and 86.18%, respectively, showing that deep learning can achieve higher accuracy than traditional machine learning (RF). Similarly, Lu et al.33 utilized UAV images for potato weed detection using YOLOv8 and Mask R-CNN, achieving mAP values of 0.902 and 0.920, respectively.

The above studies, including17–20 and23, provide a solid foundation for advancing weed detection and herbicide spraying technologies. However, none of these studies explored different optimization techniques to obtain accurate results for Teff crops. In addition, their fieldwork data were not collected across varied climates, locations, and lighting levels, factors that are essential for generating diverse UAV-based datasets.

Following this, the current research aims to fill the noted gap by developing sophisticated deep learning models that combine data augmentation with learning techniques (k-fold cross-validation, early stopping, regularization, and dropout) and optimizers (Adam, SGD, RMSprop, and Nadam). Teff's distinctive morphological characteristics, including its compact growth and the resemblance of its leaves to numerous weeds, require demanding models built on carefully segmented images and optimized with state-of-the-art techniques, a combination that had not been studied before. As Teff is widely cultivated in Ethiopia and, to a lesser extent, in the USA, integrating state-of-the-art deep learning models with drone-based real-time herbicide spraying can bring substantial advances in agricultural productivity.

Methodology

This study employed Design Science Research (DSR) to develop and evaluate a fine-tuned deep learning model for Teff-weed detection using datasets collected by drones. DSR, a research methodology focused on creating and evaluating IT artifacts to address organizational challenges, aims to enhance understanding while delivering practical solutions. This approach is particularly suitable for the development of innovative technologies, such as UAV-based herbicide applications, which demand both robust theoretical foundations and practical validation34. The study not only designs an advanced deep learning model but also rigorously tests and refines it to ensure its applicability in real-world agricultural scenarios for Teff weed identification. This methodology bridges the gap between theoretical research and practical implementation, contributing to more efficient and sustainable agricultural practices in Ethiopia.

Proposed system architecture

Figure 1 illustrates the proposed system architecture for detecting and classifying weeds in Teff crops using deep learning techniques. The process begins with image acquisition, where images of Teff crops and weeds are captured using a high-resolution drone camera. These images then undergo a preprocessing phase, which includes resizing, enhancement, filtering, normalization, and annotation to ensure the data is suitable for model training. After preprocessing, the dataset is divided into training and testing subsets with varying ratios to identify the optimal split. During the training phase, feature extraction and pre-designed models are utilized to extract relevant features and leverage pre-trained networks, which are then refined for the specific task of weed detection. These extracted features and pre-designed models are integrated into a backbone architecture that supports both bounding box detection and class label classification. The model undergoes performance evaluation, and if it achieves the desired accuracy, it is exported as an .h5 file for deployment. If the performance falls short, the model enters a fine-tuning phase, where hyperparameters and architectural adjustments are made to enhance its accuracy and robustness. This iterative process of evaluation and fine-tuning continues until the model meets the optimal performance criteria, ensuring its readiness for real-world deployment.

Fig. 1. Proposed system architecture.

Image acquisition

Images of Teff and weeds were collected at various growth stages using a drone, both before and after rainy and sunny periods. The aerial perspective enabled effective monitoring of crop progress, identification of key leaf characteristics, and detection of weed density. To introduce variability, images were captured under different lighting conditions and at various altitudes, as detailed in Table 1. This comprehensive dataset strengthens the deep learning model by ensuring its robustness across diverse environmental conditions, thereby enhancing its efficiency and accuracy in real-world applications.

Table 1.

Drone flight terms and conditions.

Flight session  Time (approx.)  Soil condition  Sunlight condition  Date captured  Teff field
1st 3:00 Dry Sunny July 12, 2023 UoG agro lab
2nd 3:00 Dry Cloudy August 1, 2023 Farmer’s 1st Teff field
3rd 10:00 Dry Near sunset August 1, 2023 Farmer’s 2nd Teff field
4th 8:00 Wet Sunny+cloudy July 15, 2023 UoG agro lab

For this study, Teff crops were planted during the summer of 2023 at the University of Gondar (UoG) Agricultural Research Farm, located at latitude 12.580178 and longitude 37.441197. Four experimental fields, each measuring 4 × 4 m, were established, as illustrated in Fig. 2. The Teff was sown manually, maintaining a standard row distance of 20 cm and appropriate intra-row spacing, as recommended by Meseret Gezahegn and Tamiru35. Planting was conducted on a level surface to mitigate the influence of topographical variations on the results. By the time of image acquisition in Field 1 (right side), the Teff crops had matured sufficiently, facilitating an accurate evaluation of the model's performance.

Fig. 2. Planting of Teff field 1 experiment.

Additionally, an extensive dataset of field images was gathered from two farmlands in Northern Amhara, Azezo Kebele, located at latitude 12.539057 and longitude 37.385636. These images were captured using a DJI Air 2S UAV equipped with high-resolution RGB cameras, as shown in Fig. 3. The images, taken at a resolution of 3056 × 3056 pixels, were collected during flights conducted at altitudes of 1–2 m above the ground. Captures were performed under varying surface and lighting conditions, including sunny and post-rain scenarios. The resulting dataset comprises 3,250 RGB images, some containing noise, stored in JPG format. This diverse dataset facilitates a thorough evaluation of the model's performance under different field conditions and enhances its potential for generalizability across various agricultural environments.

Fig. 3. Image acquisition using drones.

As detailed in Table 1, four drone flight sessions conducted over three Teff fields (Sessions 1–4) captured a diverse range of natural conditions. These included varying sunlight levels, wet and dry soil conditions, crop shadows in different orientations, and Teff crops at multiple growth stages. Such heterogeneous conditions form an optimal dataset for deep learning applications, promoting the development of robust models capable of generalizing across diverse real-world scenarios. Supporting this, Murad et al.8 emphasize that acquiring image data at different growth stages not only aids in tracking crop development but also facilitates the identification of critical leaf features, offering valuable insights for precision agriculture solutions.

Figure 4 illustrates the various levels of Teff crop and weed growth, as captured during experimental data collection with a drone for weed detection. The images depict varying soil conditions and crop growth stages, offering a comprehensive dataset for developing and validating weed detection algorithms across diverse agricultural settings. Additionally, a dataset of weed species was compiled through the manual classification of individual weeds in the images, assisted by experts. This dataset included images of various weed species categorized as one group and Teff crops as another.

Fig. 4. Different environmental conditions (rainy and sunny) and growth levels.

Data preprocessing

Image annotation and image filtering

From the originally collected dataset of 3250 images, many defective images were discarded, resulting in a total of 1308 usable images that were annotated using makesense.ai. Annotations involved drawing rectangles around instances of 'Teff' and 'Weed,' which were then saved in text files. The rectangles were confined to smaller regions within the high-resolution raw images, as defined by bounding boxes in the text files. The dataset was created by cropping the raw images according to these bounding boxes; data for each image was extracted by projecting a rectangle from the raw image space onto the region delineated by its corresponding bounding box. To reduce computational requirements and enhance model performance, the following preprocessing steps were implemented. First, the images were resized from their original 3056 × 3056 pixels to 512 × 512 pixels. Second, pixel intensities were scaled to the range [0, 1] to normalize the dynamic range. Additionally, the unsharp mask filter, as discussed by Al-Ameen, Muttar, and Al-Badrani36, was employed, with its radius adjusted to achieve the appropriate sharpness for images with varying levels of motion blur. This non-destructive editing feature allowed adjustments to be fine-tuned without permanently altering the original image data, as illustrated in Fig. 5.
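The cropping-and-rescaling step described above can be sketched in Python with OpenCV. This is a minimal illustration, assuming the makesense.ai annotations were exported in YOLO text format (one normalized `class x_center y_center width height` line per box) and that class 0 denotes Teff and class 1 denotes weed; the function and file layout are illustrative, not the authors' exact code.

```python
import cv2
import numpy as np

def crop_and_preprocess(image_path, label_path, out_size=512):
    """Crop annotated regions from a raw drone frame and rescale them.

    Assumes YOLO-style annotations: 'class x_center y_center width height'
    per line, with coordinates normalized to [0, 1] (an assumption)."""
    image = cv2.imread(image_path)               # raw 3056 x 3056 frame
    h, w = image.shape[:2]
    crops, labels = [], []
    with open(label_path) as f:
        for line in f:
            cls, xc, yc, bw, bh = map(float, line.split())
            # project the normalized box back to pixel space
            x1 = int((xc - bw / 2) * w); y1 = int((yc - bh / 2) * h)
            x2 = int((xc + bw / 2) * w); y2 = int((yc + bh / 2) * h)
            crop = image[max(y1, 0):y2, max(x1, 0):x2]
            crop = cv2.resize(crop, (out_size, out_size))      # 512 x 512
            crops.append(crop.astype(np.float32) / 255.0)      # scale to [0, 1]
            labels.append(int(cls))          # assumed: 0 = Teff, 1 = weed
    return np.array(crops), np.array(labels)
```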

Fig. 5. Drone motion blur and filtering by unsharp mask.

Image preprocessing

The process of obtaining more suitable datasets for Teff weed detection involves several steps, each utilizing mathematical formulations to extract meaningful information from images. This is essential because Teff leaves are extremely narrow and require extensive preprocessing. Key preprocessing steps include resizing, noise filtering, and color normalization. For instance, Eq. (1) is used to resize images to a standard dimension:

$I_{\mathrm{resized}} = \mathrm{resize}(I,\ W_{\mathrm{new}},\ H_{\mathrm{new}})$ (1)

Here, $I$ represents the input image, which is resized to a new standardized width $W_{\mathrm{new}}$ and height $H_{\mathrm{new}}$. This resizing ensures uniform input dimensions for the deep learning models, facilitating consistent feature extraction. $I_{\mathrm{resized}}$ refers to the newly resized image derived from the original image. For this work, Gaussian filtering was applied to reduce noise and minimize overfitting in model development37, as illustrated in Eq. (2).

$I_{\mathrm{blur}}(x, y) = (I * G_{\sigma})(x, y), \qquad G_{\sigma}(x, y) = \dfrac{1}{2\pi\sigma^{2}}\, e^{-\frac{x^{2} + y^{2}}{2\sigma^{2}}}$ (2)

Here, $G_{\sigma}(x, y)$ is the Gaussian kernel applied to reduce noise in the image, $\sigma$ is the standard deviation of the Gaussian distribution, $x$ and $y$ are pixel coordinates, and $*$ denotes convolution. In Eq. (3), we apply the color normalization outlined in the report by Nair38:

$I_{\mathrm{norm}}^{(c)} = \dfrac{I^{(c)} - \mu_{c}}{\sigma_{c}}, \qquad c \in \{R, G, B\}$ (3)

Here, $I_{\mathrm{norm}}^{(c)}$ normalizes each color channel (Red, Green, Blue) by subtracting the mean ($\mu_{c}$) and dividing by the standard deviation ($\sigma_{c}$) of the $c$-th channel. This process standardizes pixel values across different images. By applying these mathematical formulations and techniques, relevant features for weed detection in Teff crops can be effectively extracted, facilitating accurate and efficient herbicide application using UAVs.
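A compact sketch of Eqs. (2) and (3) using OpenCV and NumPy is given below; the kernel parameter `sigma` is an assumed value, since the paper does not report the exact filter settings.

```python
import cv2
import numpy as np

def denoise_and_normalize(img, sigma=1.0):
    """Apply Eq. (2) (Gaussian smoothing) then Eq. (3) (per-channel
    color normalization). `sigma` is an assumption, not a reported value."""
    # Eq. (2): convolve with a Gaussian kernel; ksize=(0, 0) lets OpenCV
    # derive the kernel size from sigma
    blurred = cv2.GaussianBlur(img.astype(np.float32), ksize=(0, 0), sigmaX=sigma)
    # Eq. (3): standardize each RGB channel to zero mean, unit variance
    mean = blurred.mean(axis=(0, 1), keepdims=True)
    std = blurred.std(axis=(0, 1), keepdims=True) + 1e-8
    return (blurred - mean) / std
```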

Feature extraction

In this study, feature extraction was performed on images of Teff and weeds. However, prior to feature extraction, the images captured by UAVs underwent preprocessing to enhance quality and emphasize relevant information. This preprocessing included image normalization ($I_{\mathrm{norm}}$, Eq. (4)) and image segmentation ($I_{\mathrm{seg}}$, Eq. (5)).

$I_{\mathrm{norm}} = \dfrac{I - \mu}{\sigma}$ (4)

where $I$ is the original image, $\mu$ is the mean intensity, and $\sigma$ is the standard deviation.

$I_{\mathrm{seg}} = f_{\mathrm{seg}}(I_{\mathrm{norm}})$ (5)

where $f_{\mathrm{seg}}$ is a segmentation function applied to the normalized image to separate Teff and weed from the background.

Using the normalized images, feature extraction involves deriving important features from segmented images to distinguish weeds from Teff. In a deep learning context, features can be automatically learned through convolutional layers via the convolution operation, as illustrated in Eq. (6):

$F(i, j) = (I * K)(i, j) = \sum_{m}\sum_{n} I(i + m,\, j + n)\, K(m, n)$ (6)

where $I$ represents the input image, $K$ denotes the convolution kernel, and $(i, j)$ are the spatial coordinates. Convolutional layers in a deep learning model learn spatial hierarchies of features, such as edges, textures, and shapes, which are critical for detecting weeds and assessing plant health. During feature extraction, the pooling operation is applied, where max-pooling layers reduce the dimensionality of the features while preserving the most significant ones, thereby enhancing the model's ability to generalize. It is shown in Eq. (7):

$P(i, j) = \max_{(m, n) \in \mathcal{R}_{i,j}} F(m, n)$ (7), where $\mathcal{R}_{i,j}$ denotes the pooling window centered at $(i, j)$.
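The pairing of Eqs. (6) and (7) can be illustrated with a minimal Keras snippet; the layer sizes here are arbitrary and only demonstrate how convolution followed by max-pooling reduces spatial resolution while retaining the strongest responses.

```python
import tensorflow as tf

# One 512 x 512 RGB input; a 3x3 convolution (Eq. 6) learns local features,
# and 2x2 max-pooling (Eq. 7) halves the spatial resolution.
x = tf.random.uniform((1, 512, 512, 3))          # stand-in for a real image
conv = tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu")
pool = tf.keras.layers.MaxPooling2D(pool_size=2)
features = pool(conv(x))
print(features.shape)                            # (1, 255, 255, 32)
```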

Convolution and pooling operations were applied for feature extraction across all the proposed algorithms: MobileNetV2, InceptionResNetV2, DenseNet201, and VGG16. The dataset was partitioned with the train_test_split function into 80% for training and 20% for testing, with 262 images (10% of the total) further held out from the training portion for validation, as summarized in Table 2; a code sketch of this split follows Table 2.

Table 2.

Augmented data splitting for model training.

Property  Type  Size  Share
Data size  Train  1831  70% of the total dataset
Data size  Test  523  20% of the total dataset
Data size  Validation  262  10% of the total dataset
Data size  Total dataset (augmented)  2616
Data class  Labels (classes)  2  Teff and weed
Data class  Bounding box  4  coordinates per box
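The split in Table 2 can be reproduced with scikit-learn as below, assuming `images` and `labels` are the arrays produced by the preprocessing steps above; the random seed is an assumption.

```python
from sklearn.model_selection import train_test_split

# 20% test split (523 of the 2,616 augmented images), then hold out
# 262 images (10% of the total) from the training portion for validation,
# matching Table 2: 1831 train / 262 validation / 523 test.
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=523, random_state=42, stratify=labels)
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=262, random_state=42, stratify=y_train)
print(len(X_train), len(X_val), len(X_test))  # 1831 262 523
```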

Modeling usages

In this paper, we use pre-trained models with transfer learning techniques (MobileNetV2, InceptionResNetV2, DenseNet201, VGG16, ResNet50, Fast R-CNN, YOLOv8), then fine-tune and train these networks to develop a new model. Our approach leverages the non-uniform visual characteristics of Teff and weed species across different fields by using different CNN backbone architectures, connected to model 'heads' for detecting class labels and bounding boxes. Key components include fine-tuned Keras model backbones for detection, the ReLU activation function, a Sigmoid output layer as a classifier, and two heads for class label and bounding box detection; a code sketch of this setup follows Table 3. The hyperparameters and their values used for the pre-trained models in transfer learning are shown in Table 3; these values were selected after exhaustive testing. In addition to the CNN architectures, YOLOv8 was used for comparison to demonstrate the robustness of the proposed method39.

Table 3.

Hyper-parameter description.

Hyperparameter  Value
Optimizer  Nadam
Activation  ReLU, sigmoid
Batch size  32
Epochs  100
Learning rate  0.0001
Loss function  Class label: binary cross-entropy; Bounding box: mean squared error
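A minimal sketch of the two-headed transfer-learning setup, wired with the Table 3 hyperparameters, is shown below. The dense-layer width and dropout rate are assumptions; the paper does not disclose the exact head architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Frozen MobileNetV2 backbone with two heads: a sigmoid classifier
# (Teff vs. weed) and a 4-value bounding-box regressor, compiled with
# the Table 3 settings (Nadam, lr = 0.0001, binary cross-entropy for
# the class head, MSE for the box head).
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(512, 512, 3), include_top=False, weights="imagenet")
backbone.trainable = False            # unfrozen later during fine-tuning

inputs = layers.Input(shape=(512, 512, 3))
x = layers.GlobalAveragePooling2D()(backbone(inputs))
x = layers.Dense(128, activation="relu")(x)   # assumed head width
x = layers.Dropout(0.5)(x)                    # dropout, per the optimization study

class_head = layers.Dense(1, activation="sigmoid", name="class_label")(x)
bbox_head = layers.Dense(4, activation="sigmoid", name="bounding_box")(x)

model = Model(inputs, [class_head, bbox_head])
model.compile(
    optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4),
    loss={"class_label": "binary_crossentropy", "bounding_box": "mse"})
# After training to the desired accuracy: model.save("model.h5")
```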

Model optimizations

Aspect Ratio and Solidity were employed to differentiate the uniform shapes of Teff plants from the irregular shapes of weeds. These shape-based features provide significant advantages in accurately identifying and classifying vegetation, thereby enhancing the efficiency of weed detection systems. The robustness of AR and Solidity to size variations ensures consistent performance across different growth stages and plant sizes, making these features particularly effective in real-world applications40. In Teff plants, this ratio tends to be more uniform, whereas weeds often exhibit greater variability. The mathematical formula of AR is shown in Eq. (8):

$AR = \dfrac{W_{\mathrm{bbox}}}{H_{\mathrm{bbox}}}$ (8), where $W_{\mathrm{bbox}}$ and $H_{\mathrm{bbox}}$ are the width and height of the plant's bounding box.

We also use solidity ($S$), the ratio of the area of the object to the area of its convex hull41, as shown in Eq. (9).

$S = \dfrac{A_{\mathrm{object}}}{A_{\mathrm{convex\ hull}}}$ (9)

The convex hull of Teff crops typically exhibits higher solidity values due to their uniform and compact shape, whereas the majority of weeds have lower solidity due to their irregular and more dispersed shapes. Our methodology involves preprocessing the images to segment the plants from the background and then identifying the contours of the segmented plants. We then compute the bounding box and convex hull for each plant to extract the AR and Solidity features. These features are combined with other visual characteristics to form a comprehensive feature vector that is used to train a machine-learning model for detecting and classifying Teff crops and weeds. The effectiveness of our approach is validated through extensive experiments, demonstrating the potential of AR and Solidity in improving weed detection and enabling precise herbicide application in agricultural fields.
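The AR and Solidity extraction described above maps directly onto OpenCV's contour utilities. Below is a minimal sketch, assuming a binary segmentation mask has already been produced; the function name is illustrative.

```python
import cv2

def shape_features(binary_mask):
    """Compute aspect ratio (Eq. 8) and solidity (Eq. 9) for each
    segmented plant contour in a binary mask."""
    contours, _ = cv2.findContours(
        binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    features = []
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        aspect_ratio = w / float(h)                        # Eq. (8)
        area = cv2.contourArea(cnt)
        hull_area = cv2.contourArea(cv2.convexHull(cnt))
        solidity = area / hull_area if hull_area > 0 else 0.0  # Eq. (9)
        features.append((aspect_ratio, solidity))
    return features
```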

Once the datasets had been normalized, data augmentation was applied at different ratios of the original data: 25%, 50%, 75%, and 100%. As a result, the 1308 images increased by 327 images for 25%, 654 for 50%, 981 for 75%, and 1308 for 100%. The performance of the MobileNetV2 model was then computed for each data size, and the optimum was selected. The 100% augmentation (2616 images) was used for this study under different optimization techniques, as shown in Table 4. Different optimizers, namely Adam, SGD, RMSprop, and Nadam, were combined with different learning techniques, namely k-fold cross-validation (k = 5), dropout regularization, early stopping, and lasso regularization. The optimum results were obtained by combining k-fold and dropout regularization with the Nadam optimizer, each yielding an accuracy of 96.40%. Since dropout is more efficient than the k-fold technique, it was used for this study. The overall results show that pairing optimizers with an overfitting-minimizing technique improves model performance, as illustrated in Tables 4 and 5.
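The ratio-based augmentation can be sketched as follows; the specific transforms (flips, rotations, zooms) are assumptions, since the paper does not enumerate the augmentation operations used.

```python
import tensorflow as tf

# Assumed augmentation pipeline; the paper does not list the transforms.
augmenter = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.2),
])

def augment_dataset(images, ratio):
    """Append augmented copies amounting to `ratio` of the originals,
    e.g. ratio=1.0 doubles 1,308 images to 2,616 (ratio <= 1.0 assumed)."""
    n_new = int(len(images) * ratio)
    extra = augmenter(images[:n_new], training=True)
    return tf.concat([images, extra], axis=0)
```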

Table 4.

MobileNetV2 accuracies with different optimizers and learning techniques.

Optimizer  K-fold (%)  Dropout (%)  Early stopping (%)  Regularization (%)  Mixed (%)
Adam  95.74  94.62  93.33  95.11  92.12
SGD  95.42  92.32  90.21  95.34  88.90
RMSprop  95.86  95.01  91.91  92.11  85.12
Nadam  96.40  96.40  90.00  94.88  87.57

Table 5.

Training, validation and testing stage comparisons before fine-tuning.

Model  Training accuracy (%)  Validation accuracy (%)  Testing accuracy (%)
MobileNetV2  88.5  88.1  87.7
InceptionResNetV2  81.2  81.2  65.6
DenseNet201  86.6  80.4  79.1
VGG16  77.1  66.2  65.3
ResNet50  99.0  89.4  81.4
Fast R-CNN  89.2  65.3  56.9
YOLOv8  98.6  93.0  90.4

Model evaluations

The models are evaluated using different metrics: accuracy, recall, precision, F1-score, mAP, and the COCO mAP score. We use COCO mAP because it reflects accuracy across multiple IoU thresholds, ranging from 0.5 to 0.95, rather than the single 0.5 threshold. The COCO mAP metric measures the accuracy of the model in detecting bounding boxes of objects42. A common criterion for counting a detection as positive is an IoU of at least 0.5:

$\mathrm{IoU} = \dfrac{\mathrm{area}(B_{p} \cap B_{gt})}{\mathrm{area}(B_{p} \cup B_{gt})}$ (10)

$AP = \int_{0}^{1} p(r)\, dr$ (11)

where $B_{p}$ and $B_{gt}$ denote the predicted and ground-truth bounding boxes, and $p(r)$ is the precision at recall level $r$.

The proposed evaluation technique, COCO mAP, is the average of APs calculated across all IoU thresholds. In object detection, average precision (AP or mAP) has become a standard metric to represent detection, as shown in Eq. (12)42.

$\mathrm{mAP}_{\mathrm{COCO}} = \dfrac{1}{T \cdot C} \sum_{t=1}^{T} \sum_{c=1}^{C} AP_{c,t}$ (12)

where $T$ is the number of IoU thresholds (ten in COCO mAP), $C$ is the number of classes, and $AP_{c,t}$ is the average precision for class $c$ at IoU threshold $t$.
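Equations (10)–(12) reduce to a few lines of NumPy; the sketch below assumes per-class AP values have already been computed at each threshold.

```python
import numpy as np

def iou(a, b):
    """Eq. (10): intersection over union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def coco_map(ap_table):
    """Eq. (12): mean of AP_{c,t} over T thresholds and C classes.
    ap_table has shape (T, C); here T = 10, C = 2 (Teff, weed)."""
    return float(np.asarray(ap_table).mean())

iou_thresholds = np.arange(0.50, 1.00, 0.05)  # the ten COCO thresholds
```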

Once our model achieved the desired accuracy, we exported it to the “.h5” file format, allowing for seamless integration into the drone’s onboard systems for further operations, ensuring real-time decision-making, and enhancing the overall efficiency of the drone’s operations.

Experimental setups, results and discussion

Experimental setup

In this work, experiments were conducted using the free cloud service Google Colab, which provides 78.2 GB of disk space, Python 3.11, and a Google Compute Engine backend (GPU) with 12.7 GB of system RAM and 15 GB of GPU RAM. Additionally, experiments were also performed on an HP laptop with an Intel Core i5-5200 CPU, 8 GB of RAM, and a 64-bit Windows operating system. Programming tasks were completed using Python and OpenCV, and the detection of Teff and weeds based on their leaves was developed using four Keras deep learning models.

As indicated in Table 4, the optimum results were found when the Nadam optimizer was combined with the dropout learning technique. Therefore, each report in this section is based on the combination of that learning technique and optimizer. This combination is evaluated across different CNN architectures before and after applying data augmentation. At the end of the experiment, models applied to weed detection in crops other than Teff are compared. Moreover, YOLOv8 was evaluated to benchmark MobileNetV2 against other state-of-the-art algorithms.

Evaluating the models before fine-tuning

Table 5 summarizes the results of an experiment evaluating the performance of seven deep learning models prior to fine-tuning. The models were assessed on their ability to classify and detect Teff crops and weeds during the training, validation, and testing phases. During the training phase, YOLOv8 exhibited the best performance, achieving the highest classification accuracy of 98.6%, significantly outperforming the other models. It also achieved more than 90% accuracy during the validation and testing phases, although the gap between its training and testing accuracies was large. MobileNetV2 achieved an accuracy of 87.7%, the highest among all models except YOLOv8, and demonstrated lower variability across training, validation, and testing accuracies. ResNet50 and Fast R-CNN had higher accuracies than MobileNetV2 during the training phase; however, their performance was lower than MobileNetV2's during testing, indicating an inability to control overfitting in Teff weed detection. DenseNet201, in comparison, achieved accuracies of 86.6%, 80.4%, and 79.1% during the training, validation, and testing phases, lower than YOLOv8, MobileNetV2, and ResNet50. InceptionResNetV2 demonstrated an accuracy of 81.2% in both the training and validation phases; however, its performance dropped to 65.6% during the testing phase. VGG16 showed the weakest performance across the training, validation, and testing phases.

Figure 6 presents loss curves showing the training and validation loss trends for four deep learning models: MobileNetV2, DenseNet201, InceptionResNetV2, and VGG16. MobileNetV2 demonstrates stable convergence throughout the epochs, with both training and validation losses decreasing steadily and closely tracking each other, indicating good generalization and minimal overfitting compared to the other models. The VGG16 training loss declines steadily; however, after a few epochs its validation curve shows significant variability, indicating a high level of overfitting and poor generalization. InceptionResNetV2 has relatively high final loss values compared to MobileNetV2, indicating slower convergence, although both its training and validation losses decline steadily with low variability between epochs. DenseNet201 shows a rapid drop in both training and validation losses early in training, with the two curves remaining close throughout, suggesting efficient learning and moderate generalization. Overall, MobileNetV2 and DenseNet201 appear to be the most stable and generalizable models prior to fine-tuning, while VGG16 exhibits signs of overfitting and instability.

Fig. 6. Loss graphs of sample models before fine-tuning.

Figure 7 depicts the bounding box detection performance of four deep learning models in weed detection. Among these models, MobileNetV2 (Fig. 7A) achieved the highest precision, effectively adapting its bounding boxes to the size and shape of the weeds, thereby demonstrating superior detection capabilities. DenseNet201 (Fig. 7B) performed reasonably well, although its bounding boxes were slightly less accurate compared to MobileNetV2. In contrast, VGG16 (Fig. 7D) exhibited the poorest performance, struggling to accurately identify and bound weeds, which highlights its limitations in this detection task.

Fig. 7. Bounding box base model for weed detections (A = MobileNetV2, B = DenseNet201, C = InceptionResNetV2, D = VGG16).

Table 6 summarizes the testing performance of MobileNetV2, DenseNet201, InceptionResNetV2, VGG16, ResNet50, Fast R-CNN, and YOLOv8 prior to fine-tuning. YOLOv8 and MobileNetV2 achieved the highest baseline accuracies at 90.4% and 87.7%, respectively, highlighting their superior performance in the initial testing phase. MobileNetV2's strength, particularly in maintaining low loss values and high accuracy, underscores its effectiveness in both learning and generalizing across different stages of model evaluation. InceptionResNetV2 and DenseNet201 recorded accuracies of 65.6% and 79.1%, respectively, while VGG16 exhibited the lowest accuracy at 65.3%. This performance decline is likely attributable to overfitting, where models excel on training and validation data but fail to generalize to unseen test data. Furthermore, the challenging nature of Teff crop and weed detection likely contributed to the observed performance gap. These challenges include occluded objects, small object sizes, and instances where objects occupy only a minimal portion of the image frame, all of which make accurate detection more difficult.

Table 6.

Different model performances in various metrics before fine-tuning.

Algorithm  Precision (%)  Recall (%)  F1-score (%)  Accuracy (%)  Inference time (s)
MobileNetV2  84.12  87.4  87.4  87.7  0.09
InceptionResNetV2  65.6  65.6  65.6  65.6  0.07
DenseNet201  65.0  83.8  75.6  79.1  0.17
VGG16  58.0  65.3  65.3  65.3  0.08
ResNet50  77.3  81.4  81.4  81.4  0.25
Fast R-CNN  25.8  50.8  34.2  56.9  7.30
YOLOv8  86.1  93.4  87.0  90.4  0.12

The performance of Fast R-CNN is lower than that of the other algorithms in precision, recall, F1-score, and accuracy, and it has the slowest inference time at 7.30 s (Table 6); its overall learning capacity and efficiency are the poorest. In the inference column, InceptionResNetV2 and VGG16 record the shortest computing times (0.07 s and 0.08 s), and MobileNetV2 also has a smaller inference time than YOLOv8, Fast R-CNN, DenseNet201, and ResNet50.

Evaluating the models after fine-tuning

Table 7 shows a detailed comparison of various deep learning models after fine-tuning, using the same metrics and phases as evaluated before fine-tuning. In the training phase, VGG16, DenseNet201, MobileNetV2, and InceptionResNetV2 achieved extraordinary accuracies of 99.1%, 98.9%, 98.7%, and 96.8%, respectively. These results indicate the models' effectiveness in learning from the training data with minimal errors. However, the validation accuracies dropped to 97.3%, 96.5%, and 95.2% for MobileNetV2, DenseNet201, and VGG16, respectively. For InceptionResNetV2, the gap between training and validation accuracy exceeded 7%, indicating high overfitting. In the testing phase, MobileNetV2 performed better than the other six models. Furthermore, its testing accuracy is close to both its training and validation accuracies, suggesting that MobileNetV2 exhibits stronger generalization capability than the other models in Teff weed detection.

Table 7.

Training, validation and testing stage comparisons after fine-tuning.

Model  Training accuracy (%)  Validation accuracy (%)  Testing accuracy (%)
MobileNetV2  98.7  97.3  96.4
DenseNet201  98.9  96.5  95.4
InceptionResNetV2  96.8  89.5  95.8
VGG16  99.1  95.2  94.3
ResNet50  88.0  79.2  87.7
Fast R-CNN  57.4  51.0  50.8
YOLOv8  91.7  77.9  83.0

After fine-tuning, YOLOv8 showed reduced performance compared to its pre-fine-tuning training phase, and its validation and testing accuracies were not as strong as those of MobileNetV2. InceptionResNetV2 achieved a training accuracy of 96.8%; its accuracy dropped to 89.5% during validation but reached 95.8% during testing after applying dropout and fine-tuning, a significant improvement compared to using the optimization technique alone. DenseNet201, which has lower variation across its training, validation, and testing accuracies, showed more stable performance than InceptionResNetV2. Overall, the results for the seven models show improved generalization capabilities owing to the proper combination of different optimizers and fine-tuning techniques.

Figure 8 illustrates the object detection performance of the models after fine-tuning. MobileNetV2 (Fig. 8a) achieved the highest detection accuracy, with well-aligned bounding boxes that closely match the actual objects, highlighting the model’s superior precision following fine-tuning. InceptionResNetV2 (Fig. 8b) also performed effectively, though its bounding box alignment was slightly less precise than MobileNetV2. While it demonstrated strong detection accuracy, it faced some challenges with exact localization. DenseNet201 (Fig. 8c) exhibited moderate performance, with bounding boxes that were less tightly fitted. This suggests that although it achieved general accuracy in detection, it lacked the precision of the leading models. VGG16 (Fig. 8d) struggled significantly, producing poorly aligned bounding boxes that reflect considerable difficulties in object detection and localization, even after fine-tuning. This comparison underscores the effectiveness of MobileNetV2 in object detection tasks post-fine-tuning, with InceptionResNetV2 and DenseNet201 showing reasonable performance, while VGG16 remained the least effective model.

Fig. 8. Object detection after fine-tuning: (a) MobileNetV2, (b) InceptionResNetV2, (c) DenseNet201, and (d) VGG16 models.

Table 8 outlines a comparative analysis of seven deep learning models (MobileNetV2, DenseNet201, InceptionResNetV2, VGG16, ResNet50, Fast R-CNN, and YOLOv8) evaluated on precision, recall, F1-score, accuracy, and inference time after employing fine-tuning, dropout, and data augmentation. Of the models evaluated, MobileNetV2 demonstrated the best performance across all evaluation metrics, achieving precision, recall, F1-score, and accuracy values of 96.2%, 97.5%, 96.2%, and 96.4%, respectively, while also logging the lowest inference time (0.07 s) of all models except VGG16, thus combining exemplary predictive performance with computational efficiency. DenseNet201 came in a close second, achieving 95.8% on all classification metrics; however, it showed a higher inference time (0.38 s), reflecting excellent performance at a moderate sacrifice in processing speed. InceptionResNetV2 showed slightly lower values (95.4%) across these metrics, with a reasonable inference time (0.27 s), making it a viable choice for applications where time sensitivity is a moderate concern.

Table 8.

Comparison of different models across metrics after applying fine-tuning, dropout, and data augmentation.

Algorithm  Precision (%)  Recall (%)  F1-score (%)  Accuracy (%)  Inference time (s)
MobileNetV2  96.2  97.5  96.2  96.4  0.07
DenseNet201  95.8  95.8  95.8  95.4  0.38
InceptionResNetV2  95.4  95.4  95.4  95.8  0.27
VGG16  94.4  94.3  94.3  94.3  0.05
ResNet50  88.1  87.9  87.7  87.7  0.81
Fast R-CNN  25.8  50.8  34.1  50.8  4.22
YOLOv8  86.12  69.2  83.0  83.0  0.33

VGG16 achieved a precision of 94.4% and 94.3% for recall, F1-score, and accuracy, with the shortest inference time (0.05 s) of all models, although its classification performance was weaker than the top three models (MobileNetV2, DenseNet201, and InceptionResNetV2). ResNet50, at 87.7% accuracy, was considerably weaker than most of the models, indicating lower effectiveness in this fine-tuned scenario. YOLOv8, although generally powerful in object detection tasks, obtained a comparatively lower accuracy of 83.0%, with its lower recall (69.2%) likely a contributing factor. Lastly, Fast R-CNN performed poorly in terms of classification (F1-score of 34.1%, accuracy of 50.8%) and had the longest inference time (4.22 s), reflecting inefficiency and limited applicability to this task.

These findings highlight the better generalization and efficiency of MobileNetV2 following fine-tuning and optimization, rendering it the best and most deployable model for Teff weed detection. DenseNet201 and InceptionResNetV2 also demonstrate high potential, albeit at a modest computational overhead. VGG16 is computationally light but provides relatively lower accuracy, while Fast R-CNN is still the worst performing in both accuracy and speed.

Table 9 compares the mAP of YOLOv8 and MobileNetV2 at different IoU thresholds, namely COCO mAP (averaged over IoU levels from 0.5 to 0.95) and fixed-threshold mAP, before and after fine-tuning. Before fine-tuning, YOLOv8 recorded a higher COCO mAP (58.00%) and standard mAP (90.50%) than MobileNetV2, which scored 44.15% and 87.12%, respectively. After fine-tuning, however, MobileNetV2 improved markedly, its COCO mAP rising to 61.50% and its mAP to 95.80%, surpassing YOLOv8 on both measures, while YOLOv8 decreased to a COCO mAP of 51.13% and an mAP of 80.81%. This shows that MobileNetV2 responds better to fine-tuning and optimization techniques, particularly for Teff weed detection. The improvement in both COCO mAP and fixed-threshold mAP for MobileNetV2 indicates more accurate detection and localization at all IoU levels. Conversely, the decline in YOLOv8's performance following fine-tuning may indicate overfitting, reduced generalization, or sensitivity to the optimization strategies employed.

Table 9.

mAP comparisons in various IoU ranges.

Model Before fine-tuning After fine-tuning
COCO mAP (%) mAP (%) COCO mAP (%) mAP (%)
MobileNetV2 44.15 87.12 61.50 95.80
YOLOv8 58.00 90.50 51.13 80.81

Table 10 presents a comparative analysis of the proposed model's performance against several state-of-the-art weed detection approaches applied to crops other than Teff. The results demonstrate that while weed detection in Teff crops remains extremely challenging, owing to the high planting density and the narrow, thin morphology of Teff leaves, the proposed MobileNetV2-based model still achieved a high accuracy of 96.4%. This performance is particularly remarkable relative to models applied to wider-leaf crops, such as soybean, maize, or sugar beet, which tend to be easier for weed detection techniques to analyze. The accuracy of the proposed model is comparable to or better than that of approaches applied to less complex crop types, such as SLIC (98%) and ResNet101 with DSASPP (97.8%) in soybean, and ResNet-50 (95%) in multi-weed classification. These results highlight the strength and adaptability of the proposed model and demonstrate its validity even in the more adverse visual environments involved in Teff cultivation.

Table 10.

Proposed model performances comparison with different weed detection models.

Reference  Aim  Accuracy (%)
19  Weed detection in soybean using SLIC  98
10  Classification and localization of sugar beets and weeds using SegNet  89
21  Segmentation of rice seedlings and weeds using SegNet  80
22  Classification of multiple weeds using ResNet-50  95
23  Detection and classification of common weed species using R-CNN  77
24  Detection of weeds and soybean crops from drone imagery combining ResNet101 and DSASPP  97.8
Proposed work  Weed detection from drone imagery of Teff and weeds using optimized MobileNetV2  96.40

Conclusion

In this study, we investigated the application of fine-tuned deep learning models for classifying and detecting Teff crops and weeds using drone-collected datasets for UAV-assisted precision herbicide application. The performance of MobileNetV2, InceptionResNetV2, DenseNet201, VGG16, ResNet50, Fast R-CNN, and YOLOv8 was compared both before and after fine-tuning. The results demonstrated that MobileNetV2 outperformed the other models in terms of accuracy, mAP, and inference time, establishing it as the most suitable model for real-time object detection in precision agriculture. MobileNetV2's ability to accurately distinguish between Teff crops and weeds with minimal errors suggests that it can significantly enhance the efficiency and precision of herbicide application, reducing chemical usage and minimizing environmental impact. Although YOLOv8, InceptionResNetV2, and DenseNet201 showed promising results, their longer inference times compared to MobileNetV2 indicate the need for further optimization. VGG16, however, struggled with detection accuracy, rendering it less viable for this application. In conclusion, integrating deep learning models such as MobileNetV2 with UAV technology presents significant potential for advancing precision agriculture, particularly in the targeted application of herbicides. This approach not only improves crop management practices but also promotes sustainable and environmentally friendly farming operations. Future research will focus on optimizing these models further, applying them to diverse agricultural settings, and integrating fine-tuned models with drones for real-time herbicide application.

Author contributions

A.B.A. and A.S.K. wrote the manuscript and contributed the code, analysis, investigation, resources, and data curation; A.B.A. and T.W.M. revised, edited, and validated the manuscript.

Funding

This research received no financial support from any source.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Declarations

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1. Yared, T., Yibekal, A. & Kiya, A. Effect of blended NPS and nitrogen fertilizers rates on yield components and yield of tef. Acad. Res. J. Agric. Sci. Res. 8(4), 361–377. 10.14662/ARJASR2020.018 (2020).
  • 2. Vickneswaran, M., Carolan, J. C., Saunders, M. & White, B. Establishing the extent of pesticide contamination in Irish agricultural soils. Heliyon. 10.1016/j.heliyon.2023.e19416 (2023).
  • 3. Gebretsadik, H., Haile, M. & Yamoah, C. F. Tillage frequency, soil compaction and N-fertilizer rate effects on yield of teff (Eragrostis tef (Zucc) Trotter) in central zone of Tigray, northern Ethiopia (2009).
  • 4. Degaga, J. Review on coffee production and marketing in Ethiopia. J. Mark. Consum. Res. 10.7176/jmcr/67-02 (2020).
  • 5. Parven, A., Meftaul, I. M., Venkateswarlu, K. & Megharaj, M. Herbicides in modern sustainable agriculture: Environmental fate, ecological implications, and human health concerns. Springer Nat. 10.1007/s13762-024-05818-y (2024).
  • 6. Kubiak, A., Wolna-Maruwka, A., Niewiadomska, A. & Pilarska, A. A. The problem of weed infestation of agricultural plantations vs. the assumptions of the European biodiversity strategy. Agronomy 10.3390/agronomy12081808 (2022).
  • 7. Alzubaidi, L. et al. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data. 10.1186/s40537-021-00444-8 (2021).
  • 8. Murad, N. Y. et al. Weed detection using deep learning: A systematic literature review. Sensors. 10.3390/s23073670 (2023).
  • 9. Zheng, Y. Y. et al. Cropdeep: The crop vision dataset for deep-learning-based classification and detection in precision agriculture. Sensors. 10.3390/s19051058 (2019).
  • 10. Chebrolu, N. et al. Agricultural robot dataset for plant classification, localization and mapping on sugar beet fields. Int. J. Robot. Res. 36(10), 1045–1052. 10.1177/0278364917720510 (2017).
  • 11. Chen, P., Xia, T. & Yang, G. A new strategy for weed detection in maize fields. Eur. J. Agron. 10.1016/j.eja.2024.127289 (2024).
  • 12. Kong, X. et al. Lightweight cabbage segmentation network and improved weed detection method. Comput. Electron. Agric. 10.1016/j.compag.2024.109403 (2024).
  • 13. Mishra, A. M., Harnal, S., Gautam, V., Tiwari, R. & Upadhyay, S. Weed density estimation in soya bean crop using deep convolutional neural networks in smart agriculture. J. Plant Dis. Prot. 129(3), 593–604. 10.1007/s41348-022-00595-7 (2022).
  • 14. Ali, M. S. et al. A comprehensive dataset of rice field weed detection from Bangladesh. Data Brief. 10.1016/j.dib.2024.110981 (2024).
  • 15. Xiao, F., Wang, H., Xu, Y. & Zhang, R. Fruit detection and recognition based on deep learning for automatic harvesting: An overview and review. Agronomy 10.3390/agronomy13061625 (2023).
  • 16. Berhan, M. & Bekele, D. Review of major cereal crops production losses, quality deterioration of grains by weeds and its prevention in Ethiopia (2021). Available: https://www.researchgate.net/publication/356189011
  • 17. Sa, I. et al. WeedNet: Dense semantic weed classification using multispectral images and MAV for smart farming. IEEE Robot. Autom. Lett. 3(1), 588–595. 10.1109/LRA.2017.2774979 (2018).
  • 18. Giselsson, T. M., Jørgensen, R. N., Jensen, P. K., Dyrmann, M. & Midtiby, H. S. A public image database for benchmark of plant seedling classification algorithms (2017). Available: http://arxiv.org/abs/1711.05458
  • 19. dos Santos Ferreira, A., Matte Freitas, D., Gonçalves da Silva, G., Pistori, H. & Theophilo Folhes, M. Weed detection in soybean crops using ConvNets. Comput. Electron. Agric. 143, 314–324. 10.1016/j.compag.2017.10.027 (2017).
  • 20. Yu, J., Schumann, A. W., Cao, Z., Sharpe, S. M. & Boyd, N. S. Weed detection in perennial ryegrass with deep learning convolutional neural network. Front. Plant Sci. 10.3389/fpls.2019.01422 (2019).
  • 21. Ma, X. et al. Fully convolutional network for rice seedling and weed image segmentation at the seedling stage in paddy fields. PLoS ONE 10.1371/journal.pone.0215676 (2019).
  • 22. Olsen, A. et al. Deepweeds: A multiclass weed species image dataset for deep learning. Sci. Rep. 10.1038/s41598-018-38343-3 (2019).
  • 23. Madsen, S. L. et al. Open plant phenotype database of common weeds in Denmark. Remote Sens. 10.3390/RS12081246 (2020).
  • 24. Sudars, K., Jasko, J., Namatevs, I., Ozola, L. & Badaukis, N. Dataset of annotated food crops and weed images for robotic computer vision control. Data Brief. 10.1016/j.dib.2020.105833 (2020).
  • 25. Kanda, P. S., Xia, K., Kyslytysna, A. & Owoola, E. O. Tomato leaf disease recognition on leaf images based on fine-tuned residual neural networks. Plants 10.3390/plants11212935 (2022).
  • 26. Wang, D. et al. A review of deep learning in multiscale agricultural sensing. Comput. Electron. Agric. 10.3390/rs14030559 (2022).
  • 27. Li, L., Zhang, S. & Wang, B. Plant disease detection and classification by deep learning: A review. IEEE Access. 10.1109/ACCESS.2021.3069646 (2021).
  • 28. Patil, A. S., Mailapalli, D. R. & Singh, P. K. Drone technology reshaping agriculture: A meta-review and bibliometric analysis on fertilizer and pesticide deployment (Springer Science and Business Media, Cham, 2024). 10.1007/s42853-024-00240-1.
  • 29. Meesaragandla, S., Jagtap, M. P., Khatri, N., Madan, H. & Vadduri, A. A. Herbicide spraying and weed identification using drone technology in modern farms: A comprehensive review. Results Eng. 10.1016/j.rineng.2024.101870 (2024).
  • 30. Shahi, T. B., Dahal, S., Sitaula, C., Neupane, A. & Guo, W. Deep learning-based weed detection using UAV images: A comparative study. Drones. 10.3390/drones7100624 (2023).
  • 31. Liu, B. An automated weed detection approach using deep learning and UAV imagery in smart agriculture system. J. Optics 53(3), 2183–2191. 10.1007/s12596-023-01445-x (2024).
  • 32. Ong, P., Teo, K. S. & Sia, C. K. UAV-based weed detection in Chinese cabbage using deep learning. Smart Agric. Technol. 10.1016/j.atech.2023.100181 (2023).
  • 33. Lu, C. et al. Weed instance segmentation from UAV orthomosaic images based on deep learning. Smart Agric. Technol. 10.1016/j.atech.2025.100966 (2025).
  • 34. Peffers, K., Tuunanen, T., Rothenberger, M. A. & Chatterjee, S. A design science research methodology for information systems research. J. Manag. Inf. Syst. 24(3), 45–77. 10.2753/MIS0742-1222240302 (2007).
  • 35. Meseret Gezahegn, A. & Tamiru, S. Effect of seed rate and row spacing on (Tef Eragrostis tef (Zucc) Trotter) production at central highlands of Ethiopia. J. Plant Sci. 9(3), 71. 10.11648/j.jps.20210903.11 (2021).
  • 36. Al-Ameen, Z., Muttar, A. & Al-Badrani, G. Improving the sharpness of digital image using an amended unsharp mask filter. Int. J. Image Graph. Signal Process. 11(3), 1–9. 10.5815/ijigsp.2019.03.01 (2019).
  • 37. Zufria, I. & Syahnan, M. Improved digital image quality using the Gaussian filter method. Int. J. Inf. Syst. Technol. Akreditasi 5(158), 556–563 (2022).
  • 38. Nair, B. J. B. & Nair, A. S. Ancient horoscopic palm leaf binarization using a deep binarization model (RESNET). In Proc. 5th International Conference on Computing Methodologies and Communication (ICCMC 2021) 1524–1529 (IEEE, 2021). 10.1109/ICCMC51019.2021.9418461.
  • 39. Ma, C., Chi, G., Ju, X., Zhang, J. & Yan, C. YOLO-CWD: A novel model for crop and weed detection based on improved YOLOv8. Crop Protect. 10.1016/j.cropro.2025.107169 (2025).
  • 40. Olorunfemi, B. O., Nwulu, N. I., Adebo, O. A. & Kavadias, K. A. Advancements in machine visions for fruit sorting and grading: A bibliometric analysis, systematic review, and future research directions (Elsevier B.V., 2024). 10.1016/j.jafr.2024.101154.
  • 41. Qiankun, K. et al. A method for measuring geometric information content of area cartographic objects based on discrepancy degree of shape points. Geocarto Int. 10.1080/10106049.2023.2275685 (2023).
  • 42. Rutili de Lima, C., Khan, S. G., Shah, S. H. & Ferri, L. Mask region-based CNNs for cervical cancer progression diagnosis on pap smear examinations. Heliyon. 10.1016/j.heliyon.2023.e21388 (2023).
