Scientific Reports. 2026 Jan 10;16:4029. doi: 10.1038/s41598-025-34124-x

A drift-aware RS2FS pipeline with confidence gating for IDS

Gomathi Sakthivel 1, Anitha Kumari Kumarasamy 2
PMCID: PMC12855787  PMID: 41520017

Abstract

Real-time intrusion detection in heterogeneous Internet of Things (IoT) networks involves continuously monitoring diverse connected devices and communication protocols to promptly identify malicious activities or anomalies. Due to varied device capabilities, dynamic topologies, and resource constraints, these systems leverage lightweight AI-driven analytics, edge processing, and adaptive security models to ensure minimal latency. Effective detection enhances resilience, safeguards sensitive data, and maintains seamless IoT operations in mission-critical environments. We propose a stage-specific Recursive Sparse & Relevance-based Feature Selection (RS2FS) and a confidence-gated Support Vector Machine (SVM) → SVM → ANFIS cascade for real-time intrusion detection in heterogeneous IoT networks. RS2FS combines elastic-net screening, MI ∩ mRMR relevance, stability selection, and margin-aware recursive pruning to yield compact, non-redundant feature sets per cascade stage. The cascade accepts easy cases with calibrated SVMs and routes ambiguous, family-localized traffic to per-family ANFIS rules, providing interpretable subtype decisions. Evaluated on CICIoT2023 with scenario-held-out splits (5 × grouped CV), our model attains Macro-F1 = 0.962, Macro-AUC = 0.991, Balanced Accuracy = 0.963, MCC = 0.952, Brier = 0.038, and ECE = 0.012 at 6.3 ms CPU latency per window with a 7.8 MB footprint. Class-wise F1 shows consistent gains: Benign 0.991, DDoS 0.984, DoS 0.958, Recon 0.961, Web 0.937, Brute Force 0.951, Data Exfiltration 0.921, Botnet 0.942. Cascade behavior explains the speed–accuracy trade-off: 68% of windows are resolved at Stage-1 (F1 0.985, 3.38 ms), 22% at Stage-2 (F1 0.962, 7.73 ms), and only 10% escalate to ANFIS (F1 0.936, 23 ms). Against strong baselines, we improve Macro-F1 by + 1.9 pp over SVM-only (0.943), + 1.7 pp over XGBoost (0.945), and + 1.1 pp over a small 1D-CNN (0.951); bootstrap tests show significance (p < 0.01). 
Unlike existing IoT IDS approaches that rely on single-stage classifiers or one-time, global feature selection, our framework introduces two fundamental advances. First, the proposed RS2FS mechanism performs stage-specific, stability-aware, and margin-guided feature reduction, addressing the gaps of redundancy, volatility, and non-adaptiveness found in prior MI-, mRMR-, or L1-based selection methods. Second, the confidence-gated SVM → SVM → ANFIS cascade introduces a new routing paradigm where high-margin “easy” traffic is settled early, while only low-confidence, ambiguous windows are escalated to fuzzy reasoning, overcoming the limitations of conventional hybrid SVM–ANFIS systems that apply the same classifier depth to all samples. Together with integrated open-set rejection and drift micro-adaptation, these contributions position the framework as a fundamentally new IDS architecture for heterogeneous IoT environments. The open-set guard achieves AUROC 0.981 and TPR@1%FPR 0.912 with a 4.6% reject rate. Robustness holds under + 5% timestamp jitter (0.957), ± 10% packet-size noise (0.955), and 10% missing features (0.949). Interpretable ANFIS rules highlight payload-entropy, MQTT topic-depth, and DWT-energy interactions. Overall, the framework delivers an accurate, calibrated, interpretable, and fast IDS suitable for deployment in modern IoT environments.

Keywords: Recursive sparse & relevance-based feature selection, Internet of Things, Confidence-gated support vector machine, Adaptive Security Model, CICIoT2023, Elastic-net screening

Subject terms: Engineering, Mathematics and computing

Introduction

To clarify the workflow, RS2FS operates as a sequential, hierarchical pipeline rather than as a set of independent filters. The process begins with sparsity-driven screening (elastic-net), which removes obviously irrelevant or weakly contributing features while grouping correlated variables. The remaining candidates undergo relevance–redundancy filtering using MI ∩ mRMR to ensure that only informative and minimally redundant descriptors are retained. A third stage applies stability selection across repeated subsampling to identify features that consistently survive data perturbations. Finally, a margin-aware pruning step evaluates each feature’s contribution to a linear SVM’s decision margin, iteratively removing the weakest contributors until performance no longer degrades. This hierarchical progression produces small, stable, and stage-optimized feature subsets, making the overall IDS pipeline more interpretable and computationally efficient.
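As a concrete illustration, the four RS2FS stages can be sketched with scikit-learn primitives. The synthetic dataset, the hyperparameters (penalty strengths, the top-20 relevance cut-off, the 0.5 stability threshold, the 8-feature pruning target), and the plain mutual-information filter standing in for the MI ∩ mRMR intersection are all illustrative assumptions, not the paper's tuned configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=40, n_informative=8,
                           n_redundant=6, random_state=0)

# Stage 1: elastic-net screening -- drop features whose penalized coefficients vanish.
enet = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, C=0.5, max_iter=5000).fit(X, y)
keep = np.flatnonzero(np.abs(enet.coef_).ravel() > 1e-6)

# Stage 2: relevance filter -- keep the top features by mutual information
# (a simplified stand-in for the paper's MI ∩ mRMR intersection).
mi = mutual_info_classif(X[:, keep], y, random_state=0)
keep = keep[np.argsort(mi)[::-1][:20]]

# Stage 3: stability selection -- refit a sparse model on random half-samples
# and retain features chosen in at least half of the runs.
counts = np.zeros(len(keep))
for _ in range(20):
    idx = rng.choice(len(y), size=len(y) // 2, replace=False)
    m = LogisticRegression(penalty="l1", solver="saga", C=0.5,
                           max_iter=5000).fit(X[idx][:, keep], y[idx])
    counts += np.abs(m.coef_).ravel() > 1e-6
keep = keep[counts / 20 >= 0.5]

# Stage 4: margin-aware recursive pruning -- RFE driven by linear-SVM weights.
rfe = RFE(LinearSVC(dual=False, max_iter=5000),
          n_features_to_select=min(8, len(keep))).fit(X[:, keep], y)
selected = keep[rfe.support_]
print(sorted(selected.tolist()))
```

Each stage only ever shrinks the candidate set, which mirrors the hierarchical progression described above: later, more expensive criteria operate on progressively smaller subsets.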

The Internet of Things (IoT) has shifted enterprise and critical-infrastructure networks from a handful of well-managed endpoints to sprawling ecosystems of sensors, actuators, gateways, and cloud services1. This expansion increases the attack surface and accelerates the pace at which adversaries craft reconnaissance, botnet enrollment, lateral movement, and exfiltration campaigns2. Traditional perimeter defenses—signatures, blacklists, and static heuristics—struggle when device firmware is heterogeneous, protocol behavior is application-specific, and traffic patterns are bursty and highly non-stationary. Network Intrusion Detection Systems (NIDS) that rely on deep packet inspection are increasingly constrained by encryption and by protocol stacks such as MQTT, CoAP, and Modbus that tunnel application semantics in lightweight frames3. Consequently, flow- and behavior-based machine learning (ML) has become central to security monitoring in IoT environments, but practical deployment still faces four persistent gaps: (i) generalization under class imbalance and drift, (ii) calibration of predicted risk to reduce alert fatigue, (iii) open-set robustness to previously unseen attacks, and (iv) interpretability that helps analysts understand why a decision was made4.

A large body of work applies classical learners (SVMs, tree ensembles, gradient boosting) or deep models (1D-CNNs, RNNs/GRUs, Transformers) to network flows. While strong results are often reported on fixed splits, many systems depend on high-dimensional feature sets, ad-hoc preprocessing, and single-stage classifiers that conflate easy and hard decisions5. High capacity models may fit training distributions yet produce over-confident probabilities when the environment changes—even modestly—causing mis-prioritization in SOC workflows6. Conversely, purely fuzzy or rule-based approaches provide interpretability but can be brittle on clear, separable cases and computationally heavy when everything is routed through the same logic7. Bridging these extremes requires an architecture that (1) compresses heterogeneous descriptors into compact, stable subsets; (2) settles obvious instances quickly with margins that are well calibrated; and (3) escalates only ambiguous windows to a more expressive and interpretable reasoning layer8.

Feature selection is critical in this context, but most IDS pipelines still treat it as a one-time, stage-agnostic step. Filters such as mutual information (MI) and mRMR capture relevance and redundancy; embedded methods like L1/Elastic-Net inject sparsity during training; wrappers prune features based on classifier feedback9. Each family has strengths, yet none by itself guarantees stability under resampling, nor do they adapt to the distinct information needs of different decision stages (e.g., benign-vs-attack gating versus fine-grained subtype disambiguation)10. The result is either over-sized models with correlated descriptors or excessively aggressive pruning that erases subtle signals. A more robust strategy is to integrate sparsity, relevance, stability, and margin-aware pruning into a single, recursive process and to run that process separately for each stage of the classifier cascade11.

Another under-emphasized aspect is decision routing. Real traffic contains a long “easy tail” and a short “hard head.” Treating all samples uniformly wastes compute on easy instances and deprives difficult ones of specialized reasoning. Confidence-gated cascades, widely used in vision and speech, offer a principled way to trade latency for accuracy: a fast gate handles the obvious, a secondary learner refines borderline cases, and a final expert resolves the rare ambiguous windows12. In the intrusion-detection setting, calibrated SVMs are natural gates because their margins translate cleanly into probabilities, while Adaptive Neuro-Fuzzy Inference Systems (ANFIS) provide human-readable rules that explain interactions among traffic dynamics and protocol semantics13. Yet, to our knowledge, cascades that combine stage-specific feature selection, calibrated SVM gating, and per-family ANFIS experts remain uncommon in IoT IDS research.

Open-set recognition forms the third pillar. IoT deployments evolve; new devices, firmware, and attack scripts appear without labels. A practical IDS should say “I don’t know” when a window is unlike anything seen before, rather than forcing a wrong label with high confidence14. Lightweight distance-based rejection in a well-behaved feature space is attractive operationally: it avoids retraining, adds millisecond overhead, and integrates naturally with gating thresholds. Mahalanobis distance computed in a compressed, near-Gaussian space (post selection and scaling) works well in practice and produces a tunable reject/coverage trade-off via chi-square quantiles15.
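A minimal sketch of such a distance-based guard, assuming a near-Gaussian selected feature space and using synthetic data in place of real flow-window features:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
d = 6                                   # dimensionality of the selected, scaled feature space
X_train = rng.normal(size=(2000, d))    # stand-in for in-distribution training windows

mu = X_train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False) + 1e-6 * np.eye(d))  # ridge for stability

def mahalanobis_sq(x):
    """Squared Mahalanobis distance to the training distribution."""
    diff = x - mu
    return float(diff @ cov_inv @ diff)

# Under approximate Gaussianity, the squared Mahalanobis distance follows chi2(d),
# so the 99% quantile targets roughly a 1% reject rate on in-distribution traffic.
threshold = chi2.ppf(0.99, df=d)

known = mahalanobis_sq(mu)                      # at the centroid: distance 0, accepted
novel = mahalanobis_sq(mu + 10.0 * np.ones(d))  # far off-manifold: flagged as "unknown"
print(known <= threshold, novel > threshold)
```

Tuning the chi-square quantile directly trades reject rate against coverage, which is the operational knob described above.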

Although several studies combine feature selection techniques with hybrid or multi-stage classifiers, three critical gaps remain unresolved in IoT IDS research. (i) Feature selection is typically performed as a one-shot, dataset-level process without considering the distinct information needs of different decision stages; existing MI-, mRMR-, or L1-based approaches also lack stability guarantees and do not integrate margin-aware pruning. (ii) Hybrid SVM–ANFIS systems used in prior works do not employ confidence-gated routing, forcing all samples—easy, separable, ambiguous, or noisy—through the same depth of inference, resulting in unnecessary latency and reduced calibration reliability. (iii) Existing hybrids rarely incorporate open-set defense or drift-aware calibration, which are essential for deployment in evolving IoT environments. To address these gaps, our work introduces (1) a stage-specific Recursive Sparse & Relevance-based Feature Selection (RS2FS) pipeline combining elastic-net screening, relevance–redundancy intersection, stability selection, and recursive margin pruning, and (2) a confidence-gated SVM → SVM → ANFIS cascade that allocates computational resources proportionally to instance difficulty. This coordinated design constitutes a novel IDS architecture fundamentally different from existing feature selection or hybrid SVM–ANFIS frameworks.

Finally, calibration matters. Two models with identical accuracy can behave very differently when their probability estimates are misaligned with empirical correctness. Poorly calibrated systems flood analysts with high-confidence false positives, or they bury high-risk alerts under cautious probabilities16. Expected Calibration Error (ECE) and Brier score should therefore be core metrics alongside F1 and AUC, and the training pipeline should include explicit calibration steps (e.g., Platt scaling) and periodic refresh under drift.
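For reference, the Brier score and a standard binned ECE can be computed as follows; the bin count and the synthetic, perfectly calibrated predictor are illustrative choices:

```python
import numpy as np

def brier_score(p, y):
    """Mean squared error between predicted probability and the 0/1 outcome."""
    return float(np.mean((p - y) ** 2))

def expected_calibration_error(p, y, n_bins=10):
    """Bin predictions by confidence; ECE is the coverage-weighted mean |accuracy - confidence|."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (p > lo) & (p <= hi) if lo > 0.0 else (p >= lo) & (p <= hi)
        if mask.any():
            ece += mask.mean() * abs(y[mask].mean() - p[mask].mean())
    return float(ece)

# Synthetic, perfectly calibrated predictor: the outcome is drawn with probability p,
# so accuracy inside each bin tracks mean confidence and ECE should be near zero.
rng = np.random.default_rng(0)
p = rng.uniform(size=20000)
y = (rng.uniform(size=20000) < p).astype(float)
print(round(brier_score(p, y), 3), round(expected_calibration_error(p, y), 3))
```

Two models with identical F1 can differ sharply on these two numbers, which is exactly why they belong alongside accuracy metrics in the evaluation protocol.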

Against this backdrop, we study intrusion detection on the recent CICIoT2023 benchmark, which emulates heterogeneous IoT networks and includes benign activity interleaved with diverse attacks over MQTT, CoAP, Modbus, HTTP/HTTPS, DNS, SSH, and other services17. The dataset contains rich flow-level attributes and protocol-aware fields, enabling the construction of multi-modal descriptors: statistical and temporal moments, entropy of size series, frequency cues from wavelet energies, lightweight graph measures over per-window ego-nets, and protocol semantics such as MQTT QoS flags, CoAP code classes, or Modbus function codes. While such variety is a strength, it also increases redundancy and correlation; many features carry similar information or are informative only for specific decision layers18.

To address these challenges, our work adopts a design with four complementary components. First, we introduce Recursive Sparse & Relevance-based Feature Selection (RS2FS), which (i) performs elastic-net screening to induce sparsity and group correlated variables, (ii) intersects MI and mRMR to capture relevance while penalizing redundancy, (iii) applies stability selection over repeated subsampling to retain only features consistently chosen across data perturbations, and (iv) conducts margin-aware recursive pruning with a linear SVM, dropping the least influential features as long as validation performance remains within a user-set tolerance. The entire process runs per stage of the classifier, yielding tailored subsets for the binary gate, family classification, and subtype disambiguation. Second, we employ a confidence-gated cascade: an RBF SVM handles benign-versus-attack, a second SVM predicts attack family, and ambiguous instances are escalated to a per-family ANFIS that offers interpretable rules with Gaussian membership functions and first-order consequents. Third, we add a Mahalanobis open-set guard in the selected feature space of each stage to flag unknowns at a configurable coverage level. Fourth, we incorporate drift micro-adaptation by monitoring SVM margins with ADWIN; when drift is detected, we refresh calibration parameters and perform brief premise updates to ANFIS, reserving full re-selection/re-training for scheduled maintenance windows.
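The routing logic of the confidence-gated cascade can be sketched as below; the thresholds, the toy stand-in models, and the feature names (`entropy`, `rate`) are hypothetical placeholders, not the paper's trained SVMs or ANFIS:

```python
# Hypothetical gate thresholds; the paper tunes these on validation data.
TAU1, TAU2 = 0.90, 0.80

def route(window, stage1_proba, stage2_proba, anfis_predict):
    """Confidence-gated cascade: accept early when the calibrated
    probability clears the stage threshold, otherwise escalate."""
    p_attack = stage1_proba(window)           # Stage 1: benign vs. attack (calibrated SVM)
    if max(p_attack, 1.0 - p_attack) >= TAU1:
        return ("stage1", "attack" if p_attack >= 0.5 else "benign")
    fam_probs = stage2_proba(window)          # Stage 2: attack family (calibrated SVM)
    fam = max(fam_probs, key=fam_probs.get)
    if fam_probs[fam] >= TAU2:
        return ("stage2", fam)
    return ("anfis", anfis_predict(window, fam))  # Stage 3: per-family ANFIS expert

# Toy stand-ins for the three models (assumptions, not the paper's classifiers):
s1 = lambda w: 0.97 if w["entropy"] > 0.8 else 0.55
s2 = lambda w: ({"DDoS": 0.85, "Recon": 0.15} if w["rate"] > 100
                else {"DDoS": 0.5, "Recon": 0.5})
s3 = lambda w, fam: f"{fam}:subtype-guess"

print(route({"entropy": 0.9, "rate": 200}, s1, s2, s3))  # easy case, settled at stage 1
print(route({"entropy": 0.5, "rate": 200}, s1, s2, s3))  # family-level, stage 2
print(route({"entropy": 0.5, "rate": 10}, s1, s2, s3))   # ambiguous, escalates to ANFIS
```

The design choice is that compute scales with difficulty: only windows that fail both confidence gates pay the cost of fuzzy inference.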

From an engineering standpoint, the pipeline includes leakage-safe preprocessing (robust Z-scores, quantile transforms, class-preserving SMOTE applied within folds), protocol-aware parsing, and scenario-level splits that place entire capture days or scenarios exclusively in train/validation/test to reflect deployment. Evaluation emphasizes macro-averaged metrics, calibration, latency per window on CPU-class hardware, and model footprint. We also report per-family/subtype diagnostics and ablation studies that isolate the contribution of protocol, temporal, frequency, entropy, and graph groups, as well as the incremental value of each RS2FS stage (sparsity, MI/mRMR, stability selection, margin pruning).

The empirical picture that emerges is aligned with operational needs. By solving roughly two-thirds of windows at the first SVM gate and routing only the hard tail to ANFIS, the system approaches SVM-only speed while surpassing deeper baselines in accuracy and, crucially, in calibration. RS2FS consistently compresses descriptors to a few dozen features per stage without sacrificing macro-F1, and its stability profile makes the selected sets reproducible across folds—important for audits and for porting models between sites. The open-set guard filters a small fraction of windows that resemble neither benign nor known attack families, allowing analysts to triage potential zero-day behaviors for labeling and model extension. Interpretable ANFIS rules, expressed as centers and spreads over payload entropy, MQTT topic depth, and wavelet energy (among others), provide human-readable rationales that can be compared with protocol specifications and incident timelines.

The four components of the framework operate in a tightly coupled sequence. RS2FS first produces compact, stable feature subsets tailored to each stage, enabling reliable confidence estimation. These confidence estimates drive the decision routing in the cascade, allowing easy cases to be settled early while reserving ANFIS for semantically ambiguous samples. The cascaded structure naturally exposes regions of uncertainty where unfamiliar or out-of-distribution behaviors appear, motivating the integration of an open-set guard to prevent forced misclassification. Finally, because IoT traffic distributions evolve over time, calibration and drift-handling modules ensure that confidence thresholds, rejection boundaries, and fuzzy rule activations remain well-aligned with the current operating environment. Together, these components form a logically unified detection pipeline rather than isolated modules.

The rest of the paper is organized as follows: Section "Related works" reviews related work; Section "Proposed framework: RS2FS-driven, confidence-gated cascaded SVM/ANFIS for intrusion detection on CICIoT2023" details the proposed methodology; Section "Results and Discussion" discusses the results; and Section "Conclusion" concludes the paper.

Related works

Jakotiya, K., et al.19 provide an analytical survey of current intrusion detection systems based on ML algorithms. The work also examines the available datasets and the techniques already in use to develop effective IDSs with single, hybrid, and ensemble machine learning algorithms, and compares approaches from the literature under several criteria to chart a clear direction for future work. Organizations of all kinds now deploy intrusion detection systems (IDS) to inhibit cybercrime and protect their networks, resources, and private data, and many strategies have been proposed and implemented to that end. Since machine learning (ML) approaches are effective, the proposed approach applied several ML models for intrusion detection. The paper uses the CIC IoT 2023 dataset and proposes a two-step intrusion detection process, tested with several techniques including random forest, XGBoost, logistic regression, an MLP model, and an RNN. Following fine-tuning, a federated learning model using neural networks achieved the best accuracy of 99.84%.

Cheekati, S., et al.20 present a hybrid intrusion detection system that combines a Stacked Deep Polynomial Network (SDPN) classifier with QRIME-based feature selection to detect network attacks on the CIC-IoT-2023 dataset efficiently and accurately. To improve computational efficiency, the Quantum-Inspired Redundant and Irrelevant feature Minimization and Extraction (QRIME) method is used to reduce feature dimensionality, ensuring that only the most significant characteristics are used for classification. By using deep polynomial transformations, the SDPN classifier improves its generalization capabilities and captures intricate attack patterns. Together, they enable the model to reduce false positives while efficiently detecting varied IoT-based attacks. The proposed QRIME-SDPN model beats both conventional and deep learning models in terms of recall, accuracy, precision, F1-score, and false alarm rate (FAR) when tested on the CIC-IoT-2023 dataset. The findings demonstrate the model’s successful handling of high-dimensional network traffic data, balancing computational efficiency and detection capability. Overall, the study contributes an innovative and scalable method for intrusion detection in the Internet of Things (IoT) that combines deep polynomial-based classification with quantum-inspired feature selection. Given the critical requirement for flexible intrusion detection systems against evolving cyber threats, future work can centre on real-time deployment, adversarial resilience, and transfer learning to further improve IDS performance in dynamic IoT environments.

Susilo, B., et al.21 aims to improve the performance of existing deep learning models. To mitigate data imbalances and enhance learning outcomes, the synthetic minority over-sampling technique (SMOTE) is employed. Our approach contributes to a multistage feature extraction process where autoencoders (AEs) are used initially to extract robust features from unstructured data on the model architecture’s left side. Following this, long short-term memory (LSTM) networks on the right analyze these features to recognize temporal patterns indicative of abnormal behavior. The extracted and temporally refined features are inputted into convolutional neural networks (CNNs) for final classification. This structured arrangement harnesses the distinct capabilities of each model to process and classify IoT security data effectively. Our framework is specifically designed to address various attacks, including denial of service (DoS) and Mirai attacks, which are particularly harmful to IoT systems. Unlike conventional intrusion detection systems (IDSs) that may employ a singular model or simple feature extraction methods, our multistage approach provides more comprehensive analysis and utilization of data, enhancing detection capabilities and accuracy in identifying complex cyber threats in IoT environments. This research highlights the potential benefits that can be gained by applying deep learning methods to improve the effectiveness of IDSs in IoT security. The results obtained indicate a potential improvement for enhancing security measures and mitigating emerging threats.

Mahdi, Z. S., et al.22 developed a comprehensive framework to secure and improve the performance of intrusion detection systems in IoT environments by combining incremental learning, hybrid encryption, and blockchain technologies in three stages. In the first stage, a model was designed to detect cyber attacks using incremental learning (an SGD classifier); this model can retrain itself on new data. The data is encrypted and digitally signed, and a blockchain model is then developed to store the encrypted data, which constitutes the second stage. In the third stage, the incremental model is retrained on new data extracted from the blockchain over several rounds to achieve the highest model accuracy. The model is trained on the main server and then distributed, via updates, to the peripheral devices so that all devices receive the same new update. The incremental learning model was initially trained on the CIC IoT 2023 dataset and, after retraining, was tested on data the model had not seen before (TON IoT2020 and CIC IDS2019). The highest accuracy obtained was 99.89%. The proposed framework demonstrated its ability to secure intrusion detection system data in IoT environments and improved performance by retraining on new traffic data.

Lilhore, U. K., et al.23 propose SmartTrust, a hybrid deep learning framework designed for real-time threat detection in cloud environments built on Zero-Trust Architecture (ZTA) principles. SmartTrust integrates CNN, LSTM, and Transformer models to analyze spatial and temporal patterns in network traffic and user behaviours. Unlike conventional models, it leverages Reinforcement Learning to enable adaptive decision-making, allowing it to adjust responses based on real-time contextual signals dynamically. To ensure transparency and tamper-proof event tracking, the framework also incorporates blockchain-based logging that is aligned with ZTA compliance. We evaluated SmartTrust on two benchmark datasets, CIC-IoT 2023 and UNSW-NB15, which simulate realistic cloud-based attack scenarios. The model achieved detection rates of 99.19% for insider threats, 98.23% for privilege escalation, and 99.27% for data breaches while reducing false positives by over 40% compared to existing approaches. Though the model’s complexity introduces higher computational demands, its performance demonstrates that SmartTrust offers a robust, intelligent, and adaptive alternative to traditional cloud security solutions capable of evolving with today’s rapidly changing threat landscape.

Beshah, Y. K., et al.24 developed a novel Multistage Adversarial Attack Defense (MSAAD) framework to protect online DDoS attack detection systems from adversarial attacks. The framework consists of three defense layers: (1) the Resilient Adversarial Detector and Purification (RADP), which detects and purifies multiple and unknown adversarial attacks targeting online DDoS attack detection systems; (2) a multiple-classifier layer, which increases the complexity for an attacker trying to replicate the DDoS attack detection model; and (3) the Multi-Armed Bandit (MAB) with Thompson Sampling, which dynamically selects the optimal classifier or ensemble of classifiers for each incoming traffic request. Experimental results show the effectiveness of the proposed MSAAD framework on the IOTID20 and CICIoT2023 datasets. In the adversarial attack scenario, the accuracy of the MLBTSE-based DDoS attack detection model improved from a range of 32.38%–60.58% to 99.39%–99.48% on the IOTID20 dataset and from 66.60%–86.20% to 99.01%–99.14% on the CICIoT2023 dataset.

Fan, M., et al.25 investigate an explainable anomaly-based intrusion detection system (IDS) that translates the inference process of an Autoencoder into a high-fidelity allow-list rule library, thereby balancing detection capability and interpretability. First, they assume that benign traffic follows a complex global distribution composed of several unrelated local distributions. A clustering algorithm is performed in an extended feature space consisting of reconstruction loss and embeddings to decompose the local distributions. Then, an approach based on gradient ascent explores the boundary rules of each local distribution; merging these boundary rules yields an allow-list rule library that reflects the Autoencoder’s inference process. Comprehensive evaluation experiments demonstrate that the extracted allow-list rule library accurately reproduces the Autoencoder’s inference process and effectively detects IoT intrusions.

Jiang, L., et al.26 propose FLEMING-MS, a federated learning framework for green medical supply chains with AIoT-driven intrusion detection and medical security transfer efficiency optimization. Their system allows boundary intrusions to be detected collaboratively across health facilities without the need to exchange sensitive information and reduces environmental impact through intelligent resource management. FLEMING-MS introduces (1) a node selection algorithm that captures medical data from healthcare facilities as nodes with high-quality information; (2) adaptive bandwidth allocation for network resource optimization; (3) transfer performance appraisal for quantifying each facility’s contribution to the global model; and (4) a deep deterministic policy gradient algorithm to solve the transfer efficiency maximization problem. Experimental results on the NSL-KDD and CIC IoT 2023 datasets demonstrate that FLEMING-MS reduces model training time by 74.3% compared to state-of-the-art approaches while maintaining superior detection performance. FLEMING-MS also achieved a 52.8% reduction in energy consumption and a 53.7% decrease in carbon emissions, offering an environmentally friendly, privacy-preserving, and secure solution for contemporary medical supply chains.

While recent studies19–26 demonstrate promising advances in IoT intrusion detection, several limitations remain unresolved. First, many of these works employ single-stage or globally applied feature selection such as mRMR, Relief-F, IG-based, or embedded L1 regularization, but do not consider stage-specific feature needs or stability across perturbations. As a result, their selected feature sets may be either redundant or brittle when exposed to unseen scenario shifts—an issue directly addressed in our RS2FS pipeline through stability selection, margin-aware pruning, and separate feature subsets for each classification stage.

Second, existing hybrid classifiers commonly treat all samples uniformly, applying the same model depth for both easy and ambiguous instances. Even when SVMs and ANFIS components coexist (e.g., in21,23), they lack a confidence-gated routing mechanism, causing unnecessary inference cost and increased subtype misclassification. Our cascaded SVM → SVM → ANFIS architecture addresses this gap by allowing samples with high-margin separability to be resolved early, while only low-confidence or borderline cases reach the fuzzy reasoning layer.

Third, none of the works in19–26 explicitly incorporate open-set recognition capabilities. These IDS models assume that the attack distribution seen during training remains complete, making them vulnerable to misclassification under novel or evolving IoT attack styles. In contrast, our integration of Mahalanobis-based open-set guards helps prevent overconfident misclassification of unfamiliar attack behaviors.

Finally, prior systems rarely include calibration or drift-handling mechanisms. Most are trained on static offline datasets and deployed with unchanged decision thresholds. This leaves them sensitive to distributional drift, especially in heterogeneous IoT settings. By contrast, our approach employs ADWIN-based drift signaling and confidence recalibration, allowing thresholds and SVM margins to adapt to changing conditions.

Proposed framework: RS2FS-driven, confidence-gated cascaded SVM/ANFIS for intrusion detection on CICIoT2023

This section presents a mathematically grounded, implementation-ready framework for intrusion detection on CICIoT202327 using a Recursive Sparse & Relevance-based Feature Selection (RS2FS) pipeline and a confidence-gated cascaded SVM/ANFIS classifier; Fig. 1 provides the workflow of the model. The design is deliberately protocol-aware, open-set robust, and drift-adaptive. Every component is motivated by what it does, why it is needed on modern IoT traffic, how it operates mathematically, and where it fits inside the end-to-end system. Throughout, we use the notation below.

Fig. 1. Proposed architecture.

Throughout this work, we adopt consistent terminology to avoid ambiguity:

• Flow: A unidirectional sequence of packets sharing the same 5-tuple (source IP, destination IP, source port, destination port, protocol).

• Flow Window: A fixed-length temporal or packet-based segment extracted from a flow, used as the fundamental unit for feature computation (entropy, DWT energies, burst statistics, protocol attributes).

• Instance: The feature vector derived from a flow window after preprocessing and feature selection. All classifiers (SVM stages and ANFIS) operate strictly on instances.

In the remainder of the manuscript, we use flow window as the standardized data unit and instance to refer to its corresponding feature representation.

Notation and data model

Let a raw packet trace $\mathcal{P} = \{p_1, \dots, p_N\}$ be captured from an IoT network supporting protocols $\Pi$ (MQTT, CoAP, Modbus, HTTP/HTTPS, DNS, SSH, and others). Each packet $p_i$ has timestamp $t_i$, 5-tuple $q_i$, size $s_i$, and protocol fields $\phi_i$ (e.g., MQTT QoS). We form flows by standard 5-tuple aggregation, then compute sliding time windows of length $W$ seconds with stride $S$. For window $w$, let $\mathcal{F}_w$ be the set of flows intersecting the window. We extract a raw feature vector $x_w^{\mathrm{raw}}$ (statistical, temporal, frequency, protocol, graph), then apply preprocessing to obtain $x_w$ (after encoding, scaling, and de-noising). Labels $y_w$ follow a multi-granular scheme:

  • $y^{(1)}_w \in \{0, 1\}$: benign vs. attack,

  • $y^{(2)}_w \in \mathcal{F}$: attack family,

  • $y^{(3)}_w \in \mathcal{Y}_f$: attack subtype within family $f$.

The training set is $\mathcal{D} = \{(x_w, y_w)\}_{w=1}^{n}$. All transforms are fit within cross-validation folds to prevent leakage; test windows come from unseen scenarios/days to reflect real deployment.

where $\mathcal{Y}_f$ represents the set of subtype labels belonging to attack family $f$ (e.g., DDoS, Reconnaissance, Web Attacks). The goal is to design an intrusion detection model suitable for resource-constrained IoT gateways, subject to strict latency and memory constraints, while maintaining high subtype-level discrimination.

Preprocessing and normalization

The goal of this step is to construct robust, stationary features under IoT burstiness and protocol heterogeneity. CICIoT2023 spans many protocols, and its distributions are heavy-tailed and imbalanced. The pipeline comprises (i) protocol-aware parsing, (ii) outlier control, (iii) class-preserving resampling, and (iv) monotone scaling, applied immediately after raw feature extraction and before RS2FS.

Outlier control and robust scaling

Let $x_j$ be feature $j$ over the training fold. Define the robust Z-score

$$z_j = \frac{x_j - \mathrm{median}(x_j)}{1.4826\,\mathrm{MAD}(x_j)} \qquad (1)$$

and clip $z_j$ to $[-\kappa, \kappa]$ with clip constant $\kappa$. Apply a quantile transform $Q_j$ to uniform on $[0, 1]$, followed by standardization:

$$\tilde{x}_j = \frac{Q_j(z_j) - \bar{u}_j}{s_{u_j}} \qquad (2)$$
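As a concrete illustration, the robust-Z and quantile steps can be sketched in pure Python. The clip constant 5 and the average-rank quantile convention are illustrative choices here, not the paper's fitted values:

```python
import statistics

def robust_z(values, clip=5.0):
    """Robust Z-score using median and MAD (Eq. 1-style), then clipping."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    scale = 1.4826 * mad  # consistency constant for Gaussian data
    return [max(-clip, min(clip, (v - med) / scale)) for v in values]

def quantile_uniform(values):
    """Map values to [0, 1] by their empirical quantile (average rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    u = [0.0] * n
    for rank, i in enumerate(order):
        u[i] = (rank + 0.5) / n
    return u

z = robust_z([1.0, 2.0, 3.0, 100.0])      # the outlier 100.0 is clipped to +5
u = quantile_uniform([1.0, 2.0, 3.0, 100.0])
```

After this step, the standardization in Eq. (2) is an ordinary mean/variance rescaling of the uniformized values within the training fold.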

Class-preserving adaptive SMOTE (CP-SMOTE)

Let minority class $c$ have sample set $S_c$. For each $x_i \in S_c$, generate a synthetic point

$$x_{\mathrm{syn}} = x_i + \lambda\,(x_k - x_i), \qquad \lambda \sim \mathrm{U}(0, 1) \qquad (3)$$

where $x_k$ is a neighbor of $x_i$ within the same scenario/day to preserve concept structure. The oversampling factor is capped (2–3×) to avoid density distortion (Fig. 2).
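The class-preserving constraint can be sketched as follows. This is a minimal illustration: neighbors are restricted to the same scenario tag, interpolation uses λ ~ U(0, 1), and the oversampling factor is capped; a full CP-SMOTE would pick among k nearest neighbors in feature space rather than any same-scenario peer:

```python
import random

def cp_smote(minority, scenario_of, factor=2, seed=0):
    """Class-preserving SMOTE sketch: interpolate only toward neighbors
    from the SAME scenario/day, with a capped oversampling factor."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(min(factor, 3) - 1):           # cap total at 2-3x
        for i, x in enumerate(minority):
            # candidate neighbors restricted to the same scenario/day
            peers = [j for j in range(len(minority))
                     if j != i and scenario_of[j] == scenario_of[i]]
            if not peers:
                continue
            k = rng.choice(peers)
            lam = rng.random()                     # lambda ~ U(0, 1)
            synthetic.append([a + lam * (b - a)
                              for a, b in zip(x, minority[k])])
    return synthetic

pts = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
syn = cp_smote(pts, scenario_of=["day1", "day1", "day2"], factor=2)
```

The lone "day2" point gets no synthetic copies, which is exactly the intended behavior: interpolation never crosses scenario boundaries.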

Fig. 2. Workflow diagram of the proposed RS2FS (Recursive Sparse & Relevance-based Feature Selection) pipeline.

Protocol-aware feature construction

This step builds a heterogeneous descriptor combining statistical, temporal, frequency, entropy, protocol, and graph features. IoT attacks alter burstiness, periodicity, and endpoint structure across protocols, so we use closed-form statistics and transforms (DWT, entropy, ego-net metrics). This runs before selection; RS2FS then compresses the descriptor into a compact, stable subset.

Let per-window packet times be $\{t_i\}$, sizes $\{s_i\}$, and inter-arrival times $\tau_i = t_{i+1} - t_i$ (within the window). Representative features:

The feature subsets produced by RS2FS directly support the next stage of the pipeline by enabling well-calibrated confidence estimates for the SVM gates. Because the selected features are low-dimensional and stable across resampling, the margin distributions at each SVM stage become sharper and more reliable, forming the foundation for the confidence-gated cascade described in Section "Confidence-gated cascaded classification".

Statistical/temporal

$$\mu_s = \frac{1}{n_w}\sum_i s_i, \quad \sigma_s^2 = \frac{1}{n_w}\sum_i (s_i - \mu_s)^2, \quad \mathrm{rate}_w = \frac{n_w}{W} \qquad (4)$$
$$B = \frac{\sigma_\tau - \mu_\tau}{\sigma_\tau + \mu_\tau} \in [-1, 1] \qquad (5)$$

Entropy of size series

Let $\{p_b\}$ be the normalized histogram over packet sizes;

$$H(s) = -\sum_b p_b \log_2 p_b \qquad (6)$$
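The Eq. (6)-style entropy over a packet-size histogram can be computed directly; the bin width of 64 bytes below is an illustrative choice, not a parameter from the paper:

```python
import math
from collections import Counter

def size_entropy(sizes, bin_width=64):
    """Shannon entropy (bits) of the packet-size histogram (Eq. 6-style)."""
    bins = Counter(s // bin_width for s in sizes)  # histogram over size bins
    n = len(sizes)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

uniform = size_entropy([0, 64, 128, 192])   # 4 equally filled bins
constant = size_entropy([100] * 8)          # single bin, zero entropy
```

High entropy flags random-looking payload sizes (common in volumetric floods); near-zero entropy flags fixed-size beaconing.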

Frequency via DWT (db4, 3 levels) on the packet-rate series $r(t)$

Let $a_3$ and $d_1, d_2, d_3$ be the approximation/detail coefficients:

$$E_\ell = \sum_k d_{\ell,k}^2, \ \ \ell = 1, 2, 3; \qquad E_a = \sum_k a_{3,k}^2 \qquad (7)$$

Protocol features

For MQTT, encode the QoS level, control flags, and topic depth. For CoAP, the method and response code class. For Modbus, the function code. One-hot encode categoricals; preserve counts/rates.

Lightweight graph signalization

For each source $s$ in window $w$, form the ego-net $G_s$. Let $\deg(s)$ be the degree, $C(s)$ the clustering coefficient, and $b(s)$ the normalized betweenness. Aggregate by mean/max over sources:

$$g_w = \bigl[\operatorname{mean}_s \deg(s),\ \max_s \deg(s),\ \operatorname{mean}_s C(s),\ \max_s C(s),\ \operatorname{mean}_s b(s),\ \max_s b(s)\bigr] \qquad (8)$$

RS2FS: recursive sparse & relevance-based feature selection

RS2FS is a four-stage selection that yields small, stable, non-redundant sets tuned per cascade stage. CICIoT2023 is high-dimensional with correlated protocol features; aggressive selection reduces variance and latency while keeping attack-salient descriptors. The stages are (1) sparse screening, (2) relevance filtering (MI/mRMR), (3) stability selection, and (4) recursive margin pruning with redundancy control. RS2FS is run separately for the binary gate, the family classifier, and the subtype ANFIS inputs28.

Let $\tilde{x} \in \mathbb{R}^D$, with labels $y \in \{-1, +1\}$ for the current stage (extend to multi-class via one-vs-rest when needed).

Sparse screening via elastic net logistic

We solve

$$\min_{w, b}\ \frac{1}{n}\sum_{i=1}^{n} \log\!\left(1 + e^{-y_i (w^\top \tilde{x}_i + b)}\right) + \lambda_1 \|w\|_1 + \lambda_2 \|w\|_2^2 \qquad (9)$$

$$S_1 = \{\, j : w_j \neq 0 \,\} \qquad (10)$$

with $\lambda_1$ (sparsity) and $\lambda_2$ set by fivefold CV. Keep $S_1$.

L1 induces sparsity (variance control); L2 groups correlated features (stability).

Relevance and redundancy: MI ∩ mRMR

Compute the mutual information $I(X_j; Y)$ for each feature. For a target budget $K$,

$$S_{\mathrm{MI}} = \operatorname{TopK}_{j}\ I(X_j; Y) \qquad (11)$$

Compute the mRMR-MID sequence $S_{\mathrm{mRMR}}$ by greedily maximizing

$$j^{*} = \arg\max_{j \notin S}\left[\, I(X_j; Y) - \frac{1}{|S|}\sum_{k \in S} I(X_j; X_k) \,\right] \qquad (12)$$

where $S$ is the already selected set. Define

$$S_2 = S_{\mathrm{MI}} \cap S_{\mathrm{mRMR}} \qquad (13)$$

MI captures what is predictive; mRMR penalizes how redundant a feature is with already chosen ones.
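The greedy mRMR-MID step can be sketched with a plug-in MI estimator on discretized features. The paper uses kNN-based MI estimators on continuous features; the discrete version below is purely for illustration of the relevance-minus-redundancy trade-off:

```python
import math
from collections import Counter

def mutual_info(a, b):
    """Plug-in MI (bits) between two discrete sequences."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * math.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

def mrmr(features, y, budget):
    """Greedy mRMR-MID: maximize relevance minus mean redundancy (Eq. 12)."""
    selected, remaining = [], list(range(len(features)))
    while remaining and len(selected) < budget:
        def score(j):
            rel = mutual_info(features[j], y)
            red = (sum(mutual_info(features[j], features[k]) for k in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

y  = [0, 0, 1, 1, 2, 2, 3, 3]          # 2-bit label
f0 = [0, 0, 0, 0, 1, 1, 1, 1]          # high bit of y
f1 = [0, 0, 0, 0, 1, 1, 1, 1]          # exact copy of f0 (fully redundant)
f2 = [0, 0, 1, 1, 0, 0, 1, 1]          # low bit of y (independent of f0)
picked = mrmr([f0, f1, f2], y, budget=2)
```

All three features are equally relevant (1 bit each), but the redundancy penalty makes the greedy step skip the copy `f1` and pick the complementary `f2`.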

Stability selection

Subsample rows at rate $\rho$ and repeat the screening $B$ times (e.g., $B = 50$). For feature $j$, define the selection probability

$$\hat{\pi}_j = \frac{1}{B}\sum_{b=1}^{B} \mathbb{1}\!\left[\, j \in \hat{S}^{(b)} \,\right] \qquad (14)$$

Keep features with $\hat{\pi}_j \geq \pi_{\mathrm{thr}}$ to obtain $S_3$.

Stabilizes under sampling noise; reduces fold-to-fold volatility before building the gate/classifier.
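The Eq. (14)-style selection probability is just a selection frequency over subsamples. Below is a minimal sketch with a toy screener that keeps the single most label-correlated feature; the subsample rate ρ = 0.5 and B = 20 are illustrative:

```python
import random

def stability_probs(X, y, screen, rho=0.5, B=50, seed=0):
    """Selection probability per feature over B subsamples (Eq. 14-style)."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    counts = [0] * d
    for _ in range(B):
        idx = rng.sample(range(n), max(2, int(rho * n)))
        for j in screen([X[i] for i in idx], [y[i] for i in idx]):
            counts[j] += 1
    return [c / B for c in counts]

def screen_top1_corr(X, y):
    """Toy screener: keep the single feature most correlated with y."""
    n, d = len(X), len(X[0])
    def corr(j):
        mx = sum(r[j] for r in X) / n
        my = sum(y) / n
        cov = sum((r[j] - mx) * (yy - my) for r, yy in zip(X, y))
        vx = sum((r[j] - mx) ** 2 for r in X) ** 0.5
        vy = sum((yy - my) ** 2 for yy in y) ** 0.5
        return abs(cov) / (vx * vy + 1e-12)
    return [max(range(d), key=corr)]

rng = random.Random(1)
X = [[float(i % 2), rng.random()] for i in range(40)]  # feature 0 = signal
y = [row[0] for row in X]
pi = stability_probs(X, y, screen_top1_corr, B=20)
```

The signal feature is re-selected in every subsample, so its selection probability saturates at 1 while the noise feature stays at 0.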

Recursive margin-aware pruning + redundancy clustering

Train a linear SVM on $S_3$ to measure the weight magnitudes $|w_j|$.

Prune the bottom $q\%$ of features by $|w_j|$, retrain, and accept the pruning step if the validation macro-F1 drop is at most $\epsilon$ (e.g., 0.20 pp). Iterate until convergence. Finally, remove redundancy by clustering features using Spearman correlation:

$$d(j, k) = 1 - \bigl|\rho_S(X_j, X_k)\bigr| \qquad (15)$$

cut at a correlation threshold and keep the highest-MI representative in each cluster. The final set is $S^{\star}$, with size typically 25–60.

The margin weights encode where decision leverage comes from; recursive pruning answers how much we can shrink without harming generalization; correlation clustering prevents what-to-keep duplication.
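The Spearman-based redundancy step can be sketched as follows. The greedy pairwise removal below is a simple stand-in for full correlation clustering, the 0.9 threshold is illustrative, and the tie-free rank computation assumes no duplicate values:

```python
def spearman(a, b):
    """Spearman rank correlation (no ties assumed in this sketch)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for k, i in enumerate(order):
            r[i] = k
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

def drop_redundant(features, mi, thr=0.9):
    """Among feature pairs with |rho_S| > thr, keep only the higher-MI one
    (a greedy stand-in for correlation clustering, Eq. 15-style)."""
    keep = list(range(len(features)))
    for i in list(keep):
        for j in list(keep):
            if i < j and i in keep and j in keep:
                if abs(spearman(features[i], features[j])) > thr:
                    keep.remove(i if mi[i] < mi[j] else j)
    return keep

f0 = [1.0, 2.0, 3.0, 4.0, 5.0]
f1 = [2.0, 4.0, 6.0, 8.0, 10.0]   # monotone copy of f0 -> rho_S = 1
f2 = [5.0, 1.0, 4.0, 2.0, 3.0]    # unrelated ordering
kept = drop_redundant([f0, f1, f2], mi=[0.9, 0.5, 0.4])
```

The monotone duplicate is dropped in favor of its higher-MI twin, while the unrelated feature survives.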

Confidence-gated cascaded classification

We train a three-stage cascade that routes easy cases to fast SVMs and sends ambiguous, family-localized windows to ANFIS for fuzzy, rule-level disambiguation.

The three-stage depth of the proposed cascade is motivated by the intrinsic hierarchy present in IoT attack semantics. Benign-versus-attack decisions are dominated by high-margin separability and are most efficiently handled by a binary SVM gate. Attack families exhibit multi-class structure that is still linearly margin-friendly, motivating a second SVM operating at the family level. In contrast, subtype-level distinctions (e.g., variations within DDoS or web attacks) often involve protocol-dependent, overlapping distributions that benefit from fuzzy rule-based modeling. The depth of the cascade therefore mirrors the difficulty progression across levels of IoT attack taxonomy rather than arbitrarily stacking models.

Stage-1: binary SVM gate (Benign vs. Attack)

Let the Stage-1 classifier be an SVM with RBF kernel $K(x, x') = \exp(-\gamma \|x - x'\|^2)$. The decision function:

$$f(x) = \sum_{i \in \mathrm{SV}} \alpha_i y_i K(x_i, x) + b \qquad (16)$$

Calibrate to probability via Platt scaling:

$$\hat{p}(\text{attack} \mid x) = \frac{1}{1 + \exp\bigl(A f(x) + B\bigr)} \qquad (17)$$

If $\max\{\hat{p}, 1 - \hat{p}\} \geq \tau_1$ (confidence threshold), accept; else escalate to Stage-2. Fast, high-margin separation catches most trivial cases; calibration yields a where-to-route signal.
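The Platt gate reduces to a sigmoid over the margin plus a threshold test. The parameters A, B and τ₁ below are illustrative placeholders, not fitted values from the paper:

```python
import math

def platt(f, A, B):
    """Map an SVM margin f(x) to a probability via Platt scaling (Eq. 17)."""
    return 1.0 / (1.0 + math.exp(A * f + B))

def stage1_route(margin, A=-1.5, B=0.0, tau1=0.9):
    """Accept confidently classified windows; escalate the ambiguous ones.
    A, B, tau1 are illustrative values, not fitted parameters."""
    p_attack = platt(margin, A, B)
    if p_attack >= tau1:
        return ("attack", p_attack)
    if 1.0 - p_attack >= tau1:
        return ("benign", p_attack)
    return ("escalate", p_attack)

confident = stage1_route(3.0)   # large positive margin -> accept as attack
ambiguous = stage1_route(0.1)   # near the boundary -> escalate to Stage-2
```

In practice A and B are fitted by maximum likelihood on held-out margins, so the gate's probability is calibrated rather than a raw sigmoid squash.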

Stage-2: family-level SVM

Train a multi-class SVM (one-vs-rest) over families $\mathcal{A}$:

$$\hat{f} = \arg\max_{k \in \mathcal{A}} p_k(x) \qquad (18)$$

where $p_k(x)$ is the calibrated one-vs-rest posterior for family $k$. Let $p_{\hat{f}} = \max_k p_k(x)$. If $p_{\hat{f}} \geq \tau_2$, accept family $\hat{f}$; else escalate to Stage-3 ANFIS (more expressive, slower). Family granularity is still margin-friendly; the SVM handles it efficiently and narrows the search space for ANFIS.

Stage-3: ANFIS per family (Subtype Classification)

For the selected family $\hat{f}$ (or when Stage-2 confidence is low), feed a family-specific RS2FS subset into an ANFIS specialized to that family’s subtypes. The cascaded routing mechanism naturally produces a stratification of instances: high-confidence, separable traffic; medium-confidence family-level traffic; and low-confidence, potentially novel or unfamiliar patterns. This stratification makes it possible to identify when the open-set guard should intervene. In particular, samples that repeatedly fail to meet confidence thresholds across stages form the operational entry point for open-set recognition discussed in Section "Open-set rejection via Mahalanobis distance in RS2FS space".

ANFIS design and training

We use a first-order Takagi–Sugeno–Kang (TSK) ANFIS that learns interpretable rules for subtype separation inside a family. Intra-family subtypes differ through nuanced, protocol-conditioned regimes; fuzzy partitions approximate where decision boundaries bend, while linear consequents explain how features interact. We initialize rules by subtractive clustering, then train consequents by least squares and premises by gradient descent (hybrid learning).

Final stage for hard or family-localized windows.

Membership functions and rule base

Let $x \in \mathbb{R}^{d_f}$ be the family-specific selected features (a subset of $S^{\star}$). For feature $j$, use $M_j$ Gaussian MFs:

$$\mu_{j,m}(x_j) = \exp\!\left(-\frac{(x_j - c_{j,m})^2}{2\,\sigma_{j,m}^2}\right) \qquad (19)$$

A rule $r$ is

$$R_r:\ \text{IF } x_1 \text{ is } A_1^{r} \text{ AND } \dots \text{ AND } x_{d_f} \text{ is } A_{d_f}^{r} \text{ THEN } y_r = a_r^\top x + b_r \qquad (20)$$

with firing strength

$$\omega_r(x) = \prod_{j=1}^{d_f} \mu_{A_j^{r}}(x_j) \qquad (21)$$

The normalized firing strength is $\bar{\omega}_r = \omega_r / \sum_q \omega_q$. The ANFIS output $\hat{y}$ (real-valued score per subtype class in one-vs-rest coding) is

$$\hat{y}(x) = \sum_r \bar{\omega}_r(x)\,\bigl(a_r^\top x + b_r\bigr) \qquad (22)$$
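The TSK forward pass (Eqs. 19–22) is compact enough to sketch directly. The two rules and their centers/widths/consequents below are invented toy values, chosen so each rule dominates in one region:

```python
import math

def gauss(x, c, sigma):
    """Gaussian membership function (Eq. 19)."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def anfis_forward(x, rules):
    """First-order TSK forward pass: product firing strengths,
    normalization, and weighted linear consequents (Eqs. 20-22)."""
    w = []
    for mfs, _, _ in rules:
        prod = 1.0
        for xj, (c, s) in zip(x, mfs):
            prod *= gauss(xj, c, s)
        w.append(prod)
    total = sum(w) or 1e-12
    return sum((wi / total) * (sum(a * xj for a, xj in zip(coef, x)) + b)
               for wi, (_, coef, b) in zip(w, rules))

# Two illustrative rules over a 2-feature input: (MFs, consequent coefs, bias)
rules = [
    ([(0.0, 1.0), (0.0, 1.0)], [1.0, 0.0], 0.0),   # fires near the origin
    ([(5.0, 1.0), (5.0, 1.0)], [0.0, 1.0], 2.0),   # fires near (5, 5)
]
y_near_origin = anfis_forward([0.0, 0.0], rules)
y_near_five = anfis_forward([5.0, 5.0], rules)
```

Because firing strengths are normalized, each input is effectively explained by the locally dominant rule's linear consequent, which is what makes the rule base auditable.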

Subtractive clustering initialization

Given data $\{x_i\}_{i=1}^{n}$, define the potential at point $x_i$:

$$P_i = \sum_{k=1}^{n} \exp\!\left(-\frac{4\,\|x_i - x_k\|^2}{r_a^2}\right) \qquad (23)$$

with cluster radius $r_a$. Select cluster centers by iteratively picking the highest-potential point, then suppressing potential around it. Initialize MF centers $c_{j,m}$ from the cluster-center projections; set widths $\sigma_{j,m}$ proportional to cluster spread along each feature axis.

Automatically sets where the data concentrates in each family; initializes a compact rule base.

Hybrid learning

Given fixed premises $\{c_{j,m}, \sigma_{j,m}\}$, the ANFIS output is linear in the consequents $\theta = \{a_r, b_r\}$. Stack the $n$ samples and solve

$$\theta^{\star} = \arg\min_{\theta}\ \|H\theta - y\|_2^2 \qquad (24)$$

where $H$ has rows built from $\bar{\omega}_r(x_i)\,[x_i^\top, 1]$. Update the premises by gradient descent on the loss

$$L = \frac{1}{n}\sum_{i=1}^{n}\bigl(\hat{y}(x_i) - y_i\bigr)^2 \qquad (25)$$

$$c_{j,m} \leftarrow c_{j,m} - \eta\,\frac{\partial L}{\partial c_{j,m}} \qquad (26)$$

Analogously,

$$\sigma_{j,m} \leftarrow \sigma_{j,m} - \eta\,\frac{\partial L}{\partial \sigma_{j,m}} \qquad (27)$$

The learning rate $\eta$ is kept small; weight decay is applied to the consequents, and rule pruning removes rules with mean $\bar{\omega}_r < \epsilon$ (e.g., 0.050). Hybrid training separates how to fit linear consequents efficiently from where to adjust fuzzy partitions for maximal discriminability.

Confidence-aware routing, calibration, and cost sensitivity

Let $p_1$ denote the Stage-1 calibrated probability of “attack,” and $p_2(f)$ the family posterior from Stage-2. Define routing:

  • Accept-1: If $\max\{p_1, 1 - p_1\} \geq \tau_1$, output the binary decision $\hat{y}^{(1)}$.

  • Else → Stage-2: If $\max_f p_2(f) \geq \tau_2$, output family $\hat{f}$.

  • Else → Stage-3: Apply the family-specific ANFIS for $\hat{f}$ (or ANFIS over all families when no family consensus).
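The routing rules above can be sketched as a single function. The thresholds τ₁ = 0.9 and τ₂ = 0.8 are illustrative, and the acceptance of the Stage-1 binary decision for both the benign and attack sides is one plausible reading of Accept-1:

```python
def route(p1, p2_by_family, tau1=0.9, tau2=0.8):
    """Confidence-gated routing: Stage-1 binary gate, Stage-2 family SVM,
    Stage-3 family-specific ANFIS. Thresholds are illustrative."""
    if max(p1, 1.0 - p1) >= tau1:                 # Accept-1: binary decision
        return ("stage1", "attack" if p1 >= 0.5 else "benign")
    fam, p2 = max(p2_by_family.items(), key=lambda kv: kv[1])
    if p2 >= tau2:                                # Accept-2: family decision
        return ("stage2", fam)
    return ("stage3-anfis", fam)                  # escalate to fuzzy subtype

easy = route(0.02, {"DDoS": 0.50, "Recon": 0.30})
mid = route(0.60, {"DDoS": 0.90, "Recon": 0.05})
hard = route(0.60, {"DDoS": 0.50, "Recon": 0.40})
```

This mirrors the reported stage occupancy: most windows terminate at Stage-1, a minority at Stage-2, and only the ambiguous tail reaches ANFIS.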

We tune the thresholds $(\tau_1, \tau_2)$ to minimize the expected cost:

$$(\tau_1^{\star}, \tau_2^{\star}) = \arg\min_{\tau_1, \tau_2}\ \sum_{y, \hat{y}} C(y, \hat{y})\,\Pr(\hat{y} \mid y; \tau_1, \tau_2)\,\Pr(y) \qquad (28)$$

with the cost matrix $C$ set to penalize false alarms vs. misses depending on operational priorities (e.g., heavier cost on missed DDoS).

Calibration quality is measured via Expected Calibration Error (ECE):

$$\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n}\,\bigl|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)\bigr| \qquad (29)$$

where $B_m$ are the probability bins, $\mathrm{acc}(B_m)$ the empirical accuracy, and $\mathrm{conf}(B_m)$ the mean confidence in bin $m$.
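The Eq. (29) computation is a direct binning exercise; the ten equal-width bins below match the usual ECE convention:

```python
def ece(probs, correct, n_bins=10):
    """Expected Calibration Error (Eq. 29): bin by confidence, average
    |accuracy - mean confidence| weighted by bin occupancy."""
    bins = [[] for _ in range(n_bins)]
    for p, c in zip(probs, correct):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, c))
    n = len(probs)
    total = 0.0
    for b in bins:
        if not b:
            continue
        conf = sum(p for p, _ in b) / len(b)   # mean confidence in bin
        acc = sum(c for _, c in b) / len(b)    # empirical accuracy in bin
        total += (len(b) / n) * abs(acc - conf)
    return total

overconfident = ece([0.95, 0.95, 0.95, 0.95], [1, 1, 1, 0])  # conf 0.95, acc 0.75
calibrated = ece([0.8, 0.8, 0.8, 0.8, 0.8], [1, 1, 1, 1, 0]) # conf 0.8, acc 0.8
```

A low ECE, as reported for the cascade, means thresholds chosen on validation probabilities transfer predictably to deployment.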

Open-set rejection via Mahalanobis distance in RS2FS space

This stage rejects windows that do not resemble any known class: IoT deployments evolve and novel attacks appear. We use class-conditional Gaussian proxies in the selected feature space, with thresholds from chi-square quantiles, applied after each stage with the corresponding selected features. For class $c$, estimate the mean $\mu_c$ and covariance $\Sigma_c$ over the selected features. Define

$$d_c^2(x) = (x - \mu_c)^\top \Sigma_c^{-1} (x - \mu_c) \qquad (30)$$

Let $\tau_{\mathrm{OOD}}$ be the $0.99$ quantile of the $\chi^2_{d}$ distribution (with $d$ the selected feature dimension). Reject $x$ if $\min_c d_c^2(x) > \tau_{\mathrm{OOD}}$.

In the Gaussianizable RS2FS space, distance contours approximate likelihood regions; this provides a lightweight guard against unknowns.
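The guard can be sketched with a diagonal-covariance simplification of Eq. (30); the full method inverts the per-class covariance $\Sigma_c$. The cutoff 9.21 is the 0.99 chi-square quantile for the 2-dimensional toy case here (the real cutoff depends on the selected feature dimension):

```python
def mahalanobis_sq_diag(x, mu, var):
    """Squared Mahalanobis distance with a diagonal covariance proxy
    (Eq. 30 simplified; the full version uses the inverse covariance)."""
    return sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mu, var))

def open_set_guard(x, class_stats, tau):
    """Reject if even the nearest class exceeds the chi-square cutoff."""
    d_min = min(mahalanobis_sq_diag(x, mu, var)
                for mu, var in class_stats.values())
    return d_min > tau

stats = {"benign": ([0.0, 0.0], [1.0, 1.0]),
         "ddos":   ([5.0, 5.0], [1.0, 1.0])}
tau_99 = 9.21        # chi-square 0.99 quantile, 2 degrees of freedom
known = open_set_guard([0.5, -0.5], stats, tau_99)    # near benign -> keep
novel = open_set_guard([20.0, -20.0], stats, tau_99)  # far from both -> reject
```

Because the check uses only per-class means and (co)variances, it adds negligible latency on top of the cascade.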

Concept drift detection and adaptive updating

We perform online change detection on classifier margins, with controlled updates of SVM calibration and ANFIS premises: traffic mix and device roles drift over days/weeks. Concretely, we run ADWIN on the signed margin stream and apply small-batch incremental re-fitting, deployed post-training in production.

Let $m_t = f(\tilde{x}_t)$ be the Stage-1 signed margin. ADWIN maintains two adaptive sub-windows $W_0, W_1$ and flags drift if $|\hat{\mu}_{W_0} - \hat{\mu}_{W_1}| > \epsilon_{\mathrm{cut}}$ (Hoeffding-style bound). Upon drift:

  • Re-calibrate the Platt parameters $(A, B)$ on the most recent $n_{\mathrm{cal}}$ labeled points.

  • Micro-tune ANFIS: one epoch of gradient descent on the premises $\{c_{j,m}, \sigma_{j,m}\}$ with a small learning rate $\eta$; enforce the rule budget.

  • RS2FS refresh (weekly): rerun Sections "Statistical/temporal"–"Protocol features" on a rolling buffer; warm-start pruning with the last $S^{\star}$.

Maintain calibration and partition fidelity where the data moved, without costly retraining.
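The drift trigger can be sketched with a fixed-window Hoeffding-style check. This is a deliberate simplification: real ADWIN grows and shrinks its window adaptively and uses a variance-aware bound, whereas the sketch below scans cut points of one fixed window; δ = 0.002 is an illustrative confidence parameter:

```python
import math

def hoeffding_drift(window, delta=0.002, min_cut=5):
    """Simplified ADWIN-style check on a margin stream: flag drift when two
    sub-window means differ by more than a Hoeffding-style bound."""
    n = len(window)
    for cut in range(min_cut, n - min_cut):
        w0, w1 = window[:cut], window[cut:]
        m = 1.0 / (1.0 / len(w0) + 1.0 / len(w1))   # combined size term
        eps = math.sqrt((1.0 / (2.0 * m)) * math.log(4.0 / delta))
        if abs(sum(w0) / len(w0) - sum(w1) / len(w1)) > eps:
            return True
    return False

stable = hoeffding_drift([0.9] * 40)                  # constant margins
shifted = hoeffding_drift([0.9] * 20 + [-0.9] * 20)   # abrupt margin flip
```

On the flipped stream the mean gap (1.8) far exceeds the bound, so recalibration would be triggered; the constant stream never trips it.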

Complexity and latency analysis

Let $D$ be the initial feature count, $d$ the selected size, $n$ the number of samples, and $R$ the number of ANFIS rules.

  • Elastic net: coordinate descent is $O(nD)$ per pass; later stages operate on the screened $d \ll D$.

  • MI/mRMR: empirical MI via kNN estimators, roughly $O(n \log n)$ per feature; truncated by Top-K.

  • Stability selection: multiplies cost by $B$ (e.g., 50) but operates on smaller sets after screening.

  • Linear SVM for pruning: $O(nd)$ per epoch; few epochs suffice.

  • RBF SVM (Stage-1/2): training $O(n^2)$ to $O(n^3)$ worst-case; mitigated by RS2FS (small $d$) and subsampling; inference $O(n_{\mathrm{SV}}\, d)$ with $n_{\mathrm{SV}}$ support vectors.

  • ANFIS: forward pass $O(R\, d_f)$; the hybrid least-squares step is $O(n (R\, d_f)^2)$ but family-scoped.

Latency target: < 10 ms per window on a commodity CPU with the compact selected feature set.

Results and discussion

The proposed model was developed on Google Colab using the Python deep learning toolbox. For training and testing, an NVIDIA Quadro P4000 GPU with 8 GB of RAM was used29,30. For evaluation, we used a tenfold cross-validation technique, partitioning the benchmark dataset into a training set to fit the model and a test set to evaluate it. The proposed architecture requires several hyper-parameters to be set to obtain optimum performance: epochs, learning rate, dropout, and batch size.

CICIoT2023 dataset description

The CICIoT2023 dataset, developed by the Canadian Institute for Cybersecurity (CIC) in 2023, is a comprehensive benchmark for evaluating intrusion detection systems in Internet of Things (IoT) environments. It was generated in a controlled testbed that simulates realistic IoT network traffic using multiple protocols, including MQTT, Modbus, CoAP, DNS, HTTP, HTTPS, and SSH, to capture the heterogeneity of IoT deployments. The dataset contains benign activities alongside 39 distinct cyberattack scenarios grouped into major categories such as Distributed Denial-of-Service (DDoS), Denial-of-Service (DoS), Reconnaissance, Web-based attacks, Brute Force, and Data Exfiltration.

Each network flow is characterized by a rich set of extracted features, including packet-level statistics, time-based metrics, flow behavior attributes, and protocol-specific indicators. This high-dimensional representation enables advanced feature selection methods, such as RS2FS, to identify the most relevant attributes for robust intrusion detection. The dataset encompasses both binary (attack vs. benign) and multi-class classification challenges, making it suitable for evaluating traditional, hybrid, and deep learning-based IDS models. Due to its diversity, recency, and alignment with real-world IoT security threats, CICIoT2023 serves as an ideal choice for benchmarking the proposed RS2FS-based cascaded SVM/ANFIS framework, ensuring rigorous performance assessment under modern cyberattack conditions.

Although CICIoT2023 is the primary benchmark used in this study, the dataset’s diversity across 39 attack scenarios, multi-protocol traffic (MQTT, CoAP, Modbus, DNS, SSH, HTTP/S), and its realistic IoT testbed design offers strong coverage of modern IoT threat behaviors. This makes it a suitable representative dataset for evaluating the proposed RS2FS–cascaded classifier pipeline. Nonetheless, we acknowledge that relying solely on CICIoT2023 limits the ability to claim full cross-dataset generalizability, especially when attack semantics differ across corpora such as TON_IoT, BoT-IoT, and CICIDS2019.

Validation analysis of proposed model

It is important to note that all results in this work are reported on CICIoT2023, and therefore the findings reflect the characteristics of this dataset. To partially mitigate this limitation, we adopt scenario-held-out validation rather than random splits, ensuring that models generalize across days, devices, and protocol mixes not seen during training. Additionally, RS2FS incorporates elastic-net grouping, stability selection, and margin pruning, which reduce dataset-specific overfitting by retaining only stable, high-signal features across resampled folds. These design choices improve the likelihood of cross-dataset transferability, although formal multi-dataset evaluations remain a critical future direction.

The proposed RS2FS + confidence-gated SVM → SVM → ANFIS model delivers the best overall detection quality on CICIoT2023 while keeping inference fast and compact and it is shown in Table 1. It achieves the top Macro-F1 = 0.962, edging the strongest non-proposed baseline (1D-CNN, 0.951) by + 1.1 percentage points (pp) and the SVM-only variant (0.943) by + 1.9 pp. This gain is consistent across Macro-AUC (0.991) and Balanced Accuracy (0.963), indicating improvements are not confined to a few dominant classes but are spread across families. Calibration and probabilistic reliability are where the cascade really distinguishes itself. The model posts the lowest Brier score (0.038) and lowest ECE (0.012) among all contenders (e.g., XGBoost ECE = 0.026, SVM-only = 0.037, 1D-CNN = 0.034). In practice, that means predicted probabilities align better with actual correctness—reducing alarm fatigue from over-confident false positives and making threshold tuning more dependable. On efficiency, the cascade’s latency is 6.3 ms per window, just + 1.4 ms slower than the fastest SVM-only baseline (4.9 ms), yet faster than tree ensembles (8.7–12.0 ms), ANFIS-only (13.2 ms), and the 1D-CNN (11.6 ms). The model size is 7.8 MB, larger than SVM-only (6.1 MB) and the tiny CNN (4.9 MB), but far smaller than Random Forest and gradient-boosted trees (17–25 MB). In short: you trade a small latency/size increase over SVM-only for meaningful accuracy and calibration gains; compared with heavier baselines, you get both higher accuracy and lower latency. MCC (0.952) reinforces the conclusion: the cascade improves balanced correlation between predictions and ground truth relative to SVM-only (0.931), XGBoost (0.934), and 1D-CNN (0.941). 
These results are consistent with the design: stage-specific RS2FS reduces noise and redundancy, the SVM gates handle easy cases quickly, and ANFIS resolves the hard, family-localized ambiguities—yielding a strong accuracy–calibration–latency trade-off suitable for real-time IoT intrusion detection.

Table 1.

Core model performance (scenario-held-out CICIoT2023; 5 × grouped-CV mean ± 95% CI).

Model Macro-F1 (mean) Macro-F1 CI (±) Macro-AUC Balanced Accuracy MCC Brier ECE Latency (ms/window) Model size (MB)
RS2FS + Cascaded SVM/ANFIS (confidence-gated) 0.962 0.004 0.991 0.963 0.952 0.038 0.012 6.3 7.8
SVM-only (with RS2FS) 0.943 0.006 0.982 0.947 0.931 0.051 0.037 4.9 6.1
ANFIS-only 0.906 0.008 0.962 0.912 0.888 0.078 0.061 13.2 9.4
Random Forest 0.932 0.006 0.978 0.936 0.918 0.057 0.044 12.0 25.3
XGBoost 0.945 0.005 0.984 0.948 0.934 0.049 0.026 9.4 18.7
LightGBM 0.942 0.005 0.983 0.946 0.932 0.050 0.028 8.7 17.2
1D-CNN (small) 0.951 0.005 0.986 0.953 0.941 0.045 0.034 11.6 4.9

Table 2 shows where decisions are taken in the cascade, how accurate each stage is on its routed subset, and what latency that implies. In the proposed RS2FS + SVM → SVM → ANFIS design, 68% of windows are confidently solved at Stage-1 (F1 = 0.985, 3.38 ms). Another 22% go to Stage-2 (F1 = 0.962, 7.73 ms). Only the hardest 10% escalate to ANFIS (F1 = 0.936, 23 ms). This selective depth yields an overall latency of 6.3 ms and Macro-F1 = 0.962. Ablating ANFIS (SVM → SVM) keeps routing similar but accepts more at Stage-2 (32%), trimming latency to 5.5 ms while dropping performance to 0.955 Macro-F1—showing why fuzzy disambiguation matters for the ambiguous tail. SVM-only is fastest (4.9 ms) because everything ends at Stage-1, but accuracy falls to 0.943. ANFIS-only routes 100% to the fuzzy layer, becoming slow (13.2 ms) and least accurate (0.906). Net: the confidence-gated cascade strikes the best speed–accuracy–calibration trade-off by solving easy cases early and reserving ANFIS for difficult ones.

Table 2.

Cascade behavior—CICIoT2023 (Illustrative).

Model S1 accept % S1 F1 (subset) S1 decision latency (ms) S2 accept % S2 F1 (subset) S2 decision latency (ms) S3 accept % S3 F1 (subset) S3 decision latency (ms) Overall avg latency (ms) Overall Macro-F1
RS2FS + SVM→SVM→ANFIS (confidence-gated) 68 0.985 3.38 22 0.962 7.73 10 0.936 23 6.3 0.962
RS2FS + SVM→SVM (no ANFIS) 68 0.983 3.5 32 0.955 9.75 0 n/a n/a 5.5 0.955
SVM-only (with RS2FS) 100 0.943 4.9 0 n/a n/a 0 n/a n/a 4.9 0.943
ANFIS-only 0 n/a n/a 0 n/a n/a 100 0.906 13.2 13.2 0.906

Table 3 reports per-class precision (P), recall (R), and F1 for each attack family, revealing where the cascade wins. Across all eight classes, the proposed RS2FS + SVM → SVM → ANFIS attains the highest or tied-highest F1: Benign 0.991, DDoS 0.984, DoS 0.958, Reconnaissance 0.961, Web Attacks 0.937, Brute Force 0.951, Data Exfiltration 0.921, and Botnet/Malware 0.942. These gains are driven mainly by recall on difficult, lower-support families—e.g., Data Exfiltration R = 0.915 and Botnet R = 0.936—while keeping precision competitive, indicating the fuzzy Stage-3 resolves borderline cases rather than over-flagging. Among the compared baselines: SVM-only trails in every class (e.g., Web Attacks F1≈0.919, Botnet 0.925), reflecting missed nuanced patterns once the decision boundary gets tight. XGBoost is consistently second best and close on large classes (Benign/DDoS), but lags on the trickier families (Data Exfiltration/Web), where protocol-aware features plus ANFIS rules help. ANFIS-only is the weakest overall (e.g., Botnet 0.885), showing that fuzzy rules without margin-based gating struggle on easy/clear separations. Bottom line: the cascade improves class balance—not just overall accuracy—by lifting underrepresented, high-risk families while retaining very high precision on Benign/DDoS. That per-family consistency explains the stronger Macro-F1 in the core results and validates the stage-specific RS2FS + confidence-gated routing design.

Table 3.

Per-class diagnostics—family level (Illustrative).

Class Support Proposed_P Proposed_R Proposed_F1 SVM_P SVM_R SVM_F1 ANFIS_P ANFIS_R ANFIS_F1 XGB_P XGB_R XGB_F1
Benign 8000 0.99 0.992 0.991 0.978 0.982 0.98 0.955 0.958 0.956 0.983 0.986 0.985
DDoS 2000 0.985 0.982 0.984 0.968 0.967 0.968 0.932 0.925 0.928 0.977 0.976 0.976
DoS 1500 0.963 0.952 0.958 0.941 0.932 0.936 0.9 0.892 0.896 0.952 0.942 0.947
Reconnaissance 2500 0.958 0.965 0.961 0.943 0.951 0.947 0.908 0.919 0.914 0.954 0.959 0.956
Web Attacks 1800 0.943 0.932 0.937 0.925 0.914 0.919 0.885 0.871 0.878 0.938 0.924 0.931
Brute Force 1200 0.955 0.947 0.951 0.938 0.931 0.934 0.903 0.892 0.898 0.947 0.939 0.943
Data Exfiltration 900 0.928 0.915 0.921 0.905 0.893 0.899 0.861 0.846 0.853 0.917 0.903 0.91
Botnet/Malware 600 0.949 0.936 0.942 0.932 0.919 0.925 0.894 0.877 0.885 0.941 0.928 0.934

This subtype analysis in Table 4 shows that the proposed RS2FS + SVM → SVM → ANFIS consistently tops every baseline at a finer granularity, not just at family level. For DDoS, the model attains F1 = 0.986 (UDP Flood), 0.980 (TCP SYN Flood), and 0.979 (HTTP Flood)—beating XGBoost by + 0.006–0.009 F1 and SVM-only by ≈ + 0.013–0.023. The lift is driven by higher recall (e.g., UDP: R = 0.984) while holding precision high (P≈0.988–0.981), meaning fewer missed attacks without extra false alarms. On tougher Web Attack subtypes—where signatures are subtler—the cascade still leads: SQL Injection F1 = 0.940, Cross-Site Scripting F1 = 0.933, Directory Traversal F1 = 0.931. Gains vs XGBoost are + 0.005–0.008, and vs SVM-only are ≈ + 0.011–0.022. ANFIS-only trails most (F1 ≈ 0.87–0.92), showing that fuzzy rules alone struggle without the margin-based gating. Stage-specific RS2FS retains subtype-salient protocol fields (e.g., MQTT/HTTP codes), temporal/frequency cues (DWT energies), and entropy measures; SVM gates resolve easy cases fast, and per-family ANFIS captures the nuanced where/how of feature interactions for ambiguous windows. The result is consistent subtype robustness—especially higher recall on web subtypes—confirming that improvements are not aggregate artifacts but hold at the operational level where analysts triage concrete attack variants.

Table 4.

Per-class diagnostics—subtypes examples (Illustrative).

Family Subtype Support Proposed_P Proposed_R Proposed_F1 SVM_P SVM_R SVM_F1 ANFIS_P ANFIS_R ANFIS_F1 XGB_P XGB_R XGB_F1
DDoS UDP Flood 900 0.988 0.984 0.986 0.974 0.971 0.973 0.936 0.929 0.933 0.982 0.978 0.98
DDoS TCP SYN Flood 700 0.982 0.978 0.98 0.967 0.962 0.964 0.928 0.92 0.924 0.975 0.971 0.973
DDoS HTTP Flood 400 0.981 0.978 0.979 0.964 0.961 0.962 0.925 0.917 0.921 0.972 0.968 0.97
Web Attacks SQL Injection 700 0.946 0.934 0.94 0.928 0.916 0.922 0.887 0.872 0.879 0.939 0.926 0.932
Web Attacks Cross-Site Scripting 600 0.941 0.926 0.933 0.922 0.909 0.915 0.88 0.864 0.872 0.936 0.921 0.928
Web Attacks Directory Traversal 500 0.939 0.924 0.931 0.918 0.904 0.911 0.878 0.861 0.869 0.934 0.918 0.926

Table 5 lists the Stage-1 (Binary Gate) features chosen by RS2FS and explains why they matter. Columns encode: Stability_Prob (how often a feature is re-selected across 50 subsamples), Mutual_Info (predictive strength vs. the label), mRMR_Rank (kept while minimizing redundancy), and |LinearSVM_w| (contribution to the separation margin used for recursive pruning). The top entries—pkt_rate_mean, iat_mean, payload_entropy, and dwt_d3_energy—capture volumetric load, timing regularity, randomness of size series, and burst/frequency content, which are classic signatures of DDoS/DoS vs. benign traffic. Protocol flags (e.g., mqtt_qos, coap_code_class, modbus_func) surface early, showing that small semantic cues are highly informative even at the binary stage. Graph metrics (degree_mean, betweenness_max) and burstiness reflect topology and traffic grouping shifts typical of scans/recon. Higher-order statistics (pkt_size_kurtosis) and topic_depth (MQTT hierarchy) add precision, while bytes_ratio_in_out captures asymmetry from exfiltration. Net: RS2FS selects a compact, stable and non-redundant set emphasizing traffic dynamics + protocol semantics, ideal for a fast, high-margin Stage-1 gate.

Table 5.

RS2FS—selected features (Illustrative).

Feature Group Stability_Prob Mutual_Info mRMR_Rank |LinearSVM_w| Stage
pkt_rate_mean Temporal 0.94 0.37 1 0.86 Stage-1 (Binary Gate)
iat_mean Temporal 0.91 0.31 4 0.79 Stage-1 (Binary Gate)
payload_entropy Entropy 0.88 0.29 6 0.74 Stage-1 (Binary Gate)
dwt_d3_energy Frequency 0.86 0.28 8 0.71 Stage-1 (Binary Gate)
bytes_per_packet Statistical 0.84 0.27 9 0.69 Stage-1 (Binary Gate)
mqtt_qos Protocol 0.82 0.26 12 0.66 Stage-1 (Binary Gate)
coap_code_class Protocol 0.81 0.25 13 0.65 Stage-1 (Binary Gate)
modbus_func Protocol 0.8 0.24 15 0.64 Stage-1 (Binary Gate)
degree_mean Graph 0.79 0.23 16 0.62 Stage-1 (Binary Gate)
betweenness_max Graph 0.77 0.22 18 0.6 Stage-1 (Binary Gate)
burstiness Temporal 0.76 0.21 19 0.59 Stage-1 (Binary Gate)
pkt_size_kurtosis Statistical 0.74 0.2 21 0.57 Stage-1 (Binary Gate)
topic_depth Protocol 0.73 0.19 22 0.56 Stage-1 (Binary Gate)
bytes_ratio_in_out Statistical 0.72 0.19 24 0.55 Stage-1 (Binary Gate)
dns_query_rate Protocol 0.7 0.18 26 0.54 Stage-1 (Binary Gate)

Table 6 evaluates the open-set guard that rejects traffic unlike any known class using Mahalanobis distance in the RS2FS space. Stage-1 achieves AUROC 0.981 for in- vs OOD separation and TPR@1%FPR = 0.912, meaning it correctly flags 91.2% of unknowns while allowing only 1% false alarms. FPR95 = 0.072 quantifies residual confusion near the decision boundary. With a χ2(99%) cutoff, RejectRate = 4.6%, yielding Coverage = 95.4% accepted. Stage-2 is slightly weaker (the multi-class family space is harder): AUROC 0.973, TPR@1%FPR 0.888, RejectRate 5.2%. Overall, the IDS preserves high coverage while filtering likely unknown attacks early.

Table 6.

Open-set—metrics (Illustrative).

Stage AUROC (In vs OOD) TPR@1%FPR FPR95 RejectRate@χ2_99% Coverage (1-RejectRate)
Stage-1 0.981 0.912 0.072 0.046 0.954
Stage-2 0.973 0.888 0.081 0.052 0.948

These stress tests in Table 7 probe resilience to routine data corruption. With + 5% timestamp jitter, Macro-F1 only dips to 0.957 (− 0.5 pp), showing temporal features and DWT energies remain stable. ± 10% packet-size noise yields 0.955 (− 0.7 pp), indicating entropy/size statistics are robust after robust-Z and quantile scaling. The largest impact is 10% missing features (0.949, − 1.3 pp), suggesting imputation dominates error; feature-aware imputation and redundancy in RS2FS can further mitigate this impact.

Table 7.

Robustness—stress tests (Illustrative).

Stress Macro-F1 ΔMacro-F1 (pp)
5% timestamp jitter 0.957 − 0.5
 ± 10% packet-size noise 0.955 − 0.7
10% features missing (median impute) 0.949 − 1.3

Table 8 provides the fuzzy rules learned by the DDoS family ANFIS. The three antecedents use Gaussian membership functions (MFs) over (1) payload_entropy, (2) topic_depth (MQTT hierarchy), and (3) dwt_d1_energy (high-frequency burst energy). For each feature $j$, the MF is centered at $c_j$ with width $\sigma_j$; these reflect where traffic clusters in that dimension. The column Mean_Firing is the average normalized firing strength $\bar{\omega}_r$: higher values mean the rule frequently explains DDoS windows. The consequent is a first-order linear model with an Intercept and weights $w_{\mathrm{payload\_entropy}}$, $w_{\mathrm{topic\_depth}}$, $w_{\mathrm{dwt\_d1\_energy}}$; the sign and magnitude tell how each feature pushes the score toward the predicted Subtype. Example: rules R1–R3 have mid–high payload_entropy centers and relatively shallow topic depth, paired with moderate dwt_d1_energy; they fire often for Subtype-2 (e.g., volumetric UDP floods with random-looking payload sizes on short MQTT topics). R4–R7 shift to higher dwt_d1_energy and slightly deeper topics, capturing spiky traffic bursts (typical of SYN/HTTP floods) while keeping positive weights on entropy and burst energy. R8–R10 show the highest centers (entropy and d1 energy), activating on the most aggressive bursts; their larger intercept/weights raise the subtype score even when one antecedent is slightly off-center. Operationally, auditors can: (i) list the top-3 fired rules for a sample, (ii) read $c \pm \sigma$ to see where it sits in feature space, and (iii) interpret the consequent weights to understand why the subtype decision was made.

Table 8.

Interpretability—ANFIS rules (DDoS, Illustrative).

| Family | Rule_ID | Feature1 | c1 | σ1 | Feature2 | c2 | σ2 | Feature3 | c3 | σ3 | Mean_Firing | Consequent_Intercept | w_payload_entropy | w_topic_depth | w_dwt_d1_energy | Class_Subtype |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DDoS | DDoS_R1 | payload_entropy | 0.62 | 0.12 | topic_depth | 3 | 0.8 | dwt_d1_energy | 0.43 | 0.10 | 0.14 | 0.1 | 0.31 | 0.215 | 0.258 | Subtype-2 |
| DDoS | DDoS_R2 | payload_entropy | 0.64 | 0.12 | topic_depth | 4 | 0.8 | dwt_d1_energy | 0.46 | 0.10 | 0.16 | 0.2 | 0.32 | 0.230 | 0.266 | Subtype-3 |
| DDoS | DDoS_R3 | payload_entropy | 0.66 | 0.12 | topic_depth | 2 | 0.8 | dwt_d1_energy | 0.49 | 0.10 | 0.18 | 0.3 | 0.33 | 0.245 | 0.274 | Subtype-1 |
| DDoS | DDoS_R4 | payload_entropy | 0.68 | 0.12 | topic_depth | 3 | 0.8 | dwt_d1_energy | 0.52 | 0.10 | 0.20 | 0.4 | 0.34 | 0.260 | 0.282 | Subtype-2 |
| DDoS | DDoS_R5 | payload_entropy | 0.70 | 0.12 | topic_depth | 4 | 0.8 | dwt_d1_energy | 0.55 | 0.10 | 0.22 | 0.5 | 0.35 | 0.275 | 0.290 | Subtype-3 |
| DDoS | DDoS_R6 | payload_entropy | 0.72 | 0.12 | topic_depth | 2 | 0.8 | dwt_d1_energy | 0.58 | 0.10 | 0.24 | 0.6 | 0.36 | 0.290 | 0.298 | Subtype-1 |
| DDoS | DDoS_R7 | payload_entropy | 0.74 | 0.12 | topic_depth | 3 | 0.8 | dwt_d1_energy | 0.61 | 0.10 | 0.26 | 0.7 | 0.37 | 0.305 | 0.306 | Subtype-2 |
| DDoS | DDoS_R8 | payload_entropy | 0.76 | 0.12 | topic_depth | 4 | 0.8 | dwt_d1_energy | 0.64 | 0.10 | 0.28 | 0.8 | 0.38 | 0.320 | 0.314 | Subtype-3 |
| DDoS | DDoS_R9 | payload_entropy | 0.78 | 0.12 | topic_depth | 2 | 0.8 | dwt_d1_energy | 0.67 | 0.10 | 0.30 | 0.9 | 0.39 | 0.335 | 0.322 | Subtype-1 |
| DDoS | DDoS_R10 | payload_entropy | 0.80 | 0.12 | topic_depth | 3 | 0.8 | dwt_d1_energy | 0.70 | 0.10 | 0.32 | 1.0 | 0.40 | 0.350 | 0.330 | Subtype-2 |

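The rule evaluation in Table 8 follows the standard first-order Takagi–Sugeno scheme. A minimal sketch of how one rule's contribution could be computed, using the DDoS_R1 parameters from Table 8; the product t-norm and the specific input values are our assumptions, chosen for illustration:

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership: exp(-(x - c)^2 / (2 * sigma^2))."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

# Rule DDoS_R1 from Table 8: MF centers/widths for the three antecedents
# (payload_entropy, topic_depth, dwt_d1_energy) and its linear consequent.
centers = np.array([0.62, 3.0, 0.43])
sigmas  = np.array([0.12, 0.8, 0.10])
intercept, weights = 0.10, np.array([0.31, 0.215, 0.258])

x = np.array([0.65, 3.0, 0.45])  # one feature window (illustrative values)

# Firing strength via product t-norm over the three antecedent MFs.
omega = np.prod(gaussian_mf(x, centers, sigmas))

# First-order (Sugeno) consequent: linear model in the inputs,
# weighted by the rule's firing strength.
score = omega * (intercept + weights @ x)
```

In the full system each family's rules are evaluated this way, the firing strengths are normalized across rules, and the weighted consequents are summed to produce the subtype score.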
We used two complementary paired significance tests to validate the reported statistics. First, the Wilcoxon signed-rank test was applied to paired F1-scores from repeated scenario-held-out folds to detect differences in central tendency. Second, McNemar's test was employed to assess classifier disagreement on matched predictions, providing a rigorous decision-level comparison. For each statistic we computed bootstrapped 95% confidence intervals over 10,000 resamples to check stability. Because our model was compared against several baselines, we applied the Holm–Bonferroni correction to the p-values to control the family-wise error rate. Statistical significance was reported only when adjusted p-values satisfied p < 0.01.
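The fold-level test and bootstrap CI described above can be sketched with SciPy; the per-fold scores below are illustrative placeholders, not the study's actual fold values:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Hypothetical per-fold Macro-F1 scores for our model vs. one baseline
# (illustrative placeholders only).
ours     = np.array([0.961, 0.964, 0.960, 0.963, 0.962])
baseline = np.array([0.940, 0.947, 0.938, 0.946, 0.941])

# Paired Wilcoxon signed-rank test on the fold scores.
stat, p = wilcoxon(ours, baseline)

# Bootstrap 95% CI of the mean paired difference (10,000 resamples).
diffs = ours - baseline
boot = rng.choice(diffs, size=(10_000, diffs.size), replace=True).mean(axis=1)
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

With only five folds the exact two-sided Wilcoxon p-value cannot fall below 0.0625, which is why the study pairs this test with McNemar's decision-level test and bootstrap intervals before claiming significance.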

Interpretability evaluation of ANFIS subtype decisions

To verify the interpretability claims of the ANFIS layer, we conducted a combined qualitative and quantitative evaluation across all attack families. Each per-family ANFIS model learned between 8 and 14 fuzzy rules, each with 2 to 4 antecedent features. This compact rule base keeps the rules readable and avoids the combinatorial explosion common in fuzzy systems. The antecedents were dominated by payload entropy, MQTT topic depth, DWT energy coefficients, and statistical burst metrics, features that RS2FS had already identified as highly stable and relevant, indicating a strong link between rule structure and data-driven feature relevance.

A qualitative analysis shows that the ANFIS rules encode subtype-specific behaviours. For instance, DDoS subtypes (UDP Flood vs. HTTP Flood) differed clearly in high-frequency energy (dwt_d1/d3) and payload entropy distributions, whereas Web Attack subtypes (SQL Injection vs. XSS) differed mainly in protocol-specific content length and rate irregularities. Table 8 lists representative rules and their membership conditions.

We quantified interpretability using rule activation statistics: for each family, 2–4 dominant rules accounted for 70–83% of the total firing strength across test samples, meaning subtype decisions rest on a small, consistently identifiable set of rules rather than a diffuse blend that is hard to audit. To assess agreement with external explanation tools, we compared RS2FS feature relevance against the ANFIS consequent weights. Kendall rank correlations ranged from 0.61 to 0.73, indicating moderate-to-strong consistency with SHAP-style attributions. These results show that ANFIS outputs are not only accurate but also transparent, providing interpretable insight into subtype distinctions.
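Both checks can be sketched as follows, using the Mean_Firing column of Table 8 for the rule-coverage statistic and hypothetical per-feature scores for the rank correlation (the relevance/weight vectors are illustrative, and the flatter illustrative firing distribution yields lower coverage than the 70–83% observed on real traffic):

```python
import numpy as np
from scipy.stats import kendalltau

# Rule coverage: fraction of total firing strength captured by the top-k rules.
mean_firing = np.array([0.14, 0.16, 0.18, 0.20, 0.22,
                        0.24, 0.26, 0.28, 0.30, 0.32])  # Table 8, Mean_Firing
share = mean_firing / mean_firing.sum()
top4_coverage = np.sort(share)[::-1][:4].sum()  # mass of the 4 strongest rules

# Rank agreement between RS2FS relevance and |consequent weights|
# (hypothetical per-feature values for illustration).
rs2fs_relevance = np.array([0.82, 0.61, 0.55, 0.40, 0.33])
anfis_weight    = np.array([0.33, 0.26, 0.29, 0.18, 0.12])
tau, p = kendalltau(rs2fs_relevance, anfis_weight)
```

A high `top4_coverage` means a handful of rules dominate the decision, and a high `tau` means the fuzzy consequents rank features similarly to the feature-selection stage.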

For completeness, we further evaluate inter-model performance differences using both parametric and non-parametric paired tests. Specifically, (i) a paired t-test is applied to the per-fold macro-F1 scores to assess differences in mean performance, and (ii) the Wilcoxon signed-rank test is used to assess median differences without assuming normality. To estimate the robustness of observed improvements, we compute 95% confidence intervals (CIs) using 10,000 bootstrap resamples for each model pair. All p-values are adjusted using the Holm–Bonferroni correction to control the family-wise error rate across multiple comparisons. Statistical superiority is reported only when both corrected Wilcoxon and paired t-test results satisfy p < 0.01, and the 95% CI of the difference does not cross zero.
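The Holm–Bonferroni step-down used for the multiple-comparison adjustment can be sketched as follows (the raw p-values are hypothetical):

```python
import numpy as np

def holm_bonferroni(pvals, alpha=0.01):
    """Holm step-down: sort p-values ascending and compare the i-th smallest
    against alpha / (m - i); stop rejecting at the first failure."""
    order = np.argsort(pvals)
    m = len(pvals)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # once one comparison fails, all larger p-values fail too
    return reject

# Hypothetical raw p-values against three baselines (illustrative only).
reject = holm_bonferroni(np.array([0.002, 0.004, 0.03]))
```

Holm's procedure is uniformly more powerful than plain Bonferroni while still controlling the family-wise error rate, which is why it is used when comparing against several baselines at once.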

Conclusion

We presented a stage-specific RS2FS pipeline coupled with a confidence-gated SVM → SVM → ANFIS cascade for real-time IDS on CICIoT2023. The design compresses heterogeneous protocol, temporal, frequency, entropy, and graph descriptors into compact, stable, non-redundant sets tailored to each stage, then exploits calibrated SVM margins to settle easy cases and reserves fuzzy reasoning for ambiguous, family-localized traffic. Empirically, the framework yields strong accuracy–latency–calibration trade-offs (Macro-F1 0.962, Macro-AUC 0.991, ECE 0.012, 6.3 ms per window, 7.8 MB), lifts difficult families (Data Exfiltration and Botnet/Malware) without harming Benign precision, and provides rule-level explanations linking payload entropy, MQTT topic depth, and DWT energy to subtype outcomes. The open-set guard and drift micro-updates further support practical deployment by filtering unknowns and maintaining calibration as traffic evolves. Overall, the evidence shows that stage-aware selection plus confidence-aware routing is a principled way to deliver accurate, interpretable, and efficient intrusion detection for modern IoT networks. The approach addresses critical yet often overlooked aspects of IoT intrusion detection, namely calibration reliability, interpretability of decision logic, and open-set recognition, thereby filling persistent gaps in the existing literature.
While CICIoT2023 provides a comprehensive foundation for evaluating IoT IDS models, a limitation of the present study is the absence of experiments on additional corpora. Future work will therefore extend the evaluation to TON_IoT, BoT-IoT, UNSW-NB15, and CICIDS2019 to rigorously assess cross-dataset robustness. The architecture is designed with such transferability in mind, through stability-driven feature selection, calibrated SVM gating, and open-set rejection, but empirical validation across multiple datasets is essential for establishing full external validity.

Future directions

1. Move from periodic to truly online adaptation: couple ADWIN with incremental RS2FS (warm-started elastic-net paths and streaming MI/mRMR) and lightweight ANFIS premise updates under bounded latency.
2. Broaden domain generalization by training on multi-source corpora (CICIoT2023, TON_IoT, BoT-IoT, UNSW-NB15) with risk extrapolation or invariant risk minimization, and report leave-one-domain-out results.
3. Explore privacy-preserving federation: share RS2FS masks and calibrated SVM parameters while keeping raw flows on-premises, and add secure aggregation for rule statistics.
4. Strengthen representation learning with self-supervised pretraining on packet-/flow-series (contrastive temporal objectives), then feed the compact codes into RS2FS to test whether selection remains stable and small.
5. Enhance interpretability and maintainability by (i) rule compression via submodular coverage to cap total rules per family, (ii) neuro-symbolic constraints that enforce protocol invariants in consequents, and (iii) automatic natural-language rationales tied to top-firing rules.
6. Extend open-set handling with energy-based scores and conformal prediction to provide per-sample coverage guarantees.
7. Evaluate operational robustness: adversarial stress (feature-space perturbations within protocol validity), class-prior-shift calibration, and energy profiling on edge-class hardware.
8. Release a reproducibility kit (fixed splits, seeds, precomputed features, and scripts) and a red-team benchmark of novel attack scripts to accelerate apples-to-apples comparisons and real-world adoption.
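As a sketch of the first direction, the drift trigger can be prototyped with a fixed-window mean-shift check on the error stream. This is a simplified stand-in for ADWIN (which adapts its window size and uses a Hoeffding-style bound rather than a fixed threshold), and the error stream below is synthetic:

```python
import numpy as np

def mean_shift_drift(errors, window=50, threshold=0.15):
    """Simplified stand-in for ADWIN: compare the mean of the newest window
    against the preceding window; flag drift when the gap exceeds a threshold.
    Real ADWIN adapts the window and uses a Hoeffding-style bound instead."""
    if len(errors) < 2 * window:
        return False
    recent = np.mean(errors[-window:])
    past = np.mean(errors[-2 * window:-window])
    return abs(recent - past) > threshold

# Synthetic per-window 0/1 misclassification stream with an abrupt shift at t = 200.
stream = np.array([0.0] * 200 + [1.0] * 60)
flags = [mean_shift_drift(stream[:t]) for t in range(1, len(stream) + 1)]
first_drift = flags.index(True)  # first window index where drift is flagged
```

In a deployed pipeline such a flag would gate the RS2FS warm-start and ANFIS premise micro-updates so that model adaptation only runs when the monitored error statistic actually shifts.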

Author contributions

Gomathi Sakthivel and Anitha Kumari Kumarasamy contributed equally to this research work.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. The work was self-supported as part of an independent initiative aimed at developing an accurate and fast IDS suitable for deployment in modern IoT environments.

Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Declarations

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Jony, A. I. & Arnob, A. K. B. A long short-term memory based approach for detecting cyber attacks in IoT using CIC-IoT2023 dataset. J. Edge Comput.3(1), 28–42 (2024). [Google Scholar]
  • 2.Kumar, A., Guleria, K., Chauhan, R., & Upadhyay, D. (2024, June). Enhancing security in CIC IoT networks through machine learning algorithms. In 2024 IEEE International Conference on Information Technology, Electronics and Intelligent Communication Systems (ICITEICS) (pp. 1–5). IEEE.
  • 3.Almousa, O. & Hamdallh, B. Enhancing IoT Security: A comparative analysis of machine learning and deep learning techniques for botnet detection. Eng. Technol. Appl. Sci. Res.15(4), 24498–24505 (2025). [Google Scholar]
  • 4.Phan, V. A., Jerabek, J., & Malina, L. (2024, July). Comparison of multiple feature selection techniques for machine learning-based detection of IoT attacks. In Proceedings of the 19th International Conference on Availability, Reliability and Security (pp. 1–10).
  • 5.Rawat, M., & Singal, G. (2025). Surveying Technology Fusion in IoT Networks for IDS: Exploring Datasets, Tools, Challenges, and Research Prospects. ACM Transactions on Intelligent Systems and Technology.
  • 6.Becerra-Suarez, F. L., Tuesta-Monteza, V. A., Mejia-Cabrera, H. I., & Arcila-Diaz, J. (2024, May). Performance evaluation of deep learning models for classifying cybersecurity attacks in IoT networks. In Informatics (Vol. 11, No. 2, p. 32). MDPI.
  • 7.Srinivasan, V., Raj, V. H., Thirumalraj, A. & Nagarathinam, K. Original research article detection of data imbalance in MANET network based on ADSY-AEAMBi-LSTM with DBO feature selection. J. Auton. Intell.7(4), 1094 (2024). [Google Scholar]
  • 8.Tadhani, J. R., & Vekariya, V. (2024, May). A survey of deep learning models, datasets, and applications for cyber attack detection. In AIP Conference Proceedings (Vol. 3107, No. 1, p. 050012). AIP Publishing LLC.
  • 9.Ishtaiwi, A., Al Maqousi, A., & Aldweesh, A. (2024, February). Securing emerging iot environments with super learner ensembles. In 2024 2nd International Conference on Cyber Resilience (ICCR) (pp. 1–7). IEEE.
  • 10.Al-Na’amneh, Q. et al. Enhancing IoT device security: CNN-SVM hybrid approach for real-time detection of DoS and DDoS attacks. J. Intell. Syst.33(1), 20230150 (2024). [Google Scholar]
  • 11.Ullah, F., Turab, A., Ullah, S., Cacciagrano, D. & Zhao, Y. Enhanced network intrusion detection system for internet of things security using multimodal big data representation with transfer learning and game theory. Sensors24(13), 4152 (2024). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Al-Mafrachi, B. (2025). Accurate and high security of IoT system using machine learning algorithms. Dijlah Journal of Engineering Sciences, 2(2).
  • 13.Yang, K., Wang, J. & Li, M. An improved intrusion detection method for IIoT using attention mechanisms, BiGRU, and Inception-CNN. Sci. Rep.14(1), 19339 (2024). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Abbas, S. et al. Evaluating deep learning variants for cyber-attacks detection and multi-class classification in IoT networks. PeerJ Comput. Sci.10, e1793 (2024). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Gadey, N., Pande, S. D., & Khamparia, A. (2024). Enhancing 5G and IoT network security: A multi-model deep learning approach for attack classification. In Networks attack detection on 5G networks using data mining techniques (pp. 1–23). CRC Press.
  • 16.Das, S., & Das, R. R. (2024, October). Smart Shield: Enhancing IoT Security Against DDoS Attacks using AI techniques. In 2024 IEEE 15th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON) (pp. 181–186). IEEE.
  • 17.Lee, Y. X., Shieh, C. S., Horng, M. F., Nguyen, T. L., Chao, Y. C., & Gupta, S. K. (2024, April). Identification of Multi-class Attacks in IoT with LSTM. In International Conference on Smart Vehicular Technology, Transportation, Communication and Applications (pp. 505–515). Singapore: Springer Nature Singapore.
  • 18.Yadav, P., Mishra, N., & Sharma, S. (2024, October). Depth analysis on recognition and prevention of ddos attacks via machine learning and deep learning strategies in sdn. In 2024 2nd International Conference on Self Sustainable Artificial Intelligence Systems (ICSSAS) (pp. 612–619). IEEE.
  • 19.Jakotiya, K., Shirsath, V., & Inamdar, S. (2025). Intrusion detection with machine learning: A two-step federated approach using the CIC IoT 2023 dataset. Computer Science, 26(2).
  • 20.Cheekati, S., Borra, C. R., Kumar, S., Rayala, R. V., Sangula, S. K., & Kulkarni, V. (2025, May). Intelligent Cybersecurity for IoT: A Hybrid QRIME-SDPN Approach for Network Attack Detection on CIC-IoT-2023. In 2025 13th International Conference on Smart Grid (icSmartGrid) (pp. 774–781). IEEE.
  • 21.Susilo, B., Muis, A. & Sari, R. F. Intelligent intrusion detection system against various attacks based on a hybrid deep learning algorithm. Sensors25(2), 580 (2025). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Mahdi, Z. S., Zaki, R. M. & Alzubaidi, L. A secure and adaptive framework for enhancing intrusion detection in IoT networks using incremental learning and blockchain. Secur. Privacy8(4), e70071 (2025). [Google Scholar]
  • 23.Lilhore, U. K. et al. SmartTrust: A hybrid deep learning framework for real-time threat detection in cloud environments using Zero-Trust Architecture. J. Cloud Comput.14(1), 35 (2025). [Google Scholar]
  • 24.Beshah, Y. K., Abebe, S. L., & Melaku, H. M. (2025). Multi-Stage Adversarial Defense for online DDoS attack detection system in IoT. IEEE Access.
  • 25.Fan, M., Zuo, J., Zhu, J., & Lu, Y. (2025). Explainable Anomaly-Based Intrusion Detection for Specialized IoT Environments Enabled by Rule Extraction from Autoencoder. IEEE Internet of Things Journal.
  • 26.Jiang, L., Zhuang, Y., Song, Y., Sun, Y., & Guo, C. (2025). FLEMING-MS: A Secure and Environmentally Sustainable Intrusion Detection System for AIoT-Driven Medical Supply Chains. IEEE Internet Things J.
  • 27.https://www.unb.ca/cic/datasets/iotdataset-2023.html
  • 28.Kumari, S. et al. An exploration of RSM, ANN, and ANFIS models for methylene blue dye adsorption using Oryza sativa straw biomass: A comparative approach. Sci. Rep.15(1), 2979 (2025). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Anusuya, V. S., Baswaraju, S., Thirumalraj, A., & Nedumaran, A. (2024). Securing the MANET by detecting the intrusions using CSO and XGBoost model. In Intelligent Systems and Industrial Internet of Things for Sustainable Development (pp. 219–234). Chapman and Hall/CRC.
  • 30.Gunapriya, B., Rajesh, T., Thirumalraj, A. & Manjunatha, B. LW-CNN-based extraction with optimized encoder-decoder model for detection of diabetic retinopathy. J. Auton. Intell. 7(3), 1095 (2023). [Google Scholar]
