2025 Aug 11;15:29358. doi: 10.1038/s41598-025-15225-z

Table 4.

Selection of baseline models for comparative analysis.

| Analysis dimension | Baseline model | Comparative justification | Key parameters of baseline model |
| --- | --- | --- | --- |
| Multi-dimensional attack detection performance comparison | Traditional static role-based access control (RBAC) model | Evaluates performance differences between static and dynamic strategies in detecting emerging attacks [19] | Role–permission matrix: 100 × 50; rule update cycle: fixed at 24 h |
| | Q-learning algorithm | Validates SAC's superiority over traditional RL in policy exploration and multi-objective trade-offs [20] | Learning rate: 0.01; discount factor: 0.99; state space dimension: node degree + link load (50 dimensions) |
| | ML-based detector (SVM, support vector machine) | Compares traditional feature engineering with GNN-based detection in terms of attack sensitivity under dynamic topologies [21] | Feature dimensions: 20 (packet rate, connection count, etc.); kernel function: RBF (radial basis function); penalty factor C = 10 |
| Dynamic resource allocation balance validation | Greedy algorithm | Highlights the contrast between global optimization and locally optimal strategies in resource balancing [22] | Link selection policy: lowest real-time load first; history state memory window: none |
| | Shortest path first | Validates the necessity of multi-objective optimization compared to a single objective (minimum hop count) [23] | Routing update interval: 5 s; link cost metric: hop count; maximum path count: 3 |
| Robustness evaluation under topology variations | Dynamic programming graph generation (DPGG) model | Compares centralized vs. federated architectures in terms of policy stability under dynamic topologies [24] | Network structure: actor (256–128–64), critic (256–128); experience replay buffer size: 1e5; batch size: 64 |
| Distributed architecture efficiency & scalability testing | Not applicable | N/A | N/A |
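To make the Q-learning baseline concrete, the following is a minimal sketch of one tabular update using the hyperparameters listed in the table (learning rate 0.01, discount factor 0.99). The 50-dimensional state (node degree + link load) is assumed here to be discretized into hashable state keys, and the state/action names are illustrative, not taken from the paper.

```python
# Hyperparameters of the Q-learning baseline from Table 4.
ALPHA = 0.01   # learning rate
GAMMA = 0.99   # discount factor

def q_update(q, state, action, reward, next_state, actions):
    """One tabular Q-learning step:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).

    `q` is a dict mapping (state, action) pairs to values; unseen pairs
    default to 0.0.
    """
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
    return q[(state, action)]
```

With an empty table, a first transition with reward 1.0 moves the estimate by `ALPHA * 1.0 = 0.01`, illustrating why this baseline explores more slowly than SAC's entropy-regularized policy updates.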
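The two resource-allocation baselines can likewise be sketched side by side: the greedy policy picks the link with the lowest real-time load (with no history window), while shortest path first ranks candidate paths purely by hop count and keeps at most three. The link/path dictionaries below are illustrative data structures assumed for this sketch, not from the paper.

```python
def greedy_pick(links):
    """Greedy baseline: choose the link with the lowest current load
    (local optimum only; no memory of past states)."""
    return min(links, key=lambda link: link["load"])

def spf_pick(paths, max_paths=3):
    """Shortest-path-first baseline: keep up to `max_paths` candidates
    ranked by the single objective of hop count."""
    return sorted(paths, key=lambda p: p["hops"])[:max_paths]
```

Neither policy considers load and hop count jointly, which is the gap the multi-objective comparison in the table is designed to expose.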