Abstract
Agentic Artificial Intelligence (AAI) refers to autonomous, adaptable, and goal-directed systems capable of proactive decision-making in dynamic environments. These agentic systems extend beyond reactive AI by leveraging cognitive architectures and reinforcement learning to enhance adaptability, resilience, and self-sufficiency in cybersecurity contexts. As cyber threats grow in sophistication and unpredictability, Agentic AI is rapidly becoming a foundational technology for intelligent cyber defense, enabling capabilities such as real-time anomaly detection, predictive threat response, and quantum-resilient protocols. This narrative review synthesizes literature from 2005 to 2025, integrating academic, industry, and policy sources across three thematic pillars: cognitive autonomy, ethical governance, and quantum-resilient defense. The review identifies key advancements in neuromorphic architectures, cross-jurisdictional governance models, and hybrid defense systems that adapt to evolving threat landscapes. It also exposes critical challenges, including dual-use risks, governance interoperability, and preparedness for post-quantum security. This work contributes a multi-dimensional conceptual framework linking governance mechanisms to operational practice, maps resilience strategies across conventional and quantum vectors, and outlines a forward-looking roadmap for secure, ethical, and adaptive deployment of Agentic AI in cybersecurity. The synthesis aims to support policymakers, developers, and security practitioners in navigating the accelerating convergence of autonomy, security, and AI ethics.
Keywords: Agentic Artificial Intelligence, Cybersecurity, Cognitive Autonomy, Ethical Governance, Quantum-Resilient Systems, AI Threat Mitigation, Autonomous Cyber Defense
1. Introduction
1.1 Background
Agentic Artificial Intelligence refers to autonomous systems capable of pursuing complex, goal-directed tasks with minimal human intervention, distinguished by adaptability, decision-making autonomy, and operational resilience in dynamic environments. 1 Unlike traditional AI models that operate reactively, agentic AI proactively plans, adapts, and executes workflows, aligning with theories of autonomy and adaptivity central to intelligent agent design. 2 In the context of cybersecurity, this capability is increasingly valuable due to the sophistication, speed, and unpredictability of emerging threats. The scarcity of skilled cybersecurity professionals, combined with the growing complexity of attack surfaces, creates a pressing need for autonomous cyber-defense agents capable of sensing, reasoning, acting, and learning without constant oversight. 3 These agents leverage autonomy to maintain operational continuity, even when communication with human operators is degraded or adversarial manipulation attempts are underway. 4 The principles underlying agentic AI in cyber defense intersect with resilience engineering and sociotechnical systems theory, emphasizing adaptability, fault tolerance, and human-machine collaboration in uncertain conditions. 5 From a sociotechnical perspective, designing AI that can augment human decision-making while preserving ethical governance aligns with broader goals of ensuring trust, accountability, and interoperability across systems. 6
1.2 Problem context
The cybersecurity threat landscape has undergone rapid transformation, driven by increasingly sophisticated attack vectors such as advanced persistent threats (APTs), polymorphic malware, and adversarial machine learning (AML) attacks. Conventional defense models, often reliant on static rules, perimeter-based protections, and human-centric incident response, are proving inadequate in addressing these evolving risks. 7 These legacy systems struggle with detection latency, limited adaptability, and the inability to effectively counter stealthy, long-term intrusions such as APTs. 8 Emerging cyber threats increasingly exploit gaps in static defense mechanisms, bypassing traditional intrusion detection and prevention systems through zero-day vulnerabilities and social engineering tactics. 9 Furthermore, the expanding digital attack surface amplified by cloud adoption, IoT proliferation, and distributed architectures has introduced complexity beyond the operational scope of conventional security strategies. 10 To address these limitations, the cybersecurity domain is increasingly exploring autonomous, adaptive defense paradigms that integrate AI for real-time threat detection, proactive risk mitigation, and resilience-building. 11 This paradigm shift highlights the urgent need for more dynamic and intelligent frameworks capable of evolving alongside the threats they are designed to counter.
1.3 Role of agentic AI
AAI introduces a paradigm shift in cybersecurity by enabling systems to operate with cognitive autonomy, adaptability, and proactive decision-making. Unlike traditional AI, which responds to predefined prompts or static rules, Agentic AI autonomously pursues complex goals, learns from dynamic environments, and adapts strategies in real-time. 1 This autonomy allows for advanced threat detection and response capabilities, especially in contexts where rapid, unsupervised decision-making is critical to system survival. In cybersecurity, the role of Agentic AI extends beyond detection to include autonomous mitigation, strategic defense orchestration, and quantum-resilient risk anticipation. Cognitive architectures integrated with reinforcement learning and neuromorphic frameworks enable these agents to emulate human-like reasoning while maintaining superior scalability and speed. 12
By leveraging quantum-enhanced learning, Agentic AI systems can improve decision accuracy and significantly reduce training time, thereby accelerating adaptation to novel threats. Agentic AI also introduces embedded governance potential, where ethical oversight mechanisms can be directly integrated into autonomous workflows to ensure compliance with fairness, transparency, and accountability principles. 13 Furthermore, autonomous cyber-defense agents (AICAs) demonstrate how distributed, goal-driven AI entities can defend compromised or isolated networks without human intervention, closing critical response gaps caused by communication delays or resource shortages. 3 By combining adaptability, proactive resilience, and embedded ethical governance, Agentic AI offers a robust framework for evolving beyond reactive cybersecurity models toward self-sustaining, anticipatory defense ecosystems.
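To make the reinforcement-learning dimension of such agents concrete, the following minimal sketch shows how a tabular Q-learning agent can acquire a simple containment policy. The states, actions, rewards, and environment dynamics below are entirely synthetic stand-ins chosen for illustration; they do not represent any deployed cyber-defense system.

```python
import random

# Minimal tabular Q-learning sketch of an autonomous defense agent.
# STATES, ACTIONS, and the reward structure are hypothetical.
STATES = ["normal", "suspicious", "compromised"]
ACTIONS = ["monitor", "isolate_host", "rotate_credentials"]

def step(state, action):
    """Toy environment: containing a compromised host is rewarded,
    disrupting normal traffic is penalized."""
    if state == "compromised" and action == "isolate_host":
        return "normal", 10.0
    if state == "suspicious" and action == "rotate_credentials":
        return "normal", 5.0
    if state == "normal" and action != "monitor":
        return "normal", -2.0          # unnecessary disruption
    # Otherwise the threat level drifts randomly; compromise is costly.
    nxt = random.choice(STATES)
    return nxt, -1.0 if nxt == "compromised" else 0.0

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2):
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = random.choice(STATES)
        for _ in range(10):  # short episode horizon
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: q[(s, x)]))
            s2, r = step(s, a)
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

random.seed(0)
q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

In this toy setting the learned policy tends to isolate compromised hosts and rotate credentials on suspicious activity; real agentic defense systems replace the toy environment with telemetry-driven state estimation, far richer action spaces, and the governance safeguards discussed below.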
1.4 Emerging risks
1.4.1 Technical risk vectors
The rapid integration of Agentic AI into cybersecurity introduces a complex range of technical vulnerabilities that exceed those addressed by traditional defense models. One of the most critical areas is adversarial AI, where attackers exploit weaknesses in learning models through data poisoning, evasion tactics, and generative deepfakes to mislead or disable autonomous agents. 14 These techniques compromise both the integrity and trustworthiness of agentic systems, especially those relying on real-time inference or adaptive learning. Moreover, as agentic architectures become more complex and data-driven, model inversion and extraction attacks represent serious threats to proprietary model assets and user privacy. The deployment of online learning or few-shot adaptation further increases exposure to manipulation, making it easier for adversaries to steer system behavior toward compromised or suboptimal states over time. 15
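A label-flip poisoning attack of the kind described above can be illustrated with a toy, self-contained example: a nearest-centroid "detector" trained on synthetic one-dimensional feature values (all numbers fabricated for demonstration), where the attacker relabels part of the malicious training data to drag the benign centroid toward the malicious cluster.

```python
import random

# Toy illustration of a label-flip (data poisoning) attack on a
# nearest-centroid anomaly detector. All feature values are synthetic.
random.seed(1)

def make_data(n):
    data = [(random.gauss(0, 1), "benign") for _ in range(n)]      # benign cluster
    data += [(random.gauss(5, 1), "malicious") for _ in range(n)]  # malicious cluster
    return data

def train_centroids(data):
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def accuracy(centroids, data):
    hits = sum(1 for x, y in data
               if min(centroids, key=lambda c: abs(x - centroids[c])) == y)
    return hits / len(data)

train_set, holdout = make_data(200), make_data(200)
c_clean = train_centroids(train_set)
base = accuracy(c_clean, holdout)

# Attacker flips 30% of malicious training labels to "benign",
# dragging the benign centroid toward the malicious cluster.
poisoned = [(x, "benign") if y == "malicious" and random.random() < 0.3 else (x, y)
            for x, y in train_set]
c_pois = train_centroids(poisoned)
print("benign centroid:", round(c_clean["benign"], 2), "->", round(c_pois["benign"], 2),
      "| clean acc:", round(base, 3),
      "| poisoned acc:", round(accuracy(c_pois, holdout), 3))
```

The shifted centroid narrows the decision margin around malicious traffic, the same mechanism by which poisoning erodes adaptive agentic detectors at much larger scale.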
Compounding these risks is the advent of quantum computing, which presents an existential threat to the cryptographic underpinnings of secure communication systems. Algorithms like Shor’s make traditional encryption methods such as RSA and ECC potentially obsolete, raising the real possibility of “harvest now, decrypt later” scenarios. 16 In this environment, autonomous AI agents handling secure credentials or key management functions could become high-value targets, necessitating quantum-resilient architectures and post-quantum cryptographic protocols. Together, these vectors demonstrate the urgent need for resilience-by-design practices in agentic systems. This includes adversarial robustness, model verifiability, runtime threat detection, and secure update mechanisms. The conceptual relationships among cognitive autonomy, governance requirements, and quantum resilience are visualized in Figure 1, which illustrates the conceptual intersections among the three foundational pillars of agentic AI in cybersecurity: Cognitive Autonomy, Ethical Governance, and Quantum Resilience. Each pillar contains distinct thematic elements, while the overlap zones represent shared challenges and synergistic opportunities such as trust calibration, secure autonomy, and dual-use governance strategies.
Figure 1. Conceptual map: Intersections among cognitive autonomy, ethical governance, and quantum resilience.

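The “harvest now, decrypt later” concern translates, in practice, into auditing cryptographic inventories against data lifetimes. The sketch below illustrates such a triage; the inventory records, system names, and the assumed timeline to a cryptanalytically relevant quantum computer are hypothetical, though the quantum-vulnerable and post-quantum algorithm names follow common usage (the latter from NIST FIPS 203/204/205).

```python
# Illustrative triage of a (hypothetical) cryptographic inventory for
# "harvest now, decrypt later" exposure. Not a real scanner: records,
# system names, and the quantum-arrival horizon are assumptions.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256", "DH-2048"}
NIST_PQC = {"ML-KEM-768", "ML-DSA-65", "SLH-DSA-128s"}  # FIPS 203/204/205 names

inventory = [
    {"system": "vpn-gateway",   "algorithm": "RSA-2048",   "data_lifetime_years": 15},
    {"system": "agent-msg-bus", "algorithm": "ECDH-P256",  "data_lifetime_years": 2},
    {"system": "audit-log",     "algorithm": "ML-KEM-768", "data_lifetime_years": 20},
]

def hndl_risk(entry, years_to_quantum=10):
    """Flag entries whose protected data outlives the (assumed) arrival
    of a cryptanalytically relevant quantum computer."""
    vulnerable = entry["algorithm"] in QUANTUM_VULNERABLE
    outlives = entry["data_lifetime_years"] > years_to_quantum
    return vulnerable and outlives

flagged = [e["system"] for e in inventory if hndl_risk(e)]
print(flagged)
```

Only the long-lived, classically encrypted asset is flagged: traffic captured today from such systems can be stored and decrypted once quantum capability matures, which is why migration priority tracks data lifetime rather than current breach risk.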
1.4.2 Governance and dual-use challenges
Beyond technical complexity, Agentic AI introduces major challenges in governance, ethical oversight, and strategic misuse, particularly in the context of dual-use applications. These systems, by their autonomy and learning capability, blur the boundary between defensive protection and offensive exploitation. The dual-use dilemma is a central concern: tools originally designed for cyber defense, such as autonomous intrusion detection or self-healing systems, can be repurposed for offensive purposes, such as autonomous probing, system infiltration, or self-replicating malware agents. 17 The accessibility of these capabilities, combined with the lack of attribution in cyber warfare, creates scenarios where unintended escalation or covert cyber operations may proliferate. At the same time, current governance frameworks often lack the transparency, accountability, and international harmonization required to manage such systems effectively. Many agentic systems operate as “black boxes” with limited explainability, complicating compliance with ethical principles like traceability, justice, and human oversight as outlined in regulatory frameworks like the EU AI Act or ISO/IEC AI standards. 18 The regulatory lag, the time gap between technological advancement and legal or ethical controls, further increases governance risks. In this void, actors can deploy powerful agentic AI without sufficient safeguards, especially across jurisdictions with uneven oversight capacity. The risk is magnified in contexts with geopolitical asymmetry, where state and non-state actors exploit legal grey zones to develop and deploy ethically ambiguous capabilities.
As the autonomy and sophistication of agentic systems grow, governance must evolve beyond static compliance to embrace dynamic, lifecycle-aware models of control. These should include ethical risk forecasting, operational transparency, and shared international norms. These governance and ethical challenges further reinforce the need for agentic AI frameworks that are not only secure and adaptable, but also normatively grounded. The breadth and severity of dual-use and governance risks are illustrated in Figure 2, which highlights the dual-use risk zones across the three thematic pillars of agentic AI in cybersecurity. It maps areas where technologies designed for defense or oversight can be repurposed maliciously, such as explainability for deception, autonomy for offensive decision-making, or quantum tools for surveillance evasion. This overlay reinforces the ethical imperative of governance-aware system design.
Figure 2. Dual-use risk overlay diagram-visual overlay showing where misuse potential exists across cognitive autonomy, ethical governance, and quantum resilience.

1.5 Knowledge gaps
Despite rapid advancements in agentic AI, significant knowledge gaps remain in governance, integration, and risk management. Research indicates that AI governance (AIG) is still in an emergent state, with a limited understanding of how to effectively implement ethical principles into operational practices. Critical deficiencies include a lack of contextual awareness, uncertainty regarding the effectiveness of regulations, and insufficient operationalization of governance processes. 19 The trustworthiness of AI systems is hindered by the absence of robust metrics for human-centric risks, such as bias, misinformation, and privacy erosion. Existing frameworks often focus on technical vulnerabilities while neglecting socio-psychological threats and interdisciplinary collaboration needs. 20 From a security perspective, knowledge gaps persist in addressing quantum-era threats and building adaptive, sector-specific cybersecurity strategies. Emerging studies recommend integrating quantum-ready encryption and adaptive risk models, but practical implementation remains limited. 21 Furthermore, there is an identified AI knowledge gap at the academic and industry level, where the pace of system development outstrips the number of empirical studies characterizing AI behavior. Bridging this gap requires cross-disciplinary collaboration and institutional incentives to promote systematic evaluation of AI systems. 22 In addition, sector-specific readiness varies widely: in healthcare, clinicians often lack adequate AI literacy, limiting their ability to evaluate, deploy, and monitor AI tools effectively. Structured educational frameworks and regulatory alignment are needed to ensure safe and ethical adoption. 23 Similarly, global and regional disparities in AI policy readiness hinder cohesive governance strategies, particularly in resource-limited regions. 24
Overall, addressing these knowledge gaps will require:
• Operationalizing AI governance frameworks with measurable outcomes.
• Developing socio-technical trustworthiness metrics.
• Preparing for quantum-era cybersecurity challenges.
• Expanding empirical research on AI behavior.
• Building sector-specific education and readiness programs.
1.6 Rationale for review type
The rapidly evolving nature of agentic AI in cybersecurity necessitates a synthesis approach that captures diverse perspectives, emerging trends, and multidisciplinary insights. A narrative review is particularly suited for this purpose, as it allows the integration of findings from heterogeneous sources, including both peer-reviewed and grey literature, thereby providing a richer and more contextual understanding than purely systematic approaches. Narrative reviews are valuable for capturing thematic breadth, exploring conceptual linkages, and accommodating evolving terminologies and methodologies that may not yet be standardized in empirical databases. 25 Additionally, narrative review methodologies are effective in mapping emerging domains where empirical evidence may be limited, fragmented, or in non-traditional formats. This flexibility is essential in fields like AI-enhanced cybersecurity, where insights from technical reports, policy papers, and case studies can be as informative as academic journal articles. 26 The narrative review format also enables the identification of thematic gaps and conceptual trends that can guide future research and policy development. 27 Moreover, in multidisciplinary contexts, such as the convergence of AI, cybersecurity, governance, and ethics, a narrative review can synthesize perspectives across domains without being constrained by rigid inclusion criteria that might exclude innovative or early-stage work. 28 This approach ensures that the review remains responsive to the dynamic and rapidly changing landscape of cyber threats and AI capabilities, supporting a holistic and forward-looking synthesis.
1.7 Review objective & scope
The objective of this review is to synthesize and critically analyze the evolution, governance, and resilience dimensions of AAI within cybersecurity, emphasizing cognitive autonomy, ethical governance, and quantum-resilient defense strategies. The review aims to bridge conceptual frameworks with practical applications, drawing from interdisciplinary insights across computer science, security studies, ethics, and emerging quantum technologies.
This study is guided by four central research questions:
RQ1: What are the prevailing design patterns and architectural principles in agentic AI for cybersecurity?
RQ2: How are ethical governance mechanisms implemented to align agentic AI with regulatory and compliance frameworks?
RQ3: What strategies are being developed for resilience against both conventional and quantum-era threats in agentic AI systems?
RQ4: What are the primary implementation barriers and enabling factors for deploying agentic AI in cybersecurity contexts?
These questions are anchored in three thematic pillars: (1) Cognitive Autonomy, (2) Ethical Governance, and (3) Quantum-Resilient Defense, which provide a structured lens for thematic synthesis. Prior research has highlighted the necessity of integrating autonomy with robust cognitive architectures, 29 addressing security vulnerabilities in autonomous agents, 4 and developing resilience frameworks that anticipate quantum-era disruptions. 30 The review’s scope spans literature from 2005 to 2025, integrating academic, industry, and policy perspectives. This time frame captures the formative years of agentic AI conceptualization, recent governance reforms, and the emergence of quantum-resilient strategies. By adopting a narrative review approach, the study allows for thematic breadth, inclusion of grey literature, and cross-domain insights essential for understanding the socio-technical implications of deploying agentic AI in complex cybersecurity ecosystems.
1.8 Terminology & scope clarifications
Agentic AI refers to artificial intelligence systems capable of autonomous decision-making, adaptability, and goal-directed reasoning, often integrating cognitive frameworks and reinforcement learning to operate with minimal human intervention in dynamic environments. 12 These systems extend beyond traditional automation by incorporating higher-order cognitive functions such as self-reflection, context awareness, and adaptive problem-solving.
Cognitive Autonomy denotes the capacity of an AI agent to independently process information, learn from diverse experiences, and generate novel solutions, often drawing on quantum-inspired or neuromorphic architectures to address uncertainty and incomplete knowledge. 31 This form of autonomy emphasizes not only decision accuracy but also the system’s ability to navigate ethical dilemmas and conflicting objectives. 32
Ethical Governance in Agentic AI involves the establishment of oversight frameworks, policies, and technical safeguards that ensure transparency, accountability, and alignment with human values. Models such as decentralized governance systems have been proposed to address the challenges of regulating highly autonomous AI agents, leveraging tools like blockchain, smart contracts, and verifiable identity protocols. 33
Quantum-Resilient Defense refers to cybersecurity strategies designed to withstand quantum-era threats, incorporating post-quantum cryptographic algorithms and AI-based resilience mechanisms. By embedding quantum-resistant features into autonomous agents, systems can maintain security integrity even in the face of future quantum computing capabilities. 34
For this review, the scope is limited to AI systems with demonstrable autonomy and decision-making capacity that directly influence cybersecurity operations. Out of scope are purely algorithmic tools lacking adaptive or cognitive features, as well as general discussions of AI ethics unrelated to security contexts. The thematic pillars guiding this review, Cognitive Autonomy, Ethical Governance, and Quantum-Resilient Defense, serve as the structural framework for synthesizing literature across academic, industrial, and policy domains.
2. Methodology (Narrative review approach)
2.1 Scope
This review adopts a narrative review methodology to provide a broad yet thematically focused synthesis of literature on Agentic AI in cybersecurity. Unlike systematic reviews that rely on strict inclusion and exclusion criteria, narrative reviews enable the integration of diverse evidence sources, including peer-reviewed research, grey literature, and conceptual papers, to capture the full spectrum of emerging developments and perspectives. 35 This approach is particularly valuable in rapidly evolving fields such as AI-driven cybersecurity, where technological innovations, threat landscapes, and governance frameworks shift quickly.
The scope of this review encompasses technical, governance, and socio-ethical dimensions of Agentic AI applications in cyber defense. Specifically, it covers:
• Technological foundations, including autonomy, adaptability, and cognitive reasoning architectures for cybersecurity defense agents. 36
• Operational applications such as automated threat detection, incident response, and quantum-resilient defense strategies.
• Governance and ethical considerations addressing transparency, bias mitigation, and accountability in autonomous systems. 37
• Cross-domain integration examining lessons from other sectors such as healthcare and finance, where AI adoption has generated both efficiency gains and equity concerns. 38
By framing the scope across technological, operational, and governance dimensions, this review aims to deliver a multidimensional perspective that is relevant to both academic inquiry and practical cybersecurity policy-making.
2.2 Search strategy
The search strategy for this narrative review was designed to ensure breadth, depth, and methodological rigor, while maintaining transparency in decision-making. Following recommendations for narrative review structuring, the process began by identifying core concepts: Agentic AI, cybersecurity, and emerging governance frameworks, and then expanding the search using Boolean operators and wildcard symbols to capture relevant variations in terminology. 39 Key academic databases, including IEEE Xplore, Scopus, Web of Science, and ACM Digital Library, were targeted to capture peer-reviewed literature, complemented by grey literature searches in organizational reports, policy briefs, and conference proceedings. Boolean combinations were crafted to merge technical (for instance, “machine learning,” “autonomous agents,” “threat detection”) and contextual terms (for instance, “cyber policy,” “governance frameworks,” “critical infrastructure”), a practice shown to enhance comprehensiveness in cybersecurity reviews. 40 The complete Boolean search strings and refinement process are available in the supplementary dataset. 41
The search process was iterative rather than linear, with periodic refinement of terms as familiarity with the literature increased, consistent with best practices for narrative synthesis. 42 In line with recent methodological advancements, generative AI tools were selectively used to identify thematic clusters and suggest additional terms, increasing the efficiency of retrieval without replacing researcher judgment. 43 To ensure relevance, retrieved documents were screened against predefined thematic pillars and research questions, and reference chaining was employed to locate key works cited by foundational studies. This combination of database querying, grey literature capture, and citation chasing is recognized as a robust approach for mapping complex, interdisciplinary domains like AI-enabled cybersecurity. 44
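The cluster-based Boolean strategy described above can be sketched programmatically: OR-joining terms within a concept cluster, AND-joining across clusters, with wildcards for term variants. The term lists below are illustrative examples, not the review's actual supplementary search strings.

```python
# Sketch of assembling Boolean search strings from concept clusters:
# OR within a cluster, AND across clusters, wildcards for variants.
# Term lists are examples, not the review's supplementary strings.
technical = ['"machine learning"', '"autonomous agent*"', '"threat detection"']
contextual = ['"cyber policy"', '"governance framework*"', '"critical infrastructure"']

def boolean_query(cluster_a, cluster_b, operator="AND"):
    """OR-join each concept cluster, then combine clusters with AND/OR/NOT."""
    return f"({' OR '.join(cluster_a)}) {operator} ({' OR '.join(cluster_b)})"

query = boolean_query(technical, contextual)
print(query)
```

Keeping clusters as data rather than hand-written strings makes the iterative refinement described above reproducible: adding a synonym to one list regenerates every database query consistently.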
2.3 Coverage period & source types
This review encompasses literature published from the early 2010s through 2025 to capture the rapid evolution of agentic AI, cybersecurity resilience, and quantum-readiness paradigms. The timeframe aligns with the emergence of AI-enabled cyber threat models, governance frameworks, and ethical discourses that have shifted significantly in the last decade. 45 Source types include peer-reviewed journal articles, conference proceedings, authoritative narrative reviews, systematic literature reviews, and selected grey literature from reputable institutional or policy sources to ensure both academic rigor and practical applicability. 46 By incorporating both traditional academic studies and non-traditional but credible reports, the review benefits from a broader evidence base that reflects ongoing developments in highly dynamic technological landscapes. 37 Given the complexity and cross-domain nature of agentic AI applications, the inclusion of diverse source types enables the integration of conceptual, empirical, and practice-oriented perspectives. 47 This approach ensures that the review remains comprehensive while reflecting the multidisciplinary realities of AI-driven cybersecurity systems.
2.4 Eligibility criteria
To ensure methodological rigor and relevance, this review adopted explicit inclusion and exclusion criteria grounded in established guidance for narrative and scoping reviews. 48
Inclusion criteria comprised:
• Peer-reviewed journal articles, authoritative narrative reviews, and high-quality grey literature published between 2010 and 2025.
• Studies addressing agentic AI in cybersecurity, quantum readiness, governance, or ethical frameworks.
• Empirical, conceptual, or policy-oriented works providing actionable insights for cross-domain resilience.
Exclusion criteria included:
• Studies not in English.
• Non-scholarly content lacking methodological transparency or relevance to the thematic scope.
• Redundant studies where newer or more comprehensive sources covered the same ground.
The adoption of clearly defined eligibility parameters improves transparency, reproducibility, and credibility, while reducing bias in literature selection. 49 Such rigor ensures that the review accurately reflects both the breadth and depth of the evidence base on emerging AI-driven cybersecurity challenges. 50
2.5 Review process flow
The review adopted a five-stage process to ensure a structured yet adaptable narrative synthesis. Identify: Relevant literature was located through systematic database searches (IEEE Xplore, ACM Digital Library, Scopus, Web of Science) and targeted grey literature repositories. Boolean keyword clustering was employed to maximize thematic coverage. 26 Screen: Initial screening involved title and abstract review to remove irrelevant materials, with emphasis on aligning studies to the thematic pillars: cognitive autonomy, ethical governance, and quantum-resilient defense. 51 Select: Full-text reviews were conducted for eligible papers, guided by predefined inclusion/exclusion criteria to maintain consistency and methodological transparency. 48 Classify: Selected works were coded according to their thematic relevance (RQ1-RQ4), type (conceptual, empirical, policy), and domain focus, enabling cross-pillar linkages and identification of evidence clusters. 52 Synthesize: Findings were integrated into a narrative format that preserved contextual richness while drawing comparative insights across studies, thereby supporting conceptual model development and identification of knowledge gaps. 43 This sequential flow allowed the review to maintain methodological rigor while remaining responsive to emergent concepts and evolving research directions in agentic AI cybersecurity. Figure 3 depicts these five sequential stages (Identify, Screen, Select, Classify, and Synthesize), emphasizing the thematic sorting, expert validation, and cross-domain synthesis unique to this review’s integrative approach.
Figure 3. Narrative review process flow.

2.6 Thematic synthesis
The thematic synthesis process organized findings across the four research questions (RQ1-RQ4), allowing for a structured integration of cognitive autonomy, ethical governance, and quantum-resilient defense perspectives. This approach drew on principles of thematic analysis while incorporating AI-assisted literature mapping to identify patterns and conceptual linkages within and across thematic pillars. 52 For RQ1 (design patterns in Agentic AI cybersecurity), extracted data were coded into recurring categories such as hybrid human-agent architectures, neuromorphic cognitive frameworks, and reinforcement learning variants, with emphasis on domain-specific applications. 12 For RQ2 (ethical governance mechanisms), synthesis mapped governance approaches from global frameworks (e.g., NIST AI RMF, ISO/IEC standards) against observed implementation practices, revealing alignment gaps and variations in accountability structures. 21 For RQ3 (threat resilience strategies), themes encompassed both cryptographic and non-cryptographic methods, including quantum-resistant protocols, AI model integrity safeguards, and resilience-by-design engineering principles. 53 For RQ4 (implementation barriers and enablers), synthesized evidence highlighted the influence of cost constraints, skills shortages, and policy fragmentation as major inhibitors, while collaborative ecosystems and sector-specific funding emerged as enabling factors. 54 Cross-pillar synthesis revealed strong interdependencies: advancements in cognitive autonomy often necessitated governance innovation, and quantum-resilient security strategies were most effective when embedded within ethically aligned agentic frameworks.
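The coding step of this synthesis can be sketched as a simple keyword-to-pillar mapping. The keyword lists and sample records below are illustrative only; the actual coding combined manual judgment with AI-assisted mapping as described above.

```python
# Minimal sketch of keyword-based thematic coding across the review's
# three pillars. Keyword lists and sample records are illustrative.
PILLAR_KEYWORDS = {
    "cognitive_autonomy": ["reinforcement learning", "neuromorphic", "cognitive architecture"],
    "ethical_governance": ["accountability", "ai rmf", "iso/iec", "transparency"],
    "quantum_resilience": ["post-quantum", "quantum-resistant", "cryptograph"],
}

def code_record(abstract):
    """Assign a record to every pillar whose keywords appear in its abstract."""
    text = abstract.lower()
    return sorted(pillar for pillar, kws in PILLAR_KEYWORDS.items()
                  if any(k in text for k in kws))

records = [
    "A neuromorphic cognitive architecture for autonomous cyber defense agents.",
    "Mapping the NIST AI RMF to accountability gaps in post-quantum migration.",
]
coded = [code_record(r) for r in records]
print(coded)
```

Records matching multiple pillars (like the second example) are exactly the ones that surface the cross-pillar interdependencies noted above.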
2.7 Methodological limitations
While the review’s methodology aimed for comprehensiveness, several inherent limitations must be acknowledged. First, AI-focused cybersecurity literature often exhibits rapid obsolescence due to evolving threat landscapes and technological advancements, meaning findings may lose relevance quickly. 55 Second, AI-based threat detection studies frequently suffer from data bias and limited generalizability, as training datasets may not fully represent emerging or rare attack vectors, impacting external validity. 56 Third, reliance on simulation and controlled environments in many evaluations constrains ecological validity, as real-world deployment introduces unpredictable system, human, and adversarial factors. 57 Fourth, adversarial vulnerabilities in AI models remain underreported in empirical literature, creating a blind spot in systematic reviews of security performance. 58 Finally, language and indexing bias may have excluded relevant non-English studies or grey literature, potentially narrowing the thematic breadth of the synthesis. 59 These limitations highlight the need for ongoing updates, diversified data sources, and inclusion of real-world case validations to strengthen future reviews.
3. State of the art
3.1 Evolution of agentic AI in cybersecurity
The progression of Agentic Artificial Intelligence (AAI) in cybersecurity reflects a broader shift in AI from reactive, rule-based systems toward autonomous, goal-driven agents capable of adaptive decision-making. Early AI systems in cybersecurity focused on static pattern matching and predefined rule sets, which were effective for known threats but struggled against novel attack vectors. The rise of Generative AI marked a turning point by enabling large-scale language and vision models that could reason and adapt dynamically, laying the groundwork for more autonomous, agentic capabilities. 60 The adoption of Agentic AI has been propelled by its ability to integrate cognitive autonomy, persistent memory, and multi-agent collaboration features that allow for continuous monitoring, adaptive learning, and proactive threat mitigation in complex digital environments. 61 This evolution has also been shaped by the incorporation of cognitive skills modules tailored to specific domains, improving decision-making precision and scalability. 62 In cybersecurity specifically, Agentic AI has transformed defensive strategies by reimagining frameworks like the cyber kill chain, integrating real-time threat intelligence, and embedding ethical governance into automated responses. 63 These advances address the limitations of conventional defense by enabling autonomous orchestration of countermeasures against Advanced Persistent Threats (APTs) and other sophisticated attack forms. 64
The historical trajectory of Agentic AI in cybersecurity can be viewed in distinct phases:
• Rule-Based Automation (Pre-2010s): Reliance on static rules and expert systems.
• Machine Learning Integration (2010-2020): Incorporation of supervised and unsupervised learning for anomaly detection.
• Generative & Hybrid AI (2020-2023): Rise of LLMs and multi-modal AI as adaptive reasoning engines.
• Agentic AI Era (2023-Present): Fully autonomous, collaborative agents with embedded governance and quantum-resilient readiness. 65
This evolution has redefined the cybersecurity domain by shifting the paradigm from reactive defense to proactive, ethically governed autonomy capable of anticipating and countering emerging threats before they escalate.
3.2 Current landscape
The current landscape of Agentic AI in cybersecurity reflects rapid technological convergence, diverse sectoral applications, and growing industry adoption. Key technologies underpinning this domain include autonomous decision-making architectures, multi-agent coordination systems, and AI-enhanced threat intelligence platforms. Recent developments demonstrate how Agentic AI is increasingly integrated with complementary technologies such as blockchain to enhance trust, transparency, and resilience in cyber defense systems, leading to significant improvements in anomaly detection accuracy and incident response efficiency. 66 Industry uptake spans multiple sectors. Financial institutions leverage Agentic AI for fraud detection and high-frequency threat analysis, while healthcare organizations use it to protect sensitive patient data and enforce fine-grained access control policies. 66 Governmental agencies have also begun integrating agentic and frontier AI capabilities into critical infrastructure protection, particularly in enhancing real-time monitoring and automated incident response. 63 The technology ecosystem is evolving toward proactive and perpetual learning systems capable of adapting to new threat vectors without human intervention. This shift is accompanied by an increase in deployment scale, ranging from organizational-level solutions to national cyber defense initiatives. 67 However, while adoption rates are accelerating, challenges persist, particularly regarding ethical governance, interoperability across sectors, and the quantum readiness of deployed systems. To provide an overview of the academic and grey literature informing this review, Table 1 summarizes key studies across relevant domains. Each entry includes author(s), year, national/regional context, domain of application, and the study’s core contribution to the agentic AI and cybersecurity landscape. 
This table offers a snapshot of the diversity and interdisciplinary nature of current research, forming the empirical backbone for the thematic synthesis presented in later sections.
Table 1. Summary of reviewed studies (authors, year, country, domain, contribution).
| Author(s) | Year | Country | Domain | Key contribution |
|---|---|---|---|---|
| Ratnawita | 2025 | USA | Adversarial AI | Demonstrated data poisoning threats in autonomous systems |
| Chimamiwa | 2024 | South Africa | Autonomous Learning | Analyzed few-shot learning risks in cybersecurity AI |
| Elmisery et al. | 2025 | Qatar | Quantum Security | Forecasted quantum threats to agentic AI systems |
| Dubey et al. | 2025 | India | Dual-Use Ethics | Explored dual-use risks of defensive agentic AI |
| Lekota | 2024 | Nigeria | Governance & Regulation | Mapped oversight gaps in agentic AI governance |
3.3 Publication trends
Bibliometric analyses reveal a sustained upward trajectory in scholarly output on agent-based and agentic AI applications in cybersecurity over the past decade. For instance, global publications on agent-based cybersecurity systems have steadily risen since 2013, peaking in 2023 with more than 1,200 articles and over 30,000 citations, reflecting both growing research interest and the field’s expanding influence. 68 Similarly, research at the broader AI-cybersecurity nexus has shown remarkable acceleration since 2015, driven by the emergence of machine learning-based intrusion detection, adversarial attack studies, and IoT security solutions. 69 Trends have been fueled by factors such as increasing cyber threat sophistication, the proliferation of connected devices, and strategic investment in AI-enhanced security frameworks. The surge also aligns with thematic shifts toward integrating blockchain, quantum computing, and explainable AI in security architectures, reflecting an evolution from reactive measures to proactive, adaptive, and autonomous defense strategies. 70 Overall, the publication patterns demonstrate both quantitative growth and thematic diversification, with research increasingly intersecting with governance, ethics, and cross-domain resilience. Figure 4 visualizes the growth of literature related to Agentic AI in cybersecurity across academic, industrial, and policy domains from 2005 to 2025. This temporal trend highlights a sharp increase in activity post-2015, coinciding with the rise of autonomous AI systems and growing quantum-era security concerns.
Figure 4. Annual publication trends.
3.4 Geographical distribution - Mapping global research activity and drivers
Research into agentic AI in cybersecurity shows a globally diverse landscape, with notable activity in the United States, India, the United Kingdom, and China. These regions lead in publications and innovation, driven by national security priorities, technology sector investment, and advanced academic research ecosystems. 71 Strategic adoption of AI in cybersecurity is further evident in regions investing in national resilience, such as the integration of AI-enhanced infrastructure protection in smart grids, transportation, and crisis management systems. Countries with mature digital economies, including the US, Singapore, and parts of the EU, are increasingly embedding AI into multi-sectoral cybersecurity strategies. 72 The geographical spread also reflects emerging markets’ growing role in AI-driven cybersecurity research. Nations such as the UAE are focusing on adoption readiness, with socio-cultural and workforce factors shaping implementation strategies. 73 Finally, global distribution trends indicate increasing collaboration between AI hubs and regions with sector-specific vulnerabilities such as healthcare, logistics, and energy through cross-border research initiatives and targeted deployments. 74
While research activity is concentrated in regions such as North America, Western Europe, and parts of East Asia, the geographical distribution also reveals notable gaps. Africa, Latin America, and parts of Southeast Asia are significantly underrepresented in the reviewed literature. This disparity suggests the need for more inclusive global participation in developing and governing Agentic AI systems for cybersecurity, especially in regions with rising digital infrastructure but limited AI policy maturity. Figure 5 presents a comparative analysis of research activity across global regions, highlighting regional leadership in agentic AI for cybersecurity. The United States, the EU, and China dominate academic and policy-related outputs, with emerging contributions from the Middle East and Africa.
Figure 5. Geographical distribution of research.
3.5 Technology taxonomy - Categorization by autonomy level, domain, governance type
The technology taxonomy for agentic AI in cybersecurity can be structured along three key dimensions: autonomy level, application domain, and governance type.
Autonomy Level: Agentic AI systems range from semi-autonomous assistants that require human-in-the-loop oversight to fully autonomous agents capable of adaptive decision-making in dynamic cyber environments. Händler 75 proposes a multi-dimensional taxonomy that evaluates autonomy across aspects such as task management, agent collaboration, and context interaction, emphasizing the balance between independence and alignment. Cihon et al. 76 further refine autonomy assessment through a code-based inspection method that measures agent independence and required oversight.
Application Domain: Agentic AI applications in cybersecurity span critical infrastructure defense, malware detection, threat intelligence, blockchain security, and autonomous incident response. Karim et al. 77 highlight blockchain-integrated multi-agent systems for secure and scalable collaboration in decentralized environments, demonstrating interoperability across sectors such as finance, Web3, and autonomous systems.
Governance Type: Governance structures for agentic AI can be centralized, decentralized, or hybrid. Frenette 78 introduces Decentralized AI Governance Networks (DAGN) with tokenized power control to enforce human-centric policies, particularly relevant for sensitive domains like cybersecurity. Similarly, the LOKA Protocol provides a governance-oriented architecture with decentralized identity and ethical consensus mechanisms to ensure trustworthy multi-agent operations. 79
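To make the three dimensions concrete, the taxonomy can be expressed as a small data model. The sketch below is illustrative only: the enum values, the `AgentProfile` class, and the oversight rule are simplifications for exposition, not definitions drawn from the cited taxonomies of Händler, Cihon et al., or the DAGN/LOKA frameworks.

```python
from dataclasses import dataclass
from enum import Enum

class AutonomyLevel(Enum):
    SEMI_AUTONOMOUS = 1   # human-in-the-loop oversight required
    SUPERVISED = 2        # human-on-the-loop; escalates under uncertainty
    FULLY_AUTONOMOUS = 3  # adaptive decision-making without oversight

class Domain(Enum):
    INFRASTRUCTURE_DEFENSE = "critical infrastructure defense"
    MALWARE_DETECTION = "malware detection"
    THREAT_INTELLIGENCE = "threat intelligence"
    INCIDENT_RESPONSE = "autonomous incident response"

class Governance(Enum):
    CENTRALIZED = "centralized"
    DECENTRALIZED = "decentralized"
    HYBRID = "hybrid"

@dataclass(frozen=True)
class AgentProfile:
    """One point in the three-dimensional taxonomy."""
    name: str
    autonomy: AutonomyLevel
    domain: Domain
    governance: Governance

    def requires_human_oversight(self) -> bool:
        # In this simplified model, only fully autonomous agents
        # may act without a human checkpoint.
        return self.autonomy is not AutonomyLevel.FULLY_AUTONOMOUS

# A hypothetical SOC triage assistant classified along all three axes.
triage_assistant = AgentProfile(
    "soc-triage", AutonomyLevel.SEMI_AUTONOMOUS,
    Domain.THREAT_INTELLIGENCE, Governance.CENTRALIZED)
```

Classifying each deployed agent along all three axes in this way makes trade-off discussions (e.g., more autonomy versus stricter governance) explicit and machine-checkable.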
This taxonomy framework supports the systematic classification of agentic AI systems, enabling researchers and practitioners to evaluate trade-offs between operational autonomy, security domain applicability, and governance robustness. Figure 6 presents a structured taxonomy of agentic AI technologies in cybersecurity, categorizing them by autonomy levels, operational domains, and governance models. This visual classification highlights the diversity of system configurations and control mechanisms in real-world and experimental deployments.
Figure 6. Technology taxonomy.
4. Findings
4.1 Overview of literature trends
The literature on agentic AI in cybersecurity demonstrates a steady trajectory toward maturity, reflected in publication growth, thematic consolidation, and expanding international contributions. A recent bibliometric analysis of agent-based systems in cybersecurity revealed a consistent upward trend since 2013, with a marked acceleration after 2018. The field reached a peak in 2023 with over 1,200 publications and more than 30,000 citations, indicating both academic interest and practical relevance. 68 Geographically, China emerged as the largest contributor, followed closely by the United States and India, reflecting both state-led research investment and private sector innovation. The thematic landscape is dominated by intrusion detection, malware classification, blockchain security, and emerging quantum-resilient methods. 71 Moreover, the convergence of agentic AI with adjacent domains such as blockchain and federated learning illustrates the transition from experimental prototypes to scalable, operational systems. This shift parallels a move toward multi-stakeholder collaborations and increased cross-border research networks, which are characteristic indicators of a maturing research field. 69 These patterns suggest that agentic AI in cybersecurity is entering an applied innovation phase, where theoretical development is increasingly complemented by field deployment and governance considerations. Figure 7 synthesizes reviewed studies across the three thematic pillars of this review: Cognitive Autonomy, Ethical Governance, and Quantum Resilience, using a cross-pillar evidence matrix. The heatmap highlights topic-pillar intersections and identifies underexplored thematic areas in agentic AI research.
Figure 7. Cross-pillar evidence matrix - mapping reviewed works against thematic pillars.

RQ1: Common Design Patterns in Agentic AI Cybersecurity
Agentic AI in cybersecurity often leverages hybrid architectures that integrate symbolic reasoning with data-driven methods to improve adaptability and interpretability. For example, hybrid AI cyber defense agents have been developed combining deep reinforcement learning (DRL), large language models (LLMs), and rule-based reasoning for real-time threat detection, network monitoring, and human-agent collaboration. 80 Architectural design patterns for agentic AI frequently follow modular frameworks where components such as perception, reasoning, and action layers are clearly separated and can be combined dynamically. Taxonomies like the boxology approach extend to hybrid actor systems, enabling distributed reasoning and coordinated multi-agent operations. 81–83 Explainability has emerged as a critical design pillar in these systems, with patterns such as TriQPAN (Trigger, Query, Process, Action, Notify) embedding explainability into the agent’s decision loop, thereby enhancing trust in autonomous operations. 84 Hybrid human-agent teaming patterns, such as delegation and associative teaming, structure interactions to optimize cognitive load distribution between human operators and AI agents, especially in high-stakes cybersecurity operations. 85 Table 2 outlines recurring design patterns identified in the literature for agentic AI systems used in cybersecurity. These patterns include architectural models, decision-making approaches, and integration strategies with human or hybrid systems. Understanding these design archetypes provides a foundation for assessing both operational strengths and vulnerabilities in emerging agentic defense frameworks.
Table 2. Common design patterns with examples.
| Design pattern | Description | Example application | Autonomy level |
|---|---|---|---|
| Reactive Agents | Rule-based responses to predefined threats | Signature-based malware detection | Low |
| Proactive Goal-Seeking | Agents that plan actions based on objectives and environmental scanning | Adaptive firewall reconfiguration | Medium |
| Learning-Based Agents | Agents trained via ML to detect evolving threats | Anomaly detection using reinforcement learning | High |
| Human-in-the-Loop | The agent acts autonomously but escalates to a human under uncertainty | Threat triage assistants in SOCs | Medium |
| Federated Agent Networks | Distributed agents learning from local data without central sharing | Federated malware classifiers across orgs | Medium-High |
| Reflexive Cognitive Loops | Agents with internal reasoning about confidence, ethics, or system feedback | Self-regulating AI-based honeypots | High |
Case study - Autonomous threat detection system:
A deployed system integrating DRL-driven agents with LLM-based analyst interfaces demonstrated robust performance in defending critical networks against simulated red-team attacks. The system dynamically selected between monitoring, deception, and remediation strategies, outperforming baseline static defenses in adversarial testing environments. 80
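The TriQPAN pattern (Trigger, Query, Process, Action, Notify) and the human-in-the-loop escalation pattern from Table 2 can be combined in a short decision-loop sketch. Everything below is a hypothetical placeholder: the anomaly scores, the enrichment rule, and the thresholds are invented for illustration and are not drawn from any cited system.

```python
from dataclasses import dataclass

ESCALATION_THRESHOLD = 0.6  # below this confidence, defer to a human

@dataclass
class Alert:
    source_ip: str
    anomaly_score: float  # 0.0 (benign) .. 1.0 (certainly malicious)

def query_context(alert: Alert) -> dict:
    # Query step: enrich the trigger with context (stubbed lookup
    # against a hypothetical threat-intelligence feed).
    return {"known_bad": alert.source_ip.startswith("203.0.113.")}

def process(alert: Alert, context: dict) -> tuple[str, float]:
    # Process step: combine signal and context into a decision with
    # an explicit confidence, keeping the choice auditable.
    confidence = alert.anomaly_score
    if context["known_bad"]:
        confidence = min(1.0, confidence + 0.3)
    action = "block" if confidence >= 0.8 else "monitor"
    return action, confidence

def triqpan_step(alert: Alert) -> str:
    # Trigger -> Query -> Process -> Action/Notify
    context = query_context(alert)
    action, confidence = process(alert, context)
    if confidence < ESCALATION_THRESHOLD:
        return "escalate-to-human"   # Notify: hand off under uncertainty
    return action                    # Action: act autonomously

print(triqpan_step(Alert("203.0.113.7", 0.55)))   # enriched to 0.85 -> block
print(triqpan_step(Alert("198.51.100.4", 0.40)))  # low confidence -> escalate
```

The explicit confidence value in the loop is what makes the escalation pattern auditable: every autonomous action carries a recorded justification, and every uncertain case produces a human-reviewable hand-off rather than a silent default.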
RQ2: Ethical Governance Mechanisms and Compliance Alignment
The governance of Agentic AI in cybersecurity requires integrating ethical oversight, regulatory compliance, and operational best practices into system design and deployment. Frameworks such as the NIST AI Risk Management Framework (AI RMF), ISO/IEC AI standards, and the EU AI Act have emerged as central pillars for ensuring trustworthiness, safety, and accountability in AI-driven defense systems. These frameworks aim to harmonize technical controls with societal expectations by embedding fairness, transparency, and risk-based assessment throughout the AI lifecycle. ISO/IEC Standards, notably ISO/IEC 42001:2023, provide a certifiable management system for AI governance, outlining requirements for risk assessment, accountability structures, and continuous auditing processes. This standard is particularly relevant for cybersecurity applications, as it introduces operational guidance for AI risk mitigation and conformity assessment, helping organizations align with international compliance requirements. 86 The EU AI Act adopts a risk-based approach to AI regulation, classifying systems into risk categories and enforcing stricter governance for high-risk applications such as autonomous cybersecurity agents. Its provisions mandate transparency, human oversight, and conformity with fundamental rights, while also fostering cross-border regulatory harmonization. 87 Complementing this, the EU’s Ethics Guidelines for Trustworthy AI advocate seven key requirements, including technical robustness, accountability, and privacy protection, offering a non-binding yet influential blueprint for ethical AI deployment. 88 In the U.S., the NIST AI RMF emphasizes risk identification, mitigation strategies, and measurement of trustworthiness metrics. When combined with ISO/IEC standards, it enables interoperability between domestic and international governance regimes, reducing compliance fragmentation and ensuring AI systems meet both security and ethical imperatives. 89 Comparative analyses show that while ISO standards excel in operational consistency, they often lack enforcement mechanisms, making legally binding instruments like the EU AI Act critical for ensuring adherence in sensitive cybersecurity contexts. 90 Moving forward, integrating human rights-based governance approaches could enhance the ethical legitimacy of agentic AI, particularly in scenarios involving autonomous decision-making in cyber defense. 91 Table 3 compares leading governance frameworks relevant to agentic AI systems in cybersecurity. The table highlights each framework’s core principles, scope of applicability, and alignment with ethical requirements such as transparency, accountability, and human oversight. This synthesis offers a practical reference point for evaluating governance integration within agentic AI deployments.
Table 3. Governance approaches and compliance alignment.
| Governance framework | Core focus areas | Scope of applicability | Ethical alignment areas |
|---|---|---|---|
| NIST AI RMF (USA) | Risk management, trustworthiness | Federal and private sector (USA) | Fairness, explainability, accountability |
| EU AI Act (EU) | Risk-based AI classification | High-risk systems in EU jurisdictions | Human oversight, safety, transparency |
| ISO/IEC 42001 | AI management systems and lifecycle | Global, industry-agnostic | Governance, responsibility, documentation |
| IEEE 7000 Series | Design ethics and value alignment | System design and implementation | Value alignment, stakeholder engagement |
| OECD AI Principles | Global normative guidance | Multilateral, public-private sectors | Robustness, democratic values, and human rights |
| UNESCO AI Ethics | Cross-cultural ethical foundations | Global education, human development | Sustainability, cultural diversity, and inclusion |
Figure 8 maps the alignment between leading AI governance frameworks and foundational ethical principles, offering a comparative visual of strengths, overlaps, and potential gaps. By illustrating how ethical concerns like transparency, accountability, and fairness are addressed (or omitted) across frameworks such as the EU AI Act, NIST AI RMF, and ISO/IEC standards, the figure clarifies areas of convergence and fragmentation within agentic AI governance.
Figure 8. Governance - ethical principle mapping.
RQ3: Threat Resilience Strategies (Conventional & Quantum)
Cybersecurity resilience strategies must now address both conventional threats and the disruptive potential of quantum computing. Traditional approaches such as layered defense architectures, intrusion detection, and zero-trust models remain crucial but increasingly integrate AI-driven predictive analytics for proactive risk mitigation. 92 Quantum resilience demands post-quantum cryptography (PQC) to safeguard against algorithms like Shor’s, which threaten RSA and ECC encryption. Lattice-based, hash-based, and multivariate schemes are emerging as leading candidates, often paired with AI for adaptive key management. 93 Hybrid frameworks that combine traditional and quantum-resistant algorithms ensure a gradual migration path while maintaining operational continuity. 94 Beyond cryptography, quantum-safe strategies focus on AI model integrity, addressing risks such as adversarial inputs and data poisoning through deep-learning-based anomaly detection and graph neural networks. 95 Quantum-secure threat intelligence platforms that leverage generative AI for predictive modeling and integrate PQC provide long-term resilience against evolving threats. 96 Table 4 presents a comparative overview of threat resilience strategies employed in agentic AI systems, differentiating between conventional cybersecurity defenses and quantum-resilient approaches. By mapping specific techniques, use cases, and implementation readiness levels, this table provides a structured perspective on how agentic systems are evolving to address both present and emerging threat vectors, particularly in the context of post-quantum risk environments.
Table 4. Comparative overview of threat resilience strategies.
| Strategy type | Technique/Approach | Example use case | Implementation readiness |
|---|---|---|---|
| Conventional Defense | Adversarial training | Robust anomaly detection | Mature |
| Conventional Defense | Multi-agent redundancy | Fail-safe decision-making agents | Intermediate |
| Conventional Defense | Secure model update pipelines | Tamper-resistant agent learning | Intermediate |
| Quantum-Resilient | Post-quantum cryptography (PQC) | Agent communication using lattice cryptography | Emerging |
| Quantum-Resilient | Quantum key distribution (QKD) | Secure multi-agent key exchange | Experimental |
| Quantum-Resilient | Quantum-safe federated learning | Distributed anomaly detection with PQC | Emerging |
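The hybrid migration path described above can be illustrated with a minimal key-derivation sketch: a session key is derived from both a classical shared secret and a post-quantum shared secret, so confidentiality survives if either primitive is later broken. The HKDF extract-and-expand steps below follow RFC 5869 using only the standard library, but the secrets, labels, and protocol framing are hypothetical placeholders; a real deployment would source the inputs from an ECDH exchange and a standardized PQC KEM such as ML-KEM (Kyber).

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # RFC 5869 extract step: concentrate input keying material
    # into a fixed-length pseudorandom key.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # RFC 5869 expand step: stretch the pseudorandom key into
    # context-bound output keying material.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_session_key(classical_ss: bytes, pq_ss: bytes) -> bytes:
    # Concatenate both shared secrets before extraction, the general
    # approach taken by hybrid key-exchange drafts: an attacker must
    # recover BOTH secrets to reconstruct the session key.
    prk = hkdf_extract(salt=b"hybrid-kex-v1", ikm=classical_ss + pq_ss)
    return hkdf_expand(prk, info=b"agent-channel", length=32)

# Placeholder secrets (illustrative values only).
key = hybrid_session_key(b"\x01" * 32, b"\x02" * 32)
assert len(key) == 32
```

Because the derivation binds both inputs, organizations can phase in PQC KEMs alongside existing classical exchanges without a flag-day cutover, which is exactly the gradual migration property the hybrid frameworks cited above aim for.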
Figure 9 contrasts conventional and quantum-era threat mitigation strategies across core cybersecurity domains. The comparative bar chart illustrates how mitigation approaches evolve in response to emerging quantum threats, highlighting both continuity and strategic shifts in defense postures for agentic AI systems.
Figure 9. Side-by-side mapping of conventional vs quantum readiness.
Case study: In a recent quantum-safe AI pilot, a hybrid architecture was deployed in a multi-cloud environment, integrating Kyber and McEliece PQC algorithms with real-time AI-driven key management. The system demonstrated sub-4ms encryption latency while achieving zero vulnerability to Shor’s algorithm and minimal susceptibility to Grover’s. 97 Combining conventional defenses with AI-enhanced quantum-safe mechanisms, especially hybrid cryptographic models and predictive threat intelligence, offers the most effective path to achieving future-proof resilience against both current and quantum-enabled cyber threats.
RQ4: Implementation Barriers and Enablers
The deployment of agentic AI in cybersecurity is shaped by a complex interplay of barriers and enablers that determine scalability, trust, and operational success. Common barriers include high implementation costs, integration complexity with legacy systems, and a shortage of skilled professionals capable of designing, deploying, and maintaining multi-agent AI architectures. Organizational resistance to change and ethical concerns, such as data privacy and algorithmic bias, further slow adoption. 98 In military and critical infrastructure contexts, additional hurdles include ensuring model robustness under adversarial conditions and safeguarding sensitive operational data during training and deployment. 99 On the enabling side, strong governance frameworks, cross-sector collaboration, and investment in workforce training are critical. For example, secure multi-agent communication protocols such as Google’s Agent2Agent (A2A) and the Model Context Protocol (MCP) provide a robust foundation for interoperability and resilience in complex agent ecosystems. 100 Multi-stakeholder collaboration, in which government, academia, and industry jointly participate in development and oversight, has been shown to accelerate safe deployment while improving compliance with security and ethical guidelines. 101 Table 5 synthesizes key implementation barriers and enabling factors influencing the deployment of agentic AI systems in cybersecurity contexts. Drawing from both academic and grey literature, this table captures the technical, organizational, and regulatory dynamics that affect real-world adoption, offering insights into how these systems can be scaled effectively and ethically across diverse operational environments.
Table 5. Summary of barriers and enablers.
| Category | Barrier | Enabler |
|---|---|---|
| Technical | Model complexity and interpretability | Modular architectures and explainable AI (XAI) |
| Human Capital | Skills gap in secure agentic AI design | Cross-sector training and capacity-building |
| Organizational | Integration with legacy infrastructure | Cloud-native deployment models |
| Regulatory | Ambiguous compliance obligations | Aligned ethical governance frameworks |
| Economic | High cost of development and maintenance | Public-private funding mechanisms |
| Interoperability | Lack of standards across systems | Open protocol ecosystems and industry coalitions |
A successful example of overcoming barriers can be seen in large-scale, human-centered deployment frameworks in healthcare AI, which emphasize modular architecture, explainability, and continuous monitoring to maintain operational trust, principles equally applicable to cybersecurity agentic AI systems. 102 These approaches demonstrate that the combination of strong technical safeguards, inclusive governance, and adaptive organizational culture is key to unlocking the full potential of agentic AI in cybersecurity.
5. Discussion
5.1 Summary of key insights
The integration of Agentic AI into cybersecurity frameworks reveals a convergence of four core dimensions: design patterns, governance mechanisms, resilience strategies, and implementation dynamics that collectively define the field’s maturity. From a design perspective, agentic architectures are increasingly adopting reusable patterns that enhance scalability, safety, and multi-agent coordination, enabling systems to autonomously manage complex cybersecurity tasks while maintaining human oversight where necessary. 103 The shift toward modular control planes, interoperable tool orchestration, and context-aware decision-making has been critical for adapting to diverse operational environments. On the governance front, alignment with global frameworks such as the NIST AI RMF and sector-specific compliance models has proven essential to ensuring transparency, ethical accountability, and lawful operation in automated threat response systems. 104 These governance measures are further strengthened by privacy-by-design principles, which mitigate risks of misuse and enhance stakeholder trust. In terms of resilience strategies, advancements have moved beyond conventional cryptography toward quantum-resilient AI systems capable of maintaining operational integrity even against post-quantum adversaries. 63 This includes integrating anomaly detection, autonomous incident response, and adaptive behavioral analytics to counter rapidly evolving threats. Finally, implementation success is closely linked to collaboration between stakeholders, availability of skilled talent, and sustainable funding models. Case studies show that multi-stakeholder deployments combining technical, policy, and operational expertise are more effective at achieving both security and compliance goals. 105 However, persistent challenges remain in integrating these systems into legacy infrastructures without introducing operational complexity or excessive cost burdens. 
Overall, the field’s trajectory suggests that Agentic AI in cybersecurity is moving toward a mature ecosystem where autonomous defense capabilities are harmonized with robust governance and quantum-ready resilience, creating a dynamic yet ethically anchored security posture.
5.2 Comparison with existing reviews
Previous literature on AI in cybersecurity has largely concentrated on either the technical mechanisms of AI-driven defense or high-level discussions of AI ethics, with relatively few works explicitly examining the convergence of agentic autonomy, governance integration, and quantum-resilient defense. For instance, many traditional reviews, such as Daraojimba et al., 106 provide comprehensive overviews of AI applications in protecting national infrastructure but lack an in-depth exploration of how agentic architectures adaptively align with evolving governance frameworks and resilience strategies. Similarly, while Tallam 63 advances the discussion by integrating ethical governance into AI-driven cyber defense, its focus remains on operational frameworks rather than the multi-pillar synthesis of design patterns, governance, and post-quantum security that our review aims to deliver. Some reviews, such as Al Siam et al., 107 offer a holistic analysis of AI in cybersecurity, categorizing technical advances across domains like threat detection, endpoint protection, and adaptive authentication. However, they typically treat governance and quantum readiness as peripheral topics rather than integral co-drivers of system maturity. Furthermore, Oesch et al. 64 explore agentic AI in the context of cyber conflict and global security competition but do not address operational barriers, stakeholder collaboration mechanisms, or the integration of emerging resilience strategies into national cybersecurity postures. By contrast, this review extends beyond these prior works by explicitly mapping design patterns to governance compliance and resilience measures, synthesizing both conventional and quantum-era defense considerations. It also draws on cross-domain analogies and empirical case studies to bridge the gap between theory and implementation, offering a multi-pillar framework for understanding and advancing the role of Agentic AI in cybersecurity.
5.3 Research trends & gaps
Recent studies reveal that research on agentic AI in cybersecurity has accelerated, particularly in areas such as intrusion detection, malware classification, and IoT security, with emerging interest in adversarial machine learning, blockchain integration, and quantum-resilient approaches. 71 Bibliometric analyses show steady growth in publications since 2013, with 2023 marking a peak in both research output and citations, underscoring the expanding global attention to agent-based cybersecurity systems. 68 Several thematic trends are emerging. First, AI-powered threat intelligence and anomaly detection are becoming standard components of advanced cyber defense systems. 108 Second, research increasingly explores multi-agent architectures for cross-domain knowledge discovery, enhancing the adaptability and contextual reasoning of AI-driven security platforms. 109 Third, workforce capability gaps remain a pressing issue, with global demand for AI skills in cybersecurity, such as predictive analytics and neural networks, outpacing current talent supply. 110 Despite these advances, significant gaps persist. Conceptual research still dominates over large-scale empirical deployments, limiting real-world validation of agentic AI’s resilience under adversarial conditions. 111 Ethical governance and compliance integration, while discussed often, remain underdeveloped in practice, with few frameworks aligning technical safeguards to emerging regulatory mandates. 112 Additionally, the integration of quantum-safe AI remains largely at the pilot stage, with limited cross-sectoral adoption or testing in high-threat operational environments. 70 While agentic AI research in cybersecurity is rapidly expanding with promising innovations, there is a critical need for empirical validation, workforce capability building, ethical governance alignment, and proactive readiness for quantum-era threats. 
Table 6 identifies critical research gaps uncovered through this review and proposes corresponding future research questions. These gaps span across technical innovation, ethical governance, and resilience to emerging threats. The questions are intended to guide scholars, practitioners, and policymakers in shaping the next wave of research on agentic AI in cybersecurity.
Table 6. Gaps and future priorities.
| Thematic area | Identified gap | Future research question |
|---|---|---|
| Cognitive Autonomy | Lack of standardization in agent reasoning models | How can we formalize and benchmark cognitive autonomy in AI agents for cybersecurity? |
| Ethical Governance | Limited empirical work on applied dual-use mitigation | What practical governance tools can prevent misuse of defensive agentic AI systems? |
| Quantum-Resilient Design | Early-stage exploration of quantum-safe federated learning | How can federated agentic AI systems ensure resilience to quantum-enabled threats? |
| Socio-Technical Systems | Under-researched human-agent trust calibration | What design strategies enhance human trust in semi-autonomous defense agents? |
| Compliance Integration | Fragmented alignment between technical and regulatory systems | How can agentic AI architectures be made auditable and legally interpretable? |
| Cross-Domain Insights | Lack of translatable insights from other high-stakes domains | What lessons from aviation or healthcare can inform safe agentic AI deployment in cybersecurity? |
5.4 Practical implications
The integration of Agentic AI into cybersecurity presents a set of actionable pathways for stakeholders, including policymakers, developers, and security managers, to enhance resilience and trust in autonomous defense systems. For policymakers, there is a pressing need to establish regulatory frameworks that balance innovation with public safety. This includes ensuring transparency, enforcing ethical use standards, and aligning governance with internationally recognized frameworks such as the NIST AI Risk Management Framework and the EU AI Act to mitigate misuse and protect civil liberties. 113 For developers, secure-by-design principles should be prioritized to address vulnerabilities inherent in agentic architectures, such as data poisoning, adversarial manipulation, and unauthorized access. Approaches such as the MAESTRO risk framework for Agent-to-Agent (A2A) protocols can guide the development of resilient and interoperable systems. 114 Furthermore, explainable AI (XAI) techniques should be embedded to support monitoring, auditing, and compliance with ethical norms. 115 Security managers must adapt operational models to leverage Agentic AI’s predictive and adaptive capabilities while maintaining human oversight for high-stakes decision-making. Implementing hybrid human-agent workflows can enhance detection and response without fully relinquishing control, especially in critical infrastructure defense. Collaboration across sectors is essential for sharing threat intelligence and developing quantum-resilient strategies that protect AI models and cryptographic assets from emerging quantum computing threats. 116 Ultimately, the practical application of Agentic AI in cybersecurity hinges on a triad of well-aligned governance, secure system design, and proactive operational strategies.
When these elements are integrated, Agentic AI can serve as a force multiplier for defense capabilities while upholding the principles of accountability, transparency, and ethical stewardship.
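The hybrid human-agent workflow described above can be expressed as a simple escalation policy: the agent handles routine, low-risk responses autonomously but routes high-risk or critical-infrastructure alerts to a human operator. The sketch below is purely illustrative; the alert fields, threshold value, and routing labels are hypothetical, not drawn from any cited framework.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    risk_score: float       # 0.0 (benign) .. 1.0 (critical); assumed pre-computed upstream
    asset_criticality: str  # "standard" or "critical-infrastructure" (hypothetical labels)

def triage(alert: Alert, auto_threshold: float = 0.6) -> str:
    """Route an alert: autonomous containment for low-stakes cases,
    human escalation whenever the stakes are high."""
    if alert.asset_criticality == "critical-infrastructure":
        return "escalate-to-human"       # human oversight for high-stakes assets
    if alert.risk_score >= auto_threshold:
        return "escalate-to-human"       # high risk: keep a human in the loop
    return "autonomous-containment"      # low-risk, routine automated response

print(triage(Alert("ids-sensor-7", 0.2, "standard")))               # autonomous-containment
print(triage(Alert("ids-sensor-7", 0.9, "standard")))               # escalate-to-human
print(triage(Alert("scada-gw-1", 0.1, "critical-infrastructure")))  # escalate-to-human
```

Even this toy policy makes the design trade-off concrete: lowering `auto_threshold` shifts workload toward human analysts, while raising it expands autonomous action at the cost of oversight.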
5.5 Ethical & policy considerations
The integration of agentic AI into cybersecurity systems presents complex ethical and policy challenges that require careful governance. Central concerns include transparency, accountability, legal compliance, and dual-use governance. Transparency is critical for fostering trust, ensuring that AI decision-making processes remain explainable and open to scrutiny, particularly in high-stakes security contexts. 117 Accountability frameworks must address the difficulty of attributing responsibility in autonomous systems, incorporating auditability and clear chains of responsibility. 118 Legal compliance is also essential, as AI-driven cybersecurity must align with data protection regulations such as GDPR and evolving AI-specific governance standards. 104 In addition, the dual-use nature of AI technologies, where tools designed for defense could be repurposed for offensive cyber operations, necessitates proactive policy safeguards to mitigate misuse. 119 Ethical governance models should integrate fairness-aware AI design, bias mitigation strategies, and public engagement mechanisms to align technological deployment with societal values. 120 Ultimately, effective policy must balance innovation with human rights protections, ensuring that agentic AI in cybersecurity operates within robust ethical and legal boundaries.
5.6 Strengths of this review
This review distinguishes itself through its breadth, diversity, and integrative approach, enabling a holistic understanding of agentic AI in cybersecurity. Unlike narrowly focused studies, narrative reviews can accommodate a wide range of sources, synthesize cross-disciplinary perspectives, and highlight both conceptual and empirical developments in the field. 121 This methodological flexibility allows for the inclusion of technological, ethical, and governance dimensions, which are crucial in an area as multifaceted as cybersecurity enhanced by AI. The diversity of literature integrated here, spanning technical frameworks, governance models, and sector-specific applications, enables a richer contextualization of findings, aligning with best practices in high-quality narrative synthesis. 122 Furthermore, this work extends beyond descriptive aggregation by critically linking patterns across governance, threat resilience, and implementation strategies, thereby offering an actionable synthesis for policymakers, developers, and security managers. Finally, the integration of multi-sector insights mirrors the strengths highlighted in previous interdisciplinary narrative reviews, which have demonstrated the value of cross-pollination between fields to address complex socio-technical challenges. 123 This review’s ability to weave together perspectives from different disciplines ensures that its conclusions are robust, relevant, and adaptable to rapidly evolving technological landscapes.
5.7 Future research directions
Future research on agentic AI in cybersecurity should be structured across short-, medium-, and long-term horizons, with each phase addressing pressing challenges and laying the groundwork for more advanced solutions. In the short term (1-3 years), emphasis should be placed on enhancing XAI and robust adversarial defense mechanisms to counteract model poisoning, data manipulation, and zero-day vulnerabilities. Recent studies have shown that while AI-driven anomaly detection and threat prediction significantly improve incident response, their susceptibility to adversarial attacks remains a key obstacle to operational deployment. 70 In the medium term (3-7 years), the integration of federated learning with quantum-safe cryptographic protocols is expected to gain prominence. This approach can enable decentralized model training without exposing sensitive data, while resisting the decryption capabilities of quantum computers. Research in cyber-physical systems security highlights the potential of combining AI, blockchain, and quantum-resistant algorithms to create scalable, privacy-preserving defense frameworks. 124 In the long term (7+ years), the focus will likely shift toward neurosymbolic AI and quantum-enhanced multi-agent reinforcement learning for autonomous and adaptive threat mitigation. Neurosymbolic AI promises enhanced reasoning and explainability by combining symbolic knowledge graphs with deep learning models, making AI systems more transparent and reliable in high-stakes security environments. 125 Concurrently, quantum multi-agent reinforcement learning could enable faster, more coordinated responses to cyber incidents, leveraging quantum computational speedups for complex decision-making. 126
An emerging technology watchlist should therefore include:
- Neurosymbolic AI for explainable and safe decision-making in cybersecurity.
- Federated learning integrated with quantum-safe protocols for secure, decentralized intelligence.
- Quantum-enhanced AI architectures for rapid, scalable, and autonomous security orchestration.
By strategically aligning research with these phased horizons, the cybersecurity field can evolve toward highly autonomous, explainable, and quantum-resilient agentic AI systems capable of operating securely in complex, adversarial digital ecosystems.
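The core idea behind the medium-term horizon, decentralized model training without exposing raw data, can be illustrated with a toy federated-averaging (FedAvg-style) round in pure Python. Here each client fits a trivial one-parameter model (a mean) locally and shares only the parameter and sample count with the aggregator; quantum-safe transport of those updates is assumed and not modelled.

```python
# Toy federated-averaging round: raw data never leaves a client.
# Secure (e.g. post-quantum) encryption of updates in transit is assumed,
# not implemented -- this sketch shows only the aggregation logic.

def local_update(data: list[float]) -> tuple[float, int]:
    """Client-side step: return the local mean estimate and sample count."""
    return sum(data) / len(data), len(data)

def federated_average(updates: list[tuple[float, int]]) -> float:
    """Server-side step: weight each client's estimate by its sample count."""
    total = sum(n for _, n in updates)
    return sum(est * n for est, n in updates) / total

clients = [[1.0, 2.0, 3.0], [10.0], [4.0, 4.0]]   # raw data stays on-device
updates = [local_update(d) for d in clients]
print(federated_average(updates))                  # 4.0 == mean of all 6 points
```

The aggregator recovers the global mean while seeing only summaries, which is the property that quantum-safe federated schemes aim to preserve against far stronger adversaries.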
5.8 Limitations of the review
While this review provides a broad and integrative synthesis of agentic AI in cybersecurity, several limitations should be acknowledged. First, the scope is inherently constrained by the availability and accessibility of relevant literature, which may introduce selection bias. Narrative and systematic reviews in AI research often face challenges in ensuring full representativeness of the field, particularly when grey literature and non-English sources are excluded, potentially limiting generalizability to global contexts. 127 Second, while the methodology applied aimed for rigor, the reliance on human interpretation during thematic synthesis may introduce subjectivity, a recognized limitation in narrative synthesis approaches. 128 In addition, differences in study design, evaluation metrics, and reporting standards across included works make comparative analysis challenging, potentially affecting the reliability of cross-study conclusions. 129 Third, this review’s findings may be influenced by the publication bias prevalent in AI and cybersecurity literature, where studies reporting positive or novel outcomes are more likely to be published than those reporting null or negative results. 130 Finally, given the rapid evolution of AI technologies, there is an inherent temporal limitation; insights drawn from current literature may become outdated as breakthroughs and regulatory changes emerge. 131 While the synthesis offers valuable integrative insights, these methodological and contextual constraints should be considered when interpreting the results and their applicability to future research or policy contexts.
5.9 Cross-domain insights
Lessons from other critical industries, such as aviation, finance, and healthcare, offer valuable parallels for agentic AI in cybersecurity. In aviation, AI-driven anomaly detection, predictive analytics, and game-theoretic adversarial modeling have been successfully deployed for avionics security, airport monitoring, and autonomous flight operations. These approaches emphasize the importance of certified, trustworthy AI solutions within highly regulated environments, ensuring both operational safety and regulatory compliance. 132 In the financial sector, cross-domain data sharing, particularly in green finance, has demonstrated how AI can integrate diverse datasets across organizational boundaries to enhance decision-making and compliance monitoring. This model underscores the potential for agentic AI to facilitate secure, privacy-preserving data exchanges in cybersecurity contexts. 133 In healthcare, the value of multi-agent AI systems for interdisciplinary collaboration offers a template for cybersecurity. Multi-AI frameworks have been shown to enhance knowledge integration, decision-making speed, and contextual adaptability, especially when dealing with complex, multi-variable scenarios. 109 Taken together, these cross-domain experiences highlight three transferable principles: robust certification and governance of AI tools, secure and ethical cross-organizational data sharing, and collaborative multi-agent ecosystems capable of adapting to high-stakes, dynamic environments. Concrete deployments illustrate these principles. In aviation, autonomous flight control systems, such as Airbus’s AI-enabled Flight Management Systems, illustrate how machine agency can be deployed in high-stakes, safety-critical environments with rigorous redundancy, explainability, and human override protocols.
In finance, AI-driven fraud detection systems such as Mastercard’s Decision Intelligence combine cognitive autonomy with compliance-by-design architectures, offering models for integrating ethical governance into algorithmic decision-making. These examples underscore the importance of embedding trust, accountability, and resilience into agentic systems, especially when scaling into volatile cybersecurity domains.
5.10 Analytical framework of agentic AI in cybersecurity
An effective analytical framework for agentic AI in cybersecurity integrates design principles, governance structures, and resilience mechanisms into a unified model. At its core, the framework must incorporate multi-layered threat intelligence pipelines, real-time anomaly detection, automated response systems, and adaptive learning loops to respond to evolving adversarial tactics. 134 From a governance standpoint, integrating AI with risk management frameworks and sector-specific compliance standards is essential for ensuring transparency, accountability, and auditability in decision-making. For instance, sectoral applications in telecommunications highlight the importance of aligning AI-based detection and mitigation tools with organizational and regulatory contexts, recognizing the interdependence of technical, human, and legal dimensions. 135 On the resilience axis, the framework should employ autonomous and collaborative agents capable of predictive risk assessment, cross-domain situational awareness, and integration with digital twins for pre-emptive testing of security postures in simulated environments. 72 These resilience mechanisms must be underpinned by zero-trust architectures and privacy-preserving learning models to safeguard sensitive datasets while enabling collaborative threat intelligence sharing. Finally, an ethical oversight layer embedding fairness, explainability, and dual-use governance ensures the trustworthiness of deployed systems, preventing misuse while preserving operational efficacy in high-stakes contexts. 63 This integrated analytical framework provides a structured blueprint for designing, deploying, and governing agentic AI systems in cybersecurity, bridging technical innovation with responsible stewardship. Figure 10 presents a unifying analytical framework that integrates design, governance, and resilience dimensions of agentic AI in cybersecurity. 
The framework maps these elements across the agentic AI lifecycle, enabling strategic alignment between architecture, oversight, and threat mitigation strategies for secure deployment.
Figure 10. Integrated analytical framework (Design-Governance-Resilience).
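The real-time anomaly-detection layer of the framework can be sketched, in deliberately minimal form, as a streaming detector that flags readings whose z-score against a sliding window exceeds a threshold. This is an illustrative stand-in, not the framework's prescribed implementation; the window size and threshold are arbitrary assumptions, and production systems would use far richer models.

```python
from collections import deque
from statistics import mean, stdev

class RollingAnomalyDetector:
    """Minimal streaming detector: flag a reading whose z-score against a
    sliding window exceeds a threshold. The sliding window also serves as a
    crude adaptive learning loop, since the baseline tracks recent traffic."""
    def __init__(self, window: int = 20, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        anomalous = False
        if len(self.history) >= 5:  # require enough context before scoring
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(value)  # baseline adapts as the stream evolves
        return anomalous

det = RollingAnomalyDetector()
traffic = [100, 102, 99, 101, 100, 98, 103, 100, 5000]  # requests per second
flags = [det.observe(v) for v in traffic]
print(flags[-1])  # the final spike is flagged
```

The framework's other layers (threat-intelligence pipelines, automated response, governance and oversight) would wrap around such a detector, consuming its flags and deciding how, and under whose authority, to act on them.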
6. Conclusion
This review has examined the intersection of Agentic Artificial Intelligence and cybersecurity, with a focus on cognitive autonomy, ethical governance, and quantum-resilient defense. The primary objective was to synthesize diverse literature from 2005-2025, highlighting how agentic systems characterized by autonomy, adaptability, and goal-directed reasoning are reshaping cyber defense strategies while introducing new governance and security challenges. By mapping design patterns, governance frameworks, and resilience strategies, the review identified that Agentic AI offers substantial advantages in proactive threat mitigation, continuous learning, and adaptive incident response. However, these capabilities also amplify dual-use risks, governance gaps, and the urgency for quantum-era readiness. The comparative analysis with existing frameworks, including the NIST AI RMF and EU AI Act, underscored the need for integrated governance mechanisms that align ethical principles with operational security measures. In terms of prospects, the convergence of agentic architectures with quantum-safe protocols, neurosymbolic AI, and federated learning models presents opportunities for unprecedented resilience in cybersecurity infrastructures. Nevertheless, achieving these outcomes will require coordinated policy development, technical innovation, and cross-domain knowledge transfer, drawing lessons from sectors such as aviation, finance, and healthcare. Ultimately, this review contributes to the growing discourse on aligning technological autonomy with societal values, advocating for a cybersecurity future in which Agentic AI operates as both a strategic enabler and a governed entity capable of delivering security, resilience, and trust in an increasingly complex digital ecosystem.
Ethics and consent statement
Ethical approval and consent were not required.
Acknowledgments
Not applicable.
Funding Statement
The author(s) declared that no grants were involved in supporting this work.
[version 1; peer review: 2 approved]
Data availability
The supplementary materials underlying this article are openly available on Figshare (DOI: 10.6084/m9.figshare.29966266.v1) 41 : A Review of Agentic AI in Cybersecurity: Cognitive Autonomy, Ethical Governance, and Quantum-Resilient Defense: Supplementary Data. This repository contains Tables, Figures, Appendix files, Code, and Supplementary Data. All newly generated materials and supplementary datasets are available under the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
References
- 1. Acharya DB, Kuppan K, Divya B: Agentic AI: Autonomous Intelligence for Complex Goals—A Comprehensive Survey. IEEE Access. 2025;13:18912–18936. 10.1109/ACCESS.2025.3532853 [DOI] [Google Scholar]
- 2. Wan ADM, Braspenning P: Agent Theory: Autonomy and Self-Control. 1996. Accessed: Aug. 12, 2025.
- 3. Kott A: Autonomous Intelligent Cyber-defense Agent: Introduction and Overview. ArXiv. 2023;abs/2304.1. 10.48550/ARXIV.2304.12408 [DOI] [Google Scholar]
- 4. Sengupta A: Securing the Autonomous Future A Comprehensive Analysis of Security Challenges and Mitigation Strategies for AI Agents. Int. J. Sci. Res. Eng. Manag. Dec. 2024;08(12):1–2. 10.55041/IJSREM40091 [DOI] [Google Scholar]
- 5. Hauptman AI, Schelble BG, McNeese NJ, et al. : Adapt and overcome: Perceptions of adaptive autonomous agents for human-AI teaming. Comput. Hum. Behav. Jan. 2023;138:107451. 10.1016/J.CHB.2022.107451 [DOI] [Google Scholar]
- 6. Laitinen A, Sahlgren O: AI Systems and Respect for Human Autonomy. Front. Artif. Intell. Oct. 2021;4. 10.3389/FRAI.2021.705164/PDF [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7. Bondhala S: Cybersecurity in AI-Driven Data Centers: Reinventing Threat Detection. Int. J. Adv. Res. Sci. Commun. Technol. Mar. 2025;510–519. 10.48175/IJARSCT-24464 [DOI] [Google Scholar]
- 8. Mohammed DF: Adaptive Cyber Defense: Leveraging AI for Real Time Threat Detection. Int. J. Sci. Res. Eng. Manag. May 2025;09(05):1–9. 10.55041/IJSREM47095 [DOI] [Google Scholar]
- 9. Jimmy F: Emerging Threats: The Latest Cybersecurity Risks and the Role of Artificial Intelligence in Enhancing Cybersecurity Defenses. Int. J. Sci. Res. Manag. Feb. 2021;9(02):564–574. 10.18535/IJSRM/V9I2.EC01 [DOI] [Google Scholar]
- 10. Ajayi R, Masunda M: Integrating edge computing, data science and advanced cyber defense for autonomous threat mitigation. Int. J. Sci. Res. Arch. May 2025;15(2):063–080. 10.30574/IJSRA.2025.15.2.1292 [DOI] [Google Scholar]
- 11. Sindiramutty SR: Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence. ArXiv. 2023;abs/2401.0. 10.48550/ARXIV.2401.00286 [DOI] [Google Scholar]
- 12. Balasubramani R, Biradar VG: Empowering Autonomous Decision-Making Through Quantum Reinforcement Learning and Cognitive Neuromorphic Frameworks. 2024 4th Int. Conf. Mob. Networks Wirel. Commun. 2024; pp.1–7. 10.1109/ICMNWC63764.2024.10872223 [DOI]
- 13. Tallam K: Transforming Cyber Defense: Harnessing Agentic and Frontier AI for Proactive, Ethical Threat Intelligence. ArXiv. 2025;abs/2503.0. 10.48550/ARXIV.2503.00164 [DOI] [Google Scholar]
- 14. Ratnawita R: Cybersecurity in the AI Era Measures Deepfake Threats and Artificial Intelligence-Based Attacks. J. Am. Inst. Feb. 2025;2(2):180–189. 10.71364/S3EMXX77 [DOI] [Google Scholar]
- 15. Chimamiwa G: Managing cyber risks in the face of AI- and ML-Driven Adversarial Attacks. Integr. AI Technol. Mod. Bus. Pract. Oct. 2024;71–79. 10.70301/CONF.SBS-JABR.2024.1/1.6 [DOI] [Google Scholar]
- 16. Elmisery AM, Sertovic M, Zayin A, et al. : Cyber Threats in Financial Transactions - Addressing the Dual Challenge of AI and Quantum Computing. ArXiv. 2025;abs/2503.1. 10.48550/ARXIV.2503.15678 [DOI] [Google Scholar]
- 17. Dubey V, Shende P, Kumbhare B, et al. : Exploring AI Techniques for Quantum Threat Detection and Prevention. Indian J. Comput. Sci. Technol. Jan. 2025;08–12. 10.59256/INDJCST.20250401002 [DOI] [Google Scholar]
- 18. Lekota NF: Governance Considerations of Adversarial Attacks on AI Systems. Int. Conf. AI Res. 2024;4:227–233. 10.34190/ICAIR.4.1.3194 [DOI] [Google Scholar]
- 19. Birkstedt T, Minkkinen M, Tandon A, et al. : AI governance: themes, knowledge gaps and future agendas. Internet Res. 2023;33(7):133–167. 10.1108/INTR-01-2022-0042/FULL/PDF [DOI] [Google Scholar]
- 20. Polemi N, Praça I, Kioskli K, et al. : Challenges and efforts in managing AI trustworthiness risks: a state of knowledge. Front. Big Data. 2024;7. 10.3389/FDATA.2024.1381163/PDF [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21. Ogunmolu AM: Enhancing Data Security in Artificial Intelligence Systems: A Cybersecurity and Information Governance Approach. J. Eng. Res. Reports. May 2025;27(5):154–172. 10.9734/JERR/2025/V27I51500 [DOI] [Google Scholar]
- 22. Epstein Z, et al. : Closing the AI Knowledge Gap. Mar. 2018, Accessed: Aug. 12, 2025. Reference Source
- 23. Perrella A, Bernardi FF, Bisogno M, et al. : Bridging the gap in AI integration: enhancing clinician education and establishing pharmaceutical-level regulation for ethical healthcare. Front. Med. 2024;11. 10.3389/FMED.2024.1514741/FULL Reference Source [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24. Tun HM, Naing L, Malik OA, et al. : Navigating ASEAN Region Artificial Intelligence (AI) Governance Readiness in Healthcare. Heal. Policy Technol. Mar. 2025;14(2):100981. 10.1016/J.HLPT.2025.100981 [DOI] [Google Scholar]
- 25. Bellini V, et al. : Artificial intelligence and anesthesia: a narrative review. Ann. Transl. Med. May 2022;10(9):528–528. 10.21037/ATM-21-7031 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26. Paez A: Grey literature: An important resource in systematic reviews. J. Evid. Based Med. Dec. 2017. 10.1111/JEBM.12265 [DOI] [PubMed] [Google Scholar]
- 27. Papageorgiou G, Skamnia E, Economou P: AI-Assisted Literature Review: Integrating Visualization and Geometric Features for Insightful Analysis. WIREs Data Min. Knowl. Discov. Jun. 2025;15(2). 10.1002/WIDM.70016 [DOI] [Google Scholar]
- 28. Stevens T: Knowledge in the grey zone: AI and cybersecurity. Digit. War. Dec. 2020;1(1–3):164–170. 10.1057/S42984-020-00007-W [DOI] [Google Scholar]
- 29. Thórisson K, Helgasson H: Cognitive Architectures and Autonomy: A Comparative Review. J. Artif. Gen. Intell. May 2012;3(2):1–30. 10.2478/V10229-011-0015-3 [DOI] [Google Scholar]
- 30. Sharma D: Innovations and Future Directions in Securing Digital Environments. Int. J. Sci. Res. Eng. Manag. Apr. 2025;09(04):1–9. 10.55041/IJSREM45950 [DOI] [Google Scholar]
- 31. Huber-Liebl M, Römer R, Wirsching G, et al. : Quantum-inspired cognitive agents. Front. Appl. Math. Stat. Sep. 2022;8. 10.3389/FAMS.2022.909873/PDF [DOI] [Google Scholar]
- 32. Yilmaz L: A quantum cognition model for simulating ethical dilemmas among multi-perspective agents. J. Simul. Apr. 2020;14(2):98–106. 10.1080/17477778.2019.1603090 [DOI] [Google Scholar]
- 33. Chaffer TJ, Goldston J, Okusanya B, et al. : Decentralized Governance of Autonomous AI Agents. Probl. Polit. Auth. 2024;81–100. 10.1057/9781137281661_5 [DOI] [Google Scholar]
- 34. Ranjan R, Gupta S, Singh SN: LOKA Protocol: A Decentralized Framework for Trustworthy and Ethical AI Agent Ecosystems. ArXiv. 2025;abs/2504.1. 10.48550/ARXIV.2504.10915 [DOI] [Google Scholar]
- 35. Byrne JA: Improving the peer review of narrative literature reviews. Res. Integr. Peer Rev. Dec. 2016;1(1):12. 10.1186/S41073-016-0019-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36. Edmonds B: A Context- and Scope-Sensitive Analysis of Narrative Data to Aid the Specification of Agent Behaviour. J. Artif. Soc. Soc. Simul. Jan. 2015;18(1). 10.18564/JASSS.2715 [DOI] [Google Scholar]
- 37. Shah Z, et al. : Ethical Considerations in the Use of AI for Academic Research and Scientific Discovery: A Narrative Review. Insights-Journal Life Soc. Sci. Apr. 2025;3(2):183–189. 10.71000/JFESGV69 [DOI] [Google Scholar]
- 38. D’Elia A, et al. : Artificial intelligence and health inequities in primary care: a systematic scoping review and framework. Fam. Med. Community Heal. Nov. 2022;10:e001670. 10.1136/FMCH-2022-001670 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39. Pautasso M: The Structure and Conduct of a Narrative Literature Review. A Guid. to Sci. Career. Jan. 2019;299–310. 10.1002/9781118907283.CH31 [DOI] [Google Scholar]
- 40. Faisal A, Cupiadi H: Cybersecurity in Digital Supply Chains: A Narrative Review of Threats and Strategic Frameworks for Sustainable Logistics. Sinergi Int. J. Logist. Aug. 2024;2(3):174–186. 10.61194/SIJL.V2I3.728 [DOI] [Google Scholar]
- 41. Adabara I, Sadiq BO, Shuaibu AN, et al. : A Review of Agentic AI in Cybersecurity: Cognitive Autonomy, Ethical Governance, and Quantum-Resilient Defense: Supplementary Data. [Dataset]. Figshare. 2025. 10.6084/m9.figshare.29966266.v1 [DOI]
- 42. Cooper C, et al. : Revisiting the need for a literature search narrative: A brief methodological note. Res. Synth. Methods. Sep. 2018;9(3):361–365. 10.1002/JRSM.1315 [DOI] [PubMed] [Google Scholar]
- 43. Ferenhof HA, Fernandes RF: Demystifying literature review in the AI Era. Biblios J. Librariansh. Inf. Sci. 2025;88. 10.5195/BIBLIOS.2025.1317 [DOI] [Google Scholar]
- 44. Adhikari D, Thapaliya S: An Overview of AI Applications in Cybersecurity for IT Management. NPRC J. Multidiscip. Res. Oct. 2024;1(4):121–133. 10.3126/NPRCJMR.V1I4.70951 [DOI] [Google Scholar]
- 45. Guembe B, Azeta A, Misra S, et al. : The Emerging Threat of Ai-driven Cyber Attacks: A Review. Appl. Artif. Intell. 2022;36(1). 10.1080/08839514.2022.2037254 [DOI] [Google Scholar]
- 46. Ahmady E, Mojadadi AR, Hakimi M: A Comprehensive Review of Cybersecurity Measures in the IoT Era. J. Soc. Sci. Util. Technol. Feb. 2024;2(1):288–298. 10.70177/JSSUT.V2I1.722 [DOI] [Google Scholar]
- 47. Veritti D, Rubinato L, Sarao V, et al. : Behind the mask: a critical perspective on the ethical, moral, and legal implications of AI in ophthalmology. Graefes Arch. Clin. Exp. Ophthalmol. Mar. 2024;262(3):975–982. 10.1007/S00417-023-06245-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48. Petticrew M, Roberts H: What Sorts of Studies do I Include in the Review? Deciding on the Review’s Inclusion/Exclusion Criteria. Syst. Rev. Soc. Sci. Jan. 2006;57–78. 10.1002/9780470754887.CH3 [DOI] [Google Scholar]
- 49. Swift JK, Wampold BE: Inclusion and exclusion strategies for conducting meta-analyses. Psychother. Res. May 2018;28(3):356–366. 10.1080/10503307.2017.1405169 [DOI] [PubMed] [Google Scholar]
- 50. Chustecki M: Benefits and Risks of AI in Health Care: Narrative Review. Interact. J. Med. Res. Nov. 2024;13:e53616. 10.2196/53616 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51. Sarkar S, Bhatia G: Writing and appraising narrative reviews. J. Clin. Sci. Res. Jul. 2021;10(3):169–172. 10.4103/JCSR.JCSR_1_21 [DOI] [Google Scholar]
- 52. Christou PA: Thematic Analysis through Artificial Intelligence (AI). Qual. Rep. Feb. 2024;29(2):560–576. 10.46743/2160-3715/2024.7046 [DOI] [Google Scholar]
- 53. Rawat DB, Bajracharya C: The Intersection of Quantum Computing, AI, and Cybersecurity: Challenges and Opportunities. 2024 IEEE 6th Int. Conf. Trust. Priv. Secur. Intell. Syst. Appl. 2024; pp.176–181. 10.1109/TPS-ISA62245.2024.00029 [DOI]
- 54. Young DL, Bigham M, Bradbury M, et al. : SMU-DDI Cyber Autonomy Range. 2022 IEEE Appl. Imag. Pattern Recognit. Work. 2022; vol.2022-October: pp.1–5. 10.1109/AIPR57179.2022.10092228 [DOI] [Google Scholar]
- 55. Ali S, Wang J, Leung VCM: AI-driven fusion with cybersecurity: Exploring current trends, advanced techniques, future directions, and policy implications for evolving paradigms- A comprehensive review. Inf. Fusion. Jun. 2025;118:102922. 10.1016/J.INFFUS.2024.102922 [DOI] [Google Scholar]
- 56. Çakır AM: AI Driven Cybersecurity. Hum. Comput. Interact. Dec. 2024;8(1):119. 10.62802/JG7GGE06 [DOI] [Google Scholar]
- 57. Vemuri N, Thaneeru N, Tatikonda VM: Adaptive generative AI for dynamic cybersecurity threat detection in enterprises. Int. J. Sci. Res. Arch. Feb. 2024;11(1):2259–2265. 10.30574/IJSRA.2024.11.1.0313 [DOI] [Google Scholar]
- 58. Nour SM, Said SA: Harnessing the Power of AI for Effective Cybersecurity Defense. 2024 6th Int. Conf. Comput. Informatics. 2024; pp.98–102. 10.1109/ICCI61671.2024.10485059 [DOI]
- 59. Samtani S, Kantarcioglu M, Chen H: Trailblazing the Artificial Intelligence for Cybersecurity Discipline. ACM Trans. Manag. Inf. Syst. Dec. 2020;11(4):1–19. 10.1145/3430360 [DOI] [Google Scholar]
- 60. Schneider J: Generative to Agentic AI: Survey, Conceptualization, and Challenges. ArXiv. 2025;abs/2504.1. 10.48550/ARXIV.2504.18875 [DOI] [Google Scholar]
- 61. Sapkota R, Roumeliotis KI, Karkee M: AI Agents vs. Agentic AI: A Conceptual Taxonomy, Applications and Challenges. 2025. Accessed: Aug. 13, 2025.
- 62. Bousetouane F: Agentic Systems: A Guide to Transforming Industries with Vertical AI Agents. ArXiv. 2025;abs/2501.0. 10.48550/ARXIV.2501.00881 [DOI] [Google Scholar]
- 63. Tallam K: Transforming Cyber Defense: Harnessing Agentic and Frontier AI for Proactive, Ethical Threat Intelligence. ArXiv. 2025;abs/2503.0. 10.48550/ARXIV.2503.00164 [DOI] [Google Scholar]
- 64. Oesch S, Hutchins J, Austria P, et al. : Agentic AI and the Cyber Arms Race. Computer (Long. Beach. Calif). 2025;58(5):82–85. 10.1109/MC.2025.3544116 [DOI] [Google Scholar]
- 65. Wu J, You H, Du J: AI Generations: From AI 1.0 to AI 4.0. ArXiv. 2025;abs/2502.1. 10.48550/ARXIV.2502.11312 [DOI] [Google Scholar]
- 66. Bako NZ, Ozioko CN, Sanni IO, et al. : The Integration of AI and blockchain technologies for secure data management in cybersecurity. World J. Adv. Res. Rev. Mar. 2025;25(3):1666–1697. 10.30574/WJARR.2025.25.3.0784 [DOI] [Google Scholar]
- 67. Murugesan S: The Rise of Agentic AI: Implications, Concerns, and the Path Forward. IEEE Intell. Syst. 2025;40(2):8–14. 10.1109/MIS.2025.3544940 [DOI] [Google Scholar]
- 68. Girish Savadatti S, Srinivasan K, Hu YC: A Bibliometric Analysis of Agent-Based Systems in Cybersecurity and Broader Security Domains: Trends and Insights. IEEE Access. 2025;13:90–119. 10.1109/ACCESS.2024.3520583 [DOI] [Google Scholar]
- 69. Albahri O, Alamoodi A: Cybersecurity and Artificial Intelligence Applications: A Bibliometric Analysis Based on Scopus Database. Mesopotamian J. Cyber Secur. 2023;2023:158–169. 10.58496/MJCSC/2023/018 [DOI] [Google Scholar]
- 70. Ali S, Wang J, Leung VCM: AI-driven fusion with cybersecurity: Exploring current trends, advanced techniques, future directions, and policy implications for evolving paradigms- A comprehensive review. Inf. Fusion. Jun. 2025;118:102922. 10.1016/J.INFFUS.2024.102922 [DOI] [Google Scholar]
- 71. Achuthan K, Ramanathan S, Srinivas S, et al. : Advancing cybersecurity and privacy with artificial intelligence: current trends and future research directions. Front. Big Data. 2024;7. 10.3389/FDATA.2024.1497535 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 72. Mintoo AA, Saimon ASM, Bakhsh MM, et al. : National Resilience through AI-Driven Data Analytics and Cybersecurity for Real-Time Crisis Response and Infrastructure Protection. Am. J. Sch. Res. Innov. Mar. 2022;1(1):137–169. 10.63125/SDZ8KM60 [DOI] [Google Scholar]
- 73. Alneyadi MRMAH, Normalini MK: Intelligent Protection: A Study of the Key Drivers of Intention to Adopt Artificial Intelligence (AI) Cybersecurity Systems in the UAE. Interdiscip. J. Inf. Knowl. Manag. 2025;20:003. 10.28945/5430 [DOI] [Google Scholar]
- 74. Arefin S, Zannat NT, Global Health Institute Research Team United states : Securing AI in Global Health Research: A Framework for Cross-Border Data Collaboration. Clin. Med. Heal. Res. J. Mar. 2025;5(02):1187–1193. 10.18535/CMHRJ.V5I02.457 [DOI] [Google Scholar]
- 75. Händler T: Balancing Autonomy and Alignment: A Multi-Dimensional Taxonomy for Autonomous LLM-powered Multi-Agent Architectures. ArXiv. 2023;abs/2310.0. 10.48550/ARXIV.2310.03659 [DOI] [Google Scholar]
- 76. Cihon P, Stein M, Bansal G, et al. : Measuring AI agent autonomy: Towards a scalable approach with code inspection. ArXiv. 2025;abs/2502.1. 10.48550/ARXIV.2502.15212 [DOI] [Google Scholar]
- 77. Karim MM, Van DH, Khan S, et al. : AI Agents Meet Blockchain: A Survey on Secure and Scalable Collaboration for Multi-Agents. Futur. Internet. Feb. 2025;17(2). 10.3390/FI17020057 [DOI] [Google Scholar]
- 78. Frenette J: Systems and Methods for Decentralized AI Governance Networks (DAGN) with Tokenized Power Control (TPC) for Enforcing Human-Centric AI. Int. J. Res. Appl. Sci. Eng. Technol. Jan. 2025;13(1):609–613. 10.22214/IJRASET.2025.66304 [DOI] [Google Scholar]
- 79. Ranjan R, Gupta S, Singh SN: LOKA Protocol: A Decentralized Framework for Trustworthy and Ethical AI Agent Ecosystems. ArXiv. 2025;abs/2504.1. 10.48550/ARXIV.2504.10915 [DOI] [Google Scholar]
- 80. Loevenich JF, Adler E, Mercier R, et al.: Design of an Autonomous Cyber Defence Agent using Hybrid AI models. 2024 Int. Conf. Mil. Commun. Inf. Syst. 2024; pp.1–10. 10.1109/ICMCIS61231.2024.10540988
- 81. Meyer-Vitali A, Mulder W, De Boer MHT: Modular design patterns for neural-symbolic integration: refinement and combination. CEUR Workshop Proc. 2022;vol.3212. Accessed: Aug. 13, 2025.
- 82. Meyer-Vitali A, Mulder W, Boer MHT: Modular Design Patterns for Hybrid Actors. Sep. 2021. Accessed: Aug. 13, 2025.
- 83. Bekkum M, Boer M, Harmelen F, et al.: Modular design patterns for hybrid learning and reasoning systems: a taxonomy, patterns and use cases. Appl. Intell. Sep. 2021;51(9):6528–6546. 10.1007/S10489-021-02394-3
- 84. Rodriguez S, Thangarajah J, Davey A: Design Patterns for Explainable Agents (XAg). 2024; pp.1621–1629. 10.5555/3635637.3663023
- 85. Schulte A, Donath D, Lange DS: Design Patterns for Human-Cognitive Agent Teaming. Lect. Notes Comput. Sci. 2016;9736:231–243. 10.1007/978-3-319-40030-3_24
- 86. Benraouane SA: AI Management System Certification According to the ISO/IEC 42001 Standard: How to Audit, Certify, and Build Responsible AI Systems. Jan. 2024; pp.1–190. 10.4324/9781003463979
- 87. Cancela-Outeda C: The EU’s AI Act: a framework for collaborative governance. Internet Things. Oct. 2024;27:101291. 10.1016/J.IOT.2024.101291
- 88. Smuha NA: The EU Approach to Ethics Guidelines for Trustworthy Artificial Intelligence. Comput. Law Rev. Int. Aug. 2019;20(4):97–106. 10.9785/CRI-2019-200402
- 89. Ricciardi Celsi L, Zomaya AY: Perspectives on Managing AI Ethics in the Digital Age. Information. Apr. 2025;16(4). 10.3390/INFO16040318
- 90. Sankaran S: Enhancing Trust Through Standards: A Comparative Risk-Impact Framework for Aligning ISO AI Standards with Global Ethical and Regulatory Contexts. arXiv. 2025;abs/2504.16139. 10.48550/ARXIV.2504.16139
- 91. Hogan L, Lasek-Markey M: Towards a Human Rights-Based Approach to Ethical AI Governance in Europe. Philosophies. Dec. 2024;9(6). 10.3390/PHILOSOPHIES9060181
- 92. Sharma D: Innovations and Future Directions in Securing Digital Environments. Int. J. Sci. Res. Eng. Manag. Apr. 2025;09(04):1–9. 10.55041/IJSREM45950
- 93. Cherukupalle NS: Quantum-Resilient Cloud Systems: Preemptive Shielding Against Post-Quantum Cryptographic Threats. J. Inf. Syst. Eng. Manag. Feb. 2025;10(38s):1234–1246. 10.52783/JISEM.V10I38S.8781
- 94. Aydeger A, Zeydan E, Yadav AK, et al.: Towards a Quantum-Resilient Future: Strategies for Transitioning to Post-Quantum Cryptography. 2024 15th Int. Conf. Netw. Futur. 2024; pp.195–203. 10.1109/NOF62948.2024.10741441
- 95. Polu OR: AI-Driven Detection of Adversarial Attacks in Post-Quantum Cryptographic Systems. Int. J. Sci. Res. Mar. 2025;14(3):62–66. 10.21275/SR25302093317
- 96. Maharajan DK, Mugeshwaran K, Nithish D, et al.: Threat Intelligence Platform Empowered by Generative AI with Quantum-Security. Int. Res. J. Adv. Eng. Hub. Apr. 2025;3(04):1336–1342. 10.47392/IRJAEH.2025.0190
- 97. Samunnisa K, Gaddam SVK, Madhavi K: Design and Evaluation of a Quantum-Resilient Cryptographic Framework for Enhancing Security and Efficiency in Distributed Cloud Environments. Int. J. Electr. Electron. Eng. Jul. 2024;11(7):1–22. 10.14445/23488379/IJEEE-V11I7P101
- 98. Siddiqui HA, Khan A, Shaikh S: Unveiling Barriers and Enablers: A Study on AI Adoption in Business Management. J. Soc. & Organ. Matters. Mar. 2025;4(1):193–209. 10.56976/JSOM.V4I1.179
- 99. Loevenich JF, et al.: Training Autonomous Cyber Defense Agents: Challenges & Opportunities in Military Networks. MILCOM 2024 - 2024 IEEE Mil. Commun. Conf. 2024; pp.158–163. 10.1109/MILCOM61039.2024.10773923
- 100. Habler I, Huang K, Narajala VS, et al.: Building A Secure Agentic AI Application Leveraging A2A Protocol. arXiv. 2025;abs/2504.16902. 10.48550/ARXIV.2504.16902
- 101. Ugo GC, Apata AC, Dawodu PO: A Multi-Stakeholder Perspective on the Limitations of Implementing Artificial Intelligence in Highway Transport. J. Eng. Res. Rep. 2024;26(2):243–249. 10.9734/JERR/2024/V26I21086
- 102. Adnan HS, Shidani A, Clifton L, et al.: Implementation Framework for AI Deployment at Scale in Healthcare Systems. SSRN Electron. J. 2023. 10.2139/SSRN.4465877
- 103. Kandasamy S: Control Plane as a Tool: A Scalable Design Pattern for Agentic AI Systems. arXiv. 2025;abs/2503.23037. 10.48550/ARXIV.2503.23037
- 104. Ajiboye KJ: Ensuring data security and compliance in AI-powered business applications. Glob. J. Eng. Technol. Adv. Apr. 2023;15(1):125–142. 10.30574/GJETA.2023.15.1.0067
- 105. Mbah GO, Nkechi A: AI-powered cybersecurity: strategic approaches to mitigate risk and safeguard data privacy. World J. Adv. Res. Rev. Dec. 2024;24(3):310–327. 10.30574/WJARR.2024.24.3.3695
- 106. Daraojimba DO, Adewusi AO, Okoli U, et al.: Artificial intelligence in cybersecurity: protecting national infrastructure: a USA review. World J. Adv. Res. Rev. Jan. 2024;21(1):2263–2275. 10.30574/WJARR.2024.21.1.0313
- 107. Al Siam A, Alazab M, Awajan A, et al.: A Comprehensive Review of AI’s Current Impact and Future Prospects in Cybersecurity. IEEE Access. 2025;13:14029–14050. 10.1109/ACCESS.2025.3528114
- 108. Bin Akhtar Z, Rawol AT: Enhancing Cybersecurity through AI-Powered Security Mechanisms. IT J. Res. Dev. Oct. 2024;9(1):50–67. 10.25299/ITJRD.2024.16852
- 109. Aryal S, et al.: Leveraging Multi-AI Agents for Cross-Domain Knowledge Discovery. arXiv. 2024;abs/2404.08511. 10.48550/ARXIV.2404.08511
- 110. Graham CM: AI skills in cybersecurity: global job trends analysis. Inf. & Comput. Secur. 2025. 10.1108/ICS-09-2024-0235
- 111. Kour R, Karim R, Dersin P, et al.: Cybersecurity for Industry 5.0: trends and gaps. Front. Comput. Sci. 2024;6. 10.3389/FCOMP.2024.1434436
- 112. Jaiswal A, Mishra PC: Artificial intelligence (AI) and cybersecurity law: legal issues in AI-driven cyber defense and offense. ShodhKosh J. Vis. Perform. Arts. Jun. 2024;5(6). 10.29121/SHODHKOSH.V5.I6.2024.4144
- 113. Wang WC: Legal, Policy, and Compliance Issues in Using AI for Security: Using Taiwan’s Cybersecurity Management Act and Penetration Testing as Examples. 2024 16th Int. Conf. Cyber Confl. Over Horiz. 2024; pp.161–176. 10.23919/CYCON62501.2024.10685586
- 114. Habler I, Huang K, Narajala VS, et al.: Building A Secure Agentic AI Application Leveraging A2A Protocol. arXiv. 2025;abs/2504.16902. 10.48550/ARXIV.2504.16902
- 115. Sengupta A: Securing the Autonomous Future: A Comprehensive Analysis of Security Challenges and Mitigation Strategies for AI Agents. Int. J. Sci. Res. Eng. Manag. Dec. 2024;08(12):1–2. 10.55041/IJSREM40091
- 116. Alnaffar A: Cybersecurity Resilience, Cryptocurrency, and AI: Navigating the Risks in the Middle East. Int. J. Sci. Res. Mar. 2024;13(3):240–241. 10.21275/SR24305132807
- 117. Oloyede J: Ethical Reflections on AI for Cybersecurity: Building Trust. SSRN Electron. J. 2024. 10.2139/SSRN.4733563
- 118. Neyigapula BS: Ethical Considerations in AI Development: Balancing Autonomy and Accountability. J. Adv. Artif. Intell. 2024. 10.18178/JAAI.2024.2.1.138-148
- 119. Pujari T, Goel A, Kejriwal D: Ethical and Responsible AI in the Age of Adversarial Diffusion Models: Challenges, Risks, and Mitigation Strategies. Int. J. Sci. Technol. Dec. 2022;1(3):54–68. 10.56127/IJST.V1I3.1963
- 120. Bahangulu JK, Owusu-Berko L: Algorithmic bias, data ethics, and governance: ensuring fairness, transparency and compliance in AI-powered business analytics applications. World J. Adv. Res. Rev. Feb. 2025;25(2):1746–1763. 10.30574/WJARR.2025.25.2.0571
- 121. Ahmad MN: Narrative Literature Reviews in Scientific Research: Pros and Cons. Jordan J. Agric. Sci. Mar. 2025;21(1):1–4. 10.35516/JJAS.V21I1.4143
- 122. Faisal A, Cupiadi H: Cybersecurity in Digital Supply Chains: A Narrative Review of Threats and Strategic Frameworks for Sustainable Logistics. Sinergi Int. J. Logist. Aug. 2024;2(3):174–186. 10.61194/SIJL.V2I3.728
- 123. Malatji M, Tolah A: Artificial intelligence (AI) cybersecurity dimensions: a comprehensive framework for understanding adversarial and offensive AI. AI Ethics. Apr. 2025;5(2):883–910. 10.1007/S43681-024-00427-4
- 124. Kodete CS, Thuraka B, Pasupuleti V: A Systematic Review of AI-Driven and Quantum-Resistant Security Solutions for Cyber-Physical Systems: Blockchain, Federated Learning, and Emerging Technologies. 2024 Int. Conf. Comput. Appl. 2024; pp.1–6. 10.1109/ICCA62237.2024.10927942
- 125. Piplai A, Kotal A, Mohseni S, et al.: Knowledge-Enhanced Neurosymbolic Artificial Intelligence for Cybersecurity and Privacy. IEEE Internet Comput. Sep. 2023;27(5):43–48. 10.1109/MIC.2023.3299435
- 126. Yu W, Zhao J: Quantum Multi-Agent Reinforcement Learning as an Emerging AI Technology: A Survey and Future Directions. 2023 Int. Conf. Comput. Appl. 2023; pp.1–7. 10.1109/ICCA59364.2023.10401605
- 127. Fazil AW, Hakimi M, Shahidzay AK: A comprehensive review of bias in AI algorithms. Nusant. Hasana J. Jan. 2024;3(8):1–11. 10.59003/NHJ.V3I8.1052
- 128. Shah Z, et al.: Ethical considerations in the use of AI for academic research and scientific discovery: a narrative review. Insights J. Life Soc. Sci. Apr. 2025;3(2):183–189. 10.71000/JFESGV69
- 129. Murphy A, Bowen K, El Naqa IM, et al.: Bridging Health Disparities in the Data-Driven World of Artificial Intelligence: A Narrative Review. J. Racial Ethn. Health Disparities. 2024;12:2367–2379. 10.1007/S40615-024-02057-2
- 130. Bellini V, et al.: Artificial intelligence and anesthesia: a narrative review. Ann. Transl. Med. May 2022;10(9):528. 10.21037/ATM-21-7031
- 131. Elamin MOI, Ismaiel OMO: The AI Revolution in Cybersecurity: Transforming Threat Detection, Defense Mechanisms, and Risk Management in the Digital Era. Int. J. Relig. Feb. 2025;6(1):228–246. 10.61707/1DMVN671
- 132. Garcia AB, Babiceanu RF, Seker R: Artificial Intelligence and Machine Learning Approaches for Aviation Cybersecurity: An Overview. 2021 Integr. Commun. Navig. Surveill. Conf. Apr. 2021; pp.1–8. 10.1109/ICNS52807.2021.9441594
- 133. Papagiannopoulos I, et al.: Enhancing Green Financing Through AI Analytics and Cross-Domain Data Sharing. 2024 15th Int. Conf. Information, Intell. Syst. Appl. 2024; pp.1–11. 10.1109/IISA62523.2024.10786617
- 134. Jabbar H, Al-Janabi S, Syms F: AI-Integrated Cyber Security Risk Management Framework for IT Projects. 2024 Int. Jordanian Cybersecurity Conf. 2024; pp.76–81. 10.1109/IJCC64742.2024.10847294
- 135. Shoetan PO, Amoo OO, Okafor ES, et al.: Synthesizing AI’s impact on cybersecurity in telecommunications: a conceptual framework. Comput. Sci. & IT Res. J. Mar. 2024;5(3):594–605. 10.51594/CSITRJ.V5I3.908