Abstract
Introduction
Artificial Intelligence (AI) is transforming anaesthesia and intensive care medicine, enhancing diagnostic precision, workflow efficiency, and patient safety. However, deploying AI in high-acuity environments involves regulatory, ethical, and operational challenges. The European Union Artificial Intelligence Act (AI Act), whose first provisions apply from 2025, imposes binding obligations on healthcare organizations, creating an urgent need for structured, governance-focused AI policies. This work presents a checklist-based methodology for responsible, safe, ethical, and regulation-aligned AI adoption in clinical units.
The need for a methodology to develop an AI policy
Effective AI policies must ensure transparency, safety, fairness, and regulatory compliance while remaining adaptable to rapid technological and legislative changes. The proposed methodology employs a domain-specific checklist to generate critical evaluative questions, enabling healthcare professionals to systematically assess AI systems’ appropriateness, reliability, and legal implications without relying on rigid, quickly outdated prescriptive rules.
The AI Act and its relevance
Regulation (EU) 2024/1689 establishes the first comprehensive AI legal framework, introducing a risk-based classification and imposing stringent requirements on high-risk AI systems, a category that often includes medical devices. Compliance obligations extend to both AI-system providers and deployers, making operational compliance instruments and AI literacy programmes essential for lawful implementation.
AI literacy: obligation and planning
From February 2025, the AI Act mandates AI literacy for all personnel interacting with AI systems. Training should cover baseline competencies for all staff, advanced modules for specialists, continuous professional development, and integration of ethical, legal, and governance principles. Competency acquisition and updates must be systematically documented to meet institutional and EU compliance standards.
Operational checklist for the adoption of AI policy
The checklist has two integrated domains: clinical and technical validation, including evidence-based performance assessment, real-world validation, MDR compliance, GDPR adherence, and post-deployment monitoring; and governance and compliance, covering AI Act conformity, organizational accountability, decision traceability, human oversight, AI literacy, and structured audit and update mechanisms.
Future perspectives
The checklist methodology offers a scalable, adaptable, regulation-ready framework for AI policy development. By embedding legal compliance, clinical safety, governance, and continuous staff training, it supports sustainable AI integration. Future updates will incorporate regulatory changes, real-world feedback, and impact metrics, enhancing AI’s contribution to quality, safety, and equity in patient care.
Keywords: Artificial Intelligence, AI Act, Policy, Ethics, Regulation, AI Governance, AI Literacy, Education
Introduction
Artificial Intelligence (AI) is emerging as a key technology across various fields of medicine, including anaesthesia and intensive care [1]. The integration of AI into these areas offers significant opportunities to enhance clinical accuracy, operational efficiency, and patient safety. However, its use raises ethical, regulatory, and practical concerns, underscoring the need for a clear and well-structured policy [2] to define governance and compliance aspects related to the use of this technology within healthcare organizations. With the entry into force of the first provisions of the European Union’s AI Act in February and August 2025, adopting a structured and responsible approach to AI is no longer merely advisable but a regulatory obligation [3]. This proposal defines a checklist-based methodology for the adoption of an AI policy in anaesthesia and intensive care units (ICUs), which can also be applied to other clinical units, providing a first-level reference framework for the responsible, safe, ethical, and legally compliant use of AI [4]. Rather than prescribing rigid rules, this work offers healthcare professionals a set of guiding questions to critically evaluate the use of AI in their daily clinical practice in light of the adoption of a structured AI policy.
The need for a methodology to develop an AI policy
An effective AI policy should help healthcare organizations meet the requirements for transparency, safety, fairness, and regulatory compliance, offering a clear framework for healthcare professionals [5]. The wide range of domains to be considered requires a methodology that can guide organizations in the implementation of a structured AI policy. Our proposal is based on the creation of a checklist tailored to the healthcare context. The cornerstone of this approach is that it does not provide ready-made answers at this stage but rather helps healthcare professionals ask the right questions to assess the reliability, appropriateness, and legal and ethical implications of using AI in their specific organizations [6]. This approach also allows for operational flexibility that adapts to the ongoing evolution of technology and regulations, avoiding rigid frameworks that may quickly become outdated. In addition to supporting healthcare professionals in the adoption of an AI policy by identifying key domains requiring internal governance, the checklist also enables a preliminary assessment of AI systems. As discussed in the concluding section, the checklist has the potential to be integrated into the AI policy itself.
The AI Act and its relevance
The European Union recently adopted Regulation (EU) 2024/1689 (Artificial Intelligence Act), the first comprehensive legislation on AI in the world, which seeks to strike a balance between protecting rights and supporting innovation. The AI Act regulates placing AI systems on the market, putting them into service, and using them within the EU according to the risks that may arise from the adoption of this technology. Under this new risk-based legal framework, several AI practices considered to pose unacceptable risks are prohibited (e.g., AI systems that infer emotions in the workplace and educational institutions). For high-risk AI systems (including, under certain conditions, medical devices), the AI Act introduces a set of requirements and obligations, such as data governance, technical documentation, human oversight, and post-market monitoring. In addition, specific transparency obligations apply to certain AI systems, such as those that interact with humans or generate synthetic content and deepfakes. These rules are directly relevant in the healthcare domain, as they apply both to organizations that develop AI systems (providers) and to those that use them (deployers). Therefore, it is essential to adopt operational compliance tools (an AI policy) and acquire adequate skills (AI literacy) to ensure full compliance with the AI Act [3].
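The tier structure described above lends itself to a simple lookup when drafting internal policy documents. The sketch below is an illustrative simplification, not legal advice: the tier names follow the Regulation, but the obligation lists are our own non-exhaustive summaries.

```python
# Illustrative sketch: AI Act risk tiers mapped to example obligations.
# Tier names follow Regulation (EU) 2024/1689; the obligation lists are
# a non-exhaustive simplification for internal policy drafting only.

RISK_TIERS = {
    "unacceptable": ["prohibited: may not be placed on the market or used"],
    "high": [
        "risk management system",
        "data governance",
        "technical documentation",
        "human oversight",
        "post-market monitoring",
    ],
    "limited": ["transparency obligations (e.g., disclose AI interaction, label synthetic content)"],
    "minimal": ["no mandatory obligations; voluntary codes of conduct"],
}

def obligations_for(tier: str) -> list[str]:
    """Return the example obligations attached to an AI Act risk tier."""
    try:
        return RISK_TIERS[tier]
    except KeyError:
        raise ValueError(f"Unknown AI Act risk tier: {tier!r}")

print(obligations_for("high"))
```

A unit-level policy could use such a lookup to attach the relevant checklist questions to each system once its risk tier has been determined.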
AI literacy: obligation and planning
As of February 2, 2025, the AI Act has made AI training mandatory. Specifically, providers and deployers of AI systems must take measures to ensure a sufficient level of AI literacy among their staff and other persons dealing with AI systems on their behalf, taking into account their technical knowledge, experience, education, and training, the context in which the AI systems are to be used, and the persons on whom the AI systems are to be used [3]. AI literacy is essential to ensure that healthcare professionals can use these tools consciously and responsibly [7]. A structured training plan should include basic education for all healthcare workers on clinical applications of AI, advanced modules for specialists who use AI systems in daily practice, and ongoing programmes to stay updated on new technologies and regulations [8]. The legal, ethical, and governance dimensions of AI should also be part of the AI literacy programme. Documenting training progress is also key and must be performed systematically to ensure compliance with institutional and European standards [9]. Therefore, AI literacy is not only an essential component of an AI policy but also a fundamental prerequisite for its effective introduction and implementation. For this reason, the checklist proposed in this paper incorporates considerations related to AI literacy [10].
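Systematic documentation of training progress can be as simple as a dated record per staff member with a refresh interval. The sketch below is illustrative: the record fields and the 12-month refresh cycle are our own assumptions, since the AI Act mandates a sufficient level of AI literacy but does not prescribe a documentation format.

```python
from datetime import date, timedelta

# Illustrative AI-literacy training log. Field names and the 12-month
# refresh interval are assumptions for illustration; the AI Act does not
# prescribe a specific documentation format.

REFRESH_INTERVAL = timedelta(days=365)

def training_due(last_completed: date, today: date) -> bool:
    """True when a staff member's AI-literacy refresh is overdue."""
    return today - last_completed >= REFRESH_INTERVAL

log = [
    {"staff": "A. Rossi", "module": "basic AI literacy", "completed": date(2025, 3, 1)},
    {"staff": "B. Bianchi", "module": "advanced: AI in ICU", "completed": date(2024, 1, 15)},
]

today = date(2025, 6, 1)
overdue = [r["staff"] for r in log if training_due(r["completed"], today)]
print(overdue)  # staff whose refresh training is overdue
```

Keeping such a log per module makes it straightforward to demonstrate, at audit time, who was trained on what and when.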
Operational checklist for the adoption of AI policy in clinical units
This checklist provides a preliminary operational framework for assessing the adoption and use of AI systems in clinical units, particularly within Anaesthesia and ICUs, in light of the implementation of a structured AI policy (Table 1).
Table 1.
The first section, clinical and technical validation, addresses the clinical effectiveness, technical robustness, and safety of AI systems. It encompasses aspects such as validation in the intended clinical context, compliance with medical device regulations, adherence to GDPR requirements, and ongoing performance monitoring after deployment. It also considers the representativeness of validation datasets, the inclusion of real-world performance data, the testing of systems in high-acuity scenarios, and the training of healthcare staff to ensure competent and safe AI use. The second section, governance and compliance, focuses on the organizational, legal, and procedural dimensions of AI integration. This includes compliance with the EU AI Act and other regulatory frameworks, clear allocation of responsibilities, human oversight in critical decision-making, transparent documentation, and traceability of AI-supported decisions, as well as structured governance mechanisms such as the appointment of dedicated AI officers. It also addresses the need for periodic audits, software updates, incident reporting, and the integration of staff feedback into continuous improvement processes. This structure ensures that AI adoption is approached not only as a technological implementation, but also as a process embedded within regulatory compliance, organizational accountability, and sustained clinical safety.
| AI operational checklist | |
|---|---|
| Clinical and technical validation | Governance and compliance |
| 1. AI systems adoption status | 1. General compliance |
| Are AI systems currently in use within the unit? If not, is the decision not to adopt AI systems formally documented and justified? Does the organization have a register in which all AI systems in use are mapped? | Have AI systems been subject to regulatory compliance assessments? Is there documentation confirming these assessments? |
| 2. AI systems description and context | 2. AI Act compliance |
| What are the name, purpose, and functionality of the AI systems? Which clinical domain do the AI systems support? Who is the provider of the AI systems? Are the AI systems open source? What types of data are processed by the AI systems? What data sources are used? Are the AI systems accompanied by information on their main decision-making logic? | Do the systems meet the definition of an AI system under the AI Act? Do the AI systems fall under any of the exceptions to the scope of the AI Act (e.g., scientific research)? What is the role of the organization in relation to the AI systems pursuant to the AI Act (e.g., provider, deployer)? How are the AI systems classified under the risk-based framework of the AI Act (unacceptable risk, high risk, limited risk, minimal risk)? Does the organization comply with all applicable obligations and requirements under the AI Act, based on the risk classification of the AI systems? |
| 3. Clinical validation | 3. GDPR compliance |
| Are there peer-reviewed clinical studies supporting the AI system’s effectiveness? Have the AI systems been validated in the specific context of use? Were the validation studies conducted on datasets representative of the target patient population? Are demographic, clinical, and procedural variables comparable to the actual clinical setting? Does the validation include real-world clinical performance data (not just retrospective or simulated data)? Has the AI system been tested for clinical robustness in emergency or high-acuity settings (e.g., ICU, OR)? Were the endpoints of the validation study clinically meaningful (e.g., reduction in complications, improved workflow, improved decision accuracy)? Has the system undergone validation against clinical gold standards or expert consensus? Was a comparative assessment performed (e.g., AI vs human performance, or AI-assisted vs standard care)? Is the performance of the AI system monitored in post-deployment clinical settings (prospective validation)? | Do the AI systems process personal data pursuant to the General Data Protection Regulation (GDPR)? What is the role of the organization in relation to the processing of personal data through AI systems pursuant to the GDPR (e.g., controller, processor)? What is the legal basis for the processing of personal data through AI systems pursuant to the GDPR (e.g., consent)? Do the AI systems process special categories of personal data (e.g., health data)? If so, are the necessary conditions under the GDPR met? Are data subjects informed about the processing of their personal data through AI systems? Has the organization adopted mechanisms that allow data subjects to exercise their rights under the GDPR (e.g., access, erasure)? Has a Data Protection Impact Assessment (DPIA) been conducted? Have appropriate technical and organizational measures been implemented to ensure privacy by design and by default? Are personal data anonymized or pseudonymized in relation to the use of AI systems? |
| 4. Medical regulation compliance | 4. Organizational AI governance |
| Are the AI systems registered as medical devices under the Medical Device Regulation (MDR), or classified under another regulatory framework? Is there official documentation of their classification and certification? Has a conformity assessment procedure been carried out for the AI system under the MDR? Does the conformity assessment include clinical evaluation and performance testing? Is the manufacturer of the AI system certified according to MDR standards (e.g., ISO 13485)? Is the AI system CE marked for the intended clinical use? Is the CE mark associated with the same context in which the system is used within the unit? Does the AI system fall into one of the MDR-defined risk classes (I, IIa, IIb, III)? Is the assigned risk class consistent with the type of clinical decisions the system supports? Is post-market surveillance planned and documented for the AI system? Are mechanisms in place for reporting incidents or adverse events related to AI system use? Have responsibilities for compliance with the MDR been clearly allocated within the organization (e.g., clinical users vs procurement vs IT)? | Has an AI governance structure been introduced? Has the organization clearly assigned responsibilities for managing AI-based decision-making? Has the organization appointed a person or unit responsible for overseeing AI (e.g., an AI officer)? Has the organization adopted a procedure for assessing the necessity, proportionality, and adequacy of AI systems prior to implementation? |
| 5. Human oversight and clinical safety | 5. Responsibility assignment and decision traceability |
| Is final human oversight consistently ensured before executing critical decisions through AI systems? Is staff trained to recognize when manual intervention is necessary in relation to AI systems? Are roles and responsibilities between the AI systems and healthcare personnel clearly defined? Are there clear guidelines for using or overriding AI-generated recommendations? | Is the responsible party clearly identified in case of a clinical error related to AI use? Is it documented who makes the final decision in critical situations? Is the use of AI systems traced in medical records? Are AI-supported decisions transparently documented? |
| 6. AI literacy | |
| Have all healthcare personnel received basic training on the use and limitations of AI systems? Do training programmes include assessment of acquired competencies? Is there a periodic update plan in place for staff regarding AI systems? Are practical, hands-on training sessions provided? Have systems been implemented to collect ongoing feedback from staff on AI systems use? Is the feedback actively used to improve procedures? | |
| 7. Monitoring, updates, and maintenance | |
| Are regular audits of the AI systems’ performance scheduled? Are any malfunctions or deviations from standards documented? Are routine software updates planned? Are new versions validated before deployment? | |
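The checklist’s first item, a register in which all AI systems in use are mapped, can be kept in a machine-readable form so that several checklist questions become queryable fields. The dataclass below is a hypothetical minimal schema: the field names and the `ready_for_clinical_use` gate are our own illustrative choices derived from the checklist questions, not a prescribed format.

```python
from dataclasses import dataclass, field

# Hypothetical minimal schema for the unit-level AI-system register the
# checklist calls for; field names are illustrative, derived from the
# checklist questions (purpose, provider, risk class, CE marking, DPIA).

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    provider: str
    ai_act_risk: str          # "unacceptable" | "high" | "limited" | "minimal"
    ce_marked: bool
    dpia_completed: bool
    open_issues: list[str] = field(default_factory=list)

    def ready_for_clinical_use(self) -> bool:
        """Crude gate: high-risk systems need CE marking, a DPIA, and no open issues."""
        if self.ai_act_risk == "unacceptable":
            return False
        if self.ai_act_risk == "high":
            return self.ce_marked and self.dpia_completed and not self.open_issues
        return not self.open_issues

record = AISystemRecord(
    name="Hypotension prediction tool",
    purpose="early warning of intraoperative hypotension",
    provider="ExampleVendor",
    ai_act_risk="high",
    ce_marked=True,
    dpia_completed=False,
    open_issues=["post-market surveillance plan missing"],
)
print(record.ready_for_clinical_use())  # False until the DPIA and open issues are closed
```

A register of such records gives the governance section of the checklist a concrete artefact to audit: each open issue maps back to an unanswered checklist question.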
Future perspectives
The introduction of an AI policy represents a fundamental step towards ensuring the responsible, lawful, and safe use of AI technologies in healthcare. The checklist-based methodology proposed here enables the identification of the key areas that an AI policy should address, adopting an operational approach and enabling adaptation to rapid technological and regulatory changes. Future developments will include regular updates of the checklist to reflect regulatory developments and implementation feedback, the enhancement of staff training and AI literacy, and the incorporation of monitoring systems to evaluate AI’s impact on care quality and patient safety [11]. Given its structured and operational design, the checklist may be further developed and incorporated into the AI policy to support the assessment of AI systems.
Acknowledgements
Not applicable.
Clinical trial number
Not applicable.
Authors’ contributions
Elena Bignami: project administration, conceptualization, visualization, supervision, validation, resources, writing—original draft, writing—review and editing. Luigino Jalale Darhour: visualization, writing—original draft, writing—review and editing. Gabriele Franco: visualization, supervision, validation, writing—original draft, writing—review and editing. Matteo Guarnieri: writing—original draft, writing—review and editing. Valentina Bellini: conceptualization, visualization, supervision, validation, writing—original draft, writing—review and editing. All authors have read and approved the final manuscript and agree to be accountable for all aspects of the work.
Funding
None.
Data availability
No datasets were generated or analyzed during the current study.
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Footnotes
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Bignami E, Darhour LJ, Bellini V (2024) Artificial intelligence in extended perioperative medicine. Trends Anaesth Crit Care 57:101376. 10.1016/j.tacc.2024.101376
- 2. World Health Organization (2021) Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization. Available at: https://www.who.int/publications/i/item/9789240029200
- 3. Regulation (EU) 2024/1689 of the European Parliament and of the Council (2024) Laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Official Journal of the European Union L, 12.7.2024. Available at: http://data.europa.eu/eli/reg/2024/1689/oj; with commentary by: Pehlivan CN, Forgó N, Valcke P (eds) (2024) The EU Artificial Intelligence (AI) Act: a commentary. Wolters Kluwer. ISBN 9789403532271. Available at: https://law-store.wolterskluwer.com/s/product/the-eu-artificial-intelligence-ai-act-a-commentary/01tPg000007gkK9IAI
- 4. Li DM, Parikh S, Costa A (2025) A critical look into artificial intelligence and healthcare disparities. Front Artif Intell 8:1545869. 10.3389/frai.2025.1545869
- 5. Adhikari S, Ahmed I, Bajracharya D et al (2025) Transforming healthcare through just, equitable and quality driven artificial intelligence solutions in South Asia. NPJ Digit Med 8:139. 10.1038/s41746-025-01534-0
- 6. Kewalramani D, Loftus TJ, Mayol J, Narayan M (2024) Artificial intelligence in surgery: a global balancing act. Br J Surg 111(3):znae062. 10.1093/bjs/znae062
- 7. Car J et al (2025) The digital health competencies in medical education framework: an international consensus statement based on a Delphi study. JAMA Netw Open 8(1):e2453131. 10.1001/jamanetworkopen.2024.53131
- 8. Ng FYC, McInnes MD, Lo K et al (2023) Artificial intelligence education: an evidence-based medicine approach for consumers, translators, and developers. Cell Rep Med 4(10):101230. 10.1016/j.xcrm.2023.101230
- 9. Car J, Topol EJ (2025) Advocating for a master of digital health degree. JAMA 333(9):753–754. 10.1001/jama.2024.27365
- 10. Bignami E, Darhour LJ, Buhre W, Cecconi M, Bellini V (2025) Artificial intelligence in healthcare: tailoring education to meet EU AI-act standards. Health Policy Technol 14(6):101078. 10.1016/j.hlpt.2025.101078
- 11. G7 Health Working Group (2024) Policy brief on artificial intelligence: opportunities and challenges for the health sector. Available at: https://www.g7italy.it/wp-content/uploads/G7-Policy-brief-on-Artificial-Intelligence.pdf
