Antimicrobial Stewardship & Healthcare Epidemiology (ASHE)
. 2025 Nov 25;5(1):e317. doi: 10.1017/ash.2025.10191

Advancing infection prevention and control through artificial intelligence: a scoping review of applications, barriers, and a decision-support checklist

Silvana Gastaldi 1, Ermira Tartari 2, Giovanni Satta 3, Benedetta Allegranzi 4
PMCID: PMC12722576  PMID: 41446414

Abstract

Objective:

To examine how artificial intelligence (AI) has been applied to infection prevention and control in healthcare, identify barriers and risks affecting implementation, and develop a structured checklist to support safe adoption.

Design:

Scoping review conducted in line with Joanna Briggs Institute methodology and reported according to PRISMA-ScR.

Methods:

PubMed, Scopus, and Web of Science were searched for primary studies (2014–2024) describing real-world AI applications for IPC. Studies reporting implementation experiences, outcomes, or risks were included. Data on study design, AI type, IPC function, integration level, barriers, and outcomes were extracted and synthesized thematically to derive a 41-item decision-support checklist.

Results:

Of 2,143 records screened, 100 studies met inclusion. Most were published since 2022, with the United States and China leading output. Machine learning dominated (75%), mainly for predictive analytics (53%), HAI detection (13%), and hand hygiene monitoring (13%). Only 15% of tools were integrated into existing digital infrastructures. Barriers centred on data quality (45%), combined technical and data-related issues (16%), and economic/technical constraints (16%). Reported risks clustered around operational failures (35%), technical errors (33%), and data security (12%). Evidence was heavily skewed toward high-income countries, with limited prospective validation or implementation science.

Conclusions:

AI offers clear promise for IPC, particularly in early detection and compliance monitoring, but its translation into practice remains constrained by data fragmentation, limited integration, and uneven readiness across settings. Our evidence-informed checklist provides IPC teams with a structured tool to assess feasibility, governance, and resource needs before adoption, supporting safer and sustainable innovation.

Introduction

Healthcare-associated infections (HAIs) remain a major threat to patient safety and quality of care worldwide (WHO, 2022). 1 Conventional infection prevention and control (IPC) measures, such as active surveillance, outbreak investigations, and transmission-based precautions, are essential but often reactive and resource-intensive. 1

Artificial intelligence (AI) has emerged as a promising tool to enhance IPC by linking diverse data sources, including laboratory results, resistance profiles, patient movement, and compliance behaviors. 2,3,4 Applications include predictive analytics, infection detection, hand hygiene monitoring, and compliance auditing. While evidence demonstrates technical feasibility, translation into everyday practice remains limited due to fragmented data, poor integration, and organizational or ethical concerns. 5,6

This review synthesizes evidence on real-world AI applications for IPC, identifies barriers and risks, and develops an evidence-informed readiness checklist to support an easier and safer adoption. The checklist aligns empirical findings with international governance frameworks, such as the WHO ethics guidance on AI in health 6 and the EU Artificial Intelligence Act. 7

Methods

This review followed the Joanna Briggs Institute (JBI) methodology for scoping reviews 8 and is reported in line with PRISMA-ScR guidelines. 9 We addressed two questions: (1) What are the documented applications, barriers, and real-world outcomes of AI in IPC? and (2) How can these findings inform a structured checklist for feasibility, risk, and organizational readiness?

Eligible studies were primary research (prospective cohorts, retrospective analyses, pilots, multicenter trials) describing AI applications in healthcare IPC with practical outcomes (eg, effectiveness, feasibility, workflow integration, risks). We excluded theoretical models, algorithm-only studies without real-world application, and non-peer-reviewed material such as editorials or conference abstracts. Digital health tools or Internet of Things (IoT) solutions were included only if they incorporated an AI component relevant to IPC functions. Searches covered PubMed, Scopus, and Web of Science (period Jan 2014–Dec 2024; last search Feb 2025) using a Population–Concept–Context framework. 8 No language restrictions were applied. Search terms are detailed in Supplementary Appendix A.

Records were screened in the Rayyan tool 10 by two reviewers (SG, ET) with adjudication by a third (GS). The PRISMA flow diagram (Figure 1) summarizes the selection. Data were extracted into a piloted template and charted in Excel. Variables included study characteristics, AI type, IPC application, implementation features, outcomes, barriers, and risks. Two reviewers (SG, ET) cross-validated entries.

Figure 1.

PRISMA 2020 flow diagram for the scoping review.

Data synthesis combined descriptive and thematic analysis. Descriptive synthesis grouped studies by AI type and IPC function; thematic synthesis identified implementation barriers, risks, and facilitators. Building on this analysis, we developed a 41-item readiness checklist, mapped to six domains: governance and policy, data quality and interoperability, technical and infrastructure, human and workflow, economic and resource, and risk and compliance. The full data set, including structured classifications by AI method, IPC focus, implementation features, and risks, is available in Supplementary Appendix B. Each study in the following tables retains the same numeric identifier as in Appendix B, allowing readers to cross-reference study characteristics with the full citation.

Results

Out of 2,143 identified records, 382 duplicates were removed. Of the remaining 1,761 titles and abstracts screened, 272 full-text articles were reviewed. A total of 100 studies met all inclusion criteria and were included in the final synthesis.

Of the 100 studies included in this review, only one (1%) was published before 2017. In contrast, 31% were published between 2017 and 2021, while more than two-thirds (68%) appeared between 2022 and 2024, indicating an acceleration in research activity in recent years.

Geographically, the evidence was mostly generated in high-income settings. The United States accounted for 35% of studies, primarily focused on predictive analytics. China contributed 17%, often exploring deep-learning approaches. European countries represented 19%, with emphasis on workflow integration and hospital-acquired infection (HAI) surveillance. The Asia-Pacific region outside China added another 14%, and Latin America contributed 5%. A small number of studies (6%) originated from sub-Saharan Africa and the Middle East, reflecting emerging research capacity but limited scale.

Applying the four-tier scheme (Experimental, Interventional/Implementation, Retrospective, and Validation) to the 100 eligible papers (Table 1), almost half of the evidence base remained anchored in retrospective work (49%), which relied on existing health or laboratory data to build or test models.

Table 1.

Distribution of standardized study-design classifications

Design standardized classification Description % References
Experimental Algorithm development or simulation; building or stress-testing a model, usually offline on historical data. 30% 15,17,18–21,27,29–32,34,36,40,41,44,46,47,51,60,62,65,66,67,69,96,100,104,106,110
Interventional/implementation Real-world pilots, deployment, or prospective testing, including cross-sectional designs. 17% 11,14,49,55,56,63,68,80,90,92,94,102,103,105,107,108,109
Retrospective Non-interventional studies using historical patient data; performance assessed against gold standards. 49% 12,13,16,22,23–26,28,33,35,37,38,39,42,43,45,48,50,52–54,57–59,61,64,70–73,75,78,79,81–85,87,88,89,91,93,95,97–99,101
Validation studies Model development with internal and external validation. 4% 74,76,77,86

Experimental designs accounted for 30%, focusing on technical feasibility in controlled settings. Far fewer studies examined real-world use: 17% reported interventional or implementation pilots, while only 4% carried out independent validation before wider deployment.

Types of AI technology

A full breakdown of AI techniques and corresponding references is available in Table 2 and detailed in Supplementary Appendix B.

Table 2.

Distribution of AI technology categories

AI technology (%) References
Machine learning 75 12–16,19,20,22–30,33,34,35,37–40,42–48,50,54,55,60,61,65,67,68,70,71,73–92,94–103,106,108,110
Deep learning 21 17,18,21,32,36,44,51–53,62–64,66,69,72,93,96,104,105,107,109
Generative AI 4 11,31,41,49

Machine learning (ML) was the dominant technique, used in 75% of the 100 studies. These systems often relied on supervised algorithms such as logistic regression, random forests, or gradient boosting to support tasks like infection risk prediction and HAI surveillance based on EHRs or sensor data. For more details, see the next section on IPC application areas.
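To illustrate the kind of supervised risk model these studies describe, the following minimal Python sketch trains a plain logistic regression on synthetic data. The feature names (device-exposure days, abnormal laboratory flag, prior antibiotics) and all values are hypothetical assumptions for illustration only; they are not taken from any included study.

```python
# Illustrative sketch: supervised patient-level infection-risk prediction.
# All features and data are hypothetical, not drawn from any reviewed study.
import math

def train_logistic(rows, labels, lr=0.1, epochs=500):
    """Plain-Python logistic regression trained by gradient descent."""
    n = len(rows[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))       # predicted risk
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_risk(w, b, x):
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: [device_days / 10, abnormal_labs, prior_antibiotics]
train_x = [[0.1, 0, 0], [0.2, 0, 1], [0.9, 1, 1], [1.2, 1, 1],
           [0.3, 0, 0], [1.0, 1, 0], [0.2, 1, 0], [1.1, 0, 1]]
train_y = [0, 0, 1, 1, 0, 1, 0, 1]               # 1 = infection occurred

w, b = train_logistic(train_x, train_y)
low = predict_risk(w, b, [0.1, 0, 0])    # short device exposure, clean labs
high = predict_risk(w, b, [1.2, 1, 1])   # long exposure, abnormal labs
print(low < high)  # True: the model ranks the high-exposure patient as riskier
```

Production systems in the reviewed studies used far richer EHR feature sets and libraries such as gradient boosting, but the training loop above captures the core mechanism.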

Deep learning approaches appeared in 21 studies (21%) and included, for example, convolutional neural networks (CNNs) for surface contamination detection, 60 video-based hand hygiene monitoring, 63 surgical site infection (SSI) detection, 96 and early detection of multidrug-resistant infections. 52

Generative AI was used in just four studies (4%). These projects piloted large language models (LLMs) for IPC education, 11 policy summarisation through expert consensus, 31 HAI surveillance, 41 and hand hygiene monitoring. 49

Distribution of IPC applications supported by AI

Among the 100 primary studies, nine distinct IPC application areas were identified (Table 3 and Figure 2).

Table 3.

Distribution of IPC applications supported by AI

IPC application area (%) References
Antimicrobial resistance 1 110
Decision support 2 31,59
Education and training 3 11,44,55
Environmental monitoring 3 26,69,106
HAI detection 13 18,19,36,60,64,76,77,79, 81,86,94,98,105
HAI surveillance 8 25,29,34,41,45,54,72,108
Hand hygiene compliance 13 17,21,32,37,40,47,49,51, 63,66,68,96,109
Outbreak detection 4 15,42,43,101
Predictive analytics 53 12–14,16,20,22–24,26,28,30,33,35,38,39,46,48,50,52,53,56–58,61,62,65,67,70,71,73–75,78,80,82–85,87,88–93,95,97,99,100,102–104,107

Figure 2.

IPC applications distribution.

Predictive-analytics systems accounted for 53% of the studies. These papers typically trained supervised models such as logistic regression, gradient boosting, or recurrent networks on electronic-health-record data to forecast patient-level risks such as central-line–associated bacteremia, 39,56 multidrug resistance (MDR), 35,52 or SSI. 12,38,74,107 In some cases, ward-level early-warning scores were generated by combining laboratory, vital-sign, and admission data streams. 56

Hand hygiene compliance monitoring was reported in 13 studies (13%), with applications ranging from video analytics using CNNs and badge-based proximity sensors 17,96 to alcohol-dispenser sensors linked to compliance dashboards 109 and optical microscopy with ML classification for hand contamination. 40

Thirteen studies (13%) focused on HAI detection, applying classification algorithms to microbiology, imaging, or pharmacy records to flag active infections such as SSI, 36,64,76,77,79 central-line–associated bloodstream infection, 94 or urinary tract infections (UTIs). 86 Many of these tools operated continuously in clinical workflows. 60,98

Eight studies (8%) targeted HAI surveillance by aggregating time-series data for incidence-trend analysis. 25,54,108 Automated case-finding rules were often calibrated against conventional manual surveillance, 45 with several studies applying natural language processing (NLP) 29,34,72,108 or generative AI. 41 Outbreak-detection models were described in four studies (4%). Approaches ranged from supervised clustering of ward alerts 101 to whole-genome-sequencing pipelines that feed resistance-related single-nucleotide-polymorphism data into transmission-mapping algorithms. 42,43 One study integrated environmental IoT sensors to identify spatiotemporal hotspots. 15

Less common themes included education and training platforms (3%), where generative or retrieval-augmented language models produced scenario-based learning modules 11 or supported PPE monitoring through computer vision built on human-AI collaboration; 44,55 environmental monitoring (3%), employing image recognition or air-quality sensors for surface-cleanliness verification; 27,69,106 and AMR prediction (1%), including the use of plasmonic nanosensors. 110 Finally, two studies (2%) described decision-support dashboards, evaluating ChatGPT’s reliability in agreeing with expert statements 31 or risk stratification for UTIs. 59

Barrier profile

Four base barrier categories were coded across the 100 primary studies, clustering into nine composite categories (Table 4 and Figure 3).

Table 4.

Barrier categories

Barrier category (%) References
Data-related 45 14,18,20,22–24,26,28,31,41,48,53,56,57,61–65,70–75,77,79,80,84,86,87,88–95,97–100,103,107,108
Technical, data-related 16 11,16,17,19,29,30,33,50, 54,58,67,74,76,81,83,101
Technical, economic 16 32–35,37–40,42–45,55,68,69,102,104,110
Technical 9 12,15,25,47,59,66,78,82, 106
Human factors, data-related 5 13,36,46,51,52
Technical, human factors 3 21,44,99
Economic, data-related 3 27,85,96
Economic 2 105,109
Economic, human factors 1 60

Figure 3.

Barrier categories distribution.

Data-related barriers were the most common, reported in 45 studies (45%), with issues such as incomplete records, non-standard terminologies, and delayed feeds. 22,31,77,78 Barriers combining technical and economic elements (16%) were also frequent. Examples included high setup and maintenance costs for AI surveillance tools, 35 computational demands and complexity of real-time systems, 38 and substantial infrastructure requirements for whole-genome sequencing pipelines. 42,43 A further 16% of studies reported barriers blending integration challenges with data shortcomings. Purely technical problems, including outages, high computational requirements, and sensor failures, appeared in 9%. Human factors coupled with data gaps, such as low trust or alert fatigue, were noted in 5%. Less frequent were mixed technical-human (3%), economic-data (3%), stand-alone economic (2%), and economic-human (1%) barriers.

Risk taxonomy

Four base risk categories were coded across the 100 primary studies, yielding eight composite categories (Table 5 and Figure 4). Risk reporting was highly granular, and many papers logged more than one category.

Table 5.

Risk categories

Risk category Percentage (%) References
Operational risks 35 11,12,16–18,23,25,26,28,30,31,33,35,38,42,44,48,54–59,63,66,76,80–82,84,85,90,97,99,107
Technical risks 33 13,14,20–22,24,27,34,36,39,40,45,50,61,62,64,65,67,68,71–73,75,77,78,83,86,88,89,91,92,101,108
Data security 12 15,37,60,69,70,74,79,96, 98,100,103,109
Technical risks, operational risks 9 41,43,53,87,93,94,104,105, 106
Human risks 4 46,95,102,110
Operational risks, data security 4 19,47,51,52
Technical risks, data security 2 29,32
Human risks, data security 1 49

Figure 4.

Risk categories distribution.

Operational issues were the most frequently documented risk type, recorded in 35 of 100 studies (35%). These papers described, for example, over-reliance on predictions without clinical confirmation 12,16 or potential misinterpretation if model outputs are not validated. 18

Technical failures followed closely, appearing in 33 studies (33%). Examples included the risk of model overfitting 71,73 or the risk of under-detection in systems with incomplete digital documentation. 77,86

Data-security concerns were reported in 12 studies (12%), typically focusing on patient or staff privacy regulations. 15,37,109

Combined technical-and-operational risks featured in nine studies (9%), while paired operational-and-data-security risks were noted in four studies (4%).

Pure human-factor risks, such as over-reliance on AI outputs 110 or diminished vigilance during downtimes, 102 were documented in four studies (4%). Two papers (2%) combined technical and data-security risks, 29,32 and one study (1%) reported a mixed human-and-data-security category. 49

Digital-integration status

Of the 100 studies reviewed, only 15 papers (15%) described AI tools already integrated into a broader digital layer of hospital or public health information systems. The main integration pathways were:

  • Automated surveillance shells, documented in 4 studies (4%), which connected AI analytics to existing IPC dashboards. 77,78,86,98

  • IoT data streams, featured in 2 studies (2%), linking badge or environmental-sensor feeds to analytic engines. 15,60

  • Whole-genome sequencing pipelines, reported in 2 studies (2%), integrating AI outbreak detection into genomic workflows. 42,43

  • EHR-embedded models, described in 2 studies (2%), integrating real-time AI tools directly into electronic health record systems to enable dynamic risk prediction and decision support within clinical workflows. 58,79

Furthermore, single studies (each 1%) detailed integration via mHealth apps, 13 computer-vision systems, 44 computational-fluid-dynamics models, 106 wearable devices, 109 and plasmonic nanosensors. 110

The remaining 85 studies (85%) reported prototypes without wider digital connectivity.

Checklist refinement and practical use

To help IPC teams assess readiness for adopting AI tools, we translated the findings of this review into a structured, evidence-informed checklist (Table 6). The checklist contains 41 items grouped into six domains reflecting the most common barrier clusters identified across the literature: Governance and Policy, Data Quality and Interoperability, Technical and Infrastructure, Human and Workflow, Risk and Compliance, and Economic and Resource.

Table 6.

Structured readiness checklist for AI implementation in IPC

Domain Item Priority Maturity (0–3)
Governance and policy Is a multidisciplinary AI/IPC steering committee for oversight and strategic governance in place? Medium
Has the AI tool been assessed against relevant AI-as-a-medical-device regulations and obtained the necessary clearance or documented exemption? High
Is the AI tool aligned with the organization’s IPC goals or performance indicators? High
Has patient/public representation been included in oversight of AI use in IPC? Medium
Data quality and interoperability Are large, IPC-relevant, high-quality labeled datasets available for at least 12 months? High
Are key data elements (e.g., admission date, microbiology results, device exposure) ≥95% complete for the last 12 months? High
Do source systems share a standard vocabulary (e.g. International Classification of Diseases ICD-10) or have a mapped crosswalk? High
Is there a person in charge (data-governance steward) of making sure the data is accurate and managed properly? Medium
Is data refreshed at least hourly (or near-real-time if required)? High
Data quality and interoperability Does the institution provide HL7-FHIR (Health Level Seven - Fast Healthcare Interoperability Resources) or similar APIs (Application Programming Interfaces) for two-way data exchange? High
Are compute, network and storage resources available and budgeted for peak loads? High
Is a rollback plan in place for model/interface updates? Medium
Can anonymized data or AI outputs be shared securely with external IPC/regional surveillance networks? Medium
Technical and Infrastructure (implementation and organizational readiness) Has the model reached acceptable discrimination (e.g. AUC > 0.80) in external validation? High
Does the AI tool provide explanations (like SHapley Additive exPlanations - SHAP values or saliency maps) that help clinicians understand its decisions? Medium
Vendor maturity: is the supplier able to provide implementation support and updates for >5 years? Medium
Is a multidisciplinary implementation team for operational delivery (IPC, IT, data science, clinicians) in place? High
Is end-user training and downtime protocol defined? Medium
Technical and infrastructure Is roll-out staged (pilot - limited - full scale)? Medium
Are roles/responsibilities for errors clearly assigned (vendor vs hospital)? High
Are training, SOPs and communication plans defined? Medium
Human and workflow fit Are adequate resources, staff expertise, and leadership support in place to adopt the AI solution? High
Does the tool explain each alert or prediction in plain language? Medium
Has expected alert volume been stress-tested? Medium
Will outputs display inside dashboards staff already use? High
Are standard operating procedures (SOPs), training materials, and staff communication plans defined and validated? Medium
Is a change-management strategy in place (phased roll-out, communication, adoption KPIs)? High
Risk and compliance Has the AI tool been classified under EU AI Act risk levels or FDA medical device categories? High
Does the deployment comply with local data-protection laws and encrypt data in transit and at rest? High
Is a contingency plan ready for system outages (manual override)? High
Is continuous performance monitoring with drift alerts in place? High
Is there evidence of harm mitigation strategies (alerts, human-in-the-loop)? High
Does the system implement de-identification (removal of personal identifiers), encryption (data protection through secure coding), and IEC 81001-5-1 cyber-controls (International Electrotechnical Commission standard for cybersecurity in health software)? High
Is a documented model-monitoring and re-validation protocol in place (e.g., drift-detection thresholds, scheduled performance audits, retraining triggers, rollback procedures)? High
Risk and compliance Have dedicated cybersecurity safeguards been implemented and verified for the AI system (network segmentation, encryption in transit/at rest, penetration testing, Operating System OS-patch policy)? High
Were datasets checked for representativeness (e.g. minority groups)? High
Are model design, training data, and decision logic documented in a way that allows independent audit? High
Economic and resource Are funding streams secured to support maintenance, updates, and scale-up to new IPC use-cases over time? High
Are IT, compute, and staffing needs adequately resourced within the allocated budget? High
Has a total-cost-of-ownership (TCO) analysis been completed, covering licensing, cloud services, hardware upgrades, model retraining, monitoring, and technical support across the full deployment lifecycle? Medium
Has a postimplementation evaluation plan been defined (clinical impact, cost-effectiveness, unintended consequences)? High

Each item was derived inductively from recurrent challenges and risks documented in the 100 included studies, ensuring that the checklist reflects real-world implementation experiences rather than theoretical considerations. To support practical use, two additional ratings were applied: maturity and priority.

By combining these two layers, IPC teams can gauge both how close their systems are to readiness and where to focus resources first. The maturity-priority framework thus transforms the checklist into a practical roadmap for planning, implementation, and risk mitigation.

The development of these scales followed a four-step, evidence-informed process:

  1. Thematic synthesis: reviewers inductively coded implementation barriers and risks from the included studies and organized them into the six checklist domains.

  2. Item drafting: recurring, actionable challenges within each domain were translated into checklist items.

  3. Maturity anchors: the 4-level maturity scale was defined to reflect observable progression from non-existent capacity to embedded practice. This structure aligns with established digital health maturity concepts such as the Healthcare Information and Management Systems Society’s (HIMSS) Electronic Medical Record Adoption Model (EMRAM) 111 and with implementation science frameworks on organizational readiness such as the Consolidated Framework for Implementation Research (CFIR). 112

  4. Priority criteria: item priority was determined using risk-management principles (safety and regulatory criticality, dependency and sequencing, feasibility and resource burden), aligned with two normative governance frameworks central to this review—the WHO guidance on ethics and governance of AI in health 6 and the EU Artificial Intelligence Act, 7 which classifies AI applications by risk. This ensures that high-priority items correspond to safeguards required for high-risk use cases.

The maturity scale was designed as a four-point ordinal measure:

  • 0—Absent: no sign that an AI solution has been considered.

  • 1—Emergent: early steps are present (e.g., draft policy, small pilot, budget line), but the system is not yet operational.

  • 2—Functional: the system is operational but limited in scope or reliability.

  • 3—Mature: the element is fully embedded, routinely monitored, and continuously improved.

This progression mirrors patterns observed in the literature, moving from complete absence of capacity (eg, no data stewardship), through pilots and partial functionality, to fully institutionalized, monitored systems. A neutral midpoint was deliberately excluded to encourage decisive appraisal of whether an element is operational or still requires attention.

The priority scale distinguishes between:

  • High Priority items, which are essential for safe and effective AI use, such as complete data, model accuracy, cybersecurity, and legal compliance.

  • Medium Priority items, which facilitate adoption and sustainability over time, including vendor support, return-on-investment data, or stress-testing of alert volumes.
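To make the combined maturity-priority appraisal concrete, the following hypothetical Python sketch ranks checklist items by urgency and summarises mean maturity per domain. The item texts are abbreviated, and the numeric weights are our own illustrative assumption, not part of the published checklist.

```python
# Hypothetical scoring sketch for the maturity-priority framework.
# Weights and sample ratings are illustrative assumptions only.

CHECKLIST = [
    {"domain": "Data quality", "item": "Key data elements >=95% complete",
     "priority": "High", "maturity": 1},
    {"domain": "Data quality", "item": "Standard vocabulary (e.g. ICD-10)",
     "priority": "High", "maturity": 3},
    {"domain": "Governance", "item": "AI/IPC steering committee in place",
     "priority": "Medium", "maturity": 0},
    {"domain": "Risk", "item": "Contingency plan for outages",
     "priority": "High", "maturity": 2},
]

def readiness_gaps(items):
    """Rank items by urgency: low maturity on high-priority items first."""
    weight = {"High": 2, "Medium": 1}
    return sorted(items,
                  key=lambda it: (3 - it["maturity"]) * weight[it["priority"]],
                  reverse=True)

def domain_summary(items):
    """Mean maturity per domain on the 0-3 scale."""
    out = {}
    for it in items:
        out.setdefault(it["domain"], []).append(it["maturity"])
    return {d: sum(v) / len(v) for d, v in out.items()}

gaps = readiness_gaps(CHECKLIST)
print(gaps[0]["item"])           # most urgent gap: incomplete key data
print(domain_summary(CHECKLIST))
```

In this toy example the incomplete high-priority data item outranks the absent (but medium-priority) steering committee, mirroring the intended use of the two layers: fix safety-critical gaps first, then sustainability items.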

Discussion

This review maps how AI has been applied in IPC across 100 studies and distills common barriers and risks into a readiness checklist. Applications ranged from prediction to surveillance and compliance monitoring, but most systems remain at early stages. Success depends not only on algorithmic accuracy but on the conditions enabling reliable use: high-quality data, seamless integration, and organizational readiness. Three key messages emerge: data quality drives performance; integration is the main bottleneck; and barriers span technical, economic, and organizational domains.

Data quality drives AI performance

According to our analysis, data completeness and quality influence AI system performance and applicability. Across nearly every IPC application area, the most effective systems were built on complete, high-fidelity data. When microbiology codes, admission timestamps, and vital signs were reliably captured, predictive models for HAI prevention, such as surgical site infection (SSI) and Clostridioides difficile risk prediction, routinely achieved high accuracy. 12,13,78 Conversely, nearly half of all studies reported stalled or degraded performance due to missing fields, non-standard coding, or delayed data streams. 77,100 From a practical standpoint, this suggests that data readiness must come before model adoption. Installing even the most sophisticated AI algorithm on top of fragmented or inconsistent inputs is unlikely to yield clinical value and may even introduce misleading noise.
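A data-readiness audit of this kind can be automated before any model is adopted. The sketch below, aligned with the checklist item requiring key data elements to be ≥95% complete, checks field completeness over a set of records; the field names and sample records are hypothetical.

```python
# Minimal sketch of a pre-adoption data-readiness check.
# Field names and records are hypothetical examples.

KEY_FIELDS = ["admission_date", "microbiology_result", "device_exposure"]

def completeness(records, fields, threshold=0.95):
    """Return, per field, the completeness ratio and whether it meets the threshold."""
    report = {}
    for f in fields:
        filled = sum(1 for r in records if r.get(f) not in (None, ""))
        ratio = filled / len(records)
        report[f] = (round(ratio, 2), ratio >= threshold)
    return report

records = [
    {"admission_date": "2024-01-02", "microbiology_result": "E. coli", "device_exposure": "CVC"},
    {"admission_date": "2024-01-03", "microbiology_result": "", "device_exposure": "none"},
    {"admission_date": "2024-01-05", "microbiology_result": "neg", "device_exposure": None},
    {"admission_date": "2024-01-06", "microbiology_result": "neg", "device_exposure": "CVC"},
]

report = completeness(records, KEY_FIELDS)
# admission_date passes (4/4); the other two fields fall below 95% (3/4)
print(report)
```

Running such a check against the last 12 months of feeds would flag, in advance, exactly the missing-field problems that nearly half of the reviewed studies reported after deployment.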

Digital integration is the real bottleneck

The study design distribution highlighted a pronounced translation gap: while experimental (30%) and retrospective investigations (49%) dominate, fewer than one-fifth of studies evaluated real-world implementation (17%), and rigorous external validation remained exceptional (4%). External validation or device-level verification is the step that exposes hidden overfitting, data-mapping errors, and workflow frictions before large-scale rollout. The scarcity of such studies highlights a critical translational gap: most AI-for-IPC tools remain unproven outside their development sandbox. Moving the field beyond proof of concept will therefore require prospective, implementation-science designs that capture workflow integration, user acceptance, and downstream infection-control impact. The vast majority of tools remained confined to isolated research platforms, often lacking connectivity with EHR interfaces, IPC dashboards, or real-time data streams from environmental or wearable sensors. However, by curating case definitions into formats interpretable by AI agents, even complex surveillance tasks such as the detection of HAIs can be substantially automated. 25,29,34,41,45,54,72,108

Such frameworks could also be extended to support automated case-to-definition matching in outbreak investigations, reportable disease tracking, and clinical decision-making, particularly when embedded directly within electronic health records (EHR). 15,42,43,101

Where digital integration did occur, for example, monitoring of PPE adherence or linking hand hygiene systems to IPC dashboards, teams were more likely to act on the outputs, leading to improved compliance and earlier intervention. 44,109

Failures were often linked to technical incompatibilities (eg, software version mismatches after EHR upgrades), infrastructure gaps such as the absence of real-time IoT data pipelines, and high setup costs, all of which underline the need for integration with existing systems to support smooth workflows. 66,67,69 This highlights a key insight: accuracy is not enough; usefulness depends on connectivity.

Barriers compound across domains

Barriers rarely occur in isolation. Data gaps, fragile digital infrastructure, budget constraints, and user skepticism often interact, reinforcing one another. For instance, dependency on EHR structure, variable data quality, and high computational costs contributed to model drift, which in turn produced irrelevant alerts that clinicians learned to ignore. 35,85

Human factors, like alert fatigue or distrust, became more acute when paired with poor feedback loops or missing context. 102 A simple AI model that functions transparently and within a familiar dashboard may be more sustainable than a black-box tool that overloads staff with unexplained alerts.

Risk is not just technical; it is operational

The risk landscape described across these studies suggests that operational vulnerabilities are at least as common, and potentially more disruptive, than algorithmic failures. Some studies reported challenges with the explainability of ML models and their integration into daily routines, owing to system complexity 59 or misaligned response protocols. 31 These risks affect patient care directly and erode staff trust.

Operational and technical risks frequently co-occurred, particularly in complex systems depending on high-quality data and computational power. 59 Although relatively few papers discussed it (12%), data security, including patient privacy, remains critical, especially as cloud-based IPC solutions, often combined with wireless technologies, scale across institutions.

Maturity levels vary across IPC application areas

Some IPC application areas appear closer to routine readiness. Predictive analytics (53%) and hand hygiene monitoring (13%) stand out: models often performed well, and compliance tools that provided immediate feedback were linked to tangible improvements. 12,13,17 One pilot study demonstrated that a human-AI collaboration system accurately monitored PPE donning and doffing procedures in a simulated setting, suggesting that such systems could serve as a substitute or enhancement to in-person observers. 44,55 These systems are beginning to move beyond pilot status in select settings. One study reported that despite some usability concerns, particularly related to AI system design and EHR integration, most users expressed overall satisfaction with AI-based hand hygiene monitoring. 68 Importantly, the system was perceived to reduce HAIs and positively influence provider well-being, with younger and more experienced staff reporting greater satisfaction with AI use in direct patient care. 68

By contrast, AMR prediction through WGS 42,43 and sensor-based environmental monitoring 27,69,106 remain largely exploratory. Their value is conceptually clear, but technical complexity and cost remain barriers to widespread use.

For decision-makers, this suggests a tiered approach: focus first on AI tools with strong implementation evidence and realistic integration paths, while continuing to evaluate and pilot newer innovations with longer lead times.

AI tools already match or surpass conventional IPC approaches in prediction and compliance monitoring (eg, hand hygiene or PPE compliance) when high-quality data and robust integration are in place. However, data fragmentation, fragile interfaces, and unfunded maintenance obligations consistently limit scale-up. We also illustrate how these findings informed a readiness checklist that offers IPC leaders a concise, evidence-based instrument to assess feasibility, allocate resources, and mitigate risk before AI deployment.

Future directions

Few studies addressed explainability, equity, or sustainability. Post hoc explanation methods (eg, SHAP) were rare, 95 and almost none reported energy or carbon footprints, although the WashRing project 109 showed that efficiency gains are possible. Future research should combine performance metrics with explainability, sustainability reporting, and validation in diverse, low-resource settings.
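
To make the idea of post hoc explanation concrete, the sketch below uses permutation importance, a simpler model-agnostic method than the SHAP library named above, on synthetic data with a toy classifier standing in for any fitted model (all names and features here are hypothetical): shuffling one feature at a time reveals how much each contributes to performance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic risk data: feature 0 drives the outcome, feature 1 is noise.
n = 1000
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)

# A fixed decision rule standing in for any fitted classifier
# (it happens to use only feature 0).
def predict(X):
    return (X[:, 0] > 0).astype(int)

def accuracy(X, y):
    return np.mean(predict(X) == y)

baseline = accuracy(X, y)

# Permutation importance: shuffle one feature at a time and measure
# the resulting drop in accuracy; informative features cause large drops.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(baseline - accuracy(Xp, y))

print({f"feature_{j}": round(v, 3) for j, v in enumerate(importances)})
```

Outputs like these, attached to each alert, are one way the "unexplained alerts" problem noted earlier could be mitigated in practice.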

Limitations

Most included studies were single-center pilots from high-income countries (85%), limiting generalizability. Evidence on cost-effectiveness and patient outcomes was scarce. The proposed checklist is an initial synthesis that requires prospective validation and iterative refinement through consensus methods such as Delphi.

Conclusion

Artificial intelligence offers real opportunities to strengthen infection prevention and control, from improving prediction and surveillance to supporting compliance monitoring. Yet most applications remain at an early stage, with progress slowed by gaps in data quality, weak system integration, and uneven readiness across healthcare settings. The evidence-informed checklist developed in this review is intended as a practical guide for IPC teams, helping them assess maturity, set priorities, and align implementation with international governance frameworks. 6,7 Moving AI from promising pilots to routine practice will require reliable data, robust integration, and above all, clear accountability to ensure safe and effective use.

Supporting information

Gastaldi et al. supplementary material 1. DOI: 10.1017/ash.2025.10191.sm001

Gastaldi et al. supplementary material 2. DOI: 10.1017/ash.2025.10191.sm002

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/ash.2025.10191.

Data availability statement

All data supporting the findings of this review are contained within the manuscript and its supplementary materials.

Acknowledgments

This review was made possible through the voluntary efforts and collaborative dedication of a multidisciplinary team committed to advancing infection prevention and control. We thank all contributing reviewers and technical advisors who supported the identification, screening, and synthesis of evidence. Special thanks to those who participated in the refinement of the readiness checklist and provided critical feedback on its practical relevance and usability.

Author contribution

SG conceptualized the study, coordinated the review process, led the data extraction and analysis and wrote the first draft. GS contributed to the development of the search strategy, supervised the evidence screening process, and supported synthesis and interpretation of findings. ET provided methodological guidance and critically revised the manuscript for intellectual content. BA offered strategic oversight, validated key findings, and provided substantial input to the paper draft. All authors contributed to the drafting and revision of the manuscript, approved the final version, and take responsibility for the accuracy and integrity of the work.

Financial support

No external funding was received for this work, and all contributions were provided on a voluntary, non-commercial basis.

Competing interests

All authors report no conflicts of interest relevant to this article.

Disclaimer

The opinions expressed in this article are those of the authors and do not reflect the official position of WHO or Istituto Superiore di Sanità (ISS). WHO and ISS take no responsibility for the information provided or the views expressed in this article.

References

  • 1. World Health Organization. Global report on infection prevention and control, 2022. License: CC BY-NC-SA 3.0 IGO. https://iris.who.int/handle/10665/354489
  • 2. Ellahham, S. Role of artificial intelligence (AI) in infection control. International Journal of Science and Research (IJSR) 2023; 12:1242–1247. 10.21275/sr21929143532 [DOI] [Google Scholar]
  • 3. Maddox T. M., Rumsfeld J. S., Payne P. R. O. Questions for artificial intelligence in health care. JAMA. 2019;321:31–32. 10.1001/jama.2018.18932. PMID: 30535130. [DOI] [PubMed] [Google Scholar]
  • 4. Hanna J. J., Medford R. J. Navigating the future: machine learnings role in revolutionizing antimicrobial stewardship and infection prevention and control. Curr Opin Infect Dis. 2024;37:290–295. 10.1097/QCO.0000000000001028 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Topol, E. J. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019; 25: 44–56. 10.1038/s41591-018-0300-7 [DOI] [PubMed] [Google Scholar]
  • 6. World Health Organization. Ethics and governance of artificial intelligence for health: guidance on large multi-modal models. License: CC BY-NC-SA 3.0 IGO., 2024. https://iris.who.int/handle/10665/375579
  • 7. European Union. Regulation (EU) 2024/1689 of the European parliament and of the council of 12 July 2024 laying down harmonised rules on artificial intelligence (Artificial intelligence act). Off J Eur Union. 2024;L 1689:1–68. [Google Scholar]
  • 8. Aromataris E., Lockwood C., Porritt K., Pilla B., Jordan Z. (eds.) JBI manual for evidence synthesis. JBI, 2024. https://jbi-global-wiki.refined.site/space/MANUAL
  • 9. Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil H. Scoping reviews. In: JBI Manual for Evidence Synthesis, 2020. 10.46658/JBIMES-24-09. https://synthesismanual.jbi.global [DOI]
  • 10. Ouzzani M., Hammady H., Fedorowicz Z., Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst Rev. 2016;5:210. 10.1186/s13643-016-0384-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Jawanpuria A., Behera A. R., Dash C., Rahman M. H. U. ChatGPT in hospital infection prevention and control – assessing knowledge of an AI model based on a validated questionnaire. Eur J Clin Exp Med. 2024;22:347–352. [Google Scholar]
  • 12. Gutierrez-Naranjo JM et al. A machine learning model to predict surgical site infection after surgery of lower extremity fractures. Int Orthop. 2024; 48:1887–1896. 10.1007/s00264-024-06194-5. [DOI] [PubMed] [Google Scholar]
  • 13. Ke, C. , et al. Prognostics of surgical site infections using dynamic health data. J Biomed Informat. 2017; 65: 22–33. [DOI] [PubMed] [Google Scholar]
  • 14. Zhang, Q. , et al. (2024). Construct validation of machine learning for accurately predicting the risk of postoperative surgical site infection (SSI) following spine surgery. J Hosp Infect: 146, 232–241. [DOI] [PubMed] [Google Scholar]
  • 15. Bhatia M., Sood S. K., Kaur M. Artificial intelligence-inspired comprehensive framework for COVID-19 outbreak control. Artif Intell Med., 2022;127:102288. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16. Liang Q., Zhao Q., Xu X., Zhou Y., Huang M. Early prediction of carbapenem-resistant gram-negative bacterial carriage in intensive care units using machine learning. J Glob Antimicrob Resist. 2022;29:225–231. [DOI] [PubMed] [Google Scholar]
  • 17. Wang, T , Xia, J , Wu, T , Ni, H , Long, E , Li, J-P O , Zhao, L , Chen, R , Wang, R , Xu, Y , Huang, K , Lin, H. Handwashing quality assessment via deep learning: A modeling study for monitoring compliance and standards in hospitals and communities. Intell Med 2. 2022. 152–160. [Google Scholar]
  • 18. Rabhi S., Jakubowicz J., Metzger M.-H. Deep learning versus conventional machine learning for detection of healthcare-associated infections in French clinical narratives. Methods Inf Med. 2018;57. [DOI] [PubMed]
  • 19. Marschollek M., Marquet M., Reinoso Schiller N., et al. RISK PRINCIPE: Development of a risk-stratified infection prevention project integrating surveillance and prediction. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz. 2024;67:685–692. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Nistal-Nuño B. A neural network for prediction of risk of nosocomial infection at intensive care units: a didactic preliminary model. Einstein (São Paulo) 2020;18. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21. Haghpanah M. A., Tale Masouleh M., Kalhor A., Akhavan Sarraf E. A hand rubbing classification model based on image sequence enhanced by feature-based confidence metric. Signal Image Video Process. 2017;17:2499–2509. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Cho, Y , Lee, H K , Kim, J , Yoo, K-B , Choi, J , Lee, Y , Choi, M. Prediction of hospital-acquired influenza using machine learning algorithms: a comparative study. BMC Infect Dis. 2024;24: 466. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Chen, Y , Zhang, Y , Nie, S , Ning, J , Wang, Q , Yuan, H , Wu, H , Li, B , Hu, W , Wu, C. Risk assessment and prediction of nosocomial infections based on surveillance data using machine learning methods. BMC Public Health. 2024;24: 1780. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24. Li J., Yan Z. Machine learning model predicting factors for incisional infection following right hemicolectomy for colon cancer. BMC Surgery. 2024;24:279. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25. Rennert-May, E , Leal, J , MacDonald, M K , Cannon, K , Smith, S , Exner, D , Larios, O E , Bush, K , Chew, D. Validating administrative data to identify complex surgical site infections following cardiac implantable electronic device implantation: a comparison of traditional methods and machine learning. Antimicrob Resist Infect Control. 2022;11: 138. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26. Li S., Zhang Y., Lin Y., Zheng L., Fang K., Wu J. Development and validation of prediction models for nosocomial infection and prognosis in hospitalized patients with cirrhosis. Antimicrob Resist Infect Control. 2024;13:85. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27. Hattori, S , Sekido, R , Leong, I W , Tsutsui, M , Arima, A , Tanaka, M , Yokota, K , Washio, T , Kawai, T , Okochi, M. Machine learning-driven electronic identifications of single pathogenic bacteria. Sci Rep 10. 2020. 15525. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28. Soguero-Ruiz C, Wang F, Jenssen R, et al. Data-driven temporal prediction of surgical site infection. Sci Rep. 2020; 10:62083. [PMC free article] [PubMed] [Google Scholar]
  • 29. Shi J, Liu S, Pruitt LC et al. Using natural language processing to improve EHR structured data-based surgical site infection surveillance. Sci Rep. 2023; 10:2246816. [PMC free article] [PubMed] [Google Scholar]
  • 30. Zakur Y. A., Mirashrafi S. B., Flaih L. R. A comparative study on association rule mining algorithms on the hospital infection control dataset. Baghdad Sci J. 2023.
  • 31. Chester A. N., Mandler S. I. A comparison of chatGPT and expert consensus statements on surgical site infection prevention in high-risk pediatric spine surgery. J Pediatr Orthop. 2025;45:e72–e75. [DOI] [PubMed] [Google Scholar]
  • 32. Shrimali S., Teuscher C. A novel deep learning-, camera-, and sensor-based system for enforcing hand hygiene compliance in healthcare facilities. IEEE Sens J. 2023;23:13659–13670. [Google Scholar]
  • 33. Skube, S J , Hu, Z , Simon, G J , Wick, E C , Arsoniadis, E G , Ko, C Y , Melton, G B. Accelerating Surgical Site Infection Abstraction With a Semi-automated Machine-learning Approach. Ann Surg. 2022;276: 180–185. [DOI] [PubMed] [Google Scholar]
  • 34. Tvardika N., Kergourlay I., Bittar A., Segond F., Darmoni S., Metzger M.-H. Accuracy of using natural language processing methods for identifying healthcare-associated infections. Int J Med Inform. 2018;117:96–102. [DOI] [PubMed] [Google Scholar]
  • 35. Kamruzzaman M., Heavey J., Song A. et al. Improving risk prediction of methicillin-resistant staphylococcus aureus using machine learning methods with network features: retrospective development study. JMIR AI. 2024;3:e48067. 10.2196/48067. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Wu J.-M., Tsai C.-J., Ho T.-W., Lai F., Tai H.-C., Lin M.-T. A unified framework for automatic detection of wound infection with artificial intelligence. Appl Sci. 2020;10:5353. [Google Scholar]
  • 37. Zhang P., White J., Schmidt D., Dennis T. Applying machine learning methods to predict hand hygiene compliance characteristics. IEEE Int Conf Med Imaging. 2017;353-356.
  • 38. Bartz-Kurycki M. A., Green C., Anderson K. T. et al. Enhanced neonatal surgical site infection prediction model utilizing statistically and clinically significant variables in combination with a machine learning algorithm. Am J Surg. 2018;216:764–777. [DOI] [PubMed] [Google Scholar]
  • 39. Beeler C., Dbeibo L., Kelley K. et al. Assessing patient risk of central line-associated bacteremia via machine learning. Am J Infect Control. 2018. 46:986–991. 10.1016/j.ajic.2018.02.021. [DOI] [PubMed] [Google Scholar]
  • 40. Claudinon J., Steltenkamp S., Fink M. et al. A label-free optical detection of pathogens in Isopropanol as a first step towards real-time infection prevention. Biosensors. 2021;11:2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41. Wiemken T. L., Carrico R. M. Assisting the infection preventionist: Use of artificial intelligence for healthcare–associated infection surveillance. Am J Infect Control. 2024;52:625–629. [DOI] [PubMed] [Google Scholar]
  • 42. Sundermann A. J., Chen J., Miller J. K., et al. Outbreak of pseudomonas aeruginosa infections from a contaminated gastroscope detected by whole genome sequencing surveillance. Clin Infect Dis, 2021;73:42. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43. Sundermann A. J., Chen J., Miller J. K., et al. Whole-genome sequencing surveillance and machine learning of the electronic health record for enhanced healthcare outbreak detection. Clin Infect Dis. 2022;75:476–482. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44. Kim M. S., Park B., Sippel G. J., et al. Comparative analysis of personal protective equipment nonadherence detection: computer vision versus human observers. J Am Med Inform Assoc. 2025;32:163–171. 10.1093/jamia/ocae262. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45. Cho S. Y., Kim Z., Chung D. R., et al. Development of machine learning models for the surveillance of colon surgical site infections. J Hosp Infect. 2024;146:224–231. [DOI] [PubMed] [Google Scholar]
  • 46. Khan R. U., Almakdi S., Alshehri M. et al. Probabilistic approach to COVID-19 data analysis and forecasting future outbreaks using a multi-layer perceptron neural network. Diagnostics. 2022;12:2539. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47. Özakar R, Gedikli E. Evaluation of hand washing procedure using vision-based frame level and spatio-temporal level data models. Electronics. 2023;12:2024. [Google Scholar]
  • 48. Ying H., Guo B. W., Wu H. J., Zhu R. P., Liu W. C., Zhong H. F. Using multiple indicators to predict the risk of surgical site infection after ORIF of tibia fractures: a machine learning-based study. Front Cell Infect Microbiol. 2023;13:1206393. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49. Simioli F., Annunziata A., Coppola A., et al. Artificial intelligence for training and reporting infection prevention measures in critical wards. Front Public Health. 2024;12:1442188. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50. Wang J., Wang G., Wang Y., Wang Y. Development and evaluation of a model for predicting the risk of healthcare-associated infections in ICU patients. Front Public Health. 2024;12:1444176. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51. Pasupuleti D. Handwashing action detection system for an autonomous social robot. TENCON 2022. 2022 IEEE Region 10 International Conference. 2022. 10.1109/TENCON55691.2022.9977684. [DOI]
  • 52. Gouareb R., Bornet A., Proios D., Pereira S. G., Teodoro D. Detection of patients at risk of multidrug-resistant enterobacteriaceae infection using graph neural networks: a retrospective study. Health Data Sci. 2023;3:0099. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53. Hopkins B. S., Mazmudar A., Driscoll C., Sveta M., Goergen J., Kelsten M. et al. Using artificial intelligence (AI) to predict postoperative surgical site infection: a retrospective cohort of 4046 Posterior spinal fusions. Clin Neurol Neurosurg. 2020;192:105718. [DOI] [PubMed] [Google Scholar]
  • 54. Lukasewicz Ferreira S. A., Franco Meneses A. C., Vaz T. A., et al. Hospital-acquired infections surveillance: the machine-learning algorithm mirrors national healthcare safety network definitions. Infect Control Hosp Epidemiol. 2024;45 :604–608. [DOI] [PubMed] [Google Scholar]
  • 55. Segal R., Bradley W. P., Williams D. L., et al. Human-machine collaboration using artificial intelligence to enhance the safety of donning and doffing personal protective equipment (PPE). Infect Control Hosp Epidemiol. 2023;44(4):732–735. [DOI] [PubMed] [Google Scholar]
  • 56. Montella E., Ferraro A., Sperlì G., Triassi M., Santini S., Improta G. Predictive analysis of healthcare-associated blood stream infections in the neonatal intensive care unit using artificial intelligence: a single center study. Int J Environ Res Public Health. 2022;19:2498. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57. Platt L. S., Chen X., Sabo-Attwood T., Iovine N., Brown S., Pollitt B. Improving infection prevention briefing through predictive predesign: a computational approach to architectural programming by evaluating socioecological risk factors. Arch Eng Des Manag. 2024;20:776–788. [Google Scholar]
  • 58. Cotia A. L. F., Scorsato A. P., Prado M., et al. Integration of an electronic hand hygiene auditing system with electronic health records using machine learning to predict hospital-acquired infection in a health care setting. Am J Infect Control. 2025. 53:58–64. 10.1016/j.ajic.2024.09.012. [DOI] [PubMed] [Google Scholar]
  • 59. Jakobsen R. S., et al. A study on the risk stratification for patients within 24 hours of admission for risk of hospital-acquired urinary tract infection using bayesian network models. Health Inform J. 2024. [DOI] [PubMed]
  • 60. Abubeker K. M., et al. Internet-of-things-assisted wireless body area network-enabled biosensor framework for detecting ventilator and hospital-acquired pneumonia. IEEE Sens. J. 2024.
  • 61. Lu K., et al. Machine learning application for prediction of surgical site infection after posterior cervical surgery. Int Wound J. 2024;21:e14607. 10.1111/iwj.14607. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62. Liu T., Bai Y., Du M., Gao Y., Liu Y. Susceptible-infected-removed mathematical model under deep learning in hospital infection control of novel coronavirus pneumonia. J Healthc Eng. 2021;2021:1535046. 10.1155/2021/1535046. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63. Kim M., Choi J., Jo J.-Y., Kim W.-J., Kim S.-H., Kim N. Video-based automatic hand hygiene detection for operating rooms using 3D convolutional neural networks. J Clin Monit Comput. 2024;38:1187–1197. 10.1007/s10877-024-01179-6. [DOI] [PubMed] [Google Scholar]
  • 64. Kiser A. C., Shi J., Bucher B. T. An explainable long short-term memory network for surgical site infection identification. Surgery. 2024;176:24–31. 10.1016/j.surg.2024.03.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65. Mamlook R. E. A., Wells L. J., Sawyer R. Machine-learning models for predicting surgical site infections using patient pre-operative risk and surgical procedure factors. Am J Infect Control. 2023;51:544–550. 10.1016/j.ajic.2022.08.013. [DOI] [PubMed] [Google Scholar]
  • 66. Haghpanah M. A., Vali S., Torkamani A. M., et al. Real-time hand rubbing quality estimation using deep learning enhanced by separation index and feature-based confidence metric. Expert Syst Appl. 2023;218:119588. 10.1016/j.eswa.2023.119588. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67. Van Lissa C. J., Stroebe W., vanDellen M. R., et al. Using machine learning to identify important predictors of COVID-19 infection prevention behaviors during the early phase of the pandemic. Patterns. 2022;3:100482. 10.1016/j.patter.2022.100482. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68. Lintz J. Provider satisfaction with artificial intelligence–based hand hygiene monitoring system during the COVID-19 pandemic: study of a rural medical center. J Chiropr Med. 2023;22:197–203. 10.1016/j.jcm.2023.03.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 69. Hu D., Zhong H., Li S., Tan J., He Q. Segmenting areas of potential contamination for adaptive robotic disinfection in built environments. Build Environ. 2020;184:107226. 10.1016/j.buildenv.2020.107226. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70. Myall A., Price J. R., Peach R. L., et al. Prediction of hospital-onset COVID-19 infections using dynamic networks of patient contact: an international retrospective cohort study. Lancet Digit Health. 2022;4. 10.1016/S2589-7500(22)00093-0. [DOI] [PMC free article] [PubMed]
  • 71. Sun C. L. F., Zuccarelli E., Zerhouni E. G. A., et al. Predicting COVID-19 infection risk and related risk drivers in nursing homes: a machine learning approach. JAMDA. 2020;21:1533–1538. 10.1016/j.jamda.2020.08.030. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72. dos Santos R. P., Silva D., Menezes A., et al. Automated healthcare-associated infection surveillance using an artificial intelligence algorithm. Infect Prev Pract. 2021;3:100167. 10.1016/j.infpip.2021.100167. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73. Marra A. R., Alzunitan M., Abosi O., et al. Modest clostridioides difficile infection prediction using machine learning models in a tertiary care hospital. Diagn Microbiol Infect Dis. 2020;98:115104. 10.1016/j.diagmicrobio.2020.115104. [DOI] [PubMed] [Google Scholar]
  • 74. Chen W., Lu Z., You L., Zhou L., Xu J., Chen K. Artificial intelligence–based multimodal risk assessment model for surgical site infection (AMRAMS): development and validation study. JMIR Med Inform. 2020;8. 10.2196/18186. [DOI] [PMC free article] [PubMed]
  • 75. Tunthanathip T., Sae-heng S., Oearsakul T., et al. Machine learning applications for the prediction of surgical site infection in neurological operations. Neurosurg Focus. 2019;47. 10.3171/2019.5.FOCUS19241. [DOI] [PubMed]
  • 76. Zhu Y., Simon G. J., Wick E. C., et al. Applying machine learning across sites: external validation of a surgical site infection detection algorithm. J Am Coll Surg. 2021;232:963–971.e1. 10.1016/j.jamcollsurg.2021.03.026. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77. Bucher B. T., Shi J., Ferraro J. P., et al. Portable automated surveillance of surgical site infections using natural language processing: development and validation. Ann Surg. 2020;272:629–636. 10.1097/SLA.0000000000004133. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 78. Chen K. A., Joisa C. U., Stem J., et al. Predicting surgical site infection after colorectal surgery using machine learning. Dis Colon Rectum. 2023;66:458–466. 10.1097/DCR.0000000000002559. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79. Colborn K. L., Zhuang Y., Dyas A. R., et al. Development and validation of models for detection of postoperative infections using structured electronic health records data and machine learning. Surgery. 2023;173:464–471. 10.1016/j.surg.2022.10.026. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80. Sanger P. C., van Ramshorst G. H., Mercan E., et al. A prognostic model of surgical site infection using daily clinical wound assessment. J Am Coll Surg. 2016;223:259–270.e2. doi: 10.1016/j.jamcollsurg.2016.04.046. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 81. Sohn S., Larson D. W., Habermann E. B., et al. Detection of clinically important colorectal surgical site infection using bayesian network. J Surg Res. 2017;209:168–173. 10.1016/j.jss.2016.09.058. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 82. Zachariah P., Sanabria E., Liu J., et al. Novel strategies for predicting healthcare-associated infections at admission: implications for nursing care. Nurs Res. 2020;69:399–403. 10.1097/NNR.0000000000000449. [DOI] [PubMed] [Google Scholar]
  • 83. Yang H., Tourani R., Zhu Y., et al. Strategies for building robust prediction models using data unavailable at prediction time. J Am Med Inform Assoc. 2022;29:72–79. 10.1093/jamia/ocab229. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 84. Özdede M, Zarakolu P., Metan G., et al. Predictive modeling of mortality in carbapenem-resistant acinetobacter baumannii bloodstream infections using machine learning. J Investig Med. 2023;72:684–696. 10.1177/10815589241258964. [DOI] [PubMed] [Google Scholar]
  • 85. Ferrari D., Arina P., Edgeworth J., et al. Using interpretable machine learning to predict bloodstream infection and antimicrobial resistance in ICU patients: early alert predictors based on EHR data. PLOS Digit Health. 2024;3. 10.1371/journal.pdig.0000641. [DOI] [PMC free article] [PubMed]
  • 86. van der Werff S. D., Thiman E., Tanushi H., et al. The accuracy of fully automated algorithms for surveillance of healthcare-associated urinary tract infections in hospitalized patients. J Hosp Infect. 2021;110:139–147. 10.1016/j.jhin.2021.01.023. [DOI] [PubMed] [Google Scholar]
  • 87. Panchavati S., Zelin N. S., Garikipati A., et al. A comparative analysis of machine learning approaches to predict C. difficile infection in hospitalized patients. Am J Infect Control. 2022;50:250–257. 10.1016/j.ajic.2021.11.012. [DOI] [PubMed] [Google Scholar]
  • 88. da Silva D. A., ten Caten C. S., dos Santos R. P., Fogliatto F. S., Hsuan J. Predicting the occurrence of surgical site infections using text mining and machine learning. PLoS One. 2019;14. 10.1371/journal.pone.0226272. [DOI] [PMC free article] [PubMed]
  • 89. Møller, J Kølseth , Sørensen, M , Hardahl, C , Pappalardo, F. Prediction of risk of acquiring urinary tract infection during hospital stay based on machine-learning: A retrospective cohort study. PLoS One 2021;16. [DOI] [PMC free article] [PubMed]
  • 90. Peng H.-Y., Lin Y.-K., Nguyen P.-A., et al. Determinants of coronavirus disease 2019 infection by artificial intelligence technology: a study of 28 countries. PLoS One. 2022;17. 10.1371/journal.pone.0272546. [DOI] [PMC free article] [PubMed]
  • 91. Zhuang Y., Dyas A., Meguid R. A., et al. Preoperative prediction of postoperative infections using machine learning and electronic health record data. Ann Surg. 2024;279:720–726. 10.1097/SLA.0000000000006106. [DOI] [PubMed] [Google Scholar]
  • 92. Rafaqat W., Fatima H. S., Kumar A., Khan S., Khurram M. Machine learning model for assessment of risk factors and postoperative day for superficial vs deep/organ-space surgical site infections. Surg Innov. 2023;30:455–462. 10.1177/15533506231170933. [DOI] [PubMed] [Google Scholar]
  • 93. Yeo I., Klemt C., Robinson M. G., et al. The use of artificial neural networks for the prediction of surgical site infection following TKA. J Knee Surg. 2023;36:637–643. 10.1055/s-0041-1741396. [DOI] [PubMed] [Google Scholar]
  • 94. Roimi M., Neuberger A., Shrot A., et al. Early diagnosis of bloodstream infections in the intensive care unit using machine-learning algorithms. Intensive Care Med. 2020;46:454–462. 10.1007/s00134-019-05876-8. [DOI] [PubMed] [Google Scholar]
  • 95. Li M. P., Liu W. C., Wu J. B., et al. Machine learning for the prediction of postoperative nosocomial pulmonary infection in patients with spinal cord injury. Eur Spine J. 2023;32:3825–3835. 10.1007/s00586-023-07772-8. [DOI] [PubMed] [Google Scholar]
  • 96. Asif S., Xu X., Zhao M., et al. ResMFuse-net: residual-based multilevel fused network with spatial-temporal features for hand hygiene monitoring. Appl Intell. 2024;54:3606–3628. 10.1007/s10489-024-05305-4. [DOI] [Google Scholar]
  • 97. Petrosyan Y., Thavorn K., Smith G., et al. Predicting postoperative surgical site infection with administrative data: a random forests algorithm. BMC Med Res Methodol. 2021;21:179. 10.1186/s12874-021-01369-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 98. Verberk J. D. M., van der Werff S. D., et al. Augmented value of using clinical notes in semi-automated surveillance of deep surgical site infections after colorectal surgery. Antimicrob Resist Infect Control. 2023;12:117. 10.1186/s13756-023-01316-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 99. Wang M., Li W., Hui W., et al. Development and validation of machine learning-based models for predicting healthcare-associated bacterial/fungal infections among COVID-19 inpatients: a retrospective cohort study. Antimicrob Resist Infect Control. 2024;13:42. 10.1186/s13756-024-01392-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 100. Kim D., Canovas-Segura B. et al. Spatial-temporal simulation for hospital infection spread and outbreaks of clostridioides difficile. Sci Rep. 2023;13:20022. 10.1038/s41598-023-47296-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 101. Atkinson A., Ellenberger B., Piezzi V., et al. Extending outbreak investigation with machine learning and graph theory: benefits of new tools with application to a nosocomial outbreak of a multidrug-resistant organism. Infect Control Hosp Epidemiol. 2023;44:246–252. 10.1017/ice.2022.66. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 102. Ötleş E, Balczewski E. A., Keidan M., et al. Clostridioides difficile infection surveillance in intensive care units and oncology wards using machine learning. Infect Control Hosp Epidemiol. 2023;44:1776–1781. 10.1017/ice.2023.54. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 103. Savin I., Ershova K., Kurdyumova N., et al. Healthcare-associated ventriculitis and meningitis in a neuro-ICU: incidence and risk factors selected by machine learning approach. J Crit Care. 2018;45:95–104. 10.1016/j.jcrc.2018.01.022. [DOI] [PubMed] [Google Scholar]
  • 104. Lyu J. W., Zhang X. D., Tang J. W., et al. Rapid prediction of multidrug-resistant Klebsiella pneumoniae through deep learning analysis of SERS spectra. Microbiol Spectr. 2023;11. 10.1128/spectrum.04126-22. [DOI] [PMC free article] [PubMed]
  • 105. Prey B. J., Colburn Z. T., Williams J. M., et al. The use of mobile thermal imaging and machine learning technology for the detection of early surgical site infections. Am J Surg. 2024;231:60–64. 10.1016/j.amjsurg.2023.04.011. [DOI] [PubMed] [Google Scholar]
  • 106. Lee J. H., Shim J. W., Lim M. H., et al. Towards optimal design of patient isolation units in emergency rooms to prevent airborne virus transmission: from computational fluid dynamics to data-driven modeling. Comput Biol Med. 2024;173:108309. 10.1016/j.compbiomed.2024.108309. [DOI] [PubMed] [Google Scholar]
  • 107. Fletcher R. R., Schneider G., Hedt-Gauthier B., et al. Use of convolutional neural networks (CNN) and transfer learning for prediction of surgical site infection from color images. IEEE Eng Med Biol Soc. 2021;43:5047. 10.1109/EMBC.2021.9630430. [DOI] [PubMed] [Google Scholar]
  • 108. Flores-Balado Á., Méndez C.C., González A.H., et al. Using artificial intelligence (AI) to reduce orthopedic surgical site infection surveillance workload: algorithm design, validation, and implementation in 4 Spanish hospitals. Am J Infect Control. 2023;51:1225–1229. 10.1016/j.ajic.2023.04.165. [DOI] [PubMed] [Google Scholar]
  • 109. Xu W., Yang H., Chen J., et al. WashRing: An energy-efficient and highly accurate handwashing monitoring system via smart ring. IEEE Trans Mob Comput. 2024;23:971–989. 10.1109/TMC.2022.3227299. [DOI] [Google Scholar]
  • 110. Yu T., Fu Y., He J., et al. Identification of antibiotic resistance in ESKAPE pathogens through plasmonic nanosensors and machine learning. ACS Nano. 2023;17:4551–4563. 10.1021/acsnano.2c10584. [DOI] [PubMed] [Google Scholar]
  • 111. Healthcare Information and Management Systems Society (HIMSS). Electronic Medical Record Adoption Model (EMRAM): Criteria and Methodology. Chicago: HIMSS, 2021. https://www.himss.org/sites/hde/files/media/file/2021/06/04/himss-emram-criteria.pdf. [Google Scholar]
  • 112. Damschroder L. J., Aron D. C., Keith R. E., et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. 10.1186/1748-5908-4-50. [DOI] [PMC free article] [PubMed] [Google Scholar]

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Supplementary Materials

Gastaldi et al. supplementary material 1. DOI: 10.1017/ash.2025.10191.sm001

Gastaldi et al. supplementary material 2. DOI: 10.1017/ash.2025.10191.sm002

Data Availability Statement

All data supporting the findings of this review are contained within the manuscript and its supplementary materials.


Articles from Antimicrobial Stewardship & Healthcare Epidemiology : ASHE are provided here courtesy of Cambridge University Press