Critical Care Explorations. 2025 Dec 5;7(12):e1360. doi: 10.1097/CCE.0000000000001360

Artificial Intelligence-Based Predictive Modeling for Early Detection of Sepsis in Hospitalized Patients: A Systematic Review and Meta-Analysis

Ghulam Husain Abbas 1, Palash Sen 2, Oviya Anjali Giri 3, Nawaid Hussain Khan 1
PMCID: PMC12685403  PMID: 41348160

Abstract

OBJECTIVES:

This systematic review evaluates artificial intelligence (AI)-based predictive models developed for early sepsis detection in adult hospitalized patients. It explores model types, input features, validation strategies, performance metrics, clinical integration, and implementation challenges.

DATA SOURCES:

A systematic search was conducted across PubMed, Scopus, Web of Science, Google Scholar, and CENTRAL for studies published between January 2015 and March 2025.

STUDY SELECTION:

Eligible studies included those developing or validating AI models for adult inpatient sepsis prediction using electronic health record data and reporting at least one performance metric (area under the curve [AUC], sensitivity, specificity, or F1 score). Studies focusing on pediatric populations, lacking quantitative evaluation, or unpublished in peer-reviewed journals were excluded.

DATA EXTRACTION:

Data extraction followed Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Extracted variables included study design, patient population, model type, input features, validation approach, and performance outcomes.

DATA SYNTHESIS:

A total of 52 studies met the inclusion criteria. Most used retrospective designs, with limited prospective or real-time clinical validation. Commonly used algorithms included random forests, neural networks, support vector machines, and deep learning architectures (long short-term memory, convolutional neural network). Input data varied from structured sources (vital signs, laboratory values, demographics) to unstructured clinical notes processed via natural language processing. Reported AUC values ranged from 0.79 to 0.96, indicating strong predictive performance across models.

CONCLUSIONS:

AI models demonstrate significant promise for early sepsis detection, outperforming conventional scoring systems in many cases. However, generalizability, interpretability, and clinical implementation remain major challenges. Future research should emphasize externally validated, explainable, and scalable AI solutions integrated into real-time clinical workflows.

Keywords: artificial intelligence, early detection, hospitalized patients, machine learning, predictive modeling, sepsis


KEY POINTS

Question: Can artificial intelligence (AI) models using electronic health record data reliably detect sepsis earlier than conventional clinical scoring systems?

Findings: Across 52 studies, AI-based predictive models, especially those using machine learning and deep learning, achieved strong discrimination (AUC 0.79–0.96) for early sepsis detection, often outperforming traditional tools such as the systemic inflammatory response syndrome (SIRS) criteria and the quick Sequential Organ Failure Assessment (qSOFA) score. However, external validation and real-time implementation remain limited.

Meaning: AI models hold substantial promise for earlier, more accurate sepsis recognition, but widespread clinical adoption will require improved generalizability, transparency, and workflow integration.

Sepsis—defined by Sepsis-3 as life-threatening organ dysfunction from a dysregulated host response to infection—remains a major global challenge (1). Recent analyses estimate 48–50 million cases and ≈11 million deaths yearly (1), about 20% of worldwide mortality (2). Low- and middle-income countries bear the greatest burden (1); children under 5 years old account for roughly 20 million cases annually (2). In high-income nations, sepsis is among the most expensive hospital conditions, costing more than $20 billion per year in the United States alone (1, 2), with average per-patient costs exceeding $30,000 (1). Thus, sepsis imposes immense clinical and economic strain worldwide.

Early recognition is vital, as each hour of treatment delay worsens outcomes (3). Yet early diagnosis is difficult because initial signs are nonspecific (3). Common scoring tools—systemic inflammatory response syndrome (SIRS), modified early warning score (MEWS), Sequential Organ Failure Assessment (SOFA), and quick SOFA (qSOFA)—help flag risk but cannot reliably identify sepsis early (3, 4). SIRS favors sensitivity; qSOFA favors specificity (4). Consequently, sepsis is often detected only after organ dysfunction appears. Despite awareness campaigns and early-bundle protocols, diagnosis frequently lags. Traditional biomarkers and imaging remain inadequate (3–5). Hence, better predictive tools are urgently needed.
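For readers less familiar with these comparators, qSOFA is a simple three-item threshold rule. A minimal sketch (using the standard Sepsis-3 cutoffs of respiratory rate ≥ 22/min, systolic blood pressure ≤ 100 mm Hg, and Glasgow Coma Scale < 15; the patient values are hypothetical) illustrates the kind of rule-based scoring that the AI models reviewed here are benchmarked against:

```python
def qsofa_score(resp_rate: float, sys_bp: float, gcs: float) -> int:
    """Quick SOFA (qSOFA) per Sepsis-3: one point per criterion, range 0-3.
    A score >= 2 is conventionally treated as a positive screen."""
    score = 0
    score += resp_rate >= 22   # tachypnea
    score += sys_bp <= 100     # hypotension
    score += gcs < 15          # altered mentation
    return int(score)

# Illustrative values only (hypothetical patient)
print(qsofa_score(resp_rate=24, sys_bp=96, gcs=15))  # -> 2, positive screen
```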

Artificial intelligence (AI) and machine learning (ML) can exploit high-dimensional electronic health record (EHR) data to uncover subtle physiologic patterns preceding sepsis (3). Algorithms analyzing vital signs, laboratory values, and demographics can forecast deterioration hours before clinical recognition (3–6). Methods include tree-based models, recurrent and convolutional neural networks, and ensemble systems. Experimental studies already show strong predictive accuracy.

However, challenges persist: generalization across settings, interpretability for clinician trust, data completeness, and ethical or regulatory oversight (7). Few models have undergone prospective validation. This systematic review therefore synthesizes the current state of AI-driven models for early sepsis detection (7). We evaluate ML approaches, predictive features, validation strategies, and clinical translation gaps to guide future development (6). Key study characteristics are summarized in Supplemental Table 1 (https://links.lww.com/CCX/B588).

METHODS AND MATERIALS

Search Strategy

We searched PubMed, Scopus, Web of Science, Google Scholar, and CENTRAL for studies (January 2015–March 2025) on AI/ML models for early sepsis detection. Controlled vocabulary and free-text terms combined “sepsis,” “early detection,” “machine learning,” and “electronic health records,” using Boolean operators and truncation. Search strings were refined to balance recall and precision and were limited to English-language journal articles. The process followed Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, with the flow diagram shown in Figure 1.

Figure 1.


Preferred reporting items for systematic reviews and meta-analyses flowchart depicting the study selection process for the systematic review. The diagram outlines the number of records identified through database searches, screened for eligibility, excluded with reasons, and ultimately included in the final analysis. This provides transparency in the literature review methodology and ensures reproducibility.

Eligibility Criteria

Included studies met these conditions: 1) peer-reviewed original research, 2) adult inpatients (18 yr old or older), 3) an AI/ML model for early sepsis detection or prediction, 4) EHR-based structured or unstructured data, and 5) at least one reported quantitative performance metric. We excluded pediatric or neonatal work, non-peer-reviewed sources, prognostic or late-recognition models, and papers lacking methodological or quantitative detail. These criteria narrowed the review to rigorous, directly comparable studies.

Study Selection

Two reviewers (G.H.A. and P.S.) screened all records independently. Irrelevant titles or abstracts were removed, and remaining articles underwent full-text review. Disagreements were resolved by consensus or a third reviewer. Reasons for exclusion (e.g., wrong population or missing metrics) were documented in the PRISMA flowchart. Reference lists of included papers were hand-searched for additional studies.

Data Extraction

Each eligible study was summarized using a standardized form capturing authorship, year, country, clinical setting, design, sample size, sepsis definition, algorithm type, input features, validation method, and performance metrics (area under the curve [AUC], sensitivity, specificity, accuracy, F1, etc.). Implementation attempts or usability evaluations were also noted. Two reviewers extracted data independently and cross-checked for consistency, resolving discrepancies by consensus.

Quality Assessment

Because of methodological heterogeneity, quantitative meta-analysis was inappropriate (6). A narrative synthesis was used instead. We evaluated rigor and bias using an adapted Prediction model Risk Of Bias ASsessment Tool (PROBAST) framework (7) across four domains: participant selection, predictor measurement, outcome definition, and analysis. Calibration, decision-curve analysis, and transparency were noted when available. These assessments guided our qualitative comparison of strengths and limitations among studies.

RESULTS

Study Characteristics

Fifty-two studies met inclusion criteria, published 2015–2025 across Asia, North America, and Europe, with scarce data from Africa or South America. About 40% came from Asia (mainly China), 40% from North America, and the rest from Europe and elsewhere. Most were retrospective cohort analyses using EHR data (6); approximately five were prospective or interventional trials. Sample sizes ranged from hundreds to millions of records (e.g., Steinbach et al (8) analyzed 1.38 million complete blood counts [CBCs] with > 2000 sepsis events). Typical cohorts included middle-aged to elderly adults with a slight male predominance. Sepsis definitions varied (Sepsis-3, Sepsis-2, or International Classification of Diseases [ICD] coding), adding heterogeneity. Most models were developed in ICU settings within tertiary centers in Asia and North America. Performance metrics for each study appear in Supplemental Table 2 (https://links.lww.com/CCX/B588), and Supplemental Figure 1 (https://links.lww.com/CCX/B588) depicts AI-predicted risk vs observed outcomes.

AI Methodologies

A broad range of algorithms was applied. Tree-based models dominated—random forests in ~14 studies and gradient boosting (XGBoost, LightGBM) in ~10—owing to their robustness and feature-ranking ability. Deep neural networks appeared in ~8 studies, mainly recurrent (long short-term memory [LSTM]) or convolutional neural network (CNN) architectures for time-series data. Support vector machines (SVMs) featured in early pilot work and were later supplanted by ensemble and deep models. Ensemble techniques (stacking, bagging) combined multiple learners for marginal gains. Additional approaches included autoencoders, semi-supervised and transfer learning, and limited natural language processing (NLP) for unstructured text. Studies incorporating clinical notes or radiology reports reported modest performance improvements. A few explored novel inputs—waveform or continuous vital-sign streams—but most relied on structured EHR data. Simpler algorithms served as baselines in smaller datasets, whereas complex deep or ensemble models dominated large-scale work. As Chen et al (9) stated, “Logistic regression, SVMs, random forests, and gradient boosting … use vital signs, laboratory results, and clinical factors to predict the likelihood of sepsis.”
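To ground the quoted description, the following is a minimal sketch (not any included study's pipeline) of how a tabular EHR extract might be used to train and compare the baseline classifiers named above; the file name, column names, and 6-hour label horizon are hypothetical assumptions for illustration:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical tabular EHR extract: one row per patient-hour, binary sepsis label.
df = pd.read_csv("ehr_features.csv")  # assumed file and column names
X = df[["heart_rate", "resp_rate", "temp", "sbp", "spo2",
        "wbc", "lactate", "creatinine", "age"]]
y = df["sepsis_within_6h"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUROC = {auc:.3f}")
```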

Input Features

Core predictor categories were consistent across studies:

  • Vital signs—heart rate, respiratory rate, blood pressure, temperature, oxygen saturation.

  • Laboratory values—WBC, lactate, creatinine, blood urea nitrogen, platelet count, electrolytes, C-reactive protein, etc. Creatinine and lactate were frequent key markers.

  • Demographics/comorbidities—age, sex, body mass index, chronic diseases.

  • Medications/interventions—antibiotic administration, vasopressors, ventilation, fluid therapy.

  • Unstructured data (NLP)—~10% of studies mined clinician notes or imaging text for sepsis-related terms.

  • Novel features—electrocardiogram-derived metrics, ultrasound findings, time-trend derivatives, or multimodal fusions.

Most algorithms required 5–20 routine inputs; deep models could ingest hundreds. As Chen et al (9) noted, such models rely on “vital signs … laboratory results … and clinical factors.” Heterogeneous EHR formats and coding practices hinder direct cross-site application (10).

Model Performance

Reported performance was generally strong. Across the 52 studies, the area under the receiver operating characteristic curve ranged from 0.79 to 0.96 (median ≈ 0.88), with sensitivities and specificities typically 0.80–0.95 (10). Many models outperformed traditional scores: an ensemble achieved AUC ≈ 0.93 vs. 0.64 for qSOFA and 0.69 for MEWS (6); Delahanty et al reported AUC 0.86 vs. 0.63 (SIRS) and 0.71 (qSOFA) (11). High performers included Kim et al (12) (2020, AUC 0.96), Wang et al (2021, 0.93), and Steinbach et al (2024, 0.872 internal; 0.805–0.845 external).
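For clarity, the discrimination and operating-point metrics summarized above can be computed from predicted risks and observed labels as in the following sketch; the arrays and the 0.5 threshold are purely illustrative and are not drawn from any included study:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Illustrative predicted risks and true sepsis labels (not study data).
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])
y_prob = np.array([0.05, 0.20, 0.80, 0.35, 0.65, 0.90, 0.10, 0.55])

auroc = roc_auc_score(y_true, y_prob)

threshold = 0.5                                # operating point chosen for illustration
y_pred = (y_prob >= threshold).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                   # recall for the sepsis class
specificity = tn / (tn + fp)

print(f"AUROC={auroc:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```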

Comparisons remain difficult owing to differing sepsis definitions and prediction horizons (13). Calibration and decision-curve analyses were rarely provided (14). Still, evidence shows ML models outperform rule-based scores (15). Only ~40% of studies included external validation, where performance typically dropped by 5–10 points (8, 16). Validation and generalizability details appear in Supplemental Table 3 (https://links.lww.com/CCX/B588).

Implementation and Clinical Integration

Few systems reached deployment. Roughly five studies described live clinical use (17). Duke University’s “Sepsis Watch” and Dascena’s “InSight” were among the few tested in hospitals (3, 18). Integration hurdles included EHR connectivity, alert fatigue, and clinician acceptance (19). A review by van der Vegt et al (10) highlighted common barriers: data heterogeneity, infrastructure demands, privacy regulations, and insufficient user training (20). Excessive false positives reduced trust (21). Transparent reasoning and feedback mechanisms improved acceptance. Overall, pilot implementations proved feasibility but underscored the need for workflow adaptation and multidisciplinary support (22, 23).

DISCUSSION

AI-driven models for early sepsis prediction achieve accuracy far above conventional scores, with many reporting AUC greater than 0.85 (6, 8). For example, one ML model achieved AUC 0.93, whereas qSOFA was 0.64 on the same cohort (6). By detecting deterioration hours earlier, such tools could expedite antibiotics and resuscitation, improving survival (3).

A major advantage is processing high-dimensional, longitudinal data (24–27). LSTM networks, for instance, capture evolving trends—rising lactate or falling pressure—that static rules miss (28, 29). Deep learning can identify new latent features (30, 31), explaining its superior sensitivity/specificity (> 85%).
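As an illustration of how recurrent architectures ingest longitudinal data, the sketch below defines a minimal LSTM that maps a window of hourly feature vectors to a sepsis risk; the feature count, window length, and layer sizes are assumptions for illustration, not specifications from any included study:

```python
import torch
import torch.nn as nn

class SepsisLSTM(nn.Module):
    """Minimal LSTM risk model: a window of hourly feature vectors -> risk in [0, 1]."""
    def __init__(self, n_features: int = 10, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                          # x: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(x)                 # h_n: (1, batch, hidden_size)
        return torch.sigmoid(self.head(h_n[-1]))  # one risk estimate per patient window

model = SepsisLSTM()
window = torch.randn(4, 24, 10)                    # 4 patients, 24 hourly steps, 10 features
print(model(window).shape)                         # torch.Size([4, 1])
```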

Still, limitations remain.

Generalizability is the foremost concern (32). Many models were single-center, risking overfitting (33, 34). The CBC-based model of Steinbach et al (8) dropped from an AUC of 0.872 to 0.805–0.845 externally. Differences in populations, coding, and labeling (Sepsis-2 vs. Sepsis-3, ICD vs. clinical criteria) impede transfer (35–38). Only ~40% of studies used external validation (39, 40); multicenter collaborations are essential for robust generalization.

Explainability is another barrier. Complex neural and ensemble systems act as black boxes (41, 42). Clinicians hesitate to act on opaque alerts. Interpretability methods—SHapley additive explanations (SHAP), local interpretable model-agnostic explanations, attention heatmaps—can expose feature influence (e.g., elevated lactate + WBC = high risk) (43–45). Yet few studies included such analyses (46, 47). Regulatory bodies increasingly demand transparency (10, 11, 47, 48).
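Attribution tooling of this kind is readily layered onto tree ensembles. The sketch below (reusing the hypothetical gradient-boosting model and X_test from the earlier example) shows how SHAP values could surface the features driving a single alert alongside a cohort-level ranking of predictors; it is an illustration, not any included study's explanation pipeline:

```python
import shap  # SHapley Additive exPlanations

# Reuses the hypothetical gradient-boosting model and X_test from the earlier sketch.
explainer = shap.TreeExplainer(models["gradient_boosting"])
shap_values = explainer.shap_values(X_test)   # per-row, per-feature contributions (log-odds)

# Cohort-level view: which predictors most influence the model overall.
shap.summary_plot(shap_values, X_test)

# Single-alert view: rank the features pushing one patient-hour toward (or away from) sepsis,
# e.g., elevated lactate and WBC contributing most to a high-risk prediction.
row = 0
ranked = sorted(zip(X_test.columns, shap_values[row]), key=lambda kv: abs(kv[1]), reverse=True)
for feature, contribution in ranked[:5]:
    print(f"{feature}: {contribution:+.3f}")
```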

Workflow integration also limits impact. Even accurate tools fail if poorly embedded (49, 50). Real-time data streaming, alert presentation, and clinician engagement require technical and human-factor design. Alarm fatigue and liability concerns persist. Usability training and co-design with end-users improve adoption.

Regulatory and ethical issues add complexity. Historical EHR data may encode demographic biases (51); without auditing, algorithms could amplify inequities. Privacy is critical (52). Federated or privacy-preserving methods offer mitigation. Legal accountability for missed or false alerts remains undefined (53). The U.S. Food and Drug Administration's draft guidance on AI/ML-based Software as a Medical Device emphasizes “good machine-learning practice,” lifecycle testing, and revalidation (13). Comparable frameworks are emerging internationally (54). Figure 2 illustrates sensitivity vs. specificity among architectures.

Figure 2.


A scatter plot of sensitivity vs. specificity illustrating the architecture and operational steps of artificial intelligence (AI) systems used for early sepsis detection. The diagram includes components such as data input from electronic health records, preprocessing algorithms, model training and inference layers, risk scoring mechanisms, and clinical decision support outputs. This visualization emphasizes the end-to-end integration of AI in clinical practice across included studies.

In summary, AI sepsis prediction shows high accuracy but faces translational barriers in validation, interpretability, workflow, and governance (55). Addressing these is vital for real-world benefit.

FUTURE DIRECTIONS

Key priorities include:

  • Prospective multicenter validation: Most evidence is retrospective; randomized trials must evaluate real-time alerts and patient outcomes (56–58). The van der Vegt framework (10) offers deployment guidance.

  • Pediatric and special populations: Sepsis also devastates children and pregnant or immunocompromised patients; tailored models are needed (59).

  • Low- and middle-income countries: Given their disproportionate burden (2), LMIC-based models using simplified variables are essential (60, 61).

  • Federated and privacy-preserving learning: Federated training enables multi-site modeling without sharing raw data (14). Early “FedSepsis” studies show comparable accuracy while preserving privacy (62, 63); a minimal averaging sketch appears after this list.

  • Explainable AI (XAI): Future models should integrate interpretability—feature trajectories, risk rationales, SHAP or GAM-based transparency—to foster trust (64–66).

  • Clinical trials and implementation science: Randomized trials must test outcome impact (66–68). Continuous-learning systems should be evaluated for safety over time.

  • Regulatory pathways and standards: Alignment with emerging Food and Drug Administration and global AI/ML guidelines is required; open code and documentation promote reproducibility (59).
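To make the federated idea concrete, the sketch below shows one-shot, sample-size-weighted averaging of per-site logistic-regression coefficients in the spirit of federated averaging; it is an illustration of the general technique on synthetic data, not the FedSepsis implementation (14):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def local_update(X_site, y_site):
    """Each hospital fits on its own data; only coefficients leave the site."""
    clf = LogisticRegression(max_iter=1000).fit(X_site, y_site)
    return clf.coef_.ravel(), clf.intercept_[0], len(y_site)

def federated_average(site_updates):
    """Central server averages coefficients, weighted by local sample size."""
    weights = np.array([n for _, _, n in site_updates], dtype=float)
    weights /= weights.sum()
    coef = sum(w * c for w, (c, _, _) in zip(weights, site_updates))
    intercept = sum(w * b for w, (_, b, _) in zip(weights, site_updates))
    return coef, intercept

def global_risk(x, coef, intercept):
    """Sepsis risk from the aggregated global model."""
    return 1.0 / (1.0 + np.exp(-(x @ coef + intercept)))

# Illustrative synthetic data for three sites (10 features each), not real EHR records.
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(200, 10)), rng.integers(0, 2, 200)) for _ in range(3)]
updates = [local_update(X, y) for X, y in sites]
coef, intercept = federated_average(updates)
print(global_risk(rng.normal(size=10), coef, intercept))
```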

Interdisciplinary collaboration among clinicians, data scientists, engineers, ethicists, and regulators will be crucial to translate technical advances into equitable care.

CONCLUSIONS

Artificial intelligence offers powerful tools for early sepsis detection. Our systematic review found that ML models using routine EHR data can forecast onset hours in advance with accuracy surpassing conventional scores. By enabling earlier intervention, AI could transform sepsis management. Yet widespread adoption demands validated generalization, interpretable reasoning, seamless workflow fit, and regulatory compliance. Progress depends on rigorous prospective trials and close collaboration between clinicians and technologists. With sustained, multidisciplinary effort, AI-driven prediction could become a reliable instrument to reduce sepsis mortality worldwide.

Supplementary Material

cc9-7-e1360-s001.pdf (1.1MB, pdf)

Footnotes

The authors have disclosed that they do not have any potential conflicts of interest.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website (http://journals.lww.com/ccejournal).

REFERENCES

  • 1.La Via L, Sangiorgio G, Stefani S, et al. : The global burden of sepsis and septic shock. Epidemiologia (Basel, Switzerland) 2024; 5:456–478 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.World Health Organization: Sepsis. 2024. Available at: https://www.who.int/news-room/fact-sheets/detail/sepsis. Accessed May 3, 2024
  • 3.Haas R, McGill SC: Artificial intelligence for the prediction of sepsis in adults. Can J Health Technol 2022; 2:Article EN0034 [Google Scholar]
  • 4.Wang C, Xu R, Zeng Y, et al. : A comparison of qSOFA, SIRS and NEWS in predicting the accuracy of mortality in patients with suspected sepsis: A meta-analysis. PLoS One 2022; 17:e0266755. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Duncan CF, Youngstein T, Kirrane MD, et al. : Diagnostic challenges in sepsis. J Clin Med 2023; 12:1452. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Islam KR, Prithula J, Kumar J, et al. : Machine learning-based early prediction of sepsis using electronic health records: A systematic review. J Clin Med 2023; 12:5658. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Wolff RF, Moons KGM, Riley RD, et al. ; PROBAST Group†: PROBAST: A tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med 2019; 170:51–58 [DOI] [PubMed] [Google Scholar]
  • 8.Steinbach D, Ahrens PC, Schmidt M, et al. : Applying machine learning to blood count data predicts sepsis with ICU admission. Clin Chem 2024; 70:506–515 [DOI] [PubMed] [Google Scholar]
  • 9.Chen X, Yu B, Zhang Y, et al. : A machine learning model based on emergency clinical data predicting 3-day in-hospital mortality for stroke and trauma patients. Front Neurol 2025; 16:1512297. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.van der Vegt AH, Scott IA, Dermawan K, et al. : Deployment of machine learning algorithms to predict sepsis: Systematic review and application of the SALIENT clinical AI implementation framework. J Am Med Inform Assoc 2023; 30:1349–1361 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Gorecki G-P, Tomescu D-R, Pleș L, et al. : Implications of using artificial intelligence in the diagnosis of sepsis/sepsis shock. GERMS 2024; 14:77–84. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Kim T, Tae Y, Yeo HJ, et al. : Development and validation of deep-learning-based sepsis and septic shock early prediction system (DeepSEPS) using real-world ICU data. J Clin Med 2023; 12:7156. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Center for Devices and Radiological Health: Artificial intelligence-enabled medical devices. U.S. Food And Drug Administration; 2021. Available at: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-enabled-medical-devices. Accessed July 10, 2025 [Google Scholar]
  • 14.Alam MU, Rahmani R: FedSepsis: A federated multi-modal deep learning-based internet of medical things application for early detection of sepsis from electronic health records using Raspberry Pi and Jetson Nano devices. Sensors (Basel, Switzerland) 2023; 23:970. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Nicolaou A, Stylianides C, Sulaiman WA, et al. : An overview of explainable AI studies in the prediction of sepsis onset and sepsis mortality. Stud Health Technol Inform 2024; 316:808–812 [DOI] [PubMed] [Google Scholar]
  • 16.Lauritsen SM, Kalør ME, Kongsgaard EL, et al. : Early detection of sepsis utilizing deep learning on electronic health record event sequences. Artif Intell Med 2020; 104:101820. [DOI] [PubMed] [Google Scholar]
  • 17.Wang D, Li J, Sun Y, et al. : A machine learning model for accurate prediction of sepsis in ICU patients. Front Public Health 2021; 9:754348. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Kijpaisalratana N, Sanglertsinlapachai D, Techaratsami S, et al. : Machine learning algorithms for early sepsis detection in the emergency department: A retrospective study. Int J Med Inform 2022; 160:104689. [DOI] [PubMed] [Google Scholar]
  • 19.Nemati S, Holder A, Razmi F, et al. : An interpretable machine learning model for accurate prediction of sepsis in the ICU. Crit Care Med 2018; 46:547–553 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Duan Y, Huo J, Chen M, et al. : Early prediction of sepsis using double fusion of deep features and handcrafted features. Appl Intell (Dordr) 2023: 1–17 [Google Scholar]
  • 21.Rosnati M, Fortuin V: MGP-AttTCN: An interpretable machine learning model for the prediction of sepsis. PLoS One 2021; 16:e0251248. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Zhang D, Yin C, Hunold KM, et al. : An interpretable deep-learning model for early prediction of sepsis in the emergency department. Patterns (New York, NY) 2021; 2:100196 [Google Scholar]
  • 23.Shashikumar SP, Wardi G, Malhotra A, et al. : Artificial intelligence sepsis prediction algorithm learns to say “I don’t know.” NPJ Digital Med 2021; 4:134 [Google Scholar]
  • 24.Aşuroğlu T, Oğul H: A deep learning approach for sepsis monitoring via severity score estimation. Comput Methods Programs Biomed 2021; 198:105816. [DOI] [PubMed] [Google Scholar]
  • 25.Oei SP, van Sloun RJG, van der Ven M, et al. : Towards early sepsis detection from measurements at the general ward through deep learning. Intelligence-Based Medicine, 2021; 5:100042 [Google Scholar]
  • 26.Rafiei A, Rezaee A, Hajati F, et al. : SSP: Early prediction of sepsis using fully connected LSTM-CNN model. Comput Biol Med 2021; 128:104110. [DOI] [PubMed] [Google Scholar]
  • 27.Goh KH, Wang L, Yeow AYK, et al. : Artificial intelligence in sepsis early prediction and diagnosis using unstructured data in healthcare. Nat Commun 2021; 12:711. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Bedoya AD, Futoma J, Clement ME, et al. : Machine learning for early detection of sepsis: An internal and temporal validation study. JAMIA Open 2020; 3:252–260 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Yang M, Liu C, Wang X, et al. : An explainable artificial intelligence predictor for early detection of sepsis. Crit Care Med 2020; 48:e1091–e1096 [DOI] [PubMed] [Google Scholar]
  • 30.Yuan KC, Tsai LW, Lee KH, et al. : The development an artificial intelligence algorithm for early sepsis diagnosis in the intensive care unit. Int J Med Inform 2020; 141:104176. [DOI] [PubMed] [Google Scholar]
  • 31.Kok C, Jahmunah V, Oh SL, et al. : Automated prediction of sepsis using temporal convolutional network. Comput Biol Med 2020; 127:103957. [DOI] [PubMed] [Google Scholar]
  • 32.Reyna MA, Josef CS, Jeter R, et al. : Early prediction of sepsis from clinical data: The PhysioNet/Computing in Cardiology Challenge 2019. Crit Care Med 2020; 48:210–217 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Ibrahim ZM, Wu H, Hamoud A, et al. : On classifying sepsis heterogeneity in the ICU: Insight using machine learning. J Am Med Inform Assoc 2020; 27:437–443 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Fagerström J, Bång M, Wilhelms D, et al. : LiSep LSTM: A machine learning algorithm for early detection of septic shock. Sci Rep 2019; 9:15132. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Kaji DA, Zech JR, Kim JS, et al. : An attention based deep learning model of clinical events in the intensive care unit. PLoS One 2019; 14:e0211057. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Giannini HM, Ginestra JC, Chivers C, et al. : A machine learning algorithm to predict severe sepsis and septic shock: Development, implementation, and impact on clinical practice. Crit Care Med 2019; 47:1485–1492 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Schamoni S, Lindner HA, Schneider-Lindner V, et al. : Leveraging implicit expert knowledge for non-circular machine learning in sepsis prediction. Artif Intell Med 2019; 100:101725. [DOI] [PubMed] [Google Scholar]
  • 38.Barton C, Chettipally U, Zhou Y, et al. : Evaluation of a machine learning algorithm for up to 48-hour advance prediction of sepsis using six vital signs. Comput Biol Med 2019; 109:79–84 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Delahanty RJ, Alvarez J, Flynn LM, et al. : Development and evaluation of a machine learning model for the early identification of patients at risk for sepsis. Ann Emerg Med 2019; 73:334–344 [DOI] [PubMed] [Google Scholar]
  • 40.Scherpf M, Gräßer F, Malberg H, et al. : Predicting sepsis with a recurrent neural network using the MIMIC III database. Comput Biol Med 2019; 113:103395. [DOI] [PubMed] [Google Scholar]
  • 41.Bloch E, Rotem T, Cohen J, et al. : Machine learning models for analysis of vital signs dynamics: A case for sepsis onset prediction. J Healthc Eng. 2019; 2019:5930379. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.van Wyk F, Khojandi A, Kamaleswaran R: Improving prediction performance using hierarchical analysis of real-time data: A sepsis case study. IEEE J Biomed Health Inf 2019; 23:978–986 [Google Scholar]
  • 43.Yee CR, Narain NR, Akmaev VR, et al. : A data-driven approach to predicting septic shock in the intensive care unit. Biomed Inform Insights 2019; 11:1178222619885147. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Mao Q, Jay M, Hoffman JL, et al. : Multicentre validation of a sepsis prediction algorithm using only vital sign data in the emergency department, general ward and ICU. BMJ Open 2018; 8:e017833 [Google Scholar]
  • 45.Taneja I, Damhorst GL, Lopez-Espina C, et al. : Diagnostic and prognostic capabilities of a biomarker and EMR-based machine learning algorithm for sepsis. Clin Transl Sci 2021; 14:1578–1589 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Horng S, Sontag DA, Halpern Y, et al. : Creating an automated trigger for sepsis clinical decision support at emergency department triage using machine learning. PLoS One 2017; 12:e0174708. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Kam HJ, Kim HY: Learning representations for the early detection of sepsis with deep neural networks. Comput Biol Med 2017; 89:248–255 [DOI] [PubMed] [Google Scholar]
  • 48.Shashikumar SP, Stanley MD, Sadiq I, et al. : Early sepsis detection in critical care patients using multiscale blood pressure and heart rate dynamics. J Electrocardiol 2017; 50:739–743 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Calvert JS, Price DA, Chettipally UK, et al. : A computational approach to early sepsis detection. Comput Biol Med 2016; 74:69–73 [DOI] [PubMed] [Google Scholar]
  • 50.Desautels T, Calvert J, Hoffman J, et al. : Prediction of sepsis in the intensive care unit with minimal electronic health record data: A machine learning approach. JMIR Med Inform 2016; 4:e28. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Brown SM, Jones J, Kuttler KG, et al. : Prospective evaluation of an automated method to identify patients with severe sepsis or septic shock in the emergency department. BMC Emerg Med 2016; 16:31. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Henry KE, Hager DN, Pronovost PJ, et al. : A targeted real-time early warning score (TREWScore) for septic shock. Sci Transl Med 2015; 7:299ra122 [Google Scholar]
  • 53.Sadasivuni S, Saha M, Bhanushali SP, et al. : Fusion of fully integrated analog machine learning classifier with electronic medical records for real-time prediction of sepsis onset. Sci Rep 2022; 12:5711. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Tsang G, Xie X: Deep learning based sepsis intervention: The modelling and prediction of severe sepsis onset. In: 2020 25th International Conference on Pattern Recognition (ICPR). IEEE. 2021, pp 8671–8678 [Google Scholar]
  • 55.Firoozabadi R, Babaeizadeh S: An ensemble of bagged decision trees for early prediction of sepsis. In: 2019 Computing in Cardiology (CinC). IEEE. 2019, pp. 1–4. [Google Scholar]
  • 56.Camacho-Cogollo JE, Muñoz-Gama J, Romero-Campos M: Predicting sepsis in adult ICU patients using ensemble machine learning models on the MIMIC-III database: A worldwide review. J Clin Med 2022; 11:5231. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Biglarbeigi P, McLaughlin D, Rjoob K, et al. : Early prediction of sepsis considering early warning scoring systems. In: 2019 Computing in Cardiology Conference (CinC). IEEE. 2019, pp. 1–4. [Google Scholar]
  • 58.Fu M, Yuan J, Lu M, et al. : An ensemble machine learning model for the early detection of sepsis from clinical data. Comput Cardiol. 2019; 46:1–4 [Google Scholar]
  • 59.Liu R, Greenstein JL, Granite SJ, et al. : Data-driven discovery of a novel sepsis pre-shock state predicts impending septic shock in the ICU. Sci Rep 2019; 9:61F. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Futoma J, Hariharan S, Heller K: Learning to detect sepsis with a multitask Gaussian process RNN classifier. Proceedings of the 34th International Conference on Machine Learning (ICML 2017) 2017; 70:1174–1182. [Google Scholar]
  • 61.Gholamzadeh M, Abtahi H, Safdari R: Comparison of different machine learning algorithms to classify patients suspected of having sepsis infection in the intensive care unit. Inf Med Unlocked 2023; 38:101236 [Google Scholar]
  • 62.Zheng R, Zhang Y, Rong Z, et al. : Surviving Sepsis Campaign: international guidelines for management of sepsis and septic shock 2021, interpretation and expectation. Zhonghua Wei Zhong Bing Ji Jiu Yi Xue. 2021; 33:1159–1164 [DOI] [PubMed] [Google Scholar]
  • 63.Taylor RA, Pare JR, Venkatesh AK, et al. : Prediction of in-hospital mortality in emergency department patients with sepsis: A local big data-driven, machine learning approach. Acad Emerg Med 2016; 23:269–278 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Adams R, Henry KE, Sridharan A, et al. : Prospective, multi-site study of patient outcomes after implementation of the TREWS machine learning-based early warning system for sepsis. Nat Med 2022; 28:1455–1460 [DOI] [PubMed] [Google Scholar]
  • 65.Shimabukuro DW, Barton CW, Feldman MD, et al. : Effect of a machine learning-based severe sepsis prediction algorithm on patient survival and hospital length of stay: A randomised clinical trial. BMJ Open Respir Res 2017; 4:e000234 [Google Scholar]
  • 66.Su L, Xu Z, Chang F, et al. : Early prediction of mortality, severity, and length of stay in the intensive care unit of sepsis patients based on sepsis 3.0 by machine learning models. Front Med 2021; 8:664966 [Google Scholar]
  • 67.Misra D, Avula V, Wolk DM, et al. : Early detection of septic shock onset using interpretable machine learners. J Clin Med 2021; 10:301. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Wardi G, Carlile M, Holder A, et al. : Predicting progression to septic shock in the emergency department using an externally generalizable machine-learning algorithm. Ann Emerg Med 2021; 77:395–406 [DOI] [PMC free article] [PubMed] [Google Scholar]
