Abstract
Objectives
Neonatal early onset sepsis (EOS), bacterial infection during the first seven days of life, is difficult to diagnose because presenting signs are non-specific, but early diagnosis before birth can direct life-saving treatment for mother and baby. Specifically, maternal fever during labor from placental infection is the strongest predictor of EOS. Alterations in maternal heart rate variability (HRV) may precede development of intrapartum fever, enabling incipient EOS detection. The objective of this work was to build a predictive model for intrapartum fever.
Methods
Continuously measured temperature, heart rate, and beat-to-beat RR intervals were obtained from wireless sensors on women (n = 141) in labor; traditional manual vital signs were taken every 3–6 hours. Validated measures of HRV were calculated in moving 5-minute windows of RR intervals: standard deviation of normal-to-normal intervals (SDNN) and root mean square of successive differences (RMSSD) between normal heartbeats.
Results
Fever (>38.0 °C) was detected by manual or continuous measurements in 48 women. Compared to afebrile mothers, average SDNN and RMSSD in febrile mothers decreased significantly (p < 0.001) at 2 and 3 hours before fever onset, respectively. This observed HRV divergence and raw recorded vitals were applied to a logistic regression model at various time horizons, up to 4–5 hours before fever onset. Model performance increased with decreasing time horizons, and a model built using continuous vital signs as input variables consistently outperformed a model built from episodic vital signs.
Conclusions
HRV-based predictive models could identify mothers at risk for fever and infants at risk for EOS, guiding maternal antibiotic prophylaxis and neonatal monitoring.
Keywords: Neonatal early onset sepsis, predictive modeling, intrapartum fever, logistic regression, heart rate variability, continuous vital signs monitoring
Introduction
Neonatal early onset sepsis (EOS), defined as invasive bacterial infection during the first seven days after birth, is a major public health problem. Despite recent advances, it still poses a significant threat of morbidity and mortality to newborns. The incidence of EOS is 1–2 per 1000 live births for infants born at ≥37 weeks gestation1,2 and 4.4 per 1000 live births for late preterm neonates born at 34–36 weeks gestation. 3
Early diagnosis of newborns affected by EOS is critical since timely treatment with antibiotics and hemodynamic support can be life-saving.4 However, it is difficult to diagnose EOS in newborns because the presenting signs are non-specific. Mild respiratory distress or tachypnea can be attributed to normal neonatal cardiopulmonary adaptation but then progress rapidly to hemodynamic collapse. The general response of clinicians to this dilemma has been to over-diagnose EOS in any infant with trivial risk factors or mild symptoms, resulting in unnecessary neonatal intensive care unit (NICU) admissions for invasive testing and treatment. NICU admissions to rule out EOS also cause separation of babies from their mothers, interfering with breastfeeding and bonding.
A better approach to early diagnosis of EOS would be to define maternal risk factors that are known before birth and can predict EOS in the infant with high sensitivity and specificity. EOS usually results from ascending colonization of the maternal genital tract and uterine compartment during labor, leading to infection of the infant by maternal gastrointestinal and genitourinary microflora. Prenatal factors such as length of ruptured membranes and colonization with group B streptococci increase the risk of placental infection, which can progress to fetal infection. Consequently, maternal fever during labor is the strongest predictor of EOS. 5 Therefore, the most important intervention to ameliorate EOS is administration of intravenous antibiotics to mothers at risk, because treating the mother decreases both the likelihood of transmission of infection to the baby and the progression and severity of disease in infants who are infected. 6 Real-time dynamic monitoring of maternal vital signs during labor could potentially identify infants with EOS before birth, reducing unnecessary NICU admission and decreasing health costs. 7 However, adequate technology and predictive algorithms have not yet been validated.
Emerging technologies now support continuous streaming of vital sign measurements using relatively inexpensive wearable devices, a capability that was previously restricted to specialized telemetry and critical care units.8–11 These devices improve the detection of subtle changes in vital signs and can improve outcomes.12,13 We have shown that they can detect dynamic physiological changes during labor in real time, while correlating well with manual assessments.14 Of note, we found that continuous measurements often detect fevers earlier than routine manual measurements by nurses. In addition, continuous measurements detect transient fevers that are missed by episodic measurements. Earlier detection of maternal fever can facilitate the identification of infants with impending EOS, enabling both prenatal antibiotic prophylaxis of the mother and timely treatment of infants before they are symptomatic. The degree of protection of the infant is proportional to the number of hours before delivery that antibiotic treatment is started;15 therefore, treating mothers up to 5 hours earlier decreases the risk of transmission of infection to the infant.
The accuracy of predicting EOS using continuous wireless monitoring can likely be improved by building more sophisticated models incorporating multiple maternal vital signs. Continuous vital sign measurements using wearable sensors enable the accumulation of large datasets for building predictive models. Data science and machine learning algorithms, in conjunction with continuous vital sign monitoring, can be used to extract indicative patterns from patient medical information and provide support for early clinical decision-making. These models can objectively quantify risk and predict disease progression.16–29 While these algorithms do not provide a substitute for clinical experience, accurate models provide objective information about risk and avoid biases in clinical decision-making. Among many modeling approaches, logistic regression (LR) is a statistical model that allows for multivariable analysis of a binary dependent variable with parameters that scale linearly with the number of independent variables, reducing the amount of data needed to properly fit the model.30,31 It is commonly used to predict binary outcomes such as mortality32,33 or disease diagnosis.34–36
Heart rate variability (HRV) may be predictive of intrapartum fever and infection, potentially providing an early indicator of neonatal EOS. HRV represents the beat-to-beat fluctuations in heart rate. Different time-domain indices have been linked to short-term metrics of sympathetic nervous system (SNS) or parasympathetic nervous system (PNS) activity. In particular, standard deviation of normal-to-normal intervals (SDNN, ms) shows how the interbeat intervals of normal sinus beats vary over time, while root mean square of successive differences (RMSSD, ms) reflects beat-to-beat changes in consecutive heartbeats. Both SDNN and RMSSD have been shown to be reliable measures of short-term HRV.37–39 Importantly, HRV can be used as a biomarker for early diagnosis of infection40–44 and development of risk prediction models for sepsis,45–50 and ICU outcomes.51–53 However, no studies have measured its predictive ability for intrapartum fever.
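For reference, with RRᵢ denoting the i-th normal-to-normal interbeat interval in a window of N beats and an overbar denoting their mean, the standard time-domain definitions of these two indices are:

$$\mathrm{SDNN} = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(RR_i-\overline{RR}\right)^2}, \qquad \mathrm{RMSSD} = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N-1}\left(RR_{i+1}-RR_i\right)^2}$$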
Our objective was to develop a model using HRV to predict intrapartum fever. We computed two time-domain HRV indices from RR intervals and compared their temporal progression among febrile and afebrile groups relative to fever onset. We developed and compared the performance of models that relied on continuously or episodically measured vital signs, while also assessing multiple time horizons. These models could assist with early identification of mothers at risk of fever and guide maternal antibiotic prophylaxis to avoid neonatal EOS and facilitate postnatal treatment of infants at risk.
Methods
This study was approved by the Institutional Review Board at Northwell Health. Women in labor at >35 weeks gestational age and <6 cm cervical dilation were recruited at the time of admission to the Labor and Delivery Suite at Katz Women's Hospital of Long Island Jewish Hospital, and written informed consent was obtained from all participants at the time of enrollment, prior to study initiation. Lifetemp (Isansys Ltd, Abingdon, UK) sensors were affixed in the axilla with a silicone gel adhesive, providing minute-by-minute measurements of temperature and heart rate as well as beat-to-beat RR intervals. Data were transmitted via Bluetooth to a dedicated tablet mounted on a mobile unit within 3 m of the bedside. Sensors were replaced as needed until delivery of the infant. Along with continuous vital signs, conventional (intermittent) vital signs were measured by nurses every 3–6 hours per routine.
Of 336 patients recruited, 141 had >4 hours of continuous data, >2 manual vital sign measurements, and sufficiently reliable RR interval collection to be included for analysis (Figure 1(a)). Fevers (>38.0 °C) were identified by manual measurements, continuous measurements, or both. Any temperatures below 35.5 °C and any heart rates over 220 beats per minute were excluded as invalid. Continuous recordings were then cropped from the first valid vital sign measurement to the final valid vital sign recording to exclude time periods when the sensor was off. Gaps in the data between these bounds were retained in the analysis, as excluding them can introduce unforeseen bias into the model; likewise, no imputation was performed, since estimating missing values may incorrectly alter the variance of the data.
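As a minimal illustration of this validity filtering and cropping step, the sketch below assumes the continuous recording is held as a per-minute table with illustrative column names (time, temp_c, hr_bpm); it is not the study's actual pipeline, which was implemented in MATLAB.

```python
import numpy as np
import pandas as pd

def clean_recording(df: pd.DataFrame) -> pd.DataFrame:
    """Invalidate out-of-range samples and crop to the valid recording span.

    Invalid samples are set to NaN rather than imputed, so gaps remain in the
    analysis, mirroring the approach described in the text.
    """
    df = df.copy()
    # Temperatures below 35.5 degC and heart rates above 220 bpm are treated as invalid
    df.loc[df["temp_c"] < 35.5, "temp_c"] = np.nan
    df.loc[df["hr_bpm"] > 220, "hr_bpm"] = np.nan

    # Crop to the span between the first and last valid measurements (sensor off otherwise)
    valid = df["temp_c"].notna() | df["hr_bpm"].notna()
    t_first, t_last = df.loc[valid, "time"].min(), df.loc[valid, "time"].max()
    return df[(df["time"] >= t_first) & (df["time"] <= t_last)]
```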
Figure 1.
Data and modeling summary schematics. (a) Patient recruitment and data refinement flowchart. (b) Logistic regression (LR) model development. Two LR models were developed to classify intrapartum fever and non-fever cases and to compare the value of continuous versus discrete vital sign data collection. Variables for the continuous LR model were temperature (T), heart rate (HR), standard deviation of normal-to-normal intervals (SDNN), and root mean square of successive differences (RMSSD) averaged over the last 30 minutes before a specific timepoint, while the discrete LR model used only the most recent manually measured T and HR for the same timepoint. Four prediction time horizons were tested: 4–5, 3–4, 2–3, and 1–2 hours before fever onset. The LR models were validated by four-fold leave-one-out cross-validation. All patients were shuffled, and each fold had approximately the same number of febrile and afebrile cases. Model performance was evaluated by calculating the area under the curve (AUC) for receiver operating characteristic (ROC) and precision-recall (PR) curves.
Raw beat-to-beat RR intervals were calculated by taking the difference between timestamps of successive QRS complexes detected by the Lifetemp sensor. From these RR intervals, SDNN and RMSSD were calculated by taking the standard deviation and root mean square of successive differences, respectively, of all RR values within a 5-minute moving window through each participant's recorded data. HRV indices were averaged over all patients in the febrile (n = 48) and afebrile (n = 93) groups. All time values in the febrile group were normalized to the time of fever onset as t = 0, while time values in the afebrile group were normalized so that t = 0 corresponded to the mean time of fever onset in the febrile group, or 2.4 hours prior to delivery. Prior work details the validation of the continuously measured vital signs against the corresponding episodically measured vital signs and quantifies the number of fevers detected by either or both measurement methods.14
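A minimal Python sketch of this windowed HRV computation is shown below (the study's analysis was performed in MATLAB; the 1-minute window step and variable names are illustrative assumptions):

```python
import numpy as np

def hrv_in_window(rr_ms: np.ndarray) -> tuple[float, float]:
    """SDNN and RMSSD (ms) for the RR intervals falling within one window."""
    sdnn = np.std(rr_ms, ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))
    return sdnn, rmssd

def moving_hrv(beat_times_s: np.ndarray, rr_ms: np.ndarray,
               window_s: float = 300.0, step_s: float = 60.0) -> np.ndarray:
    """Slide a 5-minute window over a recording and compute SDNN/RMSSD per window.

    beat_times_s: timestamp of each detected beat (s); rr_ms: matching RR interval (ms).
    The 1-minute step is an assumption; the paper specifies only the 5-minute window.
    """
    starts = np.arange(beat_times_s[0], beat_times_s[-1] - window_s, step_s)
    out = []
    for t0 in starts:
        mask = (beat_times_s >= t0) & (beat_times_s < t0 + window_s)
        if mask.sum() >= 2:  # need at least two beats per window
            out.append((t0 + window_s, *hrv_in_window(rr_ms[mask])))
    return np.array(out)  # columns: window end time (s), SDNN (ms), RMSSD (ms)
```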
$$\ln\left(\frac{P}{1-P}\right) = \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x} \qquad (1)$$
Data analysis and model development were completed using MATLAB (MathWorks, Natick, MA, USA). Two LR models were developed to classify intrapartum fever and non-fever cases and to compare the value of continuous versus discrete vital sign data collection. The LR model is shown in equation (1), where P is the probability that the dependent variable is in a particular category (fever or non-fever in this case), x is the vector of independent variables, and β0 and β are the regression coefficients. Because taking the natural logarithm of the ratio of P to 1 − P gives a linear model, while allowing for multiple continuous and categorical input variables, LR models are simple to implement with multiple input variables.30 The independent variables for the continuous LR model were temperature (T), heart rate (HR), SDNN, and RMSSD averaged over the last 30 minutes before a specific timepoint, while the discrete LR model used only the most recent manually measured T and HR for the same timepoint. If a manually recorded heart rate was not available (i.e., an incomplete set of vital signs was recorded), the instantaneous heart rate was taken from the continuous data at the same timepoint as the manually taken temperature; manually taken temperatures were always available. Four prediction time horizons were tested: 4–5, 3–4, 2–3, and 1–2 hours before fever onset. For each patient, data were extracted from a random timepoint within the prediction horizon for both continuous and discrete models.
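A minimal sketch of the continuous-input model is given below, assuming the per-patient features have already been extracted as described; scikit-learn is used here in place of the original MATLAB implementation, and the function names are illustrative. The discrete model is identical except that its feature matrix holds only the most recent manually measured T and HR.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_fever_model(X: np.ndarray, y: np.ndarray) -> LogisticRegression:
    """Fit the LR model of equation (1): ln(P/(1-P)) = b0 + b.x.

    X: one row per patient with columns [T, HR, SDNN, RMSSD], each averaged over
    the 30 minutes before a timepoint drawn from the prediction horizon.
    y: 1 for fever, 0 for no fever.
    """
    # note: scikit-learn applies mild L2 regularization by default, unlike a plain GLM fit
    return LogisticRegression(max_iter=1000).fit(X, y)

def predict_fever_probability(model: LogisticRegression, X: np.ndarray) -> np.ndarray:
    """Return the predicted probability of intrapartum fever for each row of X."""
    return model.predict_proba(X)[:, 1]
```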
Statistical analysis
The LR models were validated by four-fold leave-one-out cross-validation. All patients were shuffled, and each fold had approximately the same number of febrile and afebrile cases. Model performance was evaluated by calculating the area under the curve (AUC) for receiver operating characteristic (ROC) and precision-recall (PR) curves.
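A sketch of this validation scheme is shown below; a shuffled, stratified four-fold split and scikit-learn's AUC utilities are used to approximate the procedure described (average precision serves as the PR AUC estimate here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, average_precision_score

def cross_validate_auc(X: np.ndarray, y: np.ndarray, n_folds: int = 4):
    """Class-balanced, shuffled k-fold validation returning mean ROC and PR AUCs."""
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    roc_aucs, pr_aucs = [], []
    for train_idx, test_idx in skf.split(X, y):
        model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        scores = model.predict_proba(X[test_idx])[:, 1]  # P(fever) on the held-out fold
        roc_aucs.append(roc_auc_score(y[test_idx], scores))
        pr_aucs.append(average_precision_score(y[test_idx], scores))
    return float(np.mean(roc_aucs)), float(np.mean(pr_aucs))
```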
Results
Fevers were detected in 48 of the 141 subjects included in the analysis (Table 1). Raw recorded vital signs (T and HR), along with calculated HRV features (SDNN and RMSSD), were applied to an LR model (Figure 1(b)) to predict the probability of intrapartum fever at various time horizons, up to 4–5 hours before fever onset.
Table 1.
Data summary statistics.
|  | Non-fever | Fever |
| --- | --- | --- |
| Patients, n | 93 | 48 |
| Gestational age (weeks), mean (SD) | 39.3 (1.4) | 39.4 (1.2) |
| Temperature (°C), mean (SD) | 36.6 (0.4) | 37.1 (0.5) |
| Max temperature (°C), mean (SD) | 37.3 (0.4) | 38.2 (0.6) |
| Heart rate (BPM), mean (SD) | 83.7 (13.0) | 89.8 (11.8) |
| SDNN (ms), mean (SD) | 59.2 (3.9) | 52.3 (4.2) |
| RMSSD (ms), mean (SD) | 48.1 (3.6) | 37.2 (4.1) |
An example of patient vital sign data and corresponding calculated HRV is displayed in Figure 2. As shown by the first row, fever was detected by both continuous and manual measurement methods; however, fever was detected nearly 40 minutes earlier by the continuous measurement, shown by the dotted vertical line. The second row displays the minute-by-minute heart rate changes, while the third and fourth rows show SDNN and RMSSD, respectively, calculated from the recorded R–R intervals. Birth is denoted by the solid vertical line at t = 0.
Figure 2.
Example of recorded and calculated data. Shown is an example of all data corresponding to one patient. From top: temperature, heart rate, standard deviation of normal-to-normal intervals (SDNN), and root mean square of successive differences (RMSSD) were tracked during maternal labor, with t = 0 denoting delivery (vertical black solid line). Fever onset occurs approximately 128 minutes before delivery (vertical black dotted line). The blue traces for temperature and heart rate represent continuous data recorded by the wireless vital sign monitoring device, while the red trace shows manually taken temperatures. The yellow and purple traces for SDNN and RMSSD, respectively, are calculated from RR interval data recorded by the wireless vital sign monitoring device.
HRV was averaged over all patients in the febrile and afebrile groups (Figure 3). Compared to non-fever cases, the average SDNN and RMSSD in fever cases showed a significant decrease (ANOVA, p < 0.001) approximately 2 and 3 hours before fever onset (t = 0), respectively. The divergence in these indices between non-fever and fever cases suggests an underlying separation in HRV features that could be used to build a predictive model for maternal fever onset during labor. There was no significant difference between non-fever and fever cases in frequency-domain HRV indices, as shown by the average LF/HF ratio.
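As a minimal illustration of this group comparison, the sketch below runs a one-way ANOVA on per-patient SDNN values taken at a single aligned timepoint; how those values are extracted (window placement relative to fever onset) is an assumption, not taken from the study code.

```python
from scipy.stats import f_oneway

def compare_hrv_groups(sdnn_febrile, sdnn_afebrile, alpha: float = 0.001):
    """One-way ANOVA between febrile and afebrile per-patient SDNN at one timepoint.

    Each argument is a 1-D sequence of SDNN values (ms), one per patient.
    Returns the F statistic, the p-value, and whether p falls below alpha.
    """
    f_stat, p_value = f_oneway(sdnn_febrile, sdnn_afebrile)
    return f_stat, p_value, p_value < alpha
```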
Figure 3.
Average heart rate variability (HRV) during labor. HRV was averaged over all patients in the afebrile (blue) and febrile (red) groups. For the febrile group, t = 0 was defined as the onset of fever. For the afebrile group, t = 0 was defined as 2.4 hours prior to delivery, corresponding to the mean time of fever onset in the febrile group. Average maternal standard deviation of normal-to-normal intervals (SDNN) and root mean square of successive differences (RMSSD) showed significant separation between febrile and afebrile cohorts at 2 and 3 hours prior to fever onset, respectively. The average maternal LF/HF ratio did not show a significant difference between afebrile and febrile groups.
Two sets of variables, continuous and discrete, served as predictors for LR models to classify intrapartum fever and non-fever cases. With a time horizon of 4–5 hours, the continuous model achieved an ROC AUC of 0.645 while the discrete model had an ROC AUC of 0.470 (Figure 4(a)). These values both increased at the time horizon of 2–3 hours, with the ROC AUC of the continuous and discrete models of 0.748 and 0.639, respectively (Figure 4(b)). As expected, the average AUC increased with decreasing time horizon values, and the model built with continuous variables always outperformed that built with discrete variables (Figure 4(c)).
Figure 4.
Model performance illustrated by receiver operating characteristic (ROC) curves. ROC curves for predicting intrapartum fever at time horizon of (a) 4–5 hours and (b) 2–3 hours are shown. Dotted lines show performance evaluated for individual folds, and the solid line shows the average performance of the model with continuous (red) and discrete (blue) input variables. (c) The area under the curve (AUC) increases with decreasing time horizon values. Empty markers show AUC of individual folds, and filled markers show the average performance of the model with continuous (red) and discrete (blue) input variables.
Model performance was also evaluated by comparing PR AUC values at various time horizons. At 4–5 hours before fever onset, the continuous model had a PR AUC of 0.448 while the discrete model produced a PR AUC of 0.311 (Figure 5(a)). The continuous model achieved a PR AUC of 0.565 at the 2- to 3-hour horizon, while the PR AUC of the discrete model was 0.374 (Figure 5(b)). As with the ROC performance, PR AUC values increased with smaller time horizons (Figure 5(c)).
Figure 5.
Model performance illustrated by precision-recall (PR) curves. PR curves for predicting intrapartum fever at time horizon of (a) 4–5 hours and (b) 2–3 hours are shown. Dotted lines show performance evaluated for individual folds, and the solid line shows the average performance of the model with continuous (red) and discrete (blue) input variables. (c) The area under the curve (AUC) increases with decreasing time horizon values. Empty markers show AUC of individual folds, and filled markers show the average performance of the model with continuous (red) and discrete (blue) input variables.
Discussion
We developed an LR model that predicts intrapartum fever using raw and calculated biomarkers from continuous vital sign monitoring during labor. Temperature and heart rate measurements were augmented with calculated HRV metrics, and the model was evaluated for predictive performance out to 4–5 hours before potential fever onset. Our previously published work has shown that episodes of fever are often unrecognized or missed by manual data collection and that continuous vital sign data during labor allow detection of those cases, explaining the relatively high incidence of low-grade fevers in this randomly enrolled cohort.14 In this work, we demonstrate that HRV can augment this fever detection further, up to 5 hours before continuous temperature measurements alone. This model can facilitate early identification of newborns at risk for EOS, because maternal fever during labor is the strongest known predictor of EOS. Early identification of incipient fever by HRV before delivery can identify infants who were exposed to uteroplacental infection but whose mothers have not yet developed a fever; this can guide appropriate monitoring of infants at elevated risk after birth. Conversely, identification of infants at low risk of EOS can prevent unnecessary postnatal testing, monitoring, and maternal-infant separation. Lengthening maternal antibiotic treatment, which would decrease the risk score from sepsis risk calculators, can allow infants to stay with their mothers after delivery rather than being admitted to the NICU for observation or treatment.7 Earlier assessment of risk for EOS will enable earlier antibiotic treatment of both mother and infant.4
HRV indices reflect heart–brain interactions and autonomic nervous system processes that regulate heart rate, blood pressure, respiration, digestion, and, importantly for this work, body temperature. Time-domain, frequency-domain, and nonlinear HRV metrics over varying monitoring periods, from 2 minutes to 24 hours, can quantify autonomic nervous system function and its implications for clinical health.39,54,55 SDNN and RMSSD are time-domain HRV indices that are reliable at a short-term recording period of 5 minutes.56,57 SDNN reflects both SNS and PNS activity and is also highly correlated with multiple frequency-domain HRV metrics.58 Meanwhile, RMSSD is linked to the contribution of the PNS to the regulation of heart rate, which can indicate vagal tone.59,60 Decreased SDNN and RMSSD are consistent with PNS dysfunction;39 SNS hyperactivity, in response to PNS impairment, could contribute to fever development.61 A healthy PNS also acts as an anti-inflammatory neural circuit, and any dysfunction can contribute to inflammation and, subsequently, fever.62,63
Therefore, maternal HRV could be useful for building predictive models for intrapartum fever. As shown in other studies, HRV can be predictive of infections40,41 that are usually accompanied by fever episodes.42,43 In our data, average maternal SDNN and RMSSD showed significant separation between febrile and afebrile cohorts at 2 and 3 hours prior to fever onset, respectively. Even before a patient's body temperature increases above the threshold for fever classification, there is a decrease in these time-domain HRV metrics, showing the importance of including such indices in any model for fever prediction.
The significant changes in HRV align with the increase in predictive performance from the time horizons of 4–5 and 3–4 hours before fever onset to that of 2–3 hours before fever onset. Performance increased further at 1–2 hours before fever, but this marginal improvement can be attributed to the contribution of the rising temperature toward the fever threshold of 38 °C. Further, the model built with continuous vital sign measurements consistently outperformed the model using discrete inputs from conventional manual vital signs. This emphasizes the safety and accuracy of continuous vital sign monitoring during labor, as mothers undergo vigorous and dynamic physiological changes while giving birth. Infrequent vital sign measurements may miss or belatedly capture biomarker variations that are crucial for optimizing the outcome of the mother–baby dyad.
This study has several limitations. The total number of patients, and particularly the number of fever cases, is relatively small, reflecting the difficulty of collecting such data in this setting. Similarly, only LR modeling was evaluated for this dataset. Because of the small cohort, LR modeling provided a parsimonious method of multivariable analysis that prevents overfitting and assesses the value of only a few preselected predictors. More data would allow the deployment of models with increased complexity to analyze and evaluate interactions between variables, as well as capture nonlinearities in the data. Other input variables could include pregnancy complications, such as hypertension, gestational diabetes, and existing infections, to forecast the probabilities of labor and delivery complications such as postpartum hemorrhage and abnormal fetal heart rate; however, inclusion of more inputs in predictive models would require a larger dataset.
Conclusions
Leveraging minute-by-minute vital sign data, we built and validated a simple statistical model to identify patients who may develop fever during labor and therefore present a risk of EOS for their newborns. This model is another example of how continuous vital sign data can be used to build machine learning models that objectively quantify risk for clinical application and treatment optimization. By using non-invasive, real-time monitoring, this model could prevent unnecessary NICU admissions and decrease health costs associated with the treatment of newborns for sepsis.
Acknowledgements
We wish to thank the labor and delivery nursing staff at Katz Women's Hospital for their valuable support on this project.
Footnotes
Contributorship: SD, RK, BW, and TPZ researched literature and conceived the study. RK, NS, and DP were involved in protocol development, ethical approval, patient recruitment, and data acquisition. SD and TPZ derived the model framework, and SD analyzed the data and prepared all figures. SD, RK, BW, and TPZ wrote the manuscript. All authors discussed results, provided critical feedback, and edited and approved the final version of the manuscript.
Declaration of conflicting interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Ethical approval: This study was approved by the Institutional Review Board at Northwell Health, and informed consent was obtained from all participants.
Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was funded by a Stacey and Steven Hoffman Clinical Care Innovations Grant from the Katz Institute for Women's Health Innovation Grants Program.
Guarantor: TZ.
ORCID iD: Theodoros P Zanos https://orcid.org/0000-0002-9204-9551
References
1. Bailit JL, Gregory KD, Reddy UM, et al. Maternal and neonatal outcomes by labor onset type and gestational age. Am J Obstet Gynecol 2010; 202: 245.e1–245.e12.
2. Weston EJ, Pondo T, Lewis MM, et al. The burden of invasive early-onset neonatal sepsis in the United States, 2005–2008. Pediatr Infect Dis J 2011; 30: 937–941.
3. Cohen-Wolkowiez M, Moran C, Benjamin DK, et al. Early and late onset sepsis in late preterm infants. Pediatr Infect Dis J 2009; 28: 1052–1056.
4. Committee opinion no. 712: intrapartum management of intraamniotic infection. Obstet Gynecol 2017; 130: e95–e101.
5. Puopolo KM, Draper D, Wi S, et al. Estimating the probability of neonatal early-onset infection on the basis of maternal risk factors. Pediatrics 2011; 128: e1155–e1163.
6. Puopolo KM, Lynfield R, Cummings JJ, et al. Management of infants at risk for group B streptococcal disease. Pediatrics 2019; 144: e20191881.
7. Leonardi BM, Binder M, Griswold KJ, et al. Utilization of a neonatal early-onset sepsis calculator to guide initial newborn management. Pediatr Qual Saf 2019; 4: e214.
8. Bonnici T. Early detection of inpatient deterioration using wearable monitors. Doctoral Dissertation, 2019. DOI: 10.13140/RG.2.2.26843.92967.
9. Weenk M, Bredie SJ, Koeneman M, et al. Continuous monitoring of vital signs in the general ward using wearable devices: randomized controlled trial. J Med Internet Res 2020; 22: e15471.
10. Areia C, Biggs C, Santos M, et al. The impact of wearable continuous vital sign monitoring on deterioration detection and clinical outcomes in hospitalised patients: a systematic review and meta-analysis. Crit Care 2021; 25: 351.
11. Smuck M, Odonkor CA, Wilt JK, et al. The emerging clinical role of wearables: factors for successful implementation in healthcare. npj Digit Med 2021; 4: 1–8.
12. Brown H, Terrence J, Vasquez P, et al. Continuous monitoring in an inpatient medical-surgical unit: a controlled clinical trial. Am J Med 2014; 127: 226–232.
13. Downey CL, Chapman S, Randell R, et al. The impact of continuous versus intermittent vital signs monitoring in hospitals: a systematic review and narrative synthesis. Int J Nurs Stud 2018; 84: 19–27.
14. Koppel R, Debnath S, Zanos TP, et al. Efficacy of continuous monitoring of maternal temperature during labor using wireless axillary sensors. J Clin Monit Comput 2022; 36: 103–107.
15. Evans L, Rhodes A, Alhazzani W, et al. Executive summary: surviving sepsis campaign: international guidelines for the management of sepsis and septic shock 2021. Crit Care Med 2021; 49: 1974–1982.
16. Hendriksen JMT, Geersing GJ, Moons KGM, et al. Diagnostic and prognostic prediction models. J Thromb Haemostasis 2013; 11: 129–141.
17. Churpek MM, Adhikari R, Edelson DP. The value of vital sign trends for detecting clinical deterioration on the wards. Resuscitation 2016; 102: 1–5.
18. Kipnis P, Turk BJ, Wulf DA, et al. Development and validation of an electronic medical record-based alert score for detection of inpatient deterioration outside the ICU. J Biomed Inform 2016; 64: 10–19.
19. Janke AT, Overbeek DL, Kocher KE, et al. Exploring the potential of predictive analytics and big data in emergency care. Ann Emerg Med 2016; 67: 227–236.
20. Levin S, Toerper M, Hamrock E, et al. Machine-learning-based electronic triage more accurately differentiates patients with respect to clinical outcomes compared with the emergency severity index. Ann Emerg Med 2018; 71: 565–574.e2.
21. Raita Y, Goto T, Faridi MK, et al. Emergency department triage prediction of clinical outcomes using machine learning models. Crit Care 2019; 23: 64.
22. Brekke IJ, Puntervoll LH, Pedersen PB, et al. The value of vital sign trends in predicting and monitoring clinical deterioration: a systematic review. PLoS One 2019; 14: e0210875.
23. Tomašev N, Glorot X, Rae JW, et al. A clinically applicable approach to continuous prediction of future acute kidney injury. Nature 2019; 572: 116–119.
24. Brajer N, Cozzi B, Gao M, et al. Prospective and external evaluation of a machine learning model to predict in-hospital mortality of adults at time of admission. JAMA Network Open 2020; 3: e1920733.
25. Debnath S, Barnaby DP, Coppa K, et al. Machine learning to assist clinical decision-making during the COVID-19 pandemic. Bioelectron Med 2020; 6: 14.
26. Tóth V, Meytlis M, Barnaby DP, et al. Let sleeping patients lie, avoiding unnecessary overnight vitals monitoring using a clinically based deep-learning model. npj Digit Med 2020; 3: 1–9.
27. Bolourani S, Brenner M, Wang P, et al. A machine learning prediction model of respiratory failure within 48 hours of patient admission for COVID-19: model development and validation. J Med Internet Res 2021; 23: e24246.
28. Choi A, Chung K, Chung SP, et al. Advantage of vital sign monitoring using a wireless wearable device for predicting septic shock in febrile patients in the emergency department: a machine learning-based analysis. Sensors 2022; 22: 7054.
29. Levy TJ, Coppa K, Cang J, et al. Development and validation of self-monitoring auto-updating prognostic models of survival for hospitalized COVID-19 patients. Nat Commun 2022; 13: 6812.
30. Shipe ME, Deppen SA, Farjah F, et al. Developing prediction models for clinical use using logistic regression: an overview. J Thorac Dis 2019; 11: S574–S584.
31. Nusinovici S, Tham YC, Chak Yan MY, et al. Logistic regression was as good as machine learning for predicting major chronic diseases. J Clin Epidemiol 2020; 122: 56–69.
32. Eftekhar B, Mohammad K, Ardebili HE, et al. Comparison of artificial neural network and logistic regression models for prediction of mortality in head trauma based on initial clinical data. BMC Med Inform Decis Mak 2005; 5: 3.
33. Bilimoria KY, Liu Y, Paruch JL, et al. Development and evaluation of the universal ACS NSQIP surgical risk calculator: a decision aid and informed consent tool for patients and surgeons. J Am Coll Surg 2013; 217: 833–842.e3.
34. Kurt I, Ture M, Kurum AT. Comparing performances of logistic regression, classification and regression tree, and neural networks for predicting coronary artery disease. Expert Syst Appl 2008; 34: 366–374.
35. McWilliams A, Tammemagi MC, Mayo JR, et al. Probability of cancer in pulmonary nodules detected on first screening CT. N Engl J Med 2013; 369: 910–919.
36. Johnson P, Vandewater L, Wilson W, et al. Genetic algorithm with logistic regression for prediction of progression to Alzheimer’s disease. BMC Bioinf 2014; 15: S11.
37. Ahmad S, Tejuja A, Newman KD, et al. Clinical review: a review and analysis of heart rate variability and the diagnosis and prognosis of infection. Crit Care 2009; 13: 232.
38. Billman GE, Huikuri HV, Sacha J, et al. An introduction to heart rate variability: methodological considerations and clinical applications. Front Physiol 2015; 6: 55.
39. Shaffer F, Ginsberg JP. An overview of heart rate variability metrics and norms. Front Public Health 2017; 5: 258.
40. Griffin MP, Lake DE, Bissonette EA, et al. Heart rate characteristics: novel physiomarkers to predict neonatal infection and death. Pediatrics 2005; 116: 1070–1074.
41. Tang CHH, Chan GSH, Middleton PM, et al. Spectral analysis of heart period and pulse transit time derived from electrocardiogram and photoplethysmogram in sepsis patients. In: 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2009, pp. 1781–1784.
42. Ahmad S, Ramsay T, Huebsch L, et al. Continuous multi-parameter heart rate variability analysis heralds onset of sepsis in adults. PLoS One 2009; 4: e6642.
43. Swor DE, Thomas LF, Maas MB, et al. Admission heart rate variability is associated with fever development in patients with intracerebral hemorrhage. Neurocrit Care 2019; 30: 244–250.
44. Williams DP, Koenig J, Carnevali L, et al. Heart rate variability and inflammation: a meta-analysis of human studies. Brain Behav Immun 2019; 80: 219–226.
45. De Castilho FM, Ribeiro ALP, Da Silva JLP, et al. Heart rate variability as predictor of mortality in sepsis: a prospective cohort study. PLoS One 2017; 12: e0180060.
46. Nemati S, Holder A, Razmi F, et al. An interpretable machine learning model for accurate prediction of sepsis in the ICU. Crit Care Med 2018; 46: 547–553.
47. Samsudin MI, Liu N, Prabhakar SM, et al. A novel heart rate variability based risk prediction model for septic patients presenting to the emergency department. Medicine (Baltimore) 2018; 97: e10866.
48. Barnaby D, Ferrick K, Kaplan DT, et al. Heart rate variability in emergency department patients with sepsis. Acad Emerg Med 2002; 9: 661–670.
49. Chiew CJ, Liu N, Tagami T, et al. Heart rate variability based machine learning models for risk prediction of suspected sepsis patients in the emergency department. Medicine (Baltimore) 2019; 98: e14197.
50. Liu N, Chee ML, Foo MZQ, et al. Heart rate n-variability (HRnV) measures for prediction of mortality in sepsis patients presenting at the emergency department. PLoS One 2021; 16: e0249868.
51. Chen W-L, Chen J-H, Huang C-C, et al. Heart rate variability measures as predictors of in-hospital mortality in ED patients with sepsis. Am J Emerg Med 2008; 26: 395–401.
52. Oh J, Cho D, Park J, et al. Prediction and early detection of delirium in the intensive care unit by using heart rate variability and machine learning. Physiol Meas 2018; 39: 035004.
53. Bodenes L, N’Guyen Q-T, Le Mao R, et al. Early heart rate variability evaluation enables to predict ICU patients’ outcome. Sci Rep 2022; 12: 2498.
54. Stavrakis S, Kulkarni K, Singh JP, et al. Autonomic modulation of cardiac arrhythmias: methods to assess treatment and outcomes. JACC Clin Electrophysiol 2020; 6: 467–483.
55. Debnath S, Levy TJ, Bellehsen M, et al. A method to quantify autonomic nervous system function in healthy, able-bodied individuals. Bioelectron Med 2021; 7: 13.
56. Munoz ML, Van Roon A, Riese H, et al. Validity of (ultra-)short recordings for heart rate variability measurements. PLoS One 2015; 10: e0138921.
57. Kang JH, Kim JK, Hong SH, et al. Heart rate variability for quantification of autonomic dysfunction in fibromyalgia. Ann Rehabil Med 2016; 40: 301–309.
58. Umetani K, Singer DH, McCraty R, et al. Twenty-four hour time domain heart rate variability and heart rate: relations to age and gender over nine decades. J Am Coll Cardiol 1998; 31: 593–601.
59. Laborde S, Mosley E, Thayer JF. Heart rate variability and cardiac vagal tone in psychophysiological research – recommendations for experiment planning, data analysis, and data reporting. Front Psychol 2017; 8: 213.
60. Zanos TP. Recording and decoding of vagal neural signals related to changes in physiological parameters and biomarkers of disease. Cold Spring Harb Perspect Med 2019; 9: a034157.
61. Perkes I, Baguley IJ, Nott MT, et al. A review of paroxysmal sympathetic hyperactivity after acquired brain injury. Ann Neurol 2010; 68: 126–135.
62. Kox M, Pickkers P. Modulation of the innate immune response through the vagus nerve. NEF 2015; 131: 79–84.
63. Chavan SS, Pavlov VA, Tracey KJ. Mechanisms and therapeutic relevance of neuro-immune communication. Immunity 2017; 46: 927–942.