Heliyon. 2025 Jan 15;11(2):e41994. doi: 10.1016/j.heliyon.2025.e41994

The role of continuous monitoring in acute-care settings for predicting all-cause 30-day hospital readmission: A pilot study

Michael Joseph Pettinati a, Kyriakos Vattis a, Henry Mitchell b, Nicole Alexis Rosario b, David Michael Levine b,c, Nandakumar Selvaraj a
PMCID: PMC11787643  PMID: 39897919

Abstract

Background

Accurate prediction and prevention of hospital readmission remain a clinical challenge. The influence of different data sources, including remotely monitored continuous vital signs and activity, on the performance of machine learning (ML) models is examined for predicting all-cause unplanned 30-day readmission.

Methods

Patients (n = 354) recruited in the emergency department and admitted to acute care in either hospital or home hospital settings were analyzed. Data sources included continuous vital signs and activity, electronic health record (EHR) data (episodic physiological monitoring of laboratory and vital signs, demographics, and hospital utilization history), and quality-of-life survey measures. Five machine learning classifiers were systematically trained for readmission prediction by varying the input data sources. The performances of the ML models, as well as the standard-of-care HOSPITAL score for readmission, were assessed with area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC) statistics.

Results

There were 29 patients readmitted out of the 354 included patients (an 8.2 % readmission rate). The average five-fold cross-validation AUROC and AUPRC scores of the five readmission models ranged from 0.76 to 0.84 (P > .05) and from 0.23 to 0.49 (P < .05), respectively. The model using episodic physiological monitoring (vitals and labs) had an AUPRC of 0.23 ± 0.07, while the model using continuous vitals and activity data together with episodic vitals and laboratory measurements had an AUPRC of 0.49 ± 0.10 (P < .005). The HOSPITAL score had an AUROC of 0.62 and an AUPRC of 0.16 in this pilot study.

Conclusions

The systematic ML modeling and analysis showcased the diversity in predictive power across patient data sources for predicting readmission. This pilot study suggests that continuous vital signs and activity data, when added to episodic physiological monitoring, boost performance. The HOSPITAL score shows low predictive power for readmission in this population. Predictive modeling of unplanned 30-day readmission improves with continuous vital signs and activity monitoring.

Keywords: Monitoring, Physiologic, Patient readmission, Remote sensing technology, Home environment, Home hospital

1. Introduction

1.1. Background and significance

Unplanned 30-day hospital readmissions have profound safety risks as well as emotional and economic costs for patients. Patients who end up back in the hospital are at risk for a host of avoidable adverse events and mental/psychological issues due to the possibility of extended length of stay [[1], [2], [3], [4]]. Readmissions have cost Medicare billions of dollars dating back to the 1970s [5] and, more recently, with the introduction of the Hospital Readmissions Reduction Program (HRRP), have cost hospitals hundreds of millions of dollars in penalties each year [6,7].

There are a variety of approaches being undertaken to address this problem. Telemedicine and home hospital programs have been developed and adapted to reduce readmission rates [[8], [9], [10], [11]]. Additionally, the readmission problem has been increasingly addressed by developing tools to predict 30-day unplanned hospital readmission and thereby identify patients at high risk of readmission [[12], [13], [14], [15], [16], [17]]. Machine learning-based models assessing an individual patient's risk for unplanned readmission have shown widely varying performances and limitations in generalizing to novel patient populations [18]. Two of the most widely used and validated tools to assess patient risk for all-cause unplanned readmission are the HOSPITAL and LACE scores [[19], [20], [21], [22]]. Numerous studies, however, suggest these standard tools may not be equally discriminative across diverse demographics; differences in a cohort's gender and age composition, medical conditions, and similar factors make large differences in the tools' performance [[23], [24], [25], [26]].

The difficulty of predicting all-cause unplanned hospital readmission arises from the heterogeneity of what causes readmission [27,28]. When developing a tool to predict a patient's risk for all-cause unplanned readmission, all available input data can be leveraged, and the model parameters can be determined to best separate the readmitted from the not readmitted patient population. This approach, however, provides limited insight into the predictive power of the different data sources for this heterogeneous readmission problem.

This pilot study explores a systematic approach to examine and incorporate diverse input data sources including remotely monitored continuous vital signs and activity to build machine learning-based models for predicting all-cause unplanned 30-day readmission.

1.2. Objective

The pilot study aims to retrospectively delineate the predictive power of numerous patient data types (such as continuous vital signs and activity monitoring; episodic physiological electronic health record (EHR) data, including vital signs and laboratory monitoring; other EHR information, including demographic and utilization information; and quality-of-life (QoL) surveys) for the prediction of all-cause unplanned 30-day readmission. Systematic inclusion of these diverse data as input into machine learning (ML) models is explored for predicting 30-day hospital readmission, and the corresponding performances are evaluated. The study also characterizes the performance of the standard-of-care HOSPITAL score for predicting readmission.

2. Materials and methods

2.1. Dataset and patient cohort overview

The total gathered patient cohort consisted of 372 patients with data collected by Brigham and Women's Hospital (BWH), an academic hospital in Boston, MA. The patients were recruited in the emergency department (ED) of BWH and were admitted between June 2017 and October 2019 (NCT 03203759, NCT02864420). This collective dataset comprises 32 patients receiving care in a hospital setting and 340 patients receiving care in a home hospital setting [8,[29], [30], [31]]. Additional information about the home hospital setting, which was shown to provide care at least comparable to the traditional hospital setting, can be found in the cited resources. Continuous remote monitoring of vital signs, heart rate (HR) and respiration rate (RR), and activity data was enabled using the FDA-cleared VitalPatch Biosensor (VitalConnect Inc, San Jose, CA), hereinafter referred to as the Patch Sensor [[32], [33], [34]]. Patients were monitored with this Patch Sensor throughout their length of stay. EHR data comprising the patients' medical histories, medications, utilization histories, and manually recorded vital signs and laboratory measurements were also acquired in addition to continuous monitoring. Finally, several QoL measures were administered at the times of admission and discharge, attempting to capture the psychological and physical state of the patient from their perspective [35,36]. All patients recruited during the study period were incorporated into this pilot analysis. The extensive data collection performed for each patient in this study, e.g., the continuous wearable monitoring and the QoL measures, is resource intensive and is not common in other datasets. The models, developed retrospectively, predicted all-cause unplanned 30-day readmission using the continuous wearable and EHR data acquired during the acute care period up until discharge.

A set of inclusion and exclusion criteria was applied to screen and prepare the dataset. Patients with a mortality event post-discharge and those discharged to hospice were excluded from the dataset because such severe adverse events can be unrelated to 30-day hospital readmission; a separate specialized mortality screening would be more appropriate to predict such critical events. Additionally, there were patients monitored in a home hospital setting who had to return to the hospital setting during the study. Patients who returned to the hospital from a home hospital were removed because their enrollment and data collection ended before discharge. Patient recordings with under 3 h of patch data or invalid EHR data, such as measurements outside the possible physiological range or typographical errors, were removed.

The screening process resulted in a 354-patient cohort for analysis.

2.2. Feature development

Broadly, there were four feature categories derived from the raw data summarized in Table 1: Patch Sensor-related features (derivations of patch data including step counts and ambulation-related parameters), episodic physiological monitoring features from manually recorded vital signs and labs, survey-related features (QoL measures), and other EHR-related features such as patient demographic and utilization-related information.

Table 1.

The data types from which features were extracted along with the summary of the feature development process.

Feature Group | Feature Description

Patch Sensor
  • Vital signs: heart rate, respiration rate, activity
  • Summarizations: extremes, statistical moments, univariate variance, covariance
  • Step count and ambulation parameters [18]

Episodic Physiological Monitoring (Vitals and Labs)
  • Vital signs: HR, RR, SBP, DBP, SpO2, temperature, weight
  • Laboratory results: 20 common lab results
  • Summarizations: extremes, statistical moments, overall trends

Quality of Life Survey
  • 7 unique standard QoL surveys
  • Summarizations: individual answers, summation of scales, differences admission to discharge

Demographics and Utilization
  • Previous utilization history
  • Comorbidities
  • Chronic conditions
  • Demographics: age, gender, race, insurance, employment, BMI at admission
  • Summarizations: counts, groupings

There is a single decision point for our all-cause 30-day unplanned readmission models: the time the patient is discharged from the in-hospital or hospital-at-home setting. Therefore, features eligible for inclusion in the models were summarizations of the entirety of the health record and continuous wearable data for the patient stay. For the episodic physiological monitoring data (episodic vitals and labs), features were derived from the extreme values, statistical moments, and differences and ranges over time in the extremes and statistical moments. For the wearable data, features were likewise derived from extreme values and statistical moments; additional features were derived from signal variances and covariances. These were computed for different times of day, to capture disrupted circadian rhythms, and for different portions of the patient journey, to allow comparisons across the patient's acute stay. Surveys and quality-of-life measures were completed by the patients at both admission and discharge; features derived from these data points related to the patients' feelings and well-being at those times in isolation, as well as how responses changed between the two times. Finally, demographics and previous utilization-related information were used to generate features, including patients' ages, genders, races, categorical insurance information, body mass indices, and trips to the emergency room and admissions in the last six months. All features were included in the development processes described below and were eligible to be selected as part of the respective models.
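
To make this summarization step concrete, here is a minimal sketch in Python (not the authors' code) of stay-level feature derivation, assuming a pandas DataFrame `patch` indexed by timestamp with columns hr, rr, and activity; the day/night window boundaries are hypothetical choices.

```python
import numpy as np
import pandas as pd

def summarize_stay(patch: pd.DataFrame) -> dict:
    """Summarize a continuous HR/RR/activity stream into stay-level features."""
    feats = {}
    for col in ["hr", "rr", "activity"]:
        x = patch[col].dropna()
        feats[f"{col}_min"] = x.min()        # extremes
        feats[f"{col}_max"] = x.max()
        feats[f"{col}_mean"] = x.mean()      # statistical moments
        feats[f"{col}_std"] = x.std()
        feats[f"{col}_skew"] = x.skew()
        feats[f"{col}_var"] = x.var()        # univariate variance
    # pairwise covariances, e.g., activity vs. heart rate
    for a, b in [("activity", "hr"), ("activity", "rr"), ("hr", "rr")]:
        feats[f"cov_{a}_{b}"] = patch[a].cov(patch[b])
    # repeat key summaries by time of day to probe disrupted circadian rhythms
    # (the 08:00-20:00 day window is a hypothetical choice)
    hours = patch.index.hour
    day = (hours >= 8) & (hours < 20)
    feats["hr_mean_day"] = patch.loc[day, "hr"].mean()
    feats["hr_mean_night"] = patch.loc[~day, "hr"].mean()
    return feats
```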

To systematically vary the input data and understand how additional input data types influenced predictive algorithms classifying all-cause unplanned 30-day readmission, five model types were developed: models using only the Patch Sensor-related features, models using only episodic vital signs and labs data, models using features from both the Patch Sensor and episodic vital signs and labs data, models using Patch Sensor and survey-related features, and a model using the Entire Feature Set, including the Patch Sensor, episodic vital sign and lab measurements, demographics, utilization, and QoL survey data.
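
These five input configurations can be expressed as unions of the Table 1 feature groups. The group labels below are illustrative stand-ins, not the study's actual variable names.

```python
# Hypothetical labels for the Table 1 feature groups and the five
# input configurations evaluated in this study.
FEATURE_GROUPS = ["patch", "episodic_vitals_labs", "qol_survey",
                  "demographics_utilization"]

MODEL_INPUTS = {
    "patch_only":          ["patch"],
    "episodic_only":       ["episodic_vitals_labs"],
    "patch_plus_episodic": ["patch", "episodic_vitals_labs"],
    "patch_plus_survey":   ["patch", "qol_survey"],
    "entire_feature_set":  FEATURE_GROUPS,
}
```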

2.3. Model development and evaluation

The models developed in this work were XGBoost models [37], which bring together the concepts of gradient boosting and decision trees. Multiple decision trees are created sequentially; each new tree tries to improve on the classification of previously missed data points. The output is an ensemble prediction, a summation of each individual tree's prediction. The primary motivation of this study was to understand how different data types, particularly the remotely monitored continuous vital signs, may contribute to the separability of readmitted and not readmitted patients. XGBoost was employed due to its ability to handle missing values without the need for potentially biasing imputation.
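
A minimal sketch of such a classifier follows, using the xgboost Python package; the toy data and hyperparameter values are placeholders, not those of the study. The NaN entries illustrate the native missing-value routing noted above.

```python
import numpy as np
from xgboost import XGBClassifier

# toy feature matrix with missing values; XGBoost routes NaNs through the
# trees natively, so no imputation step is required
X = np.array([[72.0, np.nan, 3.1],
              [88.0, 17.0, np.nan],
              [64.0, 14.0, 2.2],
              [95.0, 22.0, 4.8]])
y = np.array([0, 0, 0, 1])              # 1 = readmitted within 30 days

model = XGBClassifier(
    n_estimators=100,                   # number of sequentially boosted trees
    max_depth=3,
    scale_pos_weight=10.0,              # upweight the rare readmitted class
    eval_metric="logloss",
)
model.fit(X, y)
print(model.predict_proba(X)[:, 1])     # ensemble readmission probabilities
```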

The model development process began by randomly splitting patients into five balanced folds that preserved the proportion of readmitted and not readmitted patients. Candidate XGBoost models were developed by combining four folds into a training set while withholding the fifth as a test set, with the process repeated until every fold had served as the test set. During training, the true positive class (readmitted patients) was weighted more heavily than the not readmitted patients to address the class imbalance and the importance of sensitivity to readmission. Feature selection proceeded within each fold, with the most predictive features learned from the four training folds applied to the reserved test fold. Between the folds, different features were found to be predictive of 30-day unplanned readmission (feature instability).
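
A sketch of this fold construction and class weighting follows, using scikit-learn's StratifiedKFold on synthetic stand-in data; the ~8 % positive rate mirrors the cohort, but nothing else here is from the study.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(354, 20))                  # stand-in feature matrix
y = (rng.random(354) < 0.08).astype(int)        # ~8 % readmission rate

# stratified folds preserve the readmitted/not readmitted proportion
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    # weight readmitted patients more heavily to counter class imbalance
    n_neg = (y[train_idx] == 0).sum()
    n_pos = max((y[train_idx] == 1).sum(), 1)
    clf = XGBClassifier(scale_pos_weight=n_neg / n_pos, eval_metric="logloss")
    clf.fit(X[train_idx], y[train_idx])
    held_out = clf.predict_proba(X[test_idx])[:, 1]   # test-fold predictions
```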

Additional approaches were employed to better understand this classification problem. Candidate models were developed by dropping features that showed limited information gain when averaging the information gain of features across the five training subsets, and retaining features that showed high average information gain across the five training sets. Final models were selected based on high average area under the receiver operating characteristic curve (AUROC) performance on the five test folds, high area under the precision-recall curve (AUPRC) performance on the five test folds, low standard deviations in model performance across the five test folds, and logical patterns in the final features incorporated. Using the average feature importance across the folds mitigates the feature instability problem and improves generalization to a great extent. The chosen feature sets were tested using repeated five-fold cross-validation to understand how dependent the reported performances were on the initial five-fold split. The repeated five-fold cross-validation used 20 random splits.
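
The gain-based screening and the repeated re-splitting could look roughly like the following sketch, again on synthetic stand-in data; the median cutoff is an illustrative threshold, not the study's criterion.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, RepeatedStratifiedKFold
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(354, 20))
y = (rng.random(354) < 0.08).astype(int)

# average per-feature information gain over the five training folds
gain_sum = np.zeros(X.shape[1])
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, _ in skf.split(X, y):
    clf = XGBClassifier(eval_metric="logloss").fit(X[train_idx], y[train_idx])
    # keys look like "f0", "f1", ... when training on a bare numpy array
    for name, g in clf.get_booster().get_score(importance_type="gain").items():
        gain_sum[int(name[1:])] += g
avg_gain = gain_sum / 5
keep = np.flatnonzero(avg_gain > np.median(avg_gain))  # illustrative cutoff

# 20 random re-splits to test sensitivity to the initial fold assignment
rskf = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
```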

The model evaluation process included three distinct aspects. First, standard metrics are reported with respect to the models' performances. The average AUROC and AUPRC are presented for the test sets of the five-fold cross-validation for the models devised through average information gain across the five folds. The standard deviations of these metrics are also reported. A Shapiro-Wilk test was used to check the normality of the models' performance metrics (AUROC and AUPRC). A one-way repeated-measures analysis of variance (ANOVA) test was performed to look for differences in performance between the five-fold models. If a significant difference (p < .05) was found using the ANOVA, paired t-tests with Bonferroni correction were used to test for significant differences (p < .005) between pairs of models. The sensitivity, specificity, and precision are reported for these models at the threshold that maximizes the product of specificity and sensitivity. The repeated five-fold cross-validation results (AUROC and AUPRC averages and standard deviations) are also reported for these models using micro averaging across each individual fold.
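
As a sketch of this statistical workflow, the following applies the named tests with scipy and statsmodels to synthetic per-fold AUPRC values; the real values are those reported in Section 3.

```python
import numpy as np
import pandas as pd
from itertools import combinations
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# `auprc` holds one AUPRC per (fold, model); synthetic values stand in here
rng = np.random.default_rng(0)
models = ["episodic", "patch", "patch_survey", "patch_episodic", "all"]
auprc = rng.uniform(0.2, 0.5, size=(5, len(models)))

long = pd.DataFrame([{"fold": f, "model": m, "auprc": auprc[f, j]}
                     for f in range(5) for j, m in enumerate(models)])

for m in models:                                 # Shapiro-Wilk normality check
    _, p_norm = stats.shapiro(long.loc[long["model"] == m, "auprc"])

# one-way repeated-measures ANOVA across the five models (folds as subjects)
anova = AnovaRM(long, depvar="auprc", subject="fold", within=["model"]).fit()
if anova.anova_table["Pr > F"].iloc[0] < 0.05:
    pairs = list(combinations(range(len(models)), 2))
    for i, j in pairs:                           # Bonferroni: .05 / 10 = .005
        _, p = stats.ttest_rel(auprc[:, i], auprc[:, j])
        significant = p < 0.05 / len(pairs)
```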

AUROC/AUPRC metrics do not require selecting a decision boundary and optimizing a single model; rather, this approach evaluates the overall efficacy across all possible operating points. For a prospective pivotal validation and deployment, a single model is typically optimized based on the product requirements and/or intended use. In such undertakings, other performance metrics (e.g., F1 score, sensitivity, specificity, etc.) would be computed and reported using the single optimized readmission model. This is beyond the focus of this pilot investigation.

SHapley Additive exPlanations (SHAP) values [38] were used to assess the contributions of all features to the models' outputs. The intuition and the underlying iterative calculation processes for SHAP values are captured elsewhere [38]. In the context of this pilot study, a larger-magnitude SHAP value for a certain feature value indicates a greater contribution to the readmission prediction. If the feature value increases the likelihood of readmission irrespective of other features, then that feature value will have a positive SHAP value; if it reduces the chances of readmission irrespective of other features, it will have a negative SHAP value.

The SHAP summarization plots shown are the SHAP values for each feature value from the XGBoost models trained using all 354 patients' data. Visualizing all these SHAP values together can reveal patterns in how certain feature values influence the model's readmission prediction. For example, higher or lower values of certain features may correspond to a higher contribution to the model's predicted likelihood of readmission.
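
A minimal sketch of how such a summary plot can be produced with the shap package follows, using a synthetic stand-in for the trained model and feature matrix.

```python
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(354, 10))           # stand-in for the feature matrix
y = (rng.random(354) < 0.08).astype(int)
model = XGBClassifier(eval_metric="logloss").fit(X, y)

explainer = shap.TreeExplainer(model)    # exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X)   # one value per patient per feature
# positive values push the prediction toward readmission, negative away from it
shap.summary_plot(shap_values, X)
```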

Finally, the HOSPITAL score, as the current standard of care, was applied to the whole study cohort. AUROC and AUPRC performance metrics were then calculated using the HOSPITAL score model's output. These performance metrics served as a reference against which to compare our models' performances.
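
For reference, a sketch of the HOSPITAL score computation is shown below. The component thresholds and point values are reproduced from memory of Donzé et al. [19] and should be verified against that publication before any use; this is not code from the present study.

```python
def hospital_score(hemoglobin_discharge, oncology_service, sodium_discharge,
                   procedure_during_stay, nonelective_admission,
                   admissions_past_year, length_of_stay_days):
    """HOSPITAL score sketch; thresholds/points per Donze et al. [19], from memory."""
    score = 0
    score += 1 if hemoglobin_discharge < 12 else 0   # H: low hemoglobin (g/dL)
    score += 2 if oncology_service else 0            # O: oncology discharge
    score += 1 if sodium_discharge < 135 else 0      # S: low sodium (mmol/L)
    score += 1 if procedure_during_stay else 0       # P: any coded procedure
    score += 1 if nonelective_admission else 0       # IT: urgent/emergent index type
    if admissions_past_year > 5:                     # A: prior-year admissions
        score += 5
    elif admissions_past_year >= 2:
        score += 2
    score += 2 if length_of_stay_days >= 5 else 0    # L: length of stay
    return score                                     # 0-13; higher = higher risk
```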

This study was approved by the Mass General IRB (eProtocol # IRB-2017P002583).

3. Results

The study cohort consisted of 354 participants (67.7 ± 18.8 years, male/female: 146/208), of whom 29 patients (74.1 ± 13.55 years, male/female: 15/14) had a hospital readmission within 30 days of discharge. These 354 patients were included from the original 372 as shown in Fig. 1. Further demographic characteristics of the study cohort are shown in Table 2. The cohort is highly diverse, with broad representation across age groups, genders, races, socioeconomic groupings, and discharge diagnoses; the diagnoses span cardiology, pulmonology, and infectious disease. For the 29 readmitted patients, Table 3 summarizes the diagnoses at the time of hospital readmission, showcasing the diverse underlying disease conditions leading to readmission, in some cases mirroring the discharge diagnoses and in others unrelated to the index diagnosis.

Fig. 1.

Inclusion/Exclusion flow diagram. Patients were excluded for mortality within 30 days of discharge, under 3 h of remote continuous monitoring with the wearable Patch Sensor, or data-entry errors such as out-of-range vital sign values in the manually recorded electronic health record.

Table 2.

The demographic composition of the patient cohort.

Demographic | Readmitted (N = 29) | Not Readmitted (N = 325) | Total Population (N = 354)

Age, years (±SD) | 74.07 (13.55) | 67.12 (19.14) | 67.69 (18.84)

Sex
  Male | 15 (51.7 %) | 131 (40.3 %) | 146 (41.2 %)
  Female | 14 (48.3 %) | 194 (59.7 %) | 208 (58.8 %)

Race
  White | 17 (58.6 %) | 141 (43.4 %) | 158 (44.6 %)
  Black | 9 (31.0 %) | 69 (21.2 %) | 78 (22.0 %)
  Latin@ | 2 (6.9 %) | 95 (29.2 %) | 97 (27.1 %)
  Asian | 0 (0.0 %) | 5 (1.5 %) | 5 (1.4 %)
  Other | 1 (3.4 %) | 15 (4.6 %) | 16 (4.5 %)

Language
  English | 26 (89.7 %) | 229 (70.5 %) | 255 (72.0 %)
  Spanish | 1 (3.4 %) | 76 (23.4 %) | 77 (21.8 %)
  Creole | 0 (0.0 %) | 7 (2.2 %) | 7 (2.0 %)
  Other | 2 (6.9 %) | 12 (3.7 %) | 14 (4.0 %)
  Not Recorded | 0 (0.0 %) | 1 (0.3 %) | 1 (0.3 %)

Employment
  Not Recorded | 1 (3.4 %) | 13 (4.0 %) | 14 (4.0 %)
  Employed | 4 (13.8 %) | 81 (24.9 %) | 85 (24.0 %)
  Unemployed | 2 (6.9 %) | 58 (17.8 %) | 60 (16.9 %)
  Retired | 22 (75.9 %) | 173 (53.2 %) | 195 (55.1 %)

Education
  < High School | 5 (17.2 %) | 92 (28.3 %) | 97 (27.4 %)
  High School | 8 (27.6 %) | 77 (23.7 %) | 85 (24.0 %)
  <4yr College | 2 (6.9 %) | 51 (15.7 %) | 53 (15.0 %)
  4yr College | 5 (17.2 %) | 47 (14.5 %) | 52 (14.7 %)
  >4yr College | 6 (20.7 %) | 44 (13.5 %) | 50 (14.1 %)
  Unreported/Undefined | 3 (10.3 %) | 14 (4.3 %) | 17 (4.8 %)

BMI (±SD); not recorded for 4 not readmitted patients | 28.88 (9.76) | 30.40 (8.71) | 30.27 (8.81)

Discharge Diagnosis
  Heart Failure | 10 (34.5 %) | 52 (16.0 %) | 62 (17.5 %)
  Pneumonia | 4 (13.8 %) | 43 (13.2 %) | 47 (13.3 %)
  Other Infection | 4 (13.8 %) | 29 (8.9 %) | 33 (9.3 %)
  Skin and Soft Tissue Infection | 3 (10.3 %) | 52 (16.0 %) | 55 (15.5 %)
  Complicated Urinary Tract Infection/Pyelonephritis | 3 (10.3 %) | 61 (18.8 %) | 64 (18.1 %)
  Asthma | 2 (6.9 %) | 20 (6.2 %) | 22 (6.2 %)
  Chronic Obstructive Pulmonary Disease | 1 (3.4 %) | 29 (8.9 %) | 30 (8.5 %)
  Diabetes Mellitus | 1 (3.4 %) | 16 (4.9 %) | 17 (4.8 %)
  Other | 1 (3.4 %) | 12 (3.7 %) | 13 (3.7 %)
  Hypertension | 0 (0.0 %) | 6 (1.8 %) | 6 (1.7 %)
  Afib w/ Rapid Ventricular Response | 0 (0.0 %) | 4 (1.2 %) | 4 (1.1 %)
  End-of-Life | 0 (0.0 %) | 1 (0.3 %) | 1 (0.3 %)

Length of Stay in Days (±SD) | 4.78 (3.45) | 3.55 (2.46) | 3.65 (2.58)

ED Visits Last 6 Months
  0 | 19 (65.5 %) | 212 (65.2 %) | 231 (65.3 %)
  1 | 6 (20.7 %) | 51 (15.7 %) | 57 (16.1 %)
  2 | 1 (3.4 %) | 28 (8.6 %) | 29 (8.2 %)
  >2 | 2 (6.9 %) | 18 (5.5 %) | 20 (5.6 %)
  Not Reported | 1 (3.4 %) | 16 (4.9 %) | 17 (4.8 %)

Table 3.

A summary of the diagnoses of the 29 readmitted patients at the time of their readmission.

Diagnosis at Readmission | Number of Patients
Chronic Heart Failure Exacerbation | 4
Other Cardiac Issue | 3
COPD/Asthma Exacerbation | 3
Infection/Sepsis | 4
Other | 15

The receiver operating characteristic and precision-recall performance curves of the models from the five-fold cross validation are shown in Fig. 2, Fig. 3. The AUROC±SD statistic and AUPRC±SD statistic of the models were as follows: 0.76 ± 0.05 and 0.42 ± 0.08 for Patch Sensor-only, 0.76 ± 0.02 and 0.23 ± 0.07 for episodic physiological monitoring (vitals and labs), 0.83 ± 0.06 and 0.49 ± 0.10 for Patch Sensor and Survey, 0.82 ± 0.04 and 0.49 ± 0.10 for Patch Sensor and episodic physiological monitoring, and 0.84 ± 0.04 and 0.42 ± 0.07 for the Entire Feature Set.

Fig. 2.

Receiver operating characteristic (ROC) curves along with the five-fold cross-validation statistics associated with each model type. A. The Patch Sensor-only model includes only the features derived from continuous streams of heart rate (HR), respiration rate (RR), and activity data from a wearable patch sensor. B. The episodic physiological monitoring model includes features derived from episodically measured vital signs and labs. C. The patch sensor and survey model includes the features derived from the HR, RR, and activity data as well as features derived from surveys administered at the times of admission and discharge. D. The patch sensor and episodic physiological monitoring model includes the continuous HR, RR, and activity-derived features as well as the features derived from episodic vital signs and labs. E. The highest area under the ROC curve statistic belongs to the Entire Feature Set model, which includes the features derived from the continuous patch data, the episodic physiological monitoring data (vitals and labs), as well as the utilization and demographic data. The HOSPITAL score ROC curve is shown alongside the ML model's ROC curve.

Fig. 3.

Precision-recall (PR) curves along with the five-fold cross-validation statistics associated with each model type. The organization of these plots mirrors that described in Fig. 2.

3.1. Model separability comparison

According to a one-way repeated-measures ANOVA, there is a significant difference between the five models (p < .05) for the AUPRC statistics. Pairwise repeated-measures t-tests with Bonferroni correction reveal significant differences (p < .005) between the episodic physiological monitoring (vitals and labs) model and the Patch Sensor and episodic physiological monitoring model. The other model types are not significantly different with respect to the AUPRC (p > .005). There was no significant difference (p > .05) for the AUROC statistics.

3.2. HOSPITAL score

The AUROC and AUPRC of the HOSPITAL score are 0.62 and 0.16, respectively. The ROC and PR curves for the model making use of all data types are shown alongside the HOSPITAL score curves in the bottom right of Fig. 2 and Fig. 3.

3.3. Cross validation model performance

Table 4 summarizes all the performance metrics for the five-fold cross validation and the repeated five-fold cross validation. In the repeated five-fold cross validation, the standard deviation of the AUROC and AUPRC between folds increases substantially. In the case of the model incorporating the Entire Feature Set, the standard deviation of the AUROC doubles from 0.04 to 0.08, and the standard deviation of AUPRC more than doubles from 0.07 to 0.16.

Table 4.

Performance metrics of the five different model types.

CV Method | Metric | Episodic Physiological Monitoring (Vitals and Labs) | Patch Sensor | Patch Sensor + Survey | Patch Sensor + Episodic Physiological Monitoring | Entire Feature Set
5-fold CV | AUROC | 0.76 ± 0.02 | 0.76 ± 0.05 | 0.83 ± 0.06 | 0.82 ± 0.04 | 0.84 ± 0.04
5-fold CV | AUPRC | 0.23 ± 0.07 | 0.42 ± 0.08 | 0.49 ± 0.10 | 0.49 ± 0.10 | 0.42 ± 0.07
Repeated 5-fold CV | AUROC | 0.72 ± 0.10 | 0.76 ± 0.09 | 0.79 ± 0.09 | 0.80 ± 0.08 | 0.82 ± 0.08
Repeated 5-fold CV | AUPRC | 0.21 ± 0.12 | 0.30 ± 0.12 | 0.35 ± 0.16 | 0.37 ± 0.16 | 0.39 ± 0.16

EHR, Electronic Health Record; CV, Cross-Validation; AUROC, Area under the receiver operating characteristic curve; AUPRC, the Area under the precision-recall curve.

3.4. SHAP values

Fig. 4 and Fig. 5 show the SHAP summary plots for the Patch Sensor-only model and the Entire Feature Set model. The three most important features in the Patch Sensor-only model relate to the patient's variance in activity, covariance of activity and heart rate, and covariance of activity and respiration rate. These same three features are included in the Entire Feature Set model, along with three other physiology-related features concerning changes in systolic blood pressure and temperature. These six objective features are the most important features. The seventh included feature, a count of the patient's comorbid conditions from the EHR, is the only feature not related to patient activity level and/or vital signs.

Fig. 4.

The SHAP summary plot for the Patch Sensor-only model devised using information gain across the five folds.

Fig. 5.

The SHAP summary plot for the Entire Feature Set model devised using information gain across the five folds.

4. Discussion

This work systematically assessed the influence of different data types on the ability of machine learning models to predict all-cause unplanned 30-day readmission using a unique acute patient dataset involving 24/7 continuous remote patient monitoring in home-hospital settings. The study findings show the importance of continuous remote monitoring of patient vital signs and activity in predicting unplanned readmissions. The combination of continuous monitoring and episodic monitoring is shown to be more powerful than episodic monitoring alone for this problem in this pilot dataset. There was a significant difference with respect to AUPRC between the model incorporating patch sensor and episodic physiological monitoring (vitals and labs) and the model incorporating just episodic physiological monitoring.

The readmission models of this study incorporated patient data from both in-hospital patients and hospital-at-home patients, since hospital-level care was essentially delivered in the home setting [31]. These acute care delivery models are not known to have any specific impact on future 30-day readmission in these patients. The targeted outcomes of this work were a better understanding of the role of continuous monitoring and other data types in candidate ML models for the prediction of 30-day all-cause unplanned readmission. The data gathered were entirely the same for both in-hospital and home-hospital patients, and the care and patient outcomes were found to be at least equivalent in the remote home hospital setting as in the in-hospital setting. As described by Levine et al. [31], patients had to be eligible for the home-hospital setting to be included in the study: they had to be consenting adults who lived in homes suitable for the home-hospital program within a certain vicinity of the hospital, could not require assistance or services unavailable in home hospital settings, and could not be likely to imminently deteriorate. Patients were randomly assigned to the in-hospital group or the hospital-at-home group only once deemed eligible for hospital-at-home care. Given all of this, and because the dataset included in this work is unique and expensive to gather, all available data were included in the current analysis of predicting all-cause unplanned 30-day readmission.

The difficulty of developing a generic model for all-cause unplanned 30-day readmission is highlighted by the performance of the HOSPITAL score in this population. This metric has been validated on hundreds of thousands of patients in diverse international datasets with a reported C statistic of 0.72 [20]. In this population, the AUROC was 0.62 and the AUPRC was 0.16. The clinical utility of the HOSPITAL score for stratifying patients' risk for unplanned 30-day readmission needs to be examined further.

It is important to note that the AUPRC statistic is much lower than the AUROC statistic in all the models we examined for this population. This is a very common challenge for machine learning-based models that need to be highly sensitive while dealing with a substantial class imbalance. The choice of being highly sensitive can come at a cost to precision: models tuned to identify a large proportion of the true positive cases are likely to produce some proportion of false positives as well. Given that the number of negatives is far greater than the number of positives, the false positives can lower the precision while the sensitivity remains relatively high. This is an appropriate tradeoff, however, as true positive cases of readmission (or the adverse event in question) need to be identified. In another recently published work, the targeted outcome was predicting unplanned readmission of cancer patients for cardiovascular disease [39]. The rate of unplanned readmission in that work was only 5.86 %, while the average AUPRC statistics for the machine learning-based models ranged from 0.13 to 0.15 [39]. In yet another recent publication, machine learning-based models were used for adverse drug event detection [40]. The true positive class in that work represented 12.7 % of the total population, while the models used to identify the adverse event had an average AUPRC range from 0.25 to 0.48 [40]. In our dataset, the current standard-of-care HOSPITAL score for unplanned 30-day readmission has an AUPRC of only 0.16, while our five-fold cross-validation averages ranged from 0.21 to 0.49. In our future work, we propose how we can further increase our model's accuracy and precision given additional data.
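
To make the effect of this imbalance concrete, a small worked example at this cohort's prevalence follows; the 0.80 sensitivity and specificity are a hypothetical operating point, not a reported result.

```python
# cohort counts from this study; the 0.80 operating point is hypothetical
n_pos, n_neg = 29, 325
sensitivity, specificity = 0.80, 0.80
tp = sensitivity * n_pos                # ~23 true positives
fp = (1 - specificity) * n_neg          # ~65 false positives
precision = tp / (tp + fp)              # ~0.26 despite decent sens./spec.
print(round(precision, 2))
```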

This study builds on prior work and is one of many demonstrating the additive power of data for classifying patient outcomes, including deterioration [[41], [42], [43]] and readmissions [18,44]. Previous works have considered acute heart failure hospitalizations, systemic inflammatory response syndrome deteriorations (sepsis and cytokine release syndrome), as well as hospital readmissions. Fonarow et al. identified precipitating factors in heart failure patient hospitalizations [41]; the biomarkers identifying admission-related factors were best captured by a mix of wearable devices, laboratory tests, patient-facing smartphone applications, and more. Pettinati et al. systematically removed certain clinical data, laboratory data, and vital signs when identifying inflammation-related deteriorations [42,43]. For example [42], incorporating clinical data into sepsis prediction models, such as patient demographics, frequency of labs, and the unit the patient was in, enhanced performance compared to purely objective models with vital signs and labs. Further, laboratory data and vital signs allowed for increased patient separability compared with models incorporating vital signs alone. Finally, Patel et al. showed how actigraphy data from post-discharge monitoring could be added to more traditional models to help monitor patients susceptible to readmission [44]. Our study adds evidence that remotely monitored continuous vitals and activity during the acute admission period provide valuable data for predicting unplanned 30-day readmission compared to episodic monitoring alone.

The addition of remote continuous monitoring and the associated data will help clinicians identify patients at high risk for complications and readmissions. This stratification will allow caregivers to work with high-risk patients on appropriately timed discharges and follow-ups, which will help to lower patient complications, readmission-associated costs, and patient/system burden.

4.1. Limitations

This study was limited in the number of patients available for model development. Our work made use of a unique and comprehensive dataset involving remote continuous monitoring of patients for acute care delivery. With only 29 positive cases of readmission, and the diversity of these cases as seen in Table 3, the dataset is small enough that the degree to which the developed models will generalize is unclear. This problem is compounded by the fact that numerous data types were collected for each patient; each data type could have unique predictive power for all-cause readmission or specific causes of readmission. The findings, such as the increasing standard deviations of the performance metrics in the repeated five-fold cross-validation results and the feature instability, suggest that the current models may have limited generalizability. Optimizing precise models for such a heterogeneous problem remains an open problem to be explored further. For example, disease-specific homogeneous readmission models may produce more stable and precise predictions of readmission; this may require substantially more data in each disease stratification with statistically sufficient readmission samples.

Given the limited data power, we were unable to undertake extensive hyperparameter tuning of our XGBoost models; our dataset is small enough to be very susceptible to overfitting. With several candidate groupings of features, we varied the complexity of the model by altering the number of trees the model would fit as well as the depth of these trees in a five-fold cross-validation. We also considered different weights for the true positive class. These models did not show clear differences between the different hyperparameter values, so largely default parameters were used in our model training on this pilot dataset (readmission was weighted more heavily than no readmission). As stated in the methodology section, XGBoost models alone were chosen for presentation in this manuscript because of their ability to handle missing values without the need for potentially biasing imputation, given that the various data types explored had different degrees of data completeness. For this pilot data analysis, which included a limited dataset and numerous potential data types for imputation, XGBoost models produced the best separation of readmitted and not readmitted patients.

The most obvious trends drawn from the SHAP plots (Fig. 4, Fig. 5) indicate that patients who are older and sicker had a higher likelihood of readmission. Patients who had limited mobility, incongruent covariance of heart rate, respiration rate, and activity, lower temperature, and higher comorbidity counts were predicted as likely to be readmitted. The definition of older and sicker can vary substantially between different populations and conditions.

The application of some of these models may also be difficult in practice. For example, the surveys gathered as part of this effort would not traditionally be gathered as part of a patient's standard of care. Consideration needs to be given to the ease with which readmission models can be integrated into existing workflows. Further, we were not exhaustive in the types of models considered. Certainly, existing workflows may have an EHR that does not include surveys but would include episodic physiological vitals and labs along with patient demographics and utilization history; this feature combination was not considered. Given additional data and a better understanding of how certain models generalize, we will be better able to assess which types of models could best integrate with existing standard-of-care workflows. At present, we have shown how continuous physiological monitoring can have a benefit over episodic physiological monitoring alone. Additional data collection is needed to understand which data types and features may contribute to the best models for existing workflows and how these models generalize. Physiological monitoring in future data collection should include continuous data to allow the best possible future models to be constructed.

Attempts were made to mitigate these limitations. There were several different cross-validation analyses used to understand how our models might generalize to novel patients. There is a plan to develop more generalizable models given additional data. The patterns of features selected for the models were examined, which provides insight into the types of signals that are important for predicting all-cause unplanned 30-day readmission.

4.2. Future work

This work has two immediately planned extensions. First, the models introduced above will be tuned using increased data power and condition-specific unplanned readmission algorithms will be explored. Condition-specific algorithms should produce more consistent biomarkers of readmission. Further, this would allow for more precise recommendations to be provided with respect to whom the developed models can be applied. Second, the models presented above will be validated using unseen data to better understand how these models generalize and compare with current standards of care.

5. Conclusion

All-cause unplanned 30-day readmission is a heterogeneous problem that will likely require relatively large high-fidelity datasets consisting of numerous data types to devise practical predictive tools. In this pilot study, the predictive models developed for all-cause unplanned 30-day readmission show diverse performances depending on the data types incorporated. This study suggests that some form of continuous vital sign and activity monitoring can add value to the future data gathering and predictive modeling efforts.

Summary Table

What Was Known
  • Hospital readmission is an expensive, dangerous, and burdensome clinical problem for patients and the healthcare system.

  • The tools that exist for identifying patients at risk for unplanned readmission need to be further improved to make them more accurate, explainable, and actionable.

What Was Shown
  • The current standard-of-care readmission prediction tool, the HOSPITAL score, has low predictive power in the patient population receiving acute care at the emerging home hospital setting.

  • Prediction of unplanned 30-day readmission is shown to be enhanced by the inclusion of continuous vital and activity monitoring data.

CRediT authorship contribution statement

Michael Joseph Pettinati: Writing – review & editing, Writing – original draft, Visualization, Validation, Software, Methodology, Formal analysis, Data curation, Conceptualization. Kyriakos Vattis: Writing – review & editing, Writing – original draft, Visualization, Validation, Software, Formal analysis, Data curation. Henry Mitchell: Writing – review & editing, Methodology, Formal analysis, Data curation, Conceptualization. Nicole Alexis Rosario: Writing – review & editing, Methodology, Formal analysis, Data curation, Conceptualization. David Michael Levine: Writing – review & editing, Validation, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Nandakumar Selvaraj: Writing – review & editing, Writing – original draft, Visualization, Validation, Supervision, Project administration, Methodology, Investigation, Formal analysis, Data curation, Conceptualization.

Declaration of competing interest

The authors declare that they have no known competing interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  • 1.Felix H.C., Seaberg B., Bursac Z., Thostenson J., Stewart M.K. Why do patients keep coming back? Results of a readmitted patient survey. Soc. Work. Health Care. 2015;54(1):1–15. doi: 10.1080/00981389.2014.966881. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.De Vries E.N., Ramrattan M.A., Smorenburg S.M., Gouma D.J., Boermeester M.A. The incidence and nature of in-hospital adverse events: a systematic review. BMJ Qual. Saf. 2008;17(3):216–223. doi: 10.1136/qshc.2007.023622. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Alzahrani N. The effect of hospitalization on patients' emotional and psychological well-being among adult patients: an integrative review. Appl. Nurs. Res. 2021;61 doi: 10.1016/j.apnr.2021.151488. [DOI] [PubMed] [Google Scholar]
  • 4.Jencks S.F., Williams M.V., Coleman E.A. Rehospitalizations among patients in the Medicare fee-for-service program. N. Engl. J. Med. 2009;360(14):1418–1428. doi: 10.1056/NEJMsa0803563. [DOI] [PubMed] [Google Scholar]
  • 5.Anderson G.F., Steinberg E.P. Hospital readmissions in the Medicare population. N. Engl. J. Med. 1984;311(21):1349–1353. doi: 10.1056/NEJM198411223112105. [DOI] [PubMed] [Google Scholar]
  • 6.Hospital readmissions reduction program (HRRP). CMS. (n.d.). Retrieved November 20, 2022, from https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program.
  • 7.Boccuti C., Casillas G. Aiming for fewer hospital u-turns: the Medicare hospital readmission reduction program. Henry J kaiser family foundation. https://www.kff.org/medicare/issue-brief/aiming-for-fewer-hospital-u-turns-the-medicare-hospital-readmission-reduction-program/ Published March 10, 2017.
  • 8.Levine D.M., Ouchi K., Blanchfield B., Saenz A., Burke K., Paz M., Diamond K., Pu C.T., Schnipper J.L. Hospital-level care at home for acutely ill adults: a randomized controlled trial. Ann. Intern. Med. 2020;172(2):77–85. doi: 10.7326/M19-0600. [DOI] [PubMed] [Google Scholar]
  • 9.Xu H., Granger B.B., Drake C.D., Peterson E.D., Dupre M.E. Effectiveness of telemedicine visits in reducing 30‐day readmissions among patients with heart failure during the COVID‐19 pandemic. J. Am. Heart Assoc. 2022;11(7) doi: 10.1161/JAHA.121.023935. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Sorknæs A.D., Madsen H., Hallas J., Jest P., Hansen‐Nord M. Nurse tele‐consultations with discharged COPD patients reduce early readmissions–an interventional study. The clinical respiratory journal. 2011;5(1):26–34. doi: 10.1111/j.1752-699X.2010.00187.x. [DOI] [PubMed] [Google Scholar]
  • 11.Racsa P., Rogstad T., Stice B., Flagg M., Dailey C., Li Y., Sallee B., Worley K., Sharma A., Annand D. Value-based care through postacute home health under CMS PACT regulations. Am. J. Manag. Care. 2022;28(2):e49–e54. doi: 10.37765/ajmc.2022.88827. [DOI] [PubMed] [Google Scholar]
  • 12.Huang Y., Talwar A., Chatterjee S., Aparasu R.R. Application of machine learning in predicting hospital readmissions: a scoping review of the literature. BMC Med. Res. Methodol. 2021;21(1):1–14. doi: 10.1186/s12874-021-01284-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Kansagara D., Englander H., Salanitro A., Kagen D., Theobald C., Freeman M., Kripalani S. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688–1698. doi: 10.1001/jama.2011.1515. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Zhou H., Della P.R., Roberts P., Goh L., Dhaliwal S.S. Utility of models to predict 28-day or 30-day unplanned hospital readmissions: an updated systematic review. BMJ Open. 2016;6(6) doi: 10.1136/bmjopen-2016-011060. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Jeong I.C., Healy R., Bao B., Xie W., Madeira T., Sussman M., Whitman G., Schrack J., Zahradka N., Hoyer E., Brown C., Searson P.C. Assessment of patient ambulation profiles to predict hospital readmission, discharge location, and length of stay in a cardiac surgery progressive care unit. JAMA Netw. Open. 2020;3(3) doi: 10.1001/jamanetworkopen.2020.1074. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Hosseinzadeh A., Izadi M., Verma A., Precup D., Buckeridge D. Twenty-fifth IAAI Conference. 2013, June. Assessing the predictability of hospital readmission using machine learning. [Google Scholar]
  • 17.Wang H., Cui Z., Chen Y., Avidan M., Abdallah A.B., Kronzer A. Predicting hospital readmission via cost-sensitive deep learning. IEEE ACM Trans. Comput. Biol. Bioinf. 2018;15(6):1968–1978. doi: 10.1109/TCBB.2018.2827029. [DOI] [PubMed] [Google Scholar]
  • 18.Fry B.A., Rajput K.S., Selvaraj N. 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) IEEE; 2021, November. Patient ambulations predict hospital readmission; pp. 7506–7510. [DOI] [PubMed] [Google Scholar]
  • 19.Donzé J., Aujesky D., Williams D., Schnipper J.L. Potentially avoidable 30-day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern. Med. 2013;173(8):632–638. doi: 10.1001/jamainternmed.2013.3023. [DOI] [PubMed] [Google Scholar]
  • 20.Donzé J.D., Williams M.V., Robinson E.J., Zimlichman E., Aujesky D., Vasilevskis E.E., Kripalani S., Metlay J.P., Wallington T., Fletcher G.S., Auerbach A.D. International validity of the HOSPITAL score to predict 30-day potentially avoidable hospital readmissions. JAMA Intern. Med. 2016;176(4):496–502. doi: 10.1001/jamainternmed.2015.8462. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Tan S.Y., Low L.L., Yang Y., Lee K.H. Applicability of a previously validated readmission predictive index in medical patients in Singapore: a retrospective study. BMC Health Serv. Res. 2013;13(1):1–6. doi: 10.1186/1472-6963-13-366. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Van Walraven C., Dhalla I.A., Bell C., Etchells E., Stiell I.G., Zarnke K., Austin P.C., Forster A.J. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ (Can. Med. Assoc. J.) 2010;182(6):551–557. doi: 10.1503/cmaj.091117. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Robinson R., Hudali T. The HOSPITAL score and LACE index as predictors of 30 day readmission in a retrospective study at a university-affiliated community hospital. PeerJ. 2017;5:e3137. doi: 10.7717/peerj.3137. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Neerukonda A.R., Polite B.N., Connolly M. Risk stratification and predictive value of the HOSPITAL score for oncology patient readmissions. Journal of Clinical Oncology 2018. 2018;36(15_suppl):6548. 6548. [Google Scholar]
  • 25.Regmi M.R., Parajuli P., Tandan N., Bhattarai M., Maini R., Garcia O.E.L., Bakare M., Kulkarni A., Robinson R. An assessment of race and gender-based biases among readmission predicting tools (HOSPITAL, LACE, and RAHF) in heart failure population. Ir. J. Med. Sci. 2022;191(1):205–211. doi: 10.1007/s11845-021-02519-0. [DOI] [PubMed] [Google Scholar]
  • 26.Smith L.N., Makam A.N., Darden D., Mayo H., Das S.R., Halm E.A., Nguyen O.K. Acute myocardial infarction readmission risk prediction models: a systematic review of model performance. Circulation: Cardiovascular Quality and Outcomes. 2018;11(1) doi: 10.1161/CIRCOUTCOMES.117.003885. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.van der Does A.M., Kneepkens E.L., Uitvlugt E.B., Jansen S.L., Schilder L., Tokmaji G., Wijers S.C., Radersma M., Heijnen J.N.M., Teunissen P.F., Hulshof P.B. Preventability of unplanned readmissions within 30 days of discharge. A cross-sectional, single-center study. PLoS One. 2020;15(4) doi: 10.1371/journal.pone.0229940. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Herzig S.J., Schnipper J.L., Doctoroff L., Kim C.S., Flanders S.A., Robinson E.J., Ruhnke G.W., Thomas L., Kripalani S., Lindenauer P.K., Williams M.V. Physician perspectives on factors contributing to readmissions and potential prevention strategies: a multicenter survey. J. Gen. Intern. Med. 2016;31(11):1287–1293. doi: 10.1007/s11606-016-3764-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Levine D.M., Paz M., Burke K., Beaumont R., Boxer R.B., Morris C.A., Britton K.A., Orav E.J., Schnipper J.L. Remote vs in-home physician visits for hospital-level care at home: a randomized clinical trial. JAMA Netw. Open. 2022 Aug 1;5(8) doi: 10.1001/jamanetworkopen.2022.29067. PMID: 36040741; PMCID: PMC9428739. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Levine D.M., Pian J., Mahendrakumar K., Patel A., Saenz A., Schnipper J.L. Hospital-level care at home for acutely ill adults: a qualitative evaluation of a randomized controlled trial. J. Gen. Intern. Med. 2021 Jul;36(7):1965–1973. doi: 10.1007/s11606-020-06416-7. Epub 2021 Jan 21. PMID: 33479931; PMCID: PMC8298744. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Levine D.M., Ouchi K., Blanchfield B., Diamond K., Licurse A., Pu C.T., Schnipper J.L. Hospital-level care at home for acutely ill adults: a pilot randomized controlled trial. J. Gen. Intern. Med. 2018 May;33(5):729–736. doi: 10.1007/s11606-018-4307-z. Epub 2018 Feb 6. PMID: 29411238; PMCID: PMC5910347. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Chan A.M., Selvaraj N., Ferdosi N., Narasimhan R. 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) IEEE; 2013, July. Wireless patch sensor for remote monitoring of heart rate, respiration, activity, and falls; pp. 6115–6118. [DOI] [PubMed] [Google Scholar]
  • 33.Morgado Areia C., Santos M., Vollam S., Pimentel M., Young L., Roman C., Ede J., Piper P., King E., Gustafson O., Harford M. A chest patch for continuous vital sign monitoring: clinical validation study during movement and controlled hypoxia. J. Med. Internet Res. 2021;23(9) doi: 10.2196/27547. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Selvaraj N. 2014 IEEE Healthcare Innovation Conference (HIC) IEEE; 2014, October. Long-term remote monitoring of vital signs using a wireless patch sensor; pp. 83–86. [Google Scholar]
  • 35.The EuroQol Group EuroQol-a new facility for the measurement of health-related quality of life. Health Pol. 1990;16(3):199–208. doi: 10.1016/0168-8510(90)90421-9. [DOI] [PubMed] [Google Scholar]
  • 36.Herdman M., Gudex C., Lloyd A., Janssen M.F., Kind P., Parkin D., Bonsel G., Badia X. Development and preliminary testing of the new five-level version of EQ-5D (EQ-5D-5L) Qual. Life Res. 2011;20(10):1727–1736. doi: 10.1007/s11136-011-9903-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Chen T., Guestrin C. XGBoost: a scalable tree boosting system. arXiv e-prints. 2016 doi: 10.48550/arXiv.1603.02754. [DOI] [Google Scholar]
  • 38.Lundberg S.M., Lee S.I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017;30 [Google Scholar]
  • 39.Han S., Sohn T.J., Ng B.P., Park C. Predicting unplanned readmission due to cardiovascular disease in hospitalized patients with cancer: a machine learning approach. Sci. Rep. 2023;13(1) doi: 10.1038/s41598-023-40552-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Liao W., Derijks H.J., Blencke A.A., De Vries E., Van Seyen M., Van Marum R.J. Dual autoencoders modeling of electronic health records for adverse drug event preventability prediction. Intelligence-Based Medicine. 2022;6 [Google Scholar]
  • 41.Fonarow G.C., Abraham W.T., Albert N.M., Stough W.G., Gheorghiade M., Greenberg B.H., O'Connor C.M., Pieper K., Sun J.L., Yancy C.W., Young J.B. Factors identified as precipitating hospital admissions for heart failure and clinical outcomes: findings from OPTIMIZE-HF. Arch. Intern. Med. 2008;168(8):847–854. doi: 10.1001/archinte.168.8.847. [DOI] [PubMed] [Google Scholar]
  • 42.Pettinati M.J., Chen G., Rajput K.S., Selvaraj N. 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) IEEE; 2020, July. Practical machine learning-based sepsis prediction; pp. 4986–4991. [DOI] [PubMed] [Google Scholar]
  • 43.Pettinati M.J., Lajevardi-Khosh A., Rajput K.S., Majmudar M., Selvaraj N. 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) IEEE; 2022, July. Towards remote continuous monitoring of cytokine release syndrome; pp. 966–970. [DOI] [PubMed] [Google Scholar]
  • 44.Patel M.S., Volpp K.G., Small D.S., Kanter G.P., Park S.H., Evans C.N., Polsky D. Using remotely monitored patient activity patterns after hospital discharge to predict 30 day hospital readmission: a randomized trial. Sci. Rep. 2023;13(1):8258. doi: 10.1038/s41598-023-35201-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
