BMJ Open. 2022 Mar 30;12(3):e055956. doi: 10.1136/bmjopen-2021-055956

Applicability of predictive models for 30-day unplanned hospital readmission risk in paediatrics: a systematic review

Ines Marina Niehaus 1, Nina Kansy 1, Stephanie Stock 2, Jörg Dötsch 3, Dirk Müller 2
PMCID: PMC8968996  PMID: 35354615

Abstract

Objectives

To summarise multivariable predictive models for 30-day unplanned hospital readmissions (UHRs) in paediatrics, describe their performance and completeness in reporting, and determine their potential for application in practice.

Design

Systematic review.

Data source

CINAHL, Embase and PubMed up to 7 October 2021.

Eligibility criteria

English or German language studies aiming to develop or validate a multivariable predictive model for all-cause, surgical condition-related or general medical condition-related 30-day paediatric UHRs were included.

Data extraction and synthesis

Study characteristics, risk factors significant for predicting readmissions and information about performance measures (eg, c-statistic) were extracted. Reporting quality was addressed by the ‘Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis’ (TRIPOD) adherence form. The study quality was assessed by applying six domains of potential biases. Due to expected heterogeneity among the studies, the data were qualitatively synthesised.

Results

Based on 28 studies, 37 predictive models were identified, which could potentially be used for determining individual 30-day UHR risk in paediatrics. The number of study participants ranged from 190 children to 1.4 million encounters. The two most common significant risk factors were comorbidity and (postoperative) length of stay. 23 models showed a c-statistic above 0.7 and are primarily applicable at discharge. The median TRIPOD adherence of the models was 59% (P25–P75, 55%–69%), ranging from a minimum of 33% to a maximum of 81%. Overall, the quality of many studies was moderate to low in all six domains.

Conclusion

Predictive models may be useful in identifying paediatric patients at increased risk of readmission. To support the application of predictive models, more attention should be placed on completeness in reporting, particularly for those items that may be relevant for implementation in practice.

Keywords: health services administration & management, health & safety, risk management, paediatrics


Strengths and limitations of this study.

  • Independent and standardised methodological approach for study selection, data extraction and risk of bias assessment.

  • Comprehensive presentation of predictive models that provide information about applicability, performance and reporting quality at a model level, differentiated by 30-day all-cause, surgical conditions and general medical condition-related paediatric unplanned hospital readmissions.

  • Due to study heterogeneity, the models were only narratively synthesised.

Introduction

Hospital readmissions (HRs) are becoming increasingly important as a quality indicator for paediatric inpatient care.1 2 HR is often defined as a subsequent, unplanned admission within a period of 30 days after the index hospitalisation.3 For paediatric populations, rates of all-cause 30-day unplanned hospital readmission (UHR) ranged from 3.4% to 18.7%.3–5 In addition, taking 27 US states into account, it has been estimated that paediatric HRs can cost up to $2 billion annually, with approximately 40% of these HRs being potentially preventable.6

Identifying the reasons for paediatric HRs is a major challenge, as the health of children is also affected by factors outside inpatient care.7 Predictive models can be applied as a tool to identify patients whose risk of HR is higher than that of the average population and to implement preventive interventions that reduce the risk of HR.8 Especially in the context of the ongoing COVID-19 pandemic, in which children and adolescents are also being hospitalised with a variety of symptoms,9–11 preventing UHRs can be beneficial, as it would allow hospital resources to be used in a more targeted way.

This systematic review aimed to address two research gaps that have been identified:

  1. Predictive models with good performance are only useful in practice when clinicians and other stakeholders have all the information necessary for their application and critical appraisal.12 However, previous systematic reviews have discussed shortcomings in the reporting quality of prediction models13–15 and of paediatric clinical prediction rules.16

  2. A previous systematic review has already identified 36 significant risk factors for UHRs in paediatric patients with different health conditions.3 The largest number of risk factors was identified for surgical procedure-related UHRs. Among others, comorbidity was one of the most common risk factors across the 44 included studies.3 The review3 extends the findings of an earlier systematic review that focused on 29 paediatric studies targeting predictors for asthma-related UHRs17.

Both reviews3 17 primarily addressed predictor-finding studies,14 whereas, to date, no review of existing 30-day UHR predictive models in paediatrics has been published.

The objective of this systematic review was to determine the potential application of multivariable predictive models for individualised risk prediction of 30-day UHR in the paediatric population by evaluating the models’ discriminative ability, completeness in reporting and the risk factors shown to be significant for prediction of 30-day UHR.

Method

The 2020 Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement was followed in conducting and reporting this systematic review.18 Screening of the titles and abstracts, data extraction, quality assessment and analyses (eg, completeness in reporting) were performed by two independent reviewers, and disagreements were resolved with a third author. A protocol for this non-registered systematic review was prespecified and is available from the corresponding author. Based on expert recommendation, the analysis was subsequently focused on 30-day UHRs instead of 30-day HRs (ie, planned HRs and UHRs), deviating from the prespecified protocol.

Data source and search strategy

CINAHL, Embase and PubMed were used for an electronic database search to identify studies published up to 7 October 2021. The key search terms include the outcome variables used for the model (ie, readmission/rehospitalisation), elements of the study design (ie, prediction/c-statistic) and the population of interest (ie, paediatrics/children) (see online supplemental material for full search strategies—online supplemental tables A1–A3). The reference lists of the included studies and of comparable systematic reviews3 17 were examined for further potential studies.

Supplementary data

bmjopen-2021-055956supp001.pdf (1.8MB, pdf)

Inclusion criteria

Studies addressing multivariable predictive models for children and adolescents (except newborns/preterm newborns, as the index admission is the birth hospitalisation) were included if they were published in English or German and available as full texts in peer-reviewed original journal articles. Studies aiming to develop a new model or to validate an existing model were included (1) if the model was potentially appropriate for the individual prediction of 30-day UHR from acute healthcare services after discharge or after the index procedure in paediatrics and (2) if the model provided at least one discrimination measure (eg, c-statistic). Discriminative ability is a key factor in evaluating predictive models19 and necessary information for well-founded conclusions about a model's performance. In addition, (3) predictive model studies that developed a new model (ie, development design) or determined the incremental or added value of a predictor for an existing model (ie, incremental value design) had to be based on a regression modelling approach. This inclusion criterion enabled us to identify significant risk factors and to apply the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) adherence form, which was originally developed for regression models.20 This implies that predictive models using machine-learning (ML) techniques (eg, least absolute shrinkage and selection operator21 or random forest22) were excluded and coded as non-regression models. Studies that aimed to identify 30-day UHR predictors but did not provide a discrimination measure were classified as prognostic factor studies and were thus excluded from the analysis (so that their TRIPOD adherence would not be adversely biased). Prognostic factor studies, for example, are not required to present a simplified scoring rule (cf. TRIPOD item 15b23).
Due to the specific requirements associated with mental health conditions, studies were only included (4) if they addressed non-mental health condition-related 30-day UHRs.3

Data extraction

As in previous systematic reviews,3 24 studies were categorised by health condition in all tables. Basic study characteristics were extracted according to the criteria in tables 1 and 2. To assess the applicability of the predictive models, significant risk factors (ie, odds ratio (OR) or hazard ratio >1 with a p value of <0.05) were assigned to established and revised variable categories3 in table 3. If all variables of a predictive model are available for a patient at the time of index admission (eg, previous health service usage before index admission), the model is applicable at admission. A predictive model is applicable at discharge if all of its variables are available for a patient at that point (eg, length of stay and operative time).

Table 1.

Summary of study characteristics for all-cause 30-day UHR predictive models

Reference Model name Medical condition Model outcome Study design/data source Sample size Age group Period of data collection Readmission rate Model type/validation method
All-cause related UHRs
Brittan et al., USA64 Composite score All-cause 30-day UHRs Retrospective/1 children’s hospital 29 542 patients 0–21 years 2014–2015 4.0% Development study/internal: cross
Sills et al., USA68 PACR+SDH All-cause 30-day UHRs Retrospective/PHIS database, US Census’s American Community Survey data, 47 hospitals 458 686 index discharges <18 years 2014 6.1% Incremental value study/apparent
Ehwerhemuepha et al., USA65 Unnamed All-cause 30-day UHRs Retrospective/US Census’s American Community Survey data, one tertiary paediatric hospital 38 143 inpatient clinical encounters (DC: 19 072, VC: 19 071) Between 28 days and 17 years July 2013–June 2017 10.4% Development study/internal: random–split sample
LACE (validation) VC: 19 071 inpatient clinical encounters NR External validation study
Bradshaw et al., USA63 HARRPS tool All-cause 30-day UHRs Retrospective/1 paediatric hospital 5306 patients <18 years May 2017–June 2018 25.3% Development study/internal: cross
Zhou et al., Australia61 Unnamed All-cause 30-day UHRs Retrospective/Australian Census data, 1 tertiary paediatric hospital 73 132 patients Age limit for admission: 15 years, special permissions by hospital executives possible 2010–2014 4.6% Development study/apparent
Ehwerhemuepha et al., USA69 LACE (validation) All-cause 30-day UHRs Retrospective/Cerner Health Facts Database, 48 hospitals 1.4 million encounters <18 years 2000–2017 12.6% (DC) External validation study
Zhou et al., Australia22 Model 1: GLM All-cause 30-day UHRs Retrospective matched case–control/1 tertiary paediatric facility, administrative inpatient data 940 patients Different paediatric age groups* 2010–2014 4.55%† Development study/internal: cross
Model 1: G-S Development study/internal: cross
Model 2: GLM Retrospective matched case–control/1 tertiary paediatric facility, administrative inpatient data, medical records Development study/internal: cross
Model 2: G-S Development study/internal: cross
Model 3: GLM Retrospective matched case-control /1 tertiary paediatric facility, administrative inpatient data, medical records, written discharge documentation Development study/internal: cross
Model 3: G-S Development study/internal: cross

*Mean age (years): 5.2 with HR, 5.3 without HR.

†Based on 3330 patients from the initial data set.

DC, derivation cohort; GLM, logistic regression; G-S, stepwise logistic regression; HARRPS, High-Acuity Readmission Risk Pediatric Screen; HR, hospital readmission; LACE, Length of stay, Acuity of admission, Comorbidity of the patient, Emergency department use; NR, not reported; PACR, paediatric all-condition readmission; PHIS, Paediatric Health Information Systems; SDH, social determinants of health; UHR, unplanned hospital readmission; VC, validation cohort.

Table 2.

Summary of study characteristics for surgical and general medical conditions-related 30-day UHR predictive models

Reference Model name Medical condition Model outcome Study design/data source Sample size Age group Period of data collection Readmission rate Model type/validation method
Surgical conditions related UHRs
Vo et al., USA57 Unnamed All surgical specialties without cardiac surgery 30-day unplanned postsurgical HRs relating to non-cardiac surgery Retrospective/ACS NSQIP-P database 182 589 patients <18 years 2012–2014 4.8% Development study/internal: bootstrap
Polites et al., USA56 Unnamed General and thoracic surgery 30-day UHRs related to the index surgical procedure Retrospective/ACS NSQIP-P database 54 870 patients (DC: 38 397, VC: 16 473) 29 days–<18 years 2012–2014 3.6% Development study/internal: random–split sample
Delaplain et al., USA70 30-day readmission model Trauma-related conditions 30-day unplanned trauma HRs Retrospective/Cerner Health Facts database, 28 hospitals 82 532 patients (DC: 75%, VC: 25%) <18 years 2000–2017 8.8% Development study/internal: random–split sample*
Chotai et al., USA67 Unnamed Neurosurgery 30-day UHRs following index surgery for neurosurgical diagnoses Retrospective/1 paediatric hospital 536 children <18 years January 2012–March 2015 11.9% Development study/apparent
Davidson et al., USA73 Unnamed Ureteroscopy 30-day UHRs after ureteroscopy Retrospective/NSQIP-P database 2510 patients ≤18 years 2015–2018 6.5% Development study/apparent
Garcia et al., USA74 Unnamed Kasai procedure 30-day UHRs related to Kasai procedure Retrospective/ NSQIP-P database 190 children <1 year 2012–2015 15.3% Development study/apparent
Lee et al., USA75 Unnamed Adolescent idiopathic scoliosis surgery 30-day UHRs after adolescent idiopathic scoliosis surgery Retrospective/nationwide readmissions database 30 677 patients 10–18 years 2012–2015 2.9% Development study/apparent
Minhas et al., USA58 Idiopathic scoliosis Spinal surgeries (scoliosis) 30-day UHRs Retrospective/NSQIP-P database 3482 children ≤18 years 2012–2013 3.4% Development study/apparent
Progressive infantile scoliosis Development study/apparent
Scoliosis due to other conditions Development study/apparent
Roddy and Diab, USA59 Unnamed Spine fusion 30-day UHRs Retrospective/state inpatient database 13 287 patients <21 years 2006–2010 (New York, Utah, Nebraska, Florida and North Carolina), 2006–2011 (California) 4.7% Development study/apparent
Sherrod et al., USA77 Unnamed Neurosurgery 30-day UHRs after neurosurgery Retrospective/NSQIP-P database 9799 cases <18 years 2012–2013 11.2% Development study/apparent
Tahiri et al., USA60 Unnamed Plastic surgery 30-day UHRs following paediatric plastic surgery procedures Retrospective/NSQIP database 5376 patients ≤18 years 2012 2.4% Development study/apparent
Wheeler et al., USA78 Unnamed Burn diagnosis 30-day UHRs Retrospective/nationwide readmissions database 11 940 patients 1–17 years January–November 2013, January–November 2014 2.7% Development study/apparent
Vedantam et al., USA31 Unnamed Epilepsy surgery 30-day UHRs after epilepsy surgery Retrospective/NSQIP-P database 280 surgeries ≤18 years 2015 7.1% Development study/apparent
Basques et al., USA53 Unnamed Posterior spinal fusion 30-day UHRs after posterior spinal fusion Retrospective/NSQIP-P database 733 patients 11–18 years 2012 1.5% Development study/apparent
Martin et al., USA54 Unnamed Spinal deformity surgery 30-day UHRs after spinal deformity surgery Retrospective/NSQIP-P database 1890 patients <18 years 2012 3.96% Development study/apparent
General medical conditions related UHRs
Leary et al., USA66 Prediction at admission Complex chronic conditions 30-day UHRs Retrospective/US Census Bureau data, 1 academic medical centre 2296 index admissions 6 months–18 years October 2010–July 2016 8.2% Development study/internal: bootstrap
Prediction at discharge Incremental value study/internal: bootstrap
Ryan et al., USA62 PASS (validation) Asthma 30-day UHRs Retrospective/1 university-affiliated, tertiary paediatric referral centre 328 patients 5–18 years May 2015–October 2017 3.0% External validation study
O’Connell et al., USA72 Unnamed Nervous system condition 30-day UHRs Retrospective/Cerner Health Facts database, 18 hospitals 105 834 index admissions (DC: 80%, VC: 20%) <18 years 2000–2017 12.0% Development study/internal: random–split sample
Hoenk et al., USA71 Unnamed Oncology 30-day UHRs Retrospective/Cerner Health Facts database, 16 hospitals 10 418 patients (DC: 7814, VC: 2604) <21 years 2000–2017 41.2% Development study/internal: random–split sample
Sanchez-Luna et al., Spain76 Unnamed Acute bronchiolitis due to respiratory syncytial virus 30-day UHRs Retrospective/Spanish National Health Service records 63 948 discharges <1 year 2004–2012 7.5% Development study/apparent
Sacks et al., USA55 Unnamed Cardiac conditions 30-day UHRs Retrospective/1 academic children’s hospital 1993 hospitalisations 0–12.9 years 2012–2014 20.5% Development study/apparent

*Assumption for validation method: ORs for 30-day UHRs are displayed in a table that is part of the DC from the 7-day UHR predictive model.70

ACS, American College of Surgeons; DC, derivation cohort; HR, hospital readmission; NR, not reported; NSQIP-P, National Surgical Quality Improvement Programme Paediatric; PASS, Paediatric Asthma Severity Score; PHIS, Paediatric Health Information Systems; UHR, unplanned hospital readmission; VC, validation cohort.

Table 3.

Significant risk factors for 30-day unplanned hospital readmission predictive models with a development or incremental value design

Health condition group All-cause (n=5*) Surgical conditions related (n=17) General medical conditions related (n=6)
Reference 64 68 65 63 61 57 56 70 67 73 74 75 58† 58‡ 58§ 59 77 60 78 31 53 54 66¶ 66** 72 71 76 55
Location of residence†† x x x
Health insurance x x x
Type of index hospital x x x x x
Living environment x
Characteristics of primary care provider x
Age at admission/operation x x x
Sex x x
Race/ethnicity x x x x
Health service usage prior to index admission‡‡ x x x x x x x x
Prematurity x x
Comorbidity x x x x x x x x x x x x x x x x x x
Illness severity§§ x x x x x x x x x
LOS/postoperative LOS x x x x x x x x x x
Principal diagnoses x x x x x
Principal procedures x x x x x x x x x
Inpatient complications x x x x x x x x
(Specific) medication at index admission x x x
Length of operation x x x
Wound contamination before operation x x
The ASA class x x x x
Discharge on Friday or weekend x
Discharge disposition x x x
Discharge with increased medication/further treatment x
Admission on Friday x
Surgical location x

x=risk factor (OR/hazard ratio>1).

*The six predictive models of Zhou et al22 are not included in this analysis due to missing information about ORs. See online supplemental table A6 in the online supplemental material for a list of included variables.

†Model for idiopathic scoliosis.

‡Model for progressive infantile scoliosis.

§Model for scoliosis due to other conditions.

¶Admission model.

**Discharge model.

††Social determinants of health are included (eg, median household income).

‡‡Risk factor category includes, for example, the number of previous emergency department visits or hospitalisations.

§§The risk factor category also captures the urgency of the index admission. The risk factor category includes, for example, PICU or emergency department admission.

ASA, American Society of Anesthesiologists; LOS, length of stay; PICU, paediatric intensive care unit; postoperative LOS, postoperative length of stay.

Reporting quality and performance

Predictive models can only be used in practice when clinicians and other stakeholders have access to all the information required for their application.12 The newly developed 'Critical Appraisal of Models that Predict Readmission (CAMPR)' contains 15 expert recommendations for the development of predictive models for HRs. However, CAMPR is not yet intended as a reporting standard and relates to aspects that are beyond the scope of this systematic review (eg, considering different time frames for UHRs).25 Given the importance of high-quality information about predictive models, we decided to assess completeness of reporting using the TRIPOD adherence form and scoring rules.12 23 26 The TRIPOD adherence form consists of 22 main criteria based on the TRIPOD statement,20 resulting in 37 items that are applicable to varying degrees to development, validation and incremental value studies.23 We applied the TRIPOD adherence form at the level of the predictive model. Therefore, publications that, for example, report the development and validation of the same predictive model were assessed separately. In line with previous research,27 our analysis concentrates on items that could be reported in the main text or supplements.

TRIPOD adherence at model level was merged with the performance results (ie, discrimination and calibration measures) and the applicability assignment in table 4. The discrimination of a predictive model is often evaluated by the c-statistic, or area under the receiver operating characteristic curve. For a useful model, the c-statistic takes a value between 0.5 and 1: a value of 0.5 indicates that the model is no better than a random prediction of the outcome (values below 0.5 indicate discrimination worse than chance), values between 0.7 and 0.8 indicate that the model is appropriate, and a value of 0.8 or greater indicates strong discrimination.28
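As an illustrative sketch (not taken from any of the reviewed models), the c-statistic for a binary outcome such as 30-day UHR can be computed as the proportion of readmitted/non-readmitted patient pairs in which the readmitted patient received the higher predicted risk; all risk scores and outcomes below are invented example data:

```python
def c_statistic(risks, outcomes):
    """Pairwise concordance: fraction of (readmitted, not readmitted)
    pairs in which the readmitted patient has the higher predicted risk.
    Ties count as 0.5. Equivalent to the area under the ROC curve."""
    pos = [r for r, y in zip(risks, outcomes) if y == 1]  # readmitted
    neg = [r for r, y in zip(risks, outcomes) if y == 0]  # not readmitted
    concordant = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return concordant / (len(pos) * len(neg))

# Invented predicted 30-day UHR risks and observed readmissions (1 = readmitted)
risks = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
print(round(c_statistic(risks, outcomes), 3))  # → 0.688
```

A model that assigned every readmitted patient a higher risk than every non-readmitted patient would score 1.0; random scores would hover around 0.5.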

Table 4.

Performance, application and TRIPOD adherence of 30-day UHR predictive models in paediatrics (n=37)

Reference Model name Performance TRIPOD score Potentially applicable…
Discrimination
(c-statistic)
Calibration
All-cause related UHRs
Brittan et al.64 Composite Score 0.62 73.33% At discharge
Sills et al.68 PACR+SDH 0.708 64.71% At discharge
Ehwerhemuepha et al.65 Unnamed VC: 0.79 63.33% At discharge
LACE (validation) 0.68 44.44% At discharge
Bradshaw et al.63 HARRPS-tool Score: 0.65 73.33% At admission
Zhou et al.61 Unnamed 0.645 62.07% At discharge
Ehwerhemuepha et al.69 LACE (validation) 0.7014 33.33% At discharge
Zhou et al.22 Model 1: GLM 0.487 68.97% At admission
Model 1: G-S 0.477 68.97% At discharge
Model 2: GLM 0.585 68.97% At discharge
Model 2: G-S 0.593 68.97% At discharge
Model 3: GLM 0.609 68.97% At discharge
Model 3: G-S 0.617 68.97% At discharge
Surgical condition-related UHRs
Vo et al.57 Unnamed 0.747 Slope: 1, intercept: 0.002 68.97% At discharge
Polites et al.56 Unnamed DC: 0.71; VC: 0.701 DC: p=0.95, O:E ratio=1.03; VC: p=0.36, O:E ratio=1.07 62.07% At discharge
Delaplain et al.70 30-day readmission model VC: 0.799 51.72% At discharge
Chotai et al.67 Unnamed 0.72 42.86% At discharge
Davidson et al.73 Unnamed 0.73 H&L χ2: 7.5 (p=0.4474) 58.62% At discharge
Garcia et al.74 Unnamed 0.703 51.72% At discharge
Lee et al.75 Unnamed 0.712 H&L: 0.0974 58.62% At discharge
Minhas et al.58 Idiopathic scoliosis 0.760–0.769 55.17% At discharge*
Progressive infantile scoliosis 55.17% At discharge*
Scoliosis due to other conditions 55.17% At discharge*
Roddy and Diab59 Unnamed 0.75 H&L (p value): 0.46 55.17% At discharge
Sherrod et al.77 Unnamed 0.759 55.17% At discharge
Tahiri et al.60 Unnamed 0.784 55.17% At discharge
Wheeler et al.78 Unnamed 0.72 55.17% At discharge
Vedantam et al.31 Unnamed 0.71 H&L (p value): 0.94 41.38% At discharge
Basques et al.53 Unnamed 0.87 H&L: value not reported† 68.97% At discharge
Martin et al.54 Unnamed 0.77 62.07% At discharge
General medical condition-related UHRs
Leary et al.66 Prediction at admission 0.65, score: 0.65 Calibration plot 79.31% At admission
Prediction at discharge 0.67, score: 0.67 Calibration plot 81.25% At discharge
Ryan et al.62 PASS (validation) 0.28 55.17% At discharge
O’Connell et al.72 Unnamed VC: 0.733 51.72% At discharge
Hoenk et al.71 Unnamed VC: 0.714 55.17% At discharge
Sanchez-Luna et al.76 Unnamed 0.611 56.67% At admission
Sacks et al.55 Unnamed 0.75 58.62% At discharge

*Assumption for applicability based on variables included in the univariable analysis.

†H&L shows ‘no evidence of a lack of fit’ (Basques53 p290).

DC, derivation cohort; GLM, logistic regression; G-S, stepwise logistic regression; HARRPS, High Acuity Readmission Risk Paediatric Screen; H&L, Hosmer-Lemeshow; LACE, Length of stay, Acuity of admission, Comorbidity of the patient, Emergency department use; NR, not reported; PACR, paediatric all-condition readmission; PASS, Paediatric Asthma Severity Score; SDH, social determinants of health; TRIPOD, Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis; UHR, unplanned hospital readmission; VC, validation cohort.

Quality assessment

Following previous systematic reviews,3 24 29 the refined version of the quality in prognosis studies (QUIPS) tool with its prompting items30 was used to appraise the studies critically with regard to the included predictive models based on six domains. Each domain was rated with a ‘high’, ‘moderate’ or ‘low’ risk of bias.

The six domains are30 ‘study participation’, ‘study attrition’, ‘prognostic factor measurement’, ‘outcome measurement’, ‘study confounding’ and ‘statistical analysis and reporting’.

Data synthesis

Because a quantitative evaluation in the form of a meta-analysis was not possible due to the high heterogeneity among the studies, the studies were qualitatively synthesised; that is, the results for performance, completeness in reporting and significant risk factors were presented in a narrative and simplified quantitative form.

Patient and public involvement

Due to the study design, we did not involve patients or the public.

Results

Search result

From the electronic database search, 10 076 records were obtained. After duplicates had been removed, the titles and abstracts of 7694 records were screened. Based on the predefined inclusion criteria, 7586 records were excluded. After one additional recommended article31 was added, 109 records were included in the full-text assessment. Among the 84 excluded records, 2 were predictive model studies for 30-day HRs (ie, UHRs and planned HRs) with discrimination metrics32 33; 12 studies analysed 30-day UHRs or 30-day HRs combined with another outcome (ie, emergency department return visits (n=5),34–38 mortality (n=3)39–41 and other complications (n=4)42–45); 3 were predictive model studies for 30-day UHRs or 30-day HRs with no discrimination metrics46–48; 5 were non-regression-based predictive model studies for 30-day UHRs or 30-day HRs in paediatrics21 49–52; and 59 were prognostic factor studies for 30-day UHRs or 30-day HRs. Based on the full-text assessments (n=25) and the hand search of reference lists (n=3),53–55 28 studies were included in the systematic review, 6 of which55–60 were already presented in a previous systematic review3 with a different focus. The results of the review process regarding the database search are provided in online supplemental figure A1 in the online supplemental material (see online supplemental table A4 in the online supplemental material for a summary of study characteristics of selected excluded models).

Quality assessment

Overall, the quality of many studies was moderate to low across several domains. For instance, study quality ratings were downgraded due to a lack of sufficient information (eg, in the domains ‘study participation’ and 'study attrition'), while all studies were rated as ‘low’ for the domain 'study confounding' (see online supplemental table A5 in the online supplemental material for the results of the risk of bias assessment).

Study characteristics

All studies were based on retrospective data, with 9 studies based on tertiary or paediatric hospital data22 55 61–67 and 19 studies based on centralised databases.31 53 54 56–60 68–78 Four of the 28 studies additionally included census data in the analysis.61 65 66 68 The period of data collection ranged from 1 year31 53 54 60 63 68 to 17 years.69 70 The majority of studies included patients up to an age of <18 or ≤18 years. Only 5 studies considered patients up to 21 years of age59 64 71 or younger than 1 year.74 76 The sample size was specified in different units across the individual studies (eg, encounters and admissions) and varied between 190 children74 and 1.4 million encounters.69

The 28 included studies yielded 37 predictive models for 30-day UHRs in paediatrics. 10 of the 28 studies developed or validated more than one predictive model for UHRs,22 58 59 65–70 75 some of which were excluded because they did not meet the inclusion criteria. The included models were grouped into three health condition groups: (1) all-cause UHRs (n=13),22 61 63–65 68 69 (2) surgical condition-related UHRs (n=17)31 53 54 56–60 67 70 73–75 77 78 and (3) general medical condition-related UHRs (n=7).55 62 66 71 72 76 The 30-day UHR rates varied from 1.5%53 to 41.2%.71

Among the 37 predictive models included, 32 (87%) used a development design,22 31 53–61 63–67 70–78 3 (8%) used an external validation design62 65 69 and 2 (5%) used an incremental value design.66 68 All externally validated models were based on existing predictive models that had previously been used in the adult population65 69 or for different outcomes.62 Furthermore, 5 of the 28 included studies did not state the development, external validation or incremental value assessment of the respective 30-day UHR predictive model as their primary aim.65 67–70

Of the predictive models with a development or incremental value design, 18 employed an apparent validation31 53–55 58–61 67 68 73–78 and 16 employed an internal validation.22 56 57 63–66 70–72 The most commonly applied internal validation method was cross-validation (n=8),22 63 64 followed by split sample (n=5)56 65 70–72 and bootstrapping (n=3).57 66 To analyse the data, either a logistic regression22 31 53–55 57–61 63–68 70–78 or a Cox proportional hazards regression56 was used. Most models presented their results as ORs with 95% CIs. Results with a p value of <0.05 were considered statistically significant.3 A summary of the characteristics of all included studies is provided in tables 1 and 2.

Applicability and significant risk factors in predictive models

Based on the 28 predictive models with a development or incremental value design, 25 significant risk factors associated with 30-day UHRs were identified (see table 3). The most common risk factors were comorbidity (n=18), (postoperative) length of stay (n=10), illness severity (n=9) and principal procedures (n=9). The significant risk factors were inconsistently defined across predictive models, allowing a direct comparison only to a limited extent. ORs for comorbidity ranged from 1.0172 to 10.0858 across predictive models. A length of stay of ≥15 days (OR=2.39)61 and a postoperative length of stay of >4 days (hazard ratio=3.12)56 were each considered major risk factors. Among the most pronounced risk factors were ‘intensive care unit stay’ (OR=3.302)67 for illness severity and ‘isolated primary anterior spinal fusion’ (OR=7.65)54 for principal procedures. The risk factor with the highest OR was ‘any inpatient complication’ (OR=180.44).53 For all-cause UHRs, surgical condition-related UHRs and general medical condition-related UHRs, 14, 19 and 12 significant risk factors were found, respectively.
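The ORs quoted above are exponentiated logistic regression coefficients. As a minimal sketch of that relationship (the coefficient value is hypothetical, chosen only so that the resulting OR lands near the 2.39 reported for a length of stay of ≥15 days61):

```python
import math

# In a logistic regression, each coefficient beta translates into an
# odds ratio via OR = exp(beta). The coefficient below is invented.
beta_los = 0.871  # hypothetical coefficient for LOS >= 15 days

odds_ratio = math.exp(beta_los)
print(round(odds_ratio, 2))  # → 2.39
```

Conversely, a reported OR can be mapped back to its coefficient with `math.log(odds_ratio)`.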

Most predictive models are potentially applicable at discharge (n=33), while 4 predictive models can be used at index admission,22 63 66 76 based on the significant and examined variables (see online supplemental table A6 in the online supplemental material for an overview of variables and table 4 for an application description).

Completeness in reporting and discriminative ability at model level

Information about TRIPOD adherence and performance at model level is provided in table 4. The median TRIPOD adherence of the models was 59% (P25–P75, 55%–69%; average: 60%), ranging from 33%69 to 81%66. Developed predictive models had a more favourable reporting quality than externally validated models (ie, 59% (P25–P75, 55%–69%; average: 61%) compared with 44% (P25–P75, 39%–50%; average: 44%), respectively). Two models with poor adherence in reporting were based on an external validation design, and the validation of these models was not the primary aim of the study.65 69

Including all 37 items, we found that the overall median adherence per TRIPOD item across models was 65% (P25–P75, 32%–92%; average: 57%), ranging from 0% to 100% (see online supplemental table A7 in the online supplemental material for a detailed description by model type). The overall adherence per TRIPOD item is illustrated in figure 1.

Figure 1

Overall adherence per TRIPOD item across all included predictive models (n=37). Notes: Percentages relate to the number of models for which an item was applicable (in this case, the respective item should have been reported). *Indication of derivation from the total number of models for which a TRIPOD item was applicable (N=# of models for which the TRIPOD item is applicable): 10a (N=34), 10b (N=34), 10c (N=4), 10e (N=2), 11 (N=5), 12 (N=5), 13c (N=5), 14a (N=34), 14b (N=32), 15a (N=34), 15b (N=34), 17 (N=1), 19a (N=5). TRIPOD, Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis

14% of the models reported the title (item 1) completely, while 19%62–66 68 of the models mentioned the predictive model type in this context. 3% of the models had a complete abstract (item 2). The detailed predictor definition (item 7a) was reported in more models (95%) than the outcome definition (item 6a, reported in 70%). The handling of predictors in the analysis (item 10a) showed incomplete reporting in 82% of the models. In addition, the handling (item 9, reported in 35%) and reporting of missing values (part of item 13b, reported in 32%) were not addressed in many models. Just 9% of the models displayed complete reporting of the model-building procedure (item 10b), as the majority of the models (91%) did not address the testing of interaction terms22 31 53–61 64–68 70 72–75 77 78. The description (item 10d) and reporting of performance measures (item 16) were incomplete in 68% and 89% of the models, respectively. Just 24% of the models addressed results of calibration measures (cf. table 4). No model presented the full predictive model (item 15a), eg, by providing the intercept. An explanation of how to use the prediction model (item 15b, eg, by a simplified scoring rule) was presented in 21% of the models. One model provided detailed information about a simplified scoring rule (item 15b) in the online supplemental material66.

The discriminative ability (c-statistic) of the models ranged from 0.2862 to 0.8753. 14 out of 37 predictive models had a c-statistic of <0.7. The linear correlation between c-statistic and TRIPOD score at model level was not statistically significant (r=−0.241, p=0.15). Models with good discriminative ability (c-statistic >0.7)31 53–60 65 67–75 77 78 are primarily applicable at discharge and have TRIPOD scores ranging from 41%31 to 69%57. The two models with the highest reporting quality (79% and 81%) are applicable for predicting 30-day UHRs of children with complex chronic conditions. The c-statistic values of these models were 0.6566 and 0.6766, respectively (see online supplemental figure A2 in the online supplemental material for an illustration of the models’ performance and TRIPOD adherence).
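The c-statistic reported throughout can be read as the probability that a randomly chosen readmitted child is assigned a higher predicted risk than a randomly chosen non-readmitted child (ties counting one-half). A minimal sketch of this rank-based definition, with hypothetical predicted risks:

```python
# Rank-based definition of the c-statistic: the fraction of
# (readmitted, non-readmitted) pairs in which the readmitted patient
# received the higher predicted risk (ties count 0.5).
from itertools import product

def c_statistic(risks, outcomes):
    """risks: predicted readmission probabilities; outcomes: 1 = readmitted."""
    pos = [r for r, y in zip(risks, outcomes) if y == 1]
    neg = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = list(product(pos, neg))
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Hypothetical predicted risks for six children
risks    = [0.9, 0.7, 0.6, 0.4, 0.3, 0.1]
outcomes = [1,   1,   0,   1,   0,   0]
print(round(c_statistic(risks, outcomes), 3))  # → 0.889
```

A value of 0.5 corresponds to chance-level discrimination, which is why 0.7 serves as the conventional threshold for "good" discriminative ability here.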

Discussion

Based on 28 studies, this systematic review identifies 37 predictive models that could potentially be used for determining individual 30-day UHR risk in paediatrics. According to the models, the 4 most common significant risk factors in predictive models were comorbidity, (postoperative) length of stay, illness severity and principal procedures. 23 validated predictive models have a c-statistic of >0.7. The median TRIPOD adherence of the predictive models included was 59% (P25–P75, 55%–69%), ranging from 33% to 81%, which is similar to that of other systematic reviews12 27.

Practical clinical and policy implications

In general, reporting quality and discriminative ability can provide crucial information about the strengths and weaknesses of a predictive model for implementation in practice (see online supplemental figure A2 in the online supplemental material for a combined illustration). However, the results from this systematic review revealed considerable differences in the c-statistics (0.2862–0.8753) and in the TRIPOD scores (33%69–81%66) at the model level. When considering the available information about reporting quality and discriminative ability in relation to each other, it should be noted that the linear correlation between c-statistic and TRIPOD score at model level was not statistically significant (r=−0.241, p=0.15). Therefore, an independent evaluation of both aspects for the selection of an appropriate predictive model is recommended.
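The correlation test referred to above can be reproduced in principle as follows. The paired values here are hypothetical stand-ins, not the review's extracted c-statistics and TRIPOD scores.

```python
# Pearson test of the linear correlation between models' c-statistics and
# their TRIPOD adherence scores (hypothetical example data).
from scipy.stats import pearsonr

c_statistics  = [0.28, 0.55, 0.62, 0.65, 0.67, 0.71, 0.74, 0.78, 0.84, 0.87]
tripod_scores = [0.33, 0.69, 0.79, 0.81, 0.55, 0.59, 0.60, 0.44, 0.50, 0.41]

r, p = pearsonr(c_statistics, tripod_scores)
print(f"r = {r:.3f}, p = {p:.3f}")
# A p value at or above 0.05 means the linear association is not
# statistically significant at the conventional threshold.
```

A non-significant (and weak) correlation of this kind is what motivates judging discrimination and reporting quality as separate criteria rather than treating one as a proxy for the other.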

Clinicians and decision makers should use predictive models with good discriminative ability (ie, c-statistic above 0.7) and sufficient data availability. In particular, predictive models based on census data61 65 66 68 or manual data entry (eg, written discharge documentation22) may be more difficult to implement than models relying on centralised databases31 53 54 56–60 69–78. The TRIPOD score at the predictive model level (see table 4) can serve as a first indicator of whether a predictive model can be assessed and implemented with the given information.

Similar to a previous systematic review,3 comorbidity and (postoperative) length of stay were identified as consistently cited risk factors across the included studies. In addition, illness severity was a main risk factor in all three health condition groups. For surgical condition-related UHRs, the principal procedure has been shown to be a crucial risk factor. Risk factors should be applied in practice with caution because they are often inconsistently defined across studies; knowledge of the study-specific predictor definitions is therefore required before application.

Limitations

This systematic review has certain limitations:

  1. The included studies needed to be published in English or German with full-text access.

  2. Summarising the results of the included studies quantitatively was not possible due to the heterogeneity of the predictive models (resulting from differences in sample sizes, the examined variables or variations in the periods of data collection).

  3. The sample size of the included studies was reported in different units (eg, encounters and discharges), impeding the comparisons of UHR rates.

  4. Our assignment of the predictive models that are potentially applicable at discharge assumes that the required variables are available at that time point. If clinicians and other stakeholders decide to use a predictive model, it should be checked beforehand whether complete data collection is possible at the desired time.

  5. In addition to the identified medical risk factors (eg, comorbidity) and several country-specific risk factors (eg, location of residence) that result in paediatric readmissions, health-policy initiatives may also affect the readmission rates in paediatric clinical practice79. However, due to a lack of data, these aspects could not be captured by this review.

Future research

This systematic review did not identify predictive models for individualised risk prediction of potentially preventable UHRs in paediatrics, underscoring earlier calls to expand this research field further.3

Current external validation studies were conducted in the USA and examined the applicability of existing predictive models with other outcomes or population backgrounds to paediatric 30-day UHRs.62 65 69 Therefore, external validation studies are needed for those models that are explicitly developed to predict 30-day UHRs in paediatrics. Because the number of predictive models related to medical condition-related UHRs was small (n=7)55 62 66 71 72 76, with 4 out of 7 models demonstrating a c-statistic below 0.762 66 76, there is a need for high-quality models in this area.

Non-regression-based techniques (eg, machine learning) are an expanding field for predicting 30-day HRs in paediatrics, and most such models show good discriminative ability21 22 47 49–52 69 (see online supplemental table A4 in the online supplemental material). Future systematic reviews should summarise and critically assess existing non-regression-based HR predictive models in paediatrics, for instance by applying the forthcoming TRIPOD-ML statement.80

Existing studies discuss the benefit of shorter time intervals in order to identify preventable readmissions more accurately6 81; one study concluded that a 30-day UHR metric was more precise (c-statistic=0.799) for paediatric trauma patients than a 7-day UHR metric (c-statistic=0.737).70 To our knowledge, there is one predictive model for 365-day7, 3 for 90-day59 67 75 and one for 7-day70 UHRs in paediatrics with good discriminative ability (c-statistic>0.7). Future studies should address the evaluation of paediatric UHR predictive models with different time intervals.

Conclusion

This systematic review revealed an increase in the development of predictive models for 30-day UHRs in paediatrics in recent years. To support the implementation of the predictive models in the long term, it is essential to validate existing models in order to test their applicability in different settings. To increase accessibility for use, more attention should be given to completeness in reporting, particularly for items that may be relevant for the implementation of paediatric 30-day UHR predictive models in practice (ie, those relating to outcome and predictor definitions, handling of missing values, full predictive model presentation and an explanation for its use).


Footnotes

Contributors: IMN conceptualised and designed the systematic review, participated in the literature search, study selection, quality assessment, data extraction and data analyses, and drafted the initial manuscript. NK contributed to the literature search, study selection, quality assessment and data extraction, and critically reviewed the manuscript. SS contributed to the data analysis and critically reviewed the manuscript. JD contributed to the study selection, data extraction and data analysis, and critically reviewed the manuscript. DM conceptualised and designed the systematic review, participated in the study selection, quality assessment, data extraction and data analyses, and critically reviewed the manuscript. All authors approved the final manuscript for submission and agreed to be accountable for all aspects of the work. IMN is the guarantor of the study.

Funding: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests: None declared.

Provenance and peer review: Not commissioned; externally peer reviewed.

Supplemental material: This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

Data availability statement

Data are available upon reasonable request. Additional information, including the protocol, is available from the corresponding author.

Ethics statements

Patient consent for publication

Not applicable.

Ethics approval

This study does not involve human participants.

References

  • 1.Bardach NS, Vittinghoff E, Asteria-Peñaloza R, et al. Measuring Hospital quality using pediatric readmission and revisit rates. Pediatrics 2013;132:429–36. 10.1542/peds.2012-3527 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Auger KA, Ponti-Zins MC, Statile AM, et al. Performance of pediatric readmission measures. J Hosp Med 2020;15:723–6. 10.12788/jhm.3521 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Zhou H, Roberts PA, Dhaliwal SS, et al. Risk factors associated with paediatric unplanned Hospital readmissions: a systematic review. BMJ Open 2019;9:e020554. 10.1136/bmjopen-2017-020554 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Beck CE, Khambalia A, Parkin PC, et al. Day of discharge and hospital readmission rates within 30 days in children: a population-based study. Paediatr Child Health 2006;11:409–12. 10.1093/pch/11.7.409 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Coller RJ, Klitzner TS, Lerner CF, et al. Predictors of 30-day readmission and association with primary care follow-up plans. J Pediatr 2013;163:1027–33. 10.1016/j.jpeds.2013.04.013 [DOI] [PubMed] [Google Scholar]
  • 6.Gay JC, Agrawal R, Auger KA, et al. Rates and impact of potentially preventable readmissions at children's hospitals. J Pediatr 2015;166:613–9. 10.1016/j.jpeds.2014.10.052 [DOI] [PubMed] [Google Scholar]
  • 7.Feudtner C, Levin JE, Srivastava R, et al. How well can Hospital readmission be predicted in a cohort of hospitalized children? A retrospective, multicenter study. Pediatrics 2009;123:286–93. 10.1542/peds.2007-3395 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA 2011;306:1688–98. 10.1001/jama.2011.1515 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Lu X, Zhang L, Du H, et al. SARS-CoV-2 infection in children. N Engl J Med 2020;382:1663–5. 10.1056/NEJMc2005073 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Shelmerdine SC, Lovrenski J, Caro-Domínguez P, et al. Coronavirus disease 2019 (COVID-19) in children: a systematic review of imaging findings. Pediatr Radiol 2020;50:1217–30. 10.1007/s00247-020-04726-w [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.CDC COVID-19 Response Team . Coronavirus Disease 2019 in Children - United States, February 12-April 2, 2020. MMWR Morb Mortal Wkly Rep 2020;69:422–6. 10.15585/mmwr.mm6914e4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Heus P, Damen JAAG, Pajouheshnia R, et al. Poor reporting of multivariable prediction model studies: towards a targeted implementation strategy of the TRIPOD statement. BMC Med 2018;16:120. 10.1186/s12916-018-1099-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Mallett S, Royston P, Waters R, et al. Reporting performance of prognostic models in cancer: a review. BMC Med 2010;8:21. 10.1186/1741-7015-8-21 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Bouwmeester W, Zuithoff NPA, Mallett S, et al. Reporting and methods in clinical prediction research: a systematic review. PLoS Med 2012;9:e1001221–12. 10.1371/journal.pmed.1001221 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Collins GS, Mallett S, Omar O, et al. Developing risk prediction models for type 2 diabetes: a systematic review of methodology and reporting. BMC Med 2011;9:103. 10.1186/1741-7015-9-103 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Maguire JL, Kulik DM, Laupacis A, et al. Clinical prediction rules for children: a systematic review. Pediatrics 2011;128:e666–77. 10.1542/peds.2011-0043 [DOI] [PubMed] [Google Scholar]
  • 17.Chung HS, Hathaway DK, Lew DB. Risk factors associated with Hospital readmission in pediatric asthma. J Pediatr Nurs 2015;30:364–84. 10.1016/j.pedn.2014.09.005 [DOI] [PubMed] [Google Scholar]
  • 18.Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021;372:n71. 10.1136/bmj.n71 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Pencina MJ, D'Agostino RB. Evaluating discrimination of risk prediction models: the C statistic. JAMA 2015;314:1063–4. 10.1001/jama.2015.11082 [DOI] [PubMed] [Google Scholar]
  • 20.Moons KGM, Altman DG, Reitsma JB, et al. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): explanation and elaboration. Ann Intern Med 2015;162:W1–73. 10.7326/M14-0698 [DOI] [PubMed] [Google Scholar]
  • 21.Jovanovic M, Radovanovic S, Vukicevic M, et al. Building interpretable predictive models for pediatric hospital readmission using Tree-Lasso logistic regression. Artif Intell Med 2016;72:12–21. 10.1016/j.artmed.2016.07.003 [DOI] [PubMed] [Google Scholar]
  • 22.Zhou H, Albrecht MA, Roberts PA, et al. Using machine learning to predict paediatric 30-day unplanned Hospital readmissions: a case-control retrospective analysis of medical records, including written discharge documentation. Aust Health Rev 2021;45:328–37. 10.1071/AH20062 [DOI] [PubMed] [Google Scholar]
  • 23.Transparent reporting of studies on prediction models for individual prognosis or diagnosis reporting guideline. Assessing adherence of prediction model reports to the TRIPOD guideline, 2018. Available: https://www.tripod-statement.org/wp-content/uploads/2020/01/TRIPOD-Adherence-assessment-form_V-2018_12.pdf [Accessed 07 Jan 2021].
  • 24.Zhou H, Della PR, Roberts P, et al. Utility of models to predict 28-day or 30-day unplanned Hospital readmissions: an updated systematic review. BMJ Open 2016;6:e011060. 10.1136/bmjopen-2016-011060 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Grossman Liu L, Rogers JR, Reeder R, et al. Published models that predict Hospital readmission: a critical appraisal. BMJ Open 2021;11:e044964. 10.1136/bmjopen-2020-044964 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Heus P, Damen JAAG, Pajouheshnia R, et al. Uniformity in measuring adherence to reporting guidelines: the example of TRIPOD for assessing completeness of reporting of prediction model studies. BMJ Open 2019;9:e025611. 10.1136/bmjopen-2018-025611 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Zamanipoor Najafabadi AH, Ramspek CL, Dekker FW, et al. Tripod statement: a preliminary pre-post analysis of reporting and methods of prediction models. BMJ Open 2020;10:e041537. 10.1136/bmjopen-2020-041537 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Hosmer D, Lemeshow S, Sturdivant R. Applied logistic regression 3ed. New Jersey: John Wiley & Sons, 2013. [Google Scholar]
  • 29.Hayden JA, Côté P, Bombardier C. Evaluation of the quality of prognosis studies in systematic reviews. Ann Intern Med 2006;144:427–37. 10.7326/0003-4819-144-6-200603210-00010 [DOI] [PubMed] [Google Scholar]
  • 30.Hayden JA, van der Windt DA, Cartwright JL, et al. Assessing bias in studies of prognostic factors. Ann Intern Med 2013;158:280–6. 10.7326/0003-4819-158-4-201302190-00009 [DOI] [PubMed] [Google Scholar]
  • 31.Vedantam A, Pan I-W, Staggers KA, et al. Thirty-day outcomes in pediatric epilepsy surgery. Childs Nerv Syst 2018;34:487–94. 10.1007/s00381-017-3639-z [DOI] [PubMed] [Google Scholar]
  • 32.Jiang R, Wolf S, Alkazemi MH, et al. The evaluation of three comorbidity indices in predicting postoperative complications and readmissions in pediatric urology. J Pediatr Urol 2018;14:244.e1–244.e7. 10.1016/j.jpurol.2017.12.019 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Smith AH, Doyle TP, Mettler BA, et al. Identifying predictors of hospital readmission following congenital heart surgery through analysis of a multiinstitutional administrative database. Congenit Heart Dis 2015;10:142–52. 10.1111/chd.12209 [DOI] [PubMed] [Google Scholar]
  • 34.Ambroggio L, Herman H, Fain E, et al. Clinical risk factors for revisits for children with community-acquired pneumonia. Hosp Pediatr 2018;8:718–23. 10.1542/hpeds.2018-0014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Gay AC, Barreto NB, Schrager SM, et al. Factors associated with length of stay and 30-day revisits in pediatric acute pancreatitis. J Pediatr Gastroenterol Nutr 2018;67:e30–5. 10.1097/MPG.0000000000002033 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Miller R, Tumin D, McKee C, et al. Population-Based study of congenital heart disease and revisits after pediatric tonsillectomy. Laryngoscope Investig Otolaryngol 2019;4:30–8. 10.1002/lio2.243 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Shah AN, Auger KA, Sucharew HJ, et al. Effect of parental adverse childhood experiences and resilience on a child's healthcare reutilization. J Hosp Med 2020;15:645–51. 10.12788/jhm.3396 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Xu W, Fox JP, Gerety PA, et al. Assessing risk factors for hospital-based, acute care within thirty days of craniosynostosis surgery using the healthcare cost and utilization project. J Craniofac Surg 2016;27:1385–90. 10.1097/SCS.0000000000002827 [DOI] [PubMed] [Google Scholar]
  • 39.Brown JR, Stabler ME, Parker DM, et al. Biomarkers improve prediction of 30-day unplanned readmission or mortality after paediatric congenital heart surgery. Cardiol Young 2019;29:1051–6. 10.1017/S1047951119001471 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Parker DM, Everett AD, Stabler ME, et al. The association between cardiac biomarker NT-proBNP and 30-day readmission or mortality after pediatric congenital heart surgery. World J Pediatr Congenit Heart Surg 2019;10:446–53. 10.1177/2150135119842864 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Parker DM, Everett AD, Stabler ME, et al. Biomarkers associated with 30-day readmission and mortality after pediatric congenital heart surgery. J Card Surg 2019;34:329–36. 10.1111/jocs.14038 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Lee Y, Cho H, Gwak G, et al. Scoring system for differentiation of complicated appendicitis in pediatric patients: appendicitis scoring system in children. Glob Pediatr Health 2021;8:2333794X2110222–9. 10.1177/2333794X211022268 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Pecha PP, Hamberis A, Patel TA, et al. Racial disparities in pediatric endoscopic sinus surgery. Laryngoscope 2021;131:e1369–74. 10.1002/lary.29047 [DOI] [PubMed] [Google Scholar]
  • 44.Snyder CW, Bludevich BM, Gonzalez R, et al. Risk factors for complications after abdominal surgery in children with sickle cell disease. J Pediatr Surg 2021;56:711–6. 10.1016/j.jpedsurg.2020.08.034 [DOI] [PubMed] [Google Scholar]
  • 45.Tan GX, Boss EF, Rhee DS. Bronchoscopy for pediatric airway foreign body: thirty-day adverse outcomes in the ACS NSQIP-P. Otolaryngol Head Neck Surg 2019;160:326–31. 10.1177/0194599818800470 [DOI] [PubMed] [Google Scholar]
  • 46.Desai AD, Zhou C, Stanford S, et al. Validity and responsiveness of the pediatric quality of life inventory (PedsQL) 4.0 generic core scales in the pediatric inpatient setting. JAMA Pediatr 2014;168:1114–21. 10.1001/jamapediatrics.2014.1600 [DOI] [PubMed] [Google Scholar]
  • 47.Janjua MB, Reddy S, Samdani AF, et al. Predictors of 90-day readmission in children undergoing spinal cord tumor surgery: a nationwide readmissions database analysis. World Neurosurg 2019;127:e697–706. 10.1016/j.wneu.2019.03.245 [DOI] [PubMed] [Google Scholar]
  • 48.Santos CAD, Rosa CdeOB, Franceschini SdoCC, et al. StrongKids for pediatric nutritional risk screening in Brazil: a validation study. Eur J Clin Nutr 2020;74:1299–305. 10.1038/s41430-020-0644-1 [DOI] [PubMed] [Google Scholar]
  • 49.Stiglic G, Povalej Brzan P, Fijacko N, et al. Comprehensible predictive modeling using regularized logistic regression and comorbidity based features. PLoS One 2015;10:e0144439. 10.1371/journal.pone.0144439 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Stiglic G, Wang F, Davey A, et al. Pediatric readmission classification using stacked regularized logistic regression models. AMIA Annu Symp Proc 2014;2014:1072–81. [PMC free article] [PubMed] [Google Scholar]
  • 51.Wolff P, Graña M, Ríos SA, et al. Machine learning readmission risk modeling: a pediatric case study. Biomed Res Int 2019;2019:1–9. 10.1155/2019/8532892 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Taylor T, Altares Sarik D, Salyakina D. Development and validation of a web-based pediatric readmission risk assessment tool. Hosp Pediatr 2020;10:246–56. 10.1542/hpeds.2019-0241 [DOI] [PubMed] [Google Scholar]
  • 53.Basques BA, Bohl DD, Golinvaux NS, et al. Patient factors are associated with poor short-term outcomes after posterior fusion for adolescent idiopathic scoliosis. Clin Orthop Relat Res 2015;473:286–94. 10.1007/s11999-014-3911-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Martin CT, Pugely AJ, Gao Y, et al. Causes and risk factors for 30-day unplanned readmissions after pediatric spinal deformity surgery. Spine 2015;40:238–46. 10.1097/BRS.0000000000000730 [DOI] [PubMed] [Google Scholar]
  • 55.Sacks JH, Kelleman M, McCracken C, et al. Pediatric cardiac readmissions: an opportunity for quality improvement? Congenit Heart Dis 2017;12:282–8. 10.1111/chd.12436 [DOI] [PubMed] [Google Scholar]
  • 56.Polites SF, Potter DD, Glasgow AE, et al. Rates and risk factors of unplanned 30-day readmission following general and thoracic pediatric surgical procedures. J Pediatr Surg 2017;52:1239–44. 10.1016/j.jpedsurg.2016.11.043 [DOI] [PubMed] [Google Scholar]
  • 57.Vo D, Zurakowski D, Faraoni D. Incidence and predictors of 30-day postoperative readmission in children. Paediatr Anaesth 2018;28:63–70. 10.1111/pan.13290 [DOI] [PubMed] [Google Scholar]
  • 58.Minhas SV, Chow I, Feldman DS, et al. A predictive risk index for 30-day readmissions following surgical treatment of pediatric scoliosis. J Pediatr Orthop 2016;36:187–92. 10.1097/BPO.0000000000000423 [DOI] [PubMed] [Google Scholar]
  • 59.Roddy E, Diab M. Rates and risk factors associated with unplanned Hospital readmission after fusion for pediatric spinal deformity. Spine J 2017;17:369–79. 10.1016/j.spinee.2016.10.008 [DOI] [PubMed] [Google Scholar]
  • 60.Tahiri Y, Fischer JP, Wink JD, et al. Analysis of risk factors associated with 30-day readmissions following pediatric plastic surgery: a review of 5376 procedures. Plast Reconstr Surg 2015;135:521–9. 10.1097/PRS.0000000000000889 [DOI] [PubMed] [Google Scholar]
  • 61.Zhou H, Della PR, Porter P, et al. Risk factors associated with 30-day all-cause unplanned Hospital readmissions at a tertiary children's hospital in Western Australia. J Paediatr Child Health 2020;56:68–75. 10.1111/jpc.14492 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Ryan KS, Son S, Roddy M, et al. Pediatric asthma severity scores distinguish suitable inpatient level of care for children admitted for status asthmaticus. J Asthma 2021;58:151–9. 10.1080/02770903.2019.1680998 [DOI] [PubMed] [Google Scholar]
  • 63.Bradshaw S, Buenning B, Powell A, et al. Retrospective chart review: readmission prediction ability of the high acuity readmission risk pediatric screen (HARRPS) tool. J Pediatr Nurs 2020;51:49–56. 10.1016/j.pedn.2019.12.008 [DOI] [PubMed] [Google Scholar]
  • 64.Brittan MS, Martin S, Anderson L, et al. An electronic health record tool designed to improve pediatric hospital discharge has low predictive utility for readmissions. J Hosp Med 2018;13:779–82. 10.12788/jhm.3043 [DOI] [PubMed] [Google Scholar]
  • 65.Ehwerhemuepha L, Finn S, Rothman M, et al. A novel model for enhanced prediction and understanding of unplanned 30-day pediatric readmission. Hosp Pediatr 2018;8:578–87. 10.1542/hpeds.2017-0220 [DOI] [PubMed] [Google Scholar]
  • 66.Leary JC, Price LL, Scott CER, et al. Developing prediction models for 30-day unplanned readmission among children with medical complexity. Hosp Pediatr 2019;9:201–8. 10.1542/hpeds.2018-0174 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Chotai S, Guidry BS, Chan EW, et al. Unplanned readmission within 90 days after pediatric neurosurgery. J Neurosurg Pediatr 2017;20:542–8. 10.3171/2017.6.PEDS17117 [DOI] [PubMed] [Google Scholar]
  • 68.Sills MR, Hall M, Cutler GJ, et al. Adding social determinant data changes children's hospitals' readmissions performance. J Pediatr 2017;186:150–7. 10.1016/j.jpeds.2017.03.056 [DOI] [PubMed] [Google Scholar]
  • 69.Ehwerhemuepha L, Gasperino G, Bischoff N, et al. HealtheDataLab - a cloud computing solution for data science and advanced analytics in healthcare with application to predicting multi-center pediatric readmissions. BMC Med Inform Decis Mak 2020;20:115. 10.1186/s12911-020-01153-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70.Delaplain PT, Guner YS, Feaster W, et al. Prediction of 7-day readmission risk for pediatric trauma patients. J Surg Res 2020;253:254–61. 10.1016/j.jss.2020.03.068 [DOI] [PubMed] [Google Scholar]
  • 71.Hoenk K, Torno L, Feaster W, et al. Multicenter study of risk factors of unplanned 30-day readmissions in pediatric oncology. Cancer Rep 2021;4:e1343. 10.1002/cnr2.1343 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.O'Connell R, Feaster W, Wang V, et al. Predictors of pediatric readmissions among patients with neurological conditions. BMC Neurol 2021;21:5. 10.1186/s12883-020-02028-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Davidson J, Ding Y, Chan E, et al. Postoperative outcomes of ureteroscopy for pediatric urolithiasis: a secondary analysis of the National surgical quality improvement program pediatric. J Pediatr Urol 2021;17:649.e1–649.e8. 10.1016/j.jpurol.2021.06.004 [DOI] [PubMed] [Google Scholar]
  • 74.Garcia AV, Ladd MR, Crawford T, et al. Analysis of risk factors for morbidity in children undergoing the Kasai procedure for biliary atresia. Pediatr Surg Int 2018;34:837–44. 10.1007/s00383-018-4298-1 [DOI] [PubMed] [Google Scholar]
  • 75.Lee NJ, Fields MW, Boddapati V, et al. The risks, reasons, and costs for 30- and 90-day readmissions after fusion surgery for adolescent idiopathic scoliosis. J Neurosurg 2021;34:245–53. 10.3171/2020.6.SPINE20197 [DOI] [PubMed] [Google Scholar]
  • 76.Sanchez-Luna M, Elola FJ, Fernandez-Perez C, et al. Trends in respiratory syncytial virus bronchiolitis hospitalizations in children less than 1 year: 2004-2012. Curr Med Res Opin 2016;32:693–8. 10.1185/03007995.2015.1136606 [DOI] [PubMed] [Google Scholar]
  • 77.Sherrod BA, Johnston JM, Rocque BG. Risk factors for unplanned readmission within 30 days after pediatric neurosurgery: a nationwide analysis of 9799 procedures from the American College of surgeons national surgical quality improvement program. J Neurosurg Pediatr 2016;18:350–62. 10.3171/2016.2.PEDS15604 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 78.Wheeler KK, Shi J, Nordin AB, et al. U.S. pediatric burn patient 30-day readmissions. J Burn Care Res 2018;39:73–81. 10.1097/BCR.0000000000000596 [DOI] [PubMed] [Google Scholar]
  • 79.Bucholz EM, Toomey SL, Schuster MA. Trends in pediatric hospitalizations and readmissions: 2010-2016. Pediatrics 2019;143:e20181958. 10.1542/peds.2018-1958 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80.Collins GS, Moons KGM. Reporting of artificial intelligence prediction models. Lancet 2019;393:1577–9. 10.1016/S0140-6736(19)30037-6 [DOI] [PubMed] [Google Scholar]
  • 81.Chin DL, Bang H, Manickam RN, et al. Rethinking thirty-day Hospital readmissions: shorter intervals might be better indicators of quality of care. Health Aff 2016;35:1867–75. 10.1377/hlthaff.2016.0205 [DOI] [PMC free article] [PubMed] [Google Scholar]
