Author manuscript; available in PMC: 2022 Nov 26.
Published in final edited form as: Acad Pediatr. 2019 Nov 21;20(4):524–531. doi: 10.1016/j.acap.2019.11.007

Provider-Level and Hospital-Level Factors and Process Measures of Quality Care Delivered in Pediatric Emergency Departments

James P Marcin, Patrick S Romano, Parul Dayal, Madan Dharmar, James M Chamberlain, Nanette Dudley, Charles G Macias, Lise E Nigrovic, Elizabeth C Powell, Alexander J Rogers, Meridith Sonnett, Leah Tzimenatos, Elizabeth R Alpern, Rebecca Andrews-Dickert, Dominic A Borgialli, Erika Sidney, T Charles Casper, Nathan Kuppermann; Pediatric Emergency Care Applied Research Network
PMCID: PMC9701102  NIHMSID: NIHMS1851407  PMID: 31760173

Abstract

Objective:

Differences in the quality of emergency department (ED) care are often attributed to nonclinical factors such as variations in the structure, systems, and processes of care. Few studies have examined these associations among children. We aimed to determine whether process measures of quality of care delivered to patients receiving care in children’s hospital EDs were associated with physician-level or hospital-level factors.

Methods:

We included children (<18 years old) who presented to any of the 12 EDs participating in the Pediatric Emergency Care Applied Research Network (PECARN) between January 2011 and December 2011. We measured quality of care from medical record reviews using a previously validated implicit review instrument with a summary score ranging from 5 to 35, and examined associations between process measures of quality and physician- and hospital-level factors using a mixed-effects linear regression model adjusted for patient case-mix, with hospital site as a random effect.

Results:

Among the 620 ED encounters reviewed, we did not find process measures of quality to be associated with any physician-level factors such as physician sex, years since medical school graduation, or physician training. We found, however, that process measures of quality were positively associated with delivery at freestanding children’s hospitals (1.96 points higher in quality compared to nonfreestanding status, 95% confidence interval: 0.49, 3.43) and negatively associated with higher annual ED patient volume (−0.03 points per thousand patients, 95% confidence interval: −0.05, −0.01).

Conclusion:

Process measures of quality of care delivered to children were higher among patients treated at freestanding children’s hospitals but lower among patients treated at higher volume EDs.

Keywords: emergency care, pediatrics, quality of care


The quality of care delivered to patients receiving treatment in emergency departments (EDs) is highly variable.1 Differences in quality are related to variation in the structure of care, such as ED and hospital resources and equipment, as well as staffing and other factors that influence processes of care.2 Quality of care also varies specifically among pediatric patients, in part due to differences in structure, systems, and processes of care, such as access to pediatric specialists typically regionalized in urban children’s hospitals.3 These disparities are noteworthy because fewer than 20% of children receiving ED care are treated in children’s hospitals.4

Previous studies have found associations between quality of care and physician-level factors, such as physician sex,5 specialty of training,6 and years of experience.7,8 These associations have also been found with hospital factors,9 such as annual patient volume10,11 and waiting times to see a physician.12 Variability in these factors is correlated with important outcomes such as mortality,9,12 ED length of stay,13 appropriateness of admission,14 readmissions,15 and rates of patients leaving EDs without being seen.13 Many of these studies, however, have been limited to adult patients, have been conducted at single institutions, or have evaluated a relatively small number of patients. Few studies have examined the associations of physician-level and hospital-level factors with overall measures of quality of care among children presenting to a large sample of children’s hospitals.6,9,10,16,17

Recently, we tested and validated an ED-specific implicit review instrument on a large sample of children treated in 12 EDs participating in the Pediatric Emergency Care Applied Research Network (PECARN).18,19 This instrument encompasses 4 dimensions of care, including the physician’s initial data gathering, integration of information and development of appropriate diagnoses, initial treatment plans and physician orders, and plan for disposition and follow-up, as well as one item assessing the overall quality of care. We found that this process of care instrument had high construct validity and the summary score correlated well with condition-specific, criterion-based explicit quality measures.18,19 Specifically, we found that a difference of 1 in the summary quality of care score was significantly associated with differences in quality as measured by these 4 condition-specific quality measures. Using this instrument, we recently published our findings investigating associations between quality of care and patient-level factors and found that overall quality did not differ by patient age, sex, race/ethnicity, and payment source, but did vary by the presenting chief complaint.20

The purpose of this study was to examine the association between the quality of care measured using this process of care implicit review instrument and physician- and hospital-level factors among the same cohort of children receiving care in PECARN EDs. We hypothesized that physician factors such as specialty training and years of experience, and hospital factors such as freestanding status, annual patient volume, and waiting time to see a physician would be associated with summary quality of care scores, given a comparable patient case-mix. Based on previous research,8–10,12,21 we specifically hypothesized that care provided by more experienced, subspecialty-trained physicians, and care provided in freestanding children’s hospitals, would be associated with higher process measures of quality of care, after adjusting for differences in patients’ clinical and demographic characteristics.

Methods

Study Design and Hospital Sample

We performed a retrospective cohort study of children presenting to 12 EDs participating in PECARN, the only federally funded pediatric emergency medicine research collaborative in the United States. The same cohort of patients was previously used to test and validate the implicit review instrument as well as to evaluate whether patient-level factors were associated with quality of care.18,20 At the time of the study, PECARN comprised 4 geographically distinct research nodes with 22 participating EDs. For the purpose of these studies, we nonrandomly selected 3 EDs from each of the 4 nodes for equal nodal representation. The 3 EDs were specifically selected to maximize clinician and patient diversity, with differences in annual volume (large and small), treating physicians (general emergency medicine and pediatric emergency medicine), and patient populations (including racial/ethnic diversity).

Study Setting and Population

Children younger than 18 years of age who presented to any of the 12 study EDs for evaluation between January 1, 2011 and December 31, 2011 were eligible for inclusion. We randomly sampled patient visits from the ED logs at each study hospital using a 2-stage date and patient sampling scheme generated by the PECARN Data Coordinating Center. First, the study year was stratified into six 2-month blocks (January–February; March–April; etc.) to ensure an equal distribution of patient encounters throughout the calendar year. The sampling scheme then provided a list of random dates and an associated list of random numbers. For each randomly selected date, a patient encounter was identified using the random number. If the patient encounter did not qualify, the next randomly sampled patient from that date was evaluated until an eligible patient encounter was identified. We excluded medical records of children who were seen in the ED for scheduled procedures (eg, suture removal), those transiently evaluated in the ED before direct admission to the hospital, and those who left the ED without being seen by an attending physician. A minimum of 50 records was obtained and reviewed from each participating ED, for a total of 620 medical records.18,19
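As an illustration of this 2-stage scheme, a minimal Python sketch follows. All names here (ed_log_by_date, is_eligible, the visit fields) are hypothetical stand-ins; the actual sampling lists were generated by the PECARN Data Coordinating Center.

```python
import random

def is_eligible(visit):
    """Hypothetical eligibility check mirroring the stated exclusions."""
    return (visit["age_years"] < 18
            and not visit["scheduled_procedure"]
            and not visit["direct_admission"]
            and not visit["left_without_being_seen"])

def sample_encounters(ed_log_by_date, n_target=50, seed=2011):
    """Two-stage sampling sketch: pick random dates within six 2-month
    blocks, then use a random number to select an encounter on each date,
    advancing to the next sampled encounter if one is ineligible.
    ed_log_by_date maps datetime.date -> list of visit dicts."""
    rng = random.Random(seed)
    sampled = []
    while len(sampled) < n_target:
        for first_month in range(1, 13, 2):  # Jan-Feb, Mar-Apr, ...
            dates = [d for d in ed_log_by_date
                     if first_month <= d.month <= first_month + 1]
            date = rng.choice(dates)
            visits = ed_log_by_date[date]
            start = rng.randrange(len(visits))  # the "random number"
            for offset in range(len(visits)):
                visit = visits[(start + offset) % len(visits)]
                if is_eligible(visit):
                    sampled.append(visit)
                    break
            if len(sampled) == n_target:
                break
    return sampled
```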

Study Protocol

After removing all patient, hospital, and physician identifiers, the research coordinator at each participating hospital photocopied medical records of sampled patients. Essential components of the medical records for quality evaluation included ED physician notes, triage nurse notes, ED nursing notes, all physician orders, all medication orders, laboratory results, and discharge instructions. Nonessential elements that were photocopied when available included radiology results and consultation reports. No inpatient records were considered. The research coordinators abstracted relevant patient data from each medical record and uploaded the deidentified record to a secure server at the PECARN Data Coordinating Center for review.

Quality of Care Score and Measurement

The process measures of quality of care provided to each child in the ED were assessed using the previously validated and published implicit review instrument.19 Briefly, this 5-item instrument includes 4 items assessing different dimensions of care and 1 item assessing the overall quality of care. The 4 dimension-specific items focus on processes of care and include: the initial data gathering about acute problems; the integration of information and development of appropriate diagnoses; the initial treatment plan and orders; and the plan for disposition and follow-up. All 5 items were assessed on a 7-point ordered adjectival scale ranging from “extremely inappropriate” to “extremely appropriate.” We then calculated a summary quality of care score, which was the sum of the 5 item-specific scores from each record, resulting in a score ranging from 5 to 35 for each patient.18,19
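The scoring arithmetic is simple enough to state as code. A minimal sketch, with a function name of our own choosing:

```python
def summary_quality_score(item_scores):
    """Sum the 5 item ratings, each on the 7-point adjectival scale
    (1 = extremely inappropriate, 7 = extremely appropriate), giving
    a summary score between 5 and 35."""
    assert len(item_scores) == 5
    assert all(1 <= s <= 7 for s in item_scores)
    return sum(item_scores)

# Example: a record rated 6, 7, 6, 6, 7 yields a summary score of 32.
print(summary_quality_score([6, 7, 6, 6, 7]))  # 32
```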

Recently, we demonstrated that the instrument had good internal consistency, moderate inter-rater reliability, and high inter-rater agreement. We also demonstrated that the summary quality of care score correlated well with 4 condition-specific, criterion-based explicit quality of care instruments for asthma, febrile seizure, diarrhea and dehydration, and head trauma.

Each deidentified medical record was randomly assigned to 4 of the 8 physician reviewers for independent assessments of quality.19 The physician reviewers did not review records from their own institution and were blinded to the site and physician caring for the patient. Prior to reviewing the medical records, all reviewers met for a 1-day, in-person training session to review the manual of operations. The group discussed general principles of structured implicit review and how the instrument should be applied, outlined anchors for the adjectival scale, and reviewed several sample medical records both individually and as a group. All 8 reviewers were board certified in pediatric emergency medicine (PEM).
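The assignment constraint (each record reviewed by 4 of the 8 reviewers, never by a reviewer from the record’s own institution) could be implemented along the following lines. This is a hypothetical sketch, not the study’s actual assignment procedure:

```python
import random

def assign_reviewers(records, reviewers, per_record=4, seed=0):
    """Randomly assign each deidentified record to `per_record` reviewers,
    excluding reviewers from the record's own institution. Assumes at
    least `per_record` reviewers remain after the exclusion."""
    rng = random.Random(seed)
    assignments = {}
    for record in records:
        eligible = [r for r in reviewers if r["site"] != record["site"]]
        assignments[record["id"]] = rng.sample(eligible, per_record)
    return assignments
```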

Physician-Level Factors, Hospital-Level Factors, and Risk Adjustment

We abstracted several factors that could be related to quality of care. Physician factors, collected by each participating site, included physician sex, years since medical school graduation, and type of residency training (pediatrics, general emergency medicine, pediatric emergency medicine). Hospital factors included whether or not the children’s hospital was freestanding, the total annual ED patient volume, the number of pediatric beds in the ED, and the proportion of patients who left the ED without being seen by a clinician during the calendar year of the study. For each selected encounter, we also calculated the patient ED wait time, defined as the time between a patient’s presentation to the ED and initial physician contact as documented in the notes of the medical record.
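The wait-time measure reduces to a timestamp difference. A trivial sketch, with hypothetical argument names:

```python
from datetime import datetime

def wait_time_minutes(presentation, first_physician_contact):
    """Minutes from ED presentation to the first documented physician
    contact, per the definition above."""
    return (first_physician_contact - presentation).total_seconds() / 60.0

# Example: arrival 14:05, first physician note 15:02 -> 57 minutes.
print(wait_time_minutes(datetime(2011, 3, 1, 14, 5),
                        datetime(2011, 3, 1, 15, 2)))  # 57.0
```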

Based on previous research and the findings of our recently published study using the same data, we included patient age, sex, race/ethnicity, triage category, payment source/insurance type, and chief complaint in the multivariable analysis.20 Race and ethnicity were recategorized into a single variable (Race/Ethnicity) using a previously described method.22 Chief complaints were categorized into Pediatric Emergency Reason for Visit Clusters (PERCs),23 which were further collapsed into 8 broad chief complaint categories.20

Data Analysis

The mean summary quality of care score across reviewers was the main dependent variable in our analyses. For univariable analyses, we compared mean quality of care scores using the Student’s t test or ANOVA for categorical variables, and compared mean quality of care scores for continuous variables using linear regression, testing for significance using likelihood ratio tests. We then examined the associations of the mean summary quality of care scores with physician-level and hospital-level factors using a mixed-effects linear regression model adjusted for patient factors, with hospital site as a random effect to account for clustering of observations by source hospital. We did not account for physician-level clustering in our model, as most of the physicians providing care for the visits selected in our sample were represented only once. Physician-, hospital-, and patient-level factors were chosen for inclusion a priori, based on our hypotheses and after considering clinical and statistical associations from the univariable analyses. All analyses were performed using SAS Version 9.4 (SAS Institute, Cary, NC). P values <.05 were considered significant. The study was approved by the institutional review board at each participating hospital and the data coordinating center.
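The original models were fit in SAS 9.4. For readers who want to reproduce the model structure, an equivalent specification in Python’s statsmodels might look like the following sketch; every column name is an assumption, not the study’s actual dataset:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per reviewed encounter; 'quality' is the mean summary score
# across the 4 reviewers. All column names here are illustrative.
df = pd.read_csv("encounters.csv")

model = smf.mixedlm(
    "quality ~ md_female + yrs_since_grad + C(training)"
    " + freestanding + volume_thousands + lwbs_pct + wait_hours"
    " + age_years + female + C(race_ethnicity) + C(complaint)"
    " + C(payer) + C(triage)",
    data=df,
    groups=df["site"],  # random intercept for hospital site
)
result = model.fit()
print(result.summary())

# Intraclass correlation for site: between-site variance over total.
sigma2_site = float(result.cov_re.iloc[0, 0])
icc = sigma2_site / (sigma2_site + result.scale)
```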

Results

A total of 620 ED encounters were included in the study. Approximately 50 medical records (range: 47–55) were reviewed from each of the 12 participating EDs. In the univariable analyses, we found variation in the mean summary quality of care scores based on the responsible clinician’s specialty training (Table 1). We also found variation in the mean summary quality of care scores across some hospital-level factors, including whether care was provided in a freestanding children’s hospital, annual ED volume during the study period, the number of pediatric beds in the ED, and the proportion of patients who left without being seen. Among the patient-level factors, the mean summary quality of care scores varied based on the patient’s sex, race/ethnicity, and insurance status, as well as the patient’s triage category and chief complaint category.

Table 1.

Association of Mean Summary Quality of Care Scores With Physician-Level, Hospital-Level, and Patient-Level Factors

Characteristic (N = 620) N (%) Mean Summary Quality of Care Scores (SD) P
Physician-level factors
Sex of the clinician .10
 Female 372 (60.0) 30.7 (2.1)
 Male 247 (39.8) 30.4 (2.2)
Responsible clinician’s specialty training .005
 Pediatric emergency medicine 410 (66.1) 30.7 (2.2)
 General emergency medicine 99 (16.0) 30.0 (2.2)
 Pediatrics 80 (12.9) 30.5 (2.0)
 Nurse practitioner 20 (3.2) 31.7 (2.1)
 Physician assistant 9 (1.5) 29.5 (2.9)
Years since medical school graduation, mean (SD) 14.5 (7.6) −0.21 (0.12)* .07
Hospital-level factors
Freestanding children’s hospital .001
 Yes 519 (83.7) 30.7 (2.1)
 No 101 (16.3) 30.0 (2.5)
Annual pediatric patient volume, mean (SD) 49,445.7 (23,934.1) 0.10 (0.04)† .006
Waiting time to see a physician in minutes, mean (SD) 57.3 (61.6) 0.34 (0.14)‡ .02
Pediatric beds in the ED, mean (SD) 37.1 (17.3) −1.23 (0.53)§ .02
Percent patients who left without being seen, mean (SD) 1.5 (1.3) −0.14 (0.07) .04
Patient-level factors
Patient’s age category .49
 0–2 years 241 (38.9) 30.5 (2.2)
 2–8 years 225 (36.3) 30.7 (2.1)
 8 years or above 153 (24.7) 30.7 (2.3)
Patient’s sex .02
 Female 276 (44.6) 30.4 (2.3)
 Male 343 (55.4) 30.8 (2.0)
Patient’s race/ethnicity .002
 Hispanic 159 (25.7) 30.5 (2.0)
 White, non-Hispanic 203 (32.8) 31.0 (2.1)
 African American, non-Hispanic 175 (28.3) 30.2 (2.3)
 Other 82 (13.2) 30.9 (2.2)
Patient’s primary payment source <.001
 Public insurance 384 (62.0) 30.4 (2.1)
 Private insurance 204 (33.0) 31.1 (2.1)
 Uninsured 31 (5.0) 29.9 (2.5)
Patient’s triage category .04
 Nonurgent 38 (6.1) 29.8 (2.6)
 Urgent 437 (70.6) 30.6 (2.2)
 Emergent 144 (23.3) 30.8 (1.9)
Patient’s chief complaint category <.001
 Trauma 135 (21.8) 31.2 (2.3)
 Abdominal pain 26 (4.2) 29.6 (2.0)
 Asthma or wheezing 76 (12.3) 30.9 (1.8)
 Seizures or neurological symptoms 60 (9.7) 30.2 (2.3)
 Upper respiratory symptoms 69 (11.1) 30.2 (2.3)
 Gastroenteritis 70 (11.3) 30.5 (2.0)
 Fever 86 (13.9) 30.2 (1.8)
 Other 97 (15.7) 30.8 (2.3)
* Change in quality of care score per 10-year increase in time since medical school graduation.
† Change in quality of care score per 10,000-patient increase in annual volume.
‡ Change in quality of care score per 100-minute increase in wait time.
§ Change in quality of care score per 100-bed increase in pediatric beds.

In the mixed-effects model (Table 2), however, few factors retained their significance. Freestanding children’s hospital status was associated with higher mean summary quality of care scores (1.96 points higher, 95% confidence interval [CI]: 0.49, 3.43). Higher annual ED volume was associated with lower mean summary quality of care scores (−0.03 points per thousand patients, 95% CI: −0.05, −0.01). Some chief complaint categories were also significantly associated with quality of care: children presenting with fever, abdominal pain, and upper respiratory symptoms had lower quality of care scores, by adjusted means of −0.62 points (95% CI: −1.22, −0.02), −1.02 points (95% CI: −1.91, −0.14), and −0.77 points (95% CI: −1.40, −0.14), respectively. Within the final multivariable model, the intraclass correlation for site was 0.11 and the estimated random-effect (site) variance was 0.46.
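The reported intraclass correlation follows from the standard variance decomposition ICC = sigma²_site / (sigma²_site + sigma²_residual). A quick arithmetic check using only the two numbers reported above:

```python
sigma2_site, icc = 0.46, 0.11
# Solve ICC = s_site / (s_site + s_resid) for the residual variance.
sigma2_resid = sigma2_site * (1 - icc) / icc
print(round(sigma2_resid, 2))  # ~3.72, so total variance ~4.18
print(round(sigma2_site / (sigma2_site + sigma2_resid), 2))  # 0.11
```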

Table 2.

Multivariable Analysis Examining Association Between the Mean Summary Quality of Care Scores With Physician-, Hospital-, and Patient-Level Characteristics*

Characteristic Estimate (95% CI) P
Physician-level factors
Sex of the clinician
 Male Reference .75
 Female 0.06 (−0.29, 0.41)
Years since medical school graduation −0.02 (−0.04, 0.01) .21
Responsible clinician’s specialty training
 Pediatric emergency medicine Reference .12
 General emergency medicine −0.34 (−0.91, 0.24)
 Pediatrics −0.42 (−0.97, 0.14)
 Nurse practitioner 1.15 (−0.06, 2.36)
 Physician assistant 0.67 (−1.63, 2.97)
 Other 2.33 (−1.62, 6.29)
Hospital-level factors
Freestanding children’s hospital
 No Reference .01
 Yes 1.96 (0.49, 3.43)
Annual pediatric patient volume, thousands −0.03 (−0.05, −0.01) .01
Pediatric patients who left without being seen, % 0.01 (−0.32, 0.33) .97
Waiting time to see a physician, hours −0.06 (−0.23, 0.12) .51
Patient-level factors
Patient age, years 0.01 (−0.02, 0.04) .50
Patient’s sex
 Male Reference .11
 Female −0.26 (−0.59, 0.06)
Patient’s race/ethnicity
 White, non-Hispanic Reference .91
 African American, non-Hispanic 0.13 (−0.36, 0.62)
 Hispanic 0.02 (−0.48, 0.52)
 Other 0.15 (−0.38, 0.69)
Patient’s chief complaint category
 Other Reference <.001
 Trauma 0.52 (−0.02, 1.05)
 Abdominal pain −1.02 (−1.91, −0.14)
 Asthma or wheezing 0.10 (−0.51, 0.71)
 Seizures or neurological symptoms −0.41 (−1.06, 0.25)
 Upper respiratory symptoms −0.77 (−1.40, −0.14)
 Gastroenteritis −0.30 (−0.95, 0.34)
 Fever −0.62 (−1.22, −0.02)
Patient’s primary payment source
 Public insurance Reference .16
 Private insurance 0.24 (−0.16, 0.64)
 Uninsured −0.54 (−1.36, 0.28)
Patient’s triage category
 Nonurgent Reference .84
 Urgent 0.19 (−0.55, 0.93)
 Emergent/critical 0.13 (−0.70, 0.95)
* Analysis conducted using a mixed-effects multivariable linear regression model with hospital site as a random effect.
† P < .05.

Discussion

In this retrospective cohort analysis, we evaluated process measures of quality of care provided to a random sample of children presenting to 12 EDs participating in PECARN, using a previously validated quality of care implicit review instrument. We found that process measures of quality of care provided to children were higher at freestanding children’s hospital EDs compared to nonfreestanding children’s hospital EDs. We also found an inverse association between process measures of quality of care provided to these children and the ED’s annual pediatric patient volume. Unlike other studies examining the effects of structural and process factors on patient outcomes, we did not find differences in quality associated with physician-level factors such as the physician’s sex, years of experience, or specialty training.5–8 Consistent with previous research, we did find that quality of care was associated with a patient’s presenting chief complaint but did not differ by demographic factors such as age, sex, race/ethnicity, or primary payment source.

Our finding that freestanding children’s hospitals provide higher quality of care on process measures is consistent with other research evaluating quality of care outcome measures. In a recent study, investigators found that critically ill children receiving care in pediatric intensive care units (PICUs) located within freestanding children’s hospitals had lower risk-adjusted mortality than children receiving care in PICUs located within nonfreestanding children’s hospitals.9 Factors that might contribute to the differences in outcomes observed between freestanding and nonfreestanding children’s hospitals have been suggested in previous research: freestanding children’s hospitals have more physical resources, specialized staff, and other services dedicated solely to the care of children.24 Another study reported that freestanding children’s hospitals have greater nurse staffing coverage and more support services than nonfreestanding hospitals, which could lead to more complete patient surveillance and monitoring and better outcomes.25 However, better availability and/or allocation of factors such as nursing staff at freestanding children’s hospitals may not have a significant impact on the physician-directed quality of care measured by our implicit quality of care instrument. Furthermore, while pediatric EDs located in freestanding children’s hospitals are more likely to be staffed with physicians trained in pediatric emergency medicine, as compared to general emergency medicine or pediatrics, we did not find that the specialty training of the clinician was associated with quality of care in the multivariable model. This could mean that factors other than the clinician’s specific training, such as the physician’s exposure to pediatric cases or other structural and process factors, contribute to the measured quality of care delivered to children on an individual level.

Our finding that the quality of care delivered to children was lower among patients treated at busier EDs, as measured by annual volume, is supported by some studies.10,26,27 However, our results are not consistent with the general consensus that patient volume is positively associated with outcomes.11,28–30 A previous study of care delivered in 15 PICUs suggested that there may be a “ceiling effect” or “tipping point” in the volume-outcome relationship, resulting in lower quality and worse outcomes once a threshold or very large patient volume is reached.10 For example, one group of investigators examined the volume-outcome relationship in the National Pediatric Trauma Registry and found that among 37 pediatric trauma centers, “mid-sized” programs had the lowest severity-adjusted mortality.31 These findings could be explained by an ED or health system that is overwhelmed, too busy, or operating at a point of decreased effectiveness. While larger patient volumes are likely to improve outcomes for specialized, procedure-based therapies,32 increasing volume at already specialized EDs beyond a certain threshold may not contribute additional expertise or efficiencies in care. In fact, large ED volumes leading to overcrowding are associated with poorer care for asthma and long bone fractures.33,34 Of note, we found that annual ED patient volume had a small but positive association with quality of care in the univariable analysis, but the direction of this association reversed in the multivariable models. This effect, commonly referred to as “Simpson’s paradox,” points to the potential influence of other physician, hospital, or patient factors on the volume-quality relationship in pediatric emergency care.35 Systematic reviews have found that physician-level and/or hospital-level factors explain the association between patient volume and patient outcomes in many studies.28,30 Finally, because the mean annual patient volumes at the 12 EDs included in our study are relatively high, our results do not exclude a possible positive volume-outcome relationship among smaller and mid-sized EDs, beyond which a ceiling effect may apply.
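To make this reversal concrete, consider a toy example (fabricated numbers chosen only to exhibit the sign flip, not study data): within each site, quality falls as volume rises, yet the pooled correlation across sites is positive because the higher-volume site also scores higher overall.

```python
import pandas as pd

# Toy data (not from the study): site B is both higher-volume and
# higher-scoring than site A, but within each site the slope is negative.
df = pd.DataFrame({
    "site":    ["A", "A", "A", "B", "B", "B"],
    "volume":  [20, 30, 40, 60, 70, 80],   # thousands of visits/year
    "quality": [29, 28, 27, 33, 32, 31],   # summary scores
})

print(df["volume"].corr(df["quality"]))  # pooled: ~0.71 (positive)
for site, g in df.groupby("site"):
    print(site, g["volume"].corr(g["quality"]))  # within-site: -1.0 each
```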

Our study has limitations. First, the instrument used to measure quality of care focuses on process-of-care measures, namely physician and provider-led decision-making, which may not capture other differences in quality related to processes or outcomes of care. For example, there may be differences in patient/family satisfaction with care, quality of nursing care, and other nonphysician-directed aspects of care quality that are not captured by the instrument. Furthermore, it is difficult to relate the magnitude of the differences observed in the quality of care scores to differences in clinical quality and outcomes. The implicit review instrument is limited by the completeness and accuracy of the source documents and did not consider final discharge diagnoses or ultimate patient outcomes, such as whether or not patients’ conditions improved after treatment. While our instrument has been shown to correlate well with condition-specific, criterion-based explicit measures of care, it is difficult to quantify these differences or to correlate them with more familiar measures of quality. Another limitation is that we did not consider the potential impact of student and/or resident involvement in our analyses; in our conceptual framework, we considered the provider ultimately responsible for the decision-making and care of the patient. Finally, while our sample was derived from children treated at 12 children’s hospital EDs across the country, it included only large academic children’s hospitals, only 2 of which are nonfreestanding children’s hospitals, and only approximately 50 encounters from each site; as a result, our findings may not accurately reflect the patient population and/or physician-directed quality of care for children receiving treatment at non-children’s hospitals, including community and critical access hospitals. Further research is warranted to replicate our findings in a larger and more broadly based sample of EDs.

While our study has limitations, it also has strengths. First, we used a previously validated implicit review instrument that, unlike disease-specific measures, is widely applicable to a variety of conditions in the ED. The peer-review process used in implicit review ensures that quality of care is evaluated using the most current knowledge of physicians and is considered a robust means of grading processes and quality of care in aggregate. Of note, implicit review instruments are typically used for research and administrative evaluations rather than for evaluating individual clinical assessments or for disseminating quality data to the public. Finally, we evaluated the medical records of children presenting to 12 children’s hospital EDs across the country and included the implicit review evaluations from 8 different pediatric emergency medicine physicians from 8 different institutions.

In conclusion, we did not find any physician-level or patient-level demographic factors to be associated with provider-directed process measures of quality of care delivered to a large cohort of pediatric patients presenting to 12 children’s hospital EDs. We found that the freestanding status of a children’s hospital was associated with higher process measures of quality of care, whereas annual patient volume was negatively associated with process measures of quality of care. These findings support the regionalization of ED services at freestanding children’s hospitals but caution that EDs could become overwhelmed, with quality compromised, once a threshold or very large patient volume is reached.

Acknowledgments

What’s New.

Our study shows that hospital-level, but not physician-level, factors are associated with process measures of quality in pediatric emergency departments (EDs). Higher measures of quality were found at freestanding children’s hospitals, whereas lower measures of quality were found at the highest-volume EDs.

Financial statement:

This work was supported by the Agency for Healthcare Research and Quality grant # 1R01HS019712. This project was also supported in part by the Health Resources and Services Administration (HRSA), Maternal and Child Health Bureau (MCHB), Emergency Medical Services for Children (EMSC) Network Development Demonstration Program under cooperative agreements U03MC00008, U03MC00001, U03MC00003, U03MC00006, U03MC00007, U03MC22684, and U03MC22685. This information or content and conclusions are those of the author and should not be construed as the official position or policy of, nor should any endorsements be inferred by HRSA, HHS, or the US Government.

Footnotes

The authors have no conflicts of interest to disclose.

References

1. 2014 Healthcare Quality and Disparities Report. Rockville, MD: Agency for Healthcare Research and Quality; 2015. AHRQ Pub. No. 15-0007.
2. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260:1743–1748.
3. Franca UL, McManus ML. Trends in regionalization of hospital care for common pediatric conditions. Pediatrics. 2018;141:e20171940.
4. Remick K, Kaji AH, Olson L, et al. Pediatric readiness and facility verification. Ann Emerg Med. 2016;67:320–328.e1.
5. Tsugawa Y, Jena AB, Figueroa JF, et al. Comparison of hospital mortality and readmission rates for Medicare patients treated by male vs female physicians. JAMA Intern Med. 2017;177:206–213.
6. Dharmar M, Marcin JP, Romano PS, et al. Quality of care of children in the emergency department: association with hospital setting and physician training. J Pediatr. 2008;153:783–789.
7. Tsugawa Y, Newhouse JP, Zaslavsky AM, et al. Physician age and outcomes in elderly patients in hospital in the US: observational study. BMJ. 2017;357:j1797.
8. Goodwin JS, Salameh H, Zhou J, et al. Association of hospitalist years of experience with mortality in the hospitalized Medicare population. JAMA Intern Med. 2018;178:196–203.
9. Gupta P, Rettiganti M, Fisher PL, et al. Association of freestanding children’s hospitals with outcomes in children with critical illness. Crit Care Med. 2016;44:2131–2138.
10. Marcin JP, Song J, Leigh JP. The impact of pediatric intensive care unit volume on mortality: a hierarchical instrumental variable analysis. Pediatr Crit Care Med. 2005;6:136–141.
11. Kahn JM. What’s new in ICU volume-outcome relationships? Intensive Care Med. 2013;39:1635–1637.
12. Guttmann A, Schull MJ, Vermeulen MJ, et al. Association between waiting times and short term mortality and hospital admission after departure from emergency department: population based cohort study from Ontario, Canada. BMJ. 2011;342:d2983.
13. Handel DA, Fu R, Vu E, et al. Association of emergency department and hospital characteristics with elopements and length of stay. J Emerg Med. 2014;46:839–846.
14. Chamberlain JM, Patel KM, Pollack MM. Association of emergency department care factors with admission and discharge decisions for pediatric patients. J Pediatr. 2006;149:644–649.
15. Hyder O, Dodson RM, Nathan H, et al. Influence of patient, physician, and hospital factors on 30-day readmission following pancreatoduodenectomy in the United States. JAMA Surg. 2013;148:1095–1102.
16. Schuster MA. Measuring quality of pediatric care: where we’ve been and where we’re going. Pediatrics. 2015;135:748–751.
17. Pollack MM, Alexander SR, Clarke N, et al. Improved outcomes from tertiary center pediatric intensive care: a statewide comparison of tertiary and nontertiary care facilities. Crit Care Med. 1991;19:150–159.
18. Marcin JP, Romano PS, Dharmar M, et al. Implicit review instrument to evaluate quality of care delivered by physicians to children in emergency departments. Health Serv Res. 2018;53:1316–1334.
19. Dharmar M, Marcin JP, Kuppermann N, et al. A new implicit review instrument for measuring quality of care delivered to pediatric patients in the emergency department. BMC Emerg Med. 2007;7:13.
20. Marcin JP, Romano PS, Dayal P, et al. Patient-level factors and the quality of care delivered in pediatric emergency departments. Acad Emerg Med. 2018;25:301–309.
21. Barata I, Brown KM, Fitzmaurice L, et al. Best practices for improving flow and care of pediatric patients in the emergency department. Pediatrics. 2015;135:e273–e283.
22. Natale JE, Joseph JG, Rogers AJ, et al. Cranial computed tomography use among children with minor blunt head trauma: association with race/ethnicity. Arch Pediatr Adolesc Med. 2012;166:732–737.
23. Gorelick MH, Alpern ER, Alessandrini EA. A system for grouping presenting complaints: the pediatric emergency reason for visit clusters. Acad Emerg Med. 2005;12:723–731.
24. Leyenaar JK, Ralston SL, Shieh MS, et al. Epidemiology of pediatric hospitalizations at general hospitals and freestanding children’s hospitals in the United States. J Hosp Med. 2016;11:743–749.
25. Cimiotti JP, Barton SJ, Chavanu Gorman KE, et al. Nurse reports on resource adequacy in hospitals that care for acutely ill children. J Healthc Qual. 2014;36:25–32.
26. Horwitz LI, Lin Z, Herrin J, et al. Association of hospital volume with readmission rates: a retrospective cross-sectional study. BMJ. 2015;350:h447.
27. Markovitz BP, Kukuyeva I, Soto-Campos G, et al. PICU volume and outcome: a severity-adjusted analysis. Pediatr Crit Care Med. 2016;17:483–489.
28. Mesman R, Westert GP, Berden BJ, et al. Why do high-volume hospitals achieve better outcomes? A systematic review about intermediate factors in volume-outcome relationships. Health Policy. 2015;119:1055–1067.
29. Sasaki R, Yasunaga H, Matsui H, et al. Hospital volume and mortality in mechanically ventilated children: analysis of a National Inpatient Database in Japan. Pediatr Crit Care Med. 2016;17:1041–1044.
30. Nguyen YL, Wallace DJ, Yordanov Y, et al. The volume-outcome relationship in critical care: a systematic review and meta-analysis. Chest. 2015;148:79–92.
31. Tepas JJ 3rd, Patel JC, DiScala C, et al. Relationship of trauma patient volume to outcome experience: can a relationship be defined? J Trauma. 1998;44:827–830; discussion 830–831.
32. Finks JF, Osborne NH, Birkmeyer JD. Trends in hospital volume and operative mortality for high-risk surgery. N Engl J Med. 2011;364:2128–2137.
33. Sills MR, Fairclough D, Ranade D, et al. Emergency department crowding is associated with decreased quality of care for children with acute asthma. Ann Emerg Med. 2011;57:191–200.e1-7.
34. Sills MR, Fairclough DL, Ranade D, et al. Emergency department crowding is associated with decreased quality of analgesia delivery for children with pain related to acute, isolated, long-bone fractures. Acad Emerg Med. 2011;18:1330–1338.
35. Simpson EH. The interpretation of interaction in contingency tables. J R Stat Soc Series B. 1951;13:238–241.
