Author manuscript; available in PMC: 2023 Jun 1.
Published in final edited form as: J Patient Saf. 2021 Dec 28;18(4):e741–e746. doi: 10.1097/PTS.0000000000000938

Relationships between pediatric safety indicators across a national sample of pediatric hospitals: dispelling the myth of the ‘safest’ hospital

Carly E Milliren 1, George Bailey 2, Dionne A Graham 3,4, Al Ozonoff 4,5
PMCID: PMC9136151  NIHMSID: NIHMS1744625  PMID: 35617599

INTRODUCTION

Over the last two decades, hospitals have increasingly collected data on multiple facets of clinical performance. These range from process measures that describe how well systems perform or how often physicians follow evidence-based guidelines, to outcome measures that indicate how well a provider or a hospital achieves measurable goals for clinical care. Patient safety has drawn more attention as an important healthcare policy issue since the publication of two landmark Institute of Medicine (IOM) reports in 2000 and 2001: ‘To Err Is Human’1 and ‘Crossing the Quality Chasm’.2 Following these publications, the focus of the healthcare quality community turned to measurement and benchmarking as a primary strategy to improve system-wide patient safety and quality of care.3 Safety is a fundamental domain of hospital quality,4 a commonly used metric to rank hospitals on quality of care,5 and a core pillar of high-reliability organizations.6,7

Concurrent with the IOM reports, in 2002 the Joint Commission on Accreditation of Healthcare Organizations required accredited hospitals to collect and report data on a minimum of two out of four measure sets.8,9 Since then, the use of electronic medical records has risen, driven in part by incentives for Meaningful Use, and the number of quality measures collected by hospitals has increased accordingly.10 Routinely collected, timely data on safety and quality of clinical care are now an important part of hospital operations.11-13 As such, the range of available clinical safety measures in the hospital setting is now broad and diverse, resulting in a complicated measurement landscape. Quantifying hospital performance is fundamental to a learning healthcare system, and system-wide quality measurement has resulted in substantial improvements in safety and quality.3,8 However, the large scale of available data makes it challenging to identify important signals amidst the noise of a crowded data background.14

Precisely because of this complex measurement landscape, there is a natural impulse to simplify the framework by deriving summary measures that facilitate comparisons across hospitals, i.e., ranking by performance.15 Previous work has shown that various hospital ranking systems arrive at discordant conclusions even when based on identical underlying data.16 This naturally leads to confusion or uncertainty among patients acting as informed healthcare consumers,16 who aim to identify a suitable hospital from recommendations promulgated by institutions that hold public trust and authority. This complexity may also pose challenges to hospital leaders who rely upon these measures to benchmark performance against peer institutions and to identify target areas for performance improvement.

When multiple interrelated measures are collected, there will inevitably be overlap and redundancy between measures, which we can quantify using the observed correlation and covariance. Redundant or related measurements can be valuable to confirm observed performance differences between hospitals; however, collinearity can also invalidate certain statistical models and tests. Therefore, identifying areas of overlap is important to make statistically valid comparisons and better distinguish between high- and low-performing hospitals. Principal Component Analysis (PCA) is a common statistical method for studying these inter-relationships in high-dimensional data and has been used to study relationships between safety and quality measures from adult hospitals.17

The present study aimed to examine relationships within a commonly used set of pediatric safety measures and to evaluate the use of these measures for performance ranking of pediatric hospitals. We used pediatric safety measures from the Agency for Healthcare Research and Quality (AHRQ) Pediatric Quality Indicators (PDI) and Centers for Medicare and Medicaid Services (CMS) Hospital-Acquired Conditions (HAC), collected from a national sample of freestanding pediatric hospitals, to analyze quantitative relationships between these measures and consider the implications for performance ranking. We hypothesized that a unidimensional summary measure of patient safety is insufficient to characterize the variance and covariance between this set of commonly used safety measures. We further hypothesized that hospital rankings using these safety data would not be robust to the choice of summary measure.

METHODS

Data

We analyzed data from the Pediatric Health Information System (PHIS) database, a national comparative database compiled by the Children’s Hospital Association (Lenexa, KS) and used by member hospitals for improvement and benchmarking purposes.18 The database consists of administrative and billing data from inpatient, ambulatory surgery, emergency department, and observation unit patient encounters for these hospitals. Encounters are flagged in PHIS for meeting the denominator and/or numerator definitions for the PDIs using SAS code provided by AHRQ. The numerator definitions for HACs are derived from a list of ICD-10 diagnosis codes provided by CMS, while all inpatient encounters are counted in the denominator. The PDIs are limited by definition to patients 17 years and younger, with the exception of neonatal bloodstream infection, which is limited to neonates (<30 days). HACs are calculated for all inpatients with no exclusions.

Measures

We examined 21 pediatric safety measures, including seven AHRQ Pediatric Quality Indicators (PDIs) and 14 CMS Hospital-Acquired Conditions (HACs), for patients discharged from 45 freestanding pediatric hospitals between January 1, 2016 and December 31, 2019. Nine hospitals with only a single year of data were excluded. The seven PDIs and 14 HAC measures evaluated are summarized in Table 1.19,20

Table 1:

AHRQ PDI and CMS HAC measures evaluated.

| Measure | Source | Included in final analysis |
|---|---|---|
| Accidental puncture or laceration rate | AHRQ PDI | * |
| Central-venous catheter (CVC)-related bloodstream infection rate | AHRQ PDI | * |
| Iatrogenic pneumothorax rate | AHRQ PDI | * |
| Neonatal bloodstream infection rate | AHRQ PDI | * |
| Perioperative hemorrhage or hematoma rate | AHRQ PDI | * |
| Postoperative respiratory failure rate | AHRQ PDI | * |
| Postoperative sepsis rate | AHRQ PDI | * |
| Air embolism rate | CMS HAC | |
| Blood incompatibility rate | CMS HAC | |
| Catheter-associated urinary tract infection (CAUTI) rate | CMS HAC | * |
| Deep vein thrombosis and pulmonary embolism following certain orthopedic procedures rate | CMS HAC | |
| Falls and trauma rate | CMS HAC | * |
| Foreign object retained after surgery rate | CMS HAC | * |
| Manifestations of poor glycemic control rate | CMS HAC | * |
| Iatrogenic pneumothorax with venous catheterization rate | CMS HAC | |
| Pressure ulcer stages III & IV rate | CMS HAC | * |
| Surgical site infection (SSI), mediastinitis, following coronary artery bypass graft (CABG) rate | CMS HAC | * |
| Surgical site infection (SSI) following cardiac implantable electronic device (CIED) rate | CMS HAC | * |
| Surgical site infection (SSI) following bariatric surgery for obesity rate | CMS HAC | * |
| Surgical site infection (SSI) following certain orthopedic procedures rate | CMS HAC | * |
| Vascular catheter-associated infection rate | CMS HAC | |

Abbreviations: AHRQ – Agency for Healthcare Research and Quality; PDI – Pediatric Quality Indicators; CMS – Centers for Medicare and Medicaid Services; HAC – Hospital-Acquired Conditions

We selected this set of 21 indicators because they are readily available in PHIS and frequently used to measure the safety of pediatric care, an important metric of quality care delivery.21-23 The set does not form a complete inventory of safety and quality measures used in pediatric hospitals, but it is a convenient and relevant measure set that is specific to pediatric inpatient safety while also covering multiple aspects of hospital operations and delivery of care.

Both the PDIs and HACs include measures of iatrogenic pneumothorax and vascular catheter-associated infection; we included only the PDI measures for these two indicators because we judged them to be more specific than the HACs, which include all inpatient discharges in the denominator. As a sensitivity analysis, we substituted the corresponding HAC indicators for these measures.

We assessed data quality for each measure and found that three HAC measures (air embolism, blood incompatibility, and deep vein thrombosis/pulmonary embolism) were so infrequent that nearly all hospitals had rates of zero; we therefore excluded these measures from analysis. The specific surgical site infections (SSIs) were also rare, so these four indicators were pooled into a single SSI measure. The remaining set of 13 measures included in the final analysis consisted of seven PDIs and six HACs (Table 1).

We performed an initial analysis stratified by calendar year and observed broadly similar patterns across years. There was some observed temporal variability and instability of rates for measures involving uncommon events. We therefore pooled the data into a single forty-eight-month period, summing the numerators and denominators of each measure over the time period of the study. Data for individual years contained excess event rates of zero, but pooling across the study period resolved this issue.
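For concreteness, the pooling step can be expressed in a few lines of code. The following is an illustrative sketch only (our analysis used SAS; this Python version assumes a long-format table with hypothetical columns hospital, measure, numerator, and denominator):

```python
# Illustrative sketch of the pooling step described above; not the SAS code
# used in the analysis. Column names are assumed for illustration.
import pandas as pd

def pool_rates(df: pd.DataFrame) -> pd.DataFrame:
    """Sum numerators and denominators over all study years, then compute a
    single pooled rate per 10,000 for each hospital and measure."""
    pooled = (
        df.groupby(["hospital", "measure"], as_index=False)
          [["numerator", "denominator"]]
          .sum()
    )
    pooled["rate_per_10k"] = 10_000 * pooled["numerator"] / pooled["denominator"]
    return pooled
```

Summing counts before forming rates, rather than averaging yearly rates, weights each year by its encounter volume and avoids the unstable zero rates seen in individual years.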

Statistical analysis

We performed all analyses using SAS version 9.4 (SAS Institute, Cary, NC). We calculated descriptive and graphical summary statistics for each indicator to assess normality and identify outliers. The 13 indicators exhibited strong right skew, so we used log-transformed rates per 10,000 throughout our analysis. Rates of zero were imputed with the negative log of the maximum rate across all hospitals for a given measure, resulting in an approximately symmetric log-transformed distribution. We examined all pairwise bivariate scatterplots and calculated the 13x13 Pearson correlation matrix. We performed principal components analysis (PCA) with orthogonal rotation and used a scree plot to determine the number of components retained.24 After generating the principal components, we examined patterns of variable loadings.
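To make the transformation and decomposition steps concrete, the sketch below re-expresses them in Python (illustrative only; the analysis itself was performed in SAS, the orthogonal rotation step is omitted for brevity, and `rates` is assumed to be a hospitals-by-measures table of rates per 10,000):

```python
# Illustrative sketch of the log transformation, zero imputation, and PCA
# steps described above; not the SAS code used in the analysis.
import numpy as np
import pandas as pd

def log_transform(rates: pd.DataFrame) -> pd.DataFrame:
    """Log-transform rates per 10,000; zero rates are imputed with the
    negative log of the measure's maximum across hospitals."""
    out = {}
    for col in rates.columns:
        x = rates[col]
        logged = np.log(x.replace(0, np.nan))        # log of nonzero rates
        out[col] = logged.fillna(-np.log(x.max()))   # impute zeros
    return pd.DataFrame(out)

def pca_from_correlation(log_rates: pd.DataFrame):
    """PCA via eigendecomposition of the Pearson correlation matrix.
    (The orthogonal rotation applied in the paper is omitted here.)"""
    corr = log_rates.corr()                           # 13x13 Pearson matrix
    eigvals, eigvecs = np.linalg.eigh(corr.values)    # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                 # reorder: largest first
    return eigvals[order], eigvecs[:, order]
```

A scree plot of `eigvals` (or the Kaiser rule, eigenvalue > 1) then guides how many components to retain.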

We ranked hospitals by performance in five ways: (1) on each measure individually; (2) based on the median and (3) the geometric mean of rankings across all measures; (4) using a summary score based on the first principal component extracted from PCA; and (5) using a summary score based on the weighted average of the retained principal components, weighted by the variance contribution of each retained component. Lower numeric rankings indicate lower incidence rates for a given measure, i.e., better performance, ranging from the best rank of 1 to the poorest rank of 36. Similarly, the scores based on the first principal component and the weighted average across principal components were ranked from smallest to largest. We used descriptive statistics to assess variation and concordance of hospital rankings based on these different summary approaches. Hospital identities and rankings are blinded using an alphabetic code.
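The five ranking approaches can likewise be sketched in code (again illustrative, building on the hypothetical objects from the previous snippet; note that principal component scores have an arbitrary sign and would need to be oriented so that larger scores correspond to higher event rates):

```python
# Illustrative sketch of the five ranking approaches; builds on log_transform
# and pca_from_correlation above. Lower rank = better performance.
import pandas as pd
from scipy.stats import gmean

def summary_rankings(log_rates, eigvals, eigvecs, n_retain=5):
    ranks = log_rates.rank(axis=0, method="min")               # (1) per-measure ranks

    median_rank = ranks.median(axis=1).rank(method="min")      # (2) median of ranks
    gmean_rank = (pd.Series(gmean(ranks, axis=1), index=ranks.index)
                    .rank(method="min"))                       # (3) geometric mean

    z = (log_rates - log_rates.mean()) / log_rates.std()       # standardize measures
    scores = z.values @ eigvecs[:, :n_retain]                  # component scores
    pc1_rank = pd.Series(scores[:, 0], index=ranks.index).rank(method="min")  # (4)

    w = eigvals[:n_retain] / eigvals[:n_retain].sum()          # variance weights
    wavg_rank = pd.Series(scores @ w, index=ranks.index).rank(method="min")   # (5)

    return pd.DataFrame({"median": median_rank, "geo_mean": gmean_rank,
                         "pc1": pc1_rank, "weighted_pc": wavg_rank})
```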

RESULTS

We collected the AHRQ PDI and CMS HAC safety measures under consideration from 36 hospitals within the PHIS network. Boxplots of each log-transformed measure are provided in Figure 1.

Figure 1:

Distribution of log-transformed AHRQ PDI and CMS HAC measures across 36 hospitals in the PHIS database between January 2016 and December 2019.

Abbreviations: AHRQ – Agency for Healthcare Research and Quality; PDI – Pediatric Quality Indicators; CMS – Centers for Medicare and Medicaid Services; HAC – Hospital-Acquired Conditions; PHIS – Pediatric Health Information System; CVC – central-venous catheter; CAUTI – catheter-associated urinary tract infection

Hospital performance rankings for each of the 13 individual indicators, as well as the summary measures (geometric mean, median, minimum, and maximum of ranks), are found in Supplementary Table 1. A rank of 1 indicates the lowest incidence rate for a given measure (best performance; green shading), while a rank of 36 indicates the highest incidence rate (worst performance; red shading). Figure 2 contains boxplots of within-hospital performance rankings for the 13 indicators. All hospitals demonstrated a wide range of ranks across individual measures, with the smallest observed range of rankings being 18 (8 to 26) for Hospital ‘S.’ The median interquartile range of performance rankings across all hospitals was 13. Almost all hospitals performed poorly on at least one measure: Hospital B performed well on average but ranked 31 of 36 for surgical site infections, and Hospital A ranked near the top on most measures but fell in the lower half of rankings for perioperative hemorrhage rate. Nearly all hospitals (32 of 36; 89%) were ranked in the bottom quartile for at least one measure, and 31% ranked worst on at least one measure.

Figure 2:

Between- versus within-hospital variation in performance rankings across 13 AHRQ PDI and CMS HAC measures for 36 PHIS hospitals between January 2016 and December 2019. Lower rank indicates better performance.

Abbreviations: AHRQ – Agency for Healthcare Research and Quality; PDI – Pediatric Quality Indicators; CMS – Centers for Medicare and Medicaid Services; HAC – Hospital-Acquired Conditions; PHIS – Pediatric Health Information System

We calculated the 13x13 Pearson correlation matrix of the log-transformed measures. Among 78 possible pairwise correlations, 23 (29%) were below 0.10 in absolute magnitude and only 6 (8%) were greater than 0.40 in absolute magnitude. The highest observed correlation was 0.53 between accidental puncture or laceration rate and pressure ulcer rate. We observed moderate positive correlations between the following measures: accidental puncture or laceration rate with perioperative hemorrhage rate (r=0.48) and catheter-associated urinary tract infection (CAUTI) rate (r=0.47); post-operative sepsis rate with post-operative respiratory failure (r=0.46); pressure ulcer rate with CAUTI rate (r=0.48); and falls or trauma rate with manifestations of poor glycemic control rate (r=0.41). We observed moderate correlation between accidental puncture or laceration rate and most other study measures (median r=0.33, range [−0.34, 0.53]).
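The summary of the correlation matrix reported here reduces to a tabulation over its unique off-diagonal entries; a minimal sketch (assuming `corr` is the 13x13 matrix as a NumPy array, e.g. `log_rates.corr().values` from the earlier snippet):

```python
# Illustrative sketch: tabulate the 78 unique pairwise correlations among
# 13 measures by absolute magnitude, as summarized in the text.
import numpy as np

def summarize_correlations(corr: np.ndarray) -> dict:
    iu = np.triu_indices_from(corr, k=1)   # indices above the diagonal
    r = np.abs(corr[iu])                   # |r| for each unique pair
    return {
        "n_pairs": int(r.size),            # 13 * 12 / 2 = 78
        "below_0.10": int((r < 0.10).sum()),
        "above_0.40": int((r > 0.40).sum()),
        "max_abs_r": float(r.max()),
    }
```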

Using PCA, we identified five distinct variance components with eigenvalues greater than 1 and report their variable loadings in Table 2. These five components cumulatively accounted for 68% of the variability in the data, with the first component accounting for 25% of the observed variation. The factor loadings showed relatively little overlap across the five principal components, suggesting multi-dimensional underlying constructs, with a broad set of measures loading onto a number of components. The three highest loadings for the first principal component were perioperative hemorrhage or hematoma rate, central venous catheter (CVC)-related bloodstream infection rate, and CAUTI rate, suggesting underlying drivers related to nursing care, infection control, and surgical care. The highest loadings for the second principal component were manifestations of poor glycemic control rate and falls or trauma rate, perhaps measuring facets of nursing care. The highest loadings for the third principal component were neonatal bloodstream infection rate and iatrogenic pneumothorax rate, seeming to involve measures of infection control and surgical safety. The final two components comprised additional measures of safe surgical care, including postoperative respiratory failure, foreign object retained, postoperative sepsis, and surgical site infection.

Table 2:

Principal components analysis (PCA) results and factor loadings for 13 AHRQ PDI and CMS HAC indicators across 36 PHIS hospitals between January 2016 and December 2019. Lightly shaded loadings are greater than or equal to 0.25 in absolute value and darkly shaded loadings are greater than or equal to 0.5. Borders around factor loadings indicate which factor each measure loaded onto after orthogonal rotation.

|   | PC1 | PC2 | PC3 | PC4 | PC5 |
|---|---|---|---|---|---|
| Summary | | | | | |
| Eigenvalue | 3.29 | 1.77 | 1.43 | 1.26 | 1.08 |
| Standard deviation | 1.81 | 1.33 | 1.20 | 1.12 | 1.04 |
| Proportion of variance | 0.25 | 0.14 | 0.11 | 0.10 | 0.08 |
| Cumulative proportion | 0.25 | 0.39 | 0.50 | 0.60 | 0.68 |
| Factor loadings | | | | | |
| Perioperative hemorrhage or hematoma rate (PDI) | 0.34 | −0.05 | −0.07 | 0.15 | −0.18 |
| Accidental puncture or laceration rate (PDI) | 0.25 | −0.10 | −0.02 | −0.06 | 0.11 |
| CVC-related bloodstream infection rate (PDI) | 0.34 | −0.05 | −0.27 | 0.04 | −0.03 |
| Pressure ulcer rate (HAC) | 0.26 | 0.11 | 0.17 | −0.07 | −0.09 |
| CAUTI rate (HAC) | 0.28 | 0.32 | 0.15 | −0.02 | 0.12 |
| Glycemic control rate (HAC) | 0.03 | 0.56 | 0.10 | 0.05 | 0.16 |
| Falls/trauma rate (HAC) | −0.02 | −0.37 | 0.01 | −0.06 | 0.10 |
| Neonatal bloodstream infection rate (PDI) | −0.11 | 0.09 | 0.59 | 0.01 | −0.10 |
| Iatrogenic pneumothorax rate (PDI) | 0.02 | −0.05 | 0.35 | 0.12 | −0.08 |
| Postoperative respiratory failure rate (PDI) | 0.06 | 0.08 | 0.09 | 0.54 | −0.09 |
| Foreign object retained rate (HAC) | −0.12 | −0.18 | 0.16 | −0.33 | 0.07 |
| Postoperative sepsis rate (PDI) | −0.15 | −0.17 | 0.12 | 0.38 | 0.35 |
| Surgical site infection rate (HAC) | −0.04 | 0.05 | −0.13 | −0.04 | 0.77 |

Abbreviations: AHRQ – Agency for Healthcare Research and Quality; PDI – Pediatric Quality Indicators; CMS – Centers for Medicare and Medicaid Services; HAC – Hospital-Acquired Conditions; PHIS – Pediatric Health Information System; PC – principal component; CVC – central-venous catheter; CAUTI – catheter-associated urinary tract infection

The hospital rankings based on the first principal component, as well as the weighted average across all five retained components, are reported in Supplementary Table 1. Rankings based on either the first principal component or all five components are positively correlated with, but not identical to, those produced by the summary measures (geometric mean or median). For example, Hospital N would be considered average by either the median or mean of its ranks (median rank=12.3; mean=15), yet it ranks 4 of 36 using the first principal component and is the top-performing hospital when ranked across all five components. Conversely, Hospital A would be considered exemplary by the median or mean (median=5; mean=5.6), yet it is ranked in the bottom third using the first component and in the middle of the rankings across all five components (ranked 23 and 11 of 36, respectively). Rankings by geometric mean or median were highly correlated (r=0.90; p<0.001). Ranking by the first principal component was moderately correlated with ranking by the geometric mean (r=0.67; p<0.001) or median (r=0.63; p<0.001). Similarly, ranking by the weighted average across five principal components was moderately correlated with ranking by the geometric mean (r=0.58; p<0.001) or median (r=0.56; p<0.001). The two composite methods for ranking based on the principal components demonstrated a moderately strong positive correlation (r=0.72; p<0.001).
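These concordance comparisons amount to pairwise correlations between ranking columns; a minimal sketch (assuming the `summary_rankings` output from the earlier snippet):

```python
# Illustrative sketch: pairwise Pearson correlations (with p-values) between
# the summary ranking methods, as reported in the text.
from itertools import combinations
from scipy.stats import pearsonr

def rank_concordance(rankings):
    results = {}
    for a, b in combinations(rankings.columns, 2):
        r, p = pearsonr(rankings[a], rankings[b])
        results[(a, b)] = (round(float(r), 2), float(p))
    return results
```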

Hospital rankings and PCA results were similar in the sensitivity analysis substituting the corresponding HAC measures (iatrogenic pneumothorax with venous catheterization and vascular catheter-associated infection) for the AHRQ PDI measures.

DISCUSSION

Within the 13 measures included in our analysis, PCA identified five mutually orthogonal variance components, of which the primary component retained only 25% of the observed variation. The first component includes a broad set of measures representing an underlying factor comprising accidental infections, nursing, and surgical care, all important facets of safe, quality care. However, relying solely on this component would leave three-fourths of the variation in this collection of measures unexplained, over-simplifying measurement of hospital safety performance. Indeed, relying solely on this component for performance ranking would fail to account for performance on other measures of nursing and perioperative care. Even when considering all five major components, only 68% of the variance is explained, leaving roughly a third of the variation in hospital performance unaccounted for. Additionally, the broad set of measures loading on each factor, and the distribution of variance across multiple components, demonstrate the multifactorial nature of patient safety. While we postulate underlying drivers for each component, the five retained variance components do not have definitive interpretations, further emphasizing the complexity of patient safety as a measurement construct.

We observe substantial within-hospital variation across the 13 measures collected. No hospital performs consistently well across all measures. This is consistent with the findings of Lilford and Pronovost, who studied the use of hospital mortality rates to assess hospital performance and concluded that “Little or no correlation exists between how well a hospital performs on one standard of safe and effective care and how well it performs on another; differences in the quality of care within hospitals are much greater than differences between hospitals.”25 Our findings affirm this statement and add methodological rigor to its supporting evidence. Within-hospital variation across rankings appears large relative to between-hospital differences in safety indicators. This poses a measurement challenge and calls for new approaches to assess hospital safety and quality in light of the multiple factors that evidently underlie the quantitative data on patient safety, hospital management, operations, and quality of care.

Our analysis suggests that no single composite measure is likely to capture all of the variance observed across a broad range of hospital safety outcomes. Put another way, any summary measurement for hospitals will necessarily retain some information from the available safety data while reducing or ignoring other information. There are many possible approaches to constructing a composite or summary score for pediatric hospital safety, and any particular approach, even one as objective and data-driven as PCA, will fail to faithfully represent the entire variation in safety indicators across a broad sample of hospitals. This variation is multifactorial and thus resists efforts to simplify hospital performance to a single unidimensional score. Our approach using a variance-weighted average across all five major components represents a compromise that integrates most common safety measures with a minimum of information loss.

Our findings have important implications for how we use safety indicators and other available data to rank hospital performance. Hospital rankings show sufficiently large variation across individual measures that we cannot confidently identify any particular hospital as a consistently high or low performer. Applied to patient safety writ large, this variation across measures weakens the conceptual premise of quality measurement, namely to study variation in outcomes with the goal of identifying institutions that have achieved better outcomes through better practices. Ranking itself is an inherently arbitrary activity. Absolute differences in rates between the ‘best’ and ‘worst’ hospital may be small and within margins of statistical uncertainty, but rank-ordering establishes definitive ‘winners’ and ‘losers’. This is particularly true for rare events such as manifestations of poor glycemic control, where rates ranged from 0 per 10,000 to 0.65 per 10,000 (based on n=1 event). Similarly, the measure for foreign object retained ranged from 0 to 1.81 per 10,000 (n=6 events). Perhaps rare or so-called ‘never’ events should be held in a separate category of measures for the purposes of ranking.

Composite indicators are intuitively appealing because they simplify an otherwise complex quality of care construct; however, they risk over-simplifying and thus destroying important information.15 Our results provide quantitative evidence that we should not reduce patient safety to a single measure. A logical consequence is that no pediatric hospital can claim to be ‘the safest’. Indeed, the very use of the word ‘safest’ implies a single linear ordering of ‘safe’ from lowest to highest. Our results suggest that the existence of such an ordering is fallacious.

We acknowledge some important limitations to our study. An underlying assumption of PCA is that data follow a multivariate normal (MVN) distribution. There is some evidence of mild departure from MVN; however, PCA is routinely used under these conditions for descriptive purposes, and its sensitivity to the MVN assumption is not well understood.26 We did not risk-adjust these safety measures at the patient level because we extracted aggregate hospital performance data, and indeed not all measures have risk adjustment models available.27 There may be some differences in the relationships between the measures that we studied and their risk-adjusted counterparts.

The sample of selected measures is a subset of all available measures related to patient safety and therefore does not reflect all aspects of hospital safety or quality. However, adding more measures to the set that we studied would likely add to within-hospital variation and the overall complexity of the covariance structure. The sample of freestanding hospitals that contribute data to PHIS, although broadly representative of pediatric hospitals across the U.S., may not be representative of all freestanding pediatric hospitals. We examined the temporal stability of outcomes carefully, but our decision to aggregate data over the four-year study period ignores any systematic change in hospital performance over time. This leads to several further questions, for example whether these relationships are stable over longer periods of time or across a different sample of hospitals. Although we question whether univariate performance measures are faithful representations of hospital performance on safety, we do not offer a suitable alternative if the goal is benchmarking or performance ranking. Instead, we renew calls for more methodological research on combining multiple correlated safety measures,15 and we see this as a natural application of analytic methods to the field of quality and safety measurement. We believe further study of quantitative relationships between measures, and a deeper understanding of the latent factors underlying the quantitative data, will be increasingly valuable as hospitals are held accountable for the safety and quality of the care they provide.

CONCLUSIONS

This study demonstrates the multifactorial nature of patient safety using a set of 13 commonly-used safety indicators across 36 tertiary care children’s hospitals in the United States. Our findings indicate there is no unique ordering of hospitals based on these measures, and thus no pediatric hospital can claim to be ‘the safest.’ This raises further questions about appropriate methods to rank hospitals by safety, renewing calls for methodologic work in this area to identify intuitive and meaningful approaches to hospital performance ranking.

Supplementary Material

Supplementary Table 1

Acknowledgement

This study was funded by the Agency for Healthcare Research and Quality (grant number: R01 HS026246 01A1).

Conflicts of Interest and Source of Funding:

This study was funded by the Agency for Healthcare Research and Quality (grant number: R01 HS026246 01A1). No other conflicts of interest were declared.

Data availability statement:

No data are available. Data were acquired through the Pediatric Health Information System, which prohibits data sharing outside of its member hospitals.

REFERENCES

1. Institute of Medicine Committee on Quality of Health Care in America. To Err Is Human: Building a Safer Health System. Washington, D.C.: National Academy Press; 2000.
2. Institute of Medicine Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, D.C.: National Academy Press; 2001.
3. National Strategy for Quality Improvement in Health Care: 2011 Annual Report to Congress. 2011.
4. Masica AL, Richter KM, Convery P, Haydar Z. Linking joint commission inpatient core measures and national patient safety goals with evidence. Proc (Bayl Univ Med Cent). 2009;22(2):103–111.
5. Milstein A, Galvin RS, Delbanco SF, Salber P, Buck CR Jr. Improving the safety of health care: the leapfrog initiative. Eff Clin Pract. 2000;3(6):313–316.
6. Cochrane BS, Hagins M Jr, Picciano G, et al. High reliability in healthcare: creating the culture and mindset for patient safety. Healthc Manage Forum. 2017;30(2):61–68.
7. Tamuz M, Harrison MI. Improving patient safety in hospitals: contributions of high-reliability theory and normal accident theory. Health Services Research. 2006;41(4p2):1654–1676.
8. Chassin MR, Loeb JM, Schmaltz SP, Wachter RM. Accountability measures — using measurement to promote quality improvement. New England Journal of Medicine. 2010;363(7):683–688.
9. The Joint Commission: Over a Century of Quality and Safety. The Joint Commission; 2015.
10. Marjoua Y, Bozic KJ. Brief history of quality movement in US healthcare. Current Reviews in Musculoskeletal Medicine. 2012;5(4):265–273.
11. Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Affairs (Project Hope). 2008;27(3):759.
12. Madsen LB. Data-Driven Healthcare: How Analytics and BI Are Transforming the Industry. John Wiley and Sons, Inc.; 2014.
13. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. New England Journal of Medicine. 2007;356(5):486–496.
14. Silver N. The Signal and the Noise: Why So Many Predictions Fail--but Some Don't. New York: Penguin Press; 2012.
15. Profit J, Typpo KV, Hysong SJ, Woodard LD, Kallen MA, Petersen LA. Improving benchmarking by using an explicit framework for the development of composite indicators: an example using pediatric quality of care. Implementation Science. 2010;5(1):13.
16. Austin J, Jha A, Romano P, et al. National hospital ratings systems share few common scores and may generate confusion instead of clarity. Health Affairs. 2015;34(3):423–430.
17. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals — the Hospital Quality Alliance program. New England Journal of Medicine. 2005;353(3):265–274.
18. Narus SP, Srivastava R, Gouripeddi R, et al. Federating clinical data from six pediatric hospitals: process and initial results from the PHIS+ Consortium. AMIA Annual Symposium Proceedings. 2011;2011:994–1003.
19. Agency for Healthcare Research and Quality. Pediatric Quality Indicators Technical Specifications. https://www.qualityindicators.ahrq.gov/Modules/PDI_TechSpec_ICD10_v2020.aspx. Published 2020. Updated July 2020. Accessed March 9, 2021.
20. Centers for Medicare and Medicaid Services. Hospital-Acquired Condition (HAC) Reduction Program. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/HAC/Hospital-Acquired-Conditions. Published 2020. Accessed March 9, 2021.
21. Scanlon MC, Harris JM, Levy F, Sedman A. Evaluation of the Agency for Healthcare Research and Quality pediatric quality indicators. Pediatrics. 2008;121(6):e1723.
22. Phipps AR, Paradis M, Peterson KA, et al. Reducing serious safety events and priority hospital-acquired conditions in a pediatric hospital with the implementation of a patient safety program. Jt Comm J Qual Patient Saf. 2018;44(6):334–340.
23. Stockwell DC, Landrigan CP, Schuster MA, et al. Using a pediatric trigger tool to estimate total harm burden hospital-acquired conditions represent. Pediatr Qual Saf. 2018;3(3):e081.
24. Pett MA. Making Sense of Factor Analysis: The Use of Factor Analysis for Instrument Development in Health Care Research. Thousand Oaks, Calif.: Sage Publications; 2003.
25. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won't go away. BMJ. 2010;340(7753):955–957.
26. Jolliffe IT. Principal Component Analysis. 2nd ed. New York: Springer; 2002.
27. Agency for Healthcare Research and Quality. Risk Adjustment Coefficients for the PDI Version 4.3. 2011.
