Abstract
The availability and breadth of collected data has grown exponentially in pediatric critical care medicine. This growth is driven by the practitioners' desire to understand and improve practice. In this manuscript, the author details the registry design factors that must be considered to meet quality improvement and safety needs in pediatric critical care units. The challenges of maintaining a high-standard database, and examples of measuring health care delivery performance using VPS registry data, are provided.
Keywords: children, quality improvement, electronic databases, intensive care
Introduction
Over the past decade the number of medical databases and patient registries has grown tremendously, and the availability and breadth of data collected in pediatric critical care medicine has grown exponentially.1 2 3 4 The Virtual Pediatric Systems (VPS, LLC http://www.myvps.org/) database alone now holds data on more than 1,000,000 pediatric ICU admissions from more than 130 hospitals, addressing all aspects of pediatric critical care medicine. Many factors have driven this growth. Foremost among these are practitioners' desire to understand and improve our practice and an awareness that we must be able to describe, define, and measure what we do. This has also led to the growing use of databases and registries to serve the needs of clinical research through cohort discovery, epidemiologic research, retrospective analysis, prospective trials, and registry trials.1 2 5 6 A further factor driving this development has been a welcome and intense focus on quality improvement and patient safety. With the knowledge that no process can be improved without measurement and that safety requires objective measurement, enthusiasm for collecting data on specific patient populations with scientific rigor has been renewed. Furthermore, public awareness of and demand for quality care and accountability have driven the need for databases and patient registries. Finally, we must move from understanding our patients one at a time to aggregating data on patient characteristics, therapies, and outcomes, moving from anecdote to observation-based understanding of clinical care.2 5 6 All of this has been made possible by the information revolution, which enables the capture, storage, management, and analysis of ever-increasing amounts of quality data. Many databases and patient registries already exist to serve many purposes (Table 1) and meet the needs of pediatric critical care. Two recently published papers provide a useful survey.1 4
Table 1. Purposes of registries.
| 1. Define the natural history of diseases |
| 2. Define and monitor care delivery processes |
| 3. Determine efficiency and efficacy of care |
| 4. Monitor quality and safety |
| 5. Identify clinical needs |
| 6. Research—hypothesis generation, cohort discovery |
For the purposes of this paper, databases are defined as any organized collection of clinical data, whereas registries are considered a more refined subset: databases created using observational study methods to capture a well-defined, uniform series of data elements from a specific patient population, collected by rigorous methods for a predefined clinical, scientific, quality, or political purpose.7 Registries can be used for many purposes (see Table 1).
Successful registry design must consider many factors (Table 2). First, the mission of the registry, the reason for its existence, and its purposes should be clearly defined and understood. Recently, the importance of well-designed technology has been increasingly emphasized. Ease of data collection, which avoids placing an undue burden on data collectors and busy clinicians, has stimulated the adoption of automated digital data collection. Creating and maintaining an effective registry therefore requires a critical synergy among the ultimate users of the registry and its data, subject matter experts, designers, data architects, engineers, programmers, and data analysts with statistical expertise, all of whom must share a clear understanding of the registry's purpose. The best registries have all of these functions aligned for a common purpose rather than being supported by a daisy chain of designers, programmers, clinicians, and users who do not necessarily have aligned incentives for the success of the registry. A further requirement is a well-defined ontology and clear nomenclature to which contributors to the registry adhere. To ensure continued participation in the registry, incentives and rewards need to be aligned. Researchers may seek to develop a registry for research purposes and therefore may request participation from many others. Although this may certainly expand our knowledge and publications (and the reputation of the authors), unless all contributors benefit in some way, the longevity of the registry is doubtful and it may not serve its ultimate purpose. Unless all benefit from the registry in some fashion, for example, by receipt of useful information, data analysis, benchmarking, quality improvement support, academic credit, research collaboration, and education, and they receive positive regard for their participation, the registry may falter. Collaboration and alignment are keys to the success of multisite registries.
Table 2. Essential elements in registry design.
| • Clear concept of purpose and strategy |
| • State-of-the-art engineering |
| • Clear ontologies and data definitions |
| • Quality |
| • Security |
| • Flexibility and agility |
| • Collaboration among providers and programmers |
| • Empowered user groups |
| • Effective analytics |
It seems obvious that registries must be driven by their users, who understand the questions being addressed and the needs of the clinical setting. Often, although conceived by clinicians, registries are designed and run by data collection companies. Unfortunately, the gulf between engineers and clinicians is wide; collaboration and understanding are critical to success. User groups must not only control the overall strategy but also be involved in the design, customization, and performance of the technical aspects that support the database. This also implies that registry users optimally will have access to analytic tools that they can use at their convenience in an ad hoc fashion, tools that are easy to use but sophisticated in design. The data collection and management tools must be flexible, customizable, and agile, enabling timely response to the users' needs. A well-experienced, dedicated clinical advisory group, available to users and engineers, is also extremely helpful to ensure the success of patient registries and ambitious data collection initiatives.
Of course, none of this design is of value if quality is not built into the registry from the beginning.8 There must be high quality in programming and engineering, performed according to well-accepted standards, well documented, and highly integrated with the clinical and quality needs. The days of amateur registry design are over; registries are expensive to design and build, and quality considerations are important to avoid failure to meet the need for which they were created. Quality is also necessary in data collection: the data collectors must be well trained and the tools well designed. Inter-rater reliability testing and data quality checking are necessary to ensure high-quality data abstraction. There must also be high quality in data curation and data security. Registries often contain Protected Health Information (PHI) and must therefore meet Health Insurance Portability and Accountability Act (HIPAA) requirements for security to protect patient, user, and hospital confidentiality. Security of the data from inadvertent exposure is an obvious requirement; security from data loss is also crucial. All collected data must be available for analysis and comparison; incomplete data lead to an incomplete mission for the registry. Data quality and governance are central to good registry functioning. Without them, research, quality improvement, benchmarking, and other comparisons are unreliable, and the tremendous effort undertaken to collect data may be wasted.
To serve research needs, registries must collect data with research rigor. Although registry research is often considered retrospective, it is more accurately defined as prospective data collection for secondary analysis; all research uses retrospective data analysis in the sense that the data are analyzed after they are collected. If the potential research purposes of the registry are known, or the registry's data collection can be altered to subsequently meet research data needs, the quality of the data collection can rival that of research studies. Indeed, there is little excuse for poor-quality data collection to support other purposes of registry use in any case; all data should approach research data quality standards.
These considerations for research registries are just as essential for registries designed to meet quality improvement and safety needs in pediatric critical care units. The quality improvement process essentially requires measuring a process, planning improvement, intervening, observing the outcomes of the intervention, and repeating the cycle. Continuous quality improvement is exemplified by the PDSA (plan-do-study-act) quality improvement cycle.9 10 Data registries designed to improve quality should serve all levels of the quality cycle. Essentially this cycle is similar to research: observe a care process, develop a hypothesis, plan ways to test the hypothesis, intervene and study the process, and act on the analyzed results. Registries serving quality improvement in health care must be designed to collect data relevant to these processes. Similarly, to serve the Donabedian quality improvement process, registries should be designed to examine the Donabedian categories of structure, process, and outcomes.10 11 12 Systems thinkers are always curious about how to make things better; registries should provide the answers the curious thinkers in critical care are looking for. Potential contributions of critical care patient registries to the quality improvement process are listed in Table 3.
Table 3. Registries and quality.
| 1. Defining standards of care |
| 2. Determining best practices |
| 3. Performance measurement |
| 4. Benchmarking |
| 5. Support continuous process improvement (PDSA) cycles |
Abbreviation: PDSA, plan-do-study-act.
The Matter of Informed Consent and Institutional Review Board Approval
Collecting and using data for quality, safety, administrative, and care purposes (the Treatment, Payment, and Health Care Operations [TPO] HIPAA disclosure exemption) does not require informed consent from patients and is a recognized necessary practice permitting disclosure of PHI under HIPAA and Health Information Technology for Economic and Clinical Health (HITECH) policies. Using clinical data for research or publication purposes, whether identified, partially identified (a limited dataset), or anonymized (de-identified to safe harbor standards), requires approval of the Institutional Review Board (IRB) and possibly patient consent, whether the data are prospectively intended to be analyzed or the determination for research and analysis is made retrospectively. If patient data are collected prospectively for specifically known research purposes or with the intent to publish, IRB approval and patient consent will most likely be required. On the other hand, if registry data already exist and an investigator wishes to analyze these preexisting data, this secondary use of the data (use for a purpose other than the original purpose of the registry) for research or publication will also require the investigator's IRB approval. In the case of identifiable data containing PHI, the IRB may require patient consent or provide special instruction for the investigator. If the data are anonymized (total removal of all PHI elements, with no possibility of linking the research database to the original identified data) or de-identified (removal of most but not all elements of PHI, with a key back to the original data, so that the possibility exists to reidentify individual patients in the original database), it is likely that the investigator's IRB will waive the necessity for patient consent, deem the data nonhuman, and exempt the work from the necessity of IRB approval.
De-identification can be achieved by meeting one of two standards: the HIPAA safe harbor standard, in which 18 key elements of PHI are removed from the database, or an expert determination that there is a “very small” risk of identifying individual patients. Registry research requires understanding these requirements and working with the covered entity's IRB to ensure patient privacy. Patient registries that enable data research should not only require an investigator's IRB approval or waiver of approval, but should also have a system to review research data requests, manage the data appropriately, provide a data anonymizing process, and ensure that users of the data are compliant with these important federal regulations.
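The safe harbor approach described above amounts to stripping a fixed set of identifier fields from each record before release. A minimal sketch follows; the field names are illustrative assumptions, not any real registry schema, and a production implementation would also generalize ages over 89 and truncate ZIP codes as the rule requires.

```python
# Hypothetical direct-identifier fields to remove under the safe harbor
# standard; real schemas enumerate all 18 HIPAA identifier categories.
SAFE_HARBOR_FIELDS = {
    "name", "mrn", "address", "phone", "email", "ssn",
    "birth_date", "admission_date", "discharge_date", "zip_code",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed.

    The original record is left untouched so the covered entity can
    retain the identified source data under its own controls.
    """
    return {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}
```

For example, `deidentify({"mrn": "123", "age_years": 4, "prism_score": 12})` keeps only the clinical fields. Note that this is one-way removal; a de-identified (rather than anonymized) dataset would instead keep a separate, secured key linking back to the originals.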
Economic Viability
Data collection, analysis, and reporting cost money. Ensuring the success of data registries requires sufficient funds to design and develop appropriate software, ensure correct database design and maintenance, collect the data, clean the data and ensure data quality, analyze the data, prepare reports, and oversee research. All of these obviously require appropriately trained people and systems. Consideration of how the data are collected is critical. The first line of data quality relies on the data aggregators, who must understand the data definitions, the clinical scenarios, and the importance of the endeavor. Data registries should have a well-defined system for training the data aggregators. Clearly, professionals familiar with pediatric intensive care who have sufficient time to collect, aggregate, and ensure the quality of the data are required; in critical care this frequently requires a clinical ICU nurse. Physician collectors and others, if given sufficient time and training, may also aggregate and submit data. Providing inadequate data collecting resources threatens the entire project. Constrained resources that impact data quality may sabotage all of the downstream inferences from the data, which must rely on accurate, high-quality data. Clearly, this is an expensive resource that requires understanding before beginning a registry project. In fact, data aggregation may be the most expensive part of any registry project. Consider that if an average-experienced pediatric intensive care unit (PICU) nurse is paid $90,000 per annum and a nurse is required for each of 20 sites, there is already a $1.8 million annual investment in data collection alone.
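The back-of-envelope arithmetic above, using the salary and site figures from the text, can be made explicit:

```python
def annual_collection_cost(sites: int, nurse_salary: float = 90_000) -> float:
    """One dedicated data-collection nurse per site, at the assumed
    $90,000 per annum cited in the text."""
    return sites * nurse_salary

# 20 participating sites: 20 x $90,000 = $1,800,000 per year
cost = annual_collection_cost(20)
```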
Clearly, there is a need to lower this resource barrier to data entry, and there is great interest in using data already collected in systems such as electronic health records (EHRs) for registry participation. The immense expense of these systems interests hospitals in using them to decrease costs elsewhere, and many institutions participate in scores of registries, the data for which ultimately can be found in the EHRs. The temptation to send the data from the EHR to the data registries is great, but unfortunately it is a nontrivial process. Ensuring quality of the data often requires the judgment of a trained clinician to determine which EHR values are accurate. Programming the appropriate interfaces requires expertise; ensuring that the data received by the registry from electronic sources are accurate is a crucial task that must be well designed to preserve the validity of the inferences based on the patient registry. All of these issues have delayed the obvious but nontrivial adoption of data entry directly from hospital electronic systems.
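One common form of the EHR-interface validation described above is automated plausibility checking: values outside broad physiologic limits are routed to a trained clinician for review rather than silently accepted. A sketch, with assumed field names and limits (not a real registry interface):

```python
# Illustrative plausibility limits; a real interface would use
# age-specific limits agreed with the registry's clinical advisors.
PLAUSIBLE_RANGES = {
    "heart_rate_bpm": (20, 300),
    "systolic_bp_mmhg": (30, 250),
    "temperature_c": (25.0, 44.0),
}

def flag_implausible(record: dict) -> list:
    """Return the names of fields whose values fall outside plausible
    limits, so a clinician can adjudicate them before submission."""
    flagged = []
    for fieldname, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = record.get(fieldname)
        if value is not None and not (lo <= value <= hi):
            flagged.append(fieldname)
    return flagged
```

For example, a monitor artifact of 500 bpm would be flagged for review while a temperature of 37.0 passes through.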
Designing and building appropriate databases is also expensive, but in general a one-time capital expense. Depending on the requirements, this may range from a few hundred thousand to several million dollars of engineering, software and architecture design, implementation, and training. Database maintenance, customer support, training, data stewardship, and report preparation clearly require expertise, and their cost will depend on the specific purposes of the registry. These expenses can be spread over all of the users, a fact that makes registries with many participants more viable than small groups, as these costs are generally fixed. Data registry charges for participation are necessary to cover these costs, even without profit, and registry membership can range from as low as $10,000 per year to greater than $100,000 depending on the purposes, number of participants, and size of the registry. Obviously, the value of a registry depends on all of the appropriate support, and long-term registry viability depends on ensuring adequate resources to meet these needs.
What Data Are to Be Collected?
Obviously, the data to be prospectively collected must depend on the purpose of the database registry. If the purpose is primarily to serve quality improvement, the definition of quality and safety is important. The Institute of Medicine's (IOM) six aims for health care system improvement, increasingly adopted, provide a useful guide9:
Care is safe, avoiding harm.
Care is effective, achieving the desired outcomes and based on scientific evidence.
Care is patient centered, being respectful of and responsive to the patient's needs.
Care is timely.
Care is efficient, avoiding waste and unnecessary resource utilization.
Care is equitable, meeting the standard of justice.
Ideally, data relevant to these aims would be collected. If the purpose is to assess health care outcomes and compare them among participants, this benchmarking purpose would require specific defined outcomes, information relevant to achieving those outcomes, and some means of ensuring that comparisons are made between similar patient populations. To assess care processes, data capturing complications, duration of care, comorbidities, diagnostic information, and adherence to standards could be included in the registry. Of course, all registries should contain demographic data such as age, sex, race, and perhaps economic factors. Specific ICU quality indicators could include reintubation rate, readmission rate, length of stay, mortality, morbidities such as complications, occurrence of ventilator-associated pneumonia, central line-associated bloodstream infections, quality of pain control, and many others, including functional assessments such as the pediatric cerebral performance category, the pediatric overall performance category, and Pollack's more recent functional status score.12 13 14 15
The perfect database would contain all possibly related data, but there are other considerations. The expense and time of data collection and the increasing complexity of analysis argue for less comprehensive data collection. These factors require predetermination before initiating data collection. A collaborative group of intensivists and programmers should consider the need for every data element in the database and ask: what is the value of collecting these data? Another aspect of what to include is that not all sites have to capture all data. Consideration should be given to a core dataset, for example, demographics, diagnosis, severity of illness (SOI) scores, and dates of admission and discharge, to which modular data tools could be added to enhance data collection as desired by each individual participant. In addition, the ability to add modules that collect specific data for specific multiparticipant quality improvement projects would be beneficial. It is important to also note that, for clinical quality improvement, the best data registries are clinical rather than administrative.16 The availability of administrative databases has encouraged some to use them for quality improvement and care process definition, but without clinical information validated by clinicians and SOI adjustment, the inferences from administrative data must be used with caution. Combining administrative databases with clinical databases may give a new perspective on the efficacy and efficiency of the care process. The best data registries are agile and can be adapted to the changing quality landscape and meet unanticipated but important future needs.
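The core-plus-modules design described above can be sketched as a data structure: a small set of mandatory core fields, with optional project-specific modules attached only by sites that participate in them. The field names here are illustrative assumptions, not the VPS schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class CoreRecord:
    """Mandatory core dataset every participating site submits."""
    age_months: int
    sex: str
    diagnosis: str
    admission_date: date
    discharge_date: Optional[date]   # None while the patient remains admitted
    soi_score: float                 # e.g., a PRISM or PIM value
    # Optional add-on modules, keyed by project name, for sites that opt in.
    modules: dict = field(default_factory=dict)

rec = CoreRecord(30, "F", "bronchiolitis",
                 date(2015, 1, 2), date(2015, 1, 9), 4.2)
# A site in a hypothetical multicenter CLABSI project attaches its module:
rec.modules["clabsi_project"] = {"central_line_days": 5, "clabsi": False}
```

The design choice is that analyses of the core dataset never depend on which modules a site collects, which keeps cross-site benchmarking uniform while letting collection grow for specific quality projects.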
Comparing Quality and Benchmarking
Continuous quality improvement requires a standard against which to compare improvement, whether internal (comparing to oneself over time) or external (comparing performance among collaborating units). This commitment to continuously improving the care provided in intensive care units (ICUs) is the key ingredient of the benchmarking process. Benchmarking is not merely the comparison of one group's performance against another's or an accepted standard; rather, it necessitates collaboration and commitment to improve. The benchmarking process is inherently comparative and relies at least in part on a commitment to competitively improve performance. Participation in data registries can either be blinded, where sites see only their own data and de-identified comparisons, or transparent, where sites can see the results from all registry participants and know the identity of each unit. The latter clearly allows more collaboration among participants, revealing the best performers on the one hand and providing an incentive to improve on the other. Obviously, this implies continuous performance assessment and continuous data submission and analysis, which can be facilitated by patient data registries in critical care. This of course introduces the question of how to measure performance. Highlighting differences in care and outcome suggests opportunities for improvement.
The goal of PICU care is to achieve maximal survival safely and efficiently. Naively, one might assume that ICUs with the lowest mortality, shortest length of stay, and fewest complications are the "best" units. Alternatively, it may be that they are just not caring for many critically ill patients. Clearly, the degree of patient compromise on admission to the ICU must be taken into account when assessing the quality of care provided by each PICU. Sophisticated SOI algorithms are often based on predicted likelihood of death or risk of mortality (ROM) for a patient population (the average ROM of all patients in the population). Since the ground-breaking work of Knaus and Pollack in developing and evaluating SOI methodology, there have been many improvements tailored for specific age groups, populations, and practice types.17 18 19 20 Those most commonly used in pediatric critical care are the pediatric risk of mortality (PRISM) score (now calibrated by VPS) and the pediatric index of mortality (PIM) score (now version 3).21 From this methodology, it is also possible to predict ICU length of stay, likelihood of ventilation, and other potentially important outcomes of critical care.12 22 23 24 25 SOI-adjusted outcomes are useful in both research and quality improvement. Their utility comes from allowing standardized ratios of outcomes, such as the standardized mortality ratio (SMR) and the standardized length of stay ratio (SLOSR), to be measured. The ratio of the observed population mortality or average length of stay to the predicted population mortality or length of stay standardizes the outcome for population comparison. A ratio greater than one indicates performance worse than predicted, and less than one better. For example, if two units had the same 10% mortality rate but differing SMRs, the unit with the lower SMR would have the better performance.
Obviously, there is extensive science behind the generation of SOI scores; assessing their discrimination (represented by the area under the receiver operating characteristic curve) and calibration (most often done by Hosmer-Lemeshow methodology) is necessary to ensure the performance of these methodologies for quality purposes. Recently, a specific score for cardiac surgical patients, the pediatric index of cardiac surgical mortality (PICSIM), has been developed from the VPS database.26 To be useful, standardized ratios should be centered around one and have a useful dispersal within the groups being compared. In addition, SMRs should not be affected by age, sex, diagnostic category, etc. when applied to entire populations. For example, the SLOSRs for a population of PICUs are shown in Fig. 1; they are centered above and below one, and there is clear variation among ICUs. Also, SMRs determined using PICSIM show consistency across diagnostic groups for the top 25 diagnoses in cardiac ICUs (Fig. 2). Special mention should be made of the Pediatric Logistic Organ Dysfunction (PELOD) score, which correlates well with outcome, can be followed daily to indicate changes in the patient's condition,27 and is a useful addition to ICU registries.
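The discrimination statistic mentioned above, the area under the ROC curve, has a simple rank interpretation: the probability that a randomly chosen death received a higher predicted risk than a randomly chosen survivor. A brute-force sketch of that computation (illustrative only, and O(n²), so real evaluations use a sorted-rank formulation):

```python
def auc(predicted_risk, died):
    """Area under the ROC curve via the Mann-Whitney pairing: count
    death/survivor pairs in which the death had the higher predicted
    risk, giving half credit for ties."""
    pairs = concordant = 0.0
    for ri, di in zip(predicted_risk, died):
        for rj, dj in zip(predicted_risk, died):
            if di == 1 and dj == 0:      # one death paired with one survivor
                pairs += 1
                if ri > rj:
                    concordant += 1
                elif ri == rj:
                    concordant += 0.5
    return concordant / pairs
```

A model that assigns every death a higher risk than every survivor scores 1.0; a model whose predictions carry no information scores 0.5, the conventional floor against which SOI scores are judged.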
Fig. 1.

This plot represents standardized length of stay ratios (SLOSRs) for individual intensive care units (ICUs) caring for cardiac patients plotted from lowest on the left to highest. SLOSRs greater than 1.00 indicate a greater than average length of stay adjusted for severity of illness. This plot also shows SLOSRs for cardiac surgical patients cared for in mixed and dedicated cardiac ICUs (CICUs). The average for all ICUs was 1.04 and good discrimination among units can be seen. (Figure courtesy of Aaron Katch and Dr. Irina Kukuyeva, VPS analytic team.)
Fig. 2.

This graph demonstrates the standardized mortality ratios (SMRs) and 95% confidence intervals (CIs) for the top 25 diagnoses in cardiac surgical patients. Except for atrial septal defects (ASDs) and ventricular septal defects (VSDs) (which had no mortalities), the SMRs are not different from one demonstrating that performance is reliable across multiple cardiac diagnoses. AVC, atrioventricular canal; AVSD, atrioventricular septal defect; CAVSD, complete AVSD; PA, pulmonary artery; PDA, patent ductus arteriosus; RV, right ventricle; STS, Society of Thoracic Surgeons; TAPVC, total anomalous pulmonary venous connection; TCPC, total cavopulmonary connection; TOF, tetralogy of Fallot. (Figure courtesy of Aaron Katch and Dr. Irina Kukuyeva, VPS analytic team.)
If SMRs can be considered to compare efficacy, SLOSRs could be taken as a proxy for efficiency, and the two can be looked at together. VPS has developed a graphical presentation representing both of these on combined axes, thus separating ICUs into four quadrants by their efficiency and efficacy (Fig. 3). This represents a way of comparing quality outcomes for ICUs determined from an ICU registry.
Fig. 3.

This graph represents standardized mortality ratios (SMRs) for each intensive care unit (ICU) plotted against standardized length of stay ratios (SLOSRs), thus demonstrating combined measures of efficacy and efficiency. Clearly ICUs in quadrant 1 have both lower mortality and shorter length of stay (LOS) adjusted for severity of illness than those in the other quadrants. In addition, quadrant 4 has both high mortality and excessive LOS, indicating poorer performing units. VPS, Virtual Pediatric Systems. (Figure courtesy of Aaron Katch and Dr. Irina Kukuyeva, VPS analytic team.)
Registries have demonstrated utility in improving quality through collaborative quality projects and in providing data for research to improve our understanding of pediatric critical care. The VPS registry data have been used to publish more than 100 research and quality papers. Some of the questions addressed include the efficacy of high-frequency oscillatory ventilation (HFOV)28 and the volume-outcomes relationship, which asks whether ICUs that manage greater volumes have better outcomes.25 28 29 30 Another useful way of looking at the volume-outcomes relationship is the use of funnel plots,26 31 with outliers being clearly evident (Fig. 4). Recently, VPS data have been used to develop a unique, data-driven methodology for disaster triage.32 Finally, an extension of the use of ICU data to examine quality issues has recently been reported using the Society of Thoracic Surgeons (STS) and Pediatric Health Information System databases to demonstrate a relationship between high-quality outcomes, as measured by standardized mortality, and decreased cost of care, confirming what we believe: quality care costs less.33
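A funnel plot places each unit's SMR against its volume, with control limits that narrow as volume grows, so small units are allowed wider random variation before being called outliers. A sketch of 95% limits under a normal approximation in which the variance of an SMR around 1 is the reciprocal of expected deaths; this is one common construction, not necessarily the exact method of reference 31:

```python
import math

def funnel_limits(expected_deaths, z=1.96):
    """Approximate 95% control limits for an SMR at a given volume,
    expressed as expected deaths; limits tighten as volume grows."""
    half_width = z / math.sqrt(expected_deaths)
    return (max(0.0, 1 - half_width), 1 + half_width)

small_unit = funnel_limits(4)    # wide limits: roughly (0.02, 1.98)
large_unit = funnel_limits(100)  # narrow limits: roughly (0.80, 1.20)
```

An SMR of 1.5 would therefore be unremarkable for a 4-expected-death unit but a clear high-mortality outlier for a 100-expected-death unit, which is exactly the pattern of small-unit scatter and large-unit outliers visible in Fig. 4.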
Fig. 4.

A standard funnel plot demonstrating the relationship between the number of cases annually treated by an individual general intensive care unit (ICU) and the standardized mortality ratio (SMR). Outliers with worse performance are seen in the smaller ICUs and outliers with better performance are seen in the larger ICUs. CI, confidence interval; PRISM, pediatric risk of mortality. (Figure courtesy of Aaron Katch and Dr. Irina Kukuyeva, VPS analytic team.)
References
- 1. Bennett TD, Spaeder MC, Matos RI, et al. Existing data analysis in pediatric critical care research. Front Pediatr. 2014;2:79. doi:10.3389/fped.2014.00079.
- 2. Fackler JC, Wetzel RC. Critical care for rare diseases. Pediatr Crit Care Med. 2002;3(1):89–90. doi:10.1097/00130478-200201000-00022.
- 3. LaRovere JM, Jeffries HE, Sachdeva RC, et al. Databases for assessing the outcomes of the treatment of patients with congenital and paediatric cardiac disease—the perspective of critical care. Cardiol Young. 2008;18(Suppl 2):130–136. doi:10.1017/S1047951108002886.
- 4. Jacobs ML, Jacobs JP, Franklin RCG, et al. Databases for assessing the outcomes of the treatment of patients with congenital and paediatric cardiac disease—the perspective of cardiac surgery. Cardiol Young. 2008;18(Suppl 2):101–115. doi:10.1017/S1047951108002813.
- 5. Lauer MS, D'Agostino RB Sr. The randomized registry trial—the next disruptive technology in clinical research? N Engl J Med. 2013;369(17):1579–1581. doi:10.1056/NEJMp1310102.
- 6. Fackler J, Lehmann HP, Wetzel RC. Critical care for rare diseases (and procedures): redux. Pediatr Crit Care Med. 2015;16(3):297–299. doi:10.1097/PCC.0000000000000360.
- 7. Tavazzi L. Do we need clinical registries? Eur Heart J. 2014;35(1):7–9. doi:10.1093/eurheartj/eht360.
- 8. Arts DG, De Keizer NF, Scheffer GJ. Defining and improving data quality in medical registries: a literature review, case study, and generic framework. J Am Med Inform Assoc. 2002;9(6):600–611. doi:10.1197/jamia.M1087.
- 9. Slonim AD, Pollack MM. Integrating the Institute of Medicine's six quality aims into pediatric critical care: relevance and applications. Pediatr Crit Care Med. 2005;6(3):264–269. doi:10.1097/01.PCC.0000160592.87113.C6.
- 10. Varkey P, Reller MK, Resar RK. Basics of quality improvement in health care. Mayo Clin Proc. 2007;82(6):735–739. doi:10.4065/82.6.735.
- 11. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260(12):1743–1748. doi:10.1001/jama.260.12.1743.
- 12. Wetzel RC, Sachedeva R, Rice TB. Are all ICUs the same? Paediatr Anaesth. 2011;21(7):787–793. doi:10.1111/j.1460-9592.2011.03595.x.
- 13. Pollack MM, Holubkov R, Glass P, et al. Functional Status Scale: new pediatric outcome measure. Pediatrics. 2009;124(1):e18–e28. doi:10.1542/peds.2008-1987.
- 14. Pollack M, Holubkov R, Funai T, et al. Relationship between the functional status scale and the pediatric overall performance category and pediatric cerebral performance category scales. JAMA Pediatr. 2014;168(7):671–676. doi:10.1001/jamapediatrics.2013.5316.
- 15. Fiser DH, Tilford JM, Roberson PK. Relationship of illness severity and length of stay to functional outcomes in the pediatric intensive care unit: a multi-institutional study. Crit Care Med. 2000;28(4):1173–1179. doi:10.1097/00003246-200004000-00043.
- 16. Pasquali SK, He X, Jacobs JP, et al. Measuring hospital performance in congenital heart surgery: administrative versus clinical registry data. Ann Thorac Surg. 2015;99(3):932–938. doi:10.1016/j.athoracsur.2014.10.069.
- 17. Marcin JP, Pollack MM. Review of the acuity scoring systems for the pediatric intensive care unit and their use in quality improvement. J Intensive Care Med. 2007;22(3):131–140. doi:10.1177/0885066607299492.
- 18. Marcin JP, Pollack MM. Review of the methodologies and applications of scoring systems in neonatal and pediatric intensive care. Pediatr Crit Care Med. 2000;1(1):20–27. doi:10.1097/00130478-200007000-00004.
- 19. Le Gall JR. The use of severity scores in the intensive care unit. Intensive Care Med. 2005;31(12):1618–1623. doi:10.1007/s00134-005-2825-8.
- 20. Knaus WA, Wagner DP, Draper EA, Lawrence DE, Zimmerman JE. The range of intensive care services today. JAMA. 1981;246(23):2711–2716.
- 21. Straney L, Clements A, Parslow RC, et al. Paediatric index of mortality 3: an updated model for predicting mortality in pediatric intensive care. Pediatr Crit Care Med. 2013;14(7):673–681. doi:10.1097/PCC.0b013e31829760cf.
- 22. Ruttimann UE, Patel KM, Pollack MM. Length of stay and efficiency in pediatric intensive care units. J Pediatr. 1998;133(1):79–85. doi:10.1016/s0022-3476(98)70182-9.
- 23. Wetzel RC. The virtual pediatric intensive care unit. Practice in the new millennium. Pediatr Clin North Am. 2001;48(3):795–814. doi:10.1016/s0031-3955(05)70340-0.
- 24.Pollack M M, Holubkov R, Funai T. et al. Simultaneous prediction of new morbidity, mortality, and survival without new morbidity from pediatric intensive care: a new paradigm for outcomes assessment. Crit Care Med. 2015;43(8):1699–1709. doi: 10.1097/CCM.0000000000001081. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Tilford J M, Simpson P M, Green J W, Lensing S, Fiser D H. Volume-outcome relationships in pediatric intensive care units. Pediatrics. 2000;106(2 Pt 1):289–294. doi: 10.1542/peds.106.2.289. [DOI] [PubMed] [Google Scholar]
- 26.Jeffries H E Soto-Campos G Katch A Gall C Rice T B Wetzel R A pediatric index of cardiac surgical intensive care mortality risk score for pediatric cardiac critical care Pediatr Crit Care Med 2015; e-pub ahead of print [DOI] [PubMed] [Google Scholar]
- 27.Leteurtre S Duhamel A Salleron J Grandbastien B Lacroix J Leclerc F; Groupe Francophone de Réanimation et d'Urgences Pédiatriques (GFRUP). PELOD-2: an update of the PEdiatric logistic organ dysfunction score Crit Care Med 20134171761–1773. [DOI] [PubMed] [Google Scholar]
- 28.Gupta P, Green J W, Tang X. et al. Comparison of high-frequency oscillatory ventilation and conventional mechanical ventilation in pediatric respiratory failure. JAMA Pediatr. 2014;168(3):243–249. doi: 10.1001/jamapediatrics.2013.4463. [DOI] [PubMed] [Google Scholar]
- 29.Gupta P, Tang X, Gossett J M. et al. Association of center volume with outcomes in critically ill children with acute asthma. Ann Allergy Asthma Immunol. 2014;113(1):42–47. doi: 10.1016/j.anai.2014.04.020. [DOI] [PubMed] [Google Scholar]
- 30.Gupta P, Tang X, Gall C M, Lauer C, Rice T B, Wetzel R C. Epidemiology and outcomes of in-hospital cardiac arrest in critically ill children across hospitals of varied center volume: a multi-center analysis. Resuscitation. 2014;85(11):1473–1479. doi: 10.1016/j.resuscitation.2014.07.016. [DOI] [PubMed] [Google Scholar]
- 31.Sterne J A, Egger M. Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis. J Clin Epidemiol. 2001;54(10):1046–1055. doi: 10.1016/s0895-4356(01)00377-8. [DOI] [PubMed] [Google Scholar]
- 32.Toltzis P, Soto-Campos G, Kuhn E M, Hahn R, Kanter R K, Wetzel R C. Evidence-based pediatric outcome predictors to guide the allocation of critical care resources in a mass casualty event. Pediatr Crit Care Med. 2015;16(7):e207–e216. doi: 10.1097/PCC.0000000000000481. [DOI] [PubMed] [Google Scholar]
- 33.Pasquali S K, Jacobs J P, Bove E L. et al. Quality-cost relationship in congenital heart surgery. Ann Thorac Surg. 2015;100(4):1416–1421. doi: 10.1016/j.athoracsur.2015.04.139. [DOI] [PMC free article] [PubMed] [Google Scholar]
