Author manuscript; available in PMC 2024 Nov 2. Published in final edited form as: Pediatr Crit Care Med. 2023 Nov 2;24(11):943–951. doi: 10.1097/PCC.0000000000003335

Implementation Science Research in Pediatric Critical Care Medicine

Charlotte Z Woods-Hill, Heather Wolfe, Sara Malone, Katherine M Steffen, Asya Agulnik, Brian F Flaherty, Ryan P Barbaro, Maya Dewan, Sapna Kudchadkar; ECLIPSE (Excellence in Pediatric Implementation Science), Pediatric Acute Lung Injury and Sepsis Investigators (PALISI) Network.
PMCID: PMC10624111  NIHMSID: NIHMS1916452  PMID: 37916878

Abstract

Objective:

Delay or failure to consistently adopt evidence-based or consensus-based best practices into routine clinical care is common, including for patients in the pediatric intensive care unit (PICU). PICU patients can fail to receive potentially beneficial diagnostic or therapeutic interventions, worsening the burden of illness and injury during critical illness. Implementation science (IS) has emerged to systematically address this problem, but its use in the PICU has been limited to date. We therefore present a conceptual and methodologic overview of IS for the pediatric intensivist.

Design and Methods:

The members of ECLIPSE (Excellence in Pediatric Implementation Science; part of the Pediatric Acute Lung Injury and Sepsis Investigators Network) represent multi-institutional expertise in the use of IS in the PICU. This narrative review reflects the collective knowledge and perspective of the ECLIPSE group about why IS can benefit PICU patients, how to distinguish IS from quality improvement (QI), and how to evaluate an IS article.

Results:

IS requires a shift in one’s thinking, away from the questions and outcomes that define traditional clinical or translational research, including QI. The terminology, definitions, and language of the IS literature differ from those of QI, specifically in the relative importance placed on generalizable knowledge, as well as in aspects of study design, scale, and the timeframe over which investigations occur.

Conclusions:

Research in pediatric critical care must acknowledge the limitations and potential for patient harm that may result from a failure to implement evidence-based or consensus-based practices. IS represents an innovative, pragmatic, and increasingly popular approach that our field must embrace to improve our ability to care for critically ill children.

Indexing: implementation science, quality, safety, outcomes, methodology, pediatric critical care medicine

Introduction

Historically, in adult medical practice, an estimated 17 years elapse before an evidence-based practice (EBP) is routinely adopted into clinical care (1, 2). As such, recommendations may easily be out of date before they reach their intended audience or, possibly, never reach their audience at all. The uptake of EBPs is challenged by academia’s traditional focus on rigorous clinical studies in highly specific patient populations without prioritizing the next step of wider implementation for broad impact. Addressing this “research-to-practice gap” is the core concept underlying the field of implementation science (IS) (3–5). Academic and research funding bodies increasingly recognize the value of IS in advancing the widespread public health impact of clinical research and quality improvement (QI) initiatives (6, 7).

In 2021, one of the articles celebrating the 50th anniversary of the journal Critical Care Medicine reviewed “The Coming of Age of Implementation Science and Research in Critical Care Medicine” (8). Work in adult intensive care units (ICUs) suggests specific reasons why critical care research often fails to translate into clinical practice (e.g., the complexity of ICU team dynamics when attempting behavior change, and challenges in overcoming ICU culture) and how IS can help to overcome these obstacles. While the practice of pediatric critical care likely faces some of these challenges, the pediatric ICU (PICU) differs from adult ICUs in important ways, for example: 1) the comparatively limited evidence base to guide care; 2) the logistical and ethical obstacles to traditional randomized trials in children; and 3) the relative lack of resources available for pediatric research. The ECLIPSE (Excellence in Pediatric Implementation Science) group of the Pediatric Acute Lung Injury and Sepsis Investigators (PALISI) Network (9) believes that many of the research challenges faced by the pediatric intensivist can and should be addressed using IS methodology. However, there is a paucity of IS research in pediatric critical care.

In this narrative review, we therefore aim to describe core elements of IS investigations that may help improve the use of IS methodology in the PICU. This review will: 1) define IS; 2) describe the basic terminology used in IS research; 3) review the distinctions between IS and QI; and 4) guide readers of Pediatric Critical Care Medicine through the evaluation of an IS study.

Defining Implementation Science

In general, IS is defined as “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services” (10). As such, IS falls within the scope of translational research, occurring at the end of the research pipeline (4, 11).

Initially, potential interventions are tested in efficacy trials using homogeneous populations and highly controlled conditions. Effectiveness trials are used subsequently to determine whether the intervention works in a more realistic context (11), but with ongoing supervision and support from researchers. In the past, it was then assumed that effective interventions would automatically be broadly applied in clinical practice. However, health care researchers have repeatedly demonstrated that this assumption is flawed: even the most effective interventions require systematic efforts to achieve integrated and sustained use within the real-world clinical setting. One major limitation of the conventional clinical research pipeline arises when facilitating widespread uptake of an effective practice or intervention: there are often no additional resources, such as staff, for implementing new evidence once the research protocol has been completed. This funding gap means that, without specific examination of how to implement a new EBP in the real world, researchers will not understand which parts of the new intervention prove troublesome, and which are straightforward, outside the ideal setting of a research project.

The field of IS was developed as an answer to this problem: the need to understand how to ensure widespread use of EBPs beyond simple publication. Put simply, the early part of the EBP research pipeline focuses on patient-level health outcomes, such as mortality rate or duration of mechanical ventilation, and answers the question “what is the best treatment for disease X?”. IS comes at the end of this pipeline and addresses how frequently a new EBP is used, how well it is used, and how to make its use sustainable in the real world. That is, IS asks “how do I ensure all eligible patients receive the current evidence-based treatment for disease X?”

The language used in IS research

Learning about IS requires familiarity with its core terminology and understanding the subtleties that distinguish concepts like “intervention,” “implementation,” and “dissemination.” These terms are often used interchangeably outside of IS but have specific meanings and implications when applied in IS work (Table 1). In addition, IS employs familiar-sounding concepts like “theories,” “frameworks,” and “models” in highly specific and precisely defined ways to produce generalizable knowledge. IS theories, frameworks, and models can be categorized as serving to create generalizable knowledge of three types: 1) to describe or guide the process of translating research into practice; 2) to understand or explain what influences implementation success or failure; and 3) to evaluate the implementation process (12–22). IS also focuses on outcomes that are distinct from those encountered in typical clinical research, including acceptability, feasibility, and fidelity (21) (Table 2).

Table 1.

Glossary of implementation science

Intervention, implementation, and dissemination

Intervention
  Definition: The evidence-based, or consensus-based, clinical practice, policy, or program from which a particular patient population would benefit but which is currently under-utilized (13).
  Example: The timely use of antibiotics for septic shock.

Implementation
  Definition: The integration of an intervention into a specific setting or context (13).
  Example: A sepsis huddle to ensure no barriers to timely antibiotic administration (such as IV access or stat ordering).

Dissemination
  Definition: The distribution of an intervention to a specific audience (13).
  Example: Creation of a toolkit to facilitate widespread adoption of sepsis huddles for timely antibiotics in pediatric critical care centers.

Theory, model, and framework

Theory
  Definition: A set of analytical principles or statements designed to structure our observation, understanding, and explanation of the world.
  Examples: Theory of diffusion (14); organizational theory (22).

Model
  Definition: A deliberate simplification of a phenomenon or a specific aspect of a phenomenon. Models can be described as theories with a more narrowly defined scope of explanation; a model is descriptive, whereas a theory is explanatory as well as descriptive.
  Examples: The Knowledge-to-Action model (15); the Ottawa model (16); the ACE Star Model of Knowledge Transformation (17).

Framework
  Definition: A structure, overview, outline, system, or plan consisting of various descriptive categories (e.g., constructs or variables) and the relations between them that are presumed to account for a phenomenon. Frameworks do not provide explanations; they only describe empirical phenomena by fitting them into a set of categories.
  Examples: Consolidated Framework for Implementation Research (CFIR) (18); Theoretical Domains Framework (TDF) (19); Promoting Action on Research Implementation in Health Services (PARIHS) (20); Proctor outcomes framework (21).

Table 2.

Definitions of outcomes used in implementation science

Acceptability: The perception among implementation stakeholders that a given treatment, service, practice, or innovation is agreeable, palatable, or satisfactory.

Adoption/Uptake: The intention, initial decision, or action to try or employ an innovation or evidence-based practice.

Appropriateness: The perceived fit, relevance, or compatibility of the innovation or evidence-based practice for a given practice setting, provider, or consumer; and/or the perceived fit of the innovation to address a particular issue or problem.

Cost: The cost impact of an implementation effort, related to the costs of the particular intervention, the implementation strategy used, and the location of service delivery.

Feasibility: The extent to which a new treatment, or an innovation, can be successfully used or carried out within a given agency or setting.

Fidelity: The degree to which an intervention was implemented as prescribed in the original protocol or as intended by the intervention developers.

Penetration: The integration of a practice within a service setting and its subsystems; can be calculated as the number of providers who deliver a given intervention divided by the total number of providers trained in or expected to deliver the intervention, or as the number of eligible patients who receive the intervention divided by the total number of eligible patients.

Sustainability: The extent to which a newly implemented treatment is maintained or institutionalized within a service setting’s ongoing, stable operations.
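Of these outcomes, penetration is the most explicitly quantitative. As a minimal worked example (the counts below are hypothetical and for illustration only), the patient-level form of the definition above can be written as:

$$\text{Penetration} = \frac{\text{number of eligible patients who received the intervention}}{\text{total number of eligible patients}}$$

so that, for instance, if 30 of 40 eligible patients received a sepsis huddle, penetration would be $30/40 = 75\%$.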

Distinguishing IS from quality improvement

There is important overlap between IS and QI: both share the goal of improving the quality of care delivered to patients. However, the two fields diverge in how they accomplish this goal (Table 3). At its core, IS seeks to close the research-to-practice gap for EBPs as the primary mechanism to improve patient outcomes, while QI is designed to solve problems that may or may not relate to a specific EBP (23, 24). IS seeks to develop generalizable knowledge that can be applied beyond the area of study and in multiple settings. QI, in contrast, does not seek to generate generalizable knowledge, but instead focuses on improving a specific problem in a system of healthcare practice (5). An IS approach offers insight into the mechanisms of behavior change and examines contextual factors that impact implementation, grounded in specific scientific theories, frameworks, or models (25); such detailed study of context is typically not part of QI projects. Unlike the outcomes examined in QI (such as a particular clinical outcome, e.g., rates of unplanned extubation, along with process and balancing metrics, e.g., rates of reintubation), IS outcomes pertain to adoption and use of the intervention itself and include measures such as feasibility, fidelity, and sustainability (Table 2). While QI tends to move quickly, with rapid tests of change and repeated plan-do-study-act cycles, IS requires more time to complete formal assessment of context and to generate theory-based generalizable knowledge (23).

Table 3.

Quality improvement vs. implementation science

Underlying assumption
  QI: Evidence-based practices, benchmarks, or guidelines exist to optimize care, or there is variability in care or outcomes.
  IS: Evidence-based practices, benchmarks, or guidelines exist to optimize care.

Primary problem to address
  QI: Improvement needed in the performance of a specific, local problem; at times based on evidence-based practice, benchmarks, or guidelines.
  IS: Evidence-based practices, benchmarks, or guidelines have not been widely adopted by clinicians or patients.

Generalizability
  QI: Applicability to the local setting is the focus, accounting for specific context; often not replicable in different contexts or at a large scale.
  IS: Applicability to multiple settings is essential, as is characterization of context.

Use of theoretical models
  QI: Sometimes, but not essential.
  IS: Essential.

Tools
  QI: Improvement models (Six Sigma, IHI Model for Improvement) and specific tools (key driver diagrams, process mapping, audit and feedback).
  IS: Theories, models, and frameworks: Consolidated Framework for Implementation Research (CFIR); Exploration, Preparation, Implementation, Sustainment (EPIS); Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM); and many others.

Study structure
  QI: Hypothesis-generating studies (often qualitative); prospective experimental and quasi-experimental studies focused on effectiveness outcomes (qualitative/quantitative/mixed methods).
  IS: Hypothesis-generating studies (often qualitative); prospective experimental and quasi-experimental studies focused on implementation outcomes or hybrid outcomes (qualitative/quantitative/mixed methods).

Timeframe
  QI: Short term initially: improvement can be rapid with small tests of change (Plan-Do-Study-Act cycles). Subsequently, a longer-term focus on sustainability.
  IS: Medium to long term initially: the scientific approach (planned implementation accounting for contextual factors and use of mixed methods) can lengthen the timeline. Subsequently, a longer-term focus on sustainability.

Measures
  QI: Outcome, process, and balancing metrics displayed in control and run charts.
  IS: Effectiveness and/or implementation outcomes depending on study design and statistical analysis plan; implementation outcomes include measures such as acceptability, feasibility, appropriateness, and others.

Essential elements of an IS publication

To standardize the quality and reporting of IS studies, an international group of IS experts developed the Standards for Reporting Implementation Studies (StaRI) statement (26). We highlight the core components of the StaRI statement and provide some examples of IS work in the PICU.

The manuscript should first clearly describe the EBP of interest and the evidence behind it, and demonstrate the research-to-practice gap that justifies the IS approach.

Lane-Fall et al. offer a useful schematic highlighting that efficacy and effectiveness research should ideally be completed before formal implementation studies (4). By definition, then, IS requires a substantial body of evidence before embarking on studies; i.e., the “what to do” should be well established before asking questions about “how.” For many PICU-based interventions, however, there may be benefit from an IS perspective even without first demonstrating the strength of evidence for both efficacy and effectiveness in critically ill children. The clinical reality is that there are relatively few large randomized clinical trials in critically ill children (9, 27, 28), which means that many “informed” decisions in pediatric critical care likely rely on adult data or lower-quality pediatric evidence (29). Furthermore, our field has developed a variety of consensus and EBP guidelines to shape practice (30–35). Studying the implementation of these adult data-driven approaches or pediatric guidelines is therefore an appropriate use of IS methods in pediatric critical care. Accordingly, in the PICU, an IS publication should either make a clear case that the practice of interest has a reasonable evidence base to justify studying its implementation, or make a compelling case for minimal harm and high likelihood of benefit in studying the implementation of a consensus-based guideline.

The methods should identify what aspect of IS is under study, and how.

For example, is the paper evaluating context (i.e., barriers and facilitators to implementing the EBP/guideline), strategies (e.g., specific approaches to better integrate the EBP/guideline into routine clinical care), or implementation outcomes (see Table 2)? The study design can vary widely: qualitative or mixed methods are often employed to assess context and outcomes, while a prospective randomized clinical trial may be used to determine whether one implementation strategy is superior to another. When the evidence underlying an EBP/guideline is still being developed, a hybrid trial, which concurrently collects data on clinical outcomes as well as implementation outcomes, is an increasingly promising approach (36). Like any clinical trial, the methods should describe intervention allocation and planned analysis, including any subgroups. Detailed description of the specific setting in which the EBP is being used or evaluated is critical, as is a thorough explanation of the necessary components of any implementation strategy. If applicable, a reader should also be able to identify the theory, framework, or model serving as the scientific basis for the authors’ approach, as well as the justification for why it was chosen (Table 1).

The outcome metrics, whether they are a hybrid of clinical/implementation or implementation only, should be defined and reported in the results section.

The results sections of IS studies often include fidelity to core components of the implementation strategy and EBP, as well as any adaptations to the EBP or contextual changes that may have occurred during the study; the sketch below illustrates how such measures might be operationalized.
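To make these outcomes concrete, the following is a minimal sketch in Python of how the penetration and fidelity outcomes defined in Table 2 might be computed from chart-audit records. All field names and audit values are hypothetical assumptions for illustration only; they are not drawn from any study cited in this review.

```python
# Minimal sketch: computing two implementation outcomes (Table 2) from a
# hypothetical chart audit. All names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Encounter:
    eligible: bool     # patient met criteria for the EBP (e.g., a sepsis huddle)
    received: bool     # the EBP was actually delivered
    steps_done: int    # protocol steps completed as prescribed
    steps_total: int   # protocol steps prescribed

def penetration(encounters: list[Encounter]) -> float:
    """Eligible patients who received the EBP / all eligible patients."""
    eligible = [e for e in encounters if e.eligible]
    return sum(e.received for e in eligible) / len(eligible) if eligible else 0.0

def mean_fidelity(encounters: list[Encounter]) -> float:
    """Mean fraction of prescribed steps completed, among delivered encounters."""
    delivered = [e for e in encounters if e.received and e.steps_total > 0]
    if not delivered:
        return 0.0
    return sum(e.steps_done / e.steps_total for e in delivered) / len(delivered)

if __name__ == "__main__":
    audit = [
        Encounter(eligible=True,  received=True,  steps_done=4, steps_total=5),
        Encounter(eligible=True,  received=False, steps_done=0, steps_total=5),
        Encounter(eligible=True,  received=True,  steps_done=5, steps_total=5),
        Encounter(eligible=False, received=False, steps_done=0, steps_total=5),
    ]
    print(f"Penetration: {penetration(audit):.0%}")     # 2 of 3 eligible -> 67%
    print(f"Mean fidelity: {mean_fidelity(audit):.0%}") # (0.8 + 1.0) / 2 -> 90%
```

Run on this toy audit, the sketch reports a penetration of 67% (two of three eligible patients received the EBP) and a mean fidelity of 90% among encounters where the EBP was delivered.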

The publication should pay attention to generalizability of its findings, remembering that applicability to multiple settings – attuned to contextual issues – is a critical defining feature of IS. Depending on the focus of the work, take-away points may include key barriers/facilitators that should be addressed for improved EBP use, if/how the EBP can be adapted to work better in specific contexts, or optimal strategies for EBP implementation.

Conclusions about IS within pediatric critical care and a call to action

To date, IS research within the PICU includes a range of studies: to increase uptake of a specific guideline (e.g., blood product transfusion); to characterize the general process of clinical practice change in the PICU via an IS framework (37); to explore determinants and contextual barriers to implementation (30, 31, 38); and to study implementation strategies (39–41). These studies, in addition to answering specific questions about the implementation of unique interventions, highlight the usefulness of pragmatic tools and strategies to aid in the implementation of best practices. These efforts are important, but much more work is needed to ensure all critically ill children consistently receive beneficial interventions.

In conclusion, all PICU practitioners can contribute to shortening the research-to-practice gaps in our field. Researchers trying to understand and discover “what” is best for our patients can and should simultaneously consider “how” to implement these approaches. IS methodology allows us to think systematically about the feasibility, acceptability, and adoptability of proposed interventions. Hybrid studies concurrently examining both the “what” and the “how” are particularly important given the challenges pediatric intensivists face when using high-quality evidence to guide practice. Systematically asking and answering questions from the IS perspective, and applying its tools and methodologies, has the potential to transform the practice of pediatric critical care and significantly improve outcomes for all critically ill children.

Supplementary Material


Supplement 1. Authorship appendix, ECLIPSE subgroup of the PALISI Network

Acknowledgements

The authors wish to thank the Pediatric Acute Lung Injury and Sepsis Investigators (PALISI) Network for their support of the ECLIPSE group and their review of this manuscript. The authors also wish to thank Drs. Meghan Lane-Fall MD MSHP and Rinad Beidas PhD for their support of the ECLIPSE work and content expertise in implementation science.

Financial support:

Dr. Woods-Hill receives support from the National Heart, Lung, And Blood Institute of the National Institutes of Health under Award Number K23HL151381. Dr. Wolfe receives funding from the National Institutes of Health (R01HL131544 and R01HD099284). Dr. Barbaro receives funding from the National Heart, Lung, and Blood Institute (K12HL138039-02 and R01 HL153519). Dr. Kudchadkar receives funding from the National Institutes of Health (R01HD103811 & R01DK132348). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Footnotes

Copyright Form Disclosure: Dr. Woods-Hill’s institution received funding from the National Heart, Lung, and Blood Institute (NHLBI) and the Agency for Healthcare Research and Quality. Drs. Woods-Hill, Barbaro, and Dewan received support for article research from the National Institutes of Health. Dr. Wolfe received funding from The Debriefing Academy. Dr. Barbaro’s institution received funding from the NHLBI (R01 HL153519 and K12 HL138039); he disclosed that he is on the Board of Directors for the Extracorporeal Life Support Organization and Co-Chair of Member Pedi-ECMO. The remaining authors have disclosed that they do not have any potential conflicts of interest.

References

1. Morris ZS, Wooding S, Grant J: The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med 2011; 104(12):510–520
2. Balas EA, Boren SA: Managing Clinical Knowledge for Health Care Improvement. Yearb Med Inform 2000(1):65–70
3. Munro CL, Savel RH: Narrowing the 17-Year Research to Practice Gap. Am J Crit Care 2016; 25(3):194–196
4. Lane-Fall MB, Curran GM, Beidas RS: Scoping implementation science for the beginner: locating yourself on the “subway line” of translational research. BMC Med Res Methodol 2019; 19(1):133
5. Bauer MS, Damschroder L, Hagedorn H, Smith J, et al: An introduction to implementation science for the non-specialist. BMC Psychol 2015; 3:32
6. NHLBI: Center for Translation Research and Implementation Science. Available at: https://www.nhlbi.nih.gov/about/divisions/center-translation-research-and-implementation-science. Accessed April 20, 2023
7. AHRQ: AHRQ’s Dissemination and Implementation Initiative. Available at: https://www.ahrq.gov/pcor/ahrq-dissemination-and-implementation-initiative/index.html. Accessed April 20, 2023
8. Barr J, Paulson SS, Kamdar B, Ervin JN, et al: The Coming of Age of Implementation Science and Research in Critical Care Medicine. Crit Care Med 2021; 49(8):1254–1275
9. Randolph AG, Bembea MM, Cheifetz IM, Curley MAQ, et al: Pediatric Acute Lung Injury and Sepsis Investigators (PALISI): Evolution of an Investigator-Initiated Research Network. Pediatr Crit Care Med 2022; 23(12):1056–1066
10. Eccles MP, Mittman BS: Welcome to Implementation Science. Implement Sci 2006; 1(1)
11. Brown CH, Curran G, Palinkas LA, Aarons GA, et al: An Overview of Research and Evaluation Designs for Dissemination and Implementation. Annu Rev Public Health 2017; 38:1–22
12. Nilsen P: Making sense of implementation theories, models and frameworks. Implement Sci 2015; 10:53
13. Society of Clinical Psychology: Overview. Available at: https://www.div12.org/implementation. Accessed April 14, 2023
14. Rogers EM: Diffusion of Innovations. Fifth Edition. New York, Free Press, 2003
15. Graham ID, Logan J, Harrison MB, Straus SE, et al: Lost in knowledge translation: time for a map? J Contin Educ Health Prof 2006; 26(1):13–24
16. Graham ID, Logan J: Innovations in knowledge transfer and continuity of care. Can J Nurs Res 2004; 36(2):89–103
17. Stevens KR: ACE Star Model of EBP: Knowledge Transformation. Available at: www.acestar.uthscsa.edu. Accessed
18. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, et al: Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009; 4:50
19. Cane J, O’Connor D, Michie S: Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci 2012; 7(37)
20. Kitson A, Harvey G, McCormack B: Enabling the implementation of evidence based practice: a conceptual framework. Qual Health Care 1998; 7(3):149–158
21. Proctor E, Silmere H, Raghavan R, Hovmand P, et al: Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health 2011; 38(2):65–76
22. Birken SA, Bunger AC, Powell BJ, Turner K, et al: Organizational theory for dissemination and implementation research. Implement Sci 2017; 12(1):62
23. Lane-Fall MB, Fleisher LA: Quality Improvement and Implementation Science: Different Fields with Aligned Goals. Anesthesiol Clin 2018; 36(1):xiii–xv
24. Bartman T, Brilli RJ: Quality Improvement Studies in Pediatric Critical Care Medicine. Pediatr Crit Care Med 2021; 22(7):662–668
25. Koczwara B, Stover AM, Davies L, Davis MM, et al: Harnessing the Synergy Between Improvement Science and Implementation Science in Cancer: A Call to Action. J Oncol Pract 2018; 14(6):335–340
26. Pinnock H, Barwick M, Carpenter CR, Eldridge S, et al: Standards for Reporting Implementation Studies (StaRI) Statement. BMJ 2017; 356:i6795
27. Peters MJ, Ramnarayan P, Scholefield BR, Tume LN, et al: The United Kingdom Paediatric Critical Care Society Study Group: The 20-Year Journey Toward Pragmatic, Randomized Clinical Trials. Pediatr Crit Care Med 2022; 23(12):1067–1075
28. Dean JM, Collaborative Pediatric Critical Care Research Network Investigators: Evolution of the Collaborative Pediatric Critical Care Research Network. Pediatr Crit Care Med 2022; 23(12):1049–1055
29. Murthy S, Fontela P, Berry S: Incorporating Adult Evidence Into Pediatric Research and Practice: Bayesian Designs to Expedite Obtaining Child-Specific Evidence. JAMA 2021; 325(19):1937–1938
30. Topjian AA, de Caen A, Wainwright MS, Abella BS, et al: Pediatric Post-Cardiac Arrest Care: A Scientific Statement From the American Heart Association. Circulation 2019; 140(6):e194–e233
31. Woods-Hill CZ, Koontz DW, Voskertchian A, Xie A, et al: Consensus Recommendations for Blood Culture Use in Critically Ill Children Using a Modified Delphi Approach. Pediatr Crit Care Med 2021; 22(9):774–784
32. Emeriaud G, Lopez-Fernandez YM, Iyer NP, Bembea MM, et al: Executive Summary of the Second International Guidelines for the Diagnosis and Management of Pediatric Acute Respiratory Distress Syndrome (PALICC-2). Pediatr Crit Care Med 2023; 24(2):143–168
33. Smith HAB, Besunder JB, Betters KA, Johnson PN, et al: 2022 Society of Critical Care Medicine Clinical Practice Guidelines on Prevention and Management of Pain, Agitation, Neuromuscular Blockade, and Delirium in Critically Ill Pediatric Patients With Consideration of the ICU Environment and Early Mobility. Pediatr Crit Care Med 2022; 23(2):e74–e110
34. Valentine SL, Bembea MM, Muszynski JA, Cholette JM, et al: Consensus Recommendations for RBC Transfusion Practice in Critically Ill Children From the Pediatric Critical Care Transfusion and Anemia Expertise Initiative. Pediatr Crit Care Med 2018; 19(9):884–898
35. Nellis ME, Karam O, Valentine SL, Bateman ST, et al: Executive Summary of Recommendations and Expert Consensus for Plasma and Platelet Transfusion Practice in Critically Ill Children: From the Transfusion and Anemia EXpertise Initiative-Control/Avoidance of Bleeding (TAXI-CAB). Pediatr Crit Care Med 2022; 23(1):34–51
36. Curran GM, Bauer M, Mittman B, Pyne JM, et al: Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care 2012; 50(3):217–226
37. Steffen KM, Holdsworth LM, Ford MA, Lee GM, et al: Implementation of clinical practice changes in the PICU: a qualitative study using and refining the iPARIHS framework. Implement Sci 2021; 16(1):15
38. Agulnik A, Ferrara G, Puerto-Torres M, Gillipelli SR, et al: Assessment of Barriers and Enablers to Implementation of a Pediatric Early Warning System in Resource-Limited Settings. JAMA Netw Open 2022; 5(3):e221547
39. Wieczorek B, Ascenzi J, Kim Y, Lenker H, et al: PICU Up!: Impact of a Quality Improvement Intervention to Promote Early Mobilization in Critically Ill Children. Pediatr Crit Care Med 2016; 17(12):e559–e566
40. Markham C, Proctor EK, Pineda JA: Implementation strategies in pediatric neurocritical care. Curr Opin Pediatr 2017; 29(3):266–271
41. Agulnik A, Gonzalez Ruiz A, Muniz-Talavera H, Carrillo AK, et al: Model for regional collaboration: Successful strategy to implement a pediatric early warning system in 36 pediatric oncology centers in Latin America. Cancer 2022; 128(22):4004–4016
