Abstract
Quality improvement (QI) methods have been used extensively to support the delivery of safe, timely, effective, equitable, and cost-effective health care. While QI initiatives have demonstrated benefits, critical gaps in design and implementation undermine their impact. Systematic reviews and expert commentaries point to recurring challenges, including limited understanding and appreciation of the system in which the work takes place; poorly articulated aims; absence of guiding content theories for scalable implementation; weak implementation strategies; inadequate mechanisms for measurement, evaluation, and learning; and insufficiently structured approaches to communication and dissemination. These gaps limit learning, impact, replication, sustainability, and scalability. To address these gaps, the Institute for Healthcare Improvement (IHI) developed the Core Components Guide, a practical framework for designing, implementing, and evaluating QI initiatives. Grounded in Deming's System of Profound Knowledge and the Model for Improvement, the guide includes six interrelated components: System Understanding; Improvement Aim; Measurement, Evaluation, and Learning; Content Theory; Execution Theory; and Dissemination and Communication. Together, these components provide a structured approach to align interventions with context, clarify program theory, and embed iterative learning cycles. This manuscript introduces the Core Components, illustrates their application through a case study, and shares lessons learned from operationalizing the guide across diverse settings. By integrating improvement and implementation science principles, the Core Components Guide strengthens design, promotes fidelity, and increases the potential for impact, replication, scale, and sustainability of QI initiatives. The Guide offers actionable strategies for QI leaders and policymakers to build stronger foundations for improvement, evaluation, and dissemination.
Keywords: core components, healthcare, implementation science, improvement science, QI design, quality improvement, quality improvement evaluation
1. Introduction
Quality improvement (QI) methods have been utilized in health and health care settings to support the delivery of safe, timely, effective, efficient, equitable, and cost-effective care (1–3). The use of QI has evolved from isolated local projects to holistic deployment across complex systems (4). Previous research shows that applying QI can lead to safer care, better outcomes, and lower costs (5, 6), and can empower providers to deliver person-centered, context-specific care, boosting engagement, teamwork, and job satisfaction (7).
However, gaps in the design and implementation of QI initiatives often undermine their effectiveness (8–10). First, inadequate system appreciation remains a challenge. Without clearly defining the problem, the system, and the surrounding context, improvement teams may overlook important influences and fail to identify factors contributing to the problem (11). Second, QI initiatives often have poorly defined aims. Aims that are not specific, measurable, and time-bound create ambiguity and limit evaluation efforts (8). Third, the absence of a content theory limits the adoption, adaptation, and reliable implementation of evidence-based interventions (12). Fourth, approaches to guide improvement teams in testing and implementation can be vague, overly ambitious, or disconnected from operational realities (8, 13). Similarly, the absence of adaptive learning principles, such as Plan-Do-Study-Act (PDSA) cycles, can contribute to the failure of QI initiatives (8, 14). Fifth, failure to design and embed measurement, evaluation, and learning (MEL) mechanisms throughout the project lifecycle limits the ability to assess progress and establish a causal pathway (10). Finally, the lack of structured communication and dissemination strategies can limit the spread and scale of successful interventions.
Recent systematic reviews and commentaries have highlighted these challenges, emphasizing the need for greater integration of improvement and implementation science approaches to inform the design and implementation of QI initiatives (9, 10, 15). Given the variable results of QI initiatives, the field could benefit from specific guidance for designing QI initiatives with a clear program theory that is more likely to lead to learning, improvement, and impact (16, 17).
This manuscript introduces the Core Components, a practical guide developed by the Institute for Healthcare Improvement (IHI) to address these challenges. The Core Components Guide offers an integrated approach to improvement in complex systems, drawing on improvement science, implementation science, evaluation, and related fields. Together, they provide a practical yet robust structure to guide the design, implementation, evaluation, and dissemination of QI initiatives, while helping practitioners integrate knowledge generation, research, and practice. Their use creates synergies across these domains by:
Grounding changes to be tested in shared theory and evidence (knowledge)
Generating evidence about changes that result in improvement that can be used by others and/or to bring improvement to scale (research)
Embedding all activities and learning within real-world settings “in context” based on an understanding of the system(s) of focus (practice).
This approach also enables measurement and evaluation insights to inform both improvement and research, for research findings to lead to better practice, and for accumulated knowledge to accelerate improvement at a larger scale. The authors provide an overview of the Guide, illustrate its application through a case study, and discuss lessons learned in operationalizing it.
2. The core components guide
2.1. Origin of the core components
Informed by an IHI innovation cycle and decades of experience leading improvement initiatives, IHI developed its Core Components Guide in 2014 (18). The initial guide included five Core Components (Aim; MEL; Content Theory; Execution Theory; and Dissemination and Communication) and emphasized the value of formative, theory-driven evaluation that addressed a central learning question: “How and in what contexts does a new model work or can be amended to work?” (19). Recognizing the need to strengthen design and set the stage for evaluation, IHI expanded the use of the Core Components Guide to design, implement, refine, and scale up improvement initiatives (20). In 2024, IHI added a core component of System Understanding to reinforce alignment with context in the design phase.
2.2. Underpinning frameworks
Two frameworks underpin the Core Components Guide: W. Edwards Deming's System of Profound Knowledge (SoPK) (21) and the Model for Improvement (MFI) (8). SoPK offers four interrelated domains to guide the design and implementation of improvement initiatives: appreciation for a system, knowledge of variation, theory of knowledge, and psychology (21). Anchored in SoPK, the MFI includes three questions: what are we trying to accomplish, how will we know that a change is an improvement, and what changes can we make that will result in improvement (22). These three questions drive adaptive testing and learning using PDSA cycles.
2.3. Overview of the core components guide
The guide includes six interrelated Core Components (see Figure 1): (1) System Understanding; (2) Improvement Aim; (3) MEL; (4) Content Theory; (5) Execution Theory; and (6) Dissemination and Communication. Like the MFI, the Core Components are agnostic to the aim, scope, and scale of the QI initiative, type of organization, or context. The Core Components Guide encourages planning based on theory, prediction, evidence, and iterative learning.
Figure 1.
Core components guide mapped against the model for improvement (8).
2.3.1. System understanding
“Appreciation of the system” is a required element for improvement in Deming's SoPK (21). Understanding how the system works—the different parts and how they interconnect and work together—is a critical starting point for integrating improvement into the unique context [system(s) or setting(s)]. It makes explicit the need to better understand the actors within the system; how things are currently done in the system(s) of focus; the competing priorities and motivations of participating teams; these teams’ bandwidth; the level of will among their leaders and staff; and baseline performance. This work provides a “bird's-eye” view that helps improvement teams zoom out from isolated problems to understand the larger root contributors to the system's function. Acknowledgement and exploration of power dynamics, structural inequities, and historical and current mistrust are also important to this component, as they influence how the system functions and for whom it improves outcomes (23–25).
System understanding starts during the initial design process, informs the development of the other Core Components, and continues throughout the lifecycle of the initiative as changes are tested and refined for successful implementation. Five recommended activities help build system understanding:
Problem Statement and Diagnosis—an accurately understood problem and well-articulated problem statement.
Context Assessment—an analysis of the influencers of the project's success including factors related to the environment, social conditions, culture, etc.
Evidence Review—a review of formal and informal evidence to understand the broader landscape related to the work of the project.
Data Review—analysis of the system's data on past and current performance, variation in subsystems, and populations disproportionately impacted. This could include an analysis of needs, opportunities, and ways in which systems have discarded or undervalued assets of individuals and communities to identify ways these can be addressed to advance population health and dismantle inequities (26).
Actor Engagement—identification of the actors in the system (e.g., health care workers, patients, and families) to support future engagement in the project.
2.3.2. Improvement aim
Building off “system understanding,” an effective aim provides a shared vision for improvement and helps answer the first question of the MFI (8, 27). An aim statement should be specific, numeric, and time-bound, and should describe the gap between current performance and the desired performance for key outcomes that matter to the people engaged in and impacted by the work. It should be ambitious yet feasible, reflecting an understanding of the system. A QI initiative may have an aim statement for the whole project (e.g., decrease average Hemoglobin A1C for the outpatient Type 2 diabetes population from 8.5% to 6.5% over 18 months) or a set of sub-aims relating to targeted processes in the system of care (e.g., improve the reliability of annual Hemoglobin A1C screening for all returning patients with Type 2 diabetes from 60% to 95% in 6 months). For QI initiatives involving multiple teams, individual improvement teams are encouraged to adapt the initiative aim to their own context, informed by their local baseline.
2.3.3. Measurement, evaluation, and learning
Continuous MEL should inform all aspects of an improvement initiative. Most QI initiatives require modifications during their implementation to meet their stated aim(s), and robust, yet practical MEL activities provide the information needed to make informed decisions. The MEL Core Component guides improvement at two levels: (1) how the system of interest is changing (informed by the measurement strategy) and (2) the effectiveness of the QI initiative's implementation (informed by the MEL plan). While these areas of learning are interrelated, they have different primary users and purposes and utilize distinct methods and tools.
Measurement Strategy—helps teams implementing the QI initiative “on the ground” answer the second question of the MFI—whether the changes are leading to improvement (28). A robust measurement strategy includes a family of measures (outcome, process, and balancing measures) (28); clear operational definitions; a feasible plan for data collection across teams; and a system for visualizing and analyzing data over time, such as run charts or statistical process control charts (28–30). Tracking these measures over time enables teams to determine whether changes are resulting in improvement, adapt as needed, and make informed decisions.
MEL Plan—includes questions, data, and tools to inform the project team (individuals responsible for the overall design) and key actors about the QI initiative's progress, effectiveness, and learnings. Formative evaluation during the project facilitates adaptive design (31, 32), while summative evaluation at the conclusion of the project provides evidence about the causal pathway (20, 33).
A learning plan may include regular, formal, and informal activities to pause and reflect on what is working and what is not, and to use these insights to adapt the execution theory and content theory.
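To make the statistical process control charts mentioned in the measurement strategy concrete, the control limits for a standard XmR (individuals) chart can be computed in a few lines. The sketch below is illustrative only: the function, the monthly screening percentages, and the variable names are assumptions for this example, not data or tools from the Guide.

```python
# Hedged sketch: XmR (individuals) control-chart limits for a QI measure
# plotted over time. The data below are invented for illustration.

def xmr_limits(values):
    """Return (center, lower, upper) limits for an individuals chart.

    Uses the average moving range (MRbar) and the conventional constant
    2.66 (i.e., 3 / d2 with d2 = 1.128 for moving ranges of size 2).
    """
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

# Illustrative monthly % of patients receiving annual screening (baseline)
screened = [61, 58, 63, 60, 62, 59, 64, 61, 57, 62]
center, lcl, ucl = xmr_limits(screened)
print(f"center={center:.1f}  LCL={lcl:.1f}  UCL={ucl:.1f}")
```

Points falling outside these limits, or non-random patterns such as sustained shifts, would signal special-cause variation worth investigating before attributing change to an intervention.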
2.3.4. Content theory
The content theory outlines the QI initiative's theory of change and includes the evidence-based interventions that are known to be effective in improving results and the system drivers to be acted upon to achieve the aim (34). The content theory constantly evolves, based on learning about which ideas and drivers work, and which do not.
Two tools can depict and organize the content theory:
Driver Diagram—a visual tool that articulates the high-level theory of change predicted to influence the achievement of the aim (34). Driver diagrams include the aim statement, primary drivers, secondary drivers, and change ideas. Primary drivers are key leverage points for improving the performance of the system and secondary drivers represent the factors, steps, processes or elements in the system that influence primary drivers. For each secondary driver, a set of change ideas is proposed (34, 35).
Change Package—a set of specific and actionable changes, often organized in alignment with the driver diagram, that may be tested, refined, and embedded into everyday practice by the improvement teams (36).
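As an illustration of how a driver diagram organizes the content theory, the nested structure below links an aim to primary drivers, secondary drivers, and change ideas. The aim echoes the diabetes screening example in the text; the drivers and change ideas are hypothetical placeholders, not a published driver diagram.

```python
# Hedged sketch: a driver diagram held as a nested structure so the theory
# of change can be reviewed and revised as learning accumulates.
# All drivers and change ideas below are invented for illustration.

driver_diagram = {
    "aim": "Improve reliability of annual HbA1c screening from 60% to 95% in 6 months",
    "primary_drivers": {
        "Reliable recall systems": {
            "Patient registry kept up to date": [
                "Automated flag for patients overdue for screening",
            ],
            "Standing orders for screening": [
                "Nurse-initiated HbA1c order at check-in",
            ],
        },
        "Patient engagement": {
            "Reminders that reach patients": [
                "SMS reminder two weeks before the due date",
            ],
        },
    },
}

def change_ideas(diagram):
    """Yield every change idea with its primary/secondary driver path."""
    for primary, secondaries in diagram["primary_drivers"].items():
        for secondary, ideas in secondaries.items():
            for idea in ideas:
                yield (primary, secondary, idea)

for path in change_ideas(driver_diagram):
    print(" -> ".join(path))
```

Keeping the diagram in a single structure like this makes it easy to trace each change idea back to the driver it is predicted to influence, which is exactly the review a team performs when a tested idea does not move its driver.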
2.3.5. Execution theory
The execution theory specifies how change will occur—detailing the mechanisms and implementation strategies that will drive reliable implementation of the content theory to achieve the intended aim. Developing a clear execution theory during the design phase of a QI initiative is critical to ensure that the other Core Components, particularly the content theory and measurement strategy, are implemented with fidelity and in a way that achieves the initiative's aim.
2.3.5.1. Organizing the execution theory
A well-defined execution theory clarifies the necessary structures, inputs, activities, outputs, and implementation supports needed to improve outcomes (37). Inputs include the elements required to enable improvement to operationalize the content theory, such as leadership commitment, protected time for staff participation, data systems, and an improvement model (e.g., the Model for Improvement or Lean). Activities refer to the processes that promote learning and collaboration, such as structured learning sessions, coaching, site visits, peer-to-peer exchanges, and dashboards that support real-time data analysis and feedback. Outputs include tangible products such as the number of PDSA cycles completed, percentage of teams submitting data, or percentage of teams receiving coaching (38, 39).
Two helpful tools to display the execution theory are a project roadmap and a logic model.
Project Roadmap—specifies the project's key phases, activities to be delivered, and milestones required to translate plans into reliable and effective implementation based on the selected change model. Sample activities detailed in the roadmap are in-person or virtual learning sessions, coaching calls, site visits, etc. The roadmap clarifies the sequence and dosing of the activities predicted to facilitate engagement, build QI capability, and support peer learning.
Logic Model—a visual representation of how the initiative works to achieve its aim. A logic model displays relationships between the inputs required to implement the project, activities, outputs, and short-, medium- and long-term outcomes expected as a result (14, 40, 41). The logic model is a useful way to graphically assemble the Core Components into a theory.
These tools should define roles and accountability structures, data systems for data collection and visualization, timely feedback mechanisms for communication and learning, and ongoing capacity building (37, 42). Integrating explicit execution planning helps strengthen implementation fidelity, anticipate and address challenges, align resources, promote adaptive learning, and increase the likelihood of achieving measurable and sustained improvement (37).
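The logic model's chain from inputs through activities and outputs to outcomes can be sketched as a small data structure. Every entry below is an illustrative placeholder (drawn loosely from the examples in the text), not the logic model of any specific IHI initiative.

```python
# Hedged sketch: a logic model as a simple structure, keeping the chain
# inputs -> activities -> outputs -> outcomes explicit and reviewable.
# All entries are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list = field(default_factory=list)       # resources required
    activities: list = field(default_factory=list)   # processes delivered
    outputs: list = field(default_factory=list)      # tangible products
    outcomes: dict = field(default_factory=dict)     # short/medium/long term

model = LogicModel(
    inputs=["Leadership commitment", "Protected staff time", "Data systems"],
    activities=["Learning sessions", "Coaching calls", "Site visits"],
    outputs=["PDSA cycles completed", "% of teams submitting data"],
    outcomes={
        "short": ["Improved QI capability"],
        "medium": ["More reliable screening processes"],
        "long": ["Improved population outcomes"],
    },
)

# Render the chain left to right, as a logic model diagram would.
for stage in ("inputs", "activities", "outputs"):
    print(f"{stage}: {', '.join(getattr(model, stage))}")
for horizon, items in model.outcomes.items():
    print(f"{horizon}-term outcomes: {', '.join(items)}")
```

Writing the model down in this explicit form makes gaps visible, for example an output with no activity that produces it, or an outcome with no plausible upstream chain.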
2.3.6. Dissemination and communication
Effective dissemination and communication of results and insights from QI initiatives are essential to maximize their impact, promote shared learning, and accelerate system-wide change. A clear dissemination and communication plan helps ensure that learning does not remain local but instead informs broader improvement efforts (43, 44). Effective dissemination increases transparency, strengthens credibility, and contributes to the science of improvement by enabling replication, adaptation, and evidence-informed decision-making (45). It also helps build will for change and ensures that insights about what works—and what does not—reach diverse audiences, including frontline providers, leaders, policymakers, and communities (46–48).
A dissemination and communication plan should consider what project learning might be of interest to others (internally and externally), who the potential audience(s) are, and what platforms and outputs are useful for reaching them. The plan should specify the resources (people, time, etc.) required to execute these activities, including staff expertise, data visualization, and publication fees (47, 48). Internal dissemination products include peer-learning events, pause and reflect sessions, and internal knowledge bases. External dissemination products include peer-reviewed publications, conference presentations, knowledge management tools, blogs, videos, storyboards, how-to-guides, toolkits, and project summaries.
3. Case study: global comfort promise
Global Comfort Promise, a QI initiative implemented by IHI and St. Jude Children's Research Hospital, developed and refined the Core Components to guide the effective delivery of an evidence-based intervention across multiple sites globally. The content theory and measurement strategy were developed using a series of expert panels with subject matter experts and people with lived experience over four months. A detailed case study is available in Supplementary File S4.
4. Discussion
QI has demonstrated potential to improve safety, outcomes, and efficiency across health systems. However, gaps in the design and implementation of QI initiatives hinder their effectiveness (19, 49) and limit learning, impact, replication, and scalability (50, 51). The Core Components Guide offers a structured approach to address these challenges and includes actionable strategies for QI leaders, project teams, evaluators, researchers, and policymakers to strengthen the design, implementation, and evaluation of QI initiatives. The Core Components Guide is not intended to replace established QI, implementation science, or evaluation frameworks; rather, it integrates and operationalizes these into a coherent, practice-ready structure that connects design, implementation, and evaluation throughout an improvement effort.
While the Core Components can strengthen any QI initiative, they are particularly valuable for efforts that require accountability for outcomes, are designed for scale, or involve complex, multi-site or multi-partner implementation. IHI has applied the Core Components to guide the design of complex QI initiatives since 2014. Between January 2024 and September 2025, 21 of 23 IHI project teams across six continents used the Core Components Guide, providing a standard approach to project design, implementation, learning, and evaluation.
The Core Components are interdependent elements within the design of a QI initiative; each one is a critical element in the blueprint for how the initiative should be implemented. As such, design typically follows a non-linear, iterative process. However, system understanding stands apart as a foundational element that underpins the other Core Components; it plays a critical role in shaping the initiative's aim, content theory, and measurement strategy. For this reason, designers must invest early in deeply understanding the system at hand. The remaining components should then be developed in alignment with that understanding and with one another, ensuring that interventions are context-sensitive and aligned with organizational priorities (see Table 1) (11, 52, 53).
Table 1.
The 6 core components for designing quality improvement initiatives.
| Core component | Questions the core component answers | Activities |
|---|---|---|
| System Understanding | | |
| Improvement Aim | | |
| Measurement, Evaluation, and Learning | | |
| Content Theory | | |
| Execution Theory | | |
| Dissemination and Communication | | |
The Core Components should be co-designed with subject-matter experts, people with lived experience, and those directly involved in the system(s) of focus—they cannot be created in isolation.
The guide also emphasizes adaptive design: the Components should function as a living document, continuously refined as new learning emerges. Embedding iterative testing through PDSA cycles strengthens adoption, implementation fidelity, and ongoing learning and adaptation (54).
The Core Components are not a panacea. Things can (and do) go wrong when implementing QI initiatives. However, by laying a strong foundation for design, the Core Components set the stage for effective implementation. Similarly, planning for evaluation, learning, and dissemination activities enables meaningful learning and sharing during and after a QI initiative. Clarity about the project aim, content theory, and execution theory makes the project evaluable; improvers cannot determine if, how, and why the project is effective if they have not made explicit the aim and underlying theories (55).
5. Limitations and future directions
Effectively operationalizing the Guide requires capability building, leadership engagement, and robust data infrastructure (19, 37, 56). This Guide draws on the improvement science, implementation science, and evaluation literature, as well as iterative learning from IHI projects across six continents. Although not formally validated, its grounding in real-world practice enhances its relevance. Further research using common measures is needed to validate the Guide's usefulness across contexts and to assess whether adherence to the Core Components improves design quality, implementation fidelity, outcomes, and learning for QI projects.
Acknowledgments
We acknowledge Gareth Parry for his contribution to the initial development of the 5 Core Components. Additionally, we would like to thank the team who supported us with the St. Jude Global case study: Michael McNeil, Ximena Garcia-Quintero, Paola M. Friedrich, Miriam Gonzalez-Guzman, and Yawen Zheng. All their insights, feedback, and technical assistance have enhanced the quality of this paper.
Funding Statement
The author(s) declared that financial support was received for this work and/or its publication. The authors received no financial support for the publication of this article. The Global Comfort Promise initiative featured in the case study was funded by St. Jude Children's Research Hospital's Global Palliative Care Program.
Footnotes
Edited by: Laura Lennox, Imperial College London, United Kingdom
Reviewed by: Adam Baus, West Virginia University School of Public Health, United States
Abbreviations: IHI, Institute for Healthcare Improvement; MEL, measurement, evaluation, and learning; MFI, Model for Improvement; PDSA, Plan-Do-Study-Act; QI, quality improvement; SoPK, System of Profound Knowledge.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
Author contributions
PH: Methodology, Writing – original draft, Writing – review & editing, Conceptualization. JA: Writing – review & editing, Supervision, Methodology. CL: Investigation, Writing – review & editing, Formal analysis, Project administration, Methodology, Writing – original draft. KB: Writing – review & editing, Writing – original draft, Methodology. JF: Writing – original draft, Visualization, Investigation, Writing – review & editing, Methodology, Project administration. RS: Methodology, Writing – review & editing, Writing – original draft. SO: Writing – review & editing, Investigation, Project administration, Writing – original draft. PB: Supervision, Writing – review & editing, Conceptualization, Methodology.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frhs.2026.1751580/full#supplementary-material
References
- 1.Reed JE, Card AJ. The problem with plan-do-study-act cycles. BMJ Qual Saf. (2016) 25(3):147–52. 10.1136/bmjqs-2015-005076 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Nundy S, Cooper LA, Mate KS. The quintuple aim for health care improvement: a new imperative to advance health equity. JAMA. (2022) 327(6):521–2. Available online at: https://jamanetwork.com/journals/jama/fullarticle/2788483 (Accessed September 30, 2025). [DOI] [PubMed]
- 3.Mangum CD. Journey to STEEEP healthcare: a focus on systems through a patient’s experience. Curr Probl Pediatr Adolesc Health Care. (2023) 53(8):101461. 10.1016/j.cppeds.2023.101461 [DOI] [PubMed] [Google Scholar]
- 4.Schroeder P, Parisi LL, Foster R. Healthcare quality improvement: then and now. Nurs Manage. (2019) 50(9):20–5. 10.1097/01.NUMA.0000579004.87116.35 [DOI] [PubMed] [Google Scholar]
- 5.de la Perrelle L, Cations M, Barbery G, Radisic G, Kaambwa B, Crotty M, et al. How, why and under what circumstances does a quality improvement collaborative build knowledge and skills in clinicians working with people with dementia? A realist informed process evaluation. BMJ Open Qual. (2021) 10(2):e001147. 10.1136/bmjoq-2020-001147 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Hill JE, Stephani AM, Sapple P, Clegg AJ. The effectiveness of continuous quality improvement for developing professional practice and improving health care outcomes: a systematic review. Implement Sci. (2020) 15(1):23. 10.1186/s13012-020-0975-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Jung OS, Cummings JR. Employee engagement in quality improvement and patient sociodemographic characteristics in federally qualified health centers. Med Care Res Rev. (2023) 80(1):43–52. 10.1177/10775587221118157 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Langley GJ, Moen RD, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. San Francisco, CA: John Wiley & Sons; (2009). p. 514. [Google Scholar]
- 9.Mittman BS. Creating the evidence base for quality improvement collaboratives. Ann Intern Med. (2004) 140(11):897–901. 10.7326/0003-4819-140-11-200406010-00011 [DOI] [PubMed] [Google Scholar]
- 10.Garcia-Elorrio E, Rowe SY, Teijeiro ME, Ciapponi A, Rowe AK. The effectiveness of the quality improvement collaborative strategy in low- and middle-income countries: a systematic review and meta-analysis. PLoS One. (2019) 14(10):e0221919. 10.1371/journal.pone.0221919 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Tyler A, Perry M, Slemmer A, Westphal K, Chavez L. A practical guide to assessing and addressing context in quality improvement. Hosp Pediatr. (2025) 15(4):e173–8. 10.1542/hpeds.2024-007745 [DOI] [PubMed] [Google Scholar]
- 12.Curran GM. Implementation science made too simple: a teaching tool. Implement Sci Commun. (2020) 1(1):27. 10.1186/s43058-020-00001-z [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Berwick DM. The science of improvement. JAMA. (2008) 299(10):1182–4. 10.1001/jama.299.10.1182 [DOI] [PubMed] [Google Scholar]
- 14.Goeschel CA, Weiss WM, Pronovost PJ. Using a logic model to design and evaluate quality and patient safety improvement programs. Int J Qual Health Care. (2012) 24(4):330–7. 10.1093/intqhc/mzs029 [DOI] [PubMed] [Google Scholar]
- 15.Vos L, Dückers ML, Wagner C, Van Merode GG. Applying the quality improvement collaborative method to process redesign: a multiple case study. Implement Sci. (2010) 5(1):19. 10.1186/1748-5908-5-19 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Dixon-Woods M, Leslie M, Tarrant C, Bion J. Explaining matching Michigan: an ethnographic study of a patient safety program. Implement Sci. (2013) 8(1):70. 10.1186/1748-5908-8-70 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf. (2015) 24(3):228–38. 10.1136/bmjqs-2014-003627 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Cohen S, Reid A. 5 Core Components for Learning from QI Projects. (2014). Available online at: https://www.ihi.org/library/blog/5-core-components-learning-qi-projects (Accessed November 7, 2025).
- 19.Parry GJ, Carson-Stevens A, Luff DF, McPherson ME, Goldmann DA. Recommendations for evaluation of health care improvement initiatives. Acad Pediatr. (2013) 13(6):S23–30. 10.1016/j.acap.2013.04.007 [DOI] [PubMed] [Google Scholar]
- 20.Parry G, Coly A, Goldmann D, Rowe AK, Chattu V, Logiudice D, et al. Practical recommendations for the evaluation of improvement initiatives. Int J Qual Health Care. (2018) 30(1):29–36. 10.1093/intqhc/mzy021 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Deming W. The New Economics for Industry, Government, Education. Cambridge, MA: MIT Press; (1994). [Google Scholar]
- 22.Langley G, Moen R, Nolan K, Norman C, Provost L. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. Chicago, IL: Jossey-Bass; (2009). [Google Scholar]
- 23.Batalden M, Batalden P, Margolis P, Seid M, Armstrong G, Opipari-Arrigan L, et al. Coproduction of healthcare service. BMJ Qual Saf. (2016) 25(7):509–17. 10.1136/bmjqs-2015-004315
- 24.Auguste BL, Wong BM, Tad-y DB. Equity in quality improvement. Med Clin N Am. (2025) 109(5):1145–56. 10.1016/j.mcna.2025.03.004
- 25.Rowland P, Lising D, Sinclair L, Baker GR. Team dynamics within quality improvement teams: a scoping review. Int J Qual Health Care. (2018) 30(6):416–22. 10.1093/intqhc/mzy045
- 26.Institute for Healthcare Improvement. Population Health Guide for Undertaking a Three-Part Data Review. (2025). Available online at: https://www.ihi.org/library/publications/population-health-guide-undertaking-three-part-data-review (Accessed January 30, 2026).
- 27.Institute for Healthcare Improvement. How to Improve: Model for Improvement: Setting Aims. Available online at: https://www.ihi.org/library/model-for-improvement/setting-aims (Accessed October 8, 2025).
- 28.Provost LP, Murray SK. Health Care Data Guide: Learning from Data for Improvement. San Francisco, CA: John Wiley & Sons; (2022). p. 480.
- 29.Benneyan JC, Lloyd RC, Plsek PE. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care. (2003) 12(6):458–64. 10.1136/qhc.12.6.458
- 30.Perla RJ, Provost LP, Parry GJ. Seven propositions of the science of improvement: exploring foundations. Qual Manag Health Care. (2013) 22(3):170–86. 10.1097/QMH.0b013e31829a6a15
- 31.Elwy AR, Wasan AD, Gillman AG, Johnston KL, Dodds N, McFarland C, et al. Using formative evaluation methods to improve clinical implementation efforts: description and an example. Psychiatry Res. (2020) 283:112532. 10.1016/j.psychres.2019.112532
- 32.Stetler CB, Legro MW, Wallace CM, Bowman C, Guihan M, Hagedorn H, et al. The role of formative evaluation in implementation research and the QUERI experience. J Gen Intern Med. (2006) 21(2):S1–8. 10.1007/s11606-006-0267-9
- 33.Scriven M. The Methodology of Evaluation. Purdue University; (1966). p. 140.
- 34.What’s Your Theory? ASQ. Available online at: https://asq.org/quality-progress/articles/whats-your-theory?id=fc9befe6bf6f47f89c6f34c7e855d045&srsltid=AfmBOop1DcwfJs9J_B-eIz7jfPaTNFrAKEXSfBx05SgGC2J1yujenFWD (Accessed August 26, 2025).
- 35.Bennett B, Provost L. Driver diagram serves as tool for building and testing theories for improvement.
- 36.Bennett B. Change packages are a powerful starting point for sharing ideas that work. Change Management.
- 37.Dixon-Woods M, McNicol S, Martin G. Ten challenges in improving quality in healthcare: lessons from the Health Foundation’s programme evaluations and relevant literature. BMJ Qual Saf. (2012) 21(10):876–84. 10.1136/bmjqs-2011-000760
- 38.Nadeem E, Olin SS, Hill LC, Hoagwood KE, Horwitz SM. Understanding the components of quality improvement collaboratives: a systematic literature review. Milbank Q. (2013) 91(2):354–94. 10.1111/milq.12016
- 39.Schouten LMT, Hulscher MEJL, Everdingen JJEV, Huijsman R, Grol RPTM. Evidence for the impact of quality improvement collaboratives: systematic review. Br Med J. (2008) 336(7659):1491–4. 10.1136/bmj.39570.749884.BE
- 40.Knowlton LW, Phillips CC. The Logic Model Guidebook: Better Strategies for Great Results. 2nd ed. Los Angeles, CA: SAGE; (2013). p. 170.
- 41.Ball L, Ball D, Leveritt M, Ray S, Collins C, Patterson E, et al. Using logic models to enhance the methodological quality of primary health-care interventions: guidance from an intervention to promote nutrition care by general practitioners and practice nurses. Aust J Prim Health. (2017) 23(1):53–60. 10.1071/PY16038
- 42.Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. (2015) 10(1):53. 10.1186/s13012-015-0242-0
- 43.Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. (2004) 82(4):581–629. 10.1111/j.0887-378X.2004.00325.x
- 44.Dixon-Woods M, Martin GP. Does quality improvement improve quality? Future Hosp J. (2016) 3(3):191–4. 10.7861/futurehosp.3-3-191
- 45.Leeman J, Birken SA, Powell BJ, Rohweder C, Shea CM. Beyond “implementation strategies”: classifying the full range of strategies used in implementation science and practice. Implement Sci. (2017) 12(1):125. 10.1186/s13012-017-0657-x
- 46.Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. (2016) 25(12):986–92. 10.1136/bmjqs-2015-004411
- 47.Brownson RC, Eyler AA, Harris JK, Moore JB, Tabak RG. Getting the word out: new approaches for disseminating public health science. J Public Health Manag Pract. (2018) 24(2):102–11. 10.1097/PHH.0000000000000673
- 48.Bate SP, Robert G. Knowledge management and communities of practice in the private sector: lessons for modernizing the national health service in England and Wales. Public Adm. (2002) 80(4):643–63. 10.1111/1467-9299.00322
- 49.Barry D, Kimble LE, Nambiar B, Parry G, Jha A, Chattu VK, et al. A framework for learning about improvement: embedded implementation and evaluation design to optimize learning. Int J Qual Health Care. (2018) 30(1):10–4. 10.1093/intqhc/mzy008
- 50.Taylor MJ, McNicholas C, Nicolay C, Darzi A, Bell D, Reed JE. Systematic review of the application of the plan-do-study-act method to improve quality in healthcare. BMJ Qual Saf. (2014) 23(4):290–8. 10.1136/bmjqs-2013-001862
- 51.Reed JE, Howe C, Doyle C, Bell D. Simple rules for evidence translation in complex systems: a qualitative study. BMC Med. (2018) 16(1):92. 10.1186/s12916-018-1076-9
- 52.Damschroder LJ, Reardon CM, Opra Widerquist MA, Lowery J. Conceptualizing outcomes for use with the consolidated framework for implementation research (CFIR): the CFIR outcomes addendum. Implement Sci. (2022) 17(1):7. 10.1186/s13012-021-01181-5
- 53.Kaplan HC, Provost LP, Froehle CM, Margolis PA. The model for understanding success in quality (MUSIQ): building a theory of context in healthcare quality improvement. BMJ Qual Saf. (2012) 21(1):13–20. 10.1136/bmjqs-2011-000010
- 54.Coury J, Schneider JL, Rivelli JS, Petrik AF, Seibel E, D’Agostini B, et al. Applying the plan-do-study-act (PDSA) approach to a large pragmatic study involving safety net clinics. BMC Health Serv Res. (2017) 17(1):411. 10.1186/s12913-017-2364-3
- 55.Trevisan MS, Walser TM. Evaluability Assessment: Improving Evaluation Quality and Use. Thousand Oaks, CA: SAGE Publications; (2014). p. 201.
- 56.Ng YJ, Lew KSM, Yap AU, Quek LS, Hwang CH. Building capacity and capability for quality improvement: insights from a nascent regional health system. BMJ Open Qual. (2024) 13(3):e002903. 10.1136/bmjoq-2024-002903
Data Availability Statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

