Annals of the American Thoracic Society. 2015 Dec;12(12):S213–S221. doi: 10.1513/AnnalsATS.201506-367OT

American Thoracic Society and National Heart, Lung, and Blood Institute Implementation Research Workshop Report

Bruce G Bender 1,*, Jerry A Krishnan 2,*, David A Chambers 3, Michelle M Cloutier 4, Kristin A Riekert 5, Cynthia S Rand 5, Michael Schatz 6, Carey C Thomson 7, Sandra R Wilson 8, Andrea Apter 9, Shannon S Carson 10, Maureen George 9, Joe K Gerald 11, Lynn Gerald 11, Christopher H Goss 12, Sande O Okelo 13, Richard A Mularski 14, Huong Q Nguyen 6, Minal R Patel 15, Stanley J Szefler 16, Curtis H Weiss 17, Kevin C Wilson 18, Michelle Freemer 19
PMCID: PMC5467083  PMID: 26653201

Abstract

To advance implementation research (IR) in respiratory, sleep, and critical care medicine, the American Thoracic Society and the Division of Lung Diseases from the NHLBI cosponsored an Implementation Research Workshop on May 17, 2014. The goals of IR are to understand the barriers and facilitators of integrating new evidence into healthcare practices and to develop and test strategies that systematically target these factors to accelerate the adoption of evidence-based care. Throughout the workshop, presenters provided examples of IR that focused on the rate of adoption of evidence-based practices, the feasibility and acceptability of interventions to patients and other stakeholders who make healthcare decisions, the fidelity with which practitioners use specific interventions, the effects of specific barriers on the sustainability of an intervention, and the implications of their research to inform policies to improve patients’ access to high-quality care. During the discussions that ensued, investigators’ experience led to recommendations underscoring the importance of identifying and involving key stakeholders throughout the research process, ensuring that those who serve as reviewers understand the tenets of IR, managing staff motivation and turnover, and tackling the challenges of scaling up interventions across multiple settings.

Keywords: translational research, knowledge transfer, dissemination


To accelerate the research needed to address the challenges of implementing evidence-based care in respiratory, sleep, and critical care medicine, the American Thoracic Society (ATS) and the Division of Lung Diseases from the NHLBI cosponsored an Implementation Research Workshop on May 17, 2014. Workshop participants are listed in the Appendix. Both the ATS and the Division of Lung Diseases recognized the need to bring together an interdisciplinary group of investigators who share a common goal of developing methods to disseminate the knowledge garnered from research and, importantly, provide patients and those who care for them a means of benefitting from research that might impact their lives. Meeting organizers sought to leverage synergy among ATS, NHLBI, and scientists with diverse approaches to begin to generate a common understanding of research priorities for future implementation research (IR) that will further advance the significant progress already made by investigators working in a variety of diseases. The purpose of this Workshop Report is to define the key concepts in IR as discussed during the workshop, provide examples of IR from experienced investigators, and describe the challenges, recommendations, and priorities for IR in respiratory, critical care, and sleep medicine that emerged from the meeting.

Key Concepts in Implementation Research

Implementation Research

The National Institutes of Health funding opportunity announcement for dissemination and implementation research (IR) defines IR as the “scientific study of methods to promote the integration of research findings and evidence-based interventions into healthcare practice and policy” (http://grants.nih.gov/grants/guide/pa-files/PAR-13-055.html). Rather than addressing whether or to what extent healthcare interventions work, IR investigators focus on how, where, and why interventions impact healthcare, considering a broad range of contexts including participants, processes, and places. As such, IR relies on the existing “evidence base” for an intervention to enable a systematic exploration of the mechanism(s) that mediate or modify the intervention’s effects. In contrast to IR, the NIH announcement distinguishes implementation as “the use of strategies to adopt and integrate evidence-based health interventions and change practice patterns within specific settings.” Hence, implementation is not merely performing an intervention (i.e., the “action”), as it entails the selection of the intervention with the intention to change practice/processes.

As an example from critical care medicine, an intensivist managing patients on mechanical ventilation might implement a checklist as part of the medical record that includes evidence-based practices to assess patients on daily rounds (e.g., low tidal volume in those with acute respiratory distress syndrome, sedation/daily awakening). An implementation researcher might consider and test alternative strategies to accelerate the rate of adoption of the checklist as part of rounds. In such a study, the researcher might evaluate how well and how quickly each strategy works. For example, how does adding a physician checklist as a tab in the electronic record compare with dividing the checklist into alerts in the electronic health record that appear only when evidence-based practices are not being followed? Within IR, additional questions would include how the rates of adoption are affected by the time required by providers to use the strategies, whether adoption depends on provider attitudes regarding the strategies (or which provider characteristics affect their use or preference for a given strategy), whether one strategy is more feasible in some units (contexts) than others, or which checklist items get skipped by providers.

The field of IR (also referred to as implementation science, knowledge transfer, or knowledge translation) has emerged to rigorously assess the methods used to bridge the substantial gap between advances in knowledge and “real-world” practice. To do so, implementation scientists may participate in a variety of activities to measure operational differences in practices in diverse clinical settings, to engage a wide range of stakeholders (e.g., patients and their caregivers, clinicians, healthcare organizations, payers, policy makers) to identify their needs and barriers to change, to assess novel technology platforms and their potential use in specific interventions or by specific populations, and to evaluate the impact of linking different types of incentives to quality metrics that measure provider performance.

Three types of outcome measures have been conceptualized in IR: implementation, service, and client (clinical) outcomes (1). Implementation outcomes include reach (penetration), effectiveness, adoption (acceptability), implementation (fidelity), and sustainability (maintenance). Service outcomes are measured at the system level and include efficiency, safety, effectiveness, equity, patient-centeredness, and timeliness. Client (clinical) outcomes, measured at the patient level, include satisfaction, quality of life, physiology, function, and symptoms, as well as mortality and healthcare utilization.

Problem Statement

Fortunately, biomedical discoveries have proliferated, and many new treatments have emerged for lung disease, sleep medicine, and critical care. Unfortunately, the potential benefits of these breakthroughs have not been realized universally by individual patients. Although the challenges of translating research into practice are not unique to respiratory, sleep, and critical care medicine providers, the data are staggering: only about 14% of new scientific findings are used in day-to-day clinical practice, and even then only after an average of 17 years (2–4). Systematically assessing the translation process is the purview of IR.

The importance of IR can be considered from a number of vantage points of relevance to all. For example, although patient nonadherence is well recognized, research more than a decade ago showed that physicians do not adhere to evidence-based practices for many reasons (5). More recently, a systematic review showed moderate evidence (at best) for existing interventions to improve provider adherence with asthma guidelines (6), despite evidence that guideline-based care is cost-effective (7). Therefore, patients, caregivers, medical providers, and payers forego benefits when implementation of evidence-based practices fails. IR provides the methodological approach to determine the mechanisms and extent to which the translation process succeeds, fails, or requires modifications. A systematic review of IR is outside the scope of this ATS/NHLBI Workshop Report; for more information, we refer readers to other recent reports (1, 8–11).

Many factors create barriers to translating research findings into patient care. Although some of the factors remain poorly understood, one clear hindrance is the assumption that rigorously tested interventions from randomized controlled trials will simply diffuse to patients and providers (12). This assumption is equally problematic for evidence-based interventions that diagnose or treat disease (e.g., medications) and those that modify behavior, such as improving patient or provider adherence with best practices. Various stakeholders (e.g., patients, families, communities, clinicians, delivery systems, and payers) may experience different barriers to implementation. Patients and families may consider convenience (e.g., travel time or need for transportation) and out-of-pocket costs as critically important when making healthcare decisions. Lack of awareness of new research findings, limited information about long-term outcomes, differences between clinical trial participants and patients in clinical practice, and resource constraints (e.g., reimbursement by payers) are among the factors that limit clinicians’ adoption of evidence from efficacy trials (13). In addition, a sense of apprehension about changes to standard practices may hinder other clinicians. Within the context of IR, the investigator’s role is to understand the determinants (barriers and facilitators) of implementation, service, and client (clinical) outcomes and to develop and test strategies that systematically target these factors to accelerate the adoption of evidence-based care.

Approaches to IR: Conceptual Frameworks

Multiple conceptual frameworks have been developed “to enable systematic identification and understanding of drivers that predict success in different settings, guide adaption of targeted practice changes and implementation strategies, and more quickly and confidently build the scientific knowledgebase” (www.isrn.net, February 19, 2014). Dozens of conceptual frameworks have been developed to guide IR (14). Examples discussed during the workshop are listed in Table 1. Although no single framework is best suited for any particular IR question or disease, some experts have proposed approaches that researchers can use when selecting among the various conceptual frameworks (11, 15).

Table 1.

Examples of theoretical frameworks for implementation research included in workshop presentations

Consolidated Framework for Implementation Research (CFIR). A resource that allows researchers to strategically design studies with consideration of five domains: the intervention characteristics, the outer setting, the inner setting, the characteristics of the individuals involved, and the process of implementation. (Originally described in Reference 24.)
Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM). A model for assessing the external validity of implementation efforts, including their adoption, implementation, and sustainability (25).
The President’s Emergency Plan for AIDS Relief (PEPFAR) impact evaluation. PEPFAR’s implementation science framework for AIDS programs, including monitoring and evaluation, operations research, and impact evaluation (26).
Promoting Action on Research Implementation in Health Services (PARIHS). Examines implementation around three key elements: evidence, context, and facilitation (27).

IR Designs

Given the intent to definitively answer the question, “Can this intervention work under ideal circumstances?” efficacy research is usually conducted using standardized interventions provided to a well-defined population in a specific setting to maximize the precision of the estimate of efficacy and reduce sources of bias. Therefore, randomized clinical trials have been the “gold standard” for providing the highest level of efficacy evidence in clinical research. More recently, there has been recognition that the determination of the “best” evidence must account for the purpose of the investigation (16).

If the purpose of research is to inform how evidence is best incorporated into decision making in day-to-day practice, the size of treatment effects from efficacy studies may over- or underestimate the likely benefits or harms of interventions compared with what is observed in clinical practice. Such discrepancies may be due to systematic differences in patient populations, clinician expertise, the frequency of follow up, and the resources available to promote treatment fidelity and to minimize the risk of harm in efficacy trials compared with clinical settings (17).

In contrast to efficacy research, effectiveness research is intended to answer the question, “Does this intervention work in clinical practice?” Therefore, the study designs used in effectiveness research should deliberately allow for variations at the level of the patient (e.g., range of ages or health-related behaviors such as variable adherence to prescribed therapy), clinician (e.g., different levels of expertise in diagnosis and management), setting (e.g., rural or metropolitan and privately owned or government-funded clinics), and system (e.g., pay for performance or fee for service, restricted access to specialists) that could modify the observed harms and benefits of interventions but reflect the variety of situations encountered in clinical settings (18).

IR builds on effectiveness research by focusing more specifically on defining the factors that will facilitate the integration of effective practices into “real-world” patient care settings, as well as on studies comparing different approaches to accelerating the uptake and maintenance of effective practices in such settings. IR studies that are focused on uncovering mechanisms governing implementation may use both qualitative (e.g., in situ observations of patients, caregivers, and clinicians at the point of care) and quantitative (e.g., surveys) methods to define barriers and facilitators of promoting practice change among stakeholders (see section about stakeholders below). If the IR question is focused on testing different approaches to overcome barriers or to enhance the effects of facilitators for practice change, researchers may use pragmatic clinical trials (using randomization at the individual or practice level, with interventions delivered by clinicians rather than researchers), stepped-wedge trials, interrupted time series, and pre–post intervention designs (19, 20). Given the focus of IR, the results of such studies are likely to be heavily influenced by the clinical and organizational context in which IR is performed. Understanding this context is a key aspect of IR, because it will help to inform subsequent efforts to enact practice change in other settings.
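
As an illustration of one of these designs, an interrupted time series is commonly analyzed with segmented regression, which estimates the preexisting trend, the change in level when the implementation strategy begins, and the change in slope thereafter. The sketch below is a minimal, hypothetical example; the simulated monthly adherence data and variable names are assumptions for illustration only and are not drawn from any workshop study.

```python
# Minimal segmented regression sketch for an interrupted time series
# (hypothetical monthly adherence rates before/after an implementation strategy).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(24)                        # 12 months pre, 12 months post
post = (months >= 12).astype(int)             # 1 after the strategy is introduced
months_since = np.where(post == 1, months - 12, 0)

# Simulated outcome: preexisting trend + level change + slope change + noise
adherence = (0.50 + 0.002 * months + 0.08 * post
             + 0.005 * months_since + rng.normal(0, 0.02, months.size))

X = sm.add_constant(np.column_stack([months, post, months_since]))
fit = sm.OLS(adherence, X).fit()
print(fit.params)  # intercept, pre-trend, level change at implementation, slope change
# In practice, autocorrelation of the errors should also be assessed
# (e.g., with Newey-West standard errors) before drawing conclusions.
```

The pre-trend term is what allows a preexisting trend to be distinguished from an intervention effect, which is why a sufficiently long preintervention period matters for designs without concurrent controls (19).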

Engaging Stakeholders

Both effectiveness research and IR emphasize early and continuous stakeholder engagement. Stakeholders are individuals or organizations that will be affected by or have an interest in the changes involved in a specific aspect of care; stakeholders may include patients, clinicians, support staff, administrators, community leaders, and policymakers. Stakeholders should be actively engaged in the research process from the point of inception and subsequent planning of the implementation approach and research design through execution, interpretation, and dissemination of the results. Stakeholders’ perspectives should be incorporated in the research process to ensure that the research fills the evidence gaps they have identified as being important (21).

For many researchers, genuinely integrating stakeholders as partners in the research process represents a paradigm shift from relying on experts to decide what research should be done, by whom, and how and where it will be conducted. Although such engagement can be a challenge, stakeholder input can be critical for identifying and removing research barriers and improving the adoption and sustainability of interventions after the research is completed.

Examples of Implementation Research

Examples of IR are abundant in lung disease, sleep-disordered breathing, and intensive care. Examples of IR and the issues identified by investigators at the workshop are shown in Table 2. The investigators used a variety of IR frameworks, depending on the intervention, population, and clinical setting. Although many of the studies included client (clinical) outcomes, each had a significant focus on implementation outcomes, including adoption, sustainability, and cost.

Table 2.

Examples of implementation research from workshop presentations

M. Schatz. Setting: Kaiser Permanente, outpatient asthma care. Framework: PEPFAR’s impact evaluation. Study design: randomized controlled trial. Intervention: referral to an asthma specialist for high rescue medication users. Clinical outcomes (examples): rescue medication use, specialist consultation, exacerbations. Implementation outcomes (examples): adoption, feasibility, penetration. Implementation issues: engaging primary care and specialist providers.

C. Thomson. Setting: teaching hospital ICUs. Framework: PARIHS. Study design: pre–post intervention. Intervention: modifying practice standards in critical care units (e.g., targeted temperature management post-MI). Clinical outcomes (examples): survival, neurologic status. Implementation outcomes (examples): acceptability, adoption, appropriateness, feasibility, sustainability. Implementation issues: lack of awareness of evidence and urgency to change; engaging multiple specialties and disciplines; integrating changes into workflow.

S. Wilson, H. Tapp. Setting: multiple primary care practices. Framework: RE-AIM. Study design: pre–post intervention. Intervention: shared asthma treatment decision-making intervention tailored to practice needs. Clinical outcomes (examples): hospitalizations and emergency department visits for asthma. Implementation outcomes (examples): adoption by practices, providers, and patients; sustainability of the intervention. Implementation issues: competing demands from the health care system; obtaining frank stakeholder feedback on process; practice staff turnover.

K. Riekert. Setting: cystic fibrosis outpatient clinics. Framework: CFIR. Study design: cluster (clinic-level) randomized controlled trial. Intervention: comprehensive adherence program. Clinical outcomes (examples): medication adherence, health-related quality of life, subject’s knowledge and skills. Implementation outcomes (examples): adoption, feasibility, fidelity, sustainability. Implementation issues: providers’ resistance to change; clinical team turnover; access to pharmacy refill reporting (for adherence measurement).

B. Bender. Setting: Kaiser Permanente outpatient clinics. Framework: RE-AIM. Study design: pragmatic controlled trial. Intervention: patient reminders via interactive voice recognition and the EHR. Clinical outcomes (examples): inhaled corticosteroid use. Implementation outcomes (examples): acceptability, fidelity. Implementation issues: research team turnover; integrating the intervention into the existing EHR.

C. Rand. Setting: Head Start programs. Framework: RE-AIM. Study design: randomized controlled trial. Intervention: motivational interviewing to reduce secondhand smoke exposure in children. Clinical outcomes (examples): secondhand smoke exposure, healthcare utilization, caregiver beliefs, Head Start staff beliefs. Implementation outcomes (examples): acceptability, adoption, sustainability. Implementation issues: staff safety when conducting home visits; competing demands of community partners and families.

M. Cloutier. Setting: primary care practices. Framework: RE-AIM. Study design: pre–post intervention with a contemporaneous control group. Intervention: primary care provider training on asthma guidelines. Clinical outcomes (examples): adherence to inhaled corticosteroid use and treatment plan distribution; hospital, emergency department, and urgent care outpatient utilization. Implementation outcomes (examples): adoption, sustainability. Implementation issues: incompatible EHR systems; specialists’ concerns about “turf.”

J. Krishnan, J. Gerald. Setting: minority-serving hospital. Framework: not applicable.* Study design: pragmatic controlled trial. Intervention: community health worker and phone-based peer coaches for hospitalized patients. Clinical outcomes (examples): patient experience (e.g., satisfaction, anxiety, social support, physical and mental health), hospital readmission. Implementation outcomes (examples): acceptability, appropriateness, fidelity, cost. Implementation issues: engaging institutional leadership; identifying opportunities to collaborate with other quality improvement programs intended to reduce hospital readmissions.

Definition of abbreviations: CFIR = Consolidated Framework for Implementation Research; EHR = electronic health record; ICU = intensive care unit; MI = myocardial infarction; PARIHS = Promoting Action on Research Implementation in Health Services; PEPFAR = President’s Emergency Plan for AIDS Relief; RE-AIM = Reach, Effectiveness, Adoption, Implementation, Maintenance.

*Study design included effectiveness and IR elements. The latter component focused on budget impact analysis.

Challenges in Implementation Research

Perceptions and Recognition of Implementation Research

Limited awareness of IR as a distinct scientific discipline, and the lack of an organizational “home” for such activities within institutions, have broad implications, including reduced opportunities for training, mentoring, and infrastructure to promote and grow collaborations between those who develop and those who use new health-related information. Many investigators at the workshop have encountered clinicians and administrators who are unaware of IR, as well as peers who do not recognize their own work as falling within the spectrum of IR.

Some investigators attending the workshop also acknowledged that they had not adopted a theoretical IR framework when planning or conducting their research. In addition, particularly for those investigators with less training or experience in IR, challenges included a reluctance to use designs other than traditional clinical trials randomized at the level of individual patients. Such hesitation is compounded by a lack of familiarity with how to effectively engage stakeholders (see below) and by concerns about how IR will be received by institutional review boards (IRBs) and research sponsors. For example, cluster randomization designs (e.g., randomization at the practice or organization level) may affect the informed consent process (e.g., the need to include both patients and providers) (22). Cluster randomized designs should also account for intracluster correlation of outcomes when developing sample size calculations and data analysis plans (23). Pre–post intervention studies without concurrent controls may be useful if the time series analysis includes a sufficiently long preintervention period to evaluate the possibility of preexisting trends (19).
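
To make the intracluster correlation point concrete, the conventional design effect for a cluster randomized trial is 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient; the sample size required under individual randomization is inflated by this factor. The sketch below is purely illustrative, and the cluster size, ICC, and baseline sample size are assumed values, not recommendations.

```python
import math

def cluster_adjusted_n(n_individual: int, cluster_size: int, icc: float) -> int:
    """Inflate an individually randomized sample size by the design effect
    1 + (m - 1) * ICC to account for clustering of outcomes within sites."""
    design_effect = 1 + (cluster_size - 1) * icc
    return math.ceil(n_individual * design_effect)

# Hypothetical example: 300 patients needed under individual randomization,
# clinics enrolling about 20 patients each, assumed ICC of 0.05.
print(cluster_adjusted_n(300, 20, 0.05))  # 585 patients (design effect = 1.95)
```

Even a modest ICC can substantially increase the required sample size when clusters are large, which is why sample size calculations for cluster randomized designs should state the assumed ICC (23).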

Engagement of Stakeholders

Researchers described numerous challenges with respect to involving stakeholders, including how to identify and enlist stakeholders, negotiate among stakeholders with varied interests, and integrate stakeholder feedback throughout the research process. For example, investigators often immediately recognize patients and healthcare providers as stakeholders but may not as readily understand the importance of engaging support staff within healthcare systems (e.g., phlebotomist, spirometry technician) and community leaders in the planning of research, its conduct and monitoring, and the dissemination of research results. Other barriers encountered by investigators with and without an IR background included difficulty engaging healthcare providers or community organizations as partners.

Scale-up of Interventions

Researchers who are comfortable conducting single-site, controlled clinical trials often struggle with the challenges of “scaling up” interventions across multiple, diverse clinical settings. For example, a program to improve adherence with cystic fibrosis (CF) treatment conducted at 18 separate CF clinics required investigators to balance the need for maintaining treatment fidelity across settings with site-specific barriers to adoption (Table 2).

Researchers have also found it challenging to implement time-consuming interventions within the setting of busy healthcare systems designed for patient care, not research. For example, primary care providers in a healthcare system were reluctant to participate in a study to decrease overuse of β2-agonists because they perceived the research as adding excessive burden to their day-to-day clinical workflows (Table 2). Moreover, sustaining the use of an intervention after implementation in a clinical setting must compete with ongoing clinical demands and may require changes in resource allocation. For example, teachers can be trained and encouraged to help parents of children with asthma to stop smoking, but interest and motivation may regress unless processes are in place to ensure sustainability (Table 2).

Design of an Implementation Research Team

Multidisciplinary research teams (e.g., behavioral scientists, clinical researchers, clinical content experts, statisticians) are typically necessary for the conduct of IR. The need for multiple areas of research expertise also requires attention to communication and to the management of roles, responsibilities, and timelines. By design, IR also involves interactions with clinical (nonresearch) personnel and multiple other stakeholders; changes in leadership or staff turnover can lead to delays in completing IR.

Workshop Recommendations

Investigators participating in the workshop identified innovative solutions to some of the challenges inherent to IR. Table 3 provides a comprehensive list of suggestions to advance the field of IR in respiratory, sleep, and critical care medicine. Some of these recommendations are further explained with examples in the following text:

Table 3.

Key recommendations emerging from the workshop

Integrate IR with respiratory, sleep, and critical care medicine research
 Identify priority research areas
 Work to adopt the nomenclature that is recognized by the field of IR
 Determine how existing research infrastructure or changes to such infrastructure will facilitate IR
Design and conduct IR
 Identify relevant study designs beyond randomized clinical trials
 Define appropriate control groups for IR and establish whether “usual care” is acceptable
 Establish how to identify and engage stakeholders, including those who are not part of the health care “system”
 Define elements within IR that can be adapted to leverage technology to increase dissemination
 Establish how to measure adaptability
 Determine how to prepare funding organizations or monitoring boards to provide an informative review
 Consider methods to enhance staff participation and motivation to minimize turnover
Measure outcomes in IR
 Identify appropriate outcomes for IR
 Define the metrics of success for IR
 Establish the appropriate time interval and methodology to measure sustainability
Train investigators
 Build a collaboration to address training guidelines (establish a curriculum) for students or junior investigators who wish to become engaged in IR
 Create programs and opportunities for established lung researchers to learn about IR
Scale up interventions across multiple settings
 Identify study champions at each setting
 Emphasize simplicity when proposing integration across multiple care settings
 Distinguish between evidence-based care and IR
Implement and leverage supports for respiratory, critical care, and sleep IR
 Develop an ATS “home” for investigators and members to coordinate and maintain communication between various groups conducting IR
 Learn about the Center for Translation Research and Implementation Science (CTRIS) at NIH/NHLBI

Definition of abbreviations: ATS = American Thoracic Society; IR = implementation research; NHLBI = National Heart, Lung, and Blood Institute; NIH = National Institutes of Health.

Identifying and Engaging Stakeholders

  • Identify stakeholders at the time of initiation of a research project. Potential stakeholders include patients (and/or their caregivers); patient advocacy organizations (e.g., COPD Foundation, CF Foundation); physicians, nurses, and other clinicians; health system administrators and support staff; payers; pharmacies; purchasers (e.g., employers); community organizations; pharmaceutical organizations; and policy makers. Early engagement should help to ensure that the research is relevant to stakeholders who are critical to sustaining the impact of an intervention.

  • Engage identified stakeholders in initial meetings in small groups to assess interest, commitment, and ideas, including ways to identify additional stakeholders. As appropriate, invite a subgroup of stakeholders to join an ongoing stakeholder advisory committee or to serve as study investigators, depending on availability and interest. The extent of stakeholder engagement needed for a particular project is likely to depend on various aspects of the study design, including its target population, intervention and comparators, outcomes, and the setting in which the study would take place. Meeting logistics must facilitate ongoing stakeholder participation in the planning, execution, and evaluation of the study without overtaxing participants. Technology (e.g., web-based meetings) can be used to reduce participants’ burden.

Working with Institutional Review Boards

  • Establish early meetings with IRB leadership, before their review of a protocol, to introduce the project and discuss how the specific project meets the definition of IR and how it differs from other types of human subjects research.

  • Offer resources (e.g., publications or experienced individuals) to increase researchers’ and IRB staffs’ knowledge about challenges and possible solutions to these issues to facilitate the conduct of IR. For example, provide feasible approaches to human subject protection in trials where it is not possible to obtain consent from every individual.

Managing Staff Motivation and Turnover

  • Meet with all staff involved with the project to explain the rationale and goals of the proposed research. Maintain frequent contact to provide updates about the status of the project on a regular basis. Consider identifying one or more study “champions” to maintain staff motivation and help with transitions due to staff turnover.

  • Commit adequate resources to develop procedures to educate new, incoming staff. Consider “booster” meetings to help bring new staff up to speed and build a cohesive team with other staff members. Seek feedback from staff about the study protocol and adapt the protocol, when appropriate, based on this feedback.

Scaling up Interventions across Multiple Settings

  • Meet “early and often” when conducting IR projects in multiple and potentially different settings.

  • Garner support of relevant leaders and identify study “champions” within each healthcare system. Consider forming “partnerships” with stakeholders at each healthcare system to develop a sense of community of purpose and collaborate in identifying and overcoming barriers to conducting the study at individual sites or across multiple sites with similar barriers.

  • Ensure sufficient resources at individual sites to maintain the integrity of the IR in the context of clinical care. For example, personnel who are involved in assessing the implementation of an intervention might be biased (“contaminated”) if they were also responsible for data collection on clinical outcomes.

  • Simplify the intervention to allow integration into routine practice and enhance implementation. For example, in one project primary care physicians were able to implement complex asthma guidelines (from a more than 400-page evidence-based document) using care templates that could be integrated into more than 300 different electronic health record systems. Field testing to ensure the templates are user friendly for most providers is also helpful.

  • Update interventions as needed to keep up with practice and new evidence.

Supporting Implementation Research in Respiratory, Critical Care, and Sleep Medicine

  • Identify strategies to build capacity and coordinate efforts among interested researchers. Expertise in IR exists in different research communities and types of organizations, including the assemblies and committees within the ATS. Advances in the methodologies used in IR are more likely to occur if organizations and those who fund research understand and emphasize the importance of IR. Options available to the ATS include the development of a “home” to coordinate communication among the various IR groups, conduct of postgraduate training and scientific symposia specifically for IR at national and international meetings, and the development of an official ATS research statement on IR.

  • Research funding by NHLBI has been critical to the advances made to date in IR in respiratory, critical care, and sleep. Creation of the Center for Translation Research and Implementation Science (CTRIS) within the Office of the Director at NHLBI during 2014 suggests potential opportunities to accelerate IR activities in collaboration with stakeholders. One example is a recent request for application, “Creating asthma empowerment collaborations to reduce childhood asthma disparities” (http://grants.nih.gov/grants/guide/rfa-files/RFA-HL-15-028.html). Workshop participants expressed interest in working within NHLBI and ATS and with other stakeholders to define and sustain such partnerships.

Conclusions

The ATS-NHLBI IR Workshop created a forum to bring together researchers from a variety of research disciplines to discuss IR in respiratory, sleep, and critical care medicine. In so doing, participants reviewed the components of and approaches to IR, including a variety of examples from their own experience. The challenges and barriers to IR were discussed. It is anticipated that some of the workshop recommendations will be taken forward via collaborative efforts to sustain the enthusiasm for turning research into practices that benefit patients with lung disease and sleep-disordered breathing and those receiving critical care.

Appendix: Workshop Participants (affiliations listed as of May 2014)

Andrea Apter, M.D., M.Sc., M.A., University of Pennsylvania

Bruce G. Bender, Ph.D., National Jewish Health*

Shannon Carson, M.D., University of North Carolina

David Chambers, Ph.D., National Institute of Mental Health

Michelle M. Cloutier, M.D., University of Connecticut Health Center

Michelle Freemer, M.D., NHLBI

Maureen George, Ph.D., R.N., A.E.-C., F.A.A.N., University of Pennsylvania

Joe K. Gerald, M.D., Ph.D., University of Arizona

Lynn B. Gerald, M.S.P.H., Ph.D., University of Arizona

Chris Goss, M.D., M.Sc., University of Washington

James Kiley, M.D., Division of Lung Diseases/NHLBI

Jerry A. Krishnan, M.D., Ph.D., University of Illinois Hospital and Health Sciences System*

Sande Okelo, M.D., Ph.D., University of California Los Angeles

Richard Mularski, M.D., M.C.R., M.S.H.S., Kaiser Permanente

Huong Nguyen, Ph.D., R.N., Kaiser Permanente

Minal Patel, Ph.D., University of Michigan

Cynthia S. Rand, Ph.D., Johns Hopkins School of Medicine

Kristin A. Riekert, Ph.D., Johns Hopkins School of Medicine

Michael Schatz, M.D., M.S., Kaiser Permanente Southern California

Stanley Szefler, M.D., Children’s Hospital of Colorado

Carey C. Thomson, M.D., M.P.H., Mount Auburn Hospital

Curtis Weiss, M.D., M.S., Northwestern University

Kevin C. Wilson, M.D., Boston University School of Medicine

Sandra R. Wilson, Ph.D., Palo Alto Medical Foundation Research Institute

*Co-Chairs.

Footnotes

The workshop and the development of the report were supported by the American Thoracic Society and the NHLBI.

Author Contributions: Each author contributed to the concepts presented through their participation in the workshop, revised the draft and approved the final version of the report, and agreed to be accountable for the information included in this summary.

Author disclosures are available with the text of this article at www.atsjournals.org.

References

1. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38:65–76. doi: 10.1007/s10488-010-0319-7.
2. Balas EA, Weingarten S, Garb CT, Blumenthal D, Boren SA, Brown GD. Improving preventive care by prompting physicians. Arch Intern Med. 2000;160:301–308. doi: 10.1001/archinte.160.3.301.
3. Westfall JM, Mold J, Fagnan L. Practice-based research: “Blue Highways” on the NIH roadmap. JAMA. 2007;297:403–406. doi: 10.1001/jama.297.4.403.
4. Needham DM, Colantuoni E, Mendez-Tellez PA, Dinglas VD, Sevransky JE, Dennison Himmelfarb CR, Desai SV, Shanholtz C, Brower RG, Pronovost PJ. Lung protective mechanical ventilation and two year survival in patients with acute lung injury: prospective cohort study. BMJ. 2012;344:e2124. doi: 10.1136/bmj.e2124.
5. Cabana MD, Rand CS, Powe NR, Wu AW, Wilson MH, Abboud PA, Rubin HR. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282:1458–1465. doi: 10.1001/jama.282.15.1458.
6. Okelo SO, Butz AM, Sharma R, Diette GB, Pitts SI, King TM, Linn ST, Reuben M, Chelladurai Y, Robinson KA. Interventions to modify health care provider adherence to asthma guidelines: a systematic review. Pediatrics. 2013;132:517–534. doi: 10.1542/peds.2013-0779.
7. Ismaila AS, Risebrough N, Li C, Corriveau D, Hawkins N, FitzGerald JM, Su Z. Cost-effectiveness of salmeterol/fluticasone propionate combination (Advair®) in uncontrolled asthma in Canada. Respir Med. 2014;108:1292–1302. doi: 10.1016/j.rmed.2014.06.005.
8. Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013;347:f6753. doi: 10.1136/bmj.f6753.
9. Borgert MJ, Goossens A, Dongelmans DA. What are effective strategies for the implementation of care bundles on ICUs: a systematic review. Implement Sci. 2015;10:119. doi: 10.1186/s13012-015-0306-1.
10. Wandersman A, Alia KA, Cook B, Ramaswamy R. Integrating empowerment evaluation and quality improvement to achieve healthcare improvement outcomes. BMJ Qual Saf. 2015;24:645–652. doi: 10.1136/bmjqs-2014-003525.
11. Moullin JC, Sabater-Hernández D, Fernandez-Llimos F, Benrimoj SI. A systematic review of implementation frameworks of innovations in healthcare and resulting generic implementation framework. Health Res Policy Syst. 2015;13:16. doi: 10.1186/s12961-015-0005-z.
12. Green LW, Ottoson JM, García C, Hiatt RA. Diffusion theory and knowledge dissemination, utilization, and integration in public health. Annu Rev Public Health. 2009;30:151–174. doi: 10.1146/annurev.publhealth.031308.100049.
13. Glauser TA, Nevins PH, Williamson JC, Abdolrasulnia M, Salinas GD, Zhang J, Debonnett L, Riekert KA. Adherence to the 2007 cystic fibrosis pulmonary guidelines: a national survey of CF care centers. Pediatr Pulmonol. 2012;47:434–440. doi: 10.1002/ppul.21573.
14. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43:337–350. doi: 10.1016/j.amepre.2012.05.024.
15. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53. doi: 10.1186/s13012-015-0242-0.
16. Howick J, Chalmers I, Glasziou P, Greenhalgh T, Heneghan C, Liberati A, Moschetti I, Phillips B, Thornton H. Explanation of the 2011 Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence (background document). Oxford Centre for Evidence-Based Medicine [accessed 2015 Nov 5]. Available from: http://www.cebm.net/index.aspx?o=5653.
17. Lieu TA, Au D, Krishnan JA, Moss M, Selker H, Harabin A, Taggart V, Connors A; Comparative Effectiveness Research in Lung Diseases Workshop Panel. Comparative effectiveness research in lung diseases and sleep disorders: recommendations from the National Heart, Lung, and Blood Institute workshop. Am J Respir Crit Care Med. 2011;184:848–856. doi: 10.1164/rccm.201104-0634WS.
18. Singal AG, Higgins PD, Waljee AK. A primer on effectiveness and efficacy trials. Clin Transl Gastroenterol. 2014;5:e45. doi: 10.1038/ctg.2013.13.
19. Roland M, Torgerson D. Understanding controlled trials: what outcomes should be measured? BMJ. 1998;317:1075. doi: 10.1136/bmj.317.7165.1075.
20. Kahn JM, Palmer SM, Spruit MA, Punjabi NM, Sutherland ER. Clinical year in review I: quality improvement for pulmonary and critical care medicine, lung transplantation, rehabilitation for pulmonary and critically ill patients, and sleep medicine. Proc Am Thorac Soc. 2012;9:183–189. doi: 10.1513/pats.201206-031TT.
21. Krishnan JA, Lindenauer PK, Au DH, Carson SS, Lee TA, McBurnie MA, Naureckas ET, Vollmer WM, Mularski RA; COPD Outcomes-based Network for Clinical Effectiveness and Research Translation. Stakeholder priorities for comparative effectiveness research in chronic obstructive pulmonary disease: a workshop report. Am J Respir Crit Care Med. 2013;187:320–326. doi: 10.1164/rccm.201206-0994WS.
22. Walkey AJ, Wiener RS, Eccles M, McRae A, White A, et al. Risk factors for underuse of lung-protective ventilation in acute lung injury. J Crit Care. 2012;27:323.e1–323.e9. doi: 10.1016/j.jcrc.2011.06.015.
23. Campbell MK, Mollison J, Steen N, Grimshaw JM, Eccles M. Analysis of cluster randomized trials in primary care: a practical approach. Fam Pract. 2000;17:192–196. doi: 10.1093/fampra/17.2.192.
24. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. doi: 10.1186/1748-5908-4-50.
25. National Collaborating Centre for Methods and Tools. Assessing the public health impact of health promotion initiatives. Hamilton, ON: McMaster University; 2010 [accessed 2015 Mar 30]. Available from: http://www.nccmt.ca/registry/view/eng/70.html.
26. Padian NS, Holmes CB, McCoy SI, Lyerla R, Bouey PD, Goosby EP. Implementation science for the U.S. President's Emergency Plan for AIDS Relief (PEPFAR). J Acquir Immune Defic Syndr. 2011;56:199–203. doi: 10.1097/QAI.0b013e31820bb448.
27. Kitson AL, Rycroft-Malone J, Harvey G, McCormack B, Seers K, Titchen A. Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges. Implement Sci. 2008;3:1. doi: 10.1186/1748-5908-3-1.
