Introduction
Trauma surgery moves fast. Clinical decisions and treatment of injured patients must occur expeditiously, or patients suffer. Trauma research also moves fast, and new high-quality studies about treatment of injured patients frequently reshape the field and our understanding of best practices. Historically, medicine relied on the dissemination of best practices through publication of manuscripts and the endorsement of trusted physicians to change practices. However, implementation of research has proven to be slow. When research does not reach the bedside, patients are not offered proven therapies or are treated with dated or ineffective therapies. Implementation science, or the rigorous study of the timely uptake of evidence into routine practice, is the next vital frontier in surgery,1 with the potential to have a profound positive effect on the care provided to our patients.
The purpose of this paper is to describe the principles of implementation science and propose their wider use in trauma care. This paper is published as an initiative of the Coalition for National Trauma Research (CNTR) to further advance high-quality research and promote sustainable research funding to improve the care of injured patients, commensurate with the burden of disease in the USA. We will review definitions of implementation, dissemination, and de-implementation, as well as research frameworks, study design, and funding opportunities.
Implementation science is an umbrella term that includes implementation research, dissemination research, and de-implementation research. The key to implementation science is its focus on “how to do it” rather than “what to do.” As a result, the outcomes of interest are not those typically considered in outcomes research, such as mortality or morbidity. To study implementation, we assume that the “best practice” treatment is already known. Implementation science focuses on how to obtain sustained use of the best practice treatment in real-world settings. Implementation research is the study and use of strategies to adopt and integrate evidence-based health interventions into clinical and community settings to improve patient outcomes and benefit population health. Dissemination research is the study of targeted distribution of information and intervention materials to a specific public health or clinical practice audience, with the intent to understand how best to spread and sustain knowledge. De-implementation is the study of systematic processes to remove unnecessary or low-value care. Implementation science is similar in some ways to quality improvement, which is a process well known to the trauma community. However, quality improvement is typically performed more locally within a single hospital or healthcare organization to improve healthcare services to patients, whereas implementation science aims to develop new knowledge that will be more generalizable. These overlapping principles help to bridge the gap between research and the patient experience of healthcare.
Traumatic injury is a common disease. It is the leading cause of death up to age 44 years, and survivors may also suffer severe disability. Because of the widespread and urgent nature of trauma, injured patients are treated at nearly every hospital across the country and around the world. This ubiquity poses a significant concern about the variation in care for injured patients nationwide. Although some aspects of care are standardized and routine, others are likely subject to major variations. The initial approach to the care of the injured via the American College of Surgeons Advanced Trauma Life Support course is well known and likely followed frequently.2 However, in other aspects of care, patients are unlikely to receive all appropriate interventions. Trauma systems have developed to ensure that hospitals designated as trauma centers have the appropriate resources and personnel to care for complex patients, as well as a robust quality improvement program. However, there are many hospitals that are not part of the trauma system, and trauma center designation alone does not ensure the rapid uptake of best practices. Implementation science can help identify the lapses in care and help promote and promulgate best practice to reach a larger group of patients.
What is the gap?
Research, even when convincing and ground-breaking, is only as good as its adoption into routine clinical practice. Current estimates are that it takes, on average, 17 years for research findings to become standard clinical practice.3–5 There are multiple excellent examples of “best practices” in surgery, but implementation of these practices has been less studied. In fact, study of best practices in real-world environments can reveal flaws in the recommended practices. However, it may also be true that the implementation itself must be improved before effective changes in outcomes can be achieved.
In many efficacy studies of a treatment, the treatment is assumed to be well implemented, and data on clinical outcomes are assumed to reflect the true treatment effect. In this scenario, if a treatment is poorly implemented, the researchers conclude that the treatment is ineffective. The challenges and barriers to adoption of practice can be exemplified by examining the evolution of WHO’s Surgical Safety Checklist (WHO-SSC).6 Hull et al described this as a case study for implementation, to show why efforts to implement evidence-based interventions often fail to replicate the pattern of improvements described in the initial study.7 It should be of no surprise that efficacy shown in a controlled environment (ie, randomized clinical trial (RCT)) does not always translate directly to real-world effectiveness.
In 2009, the WHO-SSC was published by Haynes et al. This study described reductions in mortality and morbidity following the introduction of a 19-item surgical checklist across eight countries.6 The WHO-SSC was rapidly and widely implemented, but the efficacy was highly debated. Multiple studies within a range of settings have implemented the WHO-SSC and corroborated initial findings of reduced mortality, complications, hospital length of stay, and improved teamwork and adherence to safety processes.6 8 9 However, these findings have not been universally replicated, revealing a debate as to whether the practice itself is flawed or whether limitations are embedded within the implementation of the practice.10 11 Implementation is not a simple process, even for a seemingly “simple” intervention with dramatic reported improvements in outcomes. One study of implementation of the WHO-SSC showed only 31% compliance with the initial “Sign in,” and 48% compliance with “time out” procedures after a hospital started using the checklist.12 Even now, 10 years after the initial publication of the study, debate remains about the benefits of the WHO-SSC, embodied by a recent “Head to Head” publication in the British Medical Journal, debating whether the WHO Surgical Safety Checklist was “hyped.”13 While it is paramount to the scientific process to continually question dogma, implementation science studies must be performed to assess the contribution of implementation barriers to efficacy studies that seek to bring practices to imperfect environments. Without more knowledge about implementation, it will be impossible to determine whether best practices are truly effective, and how to bring widespread adoption of those practices to the front line.
Surgeons, particularly trauma surgeons, are conditioned to benchmark against quality indicators and follow protocols. Therefore, methods that create evidence-based programs often gain traction. These take many forms, including quality programs, practice management guidelines, and verification programs. Examples include the American College of Surgeons’ Strong for Surgery14; practice management guidelines such as those from the Eastern Association for the Surgery of Trauma (EAST)15; as well as the trauma community’s key doctrine document, the Optimal Resources for Trauma Care.16 These resource documents tell us what to do, but not how to do it. As a result, hospitals and trauma centers are often left to “reinvent the wheel” and problem-solve in silo environments to meet these “best practices.” Although many hospitals use passive methods such as education in an effort to spread best practices, this approach may be flawed and has not been shown to be effective.17 A concerted effort from researchers to study effective implementation will identify those practices that are generalizable among communities and help to bridge the gap between academic knowledge and patient care.
Outcomes of interest in implementation science
For clinicians and researchers to effectively study “how” instead of “what,” we need to describe the outcomes of interest to be used as metrics for success. Proctor et al created one widely used framework for this, identifying three possible outcome categories that are separate but interrelated: implementation outcomes, service outcomes, and client/health outcomes.18 19 Client/health outcomes describe the typical study outcomes of health research including health status and symptoms, satisfaction, and function of an individual. Service outcomes examine the effect of the intervention on a health system by examining efficiency, safety, effectiveness, equity, and patient-centeredness.
Implementation outcomes are outlined in table 1, and this taxonomy includes acceptability, adoption, appropriateness, costs, feasibility, fidelity, penetration, and sustainability. These outcomes were developed to achieve consistency with existing literature and definitions but allow researchers to separate concepts to better describe barriers and successes in different phases of implementation. They address implementation from a variety of perspectives, including provider, consumer, organization/institution/setting, and administrator. This work must be multidisciplinary to truly understand the impact via different lenses. Measurements of different outcomes may also be dependent on the particular implementation point or particular outcome of interest. For example, acceptability may be measured at the provider or consumer level with surveys or qualitative interviews, adoption may be measured at the provider or organization level with administrative data or survey data, and feasibility may be measured at the organization level using case audits. While these types of studies have not been very common in the trauma literature, some good examples do exist, including the “Pediatric Guideline Adherence and Outcomes (PEGASUS) programme in severe traumatic brain injury,” which specifically reported on program adoption, penetration, and fidelity.20 The “Enhanced Peri-Operative Care for High-risk patients” (EPOCH) trial studied the implementation of a quality improvement program to improve outcomes in patients undergoing emergency abdominal surgery in 93 hospitals in the UK.21 The authors report data on numerous aspects of implementation science including acceptability, adoption, appropriateness, fidelity, and process evaluation.
Table 1. Implementation outcomes

Acceptability: The perception among implementation stakeholders that a given treatment, service, practice, or innovation is agreeable, palatable, or satisfactory. Synonyms: satisfaction with various aspects of the innovation (eg, content, complexity, comfort, delivery, and credibility).

Adoption: The intention, initial decision, or action to try or employ an innovation or evidence-based practice. Synonyms: uptake; utilization; initial implementation; intention to try.

Appropriateness: The perceived fit, relevance, or compatibility of the innovation or evidence-based practice for a given practice setting, provider, or consumer; and/or perceived fit of the innovation to address a particular issue or problem. “Appropriateness” is conceptually similar to “acceptability,” and the literature reflects overlapping and sometimes inconsistent terms when discussing these constructs. Synonyms: perceived fit; relevance; compatibility; suitability; usefulness; practicability.

Implementation costs: Cost impact of an implementation effort, dependent on three components: variation in complexity of treatments, variation in complexity of the implementation strategy, and variation by setting and overhead. Synonyms: marginal cost; cost-effectiveness; cost–benefit.

Feasibility: The extent to which a new treatment, or an innovation, can be successfully used or carried out within a given agency or setting. Synonyms: actual fit or utility; suitability for everyday use; practicability.

Fidelity: The degree to which an intervention was implemented as it was prescribed in the original protocol or as it was intended by the program developers. Synonyms: delivered as intended; adherence; integrity; quality of program delivery.

Penetration: Integration of a practice within a service setting and its subsystems. Synonyms: level of institutionalization; spread; service access.

Sustainability: The extent to which a newly implemented treatment is maintained or institutionalized within a service setting’s ongoing stable operations. Synonyms: maintenance; continuation; durability; incorporation; integration; institutionalization; sustained use; routinization.

Adapted from Proctor et al.
Frameworks for implementation science
Numerous frameworks for motivating and driving change in healthcare have been proposed in the literature.22 They are often encountered in the quality improvement literature, but there is clear overlap between some of these concepts and the field of implementation science. Some commonly used approaches often seen in the quality improvement realm include Lean Six Sigma and Plan-Do-Study-Act (PDSA).
The Translating Evidence Into Practice (TRIP) model is a well-known approach to change that has been successfully applied to implementation science for more than a decade.23 TRIP is a four-step implementation framework that can be customized to either large-scale or small-scale interventions. In brief, the TRIP model has four main steps: (1) summarize the evidence, (2) identify local barriers to implementation, (3) measure performance, and (4) ensure all patients reliably receive the intervention. The fourth step is an ongoing iterative process composed of four key components: Engage, Educate, Execute, and Evaluate. The model is intuitive and easy to explain to all audience levels, even those without expertise in quality improvement or implementation science. It has been cited well over 250 times and has served as the model for numerous large-scale funded research collaboratives and projects.
The most frequently cited model for evaluating and reporting implementation science work is the Consolidated Framework for Implementation Research (CFIR).24 CFIR covers five major constructs by domain to consider when assessing barriers and facilitators of implementation: (1) intervention characteristics, (2) outer setting, (3) inner setting, (4) characteristics of individuals, and (5) process. The CFIR can help plan collection and analysis of both qualitative and quantitative data. Qualitative guides cover interviews, observations, and meeting notes. For example, there are suggested approaches to semi-structured interviews or focus groups with physicians, nurses, patients, and key stakeholders to identify existing or potential barriers and facilitators when implementing the new practices. Quantitative data may include scores on scales validated to examine concepts such as organizational readiness to change and Organizational Change Manager scores.25 While CFIR is the most widely adopted, there are other tools such as the Implementation Science Research Development (ImpRes) offered to help design high-quality implementation research.26
The most commonly accepted implementation science conceptual framework to report outcomes is RE-AIM, which operationalizes outcomes into five main ideas: reach, effectiveness, adoption, implementation, and maintenance.27 28 The RE-AIM goal is to “encourage program planners, evaluators, readers of journal articles, funders, and policy-makers to pay more attention to essential program elements including external validity that can improve the sustainable adoption and implementation of effective, generalizable, evidence-based interventions.” It helps professionals to consider strengths and weaknesses of different interventions to improve clinical care. RE-AIM helps to answer critically important questions including:
How do I reach the targeted population with the intervention?
How do I know my intervention is effective?
How do I develop organizational support to deliver my intervention?
How do I ensure the intervention is delivered properly?
How do I incorporate the intervention so that it is delivered over the long term?
RE-AIM has been used in the trauma setting primarily as it relates to injury prevention programs. More recently, it has also been applied to other topics, such as evaluating a clinical decision support tool for pediatric head trauma.29
Methodology
Study designs
While the routine types of research study design should be familiar to many trauma researchers, implementation science methods may be a newer concept. Performing interventional studies such as RCTs or non-interventional prospective observational studies will give trauma researchers a good background on which to build their research skills. The addition of health services research methodology, including approaches such as regression modeling to control for confounding, helps to grow the overall foundation. Some implementation science research can look very much like health services research, especially projects that use “natural experiments” or interventions to study changes in outcomes as noted above. For example, typical quality improvement articles often examine change in clinical outcomes before versus after an intervention. These papers have a common format and are often straightforward to write and read.30 Implementation science papers may have a similar setup, but use implementation outcomes (ie, adoption or feasibility) instead of clinical outcomes such as mortality.
Clinical trials can be performed as part of implementation science research, and similar to other trials, they are at the higher end of the evidence pyramid and supply stronger evidence. One of the key differences of implementation science trials versus other RCTs is the level of assignment or randomization. In a typical RCT, individual patients are assigned to one of two treatment arms. However, in implementation science, the level of assignment is often at a larger scale than individual patients. Cluster randomized trials might randomize at the floor, unit, clinic, or hospital level. For example, a cluster randomized trial examining the effectiveness of nurse education to improve venous thromboembolism prevention in hospitalized patients randomly assigned 21 floors within a hospital to one of two educational interventions.31 The benefits of cluster randomization may include the ability to study interventions that cannot be given to only selected patients (such as nurse education), and the ability to prevent contamination between individuals (ie, all nurses working together on the same floor receive the same intervention). The level of analysis might then follow at the same level of randomization, although this is not necessary. Outcomes for a typical RCT are routinely analyzed at the patient level if clinical outcomes are being studied. However, in the implementation science space, outcomes at the unit level (ie, adoption, appropriateness, fidelity, penetration, and/or sustainability) are often reported.
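As a minimal illustrative sketch of the cluster randomization described above (not drawn from the cited trial; the floor names and random seed are hypothetical), whole units rather than individual patients are assigned to arms, so every patient on a given floor receives the same intervention:

```python
import random

def cluster_randomize(clusters, arms=("intervention", "control"), seed=2024):
    """Randomly assign whole clusters (eg, hospital floors) to study arms.

    Shuffles the cluster list, then alternates arms through the shuffled
    order so that arm sizes differ by at most one cluster.
    """
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible/auditable
    shuffled = clusters[:]
    rng.shuffle(shuffled)
    return {c: arms[i % len(arms)] for i, c in enumerate(shuffled)}

# Hypothetical example: 6 hospital floors randomized to 2 educational interventions
assignment = cluster_randomize([f"Floor {i}" for i in range(1, 7)])
```

Because the unit of assignment is the floor, analysis may also occur at the floor level (eg, adoption or fidelity per floor) rather than at the patient level.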
Another commonly used study design is the stepped-wedge cluster randomized trial.32 In this design, all enrolled clusters (ie, units, floors, clinics, hospitals) eventually receive the intervention. This is accomplished via random and sequential crossover of clusters from control to intervention until all units have been exposed. Each cluster acts as its own historical control. This design is especially powerful when there is heterogeneity among clusters.33 An excellent example of this trial design in trauma surgery is an ongoing study of the delivery of high-quality screening and intervention for post-traumatic stress disorder and comorbidities in adult trauma patients.34
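The random, sequential crossover of a stepped-wedge design can be sketched as follows (an illustrative toy schedule, not the design of the cited trial; the hospital names, step count, and seed are hypothetical):

```python
import random

def stepped_wedge_schedule(clusters, n_steps, seed=7):
    """Build a stepped-wedge crossover schedule.

    All clusters start in the control condition and cross over to the
    intervention in random order, one group per step, until every cluster
    has been exposed. Returns {cluster: step at which it crosses over}.
    """
    rng = random.Random(seed)
    order = clusters[:]
    rng.shuffle(order)                     # random crossover order
    per_step = -(-len(order) // n_steps)   # ceiling division: clusters per step
    schedule = {}
    for step in range(n_steps):
        for c in order[step * per_step:(step + 1) * per_step]:
            schedule[c] = step + 1
    return schedule

# Hypothetical example: 8 hospitals crossing over in 4 sequential steps
schedule = stepped_wedge_schedule([f"Hospital {i}" for i in range(1, 9)], n_steps=4)
```

Before its assigned step, each hospital contributes control-period data; afterwards, intervention-period data, which is what lets each cluster serve as its own historical control.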
Implementation science usually requires a multidisciplinary research team. Qualitative and mixed-methods studies often benefit from individuals not usually included in traditional surgical research teams such as social scientists, medical anthropologists, human factors engineers, behavioral scientists, and health economists. Stakeholder perspectives from all possible angles are beneficial to improve these types of projects. Implementation science teams might need frontline partners such as administrators, as well as physicians, nurses, and other clinical providers whose practice will be involved or related to the intervention.
Methodological examples
Our ability to study and pinpoint barriers to the implementation of evidence-based measures and best practices is dependent on selecting context-relevant methodology. Once the outcomes of interest and framework are identified, the next step is to match the ideal study methodology. We provide two real-life examples rather than outline an exhaustive list, understanding that specific clinical scenarios will reveal different barriers that require tailored methodologies. The first example uses a mixed-methods approach to study emergency surgical care in northern Uganda; the second, the development of a best-practices model in hospital-based violence intervention.
Soroti Regional Referral Hospital in northern Uganda serves more than 2 million people and eight districts, leading to about 260 surgical referrals monthly.35 As one might expect, obstacles to life-saving surgical care, as in many low-income countries, are prevalent but poorly understood. Outlining these obstacles requires a number of methods under the umbrella of implementation science. For this study, a mixed-methods approach provided the greatest detail that could be presented to key stakeholders as a first step in improving essential surgical care in a resource-limited setting. The Surgeons Overseas’ Personnel, Infrastructure, Procedures, Equipment and Supplies (PIPES) Survey, with 105 variables in five domains, was used to reveal deficiencies in both workforce and infrastructure that allowed targeted intervention for improvement (available at https://www.surgeonsoverseas.org/resources/).36 These results were combined with process mapping, or Time and Motion Studies, to pinpoint issues with access to urgent surgical care. Large patient volume was found to account for the greatest delay to timely care. Finally, qualitative analysis was performed after focus groups of key stakeholders and healthcare providers were conducted. This valuable information corroborated some of the PIPES data but also highlighted the strength of attendant (family) care, and the determination and ingenuity of the team of providers, as two other key components driving change. This example addresses adoption, appropriateness, and feasibility of improvements to emergency surgical care in Uganda.
The second “real life” vignette involves a public health approach to hospital-based violence intervention.37–39 The hospital-based violence intervention program at San Francisco General Hospital has refined its fidelity over the past decade, retaining the components and conduct that lead to successful outcomes. These include reduction in injury recidivism, programmatic capacity to address social determinants of health, and victims’ perceived value of the program. This group also demonstrated that the program could be implemented at another trauma center, studying the transfer process with the goal of identifying barriers to feasibility and maintaining the fidelity and sustainability of the program on a larger scale.
Studying implementation of programs requires operationalizing outcome measures to determine if the program meets success metrics for all stakeholders. For the hospital-based violence intervention program referenced above, a variety of implementation outcomes were studied in addition to clinical outcomes. The program’s clinical benefits were evaluated by examining whether the program met the stated needs of the community and by recording injury and criminal recidivism rates. Accessibility and adoption outcomes were studied using formative and process evaluations. These investigated if the program successfully screened, enrolled, and retained the target population. Qualitative semi-structured interviews of patients were used to describe appropriateness and acceptability of these programs by end-users. Qualitative methods were also used to examine barriers to care from the perspectives of key stakeholders such as city government officials, private sector executives, hospital staff, and community-based organizations, which addressed acceptability and feasibility. Cost-analysis studies were performed to ensure that this type of public health programming would not be financially onerous and therefore would have reasonable sustainability. Lastly, this program was adopted at another institution; studying the portability of this program examined the strength of the program’s ability to be adopted and implemented in additional settings.
De-Implementation
On the other side of implementation science lies the concept of de-implementation, or removal of harmful or unnecessary practices. These efforts should be systematic and should end the use of low-value care, whether or not an alternative is available. However, de-implementation is likely underappreciated in the literature, as there is cognitive bias against removing a treatment from a paradigm. De-implementation can be considered an implicit part of implementation and organizational change, although the strategies required are often different.40 One conceptual model describes four main types of de-implementation change: partial reversal, complete reversal, related replacement, or unrelated replacement. In clinical practice, de-implementation does occur, and the extent to which treatments are de-implemented and the processes by which de-implementation is successful should be studied.
Partial reversal changes the frequency, breadth, or scale of an outmoded intervention to provide the intervention to only a subgroup of patients or at a longer interval. In the trauma bay, selective placement of tubes (rather than fingers and tubes in every orifice) is a start. Selective use of plain radiographs—such as eliminating x-ray of the pelvis in selected patients who will be undergoing CT scan of the abdomen and pelvis—can save time, money, and radiation exposure.41 Decision rules allowing selective use of imaging for cervical spine clearance (ie, NEXUS and Canadian c-spine rules) are other good examples of partial reversal.42 43 Complete reversal or discontinuation without replacement can also occur. If an intervention has been shown to have no benefit to any subgroup on any timeframe, the practice can be completely eliminated. One example of complete reversal in trauma practice is the complete discontinuation of the use of steroids for routine treatment of spinal cord injury.44 45 Another strong push for reversal is in the clearance of the cervical spine in obtunded or intoxicated adult blunt trauma patients based on high-quality CT scan alone rather than with MRI.46 47 Despite a preponderance of papers suggesting this approach, there remains much variation in practice48—a topic ripe for de-implementation science studies.
Reversal with a related replacement or substitution allows the use of a related or more effective clinical practice. For example, in trauma, low-molecular-weight heparin has replaced unfractionated heparin for standard prophylaxis against deep vein thrombosis for most trauma patients.49 The fourth type of de-implementation includes reversal with an unrelated replacement. One example of this within trauma surgery is the evolution of treatment of splenic laceration with embolization instead of surgery, a procedure that allows for splenic salvage as well as preservation of splenic function without major abdominal surgery.50
De-implementation does occur in clinical practice but is often not studied with the rigor that we study other scientific changes. The study of de-implementation is an opportunity to ensure that ineffective practices do not reach our patients.
Conclusions
The CNTR is a broad coalition of US-based national organizations and professional societies brought together to focus attention on the significant public health problem of traumatic injury. CNTR aims to advocate for consistent and significant federal funding for trauma research commensurate with the injury burden in the USA.51 Currently, there is significant room for improvement for major funding in all areas of trauma research. Funding opportunities exist in the realm of implementation science, and this is a major frontier to which the trauma research community can be primed to make a significant impact. More and more large-scale funding opportunities for implementation science research are being offered by the National Institutes of Health, Agency for Healthcare Research and Quality, the Veterans Affairs system, the Patient-Centered Outcomes Research Institute, and other large national organizations.
Basic, clinical, and translational science research have been the backbone of trauma research for decades. We are not advocating to stop doing these types of research. Only by these investigations will we discover new drugs, surgical or procedural therapies, diagnostic tests, and cutting-edge care for patients. However, we implore the trauma research community to also embrace other frontiers of research including implementation science in order to learn how to best bring the right care to the right patient in the right place at the right time.
Acknowledgments
The authors greatly appreciate the ongoing financial support of The Coalition for National Trauma Research Scientific Advisory Committee (CNTR-SAC) from the following organizations: American Association for the Surgery of Trauma (AAST), American College of Surgeons (ACS), American College of Surgeons Committee on Trauma (ACS-COT), Eastern Association of the Surgery of Trauma (EAST), National Trauma Institute (NTI), and Western Trauma Association (WTA).
Footnotes
Collaborators: Coalition for National Trauma Research Scientific Advisory Committee: Saman Arbabi, MD FACS1; Eileen Bulger, MD FACS1; Mitchell J. Cohen, MD FACS2; Todd W. Costantini, MD FACS3; Marie M. Crandall, MD, MPH FACS4; Rochelle A. Dicker, MD FACS5; Elliott R. Haut, MD, PhD FACS6-8; Bellal Joseph, MD FACS9; Rosemary A. Kozar, MD, PhD FACS10; Ajai K. Malhotra, MD FACS11; Avery B. Nathens, MD, PhD, FRCS, FACS12; Raminder Nirula, MD, MPH FACS13; Michelle A. Price, PhD, MEd14; Jason W. Smith, MD FACS15; Deborah M. Stein, MD, MPH FACS FCCM16; Ben L. Zarzaur, MD, MPH FACS1. From the: 8. University of Washington; 9. University of Colorado; 10. UC San Diego School of Medicine; 11. University of Florida College of Medicine Jacksonville; 12. University of Arizona; 13. University of Maryland; 14. University of Vermont; 15. University of Toronto; 16. University of Utah; 17. National Trauma Institute; 18. University of Louisville; 19. University of California–San Francisco; 20. University of Wisconsin School of Medicine and Public Health.
Contributors: All authors have contributed substantially and all members of the CNTR SAC have approved the submission.
Funding: VH was supported by the Case Western Reserve University/Cleveland Clinic CTSA, via NCATS KL2TR000440. This publication was made possible by the Clinical and Translational Science Collaborative of Cleveland, KL2TR000440 from the National Center for Advancing Translational Sciences (NCATS) component of the National Institutes of Health and NIH roadmap for Medical Research. ERH is/was primary investigator of contracts from The Patient-Centered Outcomes Research Institute (PCORI), entitled “Preventing Venous Thromboembolism: Empowering Patients and Enabling Patient-Centered Care via Health Information Technology” (CE-12-11-4489) and “Preventing Venous Thromboembolism (VTE): Engaging Patients to Reduce Preventable Harm from Missed/Refused Doses of VTE Prophylaxis” (DI-1603-34596). ERH is primary investigator of a grant from the Agency for Healthcare Research and Quality (AHRQ) (1R01HS024547) entitled “Individualized Performance Feedback on Venous Thromboembolism Prevention Practice,” and is a co-investigator on a grant from the NIH/NHLBI (R21HL129028) entitled “Analysis of the Impact of Missed Doses of Venous Thromboembolism Prophylaxis.” ERH is supported by a contract from The Patient-Centered Outcomes Research Institute (PCORI), “A Randomized Pragmatic Trial Comparing the Complications and Safety of Blood Clot Prevention Medicines Used in Orthopedic Trauma Patients” (PCS-1511-32745). ERH receives research grant support from the DOD/Army Medical Research Acquisition Activity and has received grant support from the Henry M. Jackson Foundation for the Advancement of Military Medicine (HJF). ERH receives royalties from Lippincott, Williams, Wilkins for a book—"Avoiding Common ICU Errors." 
ERH was the paid author of a paper commissioned by the National Academies of Medicine titled “Military Trauma Care’s Learning Health System: The Importance of Data Driven Decision Making,” which was used to support the report titled, “A National Trauma Care System: Integrating Military and Civilian Trauma Systems to Achieve Zero Preventable Deaths After Injury.”
Disclaimer: The contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.
Competing interests: None declared.
Patient consent for publication: Not required.
Provenance and peer review: Not commissioned; internally peer reviewed.
Contributor Information
Coalition for National Trauma Research Scientific Advisory Committee:
Saman Arbabi, Eileen Bulger, Mitchell J Cohen, Todd W Costantini, Marie M Crandall, Rochelle A Dicker, Elliott R Haut, Bellal Joseph, Rosemary A Kozar, Ajai K Malhotra, Avery B Nathens, Raminder Nirula, Michelle A Price, Jason W Smith, Deborah M Stein, and Ben L Zarzaur
References
- 1. Smith AB, Brooke BS. How implementation science in surgery is done. JAMA Surg 2019;154:891. 10.1001/jamasurg.2019.1515
- 2. American College of Surgeons, Committee on Trauma. Advanced Trauma Life Support (ATLS) student course manual. 10th edn, 2018.
- 3. Balas EA, Boren SA. Managing clinical knowledge for health care improvement. Yearb Med Inform 2000;9:65–70. 10.1055/s-0038-1637943
- 4. Grant J, Green L, Mason B. Basic research and health: a reassessment of the scientific basis for the support of biomedical science. Res Eval 2003;12:217–24. 10.3152/147154403781776618
- 5. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med 2011;104:510–20. 10.1258/jrsm.2011.110180
- 6. Haynes AB, Weiser TG, Berry WR, Lipsitz SR, Breizat A-HS, Dellinger EP, Herbosa T, Joseph S, Kibatala PL, Lapitan MCM, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med 2009;360:491–9. 10.1056/NEJMsa0810119
- 7. Hull L, Athanasiou T, Russ S. Implementation science: a neglected opportunity to accelerate improvements in the safety and quality of surgical care. Ann Surg 2017;265:1104–12.
- 8. van Klei WA, Hoff RG, van Aarnhem EEHL, Simmermacher RKJ, Regli LPE, Kappen TH, van Wolfswinkel L, Kalkman CJ, Buhre WF, Peelen LM, et al. Effects of the introduction of the WHO "Surgical Safety Checklist" on in-hospital mortality: a cohort study. Ann Surg 2012;255:44–9. 10.1097/SLA.0b013e31823779ae
- 9. de Vries EN, Prins HA, Crolla RMPH, den Outer AJ, van Andel G, van Helden SH, Schlack WS, van Putten MA, Gouma DJ, Dijkgraaf MGW, et al. Effect of a comprehensive surgical safety system on patient outcomes. N Engl J Med 2010;363:1928–37. 10.1056/NEJMsa0911535
- 10. Urbach DR, Govindarajan A, Saskin R, Wilton AS, Baxter NN. Introduction of surgical safety checklists in Ontario, Canada. N Engl J Med 2014;370:1029–38. 10.1056/NEJMsa1308261
- 11. Haugen AS, Søfteland E, Almeland SK, Sevdalis N, Vonen B, Eide GE, Nortvedt MW, Harthug S. Effect of the World Health Organization checklist on patient outcomes: a stepped wedge cluster randomized controlled trial. Ann Surg 2015;261:821–8. 10.1097/SLA.0000000000000716
- 12. Hannam JA, Glass L, Kwon J, Windsor J, Stapelberg F, Callaghan K, Merry AF, Mitchell SJ. A prospective, observational study of the effects of implementation strategy on compliance with a surgical safety checklist. BMJ Qual Saf 2013;22:940–7. 10.1136/bmjqs-2012-001749
- 13. Urbach DR, Dimick JB, Haynes AB, Gawande AA. Is WHO's surgical safety checklist being hyped?
- 14. Stokes SM, Wakeam E, Antonoff MB, Backhus LM, Meguid RA, Odell D, Varghese TK. Optimizing health before elective thoracic surgery: systematic review of modifiable risk factors and opportunities for health services research. J Thorac Dis 2019;11:S537–54. 10.21037/jtd.2019.01.06
- 15. Kerwin AJ, Haut ER, Burns JB, Como JJ, Haider A, Stassen N, Dahm P; Eastern Association for the Surgery of Trauma Practice Management Guidelines Ad Hoc Committee. The Eastern Association of the Surgery of Trauma approach to practice management guideline development using Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) methodology. J Trauma Acute Care Surg 2012;73:S283–7. 10.1097/TA.0b013e31827013e9
- 16. Resources for Optimal Care of the Injured Patient 2014/Resources Repository. https://www.facs.org/quality-programs/trauma/tqp/center-programs/vrc/resources (accessed 4 Nov 2019).
- 17. Tooher R, Middleton P, Pham C, Fitridge R, Rowe S, Babidge W, Maddern G. A systematic review of strategies to improve prophylaxis for venous thromboembolism in hospitals. Ann Surg 2005;241:397–415. 10.1097/01.sla.0000154120.96169.99
- 18. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Adm Policy Ment Health 2009;36:24–34. 10.1007/s10488-008-0197-4
- 19. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health 2011;38:65–76. 10.1007/s10488-010-0319-7
- 20. Vavilala MS, King MA, Yang J-T, Erickson SL, Mills B, Grant RM, Blayney C, Qiu Q, Chesnut RM, Jaffe KM, et al. The Pediatric Guideline Adherence and Outcomes (PEGASUS) programme in severe traumatic brain injury: a single-centre hybrid implementation and effectiveness study. Lancet Child Adolesc Health 2019;3:23–34. 10.1016/S2352-4642(18)30341-9
- 21. Peden CJ, Stephens T, Martin G, Kahan BC, Thomson A, Rivett K, Wells D, Richardson G, Kerry S, Bion J, et al. Effectiveness of a national quality improvement programme to improve survival after emergency abdominal surgery (EPOCH): a stepped-wedge cluster-randomised trial. Lancet 2019;393:2213–21. 10.1016/S0140-6736(18)32521-2
- 22. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med 2012;43:337–50. 10.1016/j.amepre.2012.05.024
- 23. Pronovost PJ, Berenholtz SM, Needham DM. Translating evidence into practice: a model for large scale knowledge translation. BMJ 2008;337:a1714. 10.1136/bmj.a1714
- 24. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009;4:50. 10.1186/1748-5908-4-50
- 25. Helfrich CD, Li Y-F, Sharp ND, Sales AE. Organizational readiness to change assessment (ORCA): development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci 2009;4:38. 10.1186/1748-5908-4-38
- 26. Hull L, Goulding L, Khadjesari Z, Davis R, Healey A, Bakolis I, Sevdalis N, et al. Designing high-quality implementation research: development, application, feasibility and preliminary evaluation of the implementation science research development (ImpRes) tool and guide. Implement Sci 2019;14:80. 10.1186/s13012-019-0897-z
- 27. RE-AIM—Reach Effectiveness Adoption Implementation Maintenance. http://www.re-aim.org/ (accessed 4 Nov 2019).
- 28. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health 1999;89:1322–7. 10.2105/AJPH.89.9.1322
- 29. Masterson Creber R, Dayan P, Kuppermann N, Ballard D, Tzimenatos L, Alessandrini E, Mistry R, Hoffman J, Vinson D, Bakken S, et al. Applying the RE-AIM framework for the evaluation of a clinical decision support tool for pediatric head trauma: a mixed-methods study. Appl Clin Inform 2018;9:693–703. 10.1055/s-0038-1669460
- 30. Holzmueller CG, Pronovost PJ. Organising a manuscript reporting quality improvement or patient safety research. BMJ Qual Saf 2013;22:777–85. 10.1136/bmjqs-2012-001603
- 31. Lau BD, Shaffer DL, Hobson DB, Yenokyan G, Wang J, Sugar EA, Canner JK, Bongiovanni D, Kraus PS, Popoola VO, et al. Effectiveness of two distinct web-based education tools for bedside nurses on medication administration practice for venous thromboembolism prevention: a randomized clinical trial. PLoS One 2017;12:e0181664. 10.1371/journal.pone.0181664
- 32. Hemming K, Haines TP, Chilton PJ, Girling AJ, Lilford RJ. The stepped wedge cluster randomised trial: rationale, design, analysis, and reporting. BMJ 2015;350:h391. 10.1136/bmj.h391
- 33. Stephens TJ, Peden CJ, Pearse RM, Shaw SE, Abbott TEF, Jones EL, Kocman D, Martin G, et al. Improving care at scale: process evaluation of a multi-component quality improvement intervention to reduce mortality after emergency abdominal surgery (EPOCH trial). Implement Sci 2018;13:142. 10.1186/s13012-018-0823-9
- 34. Zatzick DF, Russo J, Darnell D, Chambers DA, Palinkas L, Van Eaton E, Wang J, Ingraham LM, Guiney R, Heagerty P, et al. An effectiveness-implementation hybrid trial study protocol targeting posttraumatic stress disorder and comorbidity. Implement Sci 2016;11:58. 10.1186/s13012-016-0424-4
- 35. Nwanna-Nzewunwa OC, Ajiko M-M, Kirya F, Epodoi J, Kabagenyi F, Batibwe E, Feldhaus I, Juillard C, Dicker R. Barriers and facilitators of surgical care in rural Uganda: a mixed methods study. J Surg Res 2016;204:242–50. 10.1016/j.jss.2016.04.051
- 36. Markin A, Barbero R, Leow JJ, Groen RS, Perlman G, Habermann EB, Apelgren KN, Kushner AL, Nwomeh BC. Inter-rater reliability of the PIPES tool: validation of a surgical capacity index for use in resource-limited settings. World J Surg 2014;38:2195–9. 10.1007/s00268-014-2522-2
- 37. Juillard C, Cooperman L, Allen I, Pirracchio R, Henderson T, Marquez R, Orellana J, Texada M, Dicker RA. A decade of hospital-based violence intervention: benefits and shortcomings. J Trauma Acute Care Surg 2016;81:1156–61. 10.1097/TA.0000000000001261
- 38. Juillard C, Smith R, Anaya N, Garcia A, Kahn JG, Dicker RA. Saving lives and saving money: hospital-based violence intervention is cost-effective. J Trauma Acute Care Surg 2015;78:252–8. 10.1097/TA.0000000000000527
- 39. Smith R, Evans A, Adams C, Cocanour C, Dicker R. Passing the torch: evaluating exportability of a violence intervention program. Am J Surg 2013;206:223–8. 10.1016/j.amjsurg.2012.11.025
- 40. Wang V, Maciejewski ML, Helfrich CD, Weiner BJ. Working smarter not harder: coupling implementation to de-implementation. Healthc 2018;6:104–7. 10.1016/j.hjdsi.2017.12.004
- 41. Guillamondegui OD, Pryor JP, Gracias VH, Gupta R, Reilly PM, Schwab CW. Pelvic radiography in blunt trauma resuscitation: a diminishing role. J Trauma 2002;53:1043–7. 10.1097/00005373-200212000-00002
- 42. Hoffman JR, Mower WR, Wolfson AB, Todd KH, Zucker MI. Validity of a set of clinical criteria to rule out injury to the cervical spine in patients with blunt trauma. N Engl J Med 2000;343:94–9. 10.1056/NEJM200007133430203
- 43. Stiell IG, et al. The Canadian C-spine rule for radiography in alert and stable trauma patients. JAMA 2001;286:1841–8. 10.1001/jama.286.15.1841
- 44. Bracken MB, Shepard MJ, Collins WF, Holford TR, Young W, Baskin DS, Eisenberg HM, Flamm E, Leo-Summers L, Maroon J, et al. A randomized, controlled trial of methylprednisolone or naloxone in the treatment of acute spinal-cord injury. Results of the Second National Acute Spinal Cord Injury Study. N Engl J Med 1990;322:1405–11. 10.1056/NEJM199005173222001
- 45. Walters BC, Hadley MN, Hurlbert RJ, Aarabi B, Dhall SS, Gelb DE, Harrigan MR, Rozelle CJ, Ryken TC, Theodore N, et al. Guidelines for the management of acute cervical spine and spinal cord injuries. Neurosurgery 2013;60:82–91. 10.1227/01.neu.0000430319.32247.7f
- 46. Patel MB, Humble SS, Cullinane DC, Day MA, Jawa RS, Devin CJ, Delozier MS, Smith LM, Smith MA, Capella JM, et al. Cervical spine collar clearance in the obtunded adult blunt trauma patient: a systematic review and practice management guideline from the Eastern Association for the Surgery of Trauma. J Trauma Acute Care Surg 2015;78:430–41. 10.1097/TA.0000000000000503
- 47. Martin MJ, Bush LD, Inaba K, Byerly S, Schreiber M, Peck KA, Barmparas G, Menaker J, Hazelton JP, Coimbra R, et al. Cervical spine evaluation and clearance in the intoxicated patient: a prospective Western Trauma Association Multi-Institutional Trial and Survey. J Trauma Acute Care Surg 2017;83:1032–40. 10.1097/TA.0000000000001650
- 48. Albaghdadi A, Leeds IL, Florecki KL, Canner JK, Schneider EB, Sakran JV, Haut ER. Variation in the use of MRI for cervical spine clearance: an opportunity to simultaneously improve clinical care and decrease cost. Trauma Surg Acute Care Open 2019;4:e000336. 10.1136/tsaco-2019-000336
- 49. Geerts WH, Jay RM, Code KI, Chen E, Szalai JP, Saibil EA, Hamilton PA. A comparison of low-dose heparin with low-molecular-weight heparin as prophylaxis against venous thromboembolism after major trauma. N Engl J Med 1996;335:701–7. 10.1056/NEJM199609053351003
- 50. Stassen NA, Bhullar I, Cheng JD, Crandall ML, Friese RS, Guillamondegui OD, Jawa RS, Maung AA, Rohs TJ, Sangosanya A, et al. Selective nonoperative management of blunt splenic injury. J Trauma Acute Care Surg 2012;73:S294–300. 10.1097/TA.0b013e3182702afc
- 51. Coimbra R, Kozar RA, Smith JW, Zarzaur BL, Hauser CJ, Moore FA, Bailey JA, Valadka A, Jurkovich GJ, Jenkins DH, et al. The Coalition for National Trauma Research supports the call for a national trauma research action plan. J Trauma Acute Care Surg 2017;82:637–45. 10.1097/TA.0000000000001353