International Journal for Quality in Health Care. 2018 Apr 20;30(Suppl 1):20–23. doi: 10.1093/intqhc/mzy013

Adapting improvements to context: when, why and how?

John Ovretveit 1, Lisa Dolan-Branton 2, Michael Marx 3, Amy Reid 4, Julie Reed 5, Bruce Agins 6,7
PMCID: PMC5909662  PMID: 29878138

Abstract

There is evidence that practitioners applying quality improvements often adapt the improvement method or the change they are implementing, either unknowingly or intentionally, to fit their service or situation. This has been observed especially in programs seeking to spread or ‘scale up’ an improvement change to other services. Sometimes their adaptations result in improved outcomes, sometimes they do not, and sometimes they do not have the data to make this assessment or to describe the adaptation. The purpose of this paper is to summarize key points about adaptation and context discussed at the Salzburg Global Seminar, in order to help improvers judge when and how to adapt an improvement change. It also aims to encourage more research into such adaptations, to develop our understanding of when, why and how adaptation is effective, and to provide more research-informed guidance to improvers.

The paper gives examples to illustrate key issues in adaptation and to consider more systematic and purposeful adaptation of improvements, so as to increase the chances of achieving improvements in different settings for different participants. We describe methods for assessing whether adaptation is necessary or likely to reduce the effectiveness of an improvement intervention, which adaptations might be required, and methods for collecting data to assess whether the adaptations are successful. We also note areas where research is most needed to enable more effective scale-up of quality improvement changes and wider take-up and use of the methods.

Keywords: improvement, learning, complex adaptive systems, implementation, delivery

Introduction

A common approach to quality improvement is to take a change proven in one or more healthcare services, such as a surgical safety checklist or a care delivery model, and to implement that change in another healthcare service [1, 2]. The subject of this paper is whether or how a local project team adapts elements of a specific improvement change, originally tested somewhere else, so that it can be implemented in that local setting with local staff and patients.

Instead of ‘implementation’ we use the term ‘take-up’ to indicate the conscious choice that staff (and sometimes patients) make to take up a change to their everyday work and organization. Many quality improvement teams seek to enable staff to use the ‘new better way’ of working and to copy carefully the elements that appeared to be important for the improved outcomes in the original test (for example, all parts of the surgical safety checklist intervention). The surgical safety checklist has been tested in many settings and, when implemented with fidelity, appears to be effective in most of them. But is there scope for adapting the methods used to ensure that the checklist is taken up as a routine practice (for example, by giving fewer training sessions)? If so, how could we develop guidance for local implementers to adapt either the implementation methods or the content of the improvement changes?

The seminar shared observations that adaptations are often made because the local implementing site has a different ‘context’ from the one in which the improvement change was tested. The context, elements of which include the number and type of staff in the service, the local language, the way the service is paid, and its leadership, may make it difficult to copy the improvement change exactly. Adapting it may be the only way to implement the improvement change, or rather a version of it. But if, for example, some elements of the surgical safety checklist or of clinical guidelines are not followed, is the improvement change no longer effective? Which elements of different improvement changes could be, or need to be, adapted to be effective in different contexts? These questions about adapting the improvement change and/or the implementation method were addressed at the seminar: this paper shares our answers, drawn from the seminar discussions and our work afterwards.

Definition of terms used in this article

Adaptation: modifying a change that has been tested and found to be effective in another care setting or patient population, such as an assessment method, a treatment method, a care practice (e.g. for sterilization), or a service delivery model.

Contextual factors: ‘may include the political, social, and organizational setting for the implementation of the intervention and include social support, legislations and regulations, social networks, and norms and culture’ [3]

Improvement change: a change to work practices or work organization that results in better patient outcomes and/or less waste, sometimes termed the ‘new better way of working’. Examples: increased compliance with best practice in hand hygiene; or a new way of organizing patient transitions from hospital to home.

Implementation action: actions to invite and enable people to take up and perform the new better way of working, such as training, performance feedback, or providing rewards or incentives.

Improvement method: a method to improve safety, quality and/or reduce waste, typically a systematic method or tool used by staff to collect and analyze data, plan and carry out a change, and spread the change if it is effective. An ‘improvement approach’, such as a quality breakthrough collaborative, combines a number of methods and tools.

Quality improvement investigation: using systematic methods to reduce bias in order to gather and analyze data that is useful for analyzing a quality or safety problem, or to describe or evaluate an improvement change or an improvement method.

Is adaptation inevitable or necessary?

Implementation science has studied the extent to which evidence-based guidelines, treatments or service delivery models are taken up by clinicians and healthcare services, and with what results. This research has found that fidelity to the original improvement change is challenging to establish and sustain, and that three factors appear to explain how completely the ‘new better way’ is implemented:

  • the complexity of the ‘new better way’: simple changes can be copied more easily than more complex changes, for example changes requiring more than one profession or service to change their practice or the organization of their work;

  • the structure and the strategy-actions of the program that are used to establish and support the take-up of the new better way: appropriate facilitation, training and other resources are usually necessary to support the take-up of the change. This has a cost, which may be recurring for some changes;

  • the context of the care practice that is to be changed: for example, if key healthcare organization leaders do not view the improvement as a high priority, then take-up is likely to be low (senior leaders are sometimes considered part of the ‘internal context’); or if the healthcare organization loses money by taking up the new better way, then the improvement might not be sustained (the financial system is one part of the ‘external context’).

Example

Hospital ‘rapid response teams’ (RRTs) or ‘medical emergency teams’ (METs) provide examples to illustrate why, how, and with what effect local projects adapt improvement changes and some of the issues that improvers and researchers may need to consider.

Research in the late 1980s began to report a tested intervention that provided staff in hospital units with a way of seeking expert help for patients who appeared to be deteriorating and might otherwise need admission to an intensive care unit (ICU). Normally, staff would contact the senior experienced nurse or doctor assigned to the unit to seek help. There were times when such staff were not available, and providing the option to call a rapid response team (RRT) for expert advice appeared to result in reduced ICU admissions and reduced morbidity.

A large body of research has been undertaken over the last 20 years into this intervention [4]. What this research reveals is instructive for answering the questions noted above about adapting an originally tested improvement change, how context influenced the adaptations, and whether the adaptations were effective. It originally appeared that what was necessary for successful improvement was the composition of the team, the specification of the signs of deterioration and when to call the RRT for advice, and the training for staff in the nursing units. With publication and reports of this improvement change, many hospitals developed and implemented RRTs over the following 20 years. The composition of the teams varied considerably, as did the strategies for implementing them: the number of members and type of professions, whether physician- or nurse-led, the hours available, the equipment and medications brought to the bedside, and access to other specialists or critical care units. Many response teams were formed using existing resources within hospital systems.

For the purpose of this paper we can note that the one feature of this multifaceted improvement that remained relatively constant across the many implementations was the set of statements about signs of deterioration that staff were trained to use to guide their decisions to call the RRT. Most other features of the ‘RRT improvement change’ were adapted to the local setting and to fit the staffing, culture and other aspects of the hospital context: for example, whether to provide a 24-hour, 7-day response team or one available only at night, and the arrangements for contacting the RRT rather than a senior clinician. The statements about signs of deterioration were similar across sites because human physiology is relatively similar. Methods for training and reminding staff about these signs differed across sites. Even more varied were the other elements of the improvement intervention labeled by the term ‘rapid response team’.
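
To make the distinction concrete, the sketch below separates the relatively constant element of the RRT example (the deterioration trigger criteria) from the locally adapted elements (team composition, hours, contact route). It is a hypothetical illustration only: the thresholds, field names and configuration options are invented for this example and are not taken from any published RRT protocol or early-warning score.

```python
# Hypothetical sketch: the deterioration criteria (relatively constant across
# sites) are kept separate from the locally configurable elements of an RRT
# service (team make-up, hours, contact route), which varied widely.
from dataclasses import dataclass

# Invented thresholds for illustration only, not a published early-warning protocol.
DETERIORATION_TRIGGERS = {
    "respiratory_rate": lambda v: v < 8 or v > 28,   # breaths/min
    "systolic_bp": lambda v: v < 90,                 # mmHg
    "heart_rate": lambda v: v < 40 or v > 130,       # beats/min
    "conscious_level_drop": lambda v: v is True,
}

@dataclass
class LocalRrtConfiguration:
    """Elements that, in the RRT example, tended to be adapted to local context."""
    team_members: list[str]
    physician_led: bool
    hours: str           # e.g. "24/7" or "nights only"
    contact_route: str   # e.g. "dedicated pager" or "switchboard"

def should_call_rrt(observations: dict) -> bool:
    """Return True if any trigger criterion is met for the observed values."""
    return any(
        name in observations and trigger(observations[name])
        for name, trigger in DETERIORATION_TRIGGERS.items()
    )

# Usage: the trigger logic is shared; the service configuration is local.
local_config = LocalRrtConfiguration(
    team_members=["ICU nurse", "respiratory therapist"],
    physician_led=False,
    hours="nights only",
    contact_route="dedicated pager",
)
if should_call_rrt({"respiratory_rate": 30, "systolic_bp": 110}):
    print("Trigger met: call the rapid response team")
```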

Approaches to scaling up

The research and the literature on experience of scale-up programs give some guidance on how to enable successful take-up of evidence-based improvements that have to be copied exactly, for example by changing the context and/or designing the implementation structure and strategy to ensure that implementation achieves fidelity to the tested original [5–8].

For some well-researched improvement changes, it is possible to plan and guide systematically the scale-up of improvements that need to be implemented with fidelity. However, many improvement changes have only been tested in one setting and with one type of patient group: we do not know from research whether they might be effective in another setting or with another patient group. One way forward is to try to implement the improvement in different settings or with another patient group, and to study the results using either conventional research methods or quality improvement testing methods. This approach can also involve adaptation or iteration by the implementing team [9].

Different types of adaptations have been described by Stirman et al. (2013), such as modifying the training, modifying how practices are performed (e.g. using a different cleaning material from the one used in the original trial because that material is not available locally), or changing a payment or reporting arrangement from the original [10]. This and other frameworks can be used to guide project teams and researchers in both planning and documenting adaptations of the improvement change.
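
As a concrete illustration of how a project team might document an adaptation in a structured way, the sketch below records what was modified, by whom and why. The field names and example values are hypothetical: they only loosely echo the kinds of questions such frameworks ask and are not the published Stirman et al. coding system.

```python
# Hypothetical sketch of a structured record for documenting an adaptation.
# Field names and values are illustrative, not the Stirman et al. coding system.
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class AdaptationRecord:
    improvement_change: str   # the tested change being implemented
    element_modified: str     # e.g. "training", "materials", "payment arrangement"
    description: str          # what was actually changed
    reason: str               # contextual driver, e.g. "material unavailable locally"
    decided_by: str           # e.g. "local project team", "frontline staff"
    planned: bool             # planned in advance or discovered during implementation
    date_recorded: date = field(default_factory=date.today)

# Example entry echoing the cleaning-material adaptation mentioned above.
record = AdaptationRecord(
    improvement_change="surface-cleaning protocol",
    element_modified="materials",
    description="substituted a locally available cleaning agent",
    reason="agent used in the original trial is not available locally",
    decided_by="local project team",
    planned=True,
)
print(asdict(record))
```

Keeping such records alongside outcome data would allow a team, or a later researcher, to relate specific adaptations to whether the improvement remained effective.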

Other approaches to scale up and learning about improvement

What did we learn from the seminar to answer some of the questions about fidelity and adaptation in scale-up, and how to study these? First, the question about the effectiveness of adaptation: if some elements of an improvement change are not followed, is the change no longer effective? One answer is that it depends on the type of improvement change and how precisely the change is described. The seminar discussed many improvement changes: physicians prescribing medications more appropriately, changes to improve antenatal care in Uganda, and changes to improve parenting or violence prevention. Implementation of some may be much more dependent on context; some may be more ‘context sensitive’ [11].

We observe that some improvement changes are described precisely enough to allow replication (‘specified improvement changes’), while others are more conceptual, such as the chronic care model [2] (‘change models’). One challenge is that descriptions of quality improvement changes, even in research reports, may not be precise enough to allow replication, which also means that it is not possible to judge whether others have adapted them. Two conclusions are that we need better guidance for researchers and quality improvers on describing an improvement change precisely, and that this guidance might differ for different types of improvement. Guidance on documentation for researchers is already available, but some of it does not distinguish the improvement change from the implementation methods [12–16].

If the adaptation is well described, a second consideration is how to evaluate it so as to judge whether the change is still effective. This relates to the question about understanding how context affects adaptation: which elements of different improvement changes could be, or need to be, adapted to be effective in different contexts? One conclusion is that a variety of organizational and external context influences affect implementation [17–20]. These frameworks do not give implementers guidance on how to adapt the improvement change to their context, but they do show factors that implementers need to consider when developing their implementation plans.

Another conclusion was that different context factors influence the implementation and effectiveness of improvement changes in different ways. Many research-based context assessment tools aggregate findings from a range of studies of different interventions and so give only general guidance. For example, an improvement that relies on adding a decision-support prompt to an electronic medical record needs the host organization to have the ‘context’ of an information system that can accommodate the decision-support software, as well as information technology experts with the time and skills to add this software to the system. The context influences on an improvement to prevent falls in a nursing home, or infections in an intensive care unit, will be different.

It was clear from the seminar that educational material would need to be adapted if the language of the clients or staff differed from the language of those involved in the original test of the improvement change. In general, adaptation of different elements to make them culturally appropriate appears to be necessary, and there is some guidance about how to carry out such adaptations [21, 22].

Conclusions

The seminar and this paper sought to contribute knowledge that would help both research and practical care improvement by identifying questions, and giving some answers, about adaptation and context. Adaptation of some improvements is likely to decrease their effectiveness. We know a little from research about which improvements appear to require exact fidelity, and about the high cost of the infrastructure needed to ensure implementation with fidelity. Less is known about sustainment. In some cases, it is clear that adaptations, such as translating educational materials, are necessary. Recent research has also developed frameworks describing elements of context that may influence implementation and that may explain, or require, adaptations of improvement changes. In addition, it may be possible to improve descriptions of the adaptations made by different project teams in order to assess whether these adaptations are more or less effective in different contexts [23, 24]. Project teams can also assess the effectiveness of their changes, and this can provide additional data to develop answers to the question of when, why and how to adapt improvements to context. Improvers, funders and researchers at the seminar all agreed that improvements, especially in scale-up programs, could be more effective if we had better answers to these questions. There are now excellent opportunities to develop answers using recently developed research methods, tools and forms of partnership research.

Acknowledgements

Julie Reed would like to acknowledge the funding of the Health Foundation and the National Institute for Health Research (NIHR) under the Collaborations for Leadership in Applied Health Research and Care (CLAHRC), which made it possible for her to contribute to the workshop and to writing this paper. The views expressed in this publication are those of the authors and not necessarily those of the Health Foundation, the NHS, the NIHR, the Department of Health, or any of the organizations with which the authors are affiliated.

References

  • 1. WHO World Alliance for Patient Safety. WHO Guidelines for Safe Surgery. Geneva: World Health Organization, 2008.
  • 2. Wagner EH, Austin BT, Davis C et al. Improving chronic illness care: translating evidence into action. Health Aff (Millwood) 2001;20:64–77.
  • 3. Brownson RC, Colditz GA, Proctor EK (eds). Dissemination and Implementation Research in Health: Translating Science to Practice. New York: Oxford University Press, 2012.
  • 4. Maharaj R, Raffaele I, Wendon J. Rapid response systems: a systematic review and meta-analysis. Crit Care 2015;19:254. doi: 10.1186/s13054-015-0973-y.
  • 5. Cohen D, Crabtree B, Etz R et al. Fidelity versus flexibility: translating evidence-based research into practice. Am J Prev Med 2008;35:S381–9.
  • 6. Davidson EM, Liu JJ, Bhopal R et al. Behavior change interventions to improve the health of racial and ethnic minority populations: a tool kit of adaptation approaches. Milbank Q 2013;91:811–51. doi: 10.1111/1468-0009.12034.
  • 7. Massoud MR, Donohue KL, McCannon CJ. Options for Large-scale Spread of Simple, High-impact Interventions. Technical Report, USAID Health Care Improvement Project. Bethesda, MD: University Research Co., 2010.
  • 8. Massoud MR, Nielsen GA, Nolan K et al. A Framework for Spread: From Local Improvements to System-Wide Change. IHI Innovation Series white paper. Cambridge, MA: Institute for Healthcare Improvement, 2006. Available at www.IHI.org.
  • 9. Hoffmann TC, Glasziou PP, Boutron I et al. Better reporting of interventions: Template for Intervention Description and Replication (TIDieR) checklist and guide. Br Med J 2014;348:g1687.
  • 10. Stirman S, Miller C, Toder K et al. Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implement Sci 2013;8:65. http://www.implementationscience.com/content/8/1/65.
  • 11. Øvretveit J. Understanding the conditions for improvement: research to discover which context influences affect improvement success. BMJ Qual Saf 2011;20:i18–23.
  • 12. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci 2013;8:1–11.
  • 13. Ogrinc G, Davies L, Goodman D et al. SQUIRE 2.0 (Standards for Quality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf 2016;25:986–92. doi: 10.1136/bmjqs-2015-004411.
  • 14. Des Jarlais DC, Lyles C, Crepaz N, the TREND Group. Improving the reporting quality of nonrandomized evaluations: the TREND statement. Am J Public Health 2004;94:361–6.
  • 15. Proctor E, Silmere H, Raghavan R et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health 2011;38:65–76.
  • 16. Chamberlain P, Brown CH, Saldana L. Observational measure of implementation progress in community based settings: the stages of implementation completion (SIC). Implement Sci 2011;6:116.
  • 17. Damschroder L, Aron D, Keith R et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009;4:50.
  • 18. Helfrich C, Li Y, Sharp N et al. Organizational readiness to change assessment (ORCA): development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci 2009;4:38. doi: 10.1186/1748-5908-4-38.
  • 19. McCormack B, Kitson A, Harvey G et al. Getting evidence into practice: the meaning of ‘context’. J Adv Nurs 2002;38:94–104.
  • 20. Weiner BJ, Amick H, Lee SD. Conceptualization and measurement of organizational readiness for change: a review of the literature in health services research and other fields. Med Care Res Rev 2008;65:379–436.
  • 21. Castro F, Barrera M, Holleran Steiker L. Issues and challenges in the design of culturally adapted evidence-based interventions. Annu Rev Clin Psychol 2010;6:213–39.
  • 22. Castro F, Barrera M, Martinez C. The cultural adaptation of prevention interventions: resolving tensions between fidelity and fit. Prev Sci 2004;5:41–5.
  • 23. Øvretveit J, Leviton LC, Parry GJ. Increasing the generalisability of improvement research with an improvement replication programme. BMJ Qual Saf 2011;20:i87–91.
  • 24. Ovretveit J, Garofalo L, Mittman B. Scaling up improvements: increasing healthcare effectiveness more rapidly, and the research agenda. Int J Qual Health Care 2017;29:1014–9.
