BMJ Qual Saf. 2016 Dec 16;26(7):578–582. doi:10.1136/bmjqs-2016-006143

What we know about designing an effective improvement intervention (but too often fail to put into practice)

Martin Marshall 1, Debra de Silva 2, Lesley Cruickshank 3, Jenny Shand 2, Li Wei 4, James Anderson 2
PMCID: PMC5502247  PMID: 27986901

Intervening to change health system performance for the better

It is temptingly easy to treat improvement interventions as if they are drugs—technical, stable and uninfluenced by the environment in which they work. Doing so makes life so much easier for everyone. It allows improvement practitioners to plan their work with a high degree of certainty, funders to be confident that they know what they are buying and evaluators to focus on what really matters—whether or not ‘it’ works.

But of course most people know that life is not as simple as that. Experienced improvers have long recognised that interventions—the specific tools and activities introduced into a healthcare system with the aim of changing its performance for the better1—flex and morph. Clever improvers watch and describe how this happens. Even more clever improvers plan and actively manage the process in a way that optimises the impact of the improvement initiative.

The challenge is that while most improvers (the authors included) appreciate the importance of carefully designing an improvement intervention, they (we) rarely do so in a sufficiently clever way. In this article, we describe our attempts as an experienced team of practitioners, improvers, commissioners and evaluators to design an effective intervention to improve the safety of people living in care homes in England. We highlight how the design of the intervention, as described in the original grant proposal, changed significantly throughout the initiative. We outline how the changes that were made resulted in a more effective intervention but how our failure to design a better intervention from the start reduced the overall impact of the project. Drawing on the rapidly expanding literature in the field and our own experience, we reflect on what we would do differently if we could have our time again.

A practical case study—an initiative to improve the safety of people living in care homes

A growing number of vulnerable older people are living in care homes and are at increased risk of preventable harm. We carried out a safety improvement programme with a linked participatory multimethod evaluation2 in care homes in the south east of England. Ninety homes were recruited in four separate cohorts over a 2-year period. Our aim was to reduce the prevalence of three of the most common safety events in the sector—falls, pressure ulcers and urinary tract infections—and thereby to reduce unnecessary attendances at emergency departments and admissions to hospital.

In the original proposal submitted to the funding body, we described a multifaceted intervention comprising three main elements:

  1. The measurement and benchmarking of (i) the prevalence of the target safety incidents using a nationally designed tool called the NHS Safety Thermometer3 and (ii) rates of emergency department attendances and hospital admissions using routinely collected data.

  2. Training in quality improvement methods provided initially by a team of NHS improvement advisors and then, using a ‘train the trainer’ model, by practitioners working with or in the care homes.

  3. The use of a specially adapted version of the Manchester Patient Safety Framework4 (Marshall M, de Silva D, Cruickshank L, et al. Understanding the safety culture of care homes: insights from the adaptation of a health service safety culture assessment tool for use in the care home sector; submitted to BMJ Qual Saf, August 2016), a formative assessment tool which gives frontline teams insights into their safety culture.

The intervention was underpinned by a strong emphasis on support and shared learning using communities of practice and online resources facilitated by the improvement team.

The programme theory hypothesised that the three main elements of the intervention (benchmarking, learning improvement skills and cultural awareness) would reduce the prevalence of safety events, that this would lead to a reduction in emergency department attendances and hospital admissions and that both outcomes would reduce system costs as well as improving the quality of care for residents. The intervention was co-designed by improvement researchers in the evaluation team, the improvement team in the local government body responsible for commissioning care home services and a senior manager of one of the participating care homes. The design was influenced by a combination of theory, published empirical evidence and the personal knowledge and experience of the commissioners and care home manager.
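For readers who find it helpful to see the logic model laid out explicitly, the short Python sketch below paraphrases the hypothesised causal chain as a plain data structure. It is our own rendering of the text above, not an official programme artefact, and the stage labels are ours.

  # The programme theory as a simple ordered chain, paraphrased from
  # the text above; stage names and wording are our own, not an
  # official PROSPER artefact.
  logic_model = [
      ("inputs", ["benchmarking (Safety Thermometer, routine data)",
                  "training in quality improvement methods",
                  "safety culture assessment (adapted MaPSaF)"]),
      ("intermediate outcome", ["reduced prevalence of falls, pressure "
                                "ulcers and urinary tract infections"]),
      ("downstream outcomes", ["fewer emergency department attendances",
                               "fewer hospital admissions"]),
      ("impacts", ["lower system costs",
                   "better quality of care for residents"]),
  ]

  # Walk the hypothesised causal chain in order.
  for stage, items in logic_model:
      print(f"{stage}: " + "; ".join(items))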

We built in a 6-month preparatory period at the start of the programme, prior to implementing the intervention with the first cohort of care homes. This period was used to recruit staff, establish the project infrastructure and build relationships between the care homes and the improvement and evaluation teams. Only when the programme formally started did we begin to expose some of the deficiencies in the planned intervention. Table 1 describes each component of the intervention, whether it was part of the original plan or introduced at a later stage and, based on our participatory evaluation, how it was implemented and the extent to which it was used.

Table 1.

The original intervention and how it evolved

Intervention component | Original/added later | Ways in which the component was implemented | Extent to which the component was used
NHS Safety Thermometer (NHS-designed and owned online tool for collecting process and outcomes data) | Original | Implemented with the first cohort and offered to all of the second cohort, then replaced by the Safety Cross and Monthly Mapping tools (see below) | 66% of first cohort homes tried the Safety Thermometer; about one-third input data
Active involvement of staff, residents and relatives in sharing data and co-creating improvement solutions | Original | Staff were initially slow to share data but became enthusiastic as the project progressed; residents and relatives were rarely actively involved, although project details and data were displayed on public notice boards in most homes | Fewer than 10% of first cohort homes shared Safety Thermometer data; 80% of homes used the Safety Cross and displayed it for staff, residents and families to see; 60% displayed graphs from the Monthly Mapping tool
Training for care home staff in improvement methodologies | Original | Quality improvement training was provided initially by NHS staff, then adapted and provided by the improvement team | All homes took part in training; in the first cohort this was chiefly home managers, but in subsequent cohorts some senior carers also attended
Participants able to deliver the training to peers (train-the-trainer) | Original | A formal train-the-trainer model was not implemented, though local advocates ('champions') were encouraged to roll out learning to others | Champions were found to work well in spreading learning informally
Intervention toolkit containing a compendium of evidence-based interventions for each of the domains of the Safety Thermometer | Original | Toolkit with worksheets and information sheets developed | All homes received a hard copy and an online version; unclear how much the first cohort used them; the toolkit was dropped when the Safety Thermometer was replaced by the Safety Cross
Safety culture assessed using the MaPSaF tool at three time points (before, during and after PROSPER), using the tool to understand and address barriers to change | Original | MaPSaF revised and tested in different ways with various cohorts | Use was not prioritised by the improvement team or by the homes; a small number of homes actively used it; progressively more significant changes were made to the tool for each cohort to make it more relevant
Communities of practice | Original | Three community of practice events held throughout the project | Between a half and two-thirds of homes attended the events
Improvement tools and case studies uploaded to a resource tool for peer learning | Original | Knowledge hub set up and documents uploaded periodically, mainly copies of material sent by email | 10% of homes signed up and none of them posted information
Ongoing support from the improvement team including meetings, visits and telephone conferences | Original | Facilitators visited homes with varied frequency; during the intensive phase some homes were visited monthly and others every 3–4 months; group telephone conferences were not used | Some homes received regular support and others did not; some reported no contact with their allocated improvement adviser for 6 months
'Safety Cross' for displaying information about monthly incidents, replacing the Safety Thermometer (see above) | Addition | Used from cohort two onwards, then also rolled out to cohort one | About 80% of homes reported using it
'Monthly Mapping tool' using graphs of monthly data to track changes over time and compare averages (see the sketch below the table) | Addition | All homes were invited to provide data about the monthly incidence of harms; from cohort three onwards, homes were given access to an online tool | About 60% of homes provided some data; one-quarter used the tool regularly without prompting
Provision of resources such as information posters, certificates of training, mirrors to view pressure ulcers and other tangible items | Addition | Resources developed ad hoc; homes were offered tools during community of practice events and visits | Variable uptake depending on focus; resources appeared to be highly appreciated
Provision of additional training beyond improvement methods courses, such as training in infection control and pressure ulcers | Addition | Twenty-six training sessions run | About 50% of homes participated
Coordination with partner organisations in the NHS | Addition | Varied by geographical area | Varied by geographical area
Monthly newsletter | Addition | Sent monthly to participating homes | 60% of home managers reported reading it


MaPSaF, Manchester Patient Safety Framework.
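To make the Monthly Mapping idea concrete, the short Python sketch below shows one way a home's monthly harm counts could be charted against before-and-after averages. It is a minimal illustration under our own assumptions: the counts are invented rather than PROSPER data, and the actual online tool may have worked quite differently.

  # Hypothetical monthly fall counts for one care home; these numbers
  # are invented for illustration and are not PROSPER data.
  import matplotlib.pyplot as plt

  months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
            "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
  falls = [9, 7, 8, 10, 6, 7, 5, 6, 4, 5, 5, 4]

  baseline_mean = sum(falls[:6]) / 6   # average of the first 6 months
  later_mean = sum(falls[6:]) / 6      # average of the last 6 months

  fig, ax = plt.subplots()
  ax.plot(months, falls, marker="o", label="Falls per month")
  ax.axhline(baseline_mean, linestyle="--",
             label=f"Baseline mean ({baseline_mean:.1f})")
  ax.axhline(later_mean, linestyle=":",
             label=f"Later mean ({later_mean:.1f})")
  ax.set_ylabel("Recorded falls")
  ax.set_title("Monthly falls with period averages (illustrative)")
  ax.legend()
  plt.show()

Plotting the run of monthly counts alongside period averages is what lets a home see at a glance whether an apparent change is signal or noise.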

The evaluation found that four of the nine original components of the intervention were not implemented as planned and two were only partially implemented. Only three of the nine were implemented in line with the original proposal. Five of the six new intervention components, designed and implemented while the initiative was taking place, were fully implemented. Qualitative evaluative data, collected using interviews, surveys and observations, demonstrated changes in the attitudes of frontline staff to safety and changes in their working practices. However, quantitative data suggested only small and variable changes of questionable statistical significance in the prevalence of safety incidents, and no impact on the rising background rates of emergency department attendances and hospital admissions.
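To illustrate what 'questionable statistical significance' can mean at this scale, the Python sketch below applies one conventional approach, a two-proportion z-test, to invented before-and-after counts. It was not part of the evaluation's actual analysis, and the numbers are ours, not the programme's.

  # Compare safety-event prevalence before and after an intervention
  # using a two-proportion z-test. All counts are invented.
  from statsmodels.stats.proportion import proportions_ztest

  events = [120, 108]        # hypothetical events before and after
  residents = [1500, 1500]   # hypothetical residents surveyed each time

  z_stat, p_value = proportions_ztest(count=events, nobs=residents)
  print(f"z = {z_stat:.2f}, p = {p_value:.3f}")  # approx z = 0.83, p = 0.41
  # A drop from 8.0% to 7.2% in samples of this size is nowhere near
  # conventional significance thresholds.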

Success or failure?

Perhaps we should not be too hard on ourselves. On the surface at least, our intervention was more sophisticated than those seen in most improvement projects.5 The multifaceted intervention had complementary measurement, educational and culture-change elements and was co-designed by a wide group of stakeholders, including a practitioner and experienced improvement science academics. We based the design on a reasonable programme theory and an explicit logic model. We recognised the need to adapt off-the-shelf tools to the local context and to build in a preparatory period prior to formally evaluating the intervention. And we purposefully chose a participatory and formative evaluation model to support a feedback cycle as the initiative progressed.

As a project team, we thought that we had designed the original intervention thoughtfully and carefully, but the findings of our evaluation suggested that we could have done a lot better. Reflecting towards the end of the programme, we considered a number of possible explanations: we did not put enough time and effort into designing the intervention; we designed a sound intervention which was not implemented sufficiently well, or was implemented without an adequate understanding of the context; and our expectations that an intervention at such an early stage of development would have a significant impact were naïve. We then revisited the literature to examine these hypotheses.

What the literature suggests we should have done

There is no shortage of increasingly sophisticated theory, empirical evidence and learned commentary that could have guided our design decisions. Much of the thinking about interventions is relatively new; a state-of-the-art review of improvement published in the Lancet more than 15 years ago made no specific reference to the ways in which interventions morph when applied in practice.6 In contrast, more recent international guidance on designing, conducting and writing up improvement projects highlights the importance of describing how improvement interventions change.7 In brief, a number of themes relating to the design of effective interventions are emerging in the literature.

First, the literature highlights the importance of using theory ('a chain of reasoning') to optimise the design and effectiveness of interventions.8 A commonsense rather than an overly academic approach to theory is advocated as a way of reducing the risk of 'magical thinking', which encourages improvers to use interventions that look superficially attractive but whose mechanisms of action are unclear.8 9 Alongside the use of theory, there is growing interest in the application of 'design thinking' as a strategy for ensuring that the problem has been clearly identified and as a way of addressing complex problems in rapidly changing environments.10 Second, the literature describes the importance of having an explicit method, such as the Institute for Healthcare Improvement's Model for Improvement with its Plan-Do-Study-Act cycles, and of understanding how to use such methods to their full potential.11 Third, there is a growing emphasis on the extent to which improvement interventions are social as well as technical in nature, and on how their effectiveness is a consequence of a complex interaction between people, organisational structures and processes.12 13 Fourth, the literature describes how what people do (intervention), how they do it (implementation) and the wider environment (context) are interdependent, and some suggest that the traditional differentiation between the elements of this classic triad is no longer helpful.14

Fifth, there is a growing consensus that improvement efforts are being evaluated too early in their development and, as a consequence, are being judged unfairly as ineffective.15 16 Instead, there are calls for interventions to be categorised according to the 'degree of belief' that they will work,16 a belief that strengthens as a project progresses. Interventions in the early 'innovation' phase should be evaluated using different methods from those in the later 'testing' or 'spread' phases. They may also have a different intent; for example, changes in behaviour may be counted as 'success' before measurable changes in outcome are achieved. Sixth, drawing on the expanding field of knowledge mobilisation,17 18 experts are calling for a more active process of co-design of improvement initiatives involving service users, practitioners, improvers and academics, with all of these stakeholders contributing to participatory models of evaluation.19

What we would do differently

Having reviewed the literature, we concluded that each of our post hoc hypotheses was a reasonable explanation for results that, while not uncommon in the field of improvement, were nevertheless disappointing. In future, we will put more effort into designing the intervention from the very start. We will think through the design issues in sufficient detail not only to persuade the funder to support the project but also to persuade ourselves that it will work in practice. We will describe a programme theory in greater detail, based on a better understanding of the contextual factors which could affect the feasibility and effectiveness of the initiative, and we will use design thinking to rigorously frame the problem from the start.

We will work through, in more detail and more systematically, how current thinking about intervention design applies to our project. We will build in a similar or even longer preparatory period and will use that period to test and refine the intervention. We will not rely on a single senior care home manager to provide a practitioner view for the original proposal; instead, we will seek a wide range of views from frontline staff and from care home residents in an inclusive and iterative way. We will not assume that the intervention can be implemented as described in the proposal, and we will be more sensitive to the resource constraints under which the improvement team and the care homes are operating.

If we do all of this, the outcome will almost certainly be better.

Final reflections

Improvement initiatives are sometimes planned on the hard high ground, but they are put into effect in the swampy lowlands.20 As we are more than aware, frontline practice is messy. And as we have described in this paper, it is never possible to do things perfectly; good improvers are always learning. But as the improvement movement matures, we are getting to the stage where we could and should be doing better. Improvement needs to be seen as a professional rather than an amateur sport. It is incontestable that improvement interventions are not like drugs or medical devices, and that flexibility needs to be built into their design and delivery. But it is no longer acceptable to use the need for flexibility as an excuse for a lack of thought and planning. As improvement becomes more rigorous, perhaps improvement practitioners will be able to plan their work with a higher degree of certainty, funders will be more confident that they know what they are buying, and evaluators will be able to focus on whether and how 'it' works.

Footnotes

Twitter: Follow Martin Marshall at @MarshallProf

Contributors: MM led the development of the ideas presented in this paper. All authors contributed to the development and writing of the paper.

Funding: Health Foundation.

Competing interests: None declared.

Provenance and peer review: Not commissioned; externally peer reviewed.

References

  1. Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 2014;348:g1687. doi:10.1136/bmj.g1687
  2. https://www.ucl.ac.uk/pcph/research-groups-themes/isl-pub/current-projects/prosper (accessed 22 Aug 2016).
  3. NHS Safety Thermometer. http://digital.nhs.uk/thermometer (accessed 14 Sep 2016).
  4. Kirk S, Parker D, Claridge T, et al. Patient safety culture in primary care: developing a theoretical framework for practical use. Qual Saf Health Care 2007;16:313–20. doi:10.1136/qshc.2006.018366
  5. Dixon-Woods M, McNicol S, Martin G. Ten challenges in improving quality in healthcare: lessons from the Health Foundation's programme evaluations and relevant literature. BMJ Qual Saf 2012;21:876–84. doi:10.1136/bmjqs-2011-000760
  6. Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients' care. Lancet 2003;362:1225–30. doi:10.1016/S0140-6736(03)14546-1
  7. Ogrinc G, Davies L, Goodman D, et al. SQUIRE 2.0 (Standards for Quality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf 2015. doi:10.1136/bmjqs-2015-004411
  8. Davidoff F, Dixon-Woods M, Leviton L, et al. Demystifying theory and its use in improvement. BMJ Qual Saf 2015;24:228–38. doi:10.1136/bmjqs-2014-003627
  9. Dixon-Woods M, Bosk CL, Aveling EL, et al. Explaining Michigan: developing an ex post theory of a quality improvement programme. Milbank Q 2011;89:167–205. doi:10.1111/j.1468-0009.2011.00625.x
  10. Dorst K. The core of 'design thinking' and its application. Des Stud 2011;32:521–32. doi:10.1016/j.destud.2011.07.006
  11. Reed JE, Card AJ. The problem with Plan-Do-Study-Act cycles. BMJ Qual Saf 2016;25:147–52. doi:10.1136/bmjqs-2015-005076
  12. Dixon-Woods M, Suokas A, Pitchforth E, et al. An ethnographic study of classifying and accounting for risk at the sharp end of medical wards. Soc Sci Med 2009;69:362–9. doi:10.1016/j.socscimed.2009.05.025
  13. Mannion R, Konteh FH, Davies HTO. Assessing organisational culture for quality and safety improvement: a national survey of tools and tool use. Qual Saf Health Care 2009;18:153–6. doi:10.1136/qshc.2007.024075
  14. Hawe P, Shiell A, Riley T. Theorising interventions as events in systems. Am J Community Psychol 2009;43:267–76. doi:10.1007/s10464-009-9229-9
  15. Berwick D. The science of improvement. JAMA 2008;299:1182–4. doi:10.1001/jama.299.10.1182
  16. Parry GJ, Carson-Stevens A, Luff DF, et al. Recommendations for evaluation of health care improvement initiatives. Acad Pediatr 2013;13(Suppl):S23–30. doi:10.1016/j.acap.2013.04.007
  17. Davies HTO, Powell AE, Nutley SM. Mobilising knowledge to improve UK health care: learning from other countries and other sectors – a multimethod mapping study. Health Serv Deliv Res 2015;3.
  18. Ward V, Smith SO, House A, et al. Exploring knowledge exchange: a useful framework for practice and policy. Soc Sci Med 2012;74:297–304. doi:10.1016/j.socscimed.2011.09.021
  19. Marshall M, Pagel C, French C, et al. Moving improvement research closer to practice: the Researcher-in-Residence model. BMJ Qual Saf 2014;23:801–5. doi:10.1136/bmjqs-2013-002779
  20. Marshall M. Bridging the ivory towers and the swampy lowlands: increasing the impact of health services research on quality improvement. Int J Qual Health Care 2014;26:1–5. doi:10.1093/intqhc/mzt076
