Journal of the Royal Society of Medicine. 2013 Feb;106(2):45–50. doi: 10.1177/0141076812472622

Developing a science of improvement

Martin Marshall,1 James Mountford2
PMCID: PMC3569021  PMID: 23401635

Clinicians are accustomed to using scientific evidence when they make decisions about the care they provide for patients: they have a good idea of what blood pressure they should be aiming to achieve for patients with diabetes, and there is enough evidence about the outcomes of knee joint replacement surgery to enable an informed conversation with patients presenting with osteoarthritis. Information to guide clinical decisions about what to do, or what to advise patients to do, is available, comprehensible and increasingly easy to access.

But when people working in the health services have to make a decision about how to organize and deliver care, they find themselves in a very different situation. How do we ensure that care for patients with chronic obstructive pulmonary disease (COPD) follows national guidelines? How do we minimize the risk of a prescribing error? Evidence guiding decisions about how best to organize and deliver health services for patients very often does not exist, or is difficult to act upon.

This matters because we increasingly know that achieving the best possible results for patients depends both on the clinical acumen of individuals and teams and on the capability of the organizations in which care is delivered.1,2 For example, for nearly 20 years there has been evidence about the benefits of active treatment of acute stroke and the elements of service organization which enable this treatment to be implemented reliably for all patients.3 Despite this, the National Health Service (NHS) in London, UK, has only recently re-organized stroke services consistent with this evidence, and some other parts of the NHS have yet to take action.4 Many decisions about how care is organized and delivered are insufficiently influenced by rigorous and systematic evidence.5,6 As a result these decisions are poorer than they could be, and the price paid is worse patient outcomes and wasted resources.

The role of science

The sciences underpinning clinical decisions – the biomedical and clinical sciences – are well developed, and both the knowledge and the application of these sciences have become a defining feature of what it means to be a good professional.7 By contrast, the science underpinning the organization and delivery of care – the science of improvement – is relatively new and unfamiliar to most people working in health services. Healthcare is lagging behind many manufacturing and service industries in this respect.8

There are a number of reasons for the relative slowness of the health sector to embrace the science of improvement. First, the term ‘improvement science’ has been used overly narrowly to describe a specific set of process-oriented quality improvement methods8,9 rather than the broader interpretation of evidence-based improvement advocated in this paper and illustrated in Figure 1.

Figure 1. An example of improvement science in practice: reducing central line infections in intensive care units

Second, many people working in health services have only recently engaged with the evidence that there is an actionable gap between what they know they should do and what actually happens in practice,10 and have therefore seen no reason to engage with a science that provides potential solutions. Those people who do engage are more inclined to see the solution as one of stronger management rather than better management influenced by better science. Third, clinicians are trained and socialized in the biomedical tradition, but not in the disciplines of organization science. Many clinicians therefore find it difficult to relate to the bodies of knowledge underpinning the science of improvement, which draw strongly on the social sciences, are multidisciplinary and theory-driven, and often involve ideas and language that feel alien to medicine. Fourth, the nature and the quality of the data and evidence informing improvement often feel unfamiliar and substandard in relation to the ‘hard’ data and evidence that inform clinical practice.2 Finally, neither the professional nor the financial reward systems are sufficiently well developed to incentivize the best people to lead systems improvement.

But in addition to these explanations, there is a more practical barrier to building the science of improvement, and that is the extent to which the science underpinning improvement spans, and does not operate wholly within, the two parallel worlds that have a significant impact on healthcare delivery – the health service world and the academic world.

The health service world has a well-defined purpose manifest by recognized measures of success, such as improvements in health outcomes, user experience and the efficient use of resources. The activities that deliver this purpose range from systematic and data-driven actions at one end of the spectrum, to intuitive and pragmatic actions at the other.

The academic world also has a well-defined purpose, manifest by recognized measures of success, but these are quite different from those in the health service world. The academic world aims to create, teach and disseminate new knowledge, and it is driven by the need to generate grant income, to attract students and to produce high quality peer-reviewed publications. These drivers, like those in the health service sector, have a strong impact on the behaviours of those working in the sector. The activities that go on within the academic world range from the more biomedically oriented ones, which tend to be high profile and prestigious for the organization, to applied health services research.

The science of improvement does not exist exclusively in one or other of these worlds, but rather spans the two (Figure 2).

Figure 2. Framing the science of improvement

In doing so, it draws upon the expertise of those working in both worlds – the service improvers in the health system and health service researchers and other social scientists in academia – and aims to create a new space between the two, a space that encourages people to think differently about the problems that they are trying to solve.

Like the academic and health service worlds that it spans, the science of improvement also has a purpose and growing body of knowledge, though in the way in which it is framed in this paper, it does not yet have agreed measures of success. It aims to utilize academic expertise to improve the decisions made about the organization and delivery of care. Its body of knowledge draws on those of the academic and service sectors, in particular rigorous but pragmatic science, and the content knowledge and practical wisdom held by clinicians and managers providing care.

The similarities and differences between the traditional academic sector, the health service sector and the inter-linking science of improvement are summarized in Table 1.

Table 1. Comparing the academic sector, the health service sector and the science of improvement

Overarching purpose
  Academic sector: To create, teach and disseminate new knowledge
  Health service sector: To deliver safe, high quality and efficient care/public health for patients and populations
  Science of improvement: To use academic expertise to improve decisions made in the service about the organization and delivery of care

Body of knowledge
  Academic sector: Life, biomedical, clinical and social sciences
  Health service sector: Evidence-based medicine; content expertise and practical clinical and managerial wisdom
  Science of improvement: Combination of academic and service bodies of knowledge

Core activities
  Academic sector: Research, education, leadership
  Health service sector: Provision of clinical care, preventive care, and education and training
  Science of improvement: Improvement science projects and programmes, service evaluations, capacity building exercises, network development, innovation

Key or usual measures of success
  Academic sector: Grant income, peer-reviewed publications, number and success of students
  Health service sector: Measurable clinical and managerial processes and outcomes; financial performance
  Science of improvement: Some of the measures relating to the academic and service sectors, plus new measures relating to the interface between the two, to be determined

This framing of the science of improvement raises two questions about how it might be further developed: first, how should improvement science be judged and rewarded, and second, what do improvement scientists look like?

How should improvement science be judged and rewarded?

Improvement science will only become embedded and develop in the short term if it is judged by those who use it to be adding value to current practice. In these terms we suggest a three-part approach to developing measures of success for the science of improvement.

First, improvement science needs to respect traditional academic success criteria. If leading academics are to contribute to the development of the science then they will need to generate grant income, commit to developing the next generation of scientists and publish new findings in high quality peer-reviewed journals, building on a growing number of examples of studies that are both scientifically rigorous and practically useful.11–13 To achieve this, specialist journals in the field will have to grow their impact factors and circulations, and general journals will have to promote the use of the science of improvement with the same enthusiasm as they promote the biomedical and clinical sciences.

Second, the science will need to respect traditional health service success criteria. Put simply, health service organizations will only engage with the science of improvement if they think that it will deliver better clinical and experiential outcomes, improved financial performance and lead to greater capacity for improvement. If it does so, this will create a ‘pull’ for improvement science skills and would favour individuals and organizations which exhibit them.

Third, advocates of the science will need to create new measures of success which reflect the space between the traditional academic and health sectors. Relying on the established measures will not be enough. Traditional academic incentives are so strong and entrenched that even substantial shifts in core academic drivers (such as the decision in the UK to allocate 20% of the total score in the next university Research Excellence Framework to ‘impact’14) are, at least in the short term, less likely to generate a genuine change of mindset among academics than a change in compliance behaviours. In the same way, traditional health service incentives, combined with the increasing workload of clinicians and managers, threaten to stifle innovation by crowding out the capacity of people who work in the health service to contribute to innovative ways of thinking about how to deliver care differently.

Initially at least, the new measures should focus unashamedly on improving processes, in particular processes that drive different kinds of relationships between stakeholders. Examples might include the proportion of service-based improvement projects which involved academic partners from the start; or the proportion of research projects that have practitioners and patients as core members of their teams; or the involvement of academic disciplines which might not traditionally be involved in health service improvement, such as engineering or anthropology; or the number of research projects that have a clear strategy for implementation, which is then followed through in a demonstrable way. Improving these processes will help to develop a culture that values the application of scientific principles and methods to service improvement.

In addition to these organizational drivers, measures and incentives need to be developed to encourage individuals seeking careers in improvement science. This brings us to the second question.

What do improvement scientists look like?

If the two worlds that have an impact on patient care are so different in their aims and orientation, what might the people who inhabit the space in between these worlds look like? It is likely that two kinds of people will do so, which could be called ‘visitors’ and ‘residents’.

Visitors will be those working in either the health service or academia who are committed to majoring in the sector they currently occupy, but who want to work differently and see the need to create an environment that allows them and others to do so.

There are plenty of people working in the health service who recognize that the decisions that they make about the organization and delivery of care could be better if academic expertise was brought to bear. Specifically, when experienced academics join a service team they bring a deep understanding of the evidence in a particular field, a theoretical understanding of how change might be achieved at the level of individuals and organizations (a ‘theory of change’),15,16 an expertise in evaluative methods and a sophisticated understanding of how to use data to give new insights. This expertise needs to be negotiated with service decision-makers if it is to have impact.

Similarly, some academics want their work to have greater utility and impact. Some of these will always have taken an applied approach; others are more driven to make a practical difference later in their career, after they have demonstrated to themselves and their colleagues that they can ‘play the academic game’.

But in addition to these visitors it is likely that the improvement science space will require a cadre of permanent residents if it is to develop and thrive. These will be people who might have trained in and developed expertise in either the academic or health service worlds (or perhaps in a different environment) and now want to develop a new career track. This will require ‘improvement scientist’ to be as credible a career label as ‘clinician’, ‘clinical academic’, ‘manager’ or ‘health services researcher’ are today. A small number of people are starting to demonstrate that such a career option is possible and legitimate, including experienced academics who are choosing to work in more applied and more customer-focused environments, such as foundations and research consultancies. Leading research funding bodies17 and healthcare providers18 are demonstrating what can be achieved.

Conclusion

Improvement science has an important part to play in benefitting patients, populations and health systems by bringing the world of health service delivery together with the world of academia to drive ongoing improvement in quality and efficiency of care. For this to be realized improvement scientists must be welcomed by both academic and service sectors and their success judged against criteria that properly reflect what they are trying to achieve – the use of scientific principles and methods to address the practical challenges of delivering better healthcare for patients and populations.

DECLARATIONS

Competing interests

None declared

Funding

None declared

Ethical approval

Not applicable

Guarantor

MM

Contributorship

MM had the original idea for the paper and wrote the first draft. Both authors developed the concepts and contributed to the final draft of the paper.

Acknowledgements

Not applicable

References

  1. Bohmer R. The four habits of high-value healthcare organizations. N Engl J Med 2012;365:2045–47
  2. Batalden B, ed. Lessons Learned in Changing Healthcare. Toronto: Longwoods Publishing, 2010
  3. Indredavik B, Bakke F, Solberg R, Rokseth R, Haaheim LL, Holme I. Benefit of a stroke unit: a randomized controlled trial. Stroke 1991;22:1026–31
  4. Fraser A, Fenwick-Elliot S, Cohen D. The six steps to delivering better stroke care. Health Serv J 2012
  5. Walshe K, Rundall TG. Evidence-based management: from theory to practice in health care. Milbank Q 2001;79:429–57
  6. Shojania KG, Grimshaw JM. Evidence-based quality improvement: the state of the science. Health Aff 2005;24:138–50
  7. Stanton E, Lemer C, Marshall M. An evolution of professionalism. JRSM 2011;104:48–9
  8. Marshall M. Applying quality improvement approaches to health care. Br Med J 2009;339:b3411
  9. Powell AE, Rushmer RK, Davies HTO. A Systematic Narrative Review of Quality Improvement Models in Health Care. Edinburgh: Quality Improvement Scotland, 2009
  10. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003;348:2635–45
  11. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006;355:2725–32
  12. Feder G, Davies RA, Baird K, et al. Identification and referral to improve safety (IRIS) of women experiencing domestic violence with a primary care training and support programme: a cluster randomised controlled trial. Lancet 2011;378:1788–95
  13. Dixon-Woods M, Leslie M, Bion J, Tarrant C. What counts? An ethnographic study of infection data reported to a patient safety program. Milbank Q 2012;90:548–91
  14. http://www.ref.ac.uk/ (last checked 23 July 2012)
  15. Lipsey M. Theory as method: small theories of treatments. In: Sechrest P, Perrin E, Bunker J, eds. Research Methodology: Strengthening Causal Interpretations of Non-experimental Data. Washington, DC: Agency for Healthcare Policy and Research, 1990
  16. Alexander JA, Hearld LR. The science of quality improvement implementation: developing capacity to make a difference. Med Care 2011;49(Suppl):S6–20
  17. http://www.health.org.uk/areas-of-work/programmes/improvement-science-fellowships/ (last checked 23 July 2012)
  18. http://www.hopkinsmedicine.org/armstrong_institute/ (last checked 23 July 2012)
  19. Pronovost PJ, Goeschel CA, Colantuoni E, et al. Sustaining reductions in catheter related bloodstream infections in Michigan intensive care units: observational study. Br Med J 2010;340:c309
  20. Dixon-Woods M, Bosk CL, Aveling EL, Goeschel CA, Pronovost PJ. Explaining Michigan: developing an ex post theory of a quality improvement program. Milbank Q 2011;89:167–205
