Author manuscript; available in PMC: 2017 Jan 1.
Published in final edited form as: BMJ Qual Saf. 2016 Feb 24;25(7):485–488. doi: 10.1136/bmjqs-2016-005232

Patient safety and the problem of many hands

Mary Dixon-Woods 1, Peter Pronovost 2,3
PMCID: PMC4959572  EMSID: EMS68988  PMID: 26912578

Summary

Healthcare worldwide is faced with a crisis of patient safety: every day, everywhere, patients are injured during the course of their care. Notwithstanding occasional successes in relation to specific harms, safety as a system characteristic has remained elusive. We propose that one neglected reason why the safety problem has proved so stubborn is that healthcare suffers from a pathology known in the public administration literature as the problem of many hands. It is a problem that arises in contexts where multiple actors – organizations, individuals, groups – each contribute to effects seen at system level, but it remains difficult to hold any single actor responsible for these effects. Efforts by individual actors, including local quality improvement projects, may have the paradoxical effect of undermining system safety. Many challenges cannot be resolved by individual organisations, since they require whole-sector coordination and action. We call for recognition of the problem of many hands and for attention to be given to how it might best be addressed in a healthcare context.


Every day, everywhere, patients are injured during the course of their care.1–3 But the puzzle of how to keep patients safe has remained stubbornly difficult to solve, despite huge optimism, effort, investment, public pressure, and some occasional successes in relation to specific harms over the last 15 years or more.4 We suggest that one neglected reason for slow progress in patient safety lies in a pathology known in the public administration literature as the problem of many hands. First described by the political philosopher Dennis Thompson,5 the problem of many hands was originally developed in the context of public officials. His concern was the challenge of how responsibilities can be allocated for the decisions and policies of government when so many different officials contribute in so many ways that it is difficult to identify the causal contribution of any single individual. Summarised in the old aphorism that “if everyone is responsible, no-one is”, the idea is not a new one.6 But Thompson’s diagnosis, developed into the more general observation that a collective in its entirety may have responsibilities that cannot be attributed to any individual member of the collective,7 has stimulated new attention to this enduring conundrum.

The problem of many hands is now understood to arise in many contexts where multiple actors – organizations, individuals, groups – contribute to the performance seen at the system level, but no single actor can be held responsible for the overall outcome. These voids of responsibility may be highly consequential. System weaknesses may develop because of decisions and non-decisions that accumulate over long periods of time; because responsibility and authority for coordinating action to correct structural deficiencies is diffused, confused or absent; and because a profusion of localised practices and components erodes the integrity and functioning of the system as a whole. Eventually, catastrophe may erupt.

In his more recent work, Thompson has noted that “when many hands are involved, individuals who may bear some responsibility for harm are less likely to see what they do and less likely to be held responsible by others. The profusion of agents obscures the location of agency”.8 Understood in this way, the problem of many hands is not simply a restatement of the well-known economists’ problem of misaligned incentives between the multiple actors in a system (common in complex and diffuse fields such as healthcare). Instead, its emphasis is on the important tensions that may arise between individual and collective responsibility for adverse outcomes and how responsibility can be distributed in areas as diverse as climate change,9 engineering defects in large building projects,10 the financial crisis of the late 2000s, and the Deepwater Horizon disaster.8

Healthcare, characterized by autonomous, highly distributed and heterogeneous yet interdependent actors, is a paradigmatic example of the problem of many hands. Its actors include healthcare organizations and healthcare workers and their professional bodies and governmental agencies, but also manufacturers and suppliers of drugs and equipment, charities and foundations, patient advocacy groups, political representatives and political parties, insurers and payers, regulators and accreditors, professional associations, the legal system, information technology vendors, and many, many others. Such naturally-forming (rather than purposefully designed) networks typically find it difficult to coordinate their interactions,11 not least because the various actors may be rivalrous and lack shared commitments. They may experience intense conflicts over the nature of the problems they face, the goals to be met, the means by which these goals will be achieved, and who will take responsibility for delivering on those goals and be accountable if they are not met.12 Only rarely can a single individual or entity be held responsible for failures at the level of the collective. The overall effect is that the kind of system-level action needed to manage risk effectively is frustrated.

As is frequently observed, the healthcare example stands in vivid contrast to many sectors that have become safer over time, such as the oil, building, nuclear and aviation industries. These sectors have typically found ways to confront and manage these challenges, typically through developing mechanisms of coordination, harmonisation, and incentives for cooperation on safety that are robust to imperatives for competition. Such industries focus huge efforts at the level of the sector, agreeing on national or global standards and measures, harmonising technology, and using multiple techniques ranging from peer learning communities through to international standards and legal requirements.13,14 None of this prevents local learning in individual organisations; indeed, it may support and facilitate it. For instance, the existence of international standards on vehicle safety does not stop individual car manufacturers from continuing to innovate in the design of their automobiles. Yet the actors in healthcare systems have failed to organise themselves in this way. With some important exceptions focused on a specific problem – such as, for example, the work of the International Organization for Standardization on anaesthetic and respiratory equipment – they do not function as a collective whole or sector-like entity, but instead act as a collection of atomised individuals, responsible mainly for themselves and not the system as a whole.

These failures to act at a sector level in healthcare have persisted even as efforts to hold individual organisations (particularly providers) accountable have increased markedly. But demands for organisational accountability do not by themselves solve the problem of many hands: they may, instead, paradoxically exacerbate it by eroding the recognition that some problems need to be solved at a scale greater than the individual hospital or practice. The problem of scale alone makes it difficult for single organisations to address many safety issues effectively. For instance, the expertise to investigate and address many safety problems is so specialised and multidisciplinary that few organizations will have the skills or resources needed to conduct a robust investigation or design interventions that will mitigate risks. Local investigations of safety incidents are, accordingly, often conducted in ways that appear non-independent and amateurish in comparison with other high-risk industries that benefit from sector-wide expertise. In aviation, for instance, dedicated and highly skilled Commercial Aviation Safety Teams conduct sector-wide analyses of the major causes of preventable deaths that can inform the design of sector-wide solutions. In contrast, healthcare has clinicians and administrators conducting investigations, often with limited training in safety and often recommending weak interventions such as “re-education” as the risk reduction strategy.15 The problem is compounded by the failure in healthcare to share the learning from investigations: such learning often remains confined within the organisation where it occurred, the generalizable lessons neither generated nor implemented.16

Charging individual organisations with the responsibility for patient safety challenges may in fact reproduce exactly the same problem seen when individuals are blamed for system defects: organisations themselves are just one element of a much wider context, and cannot, acting individually, resolve many of the deep structural issues at the heart of the safety problem. Simply put, many safety challenges defy the capacity of any single healthcare organisation to resolve. Controlling the supply side of medical devices, for example, is not within the gift of any hospital. Yet these devices consistently violate the principles of human factors recognized as fundamental to safety in other industries, and they rarely facilitate the creation of the kinds of integrated systems best suited to serving the interests of patients and practitioners. Instead, hospitals have to assemble, painfully, multiple items of equipment and devices that arrive piecemeal from multiple sources that do not coordinate their activities. Cobbled-together, highly fallible systems that pose risks to patients persist in part because the kinds of imperatives and structures to support system-wide standards for usability and interoperability are lacking. As a result, health care relies overly on the heroic efforts of clinicians to ensure safety rather than the design of safe systems. The problem of many hands is deeply implicated: there is no mechanism for coordinating the actors and their incentives to ensure they produce a safe, integrated supply chain, and no single party to hold accountable when it fails.

Failures of coordination and integration in healthcare also contribute to the current arms-race of performance and quality metrics, the confusion and distraction it creates, and the diversion of resources into improvement efforts that are often ineffective and inefficient.17 Despite the massive burden of quality metrics, no valid mechanism exists to monitor how many patients die or are harmed as a result of sub-standard care, leaving the field open to widely varying and sometimes lurid claims.18 Yet again, the locus of responsibility for solving this problem remains obscure. Thus, the weaknesses of the collective obstruct the achievement of individual actors’ goals, even though all involved support those goals in principle.

The problem of many hands also means that even when individual actors are seeking to secure improvements, the multiplicity of actors and their failure to act in a coordinated way may increase the risks in the system. The recent proliferation of local quality improvement (QI) projects, though well-intentioned, perversely adds to the difficulties. Many projects rightly target poorly designed or functioning healthcare processes. QI projects seeking to address process defects have delivered important successes and will always be a critical element of organisations’ efforts to improve quality. But they are not a straightforward solution to safety.

First, local projects are prone to uniqueness bias (the often flawed assumption that every situation is singular and requires a different solution) and may wastefully start from scratch every time. A given hospital is rarely the first to have a problem with delayed recognition and management of sepsis, overuse of urinary catheters, communication and handoff errors, suboptimal use of the surgical checklist, or any number of other common targets for QI. Yet, because of the problem of many hands, system-level curation of safety measures, standards and solutions is lacking; it remains difficult even to find out how to assess the problem or what another organization has done that worked or did not work, and academic publishing norms remain ill-suited to this task. The result is that local teams waste time and energy in inventing solutions from scratch rather than customising solutions known to work. Second, because the skills and resources needed for safe design are rare and often unavailable to local QI teams, small “patches” are often used to fix safety issues, resulting in a corresponding failure to tackle the bigger, deeper problems.

Third, and perhaps most consequentially for safety, QI projects undertaken locally have a troubling tendency to create locally-specific work processes, routines and tasks that only apply in their context of origin and in so doing create new risks at the level of the healthcare collective. One basic problem, well-known in safety science, is that too many localized processes contribute to unwarranted variability across health systems. Locally-specific procedures and failures to harmonise safety procedures at the system level create the conditions for tragic outcomes, as occurred in the case of the last patient to die of inadvertent administration of vincristine by the intrathecal route in the UK.22 The implementation of electronic health records is increasingly making visible the underlying variability in clinical processes and practices, even across units in the same hospital.20 Some of this arises from variability in individual clinician preference (e.g. in relation to dosing for vasopressors and electrolytes) and requires resolution to be reached through multidisciplinary dialogue and engagement with the scientific evidence. Much more variability arises, however, from historically-reinforced patterns and norms that sustain poorly functioning processes rather than principled, purposeful, multi-stakeholder design.21

The paradox is that local QI projects may, unless well coordinated, reproduce or exacerbate the unwanted effects of highly variable processes and procedures by making improvements in local settings that undermine the safety of the system as a whole. Thus, for example, the hospital that seeks to improve safety by using red labelling for syringes containing muscle relaxants may well be able to demonstrate better local risk control in its own operating rooms, but introduce new system-level risks because doctors moving from this hospital to the next may depend on the visual cue and make errors if it is not there (or if a different colour is used). The chaos surrounding colour-coding of wristbands, with the same colours signifying different meanings in different contexts,19 similarly introduces risks at the level of the system that may occur at the same time as QI evidence may suggest improvement at the level of a single organisation. We have reached the limits of treating patient safety as something that can be solved provider by provider or through individual heroism. Quality improvement capacity will always retain an invaluable and indispensable role in organizations, but we need to acknowledge the risk that multiple ill-coordinated small-scale QI projects, substituting for sector-wide solutions, may degrade rather than improve the ability to achieve system-level change.

Arriving at a diagnosis of the many hands problem helps in clarifying the nature of the pathology, but it does not by itself suggest a therapy. Thompson himself is perhaps better at characterising the problem than solving it: his proposal, in the context of public administration systems, is that it is necessary to be able to identify individuals who knowingly and freely contribute to poor outcomes. Though it has some potential for some kinds of issues, this kind of individualist approach is likely to have many limitations (both practical and ethical) in the context of patient safety, at least in its current stage of development. What is clear is that healthcare now needs to assume collective responsibility. It needs to tackle its safety problems as a sector through coordinated, interdependent, and integrated action and collective, consensual solutions. The structures through which this may be achieved will, however, require much debate.

It is likely that much of what is needed is not coercive intervention by central governments or regulators, though that will play a role where needed: for a select group of challenges, perhaps especially those involving manufacturers and suppliers, something akin to a system integrator is needed,23 one with legally-backed authority. But a top-down, centrally-imposed dystopia of standardization and enforcement may not be the answer to many challenges that arise from the problem of many hands. Instead, much is likely to be achieved by making those in healthcare accountable to each other through more horizontal, cooperative structures.24 25 Such structures can accommodate professional groupings who can work together to agree on solutions that are satisfying, workable, informed by professional values and clinical expertise, capable of being customised for specific situations, and enforceable through peer sanctions. Much more thought needs to be given to finding the balance between global standards and local innovation, so that one facilitates the other; the key is that the kinds of strategy chosen should be thoughtfully selected and well-fitted to risks and contexts.

Recognising the problem of many hands may be the first step in fixing it. We call for attention to be given urgently to identifying the new structures and new accountabilities for a collective, system-level approach to protecting patients.

Acknowledgments

Funding Wellcome Trust Senior Investigator Award for Mary Dixon-Woods (WT097899).

Footnotes

Contributors MD-W conceived the idea for the paper and prepared a first draft. PJP made substantial contributions to revising this draft and providing additional insights. Both authors approved the final draft.

Competing interests MD-W is Deputy Editor-in-Chief of BMJ Quality and Safety.

Provenance and peer review: Not commissioned; internally peer reviewed.

References

1. Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363(22):2124–34. doi: 10.1056/NEJMsa1004404.
2. Provenzano A, Rohan S, Trevejo E, Burdick E, Lipsitz S, Kachalia A. Evaluating inpatient mortality: a new electronic review process that gathers information from front-line providers. BMJ Qual Saf. 2015;24(1):31–7. doi: 10.1136/bmjqs-2014-003120.
3. Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf. 2012;21(9):737–45. doi: 10.1136/bmjqs-2011-001159.
4. Lamont T, Waring J. Safety lessons: shifting paradigms and new directions for patient safety research. J Health Serv Res Policy. 2015;20(1 Suppl):1–8. doi: 10.1177/1355819614558340.
5. Thompson DF. Moral responsibility of public officials: the problem of many hands. Am Polit Sci Rev. 1980;74(4):905–16.
6. Aveling EL, Parker M, Dixon-Woods M. What is the role of individual accountability in patient safety? A multi-site ethnographic study. Sociol Health Illn. 2015. doi: 10.1111/1467-9566.12370.
7. Bovens M. The quest for responsibility: accountability and citizenship in complex organizations. 1998.
8. Thompson DF. Responsibility for failures of government: the problem of many hands. Am Rev Public Adm. 2014;44(3):259–73.
9. van de Poel I, Fahlquist JN, Doorn N, Zwart S, Royakkers L. The problem of many hands: climate change as an example. Sci Eng Ethics. 2012;18(1):49–67. doi: 10.1007/s11948-011-9276-0.
10. van de Poel I, Royakkers L. Ethics, technology, and engineering: an introduction. John Wiley & Sons; 2011.
11. Klijn EH, Koppenjan JFM. Accountable networks. In: Bovens M, Goodin RE, Schillemans T, editors. Oxford Handbook of Public Accountability. Oxford: Oxford University Press; 2014. pp. 242–57.
12. Bovens M, Schillemans T, 't Hart P. Does public accountability work? An assessment tool. Public Administration. 2008;86(1):225–42.
13. Macrae C. Close calls: managing risk and resilience in airline flight safety. Basingstoke: Palgrave Macmillan; 2014.
14. Macrae C. Learning from patient safety incidents: creating participative risk regulation in healthcare. Health Risk Soc. 2008;10(1):53–67.
15. Pronovost PJ, Goeschel CA, Olsen KL, Pham JC, Miller MR, Berenholtz SM, et al. Reducing health care hazards: lessons from the Commercial Aviation Safety Team. Health Affairs. 2009;28(3):w479–89. doi: 10.1377/hlthaff.28.3.w479.
16. Macrae C. The problem with incident reporting. BMJ Qual Saf. 2015. doi: 10.1136/bmjqs-2015-004732.
17. Austin JM, Jha AK, Romano PS, Singer SJ, Vogus TJ, Wachter RM, et al. National hospital ratings systems share few common scores and may generate confusion instead of clarity. Health Affairs. 2015;34(3):423–30. doi: 10.1377/hlthaff.2014.0201.
18. Shojania KG. Deaths due to medical error: jumbo jets or just small propeller planes? BMJ Qual Saf. 2012;21(9):709–12. doi: 10.1136/bmjqs-2012-001368.
19. Sehgal NL, Wachter RM. Color-coded wristbands: promoting safety or confusion? J Hosp Med. 2007;2(6):445. doi: 10.1002/jhm.254.
20. Wachter RM. The digital doctor: hope, hype, and harm at the dawn of medicine's computer age. New York: McGraw-Hill; 2015.
21. Dixon-Woods M, Martin G, Tarrant C, Bion J, Goeschel C, Pronovost P, et al. Safer Clinical Systems: evaluation findings. Learning from the independent evaluation of the second phase of the Safer Clinical Systems programme. London: The Health Foundation; 2015.
22. Toft B. External inquiry into the adverse incident that occurred at Queen's Medical Centre, Nottingham, 4th January 2001. London: Department of Health; 2001.
23. Macrae C, Vincent C. Learning from failure: the need for independent safety investigation in healthcare. J R Soc Med. 2014;107(11):439–43. doi: 10.1177/0141076814555939.
24. Dixon-Woods M, Bosk CL, Aveling EL, Goeschel CA, Pronovost PJ. Explaining Michigan: developing an ex post theory of a quality improvement program. Milbank Q. 2011;89(2):167–205. doi: 10.1111/j.1468-0009.2011.00625.x.
25. Rhodes R. How to manage your policy network. Workshop paper for the Commonwealth Secretariat; London; 7–8 February 2013.