Arch Dis Child. 2019 Feb 23;104(12):1130–1133. doi: 10.1136/archdischild-2018-316401

Reclaiming the systems approach to paediatric safety

Ronny Cheung 1, Damian Roland 2,3, Peter Lachman 4
PMCID: PMC6900242  PMID: 30798257

Introduction

Prior to the emergence of the patient safety movement as a distinct science, it was assumed that patient safety was simply an outcome of good professional acumen: if individual healthcare providers performed well, their patients would remain safe.

It is now 20 years since the publication of To Err is Human,1 the first major review of healthcare safety in the USA. In the UK, the publication Organisation with a Memory 2 in 2000 supported the view that patient safety required a wider system approach. Both documents reframed safety and error in healthcare as an organisational or system issue rather than one of individual error, whether of omission or of commission. Over the past 20 years, there has been major progress in the understanding of patient safety and the complexity of the systems involved in providing healthcare. In a recent review of the state of patient safety in 2018, Bates and Singh3 conclude that ‘Highly effective interventions have since been developed and adopted for hospital-acquired infections and medication safety, although the impact of these interventions varies because of their inconsistent implementation and practice’.

Within paediatrics, the National Patient Safety Agency made the first attempt in the UK to detail the extent of healthcare-derived harm among children.4 The problems identified remain a challenge—namely communication, deterioration, delayed or missed diagnosis, infections and medication harm. This is despite well-tested theories and interventions being available for many of these. In this paper, we explore the theories of patient safety and provide principles to tackle the challenge ahead.

The evolution of patient safety theories

The original approach to patient safety was essentially limited to risk management and review of adverse events. This included the introduction of root cause analysis and failure mode effects analysis, which aimed to understand the causation of harm. Measures of harm such as the Paediatric Global Trigger Tool were developed,5 which provided greater insight into paediatric patient safety and allowed the development of interventions to address discrete patient safety issues such as prescribing, pressure ulcers and infection control. These interventions were effective but often slow and reactive, and learning was often delayed.

The next leap in understanding came from human factors and ergonomics, which originated in engineering and aviation. Human factors proponents consider safety as part of a complex system of interactions. While acknowledging the differences between the aircraft cockpit and the clinical environment, these theories postulate that improving safety requires a focus on the interactions between humans, and between humans and their working environment. Interventions such as TeamSTEPPS6 or the Systems Engineering Initiative for Patient Safety (SEIPS) model focused on team interaction and culture, and on the interplay between the environment, tools and technology, and the people involved (both patients and providers), as a means to achieve safe outcomes.7

Building on these theories, the concept of reliability in healthcare was introduced from the study of high reliability organisations (HROs) in diverse industries such as nuclear power, the military and air traffic control.8 9 Although these organisations are in general highly complex, with many interdependencies and tight time pressures, safety remains core to their business.10 Rather than reacting to events, they generate new safety solutions proactively, and they incorporate human factors and ergonomics into the design of their processes and systems in order to remain error-free. Healthcare’s adoption of high reliability principles—that is, aiming for minimal defects or scope for error through the development of standardised tools, processes and interventions to prevent predictable adverse events—led to the introduction of care bundles and standardised care pathways, which have achieved considerable success, for example in the elimination of central line-associated bloodstream infections.11 12

Initially, the development of discrete, proven safety interventions brought the promise of reliable improvement in safety that could be replicated everywhere, irrespective of setting, provided they were implemented consistently and predictably. But over time, it became clear that this was overly simplistic, and there has since been a move away from focusing on individual responsibility and towards an understanding of safety in the context of the complexity inherent in healthcare. A case in point is the introduction of interventions in adult intensive care across the UK to decrease central line infections, replicating the work of Pronovost et al 11 in Michigan. The programme failed because context and the local environment had not been adequately addressed: it was assumed that simply rolling out an intervention proven to work in one setting would achieve the same outcome in another. Dixon-Woods et al 13 have highlighted that implementation of practice occurs through many routes, and that simple translation of an intervention to a different clinical environment is unlikely to have identical effects.

Vincent and Amalberti14 go further still in outlining the limitations of this linear, ‘process-defect’ approach in healthcare. They argue that healthcare comprises many interconnected processes of varying complexity, and that the context in which clinical care is delivered should therefore be the primary factor in determining the approach required. For instance, highly predictable clinical care processes (eg, blood transfusion or radiotherapy) should aim to be highly reliable services with clear operating systems, while others, such as routine surgery, should aim for reliability with some scope for adaptation to changing circumstances. Still others, such as emergency medicine, may need to be even more adaptable, even as they continue to embrace the underlying principles of reliability theory. There is now an understanding that achieving safety requires a broader lens, one which encompasses the clinical process, the safety culture and the environmental context.15 16

John Launer17 reflected on the pragmatic application of complexity theory thus: in a world where prediction can never be certain, are there nevertheless some general rules that can reduce uncertainty, so that our actions stand a better chance of achieving their intended results? Box 1 illustrates some considerations to challenge the simplistic, linear approach to healthcare safety.

Box 1. Dealing with a complex system (from Launer17).

  • Resist the temptation to focus on an isolated problem. Instead, look for interconnections within the system.

  • Look for patterns in the behaviour of a system, not just at events.

  • Be careful when attributing cause and effect. It is rarely that simple.

  • Keep in mind the system is dynamic and it does not necessarily respond to intended change as predicted.

Where we are now

Although there is a requirement for individual accountability, there is now a recognition that a safe service must go beyond a linear, mechanistic approach and embrace the broader system. This starts with the clinical team as a ‘clinical microsystem’ with its own culture and set of processes.18 Systems theory is a scientifically rigorous approach which incorporates the other theories, such as proactive design for safety using human factors and ergonomics, and reliability methodologies, in order to optimise outcomes.7 19

Despite this, there remains an understandable desire to secure evidence to support individual safety interventions that can be easily implemented. We illustrate this potential pitfall, and the importance of a systems approach, with two examples.

Understanding systems: the example of PEWS

The focus on Paediatric Early Warning Systems (PEWS)—structured tools which aggregate an individual patient’s risk of requiring urgent intervention to prevent morbidity or mortality (based on physiological observations such as heart rate, respiratory rate and blood pressure)—is based on evidence that patient lives can be saved by recognising (and reversing) early deterioration in hospital.20 As with many interventions in healthcare, there is often a significant delay between the root causes of harm being identified (eg, delayed recognition of deterioration in hospital) and adequate interventions to address them being implemented.21 One reason for this is the tendency to view solutions as individual interventions, failing to understand that identifying deterioration is a complex and multifactorial exercise.22

It is obviously tempting to implement a focused, defined and instantly auditable intervention, rather than engaging with the complex, cultural factors within a healthcare system. But this approach ignores the critical factors that determine the performance of PEWS: communication, cultural hierarchy and organisational factors.23 Evaluation of a specific score without consideration of these system factors is therefore at best limited and at worst misleading.

The differing approaches to the design of PEWS also illustrate the requirement to apply design principles and ergonomics in safety science. There is evidence that score-based PEWS tools (where cumulative scores assigned to vital signs are used to identify thresholds for escalation of care) are subject to significantly greater errors in completion and interpretation than ‘Track and Trigger’ tools (where breaching the threshold for any individual vital sign leads to escalation, obviating the need to add together numerical scores).24-26 This crucial interface between the tool and the humans who interact with it, particularly in highly stressful and busy ‘live’ clinical environments, is too often ignored in studies which simply evaluate the tools from an isolated statistical perspective, based on retrospective review of clinical notes.27
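To make the distinction concrete, the following minimal sketch (in Python, using entirely hypothetical vital-sign bands and cut-offs, not drawn from any validated PEWS) contrasts the two escalation logics: a score-based tool sums per-parameter scores and escalates only when the aggregate reaches a cut-off, whereas a track-and-trigger tool escalates as soon as any single vital sign breaches its own limits.

```python
# Illustrative sketch only: the bands and cut-offs below are hypothetical,
# not a validated paediatric early warning system.
from dataclasses import dataclass


@dataclass
class Vitals:
    heart_rate: int        # beats/min
    respiratory_rate: int  # breaths/min
    systolic_bp: int       # mm Hg


def parameter_score(value: int, normal: range, severe_low: int, severe_high: int) -> int:
    """Hypothetical banding: 0 = normal, 1 = moderately abnormal, 2 = severely abnormal."""
    if value < severe_low or value > severe_high:
        return 2
    return 0 if value in normal else 1


def score_based_escalation(v: Vitals, cutoff: int = 4) -> bool:
    """Score-based tool: sum the per-parameter scores, escalate when the total reaches the cut-off."""
    total = (
        parameter_score(v.heart_rate, range(80, 131), 60, 160)
        + parameter_score(v.respiratory_rate, range(20, 41), 15, 60)
        + parameter_score(v.systolic_bp, range(90, 121), 70, 140)
    )
    return total >= cutoff


def track_and_trigger_escalation(v: Vitals) -> bool:
    """Track-and-trigger tool: escalate as soon as any single vital sign breaches its limits."""
    return (
        not 60 <= v.heart_rate <= 160
        or not 15 <= v.respiratory_rate <= 60
        or not 70 <= v.systolic_bp <= 140
    )


if __name__ == "__main__":
    patient = Vitals(heart_rate=165, respiratory_rate=38, systolic_bp=100)
    print("Score-based escalation:", score_based_escalation(patient))            # False (total score = 2)
    print("Track-and-trigger escalation:", track_and_trigger_escalation(patient))  # True (heart rate breached)
```

In this toy example, a single dangerously abnormal heart rate triggers the track-and-trigger rule immediately, while the aggregate score remains below the escalation cut-off; it is precisely this kind of behavioural difference, together with the arithmetic burden placed on staff, that matters at the bedside.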

This was emphasised in the Evaluating processes of care and outcomes of children in hospital (EPOCH) study, the largest prospective trial of an early warning system in children, in which the ‘BedsidePEWS’ scoring system failed to demonstrate an improvement in mortality in live use, despite having previously been validated using retrospective clinical notes data.28 There is increasing recognition that research into the efficacy of PEWS requires evaluation of the context within which any single score is used (including the extent to which the tool’s design allows for ergonomics and human factors), rather than of the score alone.

There is also a more fundamental point to consider. PEWS appear to have inherent face validity, have been in use for over 20 years and have spread rapidly.29 Despite this, it is not clear what direct role the scores themselves have played in improving outcomes, given that inpatient mortality is decreasing across all healthcare systems in any case.30 31 The difficulty is that, having introduced PEWS, it is tempting for an organisation to believe that it has found a solution to the underlying problem. This may fuel the continued roll-out of PEWS with minimal sense-checking of the underlying causative factors and of the tenets of the adaptive change process needed to implement improvement (box 2).32 33

Box 2. Factors to consider before introducing a paediatric early warning score (EWS) (from Roland33).

  • What is the patient group the EWS will be used on?

  • What outcome are you looking to alter?

  • What type of EWS would you like to introduce?

  • Is there a current system you could employ?

  • How will you engage and be responsive to the concerns of the stakeholders?

  • How will you monitor its effect?

The importance of safety culture: safety huddles

As more technical interventions for patient safety have been developed, the quest for transferability has meant that the experience with PEWS has been mirrored elsewhere. As noted previously, a key component of any success is an understanding of context and of the culture of the organisation, the clinical team and the individual.

Safety huddles amply demonstrate this principle.34 35 This safety tool exploits the concept of situational awareness, which operates on many levels and applies not only to the actions of individual staff with patients but also to the coordination of multiple hospitals by a senior management team. Safety huddles bring together multiple staff, of different specialties and grades, to assess risk and formulate plans for a given clinical area. Huddles have been demonstrated to have an impact on the outcomes of children but, like PEWS, risk being seen as an off-the-shelf solution that can be delivered in any setting.36 The evidence from the Situation Awareness for Everyone (SAFE) national programme of huddle implementation in paediatric departments in the UK is clear: while a huddle may allow information to be exchanged in a flattened hierarchical fashion, this will only be effective if the organisation genuinely espouses the underlying principles.37 It would be perfectly possible, for instance, to undertake huddles no different from the more traditional ‘command-and-control’ model, with one individual dictating the flow of conversation. The effectiveness of huddles, and of other safety initiatives (such as focused handover tools, eg, Situation/Background/Assessment/Recommendation (SBAR)), depends entirely on the personnel involved understanding the underlying principles and rationale for use, rather than having the tools enforced on a reluctant workforce as yet another change endeavour.

Safety tools in context

Network learning has demonstrated that organisations can improve their safety culture by working together and learning from each other at scale.38 The key insight is that it is not only the technical tools that matter but also the beliefs and attitudes of the clinical teams. Indeed, even the most perfect technical safety tool will fail if poorly applied in an unreceptive environment. Safety science should remain focused on examining the factors that contribute to a high-performing system in the round (complex and ‘dirty’ though that may be) rather than concentrating too narrowly on individual tools, which may seem easier to evaluate but yield much less meaningful results when studied in isolation.

We would rightly be accused of poor medicine if we initiated antihypertensive therapies (such as diet, exercise or medication) for our patients without exploring their circumstances as a whole—their comorbidities, their family support, and their social and educational background. Nor, as clinicians and scientists, are we naïve enough to believe that we can extrapolate drug trial outcomes to clinical outcomes without viewing them through the lens of individual patient characteristics. In clinical practice, the treatment of tuberculosis with multiple antibiotic therapy, while proven to be biologically efficacious in clinical trials, did not always work as intended, with negative consequences for individual patients and more drastic ones for the wider population through the development of drug resistance. Of course, there could be no therapy at all without effective drugs. But human and behavioural factors (such as family support, social stigma, or the limits of human memory and routine) were critical to the success of the treatment regimen, and led to the introduction of directly observed therapy, which was the key to unlocking the theoretical benefit of these treatments.39

Conclusion

Evaluating PEWS, huddles, electronic prescribing or other tools in isolation (or, worse still, simply by interrogating historical databases to test the statistical significance of individual tools in vitro), rather than as part of a greater system staffed by human beings and subject to cultural and behavioural influences, risks falling into the same trap as those early pioneers of tuberculosis treatment. Just as we have all moved from an organ-specific or treatment-specific model of patient care towards a holistic, multidisciplinary model for treating our patients, so must we move back towards understanding safety as a complex interconnected whole, rooted in the culture and environment in which the tools act. Future evaluations of safety interventions must take into account the wider human and system factors which inevitably affect their performance in real life.

Paediatricians as clinicians must also take a lead in improving the safety of the care they deliver on a systems basis. This means measuring harm and adverse events in real time, studying processes in their clinical team or microsystem using a human factors approach, and actively fostering a culture of safety. Much progress has been achieved over the past 20 years. Embracing the understanding of systems rather than a linear model of safety and improvement, allied with the potential of health informatics and technology,40 will be critical if we are to move paediatric safety to the next level.

Footnotes

Contributors: RC, DR, PL conceived the article together, and all authors contributed to the final manuscript and revisions.

Funding: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests: None declared.

Provenance and peer review: Commissioned; externally peer reviewed.

Patient consent for publication: Not required.

References

  • 1. Corrigan JM, Kohn LT, Donaldson MS. To err is human: building a safer health system. Washington, DC, USA: National Academies Press, 1999.
  • 2. Donaldson L, Department of Health, England. Organisation with a memory. 2000. https://webarchive.nationalarchives.gov.uk/20130105144251/http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/documents/digitalasset/dh_4065086.pdf (accessed 3 Jan 2019).
  • 3. Bates DW, Singh H. Two decades since To Err is Human: an assessment of progress and emerging priorities in patient safety. Health Aff 2018;37:1736–43. doi:10.1377/hlthaff.2018.0738
  • 4. National Patient Safety Agency. Review of patient safety for children and young people. London, UK: Crown Stationery, 2009.
  • 5. Chapman SM, Fitzsimons J, Davey N, et al. Prevalence and severity of patient harm in a sample of UK-hospitalised children detected by the Paediatric Trigger Tool. BMJ Open 2014;4:e005066. doi:10.1136/bmjopen-2014-005066
  • 6. TeamSTEPPS. Agency for Healthcare Research and Quality. https://www.ahrq.gov/teamstepps/index.html (accessed 3 Jan 2019).
  • 7. Carayon P, Schoofs Hundt A, Karsh BT, et al. Work system design for patient safety: the SEIPS model. Qual Saf Health Care 2006;15:i50–i58. doi:10.1136/qshc.2005.015842
  • 8. Weick KE, Sutcliffe KM. Managing the unexpected. Hoboken, NJ, USA: Jossey-Bass, 2007.
  • 9. Chassin MR, Loeb JM. High-reliability health care: getting there from here. Milbank Q 2013;91:459–90. doi:10.1111/1468-0009.12023
  • 10. Hudson P. Applying the lessons of high risk industries to health care. Qual Saf Health Care 2003;12(Suppl 1):i1–7.
  • 11. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006;355:2725–32. doi:10.1056/NEJMoa061115
  • 12. Lachman P, Yuen S. Using care bundles to prevent infection in neonatal and paediatric ICUs. Curr Opin Infect Dis 2009;22:224–8. doi:10.1097/QCO.0b013e3283297b68
  • 13. Dixon-Woods M, Leslie M, Tarrant C, et al. Explaining Matching Michigan: an ethnographic study of a patient safety program. Implement Sci 2013;8:70. doi:10.1186/1748-5908-8-70
  • 14. Vincent C, Amalberti R. Safer healthcare: strategies for the real world. London, UK: Springer, 2016.
  • 15. Braithwaite J, Churruca K. Complexity science in healthcare: aspirations, approaches, applications and accomplishments – a white paper. Sydney: Australian Institute of Health Innovation, Macquarie University, 2017.
  • 16. Braithwaite J. Changing how we think about healthcare improvement. BMJ 2018;361:k2014. doi:10.1136/bmj.k2014
  • 17. Launer J. Complexity made simple. Postgrad Med J 2018;94:611–2. doi:10.1136/postgradmedj-2018-136096
  • 18. Nelson EC, Batalden PB, Huber TP, et al. Microsystems in health care: part 1. Learning from high-performing front-line clinical units. Jt Comm J Qual Improv 2002;28:472–93.
  • 19. Institute of Medicine. Bringing a Systems Approach to Health. Washington, DC, USA: The National Academies Press, 2000.
  • 20. Schein RM, Hazday N, Pena M, et al. Clinical antecedents to in-hospital cardiopulmonary arrest. Chest 1990;98:1388–92.
  • 21. Pearson G. Why children die: a pilot study 2006; England (South West, North East and West Midlands), Wales and Northern Ireland. London: CEMACH, 2008.
  • 22. Dixon-Woods M, Martin G. Does quality improvement improve quality? Future Hospital Journal 2016;3:191–4.
  • 23. Lambert V, Matthews A, MacDonell R, et al. Paediatric early warning systems for detecting and responding to clinical deterioration in children: a systematic review. BMJ Open 2017;7:e014497. doi:10.1136/bmjopen-2016-014497
  • 24. Christofidis MJ, Hill A, Horswill MS, et al. A human factors approach to observation chart design can trump health professionals' prior chart experience. Resuscitation 2013;84:657–65. doi:10.1016/j.resuscitation.2012.09.023
  • 25. Preece MH, Hill A, Horswill MS, et al. Supporting the detection of patient deterioration: observation chart design affects the recognition of abnormal vital signs. Resuscitation 2012;83:1111–8. doi:10.1016/j.resuscitation.2012.02.009
  • 26. Hughes C, Pain C, Braithwaite J, et al. 'Between the flags': implementing a rapid response system at scale. BMJ Qual Saf 2014;23:714–7. doi:10.1136/bmjqs-2014-002845
  • 27. Chapman SM, Wray J, Oulton K, et al. 'The Score Matters': wide variations in predictive performance of 18 paediatric track and trigger systems. Arch Dis Child 2017;102:487–95. doi:10.1136/archdischild-2016-311088
  • 28. Parshuram CS, Dryden-Palmer K, Farrell C, et al. Effect of a pediatric early warning system on all-cause mortality in hospitalized pediatric patients: the EPOCH randomized clinical trial. JAMA 2018;319:1002–12. doi:10.1001/jama.2018.0948
  • 29. Roland D, Oliver A, Edwards ED, et al. Use of paediatric early warning systems in Great Britain: has there been a change of practice in the last 7 years? Arch Dis Child 2014;99:26–9. doi:10.1136/archdischild-2012-302783
  • 30. Joffe AR, Anton NR, Burkholder SC. Reduction in hospital mortality over time in a hospital without a pediatric medical emergency team: limitations of before-and-after study designs. Arch Pediatr Adolesc Med 2011;165:419–23. doi:10.1001/archpediatrics.2011.47
  • 31. Chapman SM, Wray J, Oulton K, et al. 'Death is not the answer': the challenge of measuring the impact of early warning systems. Arch Dis Child 2019;104:210–1. doi:10.1136/archdischild-2018-315392
  • 32. Institute for Healthcare Improvement. How to Improve. http://www.ihi.org/resources/Pages/HowtoImprove/default.aspx (accessed 1 Oct 2018).
  • 33. Roland D. Paediatric early warning scores: holy grail and Achilles' heel. Arch Dis Child Educ Pract Ed 2012;97:208–15. doi:10.1136/archdischild-2011-300976
  • 34. Provost SM, Lanham HJ, Leykum LK, et al. Health care huddles. Health Care Manage Rev 2015;40:2–12. doi:10.1097/HMR.0000000000000009
  • 35. Goldenhar LM, Brady PW, Sutcliffe KM, et al. Huddling for high reliability and situation awareness. BMJ Qual Saf 2013;22:899–906. doi:10.1136/bmjqs-2012-001467
  • 36. Brady PW, Muething S, Kotagal U, et al. Improving situation awareness to reduce unrecognized clinical deterioration and serious safety events. Pediatrics 2013;131:e298–e308. doi:10.1542/peds.2012-1364
  • 37. Stapley E, Sharples E, Lachman P, et al. Factors to consider in the introduction of huddles on clinical wards: perceptions of staff on the SAFE programme. Int J Qual Health Care 2018;30:44–9. doi:10.1093/intqhc/mzx162
  • 38. Health Foundation. Effective Clinical Networks. 2014. https://www.health.org.uk/sites/health/files/EffectiveNetworksForImprovement.pdf (accessed 1 Oct 2018).
  • 39. Weis SE, Slocum PC, Blais FX, et al. The effect of directly observed therapy on the rates of drug resistance and relapse in tuberculosis. N Engl J Med 1994;330:1179–84. doi:10.1056/NEJM199404283301702
  • 40. Classen D, Li M, Miller S, et al. An electronic health record-based real-time analytics program for patient safety surveillance and improvement. Health Aff 2018;37:1805–12. doi:10.1377/hlthaff.2018.0728
