Published in final edited form as: Cogn Technol Work. 2014 Sep 26;17(1):89–94. doi: 10.1007/s10111-014-0299-6

Standardisation and Its Discontents

Robert L Wears 1
PMCID: PMC4319563  NIHMSID: NIHMS631343  PMID: 25667566

Abstract

In discussions of the quality and safety problems of modern, Western healthcare, one of the most frequently heard criticisms has been: “It is not standardised.” This paper explores issues around standardisation that illustrate its surprising complexity, its potential advantages and disadvantages, and its political and sociological implications, in the hope that discourses around standardisation might become more fruitful.

Keywords: safety, standardisation, resilience, Taylorism, complexity, quality

1. Introduction

Efforts to improve the quality, safety, and efficiency of complex work often call for increasing standardisation of tools, supplies, and procedures as a fundamental strategy (Berwick, 1991; Berwick, Godfrey, & Roessner, 1990; Smith, 2009). In these calls, the benefits of this standardisation are presumed to be commonsensical and intuitively obvious; but the theoretical, philosophical, and socio-cultural aspects of standardisation are generally unexplored. This paper attempts to bring some of those issues to the surface, for four reasons:

  • To remove the veil that obscures subtle interactions between the popular, binary distinctions associated with standardisation (eg, standardisation vs flexibility, centralisation vs decentralisation, exploitation vs exploration, feed-forward vs feed-back control) (Macrae, 2013; March, 1991; Perrow, 1967; Reason, 1997);

  • To better understand ‘resistance’ to standardisation efforts;

  • To better manage the unintended consequences of poorly thought out standardisation programmes;

  • To clarify the sorts of problems for which standardisation is both suited and useful so it can be more thoughtfully employed.

1.1. Perspective

The value of any argument is inextricably entangled with the perspective from which it is made (Dekker, Cilliers, & Hofmeyr, 2011). Therefore, it is important to note that the author’s experience has been almost entirely within the field of healthcare, an area somewhat notoriously resistant to calls for standardisation for a variety of reasons (McDonald, Waring, & Harrison, 2006); and further, that most of that experience has been within emergency medicine, a specialty that particularly values the ability to deal with contingency and the unexpected, to react adaptively and opportunistically to events and environmental changes (Wears, 1999; Zink, 2006). In addition, Reason has noted that it seems curious that healthcare – traditionally a bastion of discretionary control by professionals – is moving steadily towards standardisation and similar means of control, at a time when many other domains are moving in the opposite direction (Reason, 1997). But by discussing standardisation unabashedly and avowedly from this perspective, I hope to increase understanding by adding to the diversity of viewpoints in these discussions (March, Sproull, & Tamuz, 1991).

1.2. Benefits

To some extent, arguing against standardisation is a bit like arguing against motherhood, because any such discussion must first admit that standardisation has many benefits. A world in which every light bulb had to be custom fit to its socket would be a very dark world indeed; this journal has standardised on the English language as its mode of communication; the print layout is standardised (left to right, top to bottom, front to back); this paper itself was composed using a standardised (QWERTY) keyboard, while looking at a clock whose hands rotate ‘clockwise’; and on and on.

Similarly, standardisation contributes to efficiency in communication; when it creates common ground among the parties, it allows a dense, compact, encoding of complex ideas, and supports communication by omission (eg, that which is not mentioned can safely be assumed to be absent or irrelevant). The success of the highly standardised communication forms developed in aviation crew resource management is broadly accepted (Weiner, Kanki, & Helmreich, 1993), and often advocated for other fields (Catchpole et al., 2007). In addition, standardisation is highly valuable in supporting coordination of action across disparate groups whose mutual communication may be undependable (Berg, 1997a; Timmermans & Berg, 2003).

The many benefits of standardisation, especially in reasonably arbitrary circumstances (such as highway driving) have served to support a common view of standardisation as a sort of universal good – a Philosopher’s Stone that turns the base substance of ordinary life and work into gold. Standardisation, in this view, is seen as the natural outcome of the Enlightenment, producing order, reason, and reproducibility in care; a technical solution to the problem of complexity that could only be opposed by the irrational, perverse, or deluded. Standardisation fits nicely with other elements of the ‘program of technical rationality’ such as practice guidelines and evidence-based medicine and so is synergistic with many other current influences on healthcare (Timmermans & Berg, 2003).

Standardisation promotes routinisation, which enables organisations to exploit their accumulated knowledge, thus increasing process efficiency (and to some extent, personal efficiency since actors following standardised procedures may not have to acquire the knowledge that underpins those procedures). This can free up attentional resources, diverting them from mundane to truly complex or pressing issues (Macrae, 2013). Yet at the same time, this routinisation presents a risk: when organisations are guided by old knowledge, they do not create new knowledge, unless special (and by definition, inefficient) efforts are made to understand gaps between standardised processes and the context in which they are deployed (Hunte, 2010).

2. Problems

Despite its obvious benefits, unthinking use of standardisation is associated with a set of problems. This section will explore five problematic aspects of standardisation as an improvement strategy.

2.1. Lack of Specificity

Many calls for standardisation in health care lack specificity and have an almost magical, “wishful thinking” quality (see Section 2.3), as if standardisation were some universal good in itself. Thus, an important first step in these discussions is to clarify a set of issues: what bits of work, exactly, should be standardised; at what level; along what dimensions; by whom; and for what purpose? Discussions of standardisation could be improved by increasing their specificity in all these areas.

Even after the main target area has been defined, it is still necessary to specify which of the different dimensions of the work is to be targeted: its organisation and structure; the terminology used by workers; its outcomes without regard to process; its procedures; or its data or content. Within a selected dimension, the level at which standardisation should be applied still needs to be defined. For example, building materials are almost entirely standardised, but the buildings they are used in are less so, and the neighbourhoods containing those buildings still less. We have standardised roads, but not standardised travel paths; standardised grammars, but not standardised stories; standardised instruments, notes and scales, but not standardised music.

In addition to being non-specific, calls for increased standardisation ironically often miss the degree to which the activities in question are already standardised. For example, there have been many recommendations in healthcare to standardise shift change handoffs (Joint Commission on Accreditation of Healthcare Organizations, 2008). These calls generally construe handoffs as haphazard episodes (Arora, Johnson, Lovinger, Humphrey, & Meltzer, 2005; Gandhi, 2005), and because they tend to focus only on the data dimension, they miss other, already standardised areas. In fact, observational studies of handoffs (Behara et al., 2005; Brandwijk, Nemeth, O’Connor, Kahana, & Cook, 2003; Kowalsky, Nemeth, Brandwijk, & Cook, 2004; Wears, Perry, & Patterson, 2011) have shown they consistently follow a 4-phase pattern; use a consistent order among patients; vary the amount of investment in the handoff according to the degree of uncertainty about the clinical problem space (Nemeth et al., 2007); and use a consistent ordering of the discussion within patients. Thus, by limiting one’s vision only to the dimension of data, the standardisation already present is missed. This is exacerbated by the problem that this standardisation tends to have arisen “bottom-up”, organically and emergently from the work context, rather than being engineered “top down” by managers.

2.2. Philosophical basis

Standardisation is inextricably associated with the industrial revolution, Taylorism and ultimately the rationalism of the Enlightenment (Berg, 1997b). Its philosophical underpinnings in a Newtonian-Cartesian understanding of the world as a complicated, but ultimately decomposable, understandable and linearly predictable domain are seldom examined by its proponents, who generally show little awareness of even the possibility of other philosophical stances (Dekker, 2010; Dekker, et al., 2011; Dekker & Nyce, 2012; Dekker, Nyce, van Winsen, & Henriqson, 2010; Kneebone, 2002; Wears & Kneebone, 2012; Xiao & Vicente, 2000). Although there are areas of clinical work where this view might be accurate, for the majority of clinical work it is not. Clinical work systems have many of the characteristics of complex, self-organizing systems: they are composed of a large number of mutually interacting elements, with multiple enhancing and inhibiting feedback loops; they are open to the environment, and their boundaries are hard to define; they operate far from equilibrium; they are path dependent (ie, their past is partly responsible for their present behaviour); their structure does not come from a priori designs, and it changes dynamically to adapt to changes in their environments (Cilliers, 1998). In these complex (as opposed to complicated) systems, it is not possible to predict the trajectory of the system from fundamental principles and its current condition; thus, overly ambitious efforts to standardise are likely to create disorder, either in the target area or elsewhere in the system (Greenhalgh, Potts, Wong, Bark, & Swinglehurst, 2009; Snowden, 2012).

These problems are often euphemistically labelled ‘side effects’, or ‘unintended consequences’; while they are no doubt indeed unintended, it is important to note that “side effects are not a feature of reality, but a sign that our understanding of the system is narrow and flawed” (Sterman, 2000). A simple example of this problem in healthcare has been the standardisation on the Luer lock connector. The Luer lock was intended to provide a standard way of easily connecting and disconnecting syringes and intravenous tubing; but because so many devices use this standard, it has led to numerous, fatal adverse events by allowing easy connection of items that should never be connected (Berwick, 2001; ECRI, 2010; ISMP, 2003, 2004). At its worst, this sort of standardisation becomes the ‘arrogance of design’, a privileging of the ex ante judgment of remote designers over that of the worker situated in a specific context (Bisantz & Wears, 2008).

Similarly, at the frontline worker level, clinical work tends to be much more about making sense of an uncertain and ambiguous jumble of unfolding phenomena, and in so doing developing contextual judgments, explanations and situated actions that support and help revise shifting goals, than it is about rule-based decisions. It is about phronēsis rather than techne (Greenhalgh & Wong, 2011; Hunte, 2010); practice rather than prescription. Thus at least some of the resistance of frontline workers to standardisation is explicable, because the models of work inscribed in standardised routines clash too strongly with their actual work. Even such an orthodox spokesman as Donald Berwick has noted this mismatch, and remarked that the prevailing strategies for improvement in healthcare rely largely on outmoded, Taylorist theories of control and standardisation of work, noting that “if we want to understand how the workplace needs to be changed, we must understand and call into question many of the principles of Taylorism” (Berwick, 2003).

2.3. Psychological and organisational comfort

The rationalism underlying standardisation comes partly from its dominance in modern thinking, but also partly from the psychological and organisational benefits it provides to its proponents. Rather than having to deal with the uncomfortable reality of a world full of risk, ambiguity, chance, and disorder, the rationalist model underlying standardisation offers clear, explicable, understandable explanations. Although its proponents may recognize some of the properties of complex systems outlined above, they see those properties not as certainties about the world, but rather as defects that can and should be managed away through standardisation and other rationalising modalities; the linearising orderliness of standardisation provides a bulwark against the unpleasant realities, and holds forth the reassuring prospect of control (Dekker, Nyce, & Myers, 2012).

It is interesting to note that some standardisation efforts have provided only those sorts of psychological benefits. Berg has noted that IT-enforced standardisation often produces “...no clear-cut ’benefit’ emerging anywhere from the alignment of staff members with the reading and writing artefacts; the only ’benefit’, often only perceived as such by management, lies in the alignment itself. The artefacts are not occasioned to afford the emergence of new tasks, but to ’standardize’ already existing ones. They are not allowed to potentialize anything: in a misplaced equation of ’standardization’ with ’quality’ – whether of the care delivered or the staff members’ work – framing is introduced for framing’s sake” (Berg, 1999). Thus in some instances, the benefits of standardisation are entirely aesthetic – things look better on paper, whether they actually work better or not.

2.4. Non-neutrality

Given its roots in Taylorism and the rationalism of the Enlightenment, it is not surprising that standardisation is often depicted as a technical, politically neutral exercise; one best performed by experts, not involving negotiations, socio-political considerations, and certainly not involving winners or losers. But standardisation efforts are not neutral activities; they privilege one view of the world over another and so often one group over another. For example, an information system may standardise data relevant for some purposes but not others; this forces the unprivileged group to engage in a continual translation process, or in the worst case makes data relevant to them invisible (Garfinkel, 1967; Johnson, 2009). Although attempts at standardisation invariably invoke the common good, different groups tend to have differing ideas about what, exactly, is the common good, and in addition, what means are legitimate in its pursuit.

In addition, standardisation often restructures the work environment, changing relations among users, and thus potentially creating additional negotiation and occasional conflict. For example, standardisation tends to elevate the role of the managers and technocrats, who organize and plan the work, over that of front-line workers, who merely execute their instructions (Kanigel, 1997). It makes invisible the articulation work of those who fill the gaps between prescriptive standards and the messy uncertainties of real work (Nemeth, Wears, Woods, Hollnagel, & Cook, 2008).

2.5. Heterogeneity

Finally, standardisation assumes that heterogeneity and variation are inherently undesirable properties that should be eliminated, or at the least, nuisances to be minimized. But to the extent that the clinical problem space is heterogeneous, this assumption clashes with three real world properties of complex systems: the Law of Requisite Variety (Ashby, 1957, 1958) (that every controller of a system must exhibit at least as much variety in behaviour as the system under its control); the principle of equifinality (that there may be many, equally good paths to a goal); and the principle of multifinality (that similar initial conditions may result in dramatically different final states).
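Ashby's constraint can be made concrete. The inequality below is a conventional information-theoretic rendering of the Law of Requisite Variety (a standard textbook form, not drawn verbatim from the works cited here), offered only as a sketch of why a low-variety controller cannot fully regulate a high-variety environment.

```latex
% A conventional entropy statement of the Law of Requisite Variety:
%   D = disturbances arising in the work environment
%   R = the repertoire of responses available to the controller
%   O = the outcomes the controller is trying to hold within bounds
\[
  H(O) \;\geq\; H(D) - H(R)
\]
% Only variety in the controller's responses can absorb variety in the
% disturbances; shrinking H(R) through rigid standardisation raises the
% floor on residual outcome variability unless H(D) shrinks with it.
```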

In health care settings, standardisation presumes that average results will be equally obtainable by everyone despite individual differences, but this is hardly ever the case. Most “standard treatments” provide a large benefit for a small number of patients who cannot be specifically identified in advance, at a small cost to a large number. For example, routine treatment of hypertension prevents heart attacks and strokes in a small number of cases, while exposing large numbers of patients who would never have suffered those conditions to the expense and side effects of lifelong medication. While this tradeoff might in fact be considered justifiable, it still involves an asymmetry of benefits and burdens, and the “average benefit” calculated over the entire group will be realized by virtually no one. This is well known in epidemiology as the ecological fallacy (the attribution of group averages to individuals in the group).
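A hypothetical number-needed-to-treat calculation makes this asymmetry explicit; the event rates below are invented for illustration and are not taken from the hypertension literature.

```latex
% Illustrative arithmetic with invented event rates:
%   untreated event rate p_0 = 0.05, treated event rate p_1 = 0.04
\[
  \text{ARR} = p_0 - p_1 = 0.05 - 0.04 = 0.01,
  \qquad
  \text{NNT} = \frac{1}{\text{ARR}} = 100
\]
% The group's "average benefit" is a 1\% absolute risk reduction, yet about
% 99 of every 100 treated patients incur the cost and side effects of
% treatment without receiving the benefit; no individual experiences the
% average.
```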

Finally, there is another form of heterogeneity – change over time – that poses a peculiar challenge to standardisation efforts. Such efforts are inevitably aimed at moving targets; developed for static manufacturing systems, when applied to complex, open sociotechnical systems whose mutually influential elements constantly change and evolve, they will always and necessarily be behind the times, late in adapting to new or local circumstances.

3. Caveats

Just as unthinking application of standardisation as an improvement strategy results in the sort of problems outlined above, fairness demands that we admit that unthinking opposition to standardisation raises issues of its own. Claims of special knowledge and corresponding immunity to standardisation can be self-interested. Thus, in healthcare, clinicians’ frequent resistance to standardisation might sometimes be based more on enacting professional identities and reinforcing occupational boundaries than on a careful consideration of its advantages and disadvantages (Dixon-Woods, 2010; McDonald, et al., 2006). Furthermore, the view that ‘rules do not apply to us’ can clearly be dysfunctional when applied indiscriminately in areas where variations in judgment are irrelevant or even harmful, or can be used to justify poor practices (Dixon-Woods, 2010).

Similarly, much of this discussion has presumed that standardisation is imposed on a group from the outside, in a classically Taylorist manner. But, there is no reason in principle why it could not be negotiated, or develop emergently from within a community of practice.

4. Application and Guidance

Since standardisation is such a complex issue, a tangle of problems and solutions in which certain activities would seem to benefit from being standardised, while others would not, this section will attempt to provide broad guidance about where standardisation might be helpful and where harmful. Perrow and Reason suggest examining two dimensions in making this determination (Perrow, 1967; Reason, 1997):

  • The number of ‘exceptional cases’, ie, the degree to which surprises, novel or unexpected events are likely to arise; and

  • The difficulty of the search process, ie, the degree to which solutions are well understood and easily found by analytic reasoning, as opposed to being poorly understood and requiring extensive knowledge-based processing.

Two extreme combinations of values along these two dimensions mark cases where standardisation is either very well, or very poorly, suited. For example, situations in which exceptional cases are commonplace and in which solutions are poorly understood and not readily identifiable by analysis are poor candidates for standardisation, and are best left to discretionary control. Examples of such situations might be combat operations, or crisis management. Conversely, situations where exceptional cases are truly the exception, and in which solutions can be readily identified by simple means, are good candidates for standardisation. Such situations might include assembly line operations, or traditional construction. Intermediate situations are, of course, intermediate and would require a judicious mixture of strategies.
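As a purely illustrative sketch, the two-dimensional screen described above could be expressed as a small decision rule. The function name, thresholds, and example values below are invented for this sketch and are not part of Perrow’s or Reason’s formulations.

```python
from enum import Enum


class ControlMode(Enum):
    STANDARDISE = "good candidate for standardised procedures"
    DISCRETION = "leave to discretionary, knowledge-based control"
    MIXED = "judicious mixture of standards and discretion"


def suggest_control_mode(exception_rate: float, search_difficulty: float) -> ControlMode:
    """Hypothetical screen over the two dimensions discussed in the text.

    exception_rate: proportion of cases that are novel or surprising (0..1).
    search_difficulty: how hard solutions are to find analytically (0..1).
    The 0.2 / 0.8 cut-offs are arbitrary illustrations, not published values.
    """
    if exception_rate <= 0.2 and search_difficulty <= 0.2:
        # Few surprises, easily found solutions: e.g. assembly lines.
        return ControlMode.STANDARDISE
    if exception_rate >= 0.8 and search_difficulty >= 0.8:
        # Frequent surprises, poorly understood solutions: e.g. crisis management.
        return ControlMode.DISCRETION
    # Everything in between calls for blended strategies.
    return ControlMode.MIXED


if __name__ == "__main__":
    print(suggest_control_mode(0.1, 0.1))  # ControlMode.STANDARDISE
    print(suggest_control_mode(0.9, 0.9))  # ControlMode.DISCRETION
```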

In addition to this guidance, it is important to note that the usefulness of standardisation, and so choices about where to apply it, differs according to the skills of the actors involved in a field of practice. To a novice, many if not most situations will appear exceptional, and the search for their solutions difficult; although prescriptive rules in such a setting would not be recommended for experts, for novices, falling back on ‘standard procedures’ might be better than trying in vain to work out a solution to a problem beyond their training or experience.

5. Summary and Conclusion

It should be clear from the foregoing that standardisation is far from a simple, technical solution that is a ‘natural fit’ for quality or safety problems. It has important social and political aspects that are often ignored, and some of its benefits may be primarily psychological. Yet, there are benefits to be gained from exploring standardisation thoroughly in all its aspects.

In Civilisation and Its Discontents, Freud wrote of an irreducible tension between the individual (seeking freedom for autonomous action) and civilisation (demanding a necessary conformity) (Freud, 1930). Similarly, Greenhalgh has argued that the tension between standardization and contingency can never be resolved, but rather must be actively managed, a task that gets harder as the domain of application gets larger (Greenhalgh, et al., 2009). Thus standardisation cannot be a universal approach to quality and safety, but will always require continual grounding and judgment if it is to be used safely and effectively.

6. References

  1. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign-out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14(6):401–407. doi: 10.1136/qshc.2005.015107.
  2. Ashby WR. Requisite variety. In: An Introduction to Cybernetics. Chapman & Hall Ltd.; London, UK: 1957. pp. 202–218.
  3. Ashby WR. Requisite variety and its implications for the control of complex systems. Cybernetica. 1958;1:83–99.
  4. Behara R, Wears RL, Perry SJ, Eisenberg E, Murphy AG, Vanderhoef M, Shapiro M, et al. Conceptual framework for the safety of handovers. In: Henriksen K, editor. Advances in Patient Safety. Vol. 2. Agency for Healthcare Research and Quality / Department of Defense; Rockville, MD: 2005. pp. 309–321.
  5. Berg M. Problems and promises of the protocol. Soc Sci Med. 1997a;44(8):1081–1088. doi: 10.1016/s0277-9536(96)00235-3.
  6. Berg M. Rationalizing Medical Work. MIT Press; Cambridge, MA: 1997b.
  7. Berg M. Accumulating and coordinating: occasions for information technologies in medical work. Computer Supported Cooperative Work. 1999;8(4):373–401.
  8. Berwick DM. Controlling variation in health care: a consultation from Walter Shewhart. Med Care. 1991;29(12):1212–1225. doi: 10.1097/00005650-199112000-00004.
  9. Berwick DM. Not again! Preventing errors lies in redesign -- not exhortation. BMJ. 2001;322:247–248. doi: 10.1136/bmj.322.7281.247.
  10. Berwick DM. Improvement, trust, and the healthcare workforce. Qual Saf Health Care. 2003;12(6):448–452. doi: 10.1136/qhc.12.6.448.
  11. Berwick DM, Godfrey AB, Roessner J. Curing Health Care: New Strategies for Quality Improvement. Jossey-Bass Publishers; San Francisco, CA: 1990.
  12. Bisantz AM, Wears RL. Forcing Functions: The Need for Restraint. Annals of Emergency Medicine. 2008;53(4):477–479. doi: 10.1016/j.annemergmed.2008.07.019.
  13. Brandwijk M, Nemeth C, O’Connor M, Kahana M, Cook RI. Distributing cognition: ICU handoffs conform to Grice’s maxims. 2003. Retrieved 27 January 2003, from http://www.ctlab.org/properties/pdf%20files/SCCM%20Poster%201.27.03.pdf.
  14. Catchpole KR, de Leval MR, McEwan A, Pigott N, Elliott MJ, McQuillan A, MacDonald C, et al. Patient handover from surgery to intensive care: using Formula 1 pit-stop and aviation models to improve safety and quality. Paediatr Anaesth. 2007;17(5):470–478. doi: 10.1111/j.1460-9592.2006.02239.x.
  15. Cilliers P. Complexity and Postmodernism: Understanding Complex Systems. Routledge; London, UK: 1998.
  16. Dekker SWA. We Have Newton on a Retainer: Reductionism When We Need Systems Thinking. Joint Commission Journal on Quality and Patient Safety. 2010;36(3):147–149. doi: 10.1016/s1553-7250(10)36024-7.
  17. Dekker SWA, Cilliers P, Hofmeyr J-H. The complexity of failure: implications of complexity theory for safety investigations. Safety Science. 2011;49(6):939–945. doi: 10.1016/j.ssci.2011.01.008.
  18. Dekker SWA, Nyce J. Cognitive engineering and the moral theology and witchcraft of cause. Cognition, Technology & Work. 2012;14(3):207–212. doi: 10.1007/s10111-011-0203-6.
  19. Dekker SWA, Nyce J, Myers D. The little engine who could not: “rehabilitating” the individual in safety research. Cognition, Technology & Work. 2012; in press. doi: 10.1007/s10111-012-0228-5.
  20. Dekker SWA, Nyce JM, van Winsen R, Henriqson E. Epistemological Self-Confidence in Human Factors Research. Journal of Cognitive Engineering and Decision Making. 2010;4(1):27–38. doi: 10.1518/155534310x495573.
  21. Dixon-Woods M. Why is patient safety so hard? A selective review of ethnographic studies. Journal of Health Services Research & Policy. 2010;15(suppl 1):11–16. doi: 10.1258/jhsrp.2009.009041.
  22. ECRI. HIT makes ECRI’s top 10 list of hazardous technologies for 2011. 2010. Retrieved 7 December 2010, from http://www.healthcareitnews.com/news/hit-makes-ecris-top-10-list-hazardous-technologies-2011.
  23. Freud S. Civilisation and Its Discontents. Penguin; London: 1930 (republished 2002).
  24. Gandhi TK. Fumbled Handoffs: One Dropped Ball after Another. Ann Intern Med. 2005;142(5):352–358. doi: 10.7326/0003-4819-142-5-200503010-00010.
  25. Garfinkel H. “Good” organizational reasons for “bad” clinic records. In: Studies in Ethnomethodology. Blackwell Publishing Ltd.; Cambridge, UK: 1967. pp. 186–207.
  26. Greenhalgh T, Potts HW, Wong G, Bark P, Swinglehurst D. Tensions and paradoxes in electronic patient record research: a systematic literature review using the meta-narrative method. Milbank Q. 2009;87(4):729–788. doi: 10.1111/j.1468-0009.2009.00578.x.
  27. Greenhalgh T, Wong G. Revalidation: a critical perspective [editorial]. Br J Gen Pract. 2011;61(584):166–168. doi: 10.3399/bjgp11X561113.
  28. Hunte GS. Creating Safety in an Emergency Department [PhD thesis]. University of British Columbia; Vancouver: 2010. Retrieved from https://circle.ubc.ca/handle/2429/27485.
  29. ISMP. Blood pressure monitor tubing may connect to IV ports. ISMP Medication Safety Alert. 2003;8(12):1–2.
  30. ISMP. Problems persist with life-threatening tubing misconnections. ISMP Medication Safety Alert. 2004;9(12).
  31. Johnson CW. Politics and patient safety don’t mix: understanding the failure of large-scale software procurement for healthcare systems. Proceedings of the Fourth IET System Safety Conference; London, UK: 2009 (in press). http://www.dcs.gla.ac.uk/×johnson/papers/politics_hit.pdf.
  32. Joint Commission on Accreditation of Healthcare Organizations. 2007 National Patient Safety Goals. 2008. Retrieved 4 March 2009, from http://www.jointcommission.org/PatientSafety/NationalPatientSafetyGoals/08_hap_npsgs.htm.
  33. Kanigel R. The One Best Way: Frederick Winslow Taylor and the Enigma of Efficiency. Penguin Books; New York, NY: 1997.
  34. Kneebone RL. Total internal reflection: an essay on paradigms. Medical Education. 2002;36(6):514–518. doi: 10.1046/j.1365-2923.2002.01224.x.
  35. Kowalsky J, Nemeth CP, Brandwijk M, Cook RI. Understanding sign outs: conversation analysis reveals ICU handoff content and form. 2004. Retrieved 7 November 2005, from http://www.ctlab.org/documents/Sccm2005%20POSTER.pdf.
  36. Macrae C. Interfaces of regulation and resilience in healthcare. In: Hollnagel E, Braithwaite J, Wears RL, editors. Resilient Health Care. Ashgate; Farnham, UK: 2013 (in press).
  37. March JG. Exploration and exploitation in organizational learning. Organization Science. 1991;2(1):71–87.
  38. March JG, Sproull LS, Tamuz M. Learning from samples of one or fewer. Organization Science. 1991;2(1):1–13.
  39. McDonald R, Waring J, Harrison S. Rules, safety and the narrativisation of identity: a hospital operating theatre case study. Sociology of Health & Illness. 2006;28(2):178–202. doi: 10.1111/j.1467-9566.2006.00487.x.
  40. Nemeth C, Kowalsky J, Brandwijk M, Kahana M, Klock PA, Cook RI. Between shifts: healthcare communications in the PICU. In: Nemeth CP, editor. Improving Healthcare Team Communication: Building on Lessons from Aviation and Aerospace. Ashgate; Aldershot, UK: 2007. pp. 135–154.
  41. Nemeth C, Wears RL, Woods DD, Hollnagel E, Cook RI. Minding the gaps: creating resilience in healthcare. In: Henriksen K, Battles JB, Keyes MA, Grady ML, editors. Advances in Patient Safety: New Directions and Alternative Approaches. Vol. 3: Performance and Tools. AHRQ Publication No. 08-0034-3. Agency for Healthcare Research and Quality; Rockville, MD: 2008. pp. 1–13.
  42. Perrow C. A framework for the comparative analysis of organizations. American Sociological Review. 1967;32(2):194–208.
  43. Reason J. Managing the Risks of Organizational Accidents. Ashgate Publishing Co.; Aldershot, UK: 1997.
  44. Smith K. Standardization as a key to quality. Healthcare Papers. 2009;9(3):56–58. doi: 10.12927/hcpap.2009.20928.
  45. Snowden D. 7 Principles of Intervention in Complex Systems. Cognitive Edge. 8 February 2012. Retrieved 16 February 2012, from http://www.cognitive-edge.com/blogs/dave/2012/02/7_principles_of_intervention_i.php.
  46. Sterman JD. Business Dynamics: Systems Thinking and Modeling for a Complex World. Irwin McGraw-Hill; Boston: 2000.
  47. Timmermans S, Berg M. The Gold Standard: The Challenge of Evidence-Based Medicine and Standardization in Health Care. Temple University Press; Philadelphia, PA: 2003.
  48. Wears RL. Approach to the patient in the emergency department. In: Harwood-Nuss A, editor. Clinical Practice of Emergency Medicine. Lippincott-Raven; Philadelphia, PA: 1999 (in press).
  49. Wears RL, Kneebone RL. The problem of orthodoxy in safety research: time for a reformation. Annals of Emergency Medicine. 2012;60(5):580–581. doi: 10.1016/j.annemergmed.2012.07.113.
  50. Wears RL, Perry SJ, Patterson ES. Handoffs and Transitions in Care. In: Carayon P, editor. Handbook of Human Factors and Ergonomics in Health Care and Patient Safety. 2nd ed. Lawrence Erlbaum Associates; Mahwah, NJ: 2011. pp. 163–171.
  51. Weiner EL, Kanki BG, Helmreich RL, editors. Cockpit Resource Management. Elsevier; Oxford, UK: 1993.
  52. Xiao Y, Vicente KJ. A Framework for Epistemological Analysis in Empirical (Laboratory and Field) Studies. Human Factors. 2000;42(1):87–101. doi: 10.1518/001872000779656642.
  53. Zink BJ. Anyone, Anything, Anytime: A History of Emergency Medicine. Elsevier; Amsterdam, NL: 2006.
