BMJ Qual Saf. 2011 Apr;20(Suppl 1):i5–i10. doi: 10.1136/bmjqs.2010.046177

Systems of service: reflections on the moral foundations of improvement

Frank Davidoff
PMCID: PMC3066845  PMID: 21450772

Abstract

Providing clinical care is above all a service; in that sense, the medical profession aspires to Aristotelian phronesis, or prudence—being ‘capable of action with regard to things that are good and bad for man.’ This intense commitment to service encourages healthcare providers to gravitate towards one or another epistemology as their preferred moral pathway to better care. One such epistemology, the ‘snail’ perspective, places particular value on knowing whether newly devised clinical interventions are both effective and safe before applying them, mainly through rigorous experimental (deductive) studies, which contribute to the body of established scientific knowledge (episteme). Another (the ‘evangelist’ perspective) places particular value on the experiential learning gained from applying new clinical interventions, which contributes to professional know-how (techne). From the ‘snail’ point of view, implementing clinical interventions before their efficacy and safety are rigorously established is morally suspect because it can result in ineffective, wasteful and potentially harmful actions. Conversely, from the ‘evangelist’ point of view, demanding ‘hard’ proof of efficacy and safety before implementing every intervention is morally suspect because it can delay and obstruct the on-the-ground learning seen as being urgently needed to fix ineffective, inefficient and sometimes dangerous existing clinical practices. Two different moral syndromes—sets of interlocked values—underlie these perspectives; both are arguably essential for better care. Although it is not clear how best to leverage their combined strengths, a true symbiotic relationship between the two appears to be developing, one that leaves the two syndromes intact but softens their epistemological edges and supports active, close, respectful interaction between them.

Keywords: Qualitative research, quality of care, randomised controlled trial, research

Introduction

The order of Knights Hospitallers, founded in the 11th century, provided hostels for pilgrims to the Holy Land, and cared for the sick among them; their Hôtels-Dieu were important precursors of modern hospitals.1 Every brother, at his induction into the Knights, recited this vow from the earliest rule of the Order: ‘The brethren of the Hospital should serve our Lords, the sick, with zeal and devotion, as if they were serfs to their Lords.’

Although healthcare has moved away over the centuries from this constrained and righteous vision of its purpose, medicine remains above all a service profession. As such, it is concerned primarily with what Aristotle called phronesis: prudence, or the practical wisdom that renders people and organisations ‘capable of action with regard to things that are good and bad for man.’2

Four widely accepted ethical principles frame medicine's approach to the realisation of phronesis: do no harm (non-maleficence); improve patients' well-being (beneficence); be patient-centred (respect and preserve patient autonomy); and deliver care even-handedly (justice). Although virtually everyone agrees that these precepts are necessary, the history of medicine also makes it clear that they are not sufficient. For example, Western doctors believed for millennia that bloodletting was beneficial and harmless; they practised it widely until the 19th century when Pierre Louis in Paris, using la méthode numérique, demonstrated that bleeding not only failed to cure patients but also harmed and sometimes even killed them.

What was missing from medicine's moral code was a fifth ethical principle now recognised as a central element of all professionalism, namely, ‘unceasing movement towards new levels of performance’—in a word, improvement.3 At some level, all professionals connected with healthcare accept this principle, and over time the practice of clinical medicine has, of course, changed dramatically, mostly (although not always) for the better. The touchstone of these improvements has unarguably been the increase in both basic science and clinical knowledge. The epistemology of basic science is in itself quite complicated, but the generation of clinically useful knowledge is even more complicated, not least because it consists of two very different components. The first is knowing the right thing to do—that is, knowing whether clinical interventions actually work; the abstract conceptual knowledge, invariable in time and space, that Aristotle referred to as episteme. The second is knowing how to do things right—that is, the concrete, variable, time- and context-dependent know-how (or competence) that Aristotle referred to as techne.

Two moral syndromes

Each of these ways of knowing has its dedicated adherents who differ in what they see as the most appropriate way for medicine to meet its moral obligation of continuous movement to new levels of performance. The depth of feeling sometimes expressed over these differences suggests that the psychological forces at work go beyond the strictly intellectual, and are closer to the righteousness associated with moral positions. Stated differently, each perspective seems convinced that its concepts and methods provide the true or, at least, more moral path to better medical care.

These two moral perspectives are not new; they have appeared in various guises for centuries, in philosophy, natural sciences, psychology, statistics and elsewhere. For example, natural scientists have long seen the methods of social science as lacking true scientific validity, in particular since social sciences have difficulty in producing verifiable theoretical predictions. As recently as 1996, the longstanding differences between these moral positions erupted into the so-called ‘science wars,’ in which natural scientists publicly questioned whether social science leads to ‘dangerous antirationalism and relativism.’ The struggle continues to this day.2

In medicine, Sackett and Holland suggested some 35 years ago that the controversy brewing over the then-emerging approaches to screening for disease stemmed from fundamental ideological differences between ‘advocates’ and ‘methodologists’ or, as they later referred to them, ‘evangelists’ and ‘snails.’4 Evangelists, in these authors' view, held that:

‘the pre-existing evidence plus common sense—in the face of the ongoing toll of disability and untimely death—demand massive screening programs for the detection of citizens with risk factors for these disorders now, even in the absence of experiments to determine whether the alteration of many risk factors will, in fact, alter risk.’

Snails, on the other hand, were equally convinced that:

‘screening, like any other untested health maneuver, may do more harm than good and must meet scientific as well as political criteria before it is implemented.’

In short, the evangelist perspective suggests that under conditions of uncertainty it can be morally justifiable to ‘Just do it, and learn as you go,’ while from the snail point of view, the more moral approach when faced by uncertainty is ‘Look before you leap;’ the difference between ‘action’ and ‘caution.’5

The essential moral difference between these two epistemologies comes into sharper focus when considering what each sees as the other's failings, rather than what each senses as its own correctness. Thus, evangelists consider the snail demand for ‘hard’ proof of efficacy and safety as a precondition for putting a new clinical intervention into practice as morally unjustified precisely because obtaining ‘sufficient’ proof can delay and obstruct the actions seen as urgently needed to fix ineffective, inefficient, and sometimes harmful, or even lethal, existing care systems. Conversely, snails consider the evangelist insistence on implementing innovative medical procedures before their efficacy and safety are established as morally unjustified precisely because even reasonable-seeming interventions can waste scarce resources, introduce ineffective, inefficient and potentially even harmful changes in care, and generate serious opportunity costs.

It also helps to understand the differences between evangelist and snail epistemologies by thinking of them in terms of two contrasting ‘moral syndromes.’ The concept of moral syndromes was introduced in 1992 by the scholar and critic Jane Jacobs6 in arguing her thesis that two sets of moral precepts—drivers of desirable or acceptable actions—govern the two disparate ways of surviving in public life. She saw these two ‘systems of survival’ as taking (as in government—think taxes; the church—think tithes; and the military—think conquest) and trading (as in business—think investment, contracts, market value and customer satisfaction). She labelled the tightly integrated sets of moral precepts that underlie these two systems as the guardian and commercial moral syndromes (table 1).

Table 1.

Systems of survival: the guardian and commercial moral syndromes

Guardian moral syndrome
  • Shun trading
  • Exert prowess
  • Be obedient and disciplined
  • Adhere to tradition
  • Respect hierarchy
  • Be loyal
  • Take vengeance
  • Deceive for the sake of the task
  • Make rich use of leisure
  • Be ostentatious
  • Dispense largesse
  • Be exclusive
  • Show fortitude
  • Be fatalistic
  • Treasure honour

Commercial moral syndrome
  • Shun force
  • Come to voluntary agreements
  • Be honest
  • Collaborate with strangers
  • Compete
  • Respect contracts
  • Use initiative and enterprise
  • Be open to inventiveness and novelty
  • Be efficient
  • Promote comfort and convenience
  • Dissent for the sake of the task
  • Invest for productive purposes
  • Be industrious
  • Be thrifty
  • Be optimistic

Two analogous moral syndromes underlie what might be called medicine's ‘systems of service’ (table 2). Although the two share a focus on learning, each set of precepts stands in direct contrast to the other, a point-counterpoint that primarily reflects differences between the two fundamental modes of scientific learning: evangelists tend to rely on inductive learning, largely by observation and replication (confirmation), while snails tend to learn deductively, largely by hypothesis testing (experimental evaluation).7 8

Table 2.

Systems of service: the evangelist and snail moral syndromes

Evangelist moral syndrome
  • Take on messy problems
  • Respect and include context
  • Adapt interventions
  • Seek discovery and explanation
  • Learn from heterogeneity
  • Value learning by trial and error
  • Test hypotheses by attempting replication and confirmation
  • Seek local impact
  • Require application
  • Accept credit for the team
  • Seek timeliness

Snail moral syndrome
  • Solve sharply defined problems
  • Avoid and control out context
  • Adhere strictly to protocols
  • Pursue causal relationships
  • Strive for homogeneity
  • Rely on structured, sequential learning
  • Test hypotheses by attempting falsification
  • Seek generalisability
  • Require publication
  • Expect personal credit
  • Seek timelessness

It is more than coincidental, therefore, that the moral differences between evangelist and snail perspectives find an echo in the more pragmatic ‘loss functions’ associated with deductive and inductive reasoning.7 Thus, the loss of observational research is seen as unreasonable, since it would limit our ability to discover new phenomena, mechanisms and causal models; after all, ‘every discovery contains an “irrational element,” or “a creative intuition”,’9 and testable hypotheses have to come from somewhere. Conversely, the loss of hypothesis testing through experimental studies is seen as unreasonable, since ineffective or unsafe clinical measures would then be adopted rather than rejected.

Enter the science of improvement

The emergence of a science of improvement within medicine in the last few decades has amplified the differences between the two epistemologies. The interventions in this data-driven, system-level discipline are designed to achieve appropriate, consistent and efficient delivery of established clinical measures by changing human performance. They are complex, generally consisting of multiple, reciprocally interacting elements. By design, they evolve over time in response to continuing feedback, and hence are intrinsically unstable. They are hard to standardise, since they are most effective when adapted to the local circumstances. Perhaps most importantly, they are inherently context-dependent. Healthcare improvement in this sense is therefore a hybrid discipline, primarily a science of social change, and secondarily a clinical or biomedical one.10–13

Improvement thus operates entirely differently from the inanimate clinical interventions (tests, drugs, procedures) that affect biological or physical systems. Although true experimental methods are sometimes used to evaluate whether improvement interventions work, those methods depend on fixed protocols, assume unidirectional cause–effect relationships and are designed to control the influence of context out of causal pathways; their use for that purpose is at best extremely difficult, and at worst inappropriate. A variety of alternative approaches that draw on economics, social sciences and other disciplines have therefore emerged for evaluating improvement programmes.8 11–15

The introduction of rapid response team systems (RRTs) serves to illustrate the ways in which these two epistemological approaches can play out with regard to the introduction of a clinical innovation. The impetus for RRT systems arose in the 1990s with the observation that most of the hospitalised patients who experienced cardiopulmonary arrest in that era demonstrated signs and symptoms of physiological deterioration in the prearrest period that were not recognised or acted on appropriately. These findings suggested that earlier detection and intervention by dedicated teams with intensive care expertise might reduce cardiac arrests, mortality, unplanned ICU transfers and other important adverse clinical outcomes. In response to this thinking, many hospitals throughout the world began implementing various versions of RRT systems despite the lack of formal evaluation of their efficacy and safety.

Reaction to the relatively rapid implementation of RRT systems has evolved along two pathways. The first, which might be termed ‘Run, don't walk,’ reflects the palpable frustration of those who are convinced that the benefits and safety of a particular intervention are so obvious that delay in its implementation would be potentially harmful, hence morally suspect. This action-oriented perspective was expressed as follows in a 2007 editorial: ‘The correct question is: Is there a rationale for withholding critical care resources from critically ill patients outside the intensive care unit? The answer is obvious. No.’16

The alternative view, described in print as ‘Walk, don't run,’ reflects the palpable frustration of those who are convinced that introducing a healthcare intervention before its efficacy, safety and efficiency have been firmly established would be potentially wasteful, hence morally suspect. A 2006 commentary, responding in part to an earlier published opinion that failure to implement RRTs was tantamount to malpractice, expressed this cautionary perspective as follows: ‘In view of the limitations of the evidence and the heterogeneity of study results, it seems premature to declare Rapid Response Systems (RRSs) as the standard of care.’17

The way forward

Jacobs argued that the guardian and commercial moral syndromes are both essential in sustaining a civilised, productive public life. She supported her argument by pointing out, among other things, that when one syndrome tries to take over the functions of the other in public life the result is nearly always a distorted, dysfunctional ‘monstrous hybrid.’6 18 The historical record in medicine is consistent with that view, since most of the clinical interventions used during the centuries in which evangelist epistemology held sway were either ineffective or harmful or both, and it was only when the methods of science, including la méthode numérique, were introduced that cautionary ‘evidence-based’ (‘authoritative’) medicine began to temper traditional, action-oriented ‘eminence-based’ (‘authoritarian’) medicine.19 20

At the same time, however, the historical record indicates that dominance of the snail perspective is not without problems of its own; the recent history of thrombolytic therapy is a case in point. The use of thrombolysis in patients with myocardial infarction began as early as 1971, and by 1973 five controlled studies had convincingly demonstrated that this therapeutic approach lowered mortality by about 20%, and was relatively safe. Despite the early availability of such strong evidence (thanks, to be sure, to rigorous experimental studies), recommendations for its routine use did not appear until 14 years later, during which time clinical researchers carried out an additional 25 randomised clinical studies involving tens of thousands of patients.21 It can be argued, therefore, that the seemingly endless preoccupation with studying the efficacy and safety of thrombolytic therapy was not only unnecessary and wasteful, but also delayed its widespread use, possibly contributing to the loss of many thousands of lives.

History therefore helps to understand why the evangelist and snail perspectives have been cautious about embracing each other's concepts and methods. But if we accept the premise that both action and caution are essential systems of service in medicine, the challenge then becomes how to combine or balance these two seemingly incompatible approaches. Jacobs suggested that the only workable relationship between the guardian and commercial syndromes is a state of ‘symbiosis’ in which the two syndromes remain essentially intact, and their strengths become complementary by working unceasingly toward close, respectful interaction between them.6

Such a symbiosis between the evangelist and snail epistemologies at the moral level is likely to be difficult, because people are so reluctant to compromise their moral positions. That said, it should be possible for each epistemology at least to begin by accepting the reality that it is neither infallible nor complete unto itself. In fact, such acceptance may already have begun. Interestingly, although it has not involved formal negotiation, this coming together appears to be developing in a way that is analogous to the strategy of principled negotiation—that is, working from the agreed-upon merits of principles—rather than through defence of moral positions.5 22 Witness, for example, the recent suggestion by leading clinical trialists that what has changed in recent years is not the trials themselves but ‘our recognition of the complexity of the world in which [randomised trials] are conceived, funded, carried out, disseminated, understood, used, and abused,’ as well as increased awareness of ‘the pitfalls of over-reliance on quantitative evidence and its limited influence on healthcare.’23

It also seems likely that close, respectful interaction between evangelist and snail epistemologies at the moral level will be encouraged by softening of hard edges at the technical level. Here, too, there is evidence that such softening has begun. Within the snail perspective, for example, fundamental reconsideration of the roles of experimental and observational studies in clinical research has led to the reshaping of evidence into two mirror-image hierarchies.7 Many clinical methodologists also now agree that when baseline measurements are stable, and an effect size appears to be large (ie, when the signal/noise ratio is high), controlled trials may not be required.24 Formal methods for incorporating the heterogeneity of patient responses into the analysis of trials are becoming available.25 Systematic reviews are increasingly recognised as tools for understanding variation across trials, as well as for validating efficacy.26 The fallacy of judging the strength of a study on the basis of a single element such as randomisation is being taken seriously.27 Bayesian methods, which are logically more robust and do not demand fixed study protocols, are beginning to replace traditional frequentist analysis.28 29 And the overwhelming volume of controlled trials, many redundant or irrelevant to clinical practice, has led to calls for discipline in selecting study topics and streamlining the systematic review process.30

Softening at the technical level has begun within the evangelist perspective (and particularly the improvement community) as well. For example, the opportunity costs of adopting an intervention prematurely are becoming recognised more widely.31 The importance (and feasibility) of evaluating complex interventions as early as possible, using the strongest possible study design, is receiving serious attention13 (an echo of Thomas Chalmers' doctrine of ‘Randomize the first patient when a new therapy becomes available—you may never get another chance!’). Crucial differences among the kind of measurements needed for improvement, accountability and research purposes have been well described,32 as has the importance of using the appropriate primary outcome measurement in evaluating policy and service interventions.12 Sophisticated frameworks for understanding context are emerging.33 The Plan–Do–Study–Act cycle, a formal, system-level version of experiential learning that encourages frequent small ‘tests of change,’ is increasingly used as a bridging strategy between action and caution.34 And detailed consensus guidelines for complete, accurate and transparent reporting of complex improvement interventions are now available.35 36

We clearly have a long way to go in achieving meaningful rapprochement between evangelist and snail moral syndromes. That's hardly surprising, of course; the tensions between guardian and commercial moral values are far from being resolved, despite centuries of trying. The evidence does suggest, however, that we are developing the kind of symbiosis between systems of service that we very much need in order to provide the best possible care for our Lords, the sick.

Footnotes

Competing interests: FD is a part-time employee of Institute for Healthcare Improvement, which has publicly promoted the use of rapid response team systems.

Provenance and peer review: Not commissioned; externally peer reviewed.

References

  • 1.Johnsen AR. Our lords, the sick. In: Longo LD, ed. Our Lords, the Sick. McGovern Lectures in the History of Medicine and Medical Humanism. Malabar, FL: Krieger Publishing Company, 2004:9–16 [Google Scholar]
  • 2.Flyvbjerg B. Making Social Science Matter. Why Social Inquiry Fails and How It Can Succeed Again. New York: Cambridge University Press, 2001 [Google Scholar]
  • 3.Nowlen PM. A New Approach to Continuing Education for Business and the Professions. New York: Collier Macmillan Publishers, 1988:11 [Google Scholar]
  • 4.Sackett DL, Holland WW. Controversy in the detection of disease. Lancet 1975;2:357–9 [DOI] [PubMed] [Google Scholar]
  • 5.Davidoff F. Evangelists and snails redux: the case of cholesterol screening. Ann Int Med 1996;124:513–14 [DOI] [PubMed] [Google Scholar]
  • 6.Jacobs J. Systems of Survival. A Dialogue on the Moral Foundations of Commerce and Politics. New York: Vintage Books, 1992 [Google Scholar]
  • 7.Vandenbroucke JP. Observational research, randomized trials, and two different views of medical science. PLoS Med 2008;5:e67. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Ziliak ST, McCloskey DN. The Cult of Statistical Significance. How the Standard Error Costs us Jobs, Justice, and Lives. Ann Arbor, MI: University of Michigan Press, 2008 [Google Scholar]
  • 9.Popper K. The Logic of Scientific Discovery. New York: Routledge, 2002:8 [Google Scholar]
  • 10.Batalden P, Davidoff F. What is ‘quality improvement’ and how can it transform health care? Qual Saf Health Care 2007;16:2–3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Davidoff F. Heterogeneity is not always noise. Lessons from improvement. JAMA 2009;302:2580–6 [DOI] [PubMed] [Google Scholar]
  • 12.Lilford RJ, Chilton PJ, Hemming K, et al. Evaluating policy and service interventions: framework to guide selection and interpretation of study endpoints. BMJ 2010;341:c4413. [DOI] [PubMed] [Google Scholar]
  • 13.Craig P, Dieppe P, MacIntyre S, et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ 2008;337:a1655. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Pawson R, Tilley N. Realistic Evaluation. London: SAGE Publications, 1997 [Google Scholar]
  • 15.Shadish WR, Cook TD, Leviton LC. Foundations of Program Evaluation. Theories of Practice. London: SAGE Publications, 1991 [Google Scholar]
  • 16.DeVita MA, Bellomo R. The case of rapid response systems: are randomized clinical trials the right methodology to evaluate systems of care? Crit Care Med 2007;35:1413–14 [DOI] [PubMed] [Google Scholar]
  • 17.Winters BD, Pham J, Pronovost PJ. Rapid response teams—walk, don't run. JAMA 2006;296:1645–7 [DOI] [PubMed] [Google Scholar]
  • 18.Davidoff F. Medicine and commerce 1: Is managed care a ‘monstrous hybrid’? Ann Int Med 1998;128:496–9 [DOI] [PubMed] [Google Scholar]
  • 19.Silverman WA. Where's the Evidence? Debates in Modern Medicine. New York: Oxford University Press, 1998 [Google Scholar]
  • 20.Trohler U. To Improve the Evidence of Medicine. The 18th Century British Origins of a Critical Approach. Edinburgh: Royal College of Physicians of Edinburgh, 2000 [Google Scholar]
  • 21.Egger M, Smith GD, O'Rourke K. Rationale, potentials, and promise of systematic reviews. In: Egger M, Davey Smith G, Altman DG, eds. Systematic Reviews in Health Care: Meta-Analysis in Context. London: BMJ Books, 2001:3–22 [Google Scholar]
  • 22.Fisher R, Ury W, Patton B. Getting To Yes. Negotiating Agreement Without Giving In. New York: Penguin Books, 1991 [Google Scholar]
  • 23.Jadad AR, Enkin MW. Randomized Controlled Trials. Questions, Answers and Musings. 2nd edn Malden, MA: Blackwell Publishing, 2007:xvii–xix [Google Scholar]
  • 24.Glasziou P, Chalmers I, Rawlins M, et al. When are randomised trials unnecessary? Picking signal from noise. BMJ 2007;334:349–51 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Kent DM, Hayward RA. Limitations of applying summary results of clinical trials to individual patients: the need for risk stratification. JAMA 2007;298:1209–12 [DOI] [PubMed] [Google Scholar]
  • 26.Thompson SG. Why and how sources of heterogeneity should be investigated. In: Egger M, Davey Smith G, Altman DG, eds. Systematic Reviews in Health Care: Meta-analysis in Context. London: BMJ Books, 2001:157–75 [Google Scholar]
  • 27.Glasziou P, Vandenbroucke J, Chalmers I. Assessing the quality of research. BMJ 2004;328:39. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Goodman SN. Toward evidence-based medical statistics. 1. The p-value fallacy. Ann Intern Med 1999;130:995–1004 [DOI] [PubMed] [Google Scholar]
  • 29.Goodman SN. Toward evidence-based medical statistics. 2: The Bayes factor. Ann Intern Med 1999;130:1005–13 [DOI] [PubMed] [Google Scholar]
  • 30.Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: How will we ever keep up? PLoS Med 2010;7:e1000326. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Litvak E, Pronovost PJ. Rethinking rapid response teams. JAMA 2010;304:1375–6 [DOI] [PubMed] [Google Scholar]
  • 32.Solberg LI, Mosser G, McDonald S. The three faces of performance measurement: improvement, accountability, and research. Jt Comm J Qual Improv 1997;23:135–47 [DOI] [PubMed] [Google Scholar]
  • 33.Greenhalgh T, Stones R. Theorising big IT programmes in healthcare: strong structuration theory meets actor-network theory. Soc Sci Med 2010;70:1285–94 [DOI] [PubMed] [Google Scholar]
  • 34.Batalden P, Davidoff F. Teaching quality improvement. The devil is in the details. JAMA 2007;298:1059–61 [DOI] [PubMed] [Google Scholar]
  • 35.Davidoff F, Batalden P, Stevens D, et al; for the SQUIRE development group. Publication guidelines for quality improvement in health care: evolution of the SQUIRE project. Qual Saf Health Care 2008;17(Suppl 1):i3–9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Ogrinc G, Mooney SE, Estrada C, et al. The SQUIRE (Standards for QUality Improvement Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration. Qual Saf Health Care 2008;17(Suppl 1):i13–32 [DOI] [PMC free article] [PubMed] [Google Scholar]

Articles from BMJ quality & safety are provided here courtesy of BMJ Publishing Group
