As in many modern professions since the Industrial Revolution, automation has been both a balm and a spectre for the practical work of medicine.1(p39)
A clear example of the conflict arising from the entanglement of automation and the practice of medicine is the clinical decision support system. A recent editorial by a pioneer in the field noted both the promise of such technology, designed using care algorithms and emerging artificial intelligence, to assist clinicians with challenging decisions, and the many limitations that must be addressed before these systems can deliver on that promise.2 As we move toward a future in which technology becomes a more active agent in medical decision making, what remains to be seen is the role of the physician in care delivery systems with automated agents.
Traditionally, clinicians have functioned as expert navigators by default, because they have been the primary source of clinical knowledge for a patient. For example, the Surviving Sepsis Campaign guideline summarizes the evidence base for a number of recommendations, advising that clinicians adopt best practices while weighing the strength of the underlying evidence.3 This need for the capacity to exercise expert judgement is further reinforced by the competencies set forth by the Accreditation Council for Graduate Medical Education in both medical knowledge and patient-physician communication. In a recent commentary, Obermeyer and Emanuel note,
Clinical medicine has always required doctors to handle enormous amounts of data, from macro-level physiology and behaviour to laboratory and imaging studies and, increasingly, “omic” data. The ability to manage this complexity has always set good doctors apart from the rest.4(p1218)
However, Obermeyer and Emanuel’s commentary argues that algorithms are capable of sifting through variables and establishing clinical relevance and correlation with fuzzy clinical outcomes far better than humans can. This is the fundamental premise behind the emergence of the technician-executor model. In this model, the role of the physician is defined by the procedural use of knowledge, with the physician as a supplicant to technology, institutions, and systems. As a result, the decision-making practices of the physician are viewed as a source of bias or error. Typical suggestions for improving patient care that assume this model rely on behaviorist approaches to decision support, nudging (or hammering) physicians until they follow a set of prescriptive guidelines.5 The critical distinction is whether the physician is viewed as an inherently valuable agent in the system: the behaviorist solutions that stem from the technician-executor model use language eerily similar to the patient-compliance mindset, assuming that clinician “overriding” and failure to follow policy are inherently bad decisions.6 We believe that deploying automated decision recommendations in a technician-executor model is fraught with problems.
First, rote automation of a set of guidelines can strip away the clinical context within which the evidence base was demonstrated. The operationalization of the Surviving Sepsis Campaign as 1-hour bundles, for example, has yet to produce significant evidence that the bundle improves survival in adults with sepsis.7 One important reason is that blind adherence to the bundle in every patient with a high probability of sepsis will inevitably result in unintended harm to some. The original guideline noted that there was little evidence to support the specific volume of 30 mL/kg of intravenous crystalloid fluid recommended.3 Patients with heart failure, for instance, would be better served by clinical judgment about the appropriate volume of resuscitation than by universal adherence to a volume supported by low-quality evidence. While this clearly argues for better evidence-based sepsis guidelines in the future, in the present it is important that sepsis bundle implementation efforts not treat physician “failure to comply” as universally undesirable.
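To make the arithmetic concrete, the minimal Python sketch below (a hypothetical illustration, not any published bundle implementation or clinical recommendation) shows how a rote 30 mL/kg rule scales only with weight: an 80 kg patient is prescribed a 2,400 mL bolus regardless of whether decompensated heart failure makes that volume hazardous, which is precisely the context the guideline’s low-quality evidence does not address.

```python
def bundle_bolus_ml(weight_kg: float) -> float:
    """Rote bundle arithmetic: 30 mL/kg of intravenous crystalloid."""
    return 30 * weight_kg


# The rule sees only weight, not context: an 80 kg patient is prescribed a
# 2,400 mL bolus whether or not they have decompensated heart failure.
print(bundle_bolus_ml(80))  # 2400


def judged_bolus_ml(weight_kg: float, has_heart_failure: bool) -> float:
    """Hypothetical sketch of judgment layered over the rule: smaller
    aliquots with reassessment rather than the full volume up front.
    Illustrative numbers only, not a clinical recommendation."""
    if has_heart_failure:
        return 250.0  # small aliquot, then reassess volume status
    return 30 * weight_kg
```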
Second, the lack of a binary outcome or threshold in defining many pathological states further limits the classification tasks on which many automated systems rely. Binarizing a test into the positive or negative existence of disease loses or misinterprets semiquantitative information. Consider the notion of relative pathology, in which the patient’s normal differs depending on both clinical and phenotypic context. Cytology provides counts that require interpretation by the pathologist, as quantification alone does not provide a complete picture of the cells in the sample.8 Even when a clear divide between benign and malignant cases is known, there remain cases that require physician oversight because of factors that preclude machine or pattern-based recognition. This skill of oversight is recognized in graduate medical education through the enabling competency under the professional role defined by the CanMEDS 2015 competency renewal as follows: “Demonstrate that professional judgment prevails over technologies designed to support clinical assessment, interventions, and evaluation.”9(p5)
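As a toy illustration of this point (the counts, threshold, and baseline below are hypothetical and not drawn from any cytology standard), the following sketch shows how collapsing a semiquantitative count into a positive/negative label makes a borderline sample indistinguishable from a florid one, and how relative pathology requires the patient’s own baseline rather than a universal cutoff.

```python
THRESHOLD = 100  # hypothetical universal cutoff for a "positive" result


def binarize(count: int) -> str:
    """Collapse a semiquantitative count into a binary label."""
    return "positive" if count >= THRESHOLD else "negative"


# Very different samples become indistinguishable after binarization, and a
# count of 99 is labeled "negative" despite being nearly identical to 101.
for name, count in {"just below cutoff": 99, "borderline": 101, "florid": 950}.items():
    print(f"{name}: count={count} -> {binarize(count)}")


def binarize_relative(count: int, patient_baseline: int) -> str:
    """Sketch of relative pathology: the same absolute count may be
    abnormal for one patient and unremarkable for another."""
    return "positive" if count >= 2 * patient_baseline else "negative"
```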
A third problem with the technician-executor model is that it encourages the design of structured information entry that does not capture the contextual and uncertain way in which clinicians synthesize information into knowledge about a patient. A study by Patel et al. noted that the transition from a written record to an electronic health record resulted in documentation in which the time course of events was almost entirely absent, even though it had formed a substantial portion of the written narrative on paper.10 Even for advocates of future advanced machine agents, enforcing fully structured capture will constrain our ability to describe and articulate clinical judgement. For example, while controlled natural language translates more readily into computer logic, it lacks the expressive capacity of a fully articulated language.11
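A minimal sketch (with hypothetical field names, not drawn from Patel et al.’s study or any electronic health record schema) of why fully structured capture drops the time course: the structured record stores snapshot values, with no place for the sequence and contingency that the written narrative carries.

```python
from dataclasses import dataclass


@dataclass
class StructuredEncounter:
    """Hypothetical structured fields: each is a snapshot value with no
    representation of sequence, contingency, or clinician uncertainty."""
    chief_complaint: str
    temperature_c: float
    lactate_mmol_per_l: float
    antibiotics_given: bool


narrative = (
    "Febrile on arrival; lactate 4.1. After the first fluid bolus the "
    "blood pressure transiently improved, then fell again, at which point "
    "antibiotics were broadened and the ICU was consulted."
)

# The structured form records that events occurred, but not their order
# or the reasoning that linked them.
structured = StructuredEncounter(
    chief_complaint="fever",
    temperature_c=38.9,
    lactate_mmol_per_l=4.1,
    antibiotics_given=True,
)
print(structured)
print(narrative)
```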
Finally, with dependence on automation, the technician-executor is also less likely to understand the information presented in the machine’s output, lessening the clinical value he or she can provide to the patient. Braithwaite et al. note that, given the increasing complexity of care systems, the delivery of safe medical care necessitates that humans function as a resource for system flexibility and resilience.12 If all a physician is familiar with is the execution of an algorithmic approach, he or she is unlikely to have the skills in error recognition and recovery needed to rescue the patient. This has implications for medical education, as students become deskilled in procedural practice and incapable of providing the metacognition necessary to make complex clinical decisions.13
At its core, the uncertainty of clinical information is tied to the inaccuracy with which information is applied. Sir Thomas Clifford Allbutt14 articulated this best in 1896 as a fundamental tenet of the art of diagnosis in medicine,
Clinical diagnosis, however, is not investigation, a distinction some practitioners forget; diagnosis depends not upon all facts, but upon crucial facts. Indeed, we may go farther and say that accumulation of facts is not science; science is our conception of the facts: the act of judgment, perhaps of imagination, by which we connect the unknown with the known.15(pxxvi)
This description articulates the connection between uncertainty and the clinical reasoning at the heart of medical decision making—the science of clinical medicine. Therefore, we would suggest heeding Allbutt and working to better understand how the expert-navigator uses information to manage and navigate uncertainty. While public health interventions such as vaccination or smoking cessation, for which there is a known and very strong consensus about the effects of the intervention, can be applied in an essentially algorithmic fashion, and indeed are effectively implemented as highly automated forms of decision support,16 much of clinical medicine is practiced without the evidence base to provide that certainty. A system in which physicians function largely as encoders of information and executors of algorithms will fail not only the clinicians but also the advancing automation that an improving evidence base might enable, as computers will be unable to adapt to emergent understandings of disease classification and pathophysiology. Approaches such as those in general pediatrics17 support the physician’s judgement and expertise in evidence-based decision making congruent with guidelines and procedures, rather than viewing all deviation from guidelines as error irrespective of the decision making behind it.18 We recommend caution in the face of evangelists promoting technocratic superiority over the support and improvement of human performance in medicine.
Footnotes
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
The author(s) received no financial support for the research, authorship, and/or publication of this article.
ORCID iD: David Chartash https://orcid.org/0000-0002-0265-330X
Contributor Information
David Chartash, Center for Medical Informatics, Yale University School of Medicine, New Haven, Connecticut.
Daniel Sassoon, Department of Radiology, University of Colorado at Denver Anschutz Medical Campus, Aurora, Colorado.
Naveen Muthu, Department of Biomedical and Health Informatics, Children’s Hospital of Philadelphia, and Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania.
References
- 1. Pear TH. English Social Differences. 2nd ed. Sydney: Allen & Unwin; 1955.
- 2. Shortliffe EH, Sepúlveda MJ. Clinical decision support in the era of artificial intelligence. JAMA. 2018;320(21):2199–200.
- 3. Rhodes A, Evans LE, Alhazzani W, et al. Surviving sepsis campaign: international guidelines for management of sepsis and septic shock: 2016. Intensive Care Med. 2017;43(3):304–77.
- 4. Obermeyer Z, Emanuel EJ. Predicting the future—big data, machine learning, and clinical medicine. N Engl J Med. 2016;375(13):1216–9.
- 5. Cho I, Bates DW. Behavioral economics interventions in clinical decision support systems. Yearb Med Inform. 2018;27(1):114–21.
- 6. Greenhalgh T, Howick J, Maskrey N; Evidence Based Medicine Renaissance Group. Evidence based medicine: a movement in crisis? BMJ. 2014;348:g3725.
- 7. Pepper DJ, Jaswal D, Sun J, Welsh J, Natanson C, Eichacker PQ. Evidence underpinning the Centers for Medicare & Medicaid Services’ severe sepsis and septic shock management bundle (SEP-1): a systematic review. Ann Intern Med. 2018;168:558–68.
- 8. Raab SS. Diagnostic accuracy in cytopathology. Diagn Cytopathol. 1994;10:68–75.
- 9. Ho K. The CanMEDS 2015: eHealth Expert Working Group Report. Ottawa: Royal College of Physicians and Surgeons of Canada; 2014.
- 10. Patel VL, Kushniruk AW, Yang S, Yale JF. Impact of a computer-based patient record system on data collection, knowledge organization, and reasoning. J Am Med Inform Assoc. 2000;7:569–85.
- 11. Shiffman RN, Michel G, Krauthammer M, Fuchs NE, Kaljurand K, Kuhn T. Writing clinical practice guidelines in controlled natural language. In: Fuchs NE, ed. International Workshop on Controlled Natural Language. Berlin: Springer; 2009. p. 265–80.
- 12. Braithwaite J, Wears RL, Hollnagel E. Resilient health care: turning patient safety on its head. Int J Qual Health Care. 2015;27(5):418–20.
- 13. Verghese A, Charlton B, Kassirer JP, Ramsey M, Ioannidis JP. Inadequacies of physical examination as a cause of medical errors and adverse events: a collection of vignettes. Am J Med. 2015;128(12):1322–4.e3.
- 14. Allbutt TC. A System of Medicine. London: MacMillan; 1896.
- 15. Allbutt TC. Introduction. In: A System of Medicine. Vol 1. London: MacMillan and Co; 1896. p. xix–xxxix.
- 16. Biondich PG, Downs SM, Anand V, Carroll AE. Automating the recognition and prioritization of needed preventive services: early results from the CHICA system. Paper presented at: AMIA Annual Symposium Proceedings; October 22–26, 2005; Washington.
- 17. Anand V, Carroll AE, Biondich PG, Dugan TM, Downs SM. Pediatric decision support using adapted Arden syntax. Artif Intell Med. 2018;92:15–23.
- 18. Verghese A, Shah NH, Harrington RA. What this computer needs is a physician: humanism and artificial intelligence. JAMA. 2018;319(1):19–20.