AMIA Annual Symposium Proceedings. 2018 Apr 16;2017:969–978.

Thinking Together: Modeling Clinical Decision-Support as a Sociotechnical System

Mustafa I Hussain 1, Tera L Reynolds 1, Fatemeh E Mousavi 1, Yunan Chen 1, Kai Zheng 1
PMCID: PMC5977688  PMID: 29854164

Abstract

Computerized clinical decision-support systems are members of larger sociotechnical systems, composed of human and automated actors who send, receive, and manipulate artifacts. Sociotechnical consideration is rare in the literature. This makes it difficult to comparatively evaluate the success of CDS implementations, and it may also indicate that sociotechnical context receives inadequate consideration in practice. To facilitate sociotechnical consideration, we developed the Thinking Together model, a flexible diagrammatical means of representing CDS systems as sociotechnical systems. To develop this model, we examined the literature through the lens of Distributed Cognition (DCog) theory. We then present two case studies of vastly different CDSSs, one almost fully automated and the other with minimal automation, to illustrate the flexibility of the Thinking Together model. We show that this model, informed by DCog and the CDS literature, is capable of supporting both research, by enabling comparative evaluation, and practice, by facilitating explicit sociotechnical planning and communication.

Introduction

Computerized clinical decision support systems (CDSSs) are designed to improve patient safety, but may cause harm if implemented improperly.1 The determinants of CDSS effectiveness are poorly understood, and the problem space is complicated by heterogeneity in CDSS design.2 Although CDSS taxonomies exist that acknowledge the importance of sociotechnical context,3,4 which we use here to refer to the collection of relationships between actors and artifacts, we found in this work that this context is often missing from peer-reviewed CDSS research. Specifically, we found that approximately 1 in 7 articles did not define or describe how suggestions were delivered to the user (e.g., “a pop-up alert,” “a phone call,” “a fax”). Additionally, although we identified 9 articles documenting a system in which a pharmacist filtered automated suggestions for presentation to a physician, none provided enough information to model the sociotechnical system in which the computerized subsystem was situated; key sociotechnical information was missing, such as how advisories were presented (as individual pop-up alerts or as a list) and what other advisory systems were already in place. This makes comparative evaluation difficult. While it is possible that sociotechnical design is considered in practice and simply not reported, it is also possible that this reflects a scarcity of sociotechnical consideration in practice.

In this theoretical development paper, we aim to expand the understanding of CDSSs as sociotechnical systems, where humans and computers think together to steer the course of clinical care. Explicit sociotechnical consideration may also improve adoption and clinician responses to computerized decision-support advisories, which may, in turn improve workflow, care coordination, and ultimately quality of care and patient safety.5 Inadequate sociotechnical consideration can disrupt work and even lead to patient mortality.1

To this end, we present the main contribution of this paper, the Thinking Together model, a diagrammatical representation for communicating the sociotechnicalities of CDSSs. It provides a common diagrammatical language for guiding the sociotechnical design of CDS systems, conducting research studies of these systems, and reporting research results. Widespread usage would provide researchers with a uniform body of comparable literature. This would ultimately benefit practice indirectly, by providing evidence upon which to base the design of sociotechnical systems. The Thinking Together model also provides a more direct benefit to practitioners: Monk and Howard similarly created the Rich Picture as a tool for designers of business information technology to think about, discuss, and negotiate sociotechnical design by making it explicit.6 We developed the Thinking Together model specifically for the sociotechnical context surrounding CDSSs, to benefit both research and practice.

We developed the Thinking Together model using a systematic approach, by conducting a comprehensive review of articles documenting CDSSs published in the past decade. We used the theoretical lens of Distributed Cognition (DCog)7 to interpret descriptions of CDSSs, because DCog is well-suited to studying the complex, collaborative, and dynamic nature of clinical work,8,9 and it has been widely used to study health informatics applications, such as interruption management in the intensive care unit10 and whiteboard use in a trauma center and an operating room.9

Rather than focusing on the individual actors (e.g. an internist, a pharmacist), DCog emphasizes the sociotechnical system, which includes actors, roles, and artifacts. In this study context, actors perform roles. Human actors include physicians, nurses, and pharmacists. We also speak of computer systems as actors—such as a computer program that fires alerts. These actors think together by representing and conveying information through artifacts, such as pop-up alerts, faxes, and spoken words. Artifacts have intrinsic properties with implications for design; an utterance is immediate, intangible, and ephemeral, while a fax alert must be discovered, can be annotated, and persists until destroyed. By applying DCog to understand CDSSs, we are able to more clearly depict important sociotechnical features, and how they work together to achieve a common goal: delivering high-quality patient care.

In the next section, we briefly present the history of CDSSs that led up to this work. Then, we describe the methods that we used to develop and validate Thinking Together model instances. In the section titled Two Case Studies, we present two resultant Thinking Together model instances, based on CDSSs selected from the literature, to illustrate how the model can be used to represent CDSSs as sociotechnical systems. We follow these with a discussion of uses, benefits, and limitations of these model instances, and some concluding remarks.

Background

Long before the computer became widely available, a clinician’s decision making was supported by artifacts such as reference manuals and dose charts, and by talking to other clinicians. The advent of the computer presented the possibility of using computing methods to automatically generate decision-support advisories. There was a significant effort to develop expert systems, usually based on artificial intelligence methods, to automate the diagnostic process.11 This attempt fell short, partially because these AI-based applications were unable to reason anatomically or temporally, and partially because they were unable to explain their reasoning.11 These shortcomings persist in today’s machine learning applications,8 and much computerized clinical decision support instead tends to use explicitly defined, step-by-step rules, usually drawn from evidence-based clinical guidelines.4 Today’s applications of CDSSs include antibiotic stewardship,13 opioid management,14 and adjusting doses for renal insufficiency.15 The suboptimal usage of these CDSSs, however, has been an enduring issue. Poor usability is well documented,16 as is the widespread problem of alert fatigue,17 in which low rule specificity and a lack of accurate patient data result in too many irrelevant alerts. When alerts are no longer perceived to be valid problem-indicating cues, they become habitually ignored.
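To make the contrast with opaque statistical models concrete, an explicitly defined, step-by-step rule of the kind that drives much of today’s decision support can be sketched in a few lines. The sketch below is our own illustration; the drug name, dose limit, and creatinine-clearance threshold are hypothetical, not clinical guidance.

```python
from typing import Optional

# Hypothetical guideline-style rule table: drug names and dose limits
# here are illustrative only, not clinical advice.
MAX_DOSE_MG = {"examplecillin": 500.0}

def renal_dose_alert(drug: str, dose_mg: float, crcl_ml_min: float) -> Optional[str]:
    """Return an advisory string if the dose should be reduced, else None."""
    limit = MAX_DOSE_MG.get(drug)
    if limit is None:
        return None  # no rule encoded for this drug
    # Step-by-step rule: halve the maximum dose when CrCl falls below 30 mL/min.
    if crcl_ml_min < 30.0 and dose_mg > limit / 2:
        return (f"Renal insufficiency (CrCl {crcl_ml_min:.0f} mL/min): "
                f"consider reducing {drug} to at most {limit / 2:.0f} mg")
    return None
```

Unlike a trained statistical model, a rule like this can explain itself: the advisory text simply restates the condition that fired it.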

Today’s CDSSs are highly heterogeneous, with different capabilities for patient information access, rule programming, advisory display, and shortcuts for immediate action.2 Various taxonomies have been developed to organize this heterogeneity. For example, Wright et al. developed a taxonomy to classify the problems that computerized CDSSs address, as well as some user interface characteristics.4 Berlin et al. presented a taxonomy with 26 axes and 108 descriptors to classify clinical context, information sources, the underlying recommendation-producing mechanism, information delivery, and the relationship to clinical workflow,3 noting that one should be cautious when generalizing results of randomized controlled trials of a CDSS intervention across different clinical or workflow settings. Over a decade later, we find that generalization remains difficult: although these taxonomies exist, many research papers do not use them to report CDSS characteristics. While the question of why these taxonomies are underused remains open, we speculate that it may be because taxonomies do not depict systemic configuration, and so may be of limited use in CDSS design. Rather than provide another taxonomy, our goal in this work is to provide a means of depicting systemic configuration that practitioners find useful, and that can be just as easily repurposed to provide context when submitting manuscripts for publication. The Rich Picture, a similar approach, has seen success in business-oriented software design.6

Methods

In order to develop a flexible representation system—the Thinking Together model—that characterizes CDSSs and depicts their systemic configurations, we first needed to arrive at a reasonably inclusive set of features that differentiate CDS systems. We used a systematic review approach, illustrated in Figure 1. In the first stage, we developed search queries for the PubMed/MEDLINE, EMBASE, CINAHL, and Cochrane literature databases. The queries included keywords such as “decision support,” “alert,” and “error.” Since the model is intended to represent contemporary systems, we restricted the queries to literature published in the last 10 years. The final queries returned a total of 2,760 results, of which 996 were duplicates. We reviewed the remaining 1,764 unique papers for relevance. Of these, 255 met our criteria for inclusion: a paper had to be (1) peer reviewed, (2) about a CDSS targeting clinicians, and (3) written in English. When the same research team reported multiple times on the same system, we kept only the most thorough report.

Figure 1. Literature examination process.

We attempted to identify descriptions of the CDSS in all of these papers, in particular what clinical role was targeted (e.g., physician or pharmacist), how alerts were generated and filtered (e.g., automatically or by a pharmacist), how the advisory was presented (e.g., pop-up dialog or human phone call), and what action shortcuts it provided (e.g., cancel or substitute medication), if any. Notably, approximately 1 in 7 did not define or describe any of this key information. This eliminated 38 articles. For the 217 papers that remained, we inductively identified categories that characterize the sociotechnical aspects of CDS systems, using the constant comparative method.18 More specifically, we started with an empty list of categories, and for each paper, we compared the described CDSS to the current list of categories, such as “interruptive pop-up alert” and “tab highlights yellow to draw attention, user clicks tab and views list of alerts”. If the system did not fit in any of the existing categories, we extracted or generated a brief description to add to the list. Otherwise, we categorized the system accordingly. This analysis was independently performed by two of the authors, Hussain and Reynolds.

In the second stage, we iteratively developed the properties of the Thinking Together model by generalizing phrases from the categories into properties, such as by generalizing the phrases “interruptive,” “non-interruptive,” and “searched database” (among others) into a broader means of discovery and retrieval. Some of these properties formed groups; for example, we grouped means of discovery and retrieval and means of destruction (again, among others) into a broader means of interaction. Properties that characterized content mapped to Grice’s maxims of communication,19 so we formed these into a group as well. We then organized these properties into theoretical elements by mapping them to DCog; the DCog concepts of systems, roles, and artifacts became prominent. Any ambiguities were resolved through discussion between the two coders. Finally, we selected two sample CDSSs to validate the model. We purposefully selected one system that is almost fully automated, and another that is almost entirely manual, in order to “stress test” the model using extreme cases and ensure its robustness and representativeness.

Results

Before describing the theoretical elements and their properties, we will provide a general overview of the Thinking Together model. Roles are played by actors; these actors manipulate artifacts. Together, they compose a sociotechnical system. These concepts are holdovers from DCog, where a sociotechnical system performs cognition by transmitting information between specialized roles, which process information and collectively determine the course of action. Information is distributed not only among the heads of actors, but also across external artifacts, enabling problem-solving20 and team coordination.7 In this work, we differentiate between human and automated actors.

Theoretical elements and properties

As previously mentioned, we identified theoretical elements and properties relevant to CDSS sociotechnical consideration, based on the systematic review and the DCog theory. The results are presented in Table 1, and discussed in this section. Key concepts are italicized in this section. These concepts are important to understand before interpreting instances of the Thinking Together model, which we present in the case studies that follow this section. Below, we explain these theoretical elements and their properties.

Table 1.

Theoretical elements and their properties.

Systems: Purpose; Inception; Maintenance.

Roles: Human or automated; Level of expertise; Area of specialization.

Artifacts: Physical location; Medium (virtual or physical); Shortcuts for action; Follow-ups; Means of interaction (discovery and retrieval; generation and processing; storage and sending; destruction); Content, per Grice’s maxims (quantity; quality; relation; manner).

Sociotechnical Systems

The Systems element addresses the fundamental premise underlying a CDSS: What is the purpose for this system’s existence? Is it there to support clinical care, or to avoid legal consequences?21 If the latter, then it is worth considering whether other, more clinically valid CDS alerts are more likely to be ignored in the same sociotechnical system due to a perception of pop-up alerts as an irrelevant nuisance, contributing to alert fatigue.17

Also, what is the story of this system’s inception? Was it by administrative mandate, or regulatory requirements?22 Did clinicians push for it? Whether manually-programmed or machine learning algorithms are used, were they calibrated to the locality?23 Finally, how is the system maintained? Is it updated to reflect emerging, improved understandings of medical practices and/or the particular sociotechnical environment? Is there a regular, ongoing review of consequences that results in safety-improving updates to the sociotechnical system?24

Roles

There are many roles in healthcare, including physicians, nurses, clerical staff, administrators, software developers, and patients. In this work, we draw a distinction between human actors and automated actors. Human information processing may be characterized as ranging from fast and subconscious to slow and conscious.25 Automated actors (in the form of computer programs) can apply rule-based or statistical methods quickly, but their results can be lacking due to an absence of context-dependent understanding, and machine learning applications tend to suffer from “black boxing,” wherein a human-understandable rationale may not be readily available.12 Also, whereas humans tend to be better conversationalists than chat-bots, computers excel at creating precise drawings and producing formatted information quickly.

While examining the literature, it became apparent that, whether it is an experienced brain surgeon, or an advanced analytic application that monitors patients for signs of sepsis based on algorithms trained with historical data, a role has a level of expertise and an area of specialization.

Artifacts

Artifacts include physical objects such as phones, dose calculators and conversion charts, checkbox forms, injection site maps, exam room flags, and patient schedules, as well as virtual objects, such as pop-up alerts, lists of suggested doses presented during order entry, and online reference entries. The artifact’s properties are largely reflected in Berlin’s taxonomy;3 here we focus on contextualizing it as a core element of a sociotechnical system.

Physical artifacts are designable and configurable by nearly any user. With a paper chart, one can fold a corner, jot a note, stick a colored dot. This flexibility allows for spontaneous creation and modification of external representations, which, in turn, allows for problem reframing, an essential step in problem solving.28

Virtual artifacts, on the other hand, are interesting because they are not bound to the same laws as physical artifacts. For example, a modal dialog or modal window is a pop-up message that is not escapable, and which usually locks down several on-screen background artifacts until an action is taken.26 The modal dialog does not accommodate situations in which information must be retrieved from these frozen background windows, but the needed information is scrolled out of sight. Further, once these dialogs disappear, they are usually gone forever, and cannot be retrieved again at will for reference, that is, they are transient, not permanent. This type of brittle interactivity violates Nielsen’s27 heuristic of user control and freedom. Careful design can resolve many usability issues.

Artifacts are manipulated by actors. In the literature, we found that an actor may discover an existing artifact in different ways (perhaps the “new email” notification sound plays), or they might retrieve it from a storage system. They may process an existing artifact, or generate a new one. Finally, they may store, send, or destroy it. Many virtual artifacts found in health IT systems, such as physician notes, cannot be modified or destroyed, for legal reasons rather than reasons relating to clinical care.29 Modal dialogs, eager to self-destruct, are a curious exception. It may be wise to approach object permanence and destructibility on a case-by-case basis.

An artifact’s medium of conveyance determines much of what can be done with the information it represents. A pharmacist’s phone call about a medication error may be easy to discover because the phone is ringing, but the content may need to be transcribed for future reference, perhaps on a pad of sticky notes. It may take some time for this phone call to arrive, because the pharmacist is human, and so their information processing tends to be slow.

Though delayed, the content of a human-generated phone call may conform better to Grice’s maxims of communication30 than a computer-generated phone call. Grice’s maxims of communication were reflected in the literature we examined, underscoring the relevance of broader social sciences research to the design of health IT systems. Grice’s maxim of quantity states that one should be no more and no less verbose than is required to convey the information. When an action is suggested, is a rationale provided? If not, the content may be too sparse. Is the entire text of a literature review displayed to explain a particular computer-generated decision-support advisory? This may be too much. Grice’s maxim of quality refers to the accuracy of the information presented. Is a suggestion based on a study in rodents? Were there conflicts of interest that may diminish the legitimacy of the results, and therefore the recommendation? Grice’s maxim of relation refers to the relevance of the content to the task and context at hand. Is an alert for drug-allergy interaction presented because the patient is known to have an allergy, or is it presented for all patients “just in case?” If a phone call about a prescribing error interrupts note-taking, is it important enough in relation to note-taking to justify the potential loss in productivity and accuracy in note-taking?29 Grice’s maxim of manner states that communication should be clear, brief, orderly, and specific. Tabular formats may be more appropriate than prose in certain CDS pop-up alerts,32 possibly because the spatial order and the discrete nature of the information displayed can be more easily retrieved.

If a document is represented in an electronic health record (EHR), one may need to access a computer to interact with it; its physical location is constrained, and this limits the contexts in which suggestions may be displayed. The dimensions and resolution of the display also constrain the design of virtual artifacts. On the other hand, virtual artifacts can also provide convenient shortcuts for common actions that one may need to take, such as ordering appropriate consultations or medications upon receipt of an abnormal test result. Smith et al. presented shortcuts for such actions within pop-up alerts, rather than simply demanding clinician acknowledgement.5

Some artifacts in the literature were intended to follow up on a previous communication. This sometimes comes in the form of personalized phone calls conducted by humans, and sometimes as pop-up reminders that are displayed at regular intervals until a task is accomplished. When deciding between these options, one might consider that nuisance alerts contribute to alert fatigue.17

The Thinking Together Model

The configuration of information flow is modeled in numbered stages. During each stage, an artifact may be retrieved from storage or generated anew, processed by an actor, and then sent to another actor or placed in a storage system. When using Thinking Together to model a sociotechnical system, we recommend first drawing and labeling all of the roles and storage systems involved, in the order in which they come into play, working clockwise. Next, connect the roles and storage systems with arrows, showing the flow of information. Finally, label these roles and arrows, one stage at a time, starting with the number one, and labeling each step in each stage as r, p, g, or s, for retrieval, processing, generation, and storing or sending (see Figure 3 and Figure 4). Even if the present study focuses on one prong of a multi-prong intervention, be sure to include other forms of decision support that clinicians receive addressing the same topic. For example, if physicians already receive antibiotic management alerts, and the study at hand addresses the additional intervention of pharmacist review and counseling, document both interventions in the diagram, since the interaction between them may need to be teased out later in an aggregated literature review.

Figure 3. A Thinking Together model instance of a highly automated CDSS.33 Letters “r, p, g, and s” stand for “retrieval, processing, generation,” and “storing or sending,” respectively.

Figure 4. A Thinking Together model of a multidisciplinary CDSS with minimal automation.24

Next, interrogate the diagram, considering the theoretical elements and properties in Table 1. For example:

  • How is this rule-based advisory system maintained as new clinical evidence emerges?

  • How do users point out problematic alerts, so that rules can be changed?

  • While making this phone call, does the pharmacist adjust dose on behalf of the physician, for later signing?

If the answer to a question is unknown, it may be worthwhile to find out, such as by interviewing physicians to find out what they do, as one does when generating a Rich Picture.6 Next, update the design, and continue to ask questions. Continue this cycle, as shown in Figure 2, until no new questions emerge.

Figure 2. Recommended cyclical modeling process.

Because Thinking Together is a descriptive, rather than prescriptive, modeling system, we do not prescribe rules for evaluating or suggesting changes. Rather, Thinking Together is intended to facilitate interrogative reflection.
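The r/p/g/s stage notation lends itself to a simple data representation. The sketch below is our own illustration (the class and field names are ours, not part of the model); it encodes a hypothetical pharmacist-review configuration of the kind discussed earlier.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    stage: int      # numbered stage in the diagram
    kind: str       # "r", "p", "g", or "s": retrieval, processing,
                    # generation, storing/sending
    actor: str      # role performing the step (human or automated)
    artifact: str   # artifact being manipulated

@dataclass
class ThinkingTogetherModel:
    roles: List[str]
    storage: List[str]
    steps: List[Step] = field(default_factory=list)

    def stage(self, n: int) -> List[Step]:
        """All steps belonging to stage n, in diagram order."""
        return [s for s in self.steps if s.stage == n]

# Hypothetical configuration: a pharmacist reviews stored prescriptions
# and phones the prescriber about suspected errors.
model = ThinkingTogetherModel(
    roles=["prescriber", "pharmacist"],
    storage=["prescription database"],
    steps=[
        Step(1, "g", "prescriber", "prescription"),
        Step(1, "s", "prescriber", "prescription"),        # stored
        Step(2, "r", "pharmacist", "prescription"),        # retrieved
        Step(2, "p", "pharmacist", "prescription"),        # reviewed
        Step(2, "s", "pharmacist", "phone call to prescriber"),
    ],
)
```

Recording a diagram this way is optional; the point is that each arrow in the drawing corresponds to one labeled step, which makes omissions, such as an undiagrammed feedback path, easier to spot.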

Two Case Studies

We selected two CDSSs described in the literature to illustrate how the Thinking Together model can be applied. As mentioned earlier, we purposefully chose two systems at the extremes of the automation spectrum to validate the robustness and representativeness of the model. The first system is primarily automated, requiring little to no human involvement, while the second is operated almost entirely by human actors.

Case 1. Pre-Surgery Lab Alerts

In Figure 3, we show a rule-based alert system documented by Freundlich et al.31 It was put into place following an adverse event in which a patient’s partial thromboplastin time (PTT), a lab value that indicates blood coagulation, became abnormal without the awareness of the surgical team. The surgery was performed without additional precautions that may have prevented the patient’s death. To address this, a system was instituted that retrieved patients scheduled for surgery, along with their lab values (1r), and compared those lab values with predefined thresholds (1p). It then generated (1g) and sent (1s) alphanumeric pages to the surgical staff concerning patients with abnormal lab values. This alerting process was performed every afternoon for most surgical units, and also in the morning before surgeries in the neurosurgery unit.
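The afternoon alerting pass can be sketched as a single retrieve-compare-page loop. The function and field names below are our own, and the PTT threshold is illustrative; the published report does not specify the exact rule logic.

```python
from typing import Dict, List

PTT_HIGH_SECONDS = 40.0  # illustrative abnormal-PTT cutoff, not clinical guidance

def generate_pages(scheduled_patients: List[Dict]) -> List[str]:
    """Given patients scheduled for surgery and their lab values (1r),
    compare each PTT against the threshold (1p) and build an
    alphanumeric page per abnormal result (1g), returned for sending (1s)."""
    pages: List[str] = []
    for patient in scheduled_patients:
        ptt = patient.get("ptt_seconds")
        if ptt is not None and ptt > PTT_HIGH_SECONDS:   # 1p: compare
            pages.append(                                 # 1g: generate
                f"ALERT {patient['id']}: scheduled for surgery, "
                f"PTT {ptt:.0f}s exceeds {PTT_HIGH_SECONDS:.0f}s"
            )
    return pages                                          # 1s: send to staff
```

In a Thinking Together diagram, each of these four steps labels one arrow in Stage 1 of Figure 3.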

The authors reported that feedback was largely positive, and that suggested improvements to the rules encoded in the system had been received and implemented. However, since we do not know how this feedback was collected, we are unable to draw this portion of the diagram. It is imaginable that the collection method affected how the rules were updated, which has practical, real-world implications. For example, two-way pagers might elicit more contextual feedback than one-way pagers, since email, phone calls, or face-to-face conversations require more steps to provide feedback than replying to a page. On the other hand, individual pages might provide less information.

The authors also stated that some clinicians may have perceived the alerts as a redundant nuisance. It is possible that this refers to the less-frequent manual laboratory review prior to scheduling the surgery, which also cannot be diagrammed because it is similarly left unspecified.

A Systematic Oversight

We would like to make it clear that we hold the work of Freundlich et al. in high regard. Their paper was well-written, the methods were reasonable, the alerting system appeared thoughtful, and the technology staff appeared responsive to clinician feedback.

It is easy to overlook the provision of additional context when focusing on a specific problem, and their paper is certainly not alone. We also wished to show a diagram of a system in which an automated component generated many alerts that were then whittled down by a pharmacist, who then contacted physicians personally, by phone or email. A hybrid system like this would have served as an illustrative contrast between a mostly automated system, like the one in the preceding case study, and a mostly manual system, like the one in the following case study. However, although we identified nine articles documenting such a system, none described the full sociotechnical system; key information was missing, such as whether suggestions were presented to the pharmacist as pop-up alerts or as a list, whether the physician received other alerts (and in what form), and how follow-ups were conducted (via email, face-to-face conversation, or phone call). This suggests that some sociotechnical aspects of these CDSS designs went without consideration. It illustrates a systematic oversight that we have observed in the clinical decision support literature: although sociotechnical context is widely recognized as an important factor in the implementation and alteration of CDS systems,1,32,33 this information is underreported to the extent that it is currently difficult to systematically analyze the effectiveness of real-world interventions.

This matters to practitioners for two main reasons. The first is indirect: comparative evaluation research will benefit practice by providing reliable, useful guidance. The second is direct: explicitly diagramming sociotechnicalities may help the practitioner reason about, communicate, and negotiate sociotechnical design.6

Case 2. A Human-Operated CDSS

We selected the CDSS presented by Sullivan et al.24 for modeling as well; it is presented in Figure 4. In Stage 1, prescribers write prescriptions, which are stored electronically. In Stage 2, pharmacists retrieve (2r) and review (2p) these prescriptions daily. When they see an error, they take two send/store actions: they call the prescriber (2s1) and log the error (2s2).

In Stage 3, the multidisciplinary team meets every other week, retrieving all error reports (3r), aggregating them (3p), writing feedback emails (3g), and sending them (3s). In the same meeting, the team also retrieves prescribers’ emails about systemic concerns (4r), plans corrective action (4p), and contacts the appropriate party (4s). We considered these to be two separate stages because they appeared to be two separate tasks. In Stage 5, prescribers receive (5r) and read (5p) the feedback emails and, if appropriate, write (5g) and send (5s) responses about systemic concerns that may underlie the errors. These responses are read during the next biweekly meeting.

We found this case interesting because it is run manually; though information was relayed and stored with computers, it is hard to argue that automated actors were really supporting decision-making by filing data and shuttling emails. It is also difficult to argue that clinical decision-making is not supported in this sociotechnical system. The choice to shun computerization was conscious; the authors cited unintended consequences, alert fatigue, and low effectiveness at preventing medical errors as rationale. Their system serves as a counter-example to CDS definitions that assume computer dependence;16,34 one could imaginably replace the databases and email server with file cabinets and couriers, while maintaining similar functionality. Also noteworthy is that feedback from prescribers could be flexibly handled by the multidisciplinary team precisely because they are human; they were able to translate explanations for errors into plans for corrective action (4p), and relay those plans to the appropriate recipients (4s). For example, they had rotavirus vaccine intramuscular removed from the available CPOE options because, after it caused a medical error, a prescriber pointed out in a systemic concern email (5s) that it was not an appropriate option in any case. Also noteworthy is the care with which the multidisciplinary team composed the feedback emails (3g); they were careful not to place blame, in order to maintain workplace safety culture, and they provided details on specific errors and advice for avoidance. Courteous prose and real-world reasoning are hardly a computer’s forte.

Discussion

In this work, we developed the Thinking Together model, a descriptive and flexible means of representing CDSS sociotechnicalities. We are confident in its flexibility due to the contrast between the two case studies presented, as well as its grounding in DCog, which has remained largely the same for over two decades. Usage in future research is likely to provide a basis upon which to compare the effectiveness of CDSS implementations, since sociotechnical context is critical to a CDS implementation’s success.1 The ability to comparatively analyze the body of work on CDSS in aggregate will allow researchers to make meaningful recommendations to improve CDS systems. It will be difficult to perform such comparative analyses without clear information about the sociotechnicalities of the CDS systems from which individual results originated. The Thinking Together model is also intended to support the examination of the sociotechnical context when implementing and altering CDS systems in practice. The potential of this model to improve both research and practice can only be realized through cooperation among researchers, clinical practitioners, and technologists.

System representation is powerful precisely because it shows only key details, allowing an overview; many low-level details that may affect system efficacy cannot be shown. For example, a neural network trained on too many dimensions may learn a practically useless identity function and may be outperformed by one trained on only the most relevant dimensions (as determined by a principal components analysis, for example), which can arrive at an equation that remains valid for previously unseen cases.12 This distinction will not be apparent in a Thinking Together diagram. However, the burgeoning work in automated information processing35,36 complements the sociotechnical viewpoint. Likewise, interpersonal relationships can affect healthcare safety;37 these, too, are hidden from view by necessity, but are complemented by work in Crew Resource Management.38,39
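The dimensionality-reduction point above can be sketched in a few lines. The toy data and the `pca_reduce` helper below are illustrative assumptions, not drawn from the cited work: two informative dimensions are buried among eight low-variance noise dimensions, and projecting onto the top principal components retains nearly all of the variance.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                  # center each feature
    cov = np.cov(Xc, rowvar=False)           # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # top-k components
    return Xc @ top

# Toy data: 2 informative dimensions plus 8 near-constant noise dimensions.
rng = np.random.default_rng(0)
signal = rng.normal(size=(100, 2)) * [5.0, 3.0]
noise = rng.normal(size=(100, 8)) * 0.1
X = np.hstack([signal, noise])

Z = pca_reduce(X, 2)
print(Z.shape)  # (100, 2)
```

A model fit on `Z` sees only the two dimensions carrying nearly all of the variance, which is the distinction the diagram cannot show: two systems with identical sociotechnical structure may differ entirely at this algorithmic level.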

The Thinking Together model should be thought of as a tool in a toolbox of complementary approaches, from algorithm design to workplace culture. It provides an intuitive means of representing and communicating sociotechnical clinical decision support, to support comparative evaluation in research and design in practice.

We invite practitioners to use the Thinking Together model when integrating CDSSs into sociotechnical designs, to reflect on their experiences, and to publish their results. We believe the model will be useful in practice, and we hope that its use will transform reporting sociotechnical context from a tangential burden into a simple matter of cleaning up and attaching an existing drawing that was already used during design. We plan to conduct and publish comparative analyses of CDS system effectiveness as articles documenting both results and sociotechnical design become available.

Conclusion

In this paper, we described the Thinking Together model, which is based on a systematic review of the literature and on the theory of Distributed Cognition. Through two case studies, we demonstrated the flexibility of the Thinking Together model. We believe that practitioners will find this model useful when considering sociotechnical context in computerized decision support system design, and that it will support systematic sociotechnical reporting in the literature, enabling comparative evaluation of CDSS efficacy.

References

  • 1. Sittig DF, Ash JS, Zhang J, Osheroff JA, Shabot MM. Lessons from unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2006;118(2):797–801. doi: 10.1542/peds.2005-3132.
  • 2. Wright A, Sittig DF, Ash JS, Sharma S, Pang JE, Middleton B. Clinical decision support capabilities of commercially-available clinical information systems. J Am Med Inform Assoc. 2009;16(5):637–644. doi: 10.1197/jamia.M3111.
  • 3. Berlin A, Sorani M, Sim I. A taxonomic description of computer-based clinical decision support systems. J Biomed Inform. 2006;39(6):656–667. doi: 10.1016/j.jbi.2005.12.003.
  • 4. Wright A, Sittig DF, Ash JS, et al. Development and evaluation of a comprehensive clinical decision support taxonomy: comparison of front-end tools in commercial and internally developed electronic health record systems. J Am Med Inform Assoc. 2011;18(3):232–242. doi: 10.1136/amiajnl-2011-000113.
  • 5. Smith M, Murphy D, Laxmisan A, et al. Developing software to “track and catch” missed follow-up of abnormal test results in a complex sociotechnical environment. Appl Clin Inform. 2013;4(3):359–375. doi: 10.4338/ACI-2013-04-RA-0019.
  • 6. Monk A, Howard S. The rich picture: a tool for reasoning about work context. Interactions. 1998.
  • 7. Hutchins E. Cognition in the Wild. MIT Press; 1995.
  • 8. Hazlehurst B, Gorman PN, McMullen CK. Distributed cognition: an alternative model of cognition for medical informatics. Int J Med Inf. 2008;77(4):226–234. doi: 10.1016/j.ijmedinf.2007.04.008.
  • 9. Xiao Y. Artifacts and collaborative work in healthcare: methodological, theoretical, and technological implications of the tangible. J Biomed Inform. 2005;38:26–33. doi: 10.1016/j.jbi.2004.11.004.
  • 10. Grundgeiger T, Sanderson P, MacDougall HG, Venkatesh B. Interruption management in the intensive care unit: predicting resumption times and assessing distributed support. J Exp Psychol Appl. 2010;16(4):317–334. doi: 10.1037/a0021912.
  • 11. Engle RL. Attempts to use computers as diagnostic aids in medical decision making: a thirty-year experience. Perspect Biol Med. 1992;35(2):207–219. doi: 10.1353/pbm.1992.0011.
  • 12. Marsland S. Machine Learning: An Algorithmic Perspective. Boca Raton: CRC Press; 2009.
  • 13. McDermott L, Yardley L, Little P, et al. Process evaluation of a point-of-care cluster randomised trial using a computer-delivered intervention to reduce antibiotic prescribing in primary care. BMC Health Serv Res. 2014;14(1). doi: 10.1186/s12913-014-0594-1.
  • 14. Devine JW, Trice S, Nwokeji E, et al. Automated profile review for transdermal fentanyl to verify opioid tolerance in the Military Health System. Mil Med. 2013;178:1241–244. doi: 10.7205/MILMED-D-13-00184.
  • 15. Sellier E, Colombet I, Sabatier B, et al. Effect of alerts for drug dosage adjustment in inpatients with renal insufficiency. J Am Med Inform Assoc. 2009;16(2):203–210. doi: 10.1197/jamia.M2805.
  • 16. Berner ES, La Lande TJ. Overview of clinical decision support systems. In: Clinical Decision Support Systems. 2016.
  • 17. van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. JAMIA. 2006;13:138–147. doi: 10.1197/jamia.M1809.
  • 18. Conrad CF. A grounded theory of academic change. Sociol Educ. 1978;51(2):101. doi: 10.2307/2112242.
  • 19. Grice HP. Logic and conversation. 1975.
  • 20. Zhang J, Norman DA. Representations in distributed cognitive tasks. Cogn Sci. 1994;18(1):87–122.
  • 21. Overhage JM, Middleton B, Miller RA, Zielstorff RD, Hersh W. Does national regulatory mandate of provider order entry portend greater benefit than risk for health care delivery? JAMIA. 2002;9(3). doi: 10.1197/jamia.M1081.
  • 22. Ash JS, Sittig D, Campbell E, Guappone K, Dykstra R. An unintended consequence of CPOE implementation: shifts in power, control, and autonomy. AMIA. 2006.
  • 23. Churpek MM, Yuen TC, Winslow C, Meltzer DO, Kattan MW, Edelson DP. Multicenter comparison of machine learning methods and conventional regression for predicting clinical deterioration on the wards. Crit Care Med. 2016;44(2):368–374. doi: 10.1097/CCM.0000000000001571.
  • 24. Sullivan KM, Suh S, Monk H, Chuo J. Personalised performance feedback reduces narcotic prescription errors in a NICU. BMJ Qual Saf. 2013;22:256–262. doi: 10.1136/bmjqs-2012-001089.
  • 25. Evans JSBT. In two minds: dual-process accounts of reasoning. Trends Cogn Sci. 2003;7(10):454–459. doi: 10.1016/j.tics.2003.08.012.
  • 26. Horsky J, Phansalkar S, Desai A, Bell D, Middleton B. Design of decision support interventions for medication prescribing. Int J Med Inf. 2013;82(6):492–503. doi: 10.1016/j.ijmedinf.2013.02.003.
  • 27. Nielsen J. Enhancing the explanatory power of usability heuristics. Proc SIGCHI Conf Hum Factors Comput Syst. 1994:152–158.
  • 28. Security Standards for the Protection of Electronic Protected Health Information. 45 CFR 164.312.
  • 29. Li SYW, Magrabi F, Coiera E. A systematic review of the psychological literature on interruption and its patient safety implications. J Am Med Inform Assoc. 2012;19(1):6–12. doi: 10.1136/amiajnl-2010-000024.
  • 30. Russ AL, Zillich AJ, Melton BL, et al. Applying human factors principles to alert design increases efficiency and reduces prescribing errors in a scenario-based simulation. J Am Med Inform Assoc. 2014;21(e2):e287–e296. doi: 10.1136/amiajnl-2013-002045.
  • 31. Freundlich RE, Grondin L, Tremper KK, Saran KA, Kheterpal S. Automated electronic reminders to prevent miscommunication among primary medical, surgical and anaesthesia providers: a root cause analysis. BMJ Qual Saf. 2012;21(10):850–854. doi: 10.1136/bmjqs-2011-000666.
  • 32. Cornford T, Dean B, Savage I, Barber N, Jani Y. Electronic Prescribing in Hospitals: Challenges and Lessons Learned. NHS; 2009.
  • 33. Phansalkar S, Edworthy J, Hellier E, et al. A review of human factors principles for the design and implementation of medication safety alerts in clinical information systems. J Am Med Inform Assoc. 2010;17(5):493–501. doi: 10.1136/jamia.2010.005264.
  • 34. Sim I, Gorman P, Greenes RA, et al. Clinical decision support systems for the practice of evidence-based medicine. J Am Med Inform Assoc. 2001;8(6):527–534. doi: 10.1136/jamia.2001.0080527.
  • 35. Gultepe E, Green JP, Nguyen H, Adams J, Albertson T, Tagkopoulos I. From vital signs to clinical outcomes for patients with sepsis: a machine learning basis for a clinical decision support system. J Am Med Inform Assoc. 2014;21(2):315–325. doi: 10.1136/amiajnl-2013-001815.
  • 36. Darcy AM, Louie AK, Roberts LW. Machine learning and the profession of medicine. JAMA. 2016;315(6):551. doi: 10.1001/jama.2015.18421.
  • 37. Gershon RRM, Karkashian CD, Grosch JW, et al. Hospital safety climate and its relationship with safe work practices and workplace exposure incidents. Am J Infect Control. 2000;28(3):211–221. doi: 10.1067/mic.2000.105288.
  • 38. Wu W-T, Wu Y-L, Hou S-M, et al. Examining the effects of an interprofessional crew resource management training intervention on perceptions of patient safety. J Interprof Care. 2016;30(4):536–538. doi: 10.1080/13561820.2016.1181612.
  • 39. Hefner JL, Hilligoss B, Knupp A, et al. Cultural transformation after implementation of crew resource management: is it really possible? Am J Med Qual. 2016. doi: 10.1177/1062860616655424.
