AMIA Summits on Translational Science Proceedings. 2020 May 30;2020:89–97.

Physician Usage and Acceptance of a Machine Learning Recommender System for Simulated Clinical Order Entry

Jonathan Chiang 1,*, Andre Kumar 2,*, David Morales 3, Divya Saini 3, Jason Hom 2, Lisa Shieh 2, Mark Musen 1,2, Mary K Goldstein 4,5, Jonathan H Chen 1,2
PMCID: PMC7233080  PMID: 32477627

Abstract

Clinical decision support tools that automatically disseminate patterns of clinical orders have the potential to improve patient care by reducing errors of omission and streamlining physician workflows. However, it is unknown whether physicians will accept such tools or how their behavior will be affected. In this randomized controlled study, we exposed 34 licensed physicians to a clinical order entry interface and five simulated emergency cases, with randomized availability of a previously developed clinical order recommender system. With the recommender available, physicians spent similar time per case (6.7 minutes) but placed more total orders (17.1 vs. 15.8). The recommender demonstrated superior recall (59% vs. 41%) and precision (25% vs. 17%) compared to manual search results, and was positively received by physicians, who recognized workflow benefits. Further studies must assess the potential clinical impact toward a future where electronic health records automatically anticipate clinical needs.

Introduction

Healthcare often falls short of recommended, evidence-based care, with overall compliance with guideline recommendations ranging from 20 to 80%. 1 Variability and uncertainty in medical practice may compromise quality of care and cost efficiency, especially in scenarios where knowledge is inconsistently applied. 2 The advent of the meaningful use era of electronic health records (EHRs) 3 creates the opportunity for data-driven clinical decision support (CDS) that utilizes the collective expertise of many practitioners in a learning health system. 4-8 Tools such as order sets already reinforce consistency and compliance with best practices, 9,10 but maintainability is limited in scale by a top-down, knowledge-based approach requiring the manual effort of human experts. 11 Moreover, the intended vs. actual usage of EHR order sets may not align with physician workflows. 12 A key challenge in fulfilling a future vision for clinical decision support 13,14 is the automatic production of content from the bottom up by data-mining clinical data sources. 15

Previously, we developed a clinical order recommender system by automatically data-mining hospital EHR data. 16 The results of this approach align with established standards of care 15,17,18 and are predictive of real physician behavior and patient outcomes. 16 Our underlying vision is to seamlessly integrate a system into clinical order entry workflows that automatically infers the relevant clinical context based on data already in the EHR and provides actionable decision support in the form of clinical order suggestions, analogous to Netflix or Amazon.com’s “customers who bought A also bought B” systems. 19,20 As with many machine learning models designed to support clinical decision making, it is unknown whether physicians will actually accept such suggestions into their clinical decision workflow. Most prior studies in automated development of clinical decision support content 15,16,21-25 have been strictly analytical evaluations, with few studies assessing the response of human clinicians to such recommender tools and their ordering patterns. More broadly, the majority of physicians have significant distrust or negative attitudes toward the EHR, 26-28 which may affect how well these tools could be adopted.

This study seeks to address these issues by examining physicians’ behaviors while interacting with a computerized physician order entry (CPOE) interface that simulates an electronic health record for hospital clinical scenarios. We specifically examine physician ordering patterns, time spent, and survey responses when a clinical order recommender system is added to standard functionality.

Objective

To determine whether clinicians will use machine-learned clinical order recommendations for electronic order entry in simulated inpatient cases, and to assess how such recommender systems are viewed by physicians and affect their workflow.

Methods

As described previously, 16 we extracted deidentified structured data for all inpatient hospitalizations from the 2009-2014 STRIDE clinical data warehouse. 29 The data covers >74K patients with >11M instances of >27K items (medication, laboratory, imaging, and nursing orders, lab results, and diagnosis codes). We built a clinical collaborative filtering (recommender) system based on this data, modeled on Amazon’s product recommendation algorithm 19,20 using item co-occurrence statistics.
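As a minimal illustrative sketch (assuming a simplified in-memory representation and hypothetical item identifiers, not the study’s actual implementation), the item-to-item collaborative filtering described above can be approximated in Python by counting how often clinical items co-occur within the same hospitalization and suggesting the items that most often accompanied the items already present for the current patient:

    from collections import Counter, defaultdict
    from itertools import combinations

    # Hypothetical representation: each hospitalization is a set of clinical item
    # identifiers (orders, lab results, diagnosis codes) observed during the stay.
    hospitalizations = [
        {"icd9_786.05", "cbc_diff", "chest_xray", "ecg_12_lead"},
        {"icd9_786.05", "cbc_diff", "nt_probnp", "chest_xray"},
        {"icd9_780.6", "blood_cultures", "cbc_diff", "cefepime_iv"},
    ]

    item_counts = Counter()              # overall frequency of each item
    cooccurrence = defaultdict(Counter)  # pairwise co-occurrence counts

    for stay in hospitalizations:
        item_counts.update(stay)
        for a, b in combinations(stay, 2):
            cooccurrence[a][b] += 1
            cooccurrence[b][a] += 1

    def recommend(context_items, top_n=5):
        """Score candidates by co-occurrence with the items already present for the
        current patient (a simplified 'customers who bought A also bought B')."""
        scores = Counter()
        for item in context_items:
            scores.update(cooccurrence[item])
        for item in context_items:       # do not re-suggest items already present
            scores.pop(item, None)
        return scores.most_common(top_n)

    print(recommend({"icd9_786.05"}))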

We built a simulated computerized physician order entry (CPOE) interface with open technologies including PostgreSQL, Python, Apache HTTP Server, and HTML/JavaScript. Our unique addition is an automated recommender (Figure 1), analogous to a “Customers Who Bought This Item Also Bought This...” service, that anticipates other clinical orders likely to be relevant based on similar cases in prior electronic health records.
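As a rough architectural sketch only (the endpoint path, parameter names, and placeholder recommendation logic below are assumptions, not the deployed system), a Python back end can serve recommender suggestions to an HTML/JavaScript front end over HTTP using the standard library, with co-occurrence statistics precomputed from the clinical data warehouse:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    def get_recommendations(context_items):
        # Placeholder: the real system would look up precomputed co-occurrence
        # statistics (e.g., stored in PostgreSQL) for the given context items.
        return [{"order": "CBC with Differential", "score": 0.82}]

    class RecommenderHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # e.g., GET /recommend?item=icd9_786.05&item=cbc_diff
            query = parse_qs(urlparse(self.path).query)
            context = query.get("item", [])
            body = json.dumps(get_recommendations(context)).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), RecommenderHandler).serve_forever()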

Figure 1. Simulated clinical order entry interface, notes, and clinical order recommender. Standard functions include navigation links to review notes and results (top-left). Order entry includes a conventional search box for individual orders and pre-authored order sets (top-right). A recommender algorithm suggests clinical orders (right), in this example triggered by a presenting symptom code (Shortness of Breath, ICD-9 786.05). Clinical orders predicted most likely to occur next are highlighted under Common Orders, while those under Related Orders are less likely but disproportionately associated with similar cases and thus may be more specifically relevant.
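The split between Common Orders and Related Orders in Figure 1 can be illustrated with a brief sketch (the counts and item names below are invented for illustration): ranking candidates by raw co-occurrence frequency surfaces the most likely next orders, while ranking by a disproportionality measure such as lift surfaces orders that are rarer overall but specifically associated with the current context:

    from collections import Counter

    # Illustrative counts, in the same form the co-occurrence sketch above would produce.
    total_stays = 100
    item_counts = Counter({"icd9_786.05": 20, "nt_probnp": 8, "cbc_diff": 60})
    cooccurrence = {"icd9_786.05": Counter({"nt_probnp": 6, "cbc_diff": 14})}

    def rank_common_and_related(context_item, top_n=5):
        """Common Orders: ranked by raw co-occurrence count.
        Related Orders: ranked by lift, i.e., observed co-occurrence relative to
        what the items' baseline prevalences alone would predict."""
        p_context = item_counts[context_item] / total_stays
        common, related = [], []
        for candidate, joint in cooccurrence[context_item].items():
            p_candidate = item_counts[candidate] / total_stays
            lift = (joint / total_stays) / (p_context * p_candidate)
            common.append((candidate, joint))
            related.append((candidate, lift))
        common.sort(key=lambda kv: kv[1], reverse=True)
        related.sort(key=lambda kv: kv[1], reverse=True)
        return common[:top_n], related[:top_n]

    # CBC ranks first among Common Orders (most frequent), while NT-proBNP ranks
    # first among Related Orders (disproportionately associated with this context).
    print(rank_common_and_related("icd9_786.05"))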

A panel of board-certified internal medicine physicians (AK, JH, LS, and JHC) developed clinical cases for common inpatient medical problems including unstable atrial fibrillation, neutropenic fever, variceal gastrointestinal hemorrhage, bacterial meningitis, and acute pulmonary embolism (Table 2). Each case includes clinical notes representing the patient’s history and physical exam. Diagnostic test results are only visible and change state if the respective orders are entered (e.g., a low hemoglobin is revealed only if a blood count is ordered and changes if a blood transfusion is ordered). With each order entered, the clinical order recommender list updates based on the accumulating patient information, as sketched below.
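A hedged sketch of the simulation logic just described (class and field names are hypothetical): results remain hidden until the corresponding order is entered, and each entered order enlarges the context from which the recommender list is recomputed:

    class SimulatedCase:
        """Minimal model of a simulated case: hidden results are revealed only when
        ordered, and the recommender context accumulates with each order entered."""

        def __init__(self, hidden_results, recommender):
            self.hidden_results = hidden_results   # e.g., {"cbc_diff": "Hgb 6.8 g/dL"}
            self.visible_results = {}
            self.context_items = set()             # accumulating patient information
            self.recommender = recommender         # callable: context -> suggested orders

        def enter_order(self, order_id):
            self.context_items.add(order_id)
            if order_id in self.hidden_results:    # reveal a result only if it was ordered
                self.visible_results[order_id] = self.hidden_results[order_id]
            return self.recommender(self.context_items)   # refreshed suggestion list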

Table 2. Summary description of simulation cases tested. The last column lists the most common clinical orders test participants used in each case, with the total in parentheses counting repeat orders. ICD, International Classification of Diseases; R-CHOP, rituximab, cyclophosphamide, hydroxydaunorubicin, oncovin, prednisone; CBC, complete blood count; CSF, cerebrospinal fluid; INR, international normalized ratio; ECG, electrocardiogram; NSAID, non-steroidal anti-inflammatory drug; DCCV, direct current cardioversion; IV, intravenous.

Presenting Symptom (ICD-10) / Diagnosis | Case Summary | Key Findings | Most Common Orders (Total Orders)
Fever (453.3) / Chemotherapy-Induced Neutropenic Fever | 32-year-old patient with diffuse large B-cell lymphoma presenting with fevers and rigors after receiving chemotherapy (R-CHOP) 10 days prior | Hypotension, lactic acidosis, severe neutropenia | Sodium Chloride IV Bolus (42); Metabolic Panel, Comprehensive (33); Blood Cultures (32); Cefepime, IV (31); CBC with Differential (31)
Headache (R55) / Bacterial Meningitis | 25-year-old previously healthy patient presenting with fever, headache, neck stiffness, and photophobia | Fever, significant neck stiffness on examination, absence of rashes | Ceftriaxone, IV (33); Sodium Chloride IV Bolus (32); CBC with Differential (32); CSF Culture and Gram Stain (32); Glucose, CSF (30)
Dyspnea (R06.00) / Acute Pulmonary Embolism | 70-year-old with a past medical history including systolic heart failure and COPD presenting with worsening dyspnea following a vacation to Hawaii | Hypoxia (81% oxygen saturation), tachycardia, absence of jugular venous distension, minimal wheezes | CBC with Differential (31); ECG 12-Lead (31); Metabolic Panel, Comprehensive (27); NT-proBNP (25); Albuterol-Ipratropium, Inhaled (22)
Palpitations (R00.2) / Unstable Paroxysmal Atrial Fibrillation with Rapid Ventricular Rate | 66-year-old with a history of diastolic heart failure presenting with palpitations | Tachycardia (rate >150 beats/min), hypotension, irregularly irregular pulse | ECG 12-Lead (46); DCCV (29); CBC with Differential (28); Metabolic Panel, Comprehensive (26); Consult to Cardiology (23)
Hematemesis (K92.0) / Acute Variceal Bleeding | 59-year-old with a history of alcoholism and NSAID use presenting with hematemesis | Tachycardia, spider angiomata, scleral icterus, mid-epigastric pain | Consult to Gastroenterology (59); Sodium Chloride IV Bolus (41); Prothrombin Time/INR (40); CBC with Differential (40); Metabolic Panel, Comprehensive (37)

We recruited licensed physician participants with experience admitting medical inpatients within the past year through local mailing lists. Participants were offered $195 to use the interface to simulate admitting hospital patients and to complete a survey over the course of an hour. A researcher guided participants through two demonstration cases (diabetic ketoacidosis and chest pain) to illustrate basic functions (data review, order entry, order sets) as well as the use of clinical order options presented by the recommender system. The subsequent five test cases were presented to participants in a sequence randomly assigned for each user (Table 2). Each case was randomly assigned as either an intervention case with the recommender system available or a control case with the recommender system not available. Conventional clinical order entry options, including order set checklists and manual search of individual orders by name, were available in all cases, making usage of the recommender system completely optional. Participant activity was recorded through screen capture, audio, and user interface tracking software. Metrics compared between the intervention and control cases include the time to complete the case and the number of clinical orders selected from manual search vs. the automated recommender system, with P-values for differences calculated by two-tailed t-test. The study was approved by the Stanford Institutional Review Board.
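For the statistical comparison, a short sketch of the reported two-tailed t-test using scipy (the numbers below are placeholders, not study data):

    from scipy import stats

    # Placeholder per-case order counts; the actual study measurements are not reproduced here.
    intervention_orders = [18, 16, 19, 15, 17]   # recommender available
    control_orders = [15, 16, 14, 17, 15]        # recommender not available

    # Two-tailed independent-samples t-test, as used for the intervention vs. control metrics.
    t_stat, p_value = stats.ttest_ind(intervention_orders, control_orders)
    print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.3f}")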

Results

Participants

A total of 34 physicians participated in this phase of the study, with a median of 3 years (interquartile range 3 to 5.25) since obtaining their medical degree. Twenty-two (64%) identified Internal Medicine as their primary specialty, 9 (26%) identified Emergency Medicine, 18 (53%) were resident trainees, and 14 (42%) were board certified in their respective specialty.

Case-Based Scenarios

Table 2 summarizes key elements of the five case scenarios that participants were tested with.

Physician Experience

Overall, physicians spent an average of 6.7 minutes per clinical module, with a mean of 54.4 navigation clicks between sections (e.g., notes vs. results review) and 16.5 clinical orders per case. In subgroup analysis of resident physicians in training vs. non-residents (Table 3b), the effects appear to vary, with residents spending less case time and ordering more with the recommender available, while non-residents spent more case time. Across different simulated case types (Table 3c), there was no consistent trend in physicians taking more or less time to complete cases with the recommender system available. Physicians placed more orders when the recommender system was available (mean 17.1 vs. 15.8 orders per case, p<0.001). The recall of the recommender options was consistently greater than that of manual search options (59% vs. 41%), indicating users were more likely to find the clinical orders they wanted from the automated recommender lists than from options returned by manual search. The precision of the recommender options was similarly greater than that of manual search options (25% vs. 17%), indicating users had to sift through fewer irrelevant options to find the clinical orders they wanted than with manual searches. 12
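The recall and precision figures above follow the definitions given with Table 3a; a small sketch (with invented option and order names) of how they can be computed per case:

    def precision_recall(presented_options, orders_used):
        """Precision: fraction of presented options that were actually ordered.
        Recall: fraction of all orders placed that appeared among the presented options."""
        presented, used = set(presented_options), set(orders_used)
        hits = presented & used
        precision = len(hits) / len(presented) if presented else 0.0
        recall = len(hits) / len(used) if used else 0.0
        return precision, recall

    # Illustrative example: 12 recommender options shown, 3 of them used,
    # out of 5 total orders the physician placed.
    recommender_options = [f"option_{i}" for i in range(9)] + ["cbc_diff", "blood_cultures", "cefepime_iv"]
    orders_placed = ["cbc_diff", "blood_cultures", "cefepime_iv", "lactate", "sodium_chloride_iv"]
    print(precision_recall(recommender_options, orders_placed))   # -> (0.25, 0.6)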

Table 3b. Usage metrics when the clinical order recommender system was available vs. not, separated by resident physicians (trainees) vs. non-residents. Reported as totals, proportions, or means ± standard error.


Table 3c. Usage metrics stratified per simulated case scenario when the clinical order recommender system was available vs. not.


Survey Responses

Overall, the clinical decision tool was positively received by the study participants: 94% agreed or strongly agreed that the tool would be useful in their job. Moreover, 89% agreed or strongly agreed that the system would make their job easier, and 85% felt that it would increase their productivity. Participant comments suggest that physicians believe the system would be more useful for cases involving common diagnoses or standardized treatment algorithms. Others mentioned the tool’s utility for diseases that may require several simultaneous orders (for example, diabetic ketoacidosis). A small subset of responses noted that the tool could be used for patients presenting to the emergency department without a clear diagnosis, as it would facilitate expedient ordering of several screening tests to help narrow the differential diagnosis. Additional comments indicate physicians felt the tool would be less useful for sub-specialized care or for patients requiring few simultaneous orders.

Discussion

We found that the use of a clinical order recommender system for common clinical scenarios seen in inpatient emergency medicine did not clearly affect the amount of time physicians spent on an EHR interface. Physicians did place slightly more total orders per case, although this varied by scenario and user training level. Importantly, physicians placed fewer orders from manual searches as a result of the tool. The recommender demonstrated superior recall of orders, suggesting that users were more likely to find the orders they wanted from the recommender than from manual searches. Similarly, the tool’s greater precision for suggested orders indicated that users were not exposed to as many irrelevant order options as with manual searching. The tool was positively received by the study participants, who identified clear benefits toward their workflow and productivity. This represents a key study examining the effect of clinical recommender decision support tools on physician ordering habits in inpatient emergency clinical scenarios.

Implications of this study include that such decision support tools based on clinical ordering patterns can be integrated into physician workflows and readily accepted for use. While many systems attempt to improve the EHR experience by providing standardized order sets to streamline and improve care, 30-33 static order sets often do not align with individual cases and can present many extraneous or irrelevant order options. 12 Our recommender tool essentially functions as a dynamic clinical order set that continuously updates in response to new patient information, demonstrating increased accuracy and reduced need for conventional manual searches. This may account for the largely positive views of the participants, suggesting that physicians will accept machine-generated clinical order tools if they are embedded into clinical workflows.

Time-motion studies indicate that clinicians spend most of their time in the EHR, 34,35 with many spending significant time searching for and entering orders. 36 While this study showed a reduction in reliance on manual searches, interestingly, it did not show a reduction in the amount of time that physicians spent per simulated case. The simulated test setting may have led participants to artificially fill the time within cases, or perhaps the reduction in manual search effort freed their cognitive attention for the medical decision-making tasks of each case. Notably, most of our study participants were experienced physicians who have likely developed diagnostic/treatment algorithms based on their previous training. Table 3b suggests that the tool could provide time savings to less experienced physicians (in residency training) who may be less familiar with the EHR interface or have less refined diagnostic/treatment schemata.

The total number of orders tended to increase slightly as a result of the recommender system (mean 1.3 additional orders per case), though this varied by case. The quantity of orders placed with a recommender system may depend on the clinical context, where some scenarios (e.g., meningitis) may have more standardized approaches to treatment 37 compared to others (e.g., dyspnea), which may affect how recommended a la carte orders are viewed. Indeed, the purpose of a commercial product recommender algorithm is to increase the number of products a customer will buy. 20 While order sets have been shown to promote cost-effectiveness, 38,39 further evaluation is needed to determine how much clinical recommender systems promote improved care with more useful orders vs. reduce cost-effectiveness with more unnecessary orders.

Limitations of this study approach include the tendency of collaborative filtering algorithms to recommend patterns of historical behavior: they risk a “cold start” 40 problem in which they are unable to recommend newly available treatments while still recommending older, possibly obsolete, practices. This is not unique to recommender systems, however, as existing standards of information dissemination such as clinical practice guidelines warrant manual updating and dissemination every few years. 41 For example, the manually authored static order sets available in our own institution for deep venous thrombosis treatment still recommend warfarin therapy, despite direct oral anticoagulants largely becoming the current standard of practice. 42 Guidance for up-to-date medical care clearly must come from multiple sources, though this points to one potential advantage of the collaborative filtering approach: it can rapidly and automatically adapt to newly emerging practices.

Technical limitations include that the tool was based on a clinical data warehouse of electronic health record data that may not be available at all institutions. Similarly, the lack of a broadly accepted open architecture that allows custom workflow integrations into common commercial EHRs limits the ease of implementing the system components studied. Our users were given an orientation to the recommender system and its purpose before engaging with the practice scenarios, which likely contributes a Hawthorne effect to how users interacted with and viewed the system.

At a time when the EHR is met with distrust and negativity by clinicians from the burdens of documentation and data entry, clinical recommender systems represent a key opportunity to improve the quality, consistency, and experience of healthcare. We hope this will be an important step towards a future where EHRs anticipate clinical needs without even having to ask, so that clinicians can start to feel like the computers are working for us, instead of the other way around.

Conclusions

Clinical order suggestions from a data-driven recommender system were readily used and accepted by physicians across a variety of simulated inpatient clinical scenarios. These systems did not clearly affect time spent in the EHR, but physicians were more likely to find the clinical orders they wanted using such tools as compared to manual search methods (i.e. superior recall). Usage may vary by clinical scenario, with further evaluation needed on the clinical value of how such tools affect ordering habits. Nonetheless, clinicians overall view such clinical recommender systems positively, perceiving a clear potential benefit toward their workflow.

Acknowledgements

This research was supported in part by the NIH Big Data 2 Knowledge initiative via the National Institute of Environmental Health Sciences under Award Number K01ES026837, the Gordon and Betty Moore Foundation through Grant GBMF8040, and a Stanford Human-Centered Artificial Intelligence Seed Grant. This research used data or services provided by STARR, “STAnford medicine Research data Repository,” a clinical data warehouse containing live Epic data from Stanford Health Care (SHC), the University Healthcare Alliance (UHA) and Packard Children’s Health Alliance (PCHA) clinics and other auxiliary data from Hospital applications such as radiology PACS. The STARR platform is developed and operated by Stanford Medicine Research IT team and is made possible by Stanford School of Medicine Research Office. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH, VA, or Stanford Healthcare.

Figures & Tables

Figure 2. Simulated clinical order entry interface, results review, and clinical order manual search. Users can order diagnostics such as cerebrospinal fluid (CSF) studies to review results (left). This may require simulated passage of time for results to be ready (e.g., a CT Head requiring another 44 minutes for results to become available). Conventional manual search for clinical orders via a text search box (top-right) yields clinical order options (right) identified by prefix, in this example identifying all clinical orders containing a word starting with “cef.”
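The prefix-based manual search shown in Figure 2 can be sketched as follows (the order catalog and matching rule here are simplified assumptions, not the deployed query):

    # Match clinical orders whose name contains a word starting with the typed prefix.
    order_catalog = [
        "Cefepime, IV", "Ceftriaxone, IV", "Cefazolin, IV",
        "CBC with Differential", "Blood Cultures",
    ]

    def prefix_search(query, catalog):
        q = query.lower()
        return [name for name in catalog
                if any(word.lower().startswith(q) for word in name.replace(",", " ").split())]

    print(prefix_search("cef", order_catalog))   # -> ['Cefepime, IV', 'Ceftriaxone, IV', 'Cefazolin, IV']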

Table 3a. Usage metrics when the clinical order recommender system was available vs. not. Reported as totals, proportions, or means ± standard error. Options reflect clinical order options that were presented to the user for consideration via either manual search results or the automated recommender. Recommender precision (positive predictive value) reflects the proportion of clinical order options from the recommender that were actually used. Recommender recall (sensitivity) reflects the proportion of all clinical orders used that arose from the recommender options.


Table 4. Physician survey responses. Responses were assessed on a 5-point Likert scale (1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree).

Survey Question | 1 | 2 | 3 | 4 | 5
I would find the system useful in my job | 0% | 0% | 7% | 46% | 48%
Using the system would make it easier to do my job | 0% | 4% | 7% | 44% | 46%
This system would increase my productivity | 0% | 9% | 7% | 41% | 44%
This system would let me complete tasks more quickly | 0% | 4% | 9% | 37% | 50%
This system would increase my job performance | 0% | 9% | 7% | 48% | 30%

References

1. Richardson W. C., et al. Crossing the quality chasm: a new health system for the 21st century. Institute of Medicine. National Academy Press; 2001.
2. Tricoci P., Allen J. M., Kramer J. M., Califf R. M., Smith S. C., Jr. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA. 2009;301:831–841. doi: 10.1001/jama.2009.205.
3. Health and Human Services Department. Health Information Technology: Standards, Implementation Specifications, and Certification Criteria for Electronic Health Record Technology, 2014 Edition; Revisions to the Permanent Certification Program for Health Information Technology. Federal Register. 2012;77:54163–54292.
4. de Lissovoy G. Big data meets the electronic medical record: a commentary on ‘identifying patients at increased risk for unplanned readmission’. Med. Care. 2013;51:759–760. doi: 10.1097/MLR.0b013e3182a67209.
5. Frankovich J., Longhurst C. A., Sutherland S. M. Evidence-based medicine in the EMR era. N. Engl. J. Med. 2011;365:1758–1759. doi: 10.1056/NEJMp1108726.
6. Longhurst C. A., Harrington R. A., Shah N. H. A ‘green button’ for using aggregate patient data at the point of care. Health Aff. 2014;33:1229–1235. doi: 10.1377/hlthaff.2014.0099.
7. Smith M., et al. A Continuously Learning Health Care System. US: National Academies Press; 2013.
8. Krumholz H. M. Big data and new knowledge in medicine: the thinking, training, and tools needed for a learning health system. Health Aff. 2014;33:1163–1170. doi: 10.1377/hlthaff.2014.0053.
9. Kaushal R., Shojania K. G., Bates D. W. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch. Intern. Med. 2003;163:1409–1416. doi: 10.1001/archinte.163.12.1409.
10. Overhage J. M., Tierney W. M., Zhou X. H., McDonald C. J. A randomized trial of ‘corollary orders’ to prevent errors of omission. J. Am. Med. Inform. Assoc. 1997;4:364–375. doi: 10.1136/jamia.1997.0040364.
11. Bates D. W., et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J. Am. Med. Inform. Assoc. 2003;10:523–530. doi: 10.1197/jamia.M1370.
12. Li R. C., Wang J. K., Sharp C., Chen J. H. When order sets do not align with clinician workflow: assessing practice patterns in the electronic health record. BMJ Qual. Saf. 2019. doi: 10.1136/bmjqs-2018-008968.
13. Sittig D. F., et al. Grand challenges in clinical decision support. J. Biomed. Inform. 2008;41:387–392. doi: 10.1016/j.jbi.2007.09.003.
14. Middleton B., Sittig D. F., Wright A. Clinical Decision Support: a 25 Year Retrospective and a 25 Year Vision. Yearb. Med. Inform. 2016;(Suppl 1):103–116. doi: 10.15265/IYS-2016-s034.
15. Chen J. H., Goldstein M. K., Asch S. M., Mackey L., Altman R. B. Predicting inpatient clinical order patterns with probabilistic topic models vs conventional order sets. J. Am. Med. Inform. Assoc. 2017;24:472–480. doi: 10.1093/jamia/ocw136.
16. Chen J. H., Podchiyska T., Altman R. B. OrderRex: clinical order decision support and outcome predictions by data-mining electronic medical records. J. Am. Med. Inform. Assoc. 2016;23:339–348. doi: 10.1093/jamia/ocv091.
17. Chen J. H., Altman R. B. Data-Mining Electronic Medical Records for Clinical Order Recommendations: Wisdom of the Crowd or Tyranny of the Mob? AMIA Jt Summits Transl Sci Proc. 2015;2015:435–439.
18. Wang J. K., et al. An evaluation of clinical order patterns machine-learned from clinician cohorts stratified by patient mortality outcomes. J. Biomed. Inform. 2018;86:109–119. doi: 10.1016/j.jbi.2018.09.005.
19. Linden G., Smith B., York J. Amazon.com recommendations: item-to-item collaborative filtering. IEEE Internet Comput. 2003;7:76–80.
20. Smith B., Linden G. Two Decades of Recommender Systems at Amazon.com. IEEE Internet Comput. 2017;21:12–18.
21. Zhang Y., Levin J. E., Padman R. Data-driven order set generation and evaluation in the pediatric environment. AMIA Annu. Symp. Proc. 2012;2012:1469–1478.
22. Klann J., Schadow G., McCoy J. M. A recommendation algorithm for automating corollary order generation. AMIA Annu. Symp. Proc. 2009;2009:333–337.
23. Wright A. P., Wright A. T., McCoy A. B., Sittig D. F. The use of sequential pattern mining to predict next prescribed medications. J. Biomed. Inform. 2015;53:73–80. doi: 10.1016/j.jbi.2014.09.003.
24. Chen J. H., Goldstein M. K., Asch S. M., Altman R. B. Usability of an Automated Recommender System for Clinical Order Entry. AMIA. 2016.
25. King A. J., et al. Using Machine Learning to Predict the Information Seeking Behavior of Clinicians Using an Electronic Medical Record System. AMIA Annu. Symp. Proc. 2018;2018:673–682.
26. Emani S., et al. Physician Beliefs about the Meaningful Use of the Electronic Health Record: A Follow-Up Study. Appl. Clin. Inform. 2017;8:1044–1053. doi: 10.4338/ACI-2017-05-RA-0079.
27. Verghese A., Shah N. H., Harrington R. A. What This Computer Needs Is a Physician: Humanism and Artificial Intelligence. JAMA. 2018;319:19–20. doi: 10.1001/jama.2017.19198.
28. Gawande A. Why Doctors Hate Their Computers. The New Yorker. 2018.
29. Lowe H. J., Ferris T. A., Hernandez P. M., Weber S. C. STRIDE—An integrated standards-based translational research informatics platform. AMIA Annu. Symp. Proc. 2009;2009:391–395.
30. Brown K. E., Johnson K. J., DeRonne B. M., Parenti C. M., Rice K. L. Order Set to Improve the Care of Patients Hospitalized for an Exacerbation of Chronic Obstructive Pulmonary Disease. Ann. Am. Thorac. Soc. 2016;13:811–815. doi: 10.1513/AnnalsATS.201507-466OC.
31. Radosevich M. A., et al. Implementation of a Goal-Directed Mechanical Ventilation Order Set Driven by Respiratory Therapists Improves Compliance With Best Practices for Mechanical Ventilation. J. Intensive Care Med. 2019;34:550–556. doi: 10.1177/0885066617746089.
32. Nichols K. R., Petschke A. L., Webber E. C., Knoderer C. A. Comparison of Antibiotic Dosing Before and After Implementation of an Electronic Order Set. Appl. Clin. Inform. 2019;10:229–236. doi: 10.1055/s-0039-1683877.
33. Zeidan A. M., et al. Impact of a venous thromboembolism prophylaxis ‘smart order set’: Improved compliance, fewer events. Am. J. Hematol. 2013;88:545–549. doi: 10.1002/ajh.23450.
34. Desai S. V., et al. Education Outcomes in a Duty-Hour Flexibility Trial in Internal Medicine. N. Engl. J. Med. 2018;378:1494–1508. doi: 10.1056/NEJMoa1800965.
35. Kumar A., Chi J. Duty-Hour Flexibility Trial in Internal Medicine. N. Engl. J. Med. 2018;379:300. doi: 10.1056/NEJMc1806648.
36. Ouyang D., Chen J. H., Hom J., Chi J. Internal Medicine Resident Computer Usage: An Electronic Audit of an Inpatient Service. JAMA Intern. Med. 2016;176:252–254. doi: 10.1001/jamainternmed.2015.6831.
37. Tunkel A. R., et al. Practice guidelines for the management of bacterial meningitis. Clin. Infect. Dis. 2004;39:1267–1284. doi: 10.1086/425368.
38. Fleming N. S., Ogola G., Ballard D. J. Implementing a standardized order set for community-acquired pneumonia: impact on mortality and cost. Jt. Comm. J. Qual. Patient Saf. 2009;35:414–421. doi: 10.1016/s1553-7250(09)35058-8.
39. Ballard D. J., et al. The Impact of Standardized Order Sets on Quality and Financial Outcomes. In: Henriksen K., Battles J. B., Keyes M. A., Grady M. L., editors. Advances in Patient Safety: New Directions and Alternative Approaches (Vol. 2: Culture and Redesign). US: Agency for Healthcare Research and Quality; 2011.
40. Lika B., Kolomvatsos K., Hadjiefthymiades S. Facing the cold start problem in recommender systems. Expert Syst. Appl. 2014;41:2065–2073.
41. Vernooij R. W. M., Sanabria A. J., Solà I., Alonso-Coello P., Martínez García L. Guidance for updating clinical practice guidelines: a systematic review of methodological handbooks. Implement. Sci. 2014;9:3. doi: 10.1186/1748-5908-9-3.
42. Mazzolai L., et al. Diagnosis and management of acute deep vein thrombosis: a joint consensus document from the European Society of Cardiology working groups of aorta and peripheral vascular diseases and pulmonary circulation and right ventricular function. Eur. Heart J. 2018;39:4208–4218. doi: 10.1093/eurheartj/ehx003.
