AMIA Annual Symposium Proceedings
2020 Mar 4;2019:477–486.

Exploring Different Approaches in Measuring EHR-based Adherence to Best Practice – A Case Study with Order Sets and Associated Outcomes

Nathan C Hulse 1,2, Jaehoon Lee 1,2, José Benuzillo 1
PMCID: PMC7153084  PMID: 32308841

Abstract

In connection with a recent enterprise-wide rollout of a new electronic health record, Intermountain Healthcare is investing significant effort in building a central library of best-practice order sets. These order sets represent best practice guidelines for specific clinical scenarios and are deployed with the intent of standardizing care, reducing variation, and consistently delivering good clinical outcomes to the populations we serve. Measuring their use and the level to which caregivers adhere to these standards becomes an important factor in understanding and characterizing the impact that they deliver. Notwithstanding the importance of these metrics, well-defined methods for measuring adherence to a given clinical guideline as delivered through an order set are not fully characterized in the medical literature. In this paper, we describe initial efforts at measuring compliance to a defined ‘best practice’ standard by means of content utilization analysis, a calculated adherence model, and relevant clinical key performance indicators. The degree to which specified clinical outcomes vary across these measurement models is compared for a group of order sets tied to treating coronary artery bypass graft patients and heart failure patients. While the patterns derived from this analysis show some uncertainty, more granular methods that look at line-item, or ‘order level’, detail reveal more significant differences in the corresponding set of outcomes than higher-level adherence surrogates.

Introduction

Order set libraries have become an established part of electronic health record (EHR) implementation as EHRs have become increasingly common in the United States1. Previous research in the informatics field has shown them to be convenient, thought-provoking, and foundationally supportive of clinical best practice2,3. Studies have shown that well-crafted order set libraries are among the key determinants that predict a successful deployment of a new EHR4,5.

Order sets are a unique component of an EHR installation in that (like clinical decision support rules) they represent clinical knowledge as delivered through interactive user interfaces6. They are typically rendered as interactive forms that deliver logical, pre-populated views of commonly grouped orders for specific procedures or clinical conditions. Their overall intent is to help clinicians deliver appropriate care in a simple, streamlined way. (Figure 1 below depicts an order set in the EHR used at Intermountain). The ‘checklist effect’ that they deliver forces users to mentally process and remember steps which could otherwise be easy to overlook in clinical care7. Collectively, as caregivers routinely use standardized order sets in delivering care, several key objectives that health organizations hope to realize would include lower overall variation in clinical care processes and by correlation, improved clinical outcomes and lower associated costs8,9.

Figure 1-.

Order set example. This particular order set focuses on treatment of acute myocardial infarction.

© Cerner Corporation. All rights reserved. This document contains Cerner confidential and/or proprietary information belonging to Cerner Corporation and/or its related affiliates which may not be reproduced or transmitted in any form or by any means without the express written consent

The benefit afforded by these order set content libraries comes with an associated need for ongoing investment in resources so as to maintain and update the content to ensure validity against current clinical science10. Both homegrown and vended order set libraries have arisen in recent years as implementers have noted and characterized the difficulties of building up and maintaining a comprehensive library of order sets. Compiling and maintaining an enterprise knowledge base of order sets has been described as highly resource-intensive. The magnitude of work required to maintain the content contained in these order set knowledge bases has led various groups to implement automated and machine-learning-based processes intended to streamline the review and update workflows that support them11,12. Researchers have noted that with the exponential growth in medical knowledge, there is a real risk that once implemented, order sets may be inadequately maintained; in essence, driving caregivers to practice outdated medicine on a widespread basis13.

Ultimately, the variability of functional pathways afforded by an EHR often allows caregivers multiple ways of performing the same task. Local experience has shown that clinical tools like quick-lists, generic order set templates (e.g. ‘general surgery order set’), personalized derivatives of order set templates (sometimes known as ‘favorites’ or ‘personal order sets’), and even poorly-coordinated central authoring efforts can leave users with a content library that affords them different content vehicles targeted at the very same clinical process. Furthermore, order set usage is not typically mandated by process or workflow, and users are free to electronically order care patterns without them entirely. Implementers must be aware of and confront the need to standardize the way that the care teams they manage use centralized order sets through education, active monitoring, and ongoing feedback. Figure 2 illustrates how a cardiovascular care process (with acute myocardial infarction depicted as a relevant example) exhibits a broad set of content utilization patterns in terms of which order set content was used as part of the care delivery process. In this figure, we depict order sets which are designated as an intended standard of care for this condition in the striped bars, whereas the solid bars indicate other content templates that were used. The distribution shows a wide variety of order set content that is used in caring for this single condition. Some of this diversity is to be expected, in the treatment of comorbidities and in addressing personalized needs. Yet, the chart shows the diversity of knowledge content that can address a common condition, and the associated difficulty of standardizing care. In short, publishing thoughtful and relevant content standards as order sets does not ensure they will be uniformly embraced.

Figure 2-.

Bar chart of collective order set usage tied to treatment encounters dealing with acute myocardial infarction. Designated enterprise content standards are shown in the striped bars.

It is important to note that variation in order set usage can occur at two different levels. First, caregivers still have the ability to use different content in delivering care (e.g. use a designated clinical standard or something entirely different), but also have the ability to vary in terms of how closely they follow the care outlined within an enterprise standard itself. Order set templates are inherently interactive, and users have the ability to select orders that aren’t part of the template, remove recommended orders, or change default values and order sentences. Inevitably there will be differences not only in what content is or isn’t used, but also in how users interact with the content itself. Measurement models aimed at measuring adherence to a standard of care must be able to account for differences at multiple levels of engagement.

Background

Intermountain Healthcare is a not-for-profit integrated delivery network that serves the populations of the Intermountain West (Utah and southern Idaho)14. It has 22 hospitals, over 150 clinics, a medical group of over 700 employed physicians and an insurance plan that serves the needs of the people in the region. Collectively, it accounts for roughly half of the healthcare given in the region and insures a little more than a quarter of the population. In switching EHRs and the toolsets that support them, our main priorities in terms of order set knowledge management were threefold:

  1. Build tools and processes to quickly amass and refine a sufficiently comprehensive knowledge base,

  2. Provide visibility into usage patterns surrounding their use, including actionable recommendations for improvement, and

  3. Characterize clinical caregivers’ adherence to these standards and their associated impact on clinical outcomes.

Previous publications have detailed efforts in deriving actionable recommendations for change to order set templates based off collective usage data8,10,15. In this manuscript, we focus on describing current efforts at characterizing how well a clinical standard is used and the associated clinical outcomes derived from this care.

Rationale

We want to be able to clearly understand the implications of owning and publishing an order set library. In order to fully understand the value of our efforts, we want to understand not only the extent to which users are engaging with a clinical content standard (e.g. a centrally defined and designated order set), but also how they are using it and the combined effects of the above as it relates to clinical outcomes. Our hypothesis in pursuing this investigation is that the more our measurement algorithms account for specific actions, the greater will be our ability to account for any observed differences in the corresponding outcomes. We intend to account for three specific approaches to measurement of adherence in this investigation:

  1. Whether or not a designated enterprise content standard was used as part of an encounter for which it was designed

  2. A calculated approach that quantifies how closely a user who engaged with the standard followed its recommendations

  3. A ‘key performance indicator’ approach in which specific key electronic orders are accounted for, regardless of whether the orders originated from a designated enterprise standard order set

Methods

In order to facilitate the study, we engaged with colleagues in the cardiovascular clinical program at Intermountain Healthcare to identify two specific diseases of interest, specifically acute myocardial infarction and heart failure. After discussion with them, we determined to look at ordering patterns related to standards of care intended for treating patients presenting with those conditions. For each, we agreed upon a common cohort definition for the intended targets of specific enterprise content sets. We also assembled a common set of outcomes of interest related to the care processes decided upon. Eight order sets related to treating acute myocardial infarction and six order sets relevant in treating heart failure were identified. These cohort definitions and order sets of interest are detailed in Table 1.

Table 1-.

Cohort definitions and enterprise standard order sets for acute myocardial infarction and heart failure

Cohort definition (both conditions): patients with age (Admission Date minus Birthdate) greater than or equal to 18 years, admitted to the hospital for inpatient acute care with an ICD-9/10-CM Principal Diagnosis Code for AMI (acute myocardial infarction cohort) or HF (heart failure cohort) as defined by CMS.

Associated enterprise order sets analyzed in the study (each is a designated standard for the associated condition):

Acute Myocardial Infarction:
• CV Acute Coronary Syndrome Probable Phased
• CV Cath / PCI FLOOR to Cath Lab PRE Procedure Phased
• CV Cath / PCI POST - Recovery to Acute Care Floor Phased
• CV Cath / PCI POST - Recovery to Home Phased
• CV Cath / PCI POST - Recovery to ICU Phased
• CV Cath / PCI PRE Procedure Phased
• CV General Floor Admission Phased
• CV General ICU Admission

Heart Failure:
• CV Heart Failure Admission
• CV Heart Failure Discharge Phased
• CV Heart Failure ICU Admission
• CV Heart Failure Observation
• CV General Floor Admission Phased
• CV General ICU Admission
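To make the cohort logic concrete, the sketch below expresses the Table 1 inclusion rule in Python. The field names (admit_date, birth_date, care_level, principal_dx) and the ICD-10 code subset are illustrative assumptions; the actual CMS-defined code list and warehouse schema are not reproduced here.

```python
from datetime import date

# Illustrative stand-in for the CMS-defined AMI principal diagnosis codes.
AMI_CODES = {"I21.0", "I21.1", "I21.2", "I21.3", "I21.4"}

def in_ami_cohort(encounter: dict) -> bool:
    """Adult inpatient acute-care admission with an AMI principal diagnosis."""
    age_years = (encounter["admit_date"] - encounter["birth_date"]).days / 365.25
    return (
        age_years >= 18
        and encounter["care_level"] == "inpatient_acute"
        and encounter["principal_dx"] in AMI_CODES
    )

enc = {
    "admit_date": date(2019, 5, 1),
    "birth_date": date(1960, 1, 15),
    "care_level": "inpatient_acute",
    "principal_dx": "I21.3",
}
print(in_ami_cohort(enc))  # True
```

The heart failure cohort would follow the same shape with the HF code list substituted.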

Adherence Measurement Definition 1 – Raw Usage Data - In discussing options surrounding various ways of measuring adherence to a clinical standard, we felt that the act of ordering from a designated order set during an encounter relevant to that content could serve as a first, simple measure. Simply put, this type of measurement answers the question of whether or not the designated standard content was regularly used in the contexts for which it was designed. While this type of measurement doesn’t account for the specific actions that occur in a caregiver’s interactions with the order set form itself, it may serve as a sufficiently granular approximation of compliance to the standard, since the effort to use it may in and of itself signal an intent to follow the pattern.
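A minimal sketch of this binary measure, assuming a simple per-encounter order log: the field name "source_order_set" and the subset of standard order set names shown are illustrative assumptions, not the actual data model.

```python
# Hypothetical subset of the designated enterprise standard order sets.
STANDARD_SETS = {
    "CV Acute Coronary Syndrome Probable Phased",
    "CV Cath / PCI PRE Procedure Phased",
}

def used_standard(orders: list) -> bool:
    """True if any order in the encounter originated from a designated standard set."""
    return any(o.get("source_order_set") in STANDARD_SETS for o in orders)

orders = [
    {"order": "aspirin 325 mg PO",
     "source_order_set": "CV Acute Coronary Syndrome Probable Phased"},
    {"order": "troponin I", "source_order_set": None},
]
print(used_standard(orders))  # True
```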

Adherence Measurement Definition 2- Calculated adherence scores, based off interactions within order sets – A second approach that we opted to pursue in measuring compliance involves a deeper measurement of how closely the issued orders resemble those that are predefined by the order set itself as a recommendation. In this model, users are credited for issuing orders that are recommended (as defined by preselection in the order set template), credited to a lesser extent for issuing optional orders in the order set template, and penalized for omitting recommended orders and adding ‘ad hoc’ orders to the ordering session (orders that are not included in the template at all). In building these score guidelines, we recognize that caregivers will have the need to deviate from a specific standard and tailor the care that they give. In attempting to assign scores and numbers to these types of interactions, we are attempting only to measure the overall adherence or deviation from the initial order set template, and not offer any critique or commentary as to the clinical appropriateness of the actions taken. In this model, the coefficients for these scoring mechanisms per line item order are detailed in the scenarios below:

  • A = Order pre-selected in template & ordered = 1

  • B = Order pre-selected in template & not ordered = -0.5

  • C = Order initially unselected in template, but ordered = 0.1

  • D = Order initially unselected in template, and not ordered = 0.0

  • E = Ad-hoc order not derived from order set template = -0.5

In order to facilitate comparison of calculated adherence scores across order set groupings of variable size, we normalized each of these raw scores, allowing scores derived from both small and large templates to be meaningful and comparable. We pursued two normalization approaches: the first divides by a maximum score based off the total number of prechecked orders in the template; the second divides by the total number of orders placed in the encounter. The equations below, in which the variables are counts of the scenarios from the bullet list above, were calculated for each ordering scenario for both heart failure and acute myocardial infarction.

Normalized score 1 = (A + 0.1C - 0.5B - 0.5E) / Total # of prechecked orders in template

Normalized score 2 = (A + 0.1C - 0.5B - 0.5E) / (A + C + E)
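An illustrative sketch of the Definition 2 scoring model and both normalizations follows; variable names mirror the scenario labels in the bullet list, and the example counts are hypothetical.

```python
# A = prechecked & ordered, B = prechecked & omitted,
# C = optional (unselected) & ordered, E = ad-hoc orders outside the template.
def raw_score(A: int, B: int, C: int, E: int) -> float:
    return 1.0 * A + 0.1 * C - 0.5 * B - 0.5 * E

def normalized_score_1(A: int, B: int, C: int, E: int, prechecked_total: int) -> float:
    # Denominator 1: total number of prechecked orders in the template.
    return raw_score(A, B, C, E) / prechecked_total

def normalized_score_2(A: int, B: int, C: int, E: int) -> float:
    # Denominator 2: total number of orders actually placed (A + C + E).
    return raw_score(A, B, C, E) / (A + C + E)

# Hypothetical encounter: 8 of 10 prechecked orders issued, 3 optional
# orders added from the template, and 1 ad-hoc order.
A, B, C, E = 8, 2, 3, 1
print(round(normalized_score_1(A, B, C, E, prechecked_total=10), 3))  # 0.68
print(round(normalized_score_2(A, B, C, E), 3))  # 0.567
```

Note that under denominator 2 the same raw score shrinks as more optional and ad-hoc orders are placed, which is what makes scores comparable across templates of different sizes.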

Adherence Measurement Definition 3 - Key Performance Indicators - In this third adherence definition, we looked to simplify adherence metrics somewhat by treating all pre-selected orders from an order set template as ‘Key Performance Indicators’ (KPIs). We opted to pursue this approach rather than a ‘clinical expert defined’ set because we are attempting to explore generalizable methods of measuring adherence that do not necessitate extra steps of human involvement. We interpreted the order set authors’ intent in pre-selecting an order in the template as a strong indicator that the associated order should probably be issued in the scenario. We then calculated a total count of orders that matched these lists in the encounters matching the acute myocardial infarction and heart failure scenarios, regardless of whether the orders were derived from the enterprise content standards or not. To normalize these scores, we divided the total number of KPIs ordered in the encounter by the total number of possible KPIs tied to that scenario.
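A sketch of the Definition 3 KPI metric: count issued orders that match the KPI list (regardless of their source) and divide by the number of possible KPIs. The KPI list below is a hypothetical stand-in for a template's prechecked orders, not the actual enterprise definitions.

```python
# Hypothetical KPI list standing in for a template's prechecked orders.
KPI_ORDERS = {"aspirin", "beta blocker", "statin", "troponin", "ecg"}

def kpi_score(issued_orders: set) -> float:
    """Fraction of possible KPIs that were ordered in the encounter."""
    return len(KPI_ORDERS & issued_orders) / len(KPI_ORDERS)

# Three of the five KPIs appear among the issued orders; the source
# order set (or lack thereof) is deliberately irrelevant here.
print(kpi_score({"aspirin", "statin", "ecg", "heparin"}))  # 0.6
```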

We extracted 12 months of order set usage data for the order sets detailed in Table 1 from Intermountain Healthcare’s enterprise data warehouse, deriving data only from our Cerner EHR system. Data was cleansed and vetted to remove test patients and other non-clinical data sources. The dataset used in the study was de-identified and vetted to ensure that PHI was not available. After extracting relevant ordering data from the encounters identified, we also pulled outcomes data from the cardiovascular clinical program’s outcomes database to align the order instances and scores with specific outcomes data.

In aligning data to specific outcomes results, we opted to group the data in patterns that match the three adherence measurement approaches detailed above. For Adherence Measurement Definition 1, we grouped all providers who used the enterprise standards in one cluster and those who didn’t in another, and prepared summary statistics for each group. For Adherence Measurement Definition 2, we took the ‘adherent’ group from definition 1 and clustered it into three groups based off the adherence scores tied to each encounter: above the 75th percentile, 25th–75th percentile, and below the 25th percentile. In similar fashion, for Adherence Measurement Definition 3, we grouped all participants into percentile-based groups. An important difference in this third metric compared to the second is that all participants were measured the same way, since using the order set standard was not a prerequisite for measuring KPI scores.
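The percentile-based clustering used for Definitions 2 and 3 can be sketched with the standard library; the sample scores are illustrative.

```python
from statistics import quantiles

def cluster_by_quartile(scores: list) -> dict:
    """Split scores into <25th, 25th-75th, and >75th percentile groups."""
    q1, _, q3 = quantiles(scores, n=4)  # 25th, 50th, 75th percentiles
    groups = {"below_25th": [], "middle": [], "above_75th": []}
    for s in scores:
        if s < q1:
            groups["below_25th"].append(s)
        elif s > q3:
            groups["above_75th"].append(s)
        else:
            groups["middle"].append(s)
    return groups

scores = [0.2, 0.35, 0.4, 0.5, 0.55, 0.6, 0.7, 0.9]
g = cluster_by_quartile(scores)
print(len(g["below_25th"]), len(g["middle"]), len(g["above_75th"]))  # 2 4 2
```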

Results

Using the calculation patterns and data extraction methods described above, we gathered data and summarized basic statistics for each measurement approach. We identified 1,693 instances of acute myocardial infarction and 1,736 instances of heart failure from the cohort definitions and the 12-month data set from the Cerner EHR system. Table 2 contains the results for Adherence Measurement Definition 1, in which encounters were grouped according to whether or not a specific clinical order set was used as part of a matching clinical scenario.

Table 2-.

Summary statistics for outcomes tied to encounters for acute myocardial infarction and heart failure, grouped by whether or not the ordering provider used the designated order set standard for the specific encounter.

Acute Myocardial Infarction Heart Failure
Standard - Y Standard - N Standard - Y Standard - N
Avg Length of Stay 3.585 days 3.342 days 4.51 days 4.48 days
Avg Mortality rate 0.044 0.0485 0.0878 0.1006
Avg Readmission Rate (30 days) 0.060 0.063 0.170 0.150

For Adherence Measurement Definition 2, we produced similar statistics, as shown in Tables 3 and 4. Histograms representing the distribution of scores in this exercise are shown in Figure 3 below. These distribution curves are mostly Gaussian in nature, though there are elements that might suggest a bimodal normal distribution, implying that the data may represent two heterogeneous groups of data sources. Although we did not further analyze what distinguishes the two groups in this study, we feel that it is statistically reasonable to group these bell-shaped distributions by the <25th, 25th–75th, and >75th percentiles. Accordingly, after the scores had been calculated and normalized against the two denominator definitions above, we grouped the encounters of caregivers who adhered to the enterprise standard order sets into three clusters: those above the 75th percentile, those between the 25th and 75th percentiles, and the bottom quartile (below the 25th percentile).

Table 3-.

Summary statistics for outcomes tied to encounters for acute myocardial infarction, grouped by the scoring mechanism detailed in Adherence Measurement Definition 2. Results normalized against both denominator definitions are given.

Acute Myocardial Infarction– Normative Denominator 1
Y - >75th percentile Y - 75th-25th percentile Y - < 25th percentile Standard - N
Avg Length of Stay 3.358 days 3.626 days 3.586 days 3.342 days
Avg Mortality rate 0.028 0.045 0.057 0.0485
Avg Readmission Rate (30 days) 0.043 0.066 0.063 0.063
Acute Myocardial Infarction – Normative Denominator 2
>75th percentile 75th-25th percentile < 25th percentile Standard - N
Avg Length of Stay 3.349 days 3.721 days 3.535 days 3.342 days
Avg Mortality rate 0.019 0.050 0.056 0.0485
Avg Readmission Rate (30 days) 0.063 0.062 0.056 0.063

Table 4-.

Summary statistics for outcomes tied to encounters for heart failure, grouped by the scoring mechanism detailed in Adherence Measurement Definition 2. Results normalized against both denominator definitions are given.

Heart Failure – Normative Denominator 1
>75th percentile 75th-25th percentile < 25th percentile Standard - N
Avg Length of Stay 4.780 days 4.384 days 4.510 days 4.48 days
Avg Mortality rate 0.104 0.073 0.103 0.1006
Avg Readmission Rate (30 days) 0.195 0.150 0.189 0.150
Heart Failure – Normative Denominator 2
>75th percentile 75th-25th percentile < 25th percentile Standard - N
Avg Length of Stay 4.341 days 4.391 days 4.912 days 4.48 days
Avg Mortality rate 0.138 0.059 0.176 0.1006
Avg Readmission Rate (30 days) 0.165 0.170 0.188 0.150

Figure 3-.

Histograms depicting the distribution curves of scores derived for the acute myocardial infarction (left) and heart failure (right) data sets. The data is mostly Gaussian in nature, though there are elements of bimodality in the sample.

For Adherence Measurement Definition 3, we produced summary statistics, as shown in Tables 5 and 6. Here we grouped the populations into the same three percentile-based clusters defined above, with percentiles based off the KPI scores detailed in the methods section.

Table 5-.

Summary statistics for outcomes tied to encounters for acute myocardial infarction, grouped by the scoring mechanism detailed in Adherence Measurement Definition 3

Acute Myocardial Infarction
>75th percentile 75th-25th percentile < 25th percentile
Avg Length of Stay 3.518 days 3.544 days 3.733 days
Avg Mortality rate 0.026 0.046 0.057
Avg Readmission Rate (30 days) 0.049 0.063 0.065

Table 6-.

Summary statistics for outcomes tied to encounters for heart failure, grouped by the scoring mechanism detailed in Adherence Measurement Definition 3

Heart Failure
>75th percentile 75th-25th percentile < 25th percentile
Avg Length of Stay 4.877 days 4.366 days 4.488 days
Avg Mortality rate 0.098 0.075 0.107
Avg Readmission Rate (30 days) 0.195 0.157 0.176

Discussion

In exploring the various approaches to measuring adherence to best practice standards through the EHR, we found varying patterns in how outcomes differ, based off the generalized approach to measurement. The first approximation of adherence, specifically whether or not a designated content standard was used, showed relatively minimal differences in the outcomes distributions that corresponded with each group. Across the two diseases studied, the six outcomes metrics were all comparable, with the ‘content standard used’ group producing slightly more favorable results for three of the outcomes and slightly less favorable results for the other three. Overall, whether or not the designated content sets were used did not produce strong differences in the outcomes analyzed in this effort.

Adherence Measurement Definition 2 took a slightly more detailed approach, analyzing the specifics of what users were doing with content inside the order sets of interest. For the acute myocardial infarction data set, almost all of the outcome metrics showed improvements in the highest quartile of adherence measures, with the exception of readmission rate under the second normative denominator. The heart failure results exhibit a similar trend, with closer adherence generally correlating with better outcomes, with the exception of length of stay.

The third approach to adherence, specifically analyzing only pre-checked orderables and treating them as KPIs, showed all three outcomes trending toward better results as ‘adherence’ to the KPIs increased for the acute myocardial infarction data. The heart failure data showed similar trends in comparing the bottom and middle quartiles, but less so in the highest quartile. In conferring with clinical domain experts on this observation, they noted that the diagnosis and treatment of heart failure is inherently broader by nature and that the underlying etiology and associated treatments are more varied.

The approach to normalizing the data scores in the second and third branches of our study seemed to allow for comparative analysis of content sets of varying sizes. By nature, labeling ‘groups’ of order sets as a single standard makes comparison a much more difficult task. A similar type of study in which single order sets were designated as a content standard would make for a much simpler comparison.

One of the underlying premises of our effort is the hope and expectation that as adherence to clinical standards measurably improved, corresponding outcomes would trend in a positive direction. The data we have analyzed doesn’t hold this as universally true, though there are many other covariates not analyzed by our current approach. Our models are ‘order-centric’ and don’t account for co-morbidities, actual delivery of care, patient compliance, and other factors that have an important bearing on outcomes.

Limitations

We are only basing our preliminary analysis off two content types from a single clinical knowledge domain in cardiovascular medicine. Whether or not similar analyses would be relevant for other clinical domains remains to be seen. Our current model does not account for acuity, and that alone may account for significant impact in terms of how the corresponding outcomes move. Further, this analysis only looks at data from one site and one EHR vendor.

In this analysis, we did not assume any variation existed in how the providers used order sets. We intentionally did not use data derived from order set use at early stages of implementation, so as to avoid inadvertently incorporating learning curve effects in the data. However, there are many studies showing that user behaviors in interacting with order sets differ by physician characteristics, clinical specialty, clinical setting, and familiarity or training level with computerized order entry systems16. A more rigorous statistical effort to show that those factors do not cause variations in ordering patterns may be required as a preliminary step before analyzing the relations between ordering patterns and clinical outcomes. To do so, further order set utilization data may be required at different levels of granularity.

Next Steps

We intend to pursue similar analyses in other enterprise content standards, including surgical targets of appendectomy, cholecystectomy, and joint replacement surgery. We are hoping to see if similar patterns hold or if more defined ones emerge. Some of these content standards involve a single order set instead of groups of order sets as a designated standard, so we plan to test out similar methods that do not require a normalization step so as to be able to compare scores. Finally, we also intend to test machine-learning models that can derive and identify KPI types of order items that may be justifiably more predictive and potent than just the set of preselected orders in the original order set template.

Acknowledgments

We would like to acknowledge our colleagues at Cerner for their help in identifying data relevant to the order set templates, corresponding metadata, and the order set instance derivatives created from the use of these plans in the clinical data repository. Additionally, we would like to thank clinical data analysts and domain experts from the Cardiovascular Clinical Program at Intermountain Healthcare for their time, feedback, and suggestions on the approaches taken in the manuscript.

Conclusion

In this paper, we have characterized the need for methods to calculate adherence to clinical standards, including standards distributed through knowledge artifacts like order sets. Three separate approaches for calculating adherence have been described and characterized: high-level content usage, a more detailed mathematical approach for approximating adherence, and a third approach that focuses only on key actions. The various scores and corresponding outcomes for all of these approaches have been presented for two key disease processes of interest, specifically acute myocardial infarction and heart failure. We hope to extend our models going forward to more fully account for user behaviors in adhering to and varying from clinical standards, as well as the corresponding effects that accompany them in clinical care.


References

  • 1. Payne TH, Hoey PJ, Nichol P, Lovis C. Preparation and Use of Preconstructed Orders, Order Sets, and Order Menus in a Computerized Provider Order Entry System. J Am Med Inform Assoc. 2003;10(4):322–329. doi:10.1197/jamia.M1090.
  • 2. Osheroff JA, Pifer EA, Sittig DF, Jenders RA, Teich JM. Clinical Decision Support Implementers’ Workbook. Chicago: Healthcare Information Management and Systems Society; 2004.
  • 3. Wright A, Feblowitz JC, Pang JE, et al. Use of Order Sets in Inpatient Computerized Provider Order Entry Systems: A Comparative Analysis of Usage Patterns at Seven Sites. Int J Med Inform. 2012;81(11):733–745. doi:10.1016/j.ijmedinf.2012.04.003.
  • 4. Ozdas A, Speroff T, Waitman R, Ozbolt J, Butler J, Miller R. Integrating “Best of Care” Protocols into Clinicians’ Workflow via Care Provider Order Entry: Impact of Quality-of-Care Indicators for Acute Myocardial Infarction. J Am Med Inform Assoc. 2006;13(2):188–196. doi:10.1197/jamia.M1656.
  • 5. Asaro P, Sheldahl A, Char D. Physician Perspective on Computerized Order-sets with Embedded Guideline Information in a Commercial Emergency Department Information System. AMIA Annu Symp Proc; Washington, DC; 2005. pp. 6–10.
  • 6. Scheck McAlearney A, Chisolm D, Veneris S, Rich D, Kelleher K. Utilization of evidence-based computerized order sets in pediatrics. Int J Med Inform. 2006;75(7):501–512. doi:10.1016/j.ijmedinf.2005.07.040.
  • 7. Jacobs BR, Hart KW, Rucker DW. Reduction in clinical variance using targeted design changes in computerized provider order entry (CPOE) order sets: impact on hospitalized children with acute asthma exacerbation. Appl Clin Inform. 2012;3:52–63. doi:10.4338/ACI-2011-01-RA-0002.
  • 8. Wright A, Sittig DF. Automated development of order sets and corollary orders by data mining in an ambulatory computerized physician order entry system. AMIA Annu Symp Proc. 2006:819–823.
  • 9. Thomas SM, Davis DC. The Characteristics of Personal Order Sets in a Computerized Physician Order Entry System at a Community Hospital. AMIA Annu Symp Proc. 2003;2003:1031.
  • 10. Zhang Y, Levin JE, Padman R. Data-driven order set generation and evaluation in the pediatric environment. AMIA Annu Symp Proc. 2012:1469–1478.
  • 11. Hulse NC, Del Fiol G, Bradshaw RL, et al. Towards an on-demand peer feedback system for a clinical knowledge base: a case study with order sets. J Biomed Inform. 2008;41:152–164. doi:10.1016/j.jbi.2007.05.006.
  • 12. Bobb AM, Payne TH, Gross PA. Viewpoint: Controversies Surrounding Use of Order Sets for Clinical Decision Support in Computerized Provider Order Entry. J Am Med Inform Assoc. 2007;14(1):41–47. doi:10.1197/jamia.M2184.
  • 13. Pearson SD, Goulart-Fisher D, Lee TH. Critical pathways as a strategy for improving care: problems and potential. Ann Intern Med. 1995;123:941–948. doi:10.7326/0003-4819-123-12-199512150-00008.
  • 14. Intermountain Healthcare. www.intermountainhealthcare.org. Accessed 8 Mar 2019.
  • 15. Hulse NC, Lee J. Extracting Actionable Recommendations for Modifying Enterprise Order Set Templates from CPOE Utilization Patterns. AMIA Annu Symp Proc. 2018 Apr 16;2017:950–958.
  • 16. Brunette DD, et al. Implementation of Computerized Physician Order Entry for Critical Patients in an Academic Emergency Department Is Not Associated with a Change in Mortality Rate. West J Emerg Med. 2013;14(2):114–120. doi:10.5811/westjem.2012.9.6601.
