Author manuscript; available in PMC: 2016 Jan 1.
Published in final edited form as: J Am Coll Radiol. 2015 Jan;12(1):70–74. doi: 10.1016/j.jacr.2014.07.028

Effect of Computerized Physician Order Entry on Imaging Study Indication

Joshua M Pevnick, Andrew J Herzik, Ximin Li, Irene Chen, Mamata Chithriki, Lysander Jim, Paul Silka
PMCID: PMC4284426  NIHMSID: NIHMS615601  PMID: 25557572

Abstract

The effect of computerized physician order entry (CPOE) on imaging indication quality had previously been measured only in one institution’s emergency department, using a homegrown electronic health record with faculty physicians, and only with one instrument. To better understand how the recent CPOE implementations at many US hospitals have affected indication quality, we measured the effect of CPOE in a generalizable inpatient setting, using one existing instrument and one novel instrument.

We retrospectively analyzed the indications for 100 randomly selected inpatient abdominal computed tomography studies from each of two periods: the two calendar months immediately before a 3/3/2012 CPOE implementation (1/1/2012–2/29/2012) and two subsequent calendar months (5/1/2012–6/30/2012). We excluded the two intervening months to avoid behaviors associated with adoption. We measured indication quality using a published 8-point explicit scoring scale and our own novel implicit 7-point Likert scale.

Explicit scores increased 93% from a pre-CPOE mean ± 95% CI of 1.4 ± 0.2 to a CPOE mean of 2.7 ± 0.3 (p < 0.01). Implicit scores increased 26% from a pre-CPOE mean of 4.3 ± 0.3 to a CPOE mean of 5.4 ± 0.2 (p < 0.05). When presented with a statement that an indication was “extremely helpful,” and choices ranging from “strongly disagree” = 1 to “strongly agree” = 7, implicit scores of 4 and 5 signified “undecided” and “somewhat agree,” respectively.

In an inpatient setting with strong external validity to other US hospitals, CPOE implementation increased indication quality, as measured by two independent scoring systems (one pre-existing explicit system and one novel, intuitive implicit system). CPOE thus appears to enhance communication from ordering clinicians to radiologists.

Keywords: Computerized physician order entry, Diagnostic imaging, Referral and consultation, Medical informatics

INTRODUCTION

Multiple studies demonstrate that clinical context improves imaging interpretation.1 As many US hospitals have recently switched from paper ordering to computerized physician order entry (CPOE), we sought to study the effect of this change on the quality of imaging indications received by inpatient radiologists. Based on research showing that CPOE can take longer than paper ordering2 and can adversely affect communication,3 we considered the possibility that it could worsen the utility of the indications provided by ordering clinicians. However, we also recognized that CPOE allows for dynamic, study-specific imaging order interfaces, which can be used both to remind clinicians that an indication is required and to offer them easy access to common indications for a given imaging study. Thus, we also had reason to believe that certain components of CPOE could improve indication quality.

Historically, ordering physicians’ indications for imaging examinations have often been handwritten on paper before undergoing various stages of computer scanning and/or human transcribing to ultimately be received by the reading radiologist. This system causes sundry errors.4 Furthermore, given the time pressures faced by clinicians, asking them to handwrite indications may result in little to no information being provided. Many blank paper order forms provide no reminder to the ordering physician that an indication is necessary.

One prior study showed that imaging indication quality improved when CPOE was implemented.5 This work was pioneering in its vision, and it provided us the impetus to study CPOE in an inpatient setting with strong external validity to the many US hospitals that have recently implemented CPOE. Three major differences in our study help build on this previous research. First, the prior analysis was conducted at an institution that initially used a custom paper form, with many checkboxes for various common indications, for the imaging exam studied. This differs from the blank paper order forms common in most pre-CPOE environments; the custom forms could have contributed to higher baseline indication quality, and thereby led to underestimating the size of any change. Second, the study analyzed the transition to a homegrown medical record with a user interface allowing only free-text input of imaging indications. This differs from the vendor CPOE systems most commonly adopted at US hospitals, which tend to feature a combination of study-specific indication buttons and free text. Third, the study was conducted in the emergency department of an academic institution staffed by employed physicians, who could be required to use the interface as a condition of employment. Additionally, only one instrument to assess indication quality existed previously.5

When our large hospital implemented inpatient CPOE, it provided an excellent setting, from the standpoint of external validity to other US hospitals, in which to further test the effect of CPOE on indication quality. The ordering interface changed from free-text paper to an interface adopted by many US hospitals as part of their vendor CPOE: study-specific indication buttons plus free text. The CPOE was used by both employed and community physicians. The latter group was not contractually obligated to use the CPOE interface in any particular way. Indeed, like many similar hospitals, our institution sought to minimize hard stops and other cumbersome ordering mechanisms that could drive private physicians to admit their patients to nearby competing hospitals.

We sought to analyze the effect of CPOE on indication quality when implemented under these circumstances, which offered excellent external validity to many other US hospitals. Furthermore, we used a novel indication quality assessment system and compared it to results obtained using the prior instrument. Our novel instrument uses a 7-point Likert scale to capture the opinion of a reading radiologist regarding the quality of the indication.

METHODS

Study Design and Setting

We conducted a retrospective analysis of deidentified imaging orders from before and after implementation of CPOE.

Cedars-Sinai Medical Center is a nonprofit medical center with 896 licensed beds. Over 2000 providers are on its medical staff, including both academic faculty physicians and non-employee community physicians.

Intervention

On March 3rd, 2012, Cedars-Sinai Medical Center implemented a vendor CPOE system (Epic Systems, Verona, WI) commonly adopted by US hospitals. Prior to the adoption of this new system, physician ordering of inpatient radiology studies used paper forms, which offered space for the physician to write pertinent clinical information. The paper form was then electronically transcribed by an order entry clerk, and a series of technical interfaces relayed this information to the reading radiologist. After implementation of CPOE, physicians entered indications directly into a computer interface. The new format still allowed free-text entries, but also contained study-specific clinical indication buttons. A hard stop required that at least one of these buttons be selected; the choices included an ‘other’ button that was meant to be supplemented with free text.

We compared physician indications for computed tomography (CT) scans of the abdomen from two different time periods: the two calendar months immediately preceding CPOE go-live (1/1/12–2/29/12) and two subsequent calendar months during which CPOE was being used (5/1/12–6/30/12). The two calendar months immediately following CPOE go-live were excluded to allow for washout of any ordering behaviors associated with adoption. We randomly selected 100 completed CT abdomen orders from each time period. For each imaging order, we obtained the indication available to the reading radiologist.

We included CT scans that imaged both the abdomen and pelvis. Because we intentionally chose radiologists with experience interpreting these images to rate indications, we excluded abdominal CT angiograms, which are usually read by interventional radiologists.

Primary Outcome Variables – Explicit and Implicit Scoring Systems to Assess Indication Quality

Two independent methods were used to assess the quality of the imaging indications. First, we used an explicit scoring system both tested and cited in prior work.5,6 Two co-authors independently used a 3-point scale (0–2, where 0 = no information, 1 = one piece of information, and 2 = more than one piece of information) to evaluate each study across four separate component criteria: signs and symptoms, abnormal lab values, prior history, and relevant clinical question. The raters were blinded to the time period of each order indication.

These four component scores were then summed into a global explicit score, and a mean score across the two raters was calculated for each order indication. For example, the study indication “Abdominal Pain; evaluate for right-sided diverticulitis” contains some information about signs and symptoms (“abdominal pain”: score = 1), no information regarding lab values or prior history (scores = 0), and detailed information regarding a clinical question (“evaluate for right-sided diverticulitis”: score = 2). The component scores for this study would thus be 1/0/0/2, yielding a global explicit score of 3. In our study, these scores were averaged with those of a second rater.
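
For concreteness, the following is a minimal sketch of this summing-and-averaging logic. It is illustrative only: the study’s raters scored indications manually, and the function and variable names here are hypothetical.

```python
# Illustrative sketch of the explicit scoring arithmetic described above.
# Component scores (0-2 each) are assigned by human raters; this code only
# shows how they combine into a global score and a mean across raters.

COMPONENTS = ("signs_symptoms", "lab_values", "history", "clinical_question")

def global_explicit_score(component_scores: dict) -> int:
    """Sum the four 0-2 component scores into a 0-8 global explicit score."""
    assert set(component_scores) == set(COMPONENTS)
    assert all(0 <= s <= 2 for s in component_scores.values())
    return sum(component_scores.values())

# Worked example from the text: "Abdominal Pain; evaluate for right-sided
# diverticulitis" -> signs/symptoms = 1, labs = 0, history = 0,
# clinical question = 2, for a global explicit score of 3.
rater1 = {"signs_symptoms": 1, "lab_values": 0, "history": 0, "clinical_question": 2}
rater2 = {"signs_symptoms": 1, "lab_values": 0, "history": 0, "clinical_question": 1}

# The per-indication value used in the analysis is the mean across raters.
mean_score = (global_explicit_score(rater1) + global_explicit_score(rater2)) / 2
print(mean_score)  # 2.5
```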

We used the explicit scoring method because it had been cited in prior literature, but we recognized that it had never been validated. We thus also designed and used a novel implicit scoring method that relied on radiologists’ intuition about the usefulness of the clinical information provided to them.

This second method used two radiologist co-authors with extensive experience reading abdominal CT scans. Blinded to study time period, they independently rated indication quality using a seven-point Likert scale (“To interpret an abdominal CT scan, the following clinical indication is extremely helpful:” 1 = Strongly Disagree, 2 = Disagree, 3 = Somewhat Disagree, 4 = Undecided, 5 = Somewhat Agree, 6 = Agree, 7 = Strongly Agree).

Data Analysis

For both scoring systems, mean scores across the two raters were calculated for each order indication in the paper and CPOE time periods. Scores from the two time periods were then compared using two-tailed t-tests. To measure inter-rater reliability, we calculated a linear weighted kappa value for each scoring system.7 Using the implicit and explicit scores for each study, we calculated a coefficient of correlation between the two scoring methods.
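
The following is a minimal sketch of these calculations in Python, assuming SciPy and scikit-learn; the score arrays are toy data for illustration, not the study’s data.

```python
# Illustrative sketch (not the study's actual code) of the three analyses:
# a two-tailed t-test between ordering periods, a linearly weighted kappa
# for inter-rater reliability, and a Pearson correlation between scales.
import numpy as np
from scipy.stats import ttest_ind, pearsonr
from sklearn.metrics import cohen_kappa_score

# Mean-of-raters explicit scores for each sampled order (toy data).
paper_explicit = np.array([1.0, 1.5, 2.0, 0.5, 1.0])
cpoe_explicit = np.array([2.5, 3.0, 2.0, 3.5, 2.5])

# Two-tailed independent-samples t-test comparing the two periods.
t_stat, p_value = ttest_ind(paper_explicit, cpoe_explicit)

# Linear weighted kappa between two raters' ordinal scores (toy data);
# linear weights give partial credit for near-miss disagreements.
rater1 = [3, 2, 5, 4, 1]
rater2 = [3, 3, 5, 4, 2]
kappa = cohen_kappa_score(rater1, rater2, weights="linear")

# Pearson correlation between the explicit and implicit scores per study.
explicit = np.array([2.5, 3.0, 4.0, 1.5, 2.0])
implicit = np.array([4.0, 5.0, 6.0, 3.0, 4.0])
r, r_p = pearsonr(explicit, implicit)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}; kappa = {kappa:.2f}; r = {r:.2f}")
```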

Institutional Review Board Approval

The Cedars-Sinai Medical Center Institutional Review Board approved our methodology on 12/18/2011.

RESULTS

After implementation of CPOE, the quality of physician indications for inpatient abdominal CT scans improved significantly, using both explicit (Table 1) and implicit measurement. Among the components of the explicit score, there was a significant increase associated with CPOE for “signs and symptoms” and “history,” but not for “abnormal lab values” or “clinical question.” With paper ordering, the most likely component to be included in the imaging indication was “clinical question,” whereas with CPOE, emphasis shifted to “history.” Overall, explicit scores increased 93% from a mean ± 95% CI of 1.4 ± 0.2 using paper ordering to 2.7 ± 0.3 with CPOE (p < 0.01). Implicit scores increased 26% from 4.3 ± 0.3 using paper ordering to 5.4 ± 0.2 with CPOE (p < 0.05).

Table 1.

Mean component scores in the explicit scoring system before and with CPOE

Mean component score    Paper ordering*   CPOE*   p-value
Signs and symptoms      0.40              0.74    <0.01
History                 0.28              0.96    <0.01
Abnormal lab values     0.04              0.16    0.02
Clinical question       0.69              0.80    0.29

* 0–2 scale, where 0 = no information, 1 = one piece of information, and 2 = more than one piece of information

The linear weighted Kappa values were 0.77 (95% CI 0.72 – 0.82) and 0.42 (95% CI 0.33 – 0.52) for the explicit and implicit scales, respectively. This represents substantial and moderate agreement, respectively. The Pearson’s correlation coefficient between the explicit and implicit scales was 0.69 (p<.0001).
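
For reference, the linearly weighted kappa cited above follows the standard definition (a textbook formula, not one reproduced from the study):

```latex
% Linearly weighted kappa for k ordinal categories. o_{ij} and e_{ij} are
% the observed and chance-expected proportions of rater-score pairs (i, j);
% the weights give linear partial credit for near agreement.
w_{ij} = 1 - \frac{|i - j|}{k - 1}, \qquad
\kappa_w = \frac{\sum_{i,j} w_{ij}\, o_{ij} - \sum_{i,j} w_{ij}\, e_{ij}}
                {1 - \sum_{i,j} w_{ij}\, e_{ij}}
```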

DISCUSSION

CPOE use was associated with improved imaging indication quality, as measured by two different scoring systems. Our analysis cannot tell us which components of CPOE were most responsible for this change (e.g., increased legibility, or a user interface reminding clinicians of the need for an indication). Further study would be useful to determine the relative contributions of different factors.

Nonetheless, these results are important for at least three reasons. First, this analysis was carried out in a setting similar to that where most patients in the US receive their care: predominantly non-employee physicians8 who recently changed from a free text paper ordering system to a vendor CPOE.9,10 This differs from the 18–25% of studies of health information technology conducted at eight leading institutions that predominantly use physician employees who can be required to use CPOE and even cumbersome hard stops.11 Hospitals with non-employee physicians must carefully consider the burdens applied to their physicians, lest they risk physician dissatisfaction and loss of patient admissions to nearby competitors.12

Because indication quality improved in our sample, we expect that indication quality has improved across the US as CPOE has been implemented. Even if one took issue with the admittedly nascent methods of scoring indication quality, it is important to know that these methods, which are the best currently available, showed no decrement in indication quality associated with CPOE implementation. Further study would be helpful to learn whether CPOE-mediated increases in indication quality improve image interpretations, and ultimately patient outcomes.

Second, these results advance the science of imaging indication quality measurement. We are aware of only one previously published measure of indication quality, which is the explicit measure used in this study. Although its inter-rater reliability had been analyzed,6 its validity had not been tested. That both scoring systems detected increases in indication quality not only suggests that CPOE was responsible for this increase, but also provides some validation of the two scoring systems, in that two independent measures produced similar results.

In contrast to the explicit system’s use of arbitrary components in its scale, we used an implicit system to better incorporate the perspective of the reading radiologist. In this respect, we believe it to be a better measure of imaging indication quality. To be sure, for the two radiologists who used the implicit scale, strength of agreement was only moderate. Although this is inferior to the substantial strength of agreement of the explicit scale, it is within the range of the kappa values for some commonly accepted clinical tests and assessments.13 This kappa value might be improved in future iterations of the scale by providing examples of indications and corresponding Likert scale scores, or by otherwise providing more structure for implicit evaluations. A similar strategy has been used in developing structured implicit methods of assessing healthcare quality, although explicit methods tend to retain higher inter-rater reliability.14 Another strategy to potentially increase inter-rater reliability would be to use a 5-point Likert scale. Such a scale would be easier to use and interpret, and the fact that most implicit scores fell within a small range suggests that little discriminatory power would be lost.

Finally, against a backdrop of recent vast increases in government spending on health information technology (IT), debate regarding its value has also intensified.15,16,17,18 Our study contributes to that debate by shedding light on an oft-cited but infrequently documented potential benefit of EHRs: that care processes are improved by making clinical documentation both available and legible. Indeed, when a systematic overview of health IT analyzed 53 systematic reviews for evidence of benefit from increased information accessibility and legibility, the authors found that most reviews did not assess these benefits. Among those that did, only three reviews found any evidence of benefit from increased information accessibility, and only four reviews found any evidence of benefit from increased information legibility.17

Our study demonstrates that a core EHR technology (electronic text transmission) improves communication between physicians in a specific situation, namely the ordering of CT studies of the abdomen. Further research should confirm that these results extend to other imaging studies, examine whether all providers using electronic notes accrue benefit from improved provider-to-provider communication, and, most importantly, determine whether their patients subsequently benefit. If so, prior assessments of the potential benefit of health IT, which have tended to concentrate on adverse drug events and disease management,19 may be gross underestimates.

It is important to note the limitations of our study. First, as with all observational studies, we cannot exclude the possibility that another factor could have increased indication quality. However, we are not aware of any other change during the study time period that would have been the primary driver of these results. Second, we studied imaging indications for only one type of radiologic study, and at one provider organization. Given that resource restrictions limited our ability to conduct a more comprehensive study, we believe these choices were well made. Abdominal CT scans are an excellent choice for this research question because these scans contain so much information that the radiologist’s attention is very likely to be guided by the indication. This differs from more straightforward studies such as a Doppler ultrasound of the lower extremities, where the ordering clinician is usually looking for one of a very small number of diagnoses and there is much less information for a radiologist to examine, interpret, and describe.

Regarding our chosen institution, we believe that selecting an institution with a commonly used vendor CPOE where we were able to evaluate indications from both private practice and employed physicians greatly increases external validity. Nonetheless, we would be interested to learn of results of similar research across different imaging studies and institutions.

TAKE-HOME POINTS

  • In an inpatient setting with strong external validity to other US hospitals, CPOE implementation increased indication quality, as measured by two independent scoring systems (one pre-existing explicit system and one novel, intuitive implicit system).

  • CPOE thus appears to enhance communication from ordering clinicians to radiologists. This suggests that EHRs in general may facilitate enhanced communication between clinicians. Such a benefit of health IT would be widespread, but has not yet been extensively assessed.

  • We not only provided some validation for an existing scoring system, but also introduced a novel implicit scoring system that reflects the perspective of reading radiologists.

Acknowledgements

The authors thank Galen Cook-Wiens for his statistical assistance.

Funding Source: This work was supported by a Clinical Scholars Research Grant from the Burns and Allen Research Institute at Cedars-Sinai Medical Center (Dr. Pevnick), a Summer Project Fellowship from the Division of Education at Albert Einstein College of Medicine (Mr. Herzik), and by the National Center for Advancing Translational Science, Grant UL1TR000124 (Dr. Pevnick).

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Conflicts of Interests: None

Contributorship: JP and PS contributed to study concept and design. JP, AH, and XL contributed to data acquisition, data analysis and interpretation, statistical analysis, drafting of the manuscript, and manuscript revisions. IC, MC, and LJ contributed to data analysis and manuscript revisions.

REFERENCE LIST

  • 1. Berbaum KS, Franken EA Jr. Commentary: Does Clinical History Affect Perception? Academic Radiology. 2006;13(3):402–403. doi: 10.1016/j.acra.2005.11.031.
  • 2. Shu K, et al. Comparison of time spent writing orders on paper with computerized physician order entry. Studies in Health Technology and Informatics. 2001;(2):1207–1211.
  • 3. Beuscart-Zephir MC, Pelayo S, Anceaux F, Meaux JJ, Degroisse M, Degoulet P. Impact of CPOE on doctor-nurse cooperation for the medication ordering and administration process. International Journal of Medical Informatics. 2005;74(7–8):629–641. doi: 10.1016/j.ijmedinf.2005.01.004.
  • 4. Agarwal R, Bleshman MH, Langlotz CP. Comparison of Two Methods to Transmit Clinical History Information From Referring Providers to Radiologists. Journal of the American College of Radiology. 2009;6(11):795–799. doi: 10.1016/j.jacr.2009.06.012.
  • 5. Alkasab TK, Alkasab JR, Abujudeh HH. Effects of a Computerized Provider Order Entry System on Clinical Histories Provided in Emergency Department Radiology Requisitions. Journal of the American College of Radiology. 2009;6(3):194–200. doi: 10.1016/j.jacr.2008.11.013.
  • 6. Stavem K, Foss T, Botnmark O, Andersen OK, Erikssen J. Inter-observer agreement in audit of quality of radiology requests and reports. Clinical Radiology. 2004;59(11):1018–1024. doi: 10.1016/j.crad.2004.04.002.
  • 7. Warrens M. Cohen’s linearly weighted kappa is a weighted average. Adv Data Anal Classif. 2012;6(1):67–79.
  • 8. Filling the Void: 2013 Physician Outlook & Practice Trends. Jackson Healthcare; 2013.
  • 9. Hospitals Lag in Computerized Physician Order Entry. Information Week. 2011. http://www.informationweek.com/healthcare/clinical-information-systems/hospitals-lag-incomputerized-physician-order-entry/d/d-id/1099582? Accessed April 24, 2014.
  • 10. HIMSS Analytics. https://www.himssanalytics.org/home/index.aspx. Accessed April 24, 2014.
  • 11. Buntin MB, Burke MF, Hoaglin MC, Blumenthal D. The Benefits of Health Information Technology: A Review of the Recent Literature Shows Predominantly Positive Results. Health Affairs. 2011;30(3):464–471. doi: 10.1377/hlthaff.2011.0178.
  • 12. Bates DW. Invited commentary: The road to implementation of the electronic health record. Proc Bayl Univ Med Cent. 2006;19(4):311–312. doi: 10.1080/08998280.2006.11928189.
  • 13. McGinn T, Wyer PC, Newman TB, Keitz S, Leipzig R, et al.; Evidence-Based Medicine Teaching Tips Working Group. Tips for learners of evidence-based medicine: 3. Measures of observer variability (kappa statistic). Canadian Medical Association Journal. 2004;171(11):1369–1373. doi: 10.1503/cmaj.1031981.
  • 14. Ashton CM, Kuykendall DH, Johnson ML, Wray NP. An Empirical Assessment of the Validity of Explicit and Implicit Process-of-Care Criteria for Quality Assessment. Medical Care. 1999;37(8):798–808. doi: 10.1097/00005650-199908000-00009.
  • 15. Jones SS, Adams JL, Schneider EC, Ringel JS, McGlynn EA. Electronic health record adoption and quality improvement in US hospitals. The American Journal of Managed Care. 2010;16(12 Suppl HIT):SP64–SP71. PMID: 21314225.
  • 16. Romano MJ, Stafford RS. Electronic health records and clinical decision support systems: Impact on national ambulatory care quality. Archives of Internal Medicine. 2011;171(10):897–903. doi: 10.1001/archinternmed.2010.527.
  • 17. Black AD, Car J, Pagliari C, et al. The Impact of eHealth on the Quality and Safety of Health Care: A Systematic Overview. PLoS Med. 2011;8(1):e1000387. doi: 10.1371/journal.pmed.1000387.
  • 18. Jones SS, Heaton PS, Rudin RS, Schneider EC. Unraveling the IT Productivity Paradox — Lessons for Health Care. New England Journal of Medicine. 2012;366(24):2243–2245. doi: 10.1056/NEJMp1204980.
  • 19. Hillestad R, Bigelow J, Bower A, et al. Can Electronic Medical Record Systems Transform Health Care? Potential Health Benefits, Savings, And Costs. Health Affairs. 2005;24(5):1103–1117. doi: 10.1377/hlthaff.24.5.1103.
