Both researchers and policy-making organizations have identified electronic prescribing (e-prescribing) systems as a means to improve quality in healthcare delivery. However, there are documented risks involved in integrating such clinical systems into healthcare processes.1–6 These risks generally fall into two categories. First, new technologies may not accomplish what they are designed to do. Second, introduction of new technologies may lead to unintended consequences such as patient harm or misused resources. To determine whether such systems work as expected and without incurring undue risk, the people and institutions implementing e-prescribing should evaluate them in the context of pre-existing processes.
The current issue of the Journal of the American Medical Informatics Association (JAMIA) includes five articles that evaluate e-prescribing tools or related clinical workflows. These papers include a detailed prescription workflow analysis, a study of the diffusion of drug withdrawal knowledge to reference texts, and three studies that evaluate the impact of various forms of medication-related decision support, including adverse drug event monitoring. Taken together, the JAMIA papers illustrate a variety of methodological approaches to evaluation, and demonstrate some of the challenges related to evaluating e-prescribing.
The major reason to evaluate e-prescribing systems is to determine how their use improves or impairs clinical and process-related outcomes. Evaluators of such systems have a palette of study methodologies from which to choose.7 For example, they may simply observe and describe past or current prescribing conditions. Alternatively, they may introduce a prospective intervention into a clinical environment and measure the impact of the change. Investigators generally select a study design based on how well it can demonstrate associations between one or more factors and an outcome of interest. For example, one design may be preferred over another based on how convincingly it can show that an e-prescribing system reduces medication error rates. Randomized, prospective controlled trials are often considered the most definitive methodology for demonstrating such associations. By contrast, individual case reports documenting observations from one site may do little to convince skeptics that an association exists. Prospective controlled trials allow investigators to distribute subjects into two or more study groups and to control, artificially, a single factor, varying it from group to group. Investigators then measure differences among the groups and draw conclusions based on observed similarities or differences. In case reports, investigators describe in retrospect a single observation ("case") that "occurred in the real world" and then speculate about general lessons learned from the case. There also exist hybrid, or "quasi-experimental," methods for evaluating observed events.7 Such methods, which include case-control studies and time series analyses, permit investigators to apply robust statistical techniques to study the impact of naturally occurring changes or events on populations.
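To make the quasi-experimental end of this palette concrete, the sketch below shows a segmented regression (interrupted time series) analysis of entirely hypothetical monthly prescribing-error counts before and after an e-prescribing go-live. The data, variable names, and effect sizes are invented for illustration and do not come from any of the studies discussed here; the use of Python and statsmodels is simply one way such an analysis could be carried out.

```python
# Hypothetical interrupted time series (segmented regression) sketch.
# All numbers are simulated; nothing here reflects data from the studies
# discussed in this editorial.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(24)                      # 12 months pre, 12 months post
post = (months >= 12).astype(int)           # indicator for post-implementation period
time_after = np.where(post == 1, months - 12, 0)

# Simulated monthly error counts: slight secular trend, a drop at go-live,
# a post-implementation slope change, plus random noise.
errors = 50 - 0.2 * months - 8 * post - 0.5 * time_after + rng.normal(0, 2, 24)

df = pd.DataFrame({"errors": errors, "month": months,
                   "post": post, "time_after": time_after})

# Segmented regression: baseline trend (month), level change at go-live (post),
# and change in slope after go-live (time_after).
model = smf.ols("errors ~ month + post + time_after", data=df).fit()
print(model.params)
```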
Prospective controlled study designs, while ideal for demonstrating associations and causality, are challenging to implement in clinical informatics. The e-prescribing-related manuscripts in the current issue of JAMIA illustrate three important challenges to conducting such studies. Investigators should strive 1) to understand and articulate the nature of the intervention as actually implemented; 2) to define the most appropriate unit of study and analysis; and 3) to randomize subjects in a manner that takes workflow considerations into account. The validity and generalizability of informatics evaluations depend on careful attention to selecting and implementing appropriate analytical methods. This discussion uses the term "factor" to refer to any attribute of an e-prescribing system, of a workflow, or of an individual subject that might reasonably influence an observed outcome.
The first challenge in evaluating e-prescribing (and other clinical informatics) systems involves identifying a specific factor that differs among study groups and whose variation is expected to correlate with important study outcomes; in other words, determining whether measurable data can support a relationship between an isolated factor and a measurable outcome. To establish that observed differences are due solely to the factor under study, investigators must ensure that the only difference among study groups is that factor. However, in clinical settings where the intervention is an informatics (e.g., e-prescribing) system, isolating the effect of one specific factor from others in the environment can be challenging. Unmeasured systematic differences between study groups may themselves cause observed differences in outcomes. For example, in the current issue of JAMIA, McGregor and colleagues evaluated whether a decision support tool could reduce inappropriate antibiotic prescriptions in a hospital.8 In the study, an antimicrobial management team interacted with an investigational decision support system designed to alert team members when an antibiotic order was potentially inappropriate. As study subjects, antimicrobial management team members were exposed both to decision support alerts and to a change in their workflows that included time for focused chart review. While it is possible that the decision support alerts per se led to the observed reduction in antibiotic costs and total team workload, it is also possible that the workflow changes necessary to deliver the decision support contributed to these outcomes.
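A small simulation can make this kind of confounding concrete. In the hypothetical sketch below, the outcome improves because of both the alerts and the added chart-review time, yet a naive comparison of intervention and control groups bundles the two effects into a single observed difference. All rates and effect sizes are invented for illustration and are not drawn from the McGregor study.

```python
# Hypothetical sketch: a co-intervention (protected chart-review time)
# confounds the apparent effect of decision-support alerts.
import numpy as np

rng = np.random.default_rng(1)
n = 1000                      # prescriptions per group (assumed)

baseline_rate = 0.20          # assumed rate of inappropriate prescriptions
alerts_effect = 0.05          # assumed reduction attributable to the alerts
review_effect = 0.04          # assumed reduction attributable to chart review alone

# In the intervention group, alerts and chart-review time arrive together.
control = rng.random(n) < baseline_rate
intervention = rng.random(n) < (baseline_rate - alerts_effect - review_effect)

observed_difference = control.mean() - intervention.mean()
print(f"Observed absolute reduction: {observed_difference:.3f}")
# The design cannot separate the alert effect (0.05) from the chart-review
# effect (0.04); both are folded into the single observed difference.
```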
A second challenge during informatics systems evaluations is to identify the most appropriate individual or entity to serve as the "unit of study," or "study subject," that is exposed to or influenced by the factor under study. Study subjects may consist of individual patients, single healthcare providers, complete hospital wards, or entire hospitals. For example, investigators studying a new e-prescribing system's impact on error rates might calculate ward-specific error rates in a hospital where the system was present on five wards and absent on ten others; in this case, each hospital ward would comprise a single "study subject." Investigators should determine statistical power, acquire results data, and compare outcomes at the level of the study subject. In the current issue of JAMIA, the study reported by Kilbridge and colleagues compared the rates of adverse drug events reported by automated and manual systems at an academic hospital and at a community hospital.9 The authors hypothesized that having an academic affiliation would affect the hospital's adverse drug event rates. The entire hospital was subjected to the exposure under study, namely whether it was an academic teaching hospital or not; therefore, the most appropriate unit of study (i.e., study subject) would have been the entire hospital. The authors elected instead to compare per-patient adverse drug event rates. While this approach may have been valid, other hospital-wide differences between the two sites, such as admission rates, available subspecialty services, workflow variation, and various unmeasured factors, might have caused the observed differences.
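As a concrete, and entirely hypothetical, illustration of analyzing at the level of the unit of study, the sketch below aggregates simulated prescriptions to ward-level error rates and compares five wards with the e-prescribing system against ten wards without it, mirroring the example above. Ward counts, error rates, and order volumes are assumptions made for illustration only.

```python
# Hypothetical sketch: analyzing error rates at the ward level
# (the unit of study), rather than pooling individual prescriptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def ward_error_rate(base_rate, n_orders):
    """Simulate one ward's observed error rate from its own order volume."""
    errors = rng.binomial(n_orders, base_rate)
    return errors / n_orders

# Five wards with the e-prescribing system, ten wards without it (assumed rates).
eprescribing_wards = [ward_error_rate(0.04, rng.integers(400, 900)) for _ in range(5)]
paper_wards        = [ward_error_rate(0.06, rng.integers(400, 900)) for _ in range(10)]

# Each ward contributes a single observation, so the effective sample size
# (and hence statistical power) is 15 wards, not the thousands of individual
# orders written on those wards.
t_stat, p_value = stats.ttest_ind(eprescribing_wards, paper_wards, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```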
A third challenge to informatics systems investigators is to minimize the chance that subjects will cross over among study conditions after being randomized into study groups. Investigators randomly assign subjects to study groups with the goal of increasing the likelihood that each group has a uniform composition before being subjected to the study conditions. Random assignment accomplishes this goal by evenly distributing among study groups all factors that may influence the outcome under investigation. For example, random assignment in a study evaluating the impact of e-prescribing on prescribing errors would be expected to create several study groups, each with an equivalent number of subjects who are comfortable using computers, who represent various clinical roles, and whose ages fall within similar ranges. Investigators typically randomize based on the unit of study, as defined above. As Johnson and Fitzhenry describe in the current issue of JAMIA,10 processes surrounding e-prescribing workflows involve many people who have different clinical roles. The presence of complex workflows can increase the chance that an individual study subject will cross over from one study group to another, and thus be exposed to different experimental conditions. Crossing over can blunt differences in observed results among study groups. To mitigate this risk, studies of e-prescribing systems should attempt to randomize subjects by defining the unit of study based on workflow considerations. The current manuscript by Judge and colleagues provides a good example of how this can be done. In that study, three intact, self-contained long-term care facility patient units were randomly assigned to receive decision support messages during order entry, while four control units received no such messages.11 Randomizing by entire unit likely decreased the risk that individual study subjects crossed from the intervention group into a control group, or vice versa. For cases where certain care-team members work in multiple units, it may make more sense to randomize using even larger blocks, such as entire facilities.
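A minimal sketch of this kind of cluster randomization appears below, with hypothetical unit names and a 3-versus-4 split echoing the Judge study's design. Randomizing intact units keeps all prescribers on a given unit under the same study condition and thereby reduces the opportunity to cross over; the code is purely illustrative and is not the procedure used by the authors.

```python
# Hypothetical sketch: cluster randomization of intact patient-care units,
# so that every prescriber on a unit experiences only one study condition.
import random

units = ["Unit A", "Unit B", "Unit C", "Unit D", "Unit E", "Unit F", "Unit G"]

random.seed(42)              # fixed seed so the allocation is reproducible
random.shuffle(units)

# The first three shuffled units receive decision-support messages during
# order entry; the remaining four serve as controls.
intervention_units = units[:3]
control_units = units[3:]

print("Intervention:", intervention_units)
print("Control:     ", control_units)
```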
Designing and conducting prospective, comparative, controlled studies should be a goal for all informatics evaluators. However, such studies are not always feasible. Where they are not practical, investigators may turn to observational methods that can also yield reasonable conclusions. Observational studies, also called descriptive studies, chronicle existing environments and systems. The main goal of such studies is to describe in detail phenomena as they exist and evolve as the result of natural, rather than experimental, factors. Such studies allow the investigator to evaluate factors that may contribute to observable outcomes without introducing external changes for the sake of testing a hypothesis. In that sense, observational studies record "real-world" events as they unfold over time. Investigators may use observational study designs to detail workflow processes, to compare the impact of different environmental conditions among groups, and to record individuals' subjective impressions.
Observational methods allow investigators to scrutinize single or multiple cases as examples of a given phenomenon; they can also provide insight into the environment in which phenomena of interest occur. Such methodologies include case reports, case series, cross-sectional studies, workflow analyses, and qualitative evaluations such as surveys. For example, in the current issue of JAMIA, Johnson and Fitzhenry provide a series of in-depth workflow analyses outlining the processes healthcare providers follow in generating prescriptions.10 Likewise, Strayer and colleagues report on the time required for diffusion of new knowledge to various pharmacological reference texts and electronic resources.12 Both studies illustrate the rich detail that can be acquired using observational methods. For either report, a prospective controlled trial would not have been feasible: investigators could not easily manipulate complex prescribing workflows, or influence the withdrawal of a medication from the marketplace, simply to determine how information flow changed. While the lessons learned from these studies may not generalize well to other settings, and may be confounded by unmeasured factors, they nonetheless provide useful information.
Healthcare informatics evaluation studies measure how the delivery of information influences clinical outcomes. Among the many available methods and study designs, only a small number are well suited to any given research task. Investigators should select the most robust, reproducible methodology for demonstrating associations between information-system-related interventions and clinical outcomes, and must further ensure that the chosen methodology is correctly applied to data collection and analysis. Informaticians must conduct carefully thought out, scientifically objective evaluations to demonstrate that their systems neither cause harm (per "primum non nocere," often included in the modern Hippocratic Oath) nor waste resources unnecessarily.
Footnotes
This work was supported in part by a grant from the United States National Library of Medicine (2K22 LM08576-02).
References
- 1. Ash JS, Berg M, Coiera E. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. J Am Med Inform Assoc 2004;11(2):104-112.
- 2. Keeffe B, Subramanian U, Tierney WM, et al. Provider response to computer-based care suggestions for chronic heart failure. Med Care 2005;43(5):461-465.
- 3. Rosenbloom ST, Chiu KW, Byrne DW, Talbert DA, Neilson EG, Miller RA. Interventions to regulate ordering of serum magnesium levels: report of an unintended consequence of decision support. J Am Med Inform Assoc 2005;12(5):546-553.
- 4. Tierney WM, Overhage JM, Murray MD, et al. Can computer-generated evidence-based care suggestions enhance evidence-based management of asthma and chronic obstructive pulmonary disease? A randomized, controlled trial. Health Serv Res 2005;40(2):477-497.
- 5. Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics 2005;116(6):1506-1512.
- 6. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293(10):1197-1203.
- 7. Grimshaw J, Campbell M, Eccles M, Steen N. Experimental and quasi-experimental designs for evaluating guideline implementation strategies. Fam Pract 2000;17(Suppl 1):S11-S16.
- 8. McGregor JC, Weekes E, Forrest GN, et al. Impact of a computerized clinical decision support system on reducing inappropriate antimicrobial use: a randomized controlled trial. J Am Med Inform Assoc 2006;13:378-384.
- 9. Kilbridge PM, Campbell UC, Cozart HB, Mojarrad MG. Automated surveillance for adverse drug events at a community hospital and an academic medical center. J Am Med Inform Assoc 2006;13:372-377.
- 10. Johnson KB, FitzHenry F. Case report: activity diagrams for integrating electronic prescribing tools into clinical workflow. J Am Med Inform Assoc 2006;13:391-395.
- 11. Judge J, Field TS, Deflorio M, et al. Prescribers' responses to alerts during medication ordering in the long term care setting. J Am Med Inform Assoc 2006;13:385-390.
- 12. Strayer SM, Slawson DC, Shaughnessy AF. Disseminating drug prescribing information: the COX-2 inhibitor withdrawals. J Am Med Inform Assoc 2006;13:396-398.
