Abstract
Clinical decision support systems (CDSSs) assist clinicians with patient diagnosis and treatment. However, inadequate attention has been paid to the process of selecting and buying such systems. The diversity of CDSSs, coupled with research obstacles, marketplace limitations, and legal impediments, has thwarted comparative outcome studies and reduced the availability of reliable information and advice for purchasers. We review these limitations and recommend comparative studies conducted in phases and focused on limited outcomes of safety, efficacy, and implementation in varied clinical settings. Additionally, we recommend the increased availability of guidance tools to assist purchasers with evidence-based purchases. Transparency is also necessary, both in purchasers’ reporting of system defects and in vendors’ disclosure of marketing conflicts of interest, to support methodologically sound studies. Taken together, these measures can foster the evolution of evidence-based tools that, in turn, will enable and empower system purchasers to make wise choices and improve the care of patients.
Keywords: clinical decision support systems, comparative study, medical ethics, medical economics, marketing
INTRODUCTION
Clinical decision support systems (CDSSs) aid clinicians with decision-making tasks, including patient diagnosis and treatment.1 They run either on or off site via personal computers, the Internet, or handheld devices, or as components of electronic health record (EHR) systems. As components of health information technology (HIT),2 certain CDSSs provide reference material, drug interaction alerts (DIAs), medical calculators, or clinical diagnoses. Others create preventive screening reminders, deliver guidelines for treatment options, or improve communication and recordkeeping.3 Table 1 itemizes different CDSS capabilities.
Table 1:
Summary of Major CDSS Capabilities
| Purpose of CDSS | Examples of Functions |
|---|---|
| Institutional efficiency | Order sets that organize physician directions and develop individualized stay and treatment plans |
| Healthcare costs | Duplicate testing and drug availability notifications |
| Preventive carea | Screening, immunization, and disease management suggestions |
| Diagnosisa | Lists of ranked differential diagnoses |
| Treatment plansa | Treatment guidelines, drug dosing recommendations, and DIAs |
| Reference | Searchable clinical information catalogues |
Source: Berner3.
aThese are the foci of this article.
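To make concrete what a treatment-plan function from Table 1 does in practice, the sketch below illustrates a rule-based drug interaction alert (DIA) check of the kind such systems perform. It is a minimal, hypothetical illustration: the interaction table, severity labels, and function names are invented for this example and are not drawn from any commercial CDSS or from the studies cited in this article.

```python
# Hypothetical illustration of a rule-based drug interaction alert (DIA) check.
# The interaction table, severity levels, and function are invented for this
# sketch and do not reflect any particular commercial CDSS.

# Known interacting drug pairs (normalized names) mapped to a severity label.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "major",          # additive bleeding risk
    frozenset({"simvastatin", "clarithromycin"}): "major",
    frozenset({"lisinopril", "spironolactone"}): "moderate",
}

def drug_interaction_alerts(active_meds: list[str], new_order: str) -> list[str]:
    """Return alert messages for interactions between a new order and active meds."""
    alerts = []
    for med in active_meds:
        severity = INTERACTIONS.get(frozenset({med.lower(), new_order.lower()}))
        if severity is not None:
            alerts.append(f"{severity.upper()} interaction: {new_order} + {med}")
    return alerts

if __name__ == "__main__":
    # A clinician orders aspirin for a patient already on warfarin and lisinopril.
    print(drug_interaction_alerts(["warfarin", "lisinopril"], "aspirin"))
    # -> ['MAJOR interaction: aspirin + warfarin']
```

In a deployed system, the lookup would draw on a curated, regularly updated drug knowledge base rather than a hard-coded table, which is one reason commercial programs with ostensibly similar purposes can behave quite differently.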
Programs that emphasize clinical treatment and outcomes (i.e., preventive care, diagnosis, and treatment plan programs) rather than providing reference material or assisting with institutional efficiency will be designated as preventive, diagnostic, and treatment (PDT) CDSSs. With a large selection of products, prospective users are faced with a significant challenge when deciding among PDT programs for purchase. Although extensive research has compared the safety and efficacy of PDT-CDSSs and controls using no decision support software, there remains a dearth of comparative evaluation among such CDSSs for outcomes, including morbidity, length of hospital stay, and adverse events.4 Such comparisons would help the clinician-purchaser make an informed, evidence-based purchase of programs that ideally “deliver the right information, to the right person, in the right format, through the right channel, at the right point in workflow.”5 Although past normative analyses have addressed regulation of CDSSs,6 alert fatigue,7 and drawbacks to increased liability for users,8 insufficient attention has been paid to the consumer decision process behind selecting a PDT-CDSS. We will examine decision-making in purchasing a PDT-CDSS; discuss research, marketplace, and legal limitations restricting the body of knowledge for purchasers; and provide recommendations for more realistic and meaningful comparative outcome studies.
PDT-CDSS Purchasing Decision-Making
Recognizing the potential benefits and pitfalls of such programs, purchasers deciding among PDT-CDSSs begin with the first and perhaps most basic question: “What do we expect from our software?” Thus, purchasers must evaluate their needs and expectations for functional capabilities.9 Understandably, committing to a single program, which often must be adopted into a larger, preexisting information workflow,10 requires considerable capital and time, and represents a significant long-term investment. As such, making a proper decision requires various individual and institutional considerations to be defined and weighed. Certain academic sources9,11–14 explicitly detail some of these considerations. Several organizations, including the Healthcare Information and Management Systems Society, the Leapfrog Group, Health Level Seven International, the ECRI Institute, and InformationWeek HealthCare, provide online guidelines to help individuals and institutions define their software needs and expectations when making purchases.15–19 Although such resources for purchaser education remain available, their often-general foci and missions may not adequately account for very real differences among specific systems.
Another important question for consumers is “What can each program do?” CDSSs do not necessarily provide the same capabilities,20,21 and the same data may be used for different applications.9 Several authors have assessed the leading CDSSs for technical and performance capabilities.22–26 Sittig et al.27 likewise assessed the clinical decision capacities of commercial EHR programs. Although this limited literature can provide some evidence-based information regarding program capabilities for potential consumers, it does not adequately meet the needs of an informed consumer. As Roshanov and colleagues28 noted, important concerns for clinician consumers, such as “cost, user satisfaction, system interface and feature sets, unique design and deployment characteristics, and effects on user workflow,” were not frequently studied.
Still, the basic usability of a program may be a large concern for purchasers. A PDT-CDSS may prove difficult to learn and use, or may engender user input or system errors. It may create excessive alert notifications and provide unclear or periphrastic directions. In some cases, within institutions, disagreements may arise over dosage limits, order sets, and even alert language.29 Many of these concerns ultimately center on the prospect of extraneous and therefore unproductive effort that delays treatment, making some purchasers wary of PDT-CDSSs.
Program modifiability may also be a concern.30 Clinicians and hospitals must determine whether to opt for a commercially produced system or build a customized one, how to smoothly implement their choice, and which metrics should be used to assess successful implementation.3,31 Users may wish to modify a commercial program’s settings according to personal or institutional preferences, but this may be difficult or time-consuming. This typically (though not exclusively) applies to medication programs32 that provide DIAs, specify drug contraindications for allergies, and track formulary options.30
Purchasers should also care about the safety and efficacy of a PDT-CDSS. Metzger et al.33 found that only 44% of harmful medication orders entered by physicians were later detected by CDSSs in different hospitals, suggesting that systems might not adequately detect errors. In certain cases, however, missed detection stems from entry of incorrect patient values, including age and weight.12 Other studies suggest that PDT-CDSS-mediated harm to patients often stems from implementation problems rather than intrinsic system flaws.34–36
Purchasers particularly want to weigh one CDSS against others. Despite some examples of comparative studies, there is no comprehensive body of evidence. Data from such studies are essential because purchasers remain wary of the potential for harm to patients and potential liability, even when acting in good faith.10 We have found few outcome-oriented studies that comprehensively evaluate CDSSs. For instance, some small yet ambitious studies explore the effect of program usage on clinician decision-making.37 Relatively few studies of CDSSs are randomized controlled trials, and most focus on the effect of the CDSS on decision making rather than on outcomes.3 The literature on the safety and efficacy of a single program versus the null condition (i.e., having no program) is extensive.4,38–50 Other studies review multiple programs against the null condition.29,51 Few studies,52 however, compare several programs against one another. In an analysis by Berner et al.23, a panel of expert clinicians compared the accuracy of four programs in producing diagnoses or differentials for cases based on actual patients. Bright et al.4 systematically reviewed 148 randomized trials of CDSSs to determine how often studies focused on the effect of CDSSs on the healthcare process, workflow, and cost. They found that 76% of studies measured against the null condition instead of a specific comparator, 86% evaluated process of care, and only 20% evaluated clinical outcomes, with even fewer assessing adverse outcomes or unintended events.
It is not for lack of interest in studying outcomes that adequate, useful data are lacking; rather, numerous barriers hinder comparative outcome research.
Purchaser Barriers
Obstacles to Comparative CDSS Studies
It is important to understand that comparative CDSS outcome studies must contend with systems that address a disparate variety of medical conditions and clinical tasks, as well as with several related obstacles:
a. Many systems focus on single medical conditions, and studies with large sample sizes or across several sites are rare.
b. Programs may grow outdated and thus be updated or phased out, and purchasers may modify the settings of programs to suit personal needs.
c. Inpatient and larger academic settings are most frequently studied, making findings less translatable to outpatient or smaller private settings.
The designs of PDT-CDSS studies limit proper evaluation and comparison of programs, even for limited outcomes. Because no two programs with generally similar purposes have identical uses, and the capabilities of a single program may be emphasized differently in practice by different clinicians, attempting to run a controlled trial, even a randomized controlled trial for limited outcomes, may control for certain variables but will never allow fully sufficient comparison. Many comparative studies also investigate usage for a single medical condition, utilize small sample sizes,4 and apply unevenly defined markers of safety and efficacy, which reduces generalizability to other clinical environments that use different parameters.9 This frequent choice of narrow study designs is partially understandable: measuring outcomes for programs intended to address a large array of conditions can prove complicated, slow, and costly.21,53
PDT-CDSSs may become outdated. Future systems will evolve, change, or improve, complicating both prospective and retrospective studies of usage patterns. Whether incorporated into an EHR or run as standalone systems, PDT-CDSSs are complex programs, and any change will confound attempts to compare them. For this reason, a well-formulated past study might not apply to more recent systems.
The fusion of EHR with CDSS has provided an additional challenge in study design. Many vendors’ EHRs and computerized provider-order entry programs possess CDSS units that users must configure for use in clinical decision support, and certain EHRs are modified by purchasers. Such individual modifications complicate comparative studies because differences in clinical vocabulary, representation of standard laboratory values, outcome variables, and pharmaceutical formularies thwart comparison among individual systems or even sites with the same system.3
Finally, even when well designed, comparative PDT-CDSS studies are limited by their choice of clinical setting. For instance, many studies tend to focus on academic settings with well-established HIT personnel.54 Smaller, private practices may lack staff with CDSS experience. Thus, the demonstration of a system’s safety or efficacy in a large setting might not be generalizable to smaller practices. Notably, CDSS studies also tend to focus on implementation in inpatient settings.55 The typical conditions encountered in these settings tend to differ from those in outpatient clinics. Although some studies have described CDSS implementation in multisite, nonacademic locations,4 this is the exception rather than the norm.
Barriers to CDSS Performance Transparency
Challenges related to transparency might explain the difficulty in conducting comparative studies:
a. Although the US government publishes online reports of CDSS adverse outcomes, submissions remain voluntary and rare.
b. Vendors may offer remuneration to previous purchasers for successfully recommending a new purchaser to their product.
c. Other vendors, though not always legally liable for damages, may contractually bar purchasers from publicly disclosing adverse CDSS outcomes.
It is difficult to identify the adverse outcomes of PDT-CDSSs. The US Food and Drug Administration maintains an online database for adverse outcomes of medical devices, including HITs and CDSSs: the Manufacturer and User Facility Device Experience (MAUDE). The MAUDE website provides voluntary reports from 1993 onward, but specifies that the data should not be used in comparative studies or to represent the frequency of adverse outcomes.56 Of the nearly 900 000 MAUDE reports from January 2008 to July 2010, only 0.1% involved an HIT incident. Eleven percent of the 436 relevant HIT reports related to patient harm. Only 1% involved a patient death attributable to HIT, but HIT includes more than just CDSSs.57 MAUDE reporting remains voluntary; notably, system vendors must opt to include their CDSS on MAUDE,58 which limits reporting for some programs and leads to underreporting of the actual occurrence of negative incidents. Certain research indicates that incident reports do not provide actual frequencies of errors and adverse outcomes, and so will inevitably fail to portray a complete picture.59 Moreover, incomplete and poor informatics data may frustrate or impede health decision making, both on clinical and public health levels.60 It follows that larger collections of incident reports are needed to identify and explain errors made by these systems.61
Program vendors sometimes play a large role in disseminating product information. Even though there have been academic and government-funded studies in comparative effectiveness,62 purchasers may come to rely on more subjective and less evidence-based guidance—deferring to colleagues, personal websites, or HIT consultants. However, in some cases, it has been alleged but not publicly confirmed that certain institutions that purchase EHRs may receive referral fees if they successfully refer another institution to the vendor.6 Such a vendor–purchaser relationship would constitute a conflict of interest (COI). One way to manage or mitigate the conflict might be to disclose such agreements at the outset,63 although a strong case can be made that such payments are inherently wrongful.6
Further, it has been reported that vendors may attempt to insulate themselves from liability by insisting on contractual provisions for vendor-limited liability; that is, “hold harmless clauses” and disclaimers of warranty.64,65 Other clauses limit the disclosure of software glitches, mistakes, and design flaws to anyone but the vendor, including reports in publications; such clauses have been termed “gag clauses.”64 Although “no court has applied product liability standards to computer software”10 and vendors are encouraged to provide adequate training and warnings to purchasers, vendors, physicians, and hospitals may still be successfully sued for negligence. This concern has motivated certain vendors to provide overly inclusive DIAs10 and to prevent system users from modifying urgency levels attached to different notifications,7 although litigation has not yet hinged on the act of overriding these alerts.10 One consequence of such efforts to minimize liability is that consumer interest groups, government agencies, academia, and even patients may be blocked from access to the information needed to support an informed opinion. Ultimately, potential PDT-CDSS customers have very limited access to any reliable “consumer reports.”6
Recommendations
Recommendations for Guidance Tools
CDSSs have become fixtures in healthcare. Clinicians and institutions must make well-informed decisions before purchasing a CDSS. Government, nonprofit, and private organizations are encouraged to provide even more guidance tools for potential CDSS purchasers. The Agency for Healthcare Research and Quality’s Health IT Evaluation Toolkit, for example, is meant to counsel nonacademic HIT purchasers, but it also advises other users on developing thorough plans for appraising purchases by various measures, each chosen for individual needs and expectations. Criteria include clinical outcomes, clinical processes, provider adoption and attitudes, patient adoption, knowledge, and attitudes, workflow impact, and financial impact.66
Recommendations for Comparative Studies
It is no small challenge to call for comprehensive studies when measuring limited outcomes has already proven difficult (Table 2). Nevertheless, recognition of these limitations should not preclude efforts to conduct further comparative evaluations. Well-informed PDT-CDSS consumers require more and better comparative studies of system safety and efficacy, which should feature varied study designs and clinical settings. At the least, a larger number of independently funded studies focused on limited outcomes are needed, given the lack of financial incentives for software and system comparisons. This is similar to the “small ball” mentality in HIT adoption, based on norms of small ball baseball, in which “narrower studies [are] conducted over the life-cycle of the project,” rather than “randomized experiments conducted only at the project’s conclusion (‘powerball’ studies).”37 Such studies would be phased in, rather than conducted simultaneously. Phased studies can analyze institutional readiness, normative and quantitative workflow efficiencies, usability, and post-discharge and follow-up audits (e.g., for patient readmissions). This approach allows vendors and users to determine strengths and weaknesses at the end of each phase and to inform and guide improvements in subsequent phases.67 If possible, comparative outcome studies should include outpatient and nonacademic settings, which are often excluded, and larger, multisite sample sizes.
Table 2:
Limitations to Performing Comparative Studies
| Limitation |
|---|
| CDSSs often not performing the same tasks |
| Very narrow study designs (e.g., single conditions) |
| Inpatient and larger academic settings more frequently studied |
| Programs growing outdated |
| CDSSs modified by individual purchasers |
Recommendations for Transparency
Finally, elimination of barriers to unbiased disclosure is needed to allow for meaningful studies (Table 3). The voluntary nature of submitting adverse CDSS events to government agencies remains tied to larger debates over government regulation of EHRs and CDSSs.6
Table 3:
Transparency Barriers to Disclosing Information
| Barrier |
|---|
| Incomplete and often voluntary government reports of adverse events |
| Financial remuneration for purchasers successfully recommending others |
| “Hold harmless” clauses in contracts providing vendors with limited liability |
| “Gag clauses” preventing public disclosure of CDSS incidents |
Because a previous purchaser may receive remuneration for successfully recommending a new purchaser to a system, proper COI disclosure must be required to ensure that purchasers make informed, evidence-based purchasing decisions.
Moreover, although system developers might be concerned about liability for adverse events, this concern should never preclude, discourage, or impede public disclosure, whether to consumer websites or government databases. So-called gag clauses in purchaser-vendor contracts are illicit and should never be condoned. Disclosure of each and every trivial defect may be too cumbersome for companies and is not necessary; however, adequate disclosure made in good faith should be encouraged, and standards should be developed to support and guide it. Such a requirement and corresponding guidance will foster improvements in current technology, advance public awareness of patient safety issues, and increase consumer confidence.
Hurdles to transparency must be overcome to allow for the kind of comparative effectiveness studies required to improve system performance and therefore patient care.
CONCLUSION
An appropriate first step in broadening the availability of advice for CDSS purchases may be the provision of more guidance tools by government, nonprofit, and private organizations. Even then, numerous methodological barriers remain to conducting comparative outcome studies of CDSSs used for prevention, diagnosis, and treatment. Supporting such studies will provide system purchasers with more useful knowledge and lead to improvements in PDT-CDSSs. Questions such as “How is one PDT-CDSS better than another?” and “How can this information be used to improve patient outcomes?” cannot be answered in the absence of such studies. Reliable data and information for conducting comparative studies require proper COI disclosure from vendors who compensate purchasers for referring others. CDSS users must also be allowed to disclose program flaws to government databases and consumer websites. Together, providing more guidance tools, supporting comparative studies, and removing barriers to unbiased disclosure can foster the evolution of evidence-based tools that, in turn, will enable and empower system purchasers to make better decisions and improve the care of patients.
Acknowledgments
We would like to acknowledge Dr. Eta Berner, Director of the Center for Health Informatics for Patient Safety/Quality in the School of Health Professions at the University of Alabama at Birmingham, for critiquing and commenting on an earlier version of this article.
CONTRIBUTORS
Mr. Dhiman conceived the concept of this submission, performed a majority of the literature search, and drafted and revised the manuscript. He was actively engaged in review, drafting, and final approval of the manuscript. He is accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. As corresponding author, he takes primary responsibility for communication with the journal during the manuscript submission, peer review, and publication process and is responsible for completing the journal’s administrative requirements. He, as guarantor, also accepts full responsibility for the work and controlled the decision to publish.
Dr. Amber performed literature searches for specific parts of each section, and helped draft and revise the manuscript. He was actively engaged in review, drafting, and final approval of the manuscript. He is accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Dr. Goodman helped conceive the concept of this submission, contributed to the literature search of ethical issues, and helped draft and revise the manuscript. He was actively engaged in review, drafting, and final approval of the manuscript. He is accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
FUNDING
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
COMPETING INTERESTS
None.
REFERENCES
- 1. Office of the National Coordinator for Health Information Technology. What is Clinical Decision Support (CDS)? Washington, DC: Office of the National Coordinator for Health Information Technology; 2013.
- 2. Healthcare Information Technology Standards Panel (HITSP). Healthcare Information Technology Standards Panel, 2009. http://www.hitsp.org. Accessed October 15, 2013.
- 3. Berner ES, ed. Clinical Decision Support Systems: State of the Art. Rockville, MD: U.S. Department of Health and Human Services, Agency for Healthcare Research and Quality; 2009:1–26.
- 4. Bright T, Wong A, Dhurjati R, et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012;157:29–43.
- 5. Sirajuddin AM, Osheroff JA, Sittig DF, et al. Implementation pearls from a new guidebook on improving medication use and outcomes with clinical decision support: effective CDS is essential for addressing healthcare performance improvement imperatives. J Healthc Inform Manag. 2009;23:38–45.
- 6. Goodman KW, Berner ES, Dente MA, et al.; AMIA Board of Directors. Challenges in ethics, safety, best practices, and oversight regarding HIT vendors, their customers, and patients: a report of an AMIA special task force. J Am Med Inform Assoc. 2011;18:77–81.
- 7. Kesselheim AS, Cresswell K, Phansalkar S, et al. Clinical decision support systems could be modified to reduce ‘alert fatigue’ while still minimizing the risk of litigation. Health Aff. 2011;30:2310–2317.
- 8. Sittig DF, Singh H. Legal, ethical, and financial dilemmas in electronic health record adoption and use. Pediatrics. 2011;127:e1042–e1047.
- 9. Miller RA. Evaluating evaluations of medical diagnostic systems. J Am Med Inform Assoc. 1996;3:429–431.
- 10. Ridgely MS, Greenberg MD. Too many alerts, too much liability: sorting through the malpractice implications of drug-drug interaction clinical decision support. St. Louis U. J. Health L. & Pol'y. 2012;5:257–296.
- 11. Graeber SM. How to select a clinical information system. Proc AMIA Annu Symp. 2001;2001:219–223.
- 12. Kuperman GJ, Bobb A, Payne TH, et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc. 2007;14(1):29–40.
- 13. Bates DW, Kuperman GJ, Wang S, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc. 2003;10:523–530.
- 14. Osheroff JA, Pifer EA, Sittig DF, et al. Clinical Decision Support Implementers’ Workbook. 2nd ed. Chicago, IL: Healthcare Information and Management Systems Society; 2004:3–66.
- 15. Leapfrog Group. Leapfrog Group; 2014 [cited July 2, 2013]. http://www.leapfroggroup.org.
- 16. Health Level Seven, Inc. Clinical Decision Support Work Group. Health Level Seven, Inc.; 2009 [cited March 20, 2013]. http://www.hl7.org.
- 17. ECRI Institute. Top 10 Health Technology Hazards for 2015: A Report from Health Devices November 2014. Plymouth Meeting, PA: ECRI Institute; 2014:7–8, 28–33 [cited January 6, 2015]. https://www.ecri.org/Resources/Whitepapers_and_reports/Top_Ten_Technology_Hazards_2015.pdf.
- 18. InformationWeek HealthCare. 10 Innovative Clinical Decision Support Programs. UBM Tech; 2011 [cited July 1, 2013]. http://www.informationweek.com/healthcare/clinical-systems/10-innovative-clinical-decision-support/232300511?pgno=1#slideshowPageTop.
- 19. Teich J, Osheroff J, Levick D, et al. Clinical Decision Support. Chicago, IL: Healthcare Information and Management Systems Society; 2014 [cited July 7, 2014]. http://www.himss.org/library/clinical-decision-support.
- 20. Wright A, Sittig DF, Ash JS, et al. Clinical decision support capabilities of commercially-available clinical information systems. J Am Med Inform Assoc. 2009;16:637–644.
- 21. Wright A, Sittig DF, Ash JS, et al. Development and evaluation of a comprehensive clinical decision support taxonomy: comparison of front-end tools in commercial and internally developed electronic health record systems. J Am Med Inform Assoc. 2011;18:232–242.
- 22. Berner ES, Webster GD, Sugerman AA, et al. Performance of four computer-based diagnostic systems. N Engl J Med. 1994;330:1792–1796.
- 23. Berner ES, Jackson JR, Algina J. Relationships among performance scores of four diagnostic decision support systems. J Am Med Inform Assoc. 1996;3:208–215.
- 24. Wright A, Bates DW, Middleton B, et al. Creating and sharing clinical decision support content with web 2.0: issues and examples. J Biomed Inform. 2009;42:334–346.
- 25. Kantor M, Wright A, Burton M, et al. Comparison of computer-based clinical decision support systems and content for diabetes mellitus. Appl Clin Inform. 2011;2:284–303.
- 26. Gardner RM. Computerized clinical decision-support in respiratory care. Respir Care. 2004;49:378–386.
- 27. Sittig DF, Wright A, Meltzer S, et al. Comparison of clinical knowledge management capabilities of commercially-available and leading internally-developed electronic health records. BMC Med Inform Decis Mak. 2011;11:13.
- 28. Roshanov PS, Misra S, Gerstein HC, et al. Computerized clinical decision support systems for chronic disease management: a decision-maker-researcher partnership systematic review. Implement Sci. 2011;6:92.
- 29. Wolters Kluwer Medispan. Overcoming Clinician Resistance to Medication Decision Support within CPOE. Wolters Kluwer Health [cited October 15, 2013]. http://www.himss.org/files/HIMSSorg/content/files/ClinicalInformatics/WoltersKluwerMediSpan_PhysicianResistance_WhitePaper_HiRes_FIN.pdf.
- 30. Kuperman GJ, Reichley RM, Bailey TC. Using commercial knowledge bases for clinical decision support: opportunities, hurdles, and recommendations. J Am Med Inform Assoc. 2006;13:369–371.
- 31. Metzger JB, Welebob E, Turisco F, et al. Effective use of medication-related decision support in CPOE. In: Patient Safety & Quality Healthcare. Marietta, GA: Lionheart Publishing, Inc.; 2008 [cited July 10, 2014]. http://www.psqh.com/sepoct08/cpoe.html.
- 32. Coleman JJ, van der Sijs H, Haefeli WE, et al. On the alert: future priorities for alerts in clinical decision support for computerized physician order entry identified by a European workshop. BMC Med Inform Decis Mak. 2013;13:111.
- 33. Metzger J, Welebob E, Bates DW, et al. Mixed results in the safety performance of computerized physician order entry. Health Aff. 2010;29:655–663.
- 34. Joint Commission. Sentinel Event Alert: Safely Implementing Health Information and Converging Technologies. Joint Commission; 2008;42:1–3 [cited October 15, 2013]. http://www.jointcommission.org/assets/1/18/SEA_42.pdf.
- 35. Del Beccaro MA, Jeffries HE, Eisenberg MA, et al. Computerized provider order entry implementation: no association with increased mortality rates in an intensive care unit. Pediatrics. 2006;118:290–295.
- 36. Sittig DF, Ash JS, Zhang J, et al. Lessons from “Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system”. Pediatrics. 2006;118:797–801.
- 37. Friedman CP. “Smallball” evaluation: a prescription for studying community-based information interventions. J Med Libr Assoc. 2005;93:S43–S48.
- 38. Purcell GP. What makes a good clinical decision support system? Br Med J. 2005;330:740–741.
- 39. Kawamoto K, Houlihan CA, Balas EA, et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. Br Med J. 2005;330:765.
- 40. Johnston ME, Langton KB, Haynes RB, et al. Effects of computer-based clinical decision support systems on clinician performance and patient outcome. A critical appraisal of research. Ann Intern Med. 1994;120:135–142.
- 41. Hunt DL, Haynes RB, Hanna SE, et al. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. J Am Med Assoc. 1998;280:1339–1346.
- 42. Love TE, Cebul RD, Einstadter D, et al. Electronic medical record-assisted design of a cluster-randomized trial to improve diabetes care and outcomes. J Gen Intern Med. 2008;23:383–391.
- 43. Meigs JB, Cagliero E, Dubey A, et al. A controlled trial of web-based diabetes disease management: the MGH diabetes primary care improvement project. Diabetes Care. 2003;26:750–757.
- 44. Kucher N, Koo S, Quiroz R, et al. Electronic alerts to prevent venous thromboembolism among hospitalized patients. N Engl J Med. 2005;352:969–977.
- 45. Rodriguez-Gonzalez A, Torres-Nino J, Mayer MA, et al. Analysis of a multilevel diagnosis decision support system and its implications: a case study. Comput Math Methods Med. 2012.
- 46. Apkon M, Mattera JA, Lin Z, et al. A randomized outpatient trial of a decision-support information technology tool. Arch Intern Med. 2005;165:2388–2394.
- 47. Ramnarayan P, Kapoor RR, Coren M, et al. Measuring the impact of diagnostic decision support on the quality of clinical decision making: development of a reliable and valid composite score. J Am Med Inform Assoc. 2003;10:563–572.
- 48. Ramnarayan P, Roberts GC, Coren M, et al. Assessment of the potential impact of a reminder system on the reduction of diagnostic errors: a quasi-experimental study. BMC Med Inform Decis Mak. 2006;6:22.
- 49. Kilsdonk E, Peute LW, Riezebos RJ, et al. From an expert-driven paper guideline to a user-centered decision support system: a usability comparison study. Artif Intell Med. 2013;9:5–13.
- 50. Trowbridge R, Weingarten S. Clinical decision support systems. In: Shojania KG, Duncan BW, McDonald KM, et al., eds. Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Evidence Report/Technology Assessment No. 43, AHRQ Publication No. 01-E058. Rockville, MD: Agency for Healthcare Research and Quality; 2001:589–594.
- 51. Garg AX, Adhikari NK, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. J Am Med Assoc. 2005;293:1223–1238.
- 52. Friedman CP, Elstein AS, Wolf FM, et al. Enhancement of clinicians’ diagnostic reasoning by computer-based consultation: a multisite study of 2 systems. J Am Med Assoc. 1999;282:1851–1856.
- 53. Kaplan B. Evaluating informatics applications—clinical decision support systems literature review. Int J Med Inform. 2001;64:15–37.
- 54. Lorenzi NM, Novak LL, Weiss JB, et al. Crossing the implementation chasm: a proposal for bold action. J Am Med Inform Assoc. 2008;15:290–296.
- 55. Dexter PR, Perkins SM, Maharry KS, et al. Inpatient computer-based standing orders vs physician reminders to increase influenza and pneumococcal vaccination rates: a randomized trial. J Am Med Assoc. 2004;292:2366–2371.
- 56. U.S. Food and Drug Administration. MAUDE—Manufacturer and User Facility Device Experience. Silver Spring, MD: U.S. Food and Drug Administration; 2013 [cited May 24, 2013]. http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/PostmarketRequirements/ReportingAdverseEvents/ucm127891.htm.
- 57. Magrabi F, Ong MS, Runciman W, et al. Using FDA reports to inform a classification for health information technology safety problems. J Am Med Inform Assoc. 2012;19:45–53.
- 58. Magrabi F, Ong MS, Runciman W, et al. Patient harm associated with healthcare information technology: an analysis of events reported to the US Food and Drug Administration. AMIA Annu Symp Proc. 2011;2011:853–857.
- 59. Runciman WB, Kluger MT, Morris RW, et al. Crisis management during anaesthesia: the development of an anaesthetic crisis management manual. Qual Saf Health Care. 2005;14:e1.
- 60. Dixon BE, Grannis SJ. Why “What data are necessary for this project?” and other basic questions are important to address in public health informatics practice and research. J Public Health Inform. 2011;3:1–40.
- 61. Holland R, Hains J, Roberts JG, et al. Symposium—The Australian incident monitoring study. Anaesth Intensive Care. 1993;21:501–505.
- 62. Office of Extramural Research. ARRA OS Recovery Act Limited Competition: Impact of Decision-Support Systems on the Dissemination and Adoption of Imaging-Related Comparative Effectiveness Findings (UC4). Bethesda, MD: U.S. Department of Health and Human Services, National Institutes of Health; 2010 [cited July 11, 2014]. http://grants.nih.gov/grants/guide/rfa-files/RFA-OD-10-012.html.
- 63. Task Force on Financial Conflicts of Interest in Clinical Research, Association of American Medical Colleges. Protecting Subjects, Preserving Trust, Promoting Progress—Policy and Guidelines for the Oversight of Individual Financial Interests in Human Subjects Research. Washington, DC: Association of American Medical Colleges; 2001.
- 64. Koppel R, Kreda D. Health care information technology vendors’ “hold harmless” clause: implications for patients and clinicians. J Am Med Assoc. 2009;301:1276–1278.
- 65. Belmont E, Waller AA. The role of information technology in reducing medical errors. J Health Law. 2003;36:615–625.
- 66. Cusack CM, Byrne CM, Hook JM, et al. Health Information Technology Evaluation Toolkit. Rockville, MD: U.S. Department of Health and Human Services, Agency for Healthcare Research and Quality; 2009:1–59.
- 67. Johnson KB, Gabb C. Playing smallball: approaches to evaluating pilot health exchange systems. J Biomed Inform. 2007;40:S21–S26.
