Applied Clinical Informatics. 2010 Sep 22; 1(3): 331–345. doi: 10.4338/ACI-2010-05-RA-0031

Best Practices in Clinical Decision Support

The Case of Preventive Care Reminders

Adam Wright 1,2,3, Shobha Phansalkar 1,2,3, Meryl Bloomrosen 4, Robert A Jenders 5,6, Anne M Bobb 7, John D Halamka 3,8, Gilad Kuperman 9,10, Thomas H Payne 11, Sheila Teasdale 12, Allen J Vaida 13, David W Bates 1,2,3

Abstract

Background

Evidence demonstrates that clinical decision support (CDS) is a powerful tool for improving healthcare quality and ensuring patient safety. However, implementing and maintaining effective decision support interventions presents multiple technical and organizational challenges.

Purpose

To identify best practices for CDS, using the domain of preventive care reminders as an example.

Methods

We assembled a panel of experts in CDS and held a series of facilitated online and in-person discussions. We analyzed the results of these discussions using a grounded theory method to elicit themes and best practices.

Results

Eight best practice themes were identified as important: deliver CDS in the most appropriate ways, develop effective governance structures, consider use of incentives, be aware of workflow, keep content current, monitor and evaluate impact, maintain high quality data, and consider sharing content. Key themes within each of these areas were also described.

Conclusion

Successful implementation of CDS requires consideration of both technical and socio-technical factors. The themes identified in this study provide guidance on crucial factors that need consideration when CDS is implemented across healthcare settings. These best practice themes may be useful for developers, implementers, and users of decision support.

Keywords: Clinical decision support systems, clinical governance, computerized medical record systems, hospital information systems

Background

Electronic health records (EHRs), computerized physician order entry (CPOE) and other clinical information systems have been hailed as potentially transformative for the healthcare system, with the ability to improve the quality, safety and efficiency of health care (1). They represent a critical focus of the Obama administration’s healthcare reform proposals and programs, for which the US Federal Government proposes to spend an unprecedented 48.8 billion USD on incentives, grants and research to spur their adoption (2, 3). While it is clear that, when these systems are used effectively with clinical decision support (CDS), they can powerfully and positively improve healthcare quality by reducing errors and improving patient outcomes (4-6), there is also countervailing evidence suggesting that these benefits are not always achieved (7) and that, in certain cases, such systems may cause patient harm (8, 9).

A variety of factors appear to distinguish successful clinical information system implementations from unsuccessful ones. Technology is of course a factor: systems with better functionality often have better results than those with reduced functionality, though this is not always the case. However, the technology may well not be the most important factor. Indeed, organizations implementing the same systems may have very different results (9, 10). Organizational factors (such as resistance to adoption) and financial factors (such as the substantial initial investment required) also play an important role (11).

However, one potential key differentiator may be an organization’s decision to adopt (or not adopt) CDS within its information systems. CDS has been defined in different ways (12-15) but, at its core, CDS means providing the right information at the right time to the user in order to facilitate clinical decision making. Most CDS is electronic, but paper-based systems are also possible. Indeed, many of the reported benefits of clinical information systems are actually attributable to CDS (16-20), and a recent study suggested that the benefits of implementing CPOE at one particular site were very modest but that, once CDS was turned on, the rate of medication errors was reduced considerably (21).

Although the promise of CDS is great, effective implementation of CDS (18, 22) and of clinical information systems more broadly is challenging (11, 23-31). These challenges have included the lack of quality CDS software available from vendors, difficulty in developing in-house software and maintaining the clinical knowledge bases, issues with integrating decision support into provider workflows, and governance challenges.

In 2006, recognizing both the promise and challenge of CDS, the Office of the National Coordinator for Health Information Technology (ONC) within the United States Department of Health and Human Services (DHHS) asked the American Medical Informatics Association (AMIA) to convene an expert panel to develop a roadmap for national action on CDS. In the final roadmap, the expert panel identified three pillars of successful decision support:

  • 1. the best knowledge available when needed,
  • 2. high adoption and effective use, and
  • 3. continuous improvement of knowledge and methods for the systems (17).

The roadmap describes, at a high level, a vision for increasing the dissemination and use of CDS. The roadmap has been widely cited and has, at least in part, helped shape a number of CDS initiatives, including the Morningside Initiative (32), the Clinical Decision Support Consortium (CDSC) (33) and the Guidelines into Decision Support (GLIDES) project (34).

In 2008, another panel of subject matter experts (many of whom were also part of the CDS Roadmap expert panel) worked to identify the thorniest challenges in clinical decision support (22). They identified ten challenges, which they dubbed “Grand Challenges in Clinical Decision Support”, including improving the human-computer interface and prioritizing and filtering recommendations to the user. One of the most important challenges identified by this expert panel was the need to formulate and disseminate best practices in CDS design, development and implementation. The overarching purpose of our investigation was

  • 1. to establish best practices for decision support in a real-world setting, with emphasis on preventive and ambulatory care, and
  • 2. to develop decision support “starter content” on the basis of our research that would ease CDS implementation.

In keeping with the “best practices” grand challenge, we set out to determine best practices for CDS using an expert consensus approach – the first of this project’s two aims. While some best practices have been identified in prior work at single institutions (14), we believe that best practices may differ between institutions, particularly across different settings of care (acute and ambulatory) and clinical system implementation strategies (self-developed vs. commercial software). We also sought to cover a broader segment of decision support, including governance and content development, rather than just the presentation of content. We believe that organizations hoping to adopt clinical decision support may be able to use these best practices to accelerate their progress and benefit from the lessons of sites that have gone before them.

Along with our first aim to ascertain best practices, our second aim was to develop a starter set of decision support content that new decision support implementers might use for initial implementations. We discuss preventive care reminders to exemplify these best practices and also describe the content that was vetted by the expert panel on this topic. This combination of content development tied to explicit analysis of overall best practices with regard to CDS builds on the work of others in identifying best practices and we hope it will help advance the uptake of CDS to improve the quality of clinical care.

Our goal in both phases of this project was to provide a framework of decision support best practices and a foundation for future design and evaluation of decision support interventions. It is our hope that, by gathering and synthesizing expert opinions on clinical decision support best practices, we can provide a jumping off point for future discussion of CDS best practices and encourage new areas of research in this field.

Methods

In order to ascertain these best practices, our first aim, we convened an expert panel that included individuals from a variety of settings (public and private provider organizations, academic medical centers and national associations with an interest in CDS), a variety of roles and responsibilities (information technology, quality, pharmacy and research) and a variety of backgrounds (informaticists, pharmacists and physicians). Our criteria for defining an expert were based on tenure in, and contributions to, the field of clinical decision support. These included factors such as

  • 1. number and significance of publications in the field,
  • 2. participation in national activities and standards bodies,
  • 3. extensive experience in real-world implementation of decision support, and
  • 4. thought leadership in CDS and healthcare IT.

All of our panelists were selected either because they had substantial experience with CDS in their own institutions or because they had experience with CDS across a number of institutions. This project was reviewed and approved by the Partners HealthCare Human Research Committee.

We sent invitations to a pool of nine experts, eight of whom accepted our invitation (the ninth had an unavoidable scheduling conflict). The members of the expert panel are listed in ►Table 1. The panel was moderated by two of the authors (AW, SP) and a third (MB) provided staff support. Moderators posed open-ended questions and structured polls on decision support in order to elicit initial responses and facilitate discussion. Examples of the questions and polls from Phase 1 include

Sample questions

What kinds of preventive care reminders do you have in your CIS [clinical information systems] today?

What standards, terminologies, and vocabularies, if any, should the content of these rules be mapped to in order to make it easily implementable in your environment (e.g., problems to SNOMED or ICD-9 and medications to FDB and NDF-RT)?

What strategies do you have in place to evaluate rules after they are put in place? For example, do you review override rates, override reasons, impact on quality metrics and patient satisfaction scores, etc.?

Sample polls

What settings would these reminders apply to? (outpatient primary, outpatient specialty, medical inpatient, surgical inpatient, walk-in)

Which clinicians should be alerted about these reminders? (primary care, specialist, nurse, pharmacist, appointment personnel)

Would it be appropriate to show these reminders directly to patients? (yes, no)

What tangible benefits do you think you would see upon implementation of these reminders? (higher quality of care, more satisfied patients, fewer admissions, shorter length of stay, fewer errors, decreased health insurance premiums, improved scores on external evaluation)

Table 1.

Members of the expert panel

Member – Institution
David W. Bates – Brigham and Women’s Hospital
Anne M. Bobb – Northwestern Memorial Hospital
John D. Halamka – CareGroup Healthcare System
Robert A. Jenders – University of California, Los Angeles and Cedars-Sinai Medical Center
Gilad Kuperman – New York Presbyterian Hospital
Thomas H. Payne – University of Washington
Sheila Teasdale – American Medical Association
Allen J. Vaida – Institute for Safe Medication Practices

The work of the expert panel took place in two phases. First, we invited the experts to participate in an asynchronous discussion of CDS content and best practices. This discussion took place using the EMC Documentum eRoom discussion system (EMC Corporation, Hopkinton, MA, USA) (35). ►Figure 1 shows a discussion taking place in the eRoom and ►Figure 2 shows the eRoom’s polling capability. The discussion was carried out in four phases:

Fig. 1. A discussion in the eRoom system (screenshot)

Fig. 2. A poll (screenshot)

  • Phase 1: Best practices for CDS and preventive care reminders

  • Phase 2: Content-focused discussion of preventive care reminders

  • Phase 3: Combined best practices and content-focused discussion of therapeutic duplication alerts

  • Phase 4: Combined best practices and content-focused discussion of drug-drug interaction alerts

In each of the four phases, the discussion was seeded by the moderators, and the panelists were then invited to participate using eRoom’s notification feature, which also sends an email alert to the panelists. As the discussion progressed, reminders were sent to panelists who had not yet participated, and several polls were conducted.

This paper focuses on Phases 1 and 2 of the discussion, including best practices across the spectrum of CDS (with an emphasis on preventive care reminders) as well as specific content for preventive care reminders. In a related paper, we focus on the content and lessons learned specific to medications. The Phase 1 discussion corresponds to the first aim of our project (best practices ascertainment) and Phase 2 corresponds to our second aim (development of a starter set of best practice reminders).

In order to ensure maximum participation and amass sufficient data for analysis, information was collected through

  • 1. online discussion,
  • 2. online polling, and
  • 3. in-person discussion.

This strategy of interviewing multiple experts in varied formats ensured both breadth of topics covered and overlap between collection methods, allowing us to more effectively distill and synthesize themes and best practices. The online discussion took place from April to July 2008 and culminated in an in-person meeting of the experts in Boston on July 7, 2008.

We had 80 posts on our online discussion system prior to the face-to-face meeting. We noticed a downward trend in participation across the four phases described above: the first phase (best practices for preventive care reminders) received 40 posts (including posts designed to facilitate the discussion), the second phase had 27 posts, the third 7 and the fourth only 6. We hypothesize that there are a variety of reasons for this phenomenon:

  • 1. Some participants began posting very eagerly at the start of the discussion, perhaps because the group was starting out and posting ideas to build upon, but did not continue to post as often.
  • 2. Some of the best practices may overlap between preventive care reminders (Phases 1 and 2) and medication CDS (Phases 3 and 4), so panelists may not have wanted to repeat themselves.
  • 3. Participating in and keeping up with the discussion was somewhat time-consuming.

Although the quantity of participation declined over time, the quality, in our subjective estimation, remained quite high throughout.

The product of the face-to-face meeting was a 41,000-word transcript, 190 double-spaced pages in length. Members of the panel core team (AW, SP) then analyzed the transcript and online postings to identify common themes using a modified grounded theory approach (36). In essence, the grounded theory approach is a method of conducting research with a well-defined research goal but in the absence of a specific hypothesis. Given that our research was based on expert discussion forums (online and in-person), the use of a grounded theory framework allowed us to avoid confirmation bias and leading discussion questions, and to identify themes de novo rather than force-fitting results into pre-existing models. These themes were grouped into best practices and challenges, which are presented in the remainder of this manuscript.

Results and Discussion

Our analysis of the online discussions, online polling results and the in-person meeting yielded eight best practices, which are listed below and described in detail throughout this section.

  • 1. Deliver CDS in the most appropriate ways – not just reminders
  • 2. Develop effective governance structures
  • 3. CDS most effective when aligned with other organizational motivations
  • 4. Be aware of workflow
  • 5. Keep content current
  • 6. Monitor and evaluate decision support’s clinical impact
  • 7. Maintain high-quality and complete data
  • 8. Consider sharing content

Best Practice 1: Deliver CDS in the most appropriate ways – not just reminders

A foundational and cross-cutting issue identified by the panel, and returned to frequently, related to the definition of CDS. The most canonical form of CDS is the alert or reminder: an often interruptive notification to the user that an action is required (or at least suggested) before proceeding with the order. Alerts and reminders represent a critical form of CDS, particularly for issues such as panic lab values, where interruption and immediate action are required. However, the range of CDS is much broader, and alerts and reminders are often used in situations where other forms of decision support and communication might be more effective (37). The panel felt strongly that interruptive CDS should be reserved for those decisions where interruption is most appropriate given the context.

According to one panelist, “We will move our agenda further if we stop saying ‘request for an alert’ or ‘request for a reminder’. We need to look at more: documentation, templates, etc. You can’t get to the core measures with just an alert or reminder. People come to us with a request and we help them figure out how to do it. Alerts are the last step – usually we try to do it with an order set, form, template, etc. Alerts should be used only when necessary.”

Other panelists agreed with this sentiment, proposing a variety of other methods that do not involve interruptions in the order process. Many of the panelists reported success with incorporating standing orders in their IT systems rather than relying solely on alerts and reminders. Another panelist commented, “Reminders didn’t work at all. In contrast, standing orders work very, very well.” This is consistent with much of the literature on the topic, which has shown that standing orders are, indeed, generally more effective than reminders (38) when it is possible to implement them. However, having to choose among too many standing orders can also make the system harder to use for order entry; one must strike a balance among alerts, reminders, and order sets.

Best Practice 2: Develop effective governance structures

The governance structures described ranged from very simple (a small group of people who made decisions on CDS) to very sophisticated, involving standing and specially convened committees and orderly review and approval processes. As organizational size, the number and diversity of clinical users of the system, and the complexity of decision support offerings (e.g., medications, laboratory monitoring, diagnostic procedures) increase, a more complex structure becomes necessary. Having these governance processes was considered important, but securing participation from those needed in the decision making has proved challenging at most sites. One panelist noted that, at his institution, “persuading (leading) clinicians to attend CDS planning meetings [is]... challenging.” He also pointed out, though, that acceptance of the content ultimately deployed at that site has been quite good, largely because “substantial institutional sign-off is required (i.e., there is a kinetic barrier to implementation)”. The converse can also be an issue: when clinicians are not involved, rejection of the content can occur.

Although the panelists all agreed on the importance of good governance structures, they also stressed the importance of back channels as a way to get things done in certain instances. Two panelists shared examples where a single strong voice championed decision support content either through or around the governance processes. Although this could be viewed as a circumvention of governance, in these panelists’ institutions these back channels also proved critical to getting things done.

Best Practice 3: CDS most effective when aligned with other organizational motivations

A third commonly echoed theme was incentives. Many of the panelists expressed that decision support had been most effective at their institutions when its use was linked to other (often financial) incentives. According to one panelist, “At [my site] the single most important intervention that brought about compliance was a simple one: the hospital director’s financial bonus was tied to the performance of the hospital. It was tied to the measurement that the national organization prescribed. As a consequence people got flu shots, Pneumovax, feet examined.” According to another panelist, his hospital has a similar program in place: “All senior management of [my site] get bonuses contingent on meeting quality and safety goals. We...” aim to “...eliminate all causes of preventable harm by 2012. Chiefs are paid based on infection rates.” A third panelist spoke of a similar program: “We have something similar to what [the other panelists] said. Our senior administrative goals are to be in the top 10% on quality measures in the nation.” Such incentives were not, however, universal and incentives to front-line providers were not discussed.

Throughout the meeting, the relationship between clinical decision support and clinical quality measurement and improvement activities was also emphasized. Panelists pointed out that decision support is a powerful tool for improving performance on quality measures and that, conversely, measurement is a powerful tool for evaluating and improving decision support (related to Best Practice 6).

Best Practice 4: Be aware of workflow

Many panelists also discussed the importance of being aware of workflow issues during the development of decision support. The panel encouraged implementers to consider (and minimize) the intrusiveness of alerts and to avoid interruptions unless they are clinically necessary. One panelist attributed much of the success of his organization’s outpatient decision support to the fact that “[Our information system]’s CDS capability is relatively unobtrusive, and with so low an annoyance factor people generally are supportive.”

A critical workflow issue discussed by the panel was the intended audience/user – the person to whom decision support should be provided. The classic paradigm for decision support is notification of physicians (especially primary care providers). However, the panel also considered the possibility of delivering the alerts to specialists, nurses and, when appropriate, patients themselves. Many on the panel did not generally consider it necessary to deliver preventive care reminders to pharmacists, but panel members did point out that pharmacists are natural recipients of medication-related preventive measures decision support.

Another critical element of workflow identified by the panel is the selection of optimal decision support types (Best Practice 1). Different decision support types fit into different workflows and are more appropriate for different clinical purposes. For example, ordering hemoglobin A1c tests for diabetic patients who are overdue may be more amenable to an asynchronous process (e.g., panel reports or automated letter generation), especially for patients who do not have a visit scheduled for some other reason, while drug-drug interaction checking in the context of a medication order is likely better carried out synchronously, with either alerting or tailored item display.
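To make the asynchronous pattern concrete, the following minimal sketch (in Python, using an invented patient data model; a real implementation would query the EHR) generates a panel report of diabetic patients overdue for hemoglobin A1c testing:

```python
from datetime import date, timedelta

# Invented patient records for illustration; a real system would query the EHR.
patients = [
    {"mrn": "1001", "has_diabetes": True, "last_a1c": date(2009, 11, 2)},
    {"mrn": "1002", "has_diabetes": True, "last_a1c": None},
    {"mrn": "1003", "has_diabetes": False, "last_a1c": None},
]

A1C_INTERVAL = timedelta(days=180)  # assumed six-month testing interval

def overdue_a1c(patient, today):
    """True if a diabetic patient has no hemoglobin A1c within the interval."""
    if not patient["has_diabetes"]:
        return False
    last = patient["last_a1c"]
    return last is None or (today - last) > A1C_INTERVAL

# Asynchronous panel report: run in batch (e.g. nightly), not at order entry.
for p in patients:
    if overdue_a1c(p, today=date(2010, 9, 1)):
        print(f"Patient {p['mrn']}: hemoglobin A1c overdue")
```

Because such a report runs outside any visit, its output can feed letter generation or outreach work lists rather than interrupting a clinician.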

Best Practice 5: Keep content current

A key challenge cited by many of the panelists was the issue of keeping CDS content current. When the amount of content in an organization is small, keeping it current is generally a manageable task. However, as the amount of content increases, maintaining currency becomes increasingly difficult. Many of the clinical foci of decision support change quickly: new guidelines are published, new drugs are placed on the market and new evidence becomes available. Any of these external events could, of course, necessitate a change in decision support content, as could internal requests or quality reviews. One panelist pointed out that implementing even apparently simple rules can be a significant institutional challenge and “requires a commitment to maintain the rules and individuals dedicated to this task.”

Many of the organizations had annual (or otherwise periodic) reviews where content was checked against current practice and updated as needed. Some of the organizations also described tracking tools they use to keep content up to date. This best practice is also intimately connected to Best Practice 2 – appropriate governance and oversight structures include mechanisms for prioritizing content review and ensuring that content is kept up to date.
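As an illustration of such a tracking tool, here is a minimal sketch assuming a hypothetical registry in which each rule carries a last-reviewed date and a review interval; the rule identifiers are invented:

```python
from datetime import date, timedelta

# Invented knowledge-asset registry entries.
rules = [
    {"id": "flu_vax_over_50", "last_reviewed": date(2009, 6, 1),
     "review_interval_days": 365},
    {"id": "pneumovax_high_risk", "last_reviewed": date(2010, 5, 15),
     "review_interval_days": 365},
]

def due_for_review(rule, today):
    """Flag rules whose periodic content review date has passed."""
    age = today - rule["last_reviewed"]
    return age > timedelta(days=rule["review_interval_days"])

today = date(2010, 9, 1)
for rule in rules:
    if due_for_review(rule, today):
        print(f"{rule['id']} is due for content review")
```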

Some of the organizations also reported that they outsourced some or all of their CDS content development and maintenance to an external supplier such as First DataBank (South San Francisco, CA, USA) for medications or Zynx Health (Westwood, CA, USA) for order sets. They were generally satisfied with the arrangements, although all strongly underscored the need to be able to customize the vendor content for their local needs.

Best Practice 6: Monitor and evaluate decision support’s clinical impact

All panelists pointed out the importance of monitoring and evaluating decision support interventions after they are deployed and improving them continuously with additions and/or deletions in the CDS library. One panelist said “we do initially review override rates etc. Later, we review complaints. We would like to have a mechanism for tracking these things on an on-going way but it has remained on the drawing board so far.” Two other panelists pointed out that their organizations take similar approaches, though their efforts are being expanded and improved over time.

Two panelists also pointed out the need for chart review to get a deeper understanding of how alerts are really performing in ambulatory practice sites. One of the two commented, “We usually recommend, especially for the smaller practices, that periodic prospective review of a selected number of patient charts be done by scheduling personnel or nurses in the practice who may first see the patient. This could also entail retrospective chart review. It involves taking a few reminders and having everyone focus on them for the next week and see if any were missed from a previous visit of the patient seen in the practice. This can go along with any information in the system or set up by the practice (overrides, reviewing new medications the patient is on or has received from inpatient care or specialist care...).”

Many panelists also pointed out that their review of CDS performance was tied to broader quality metrics that were being evaluated in their acute care organizations. One panelist mentioned that “as these [alerts] relate to quality measures endorsed by the hospital leadership, we track responses to these.”
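One concrete form of such monitoring is computing per-rule override rates from an alert log. The sketch below assumes a hypothetical log format (rule identifier plus the recorded user response) and does not reflect any particular panelist’s system:

```python
from collections import Counter

# Hypothetical alert log entries: (rule_id, user_response).
alert_log = [
    ("flu_vax_over_50", "accepted"),
    ("flu_vax_over_50", "overridden"),
    ("flu_vax_over_50", "overridden"),
    ("mammography", "accepted"),
]

fired = Counter(rule for rule, _ in alert_log)
overridden = Counter(rule for rule, resp in alert_log if resp == "overridden")

# Rules with persistently high override rates are candidates for review.
for rule_id, n in fired.items():
    print(f"{rule_id}: fired {n} times, override rate {overridden[rule_id] / n:.0%}")
```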

Best Practice 7: Maintain high-quality and complete data

A common refrain among the panelists was the need for available high-quality data in the underlying information system applications to enable sound decision support. This is easier said than done, however. We polled the panelists regarding the availability of various data types at each of their institutions. Medications, problem lists, laboratory results, vital signs, and demographic information were universally available. However, information relating to family history and procedure (and/or surgical) history was not widely available, even though this information is often used in CDS (39).

In addition to completeness, issues of data quality and correctness were also discussed. Most clinical decision support systems operate on a garbage-in, garbage-out basis: without accurate clinical data, they cannot function optimally. A fairly substantial body of evidence suggests that errors in clinical data are common (40, 41). Without stringent rules governing the quality of data entered into systems, the reliability of those data for triggering decision support logic accurately remains questionable.
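One way to operationalize this concern (a sketch under the assumption, not stated by the panel, that each rule declares the data elements its logic depends on) is to suppress a rule and raise a data-quality flag when required inputs are missing:

```python
# Assumed rule metadata: data elements this rule's logic depends on.
REQUIRED_ELEMENTS = {"age", "problem_list", "immunization_history"}

def evaluate_with_guard(record, rule_logic):
    """Run the rule only when its required inputs are present; otherwise
    suppress it and flag the record for data-quality follow-up."""
    missing = REQUIRED_ELEMENTS - {k for k, v in record.items() if v is not None}
    if missing:
        return {"fired": False, "data_quality_flags": sorted(missing)}
    return {"fired": rule_logic(record), "data_quality_flags": []}

# Immunization history is missing, so the rule is suppressed and flagged.
record = {"age": 67, "problem_list": ["diabetes"], "immunization_history": None}
print(evaluate_with_guard(record, lambda r: True))
```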

Another issue that was discussed was the reliability of patient-provided data. For example, at one site, there is a preventive care colonoscopy reminder for patients over 50. The reminder is dismissed if the test has been done previously; however, the panel discussed the issue (which the site had faced) of what to do if the patient simply reports the test had been done, though no confirming documentation was available. The site decided that such a response would “snooze” the reminder for a year, but that it would then be displayed again until the test performance could be substantiated (it is worth noting that this speaks to the capabilities of that particular site’s clinical information system; not all sites reported that they could achieve a similar snooze period).
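A minimal sketch of the snooze behavior just described, using invented field names and the one-year suppression window from the example:

```python
from datetime import date, timedelta

SNOOZE_PERIOD = timedelta(days=365)  # suppression window from the example

def colonoscopy_reminder_due(age, documented, patient_reported_on, today):
    """Remind for patients over 50 unless the test is documented, or the
    patient-reported completion is still within the snooze window."""
    if age <= 50 or documented:
        return False
    if patient_reported_on and (today - patient_reported_on) < SNOOZE_PERIOD:
        return False  # snoozed until the report can be substantiated
    return True

# Patient reported the test six months ago, no confirming documentation:
print(colonoscopy_reminder_due(62, False, date(2010, 3, 1), date(2010, 9, 1)))  # False
# Snooze expired and still unsubstantiated, so the reminder reappears:
print(colonoscopy_reminder_due(62, False, date(2009, 3, 1), date(2010, 9, 1)))  # True
```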

One best practice for improving data accuracy and completeness briefly discussed by the panel was interoperability: the seamless exchange of information between information systems within, and even outside of, the organization. There was hope that, if significant interoperability is eventually achieved and more complete patient data become available and shared within the total IT environment, this might substantially enable better decision support. However, it may also increase complexity, as it would entail assessing and sharing data from outside organizations that may have different IT systems.

The panel also spent time discussing the issue of value sets (coded, structured choices for data elements). Many value sets, such as clinical problems, are inherently difficult to develop, but the panelists pointed out that even apparently simple sets, such as sex, have their own challenges. The panel believed that, if standard value sets were more widely available and used, sharing decision support content (and, in fact, simply executing decision support) would be easier. The Health Information Technology Standards Panel (HITSP) is in the process of standardizing many of these value sets, and the panel thought this work would be extremely fruitful.

Best Practice 8: Consider sharing content

Another issue discussed by the panel was sharing decision support content. There was wide consensus that there will eventually be great value in sharing content across institutions and, perhaps, nationally. The panelists felt that sharing content saved a tremendous amount of time. One panelist mentioned that he had exported his organization’s content and sent it to other organizations, which reported that having the content to start from “saved us a year in committee”.

The panel also discussed knowledge representation and whether content is best shared using a standard knowledge representation formalism (such as Arden Syntax). Interestingly, there was not wide support on the panel for such an approach, for several reasons. First, panelists pointed out that there is limited commercial support for such formats, so content in them was not necessarily more readily implemented. The panelists also pointed out that the primary difficulties in developing new CDS interventions were clinical and organizational in nature rather than technical: deciding the content of a rule and getting buy-in and approval was, in their opinion, more challenging than implementing the system itself. As a result, the panelists clearly favored sharing content in human-readable forms, so that it could be more easily understood and discussed by potential implementers. There was also some interest expressed in knowledge libraries containing vendor-specific (i.e., proprietary rather than standardized) formats, and in the possibility of using CDS services (i.e., a service-oriented architecture) as a vehicle for increasing sharing and syndication of decision support content.

A starter set of clinical content

In addition to the best practices described here, the expert panel also developed a starter set of preventive care reminders based on reminders in use at Partners HealthCare. These are six reminders that the panel believes are important and that represent a good starting point for organizations beginning to develop CDS content for preventive care. The reminders were selected from existing content in the Partners HealthCare decision support system. These six specific reminders were chosen

  • 1. because they relate to common and chronic medical conditions for which preventive screenings and vaccinations are highly important and effective, and
  • 2. because they are focused on content that relates to national quality measures.

The reminders are:

  • Pneumococcal vaccination for older adults or those with high risk medical conditions

  • Influenza vaccination

  • Screening and prophylaxis (most commonly bisphosphonate therapy) for older women or women with risk factors for osteoporosis

  • Mammography

  • Cervical cancer screening

  • Screening for hyperlipidemia

These reminders consist of sets of logic-based rules that trigger alerts when certain criteria are met. For example, the influenza vaccination reminder consists of two separate rules:

  • 1. a reminder for older patients, and
  • 2. a reminder for high-risk patients.

For the “Influenza Vaccination Older Than 50” rule, an alert is triggered when the following criteria are all met:

  • 1. the patient’s age is over 50,
  • 2. the date is within flu season (Oct-Feb), and
  • 3. there is no previous vaccination on record.
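This logic is straightforward to express in code. The following sketch is a hypothetical Python rendering of the rule as described above, not the actual Partners implementation; field names are invented:

```python
from datetime import date

FLU_SEASON_MONTHS = {10, 11, 12, 1, 2}  # October through February

def flu_reminder_over_50(age, vaccinated_this_season, today):
    """Fire the reminder only when all three rule criteria are met."""
    return (age > 50
            and today.month in FLU_SEASON_MONTHS
            and not vaccinated_this_season)

# A 64-year-old, unvaccinated patient seen in November: reminder fires.
print(flu_reminder_over_50(64, False, today=date(2010, 11, 15)))  # True
# The same patient seen in June is out of season: no reminder.
print(flu_reminder_over_50(64, False, today=date(2010, 6, 15)))   # False
```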

This content is available on the CERT-HIT website at http://hit-cert.org/. It is also posted on the ClinfoWiki at http://www.clinfowiki.org, an online environment where potential users of the content can view, discuss and even improve on the content. We hope that the availability of this starter content on ClinfoWiki will encourage potential adopters by providing an established decision support foundation on which to build. The reminders provided should ideally serve as an effective framework for additional customized reminders. It is important to note that, although this content is provided in good faith, neither the expert panel, the CERT-HIT, Brigham and Women’s Hospital nor the Agency for Healthcare Research and Quality guarantees this content – it is provided as is and must be reviewed for correctness and completeness by any potential user.

For reasons discussed above, we are providing this starter content in a structured, human-readable (but non-executable) format. Our intent is not that the reminders could be run without modification at other sites, but simply that they might provide a basis for discussion and development of CDS, reducing the initial difficulty of content development in these related areas.

Limitations

This study has a number of limitations. The panel was small and drawn mostly from large institutions, most of which are fairly far along the CDS development pathway, so the results may not be generalizable to other types of institutions. Our expert panel was not selected based on systematic criteria and represents only a limited subgroup of individuals with experience in the field. The opinions expressed were those of the panelists, and different opinions might have been obtained from a different expert group. An inherent limitation of our expert-consensus approach is that, while it provides a valuable conceptual template, it does not allow for systematic review and testing of these best practices. Our research represents only one piece of the puzzle, and more evaluation of these findings will be required in the future.

Conclusions

Using a consensus approach with a broad-based, multidisciplinary panel, we have elaborated a useful set of CDS best practices and practical recommendations for initial CDS implementation at healthcare organizations. Although other such recommendations have been made, most have come from single institutions. Our research represents one step towards establishing decision support best practices. More investigation will be necessary in the future to rigorously test and refine these standards. Assessing how best to deliver and manage CDS will remain a high priority if we are to obtain the hoped-for value from these systems.

Conflict of Interest Statement

None of the authors has a conflict of interest to declare with respect to the content of this manuscript.

Human Subjects Protection

This project was reviewed and approved by the Partners HealthCare Human Research Committee.

Acknowledgments

This project was funded under AHRQ CERT grant #1U18HS016970. AHRQ was not involved in the design or execution of the project or in the decision to publish the results. We are grateful to Christine Soran and Francine Maloney, who transcribed the recording of the meeting, to Justine Pang for assistance in the preparation of this manuscript, and to Joshua Feblowitz for assistance in its revision.

References

  • 1. Hillestad R, et al. Can electronic medical record systems transform health care? Potential health benefits, savings, and costs. Health Aff (Millwood) 2005; 24(5): 1103-1117
  • 2. U.S. House of Representatives and Senate. American Recovery and Reinvestment Act of 2009. 2009 [cited 2009 May 18]; Available from: http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=111_cong_bills&docid=f:h1enr.pdf
  • 3. U.S. Department of Health and Human Services. Health Information Technology (IT) Recovery Program. 2009 [cited 2009 May 18]; Available from: http://www.hhs.gov/recovery/programs/index.html#Health
  • 4. Garg AX, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005; 293(10): 1223-1238
  • 5. Bates DW, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA 1998; 280: 1311-1316
  • 6. Kawamoto K, et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005; 330: 765
  • 7. Linder JA, et al. Electronic health record use and the quality of ambulatory care in the United States. Arch Intern Med 2007; 167: 1400-1405
  • 8. Koppel R, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005; 293: 1197-1203
  • 9. Han YY, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics 2005; 116: 1506-1512
  • 10. Del Beccaro MA, et al. Computerized provider order entry implementation: no association with increased mortality rates in an intensive care unit. Pediatrics 2006; 118: 290-295
  • 11. Ash JS, Bates DW. Factors and forces affecting EHR system adoption: report of a 2004 ACMI discussion. J Am Med Inform Assoc 2005; 12: 8-12
  • 12. Greenes RA. Clinical decision support: the road ahead. Boston, MA: Elsevier Academic Press; 2007
  • 13. Osheroff JA, et al. Improving outcomes with clinical decision support: an implementers’ guide. Chicago: HIMSS; 2005
  • 14. Osheroff JA, et al. A roadmap for national action on clinical decision support. J Am Med Inform Assoc 2007; 14(2): 141-145
  • 15. Wyatt J, Spiegelhalter D. Evaluating medical expert systems: what to test and how? Medical Informatics 1990; 15(3): 205-217
  • 16. Chaudhry B, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 2006; 144: 742-752
  • 17. Osheroff JA, et al. A roadmap for national action on clinical decision support. J Am Med Inform Assoc 2007; 14: 141-145
  • 18. Bates DW, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc 2003; 10: 523-530
  • 19. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 2003; 163: 1409-1416
  • 20. Kaushal R, et al. Return on investment for a computerized physician order entry system. J Am Med Inform Assoc 2006; 13: 261-266
  • 21. Kadmon G, et al. Computerized order entry with limited decision support to prevent prescription errors in a PICU. Pediatrics 2009; 124: 935-940
  • 22. Sittig DF, et al. Grand challenges in clinical decision support. J Biomed Inform 2008; 41: 387-392
  • 23. Ash JS, Stavri PZ, Kuperman GJ. A consensus statement on considerations for a successful CPOE implementation. J Am Med Inform Assoc 2003; 10: 229-234
  • 24. Ash J. Organizational factors that influence information technology diffusion in academic health sciences centers. J Am Med Inform Assoc 1997; 4: 102-111
  • 25. Lorenzi NM, Kouroubali A, Detmer DE, Bloomrosen M. How to successfully select and implement electronic health records (EHR) in small ambulatory practice settings. BMC Med Inform Decis Mak 2009; 9: 15
  • 26. Yoon-Flannery K, et al. A qualitative analysis of an electronic health record (EHR) implementation in an academic ambulatory setting. Inform Prim Care 2008; 16: 277-284
  • 27. Shekelle PG, Morton SC, Keeler EB. Costs and benefits of health information technology. Evid Rep Technol Assess 2006; 132: 1-71
  • 28. Brokel JM, Harrison MI. Redesigning care processes using an electronic health record: a system’s experience. Jt Comm J Qual Patient Saf 2009; 35: 82-92
  • 29. Terry AL, et al. Implementing electronic health records: key factors in primary care. Can Fam Physician 2008; 54: 730-736
  • 30. Simon SR, et al. Electronic health records: which practices have them, and how are clinicians using them? J Eval Clin Pract 2008; 14: 43-47
  • 31. Campbell EM, et al. Types of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc 2006; 13: 547-556
  • 32. AMIA. Clinical Decision Support (CDS): Morningside Initiative. [cited 2009 May 18]; Available from: http://www.amia.org/inside/initiatives/cds
  • 33. Lin JH, Haug PJ. Exploiting missing clinical data in Bayesian network modeling for predicting medical problems. J Biomed Inform 2008; 41: 1-14
  • 34. Poissant L, Tamblyn R, Huang A. Preliminary validation of an automated health problem list. AMIA Annu Symp Proc 2005: 1084
  • 35. Wright A, et al. Creating and sharing clinical decision support content with Web 2.0: issues and examples. J Biomed Inform 2009; 42: 334-346
  • 36. Glaser B, Strauss A. The Discovery of Grounded Theory. Chicago: Aldine Publishing Company; 1967
  • 37. Sittig DF, Teich JM, Osheroff JA, Singh H. Improving clinical quality indicators through electronic health records: it takes more than just a reminder. Pediatrics 2009; 124: 375-377
  • 38. Dexter PR, et al. Inpatient computer-based standing orders vs physician reminders to increase influenza and pneumococcal vaccination rates: a randomized trial. JAMA 2004; 292: 2366-2371
  • 39. Wright A, Goldberg H, Hongsermeier T, Middleton B. A description and functional taxonomy of rule-based decision support content at a large integrated delivery network. J Am Med Inform Assoc 2007; 14: 489-496
  • 40. Wagner MM, Hogan WR. The accuracy of medication data in an outpatient electronic medical record. J Am Med Inform Assoc 1996; 3: 234-244
  • 41. Szeto HC, et al. Accuracy of computerized outpatient diagnoses in a Veterans Affairs general medicine clinic. Am J Manag Care 2002; 8: 37-43
