Abstract
Background
Clinical decision support systems can prevent knowledge-based prescription errors and improve patient outcomes. The clinical effectiveness of these systems, however, is substantially limited by poor user acceptance of presented warnings. To enhance alert acceptance it may be useful to quantify the impact of potential modulators of acceptance.
Methods
We built a logistic regression model to predict alert acceptance of drug–drug interaction (DDI) alerts in three different settings. Ten variables from the clinical and human factors literature were evaluated as potential modulators of provider alert acceptance. ORs were calculated for the impact of knowledge quality, alert display, textual information, prioritization, setting, patient age, dose-dependent toxicity, alert frequency, alert level, and required acknowledgment on acceptance of the DDI alert.
Results
50 788 DDI alerts were analyzed. Providers accepted only 1.4% of non-interruptive alerts. For interruptive alerts, user acceptance positively correlated with frequency of the alert (OR 1.30, 95% CI 1.23 to 1.38), quality of display (4.75, 3.87 to 5.84), and alert level (1.74, 1.63 to 1.86). Alert acceptance was higher in inpatients (2.63, 2.32 to 2.97) and for drugs with dose-dependent toxicity (1.13, 1.07 to 1.21). The textual information influenced the mode of reaction and providers were more likely to modify the prescription if the message contained detailed advice on how to manage the DDI.
Conclusion
We evaluated potential modulators of alert acceptance by assessing content and human factors issues, and quantified the impact of a number of specific factors which influence alert acceptance. This information may help improve clinical decision support systems design.
Keywords: Visualization of data and knowledge, knowledge representations, knowledge acquisition and knowledge management, distributed systems, agents, software engineering: architecture, developing and refining EHR data standards (including image standards), data models, data exchange, controlled terminologies and vocabularies, communication, integration across care settings (inter- and intra-enterprise), knowledge bases, electronic decision support, medication errors, patient safety, decision support
Introduction
Many safety and quality benefits are expected to result from the implementation of clinical decision support systems (CDSS) in healthcare.1 The presence of decision support in an electronic prescribing platform can prevent many types of errors, including knowledge-based prescription errors.2 Yet the quantitative impact of CDSS on prescribing has lagged behind expectations in many implementations. While theoretical analyses suggest that about one in two harmful prescription errors may be prevented by an electronic prescribing system with decision support,3 implemented CDSS have had varying impact on prescribing quality4 and even less clear effect on clinical outcomes.5
Several factors can account for these gaps. First, CDSS implementation can fail. The timing of CDSS integration often overlaps with the implementation of computerized physician order entry (CPOE) systems, with more decision support added later in implementation. Implementation of CPOE usually creates profound changes in clinical workflows which can impair user commitment.6 To prevent rejection from the outset, factors for successful implementation have been identified; these suggest stepwise roll-out, failure analysis, and assessment of potential workflow changes.7 In addition, once a system is in place, user acceptance can be low. Most CDSS operate as alerting systems that warn providers against potential prescribing errors. However, in clinical practice, providers override up to 90% of presented alerts, which can substantially limit CDSS impact.8 Providers identify the low clinical relevance of warnings in particular as a major reason for alert overriding.9 Moreover, user acceptance depends on the usability of the system, that is, the way alerts are presented to the provider.10
Recently, two major categories of modulators for alert acceptance (ie, factors that influence alert acceptance) have been identified: (1) alert content (ie, quality of knowledge) and (2) alert presentation (ie, human factors). However, their quantitative impact on alert acceptance and their relationship to one another have not been empirically evaluated in ways that inform how we can improve electronic warnings and CDSS acceptance. We undertook this study in order to quantify the impact and mutual relationship of distinct modulators of providers' reactions to electronic prescription alerts. We elected to use drug–drug interaction (DDI) alerts because they are one of the most common types of medication-related decision support, and have suffered from low user acceptance in previous evaluations. We assessed the presented alert content and also the alert presentation and then quantified the impact of distinct modulators.
Methods
Definition of a scale for quality of knowledge
In general, knowledge presented in CDSS can be classified into basic and advanced knowledge.11 Advanced knowledge implies that alerts are suppressed if they are inappropriate due to certain characteristics of the patient, the drug, or the medication regimen, leaving only specific and personalized alerts to be displayed. Using the example of DDI alerts, a CDSS incorporating basic knowledge would trigger alerts whenever two interacting drugs are prescribed concurrently. In comparison, a CDSS based on advanced knowledge would suppress warnings which were clinically insignificant or even inappropriate, for example because (1) the route of administration of either one interacting drug precludes concurrent presence at the site of interaction (eg, erythromycin ophthalmic ointment and systemic cyclosporine),12 (2) the timing of the administration of either one drug precludes concurrent presence at the site of interaction (eg, antacids and doxycycline are administered with a time lag of at least 2 h),13 (3) the dosage of either one drug already accounts for the extent of a pharmacokinetic DDI,14 or (4) current laboratory values are affected by the DDI but remain within standard ranges.
Definition of a human factors scale
Phansalkar et al recently published a review outlining 11 human factors principles which can be used to guide the design and implementation of electronic prescription alerts.15 From this review we identified three human factors domains each summarizing several covariates which are quantifiable and can thus be used as measurable modulators of alert acceptance. First, we assessed the display characteristics which refer to the implementation of the alert in the workflow, for example, its physical or temporal proximity to the event against which it is warning as well as the visibility of the warning (in terms of color, shape, legibility). The content of textual information has been described by different sources.15–17 Four components were chosen: signal word (eg, ‘caution’), hazard (eg, ‘serious drug interaction between drug A and drug B’), consequences (eg, ‘may result in increased plasma concentrations of drug A, thus increasing the risk of bleeding’), and instructions (eg, ‘reduce dosage of drug A by 50% and closely monitor levels’). Prioritization refers to how the tiering of drug alerts according to the severity of the expected adverse drug reaction (ADR) is realized (table 1).
Table 1.
Parameter | Definition | Score (points) |
Display characteristics | ||
Meaningful grouping | Are different types of alert meaningfully grouped in terms of perceptual proximity (on the screen) and according to their meaning (processing proximity)?18 | 1 |
Summarizing information on the screen | Is the alert/the reaction to the alert visible close to the respective medication order? | 1 |
Proximity to alerted event (timing) | Is the alert linked with the medication order by appropriate timing? | 1 |
Proximity to recommended corrective actions | Are recommended actions immediately accessible (eg, ordering a potassium laboratory test)? | 1 |
Visibility | ||
Visibility | Is the alert clearly visible (eg, brightness)? | 1 |
Legibility | Is the alert clearly readable (eg, size, letter characteristics)? | 1 |
Color | Is the color appropriate for the level of the alert? | 1 |
Shape | Is the shape appropriate for the alert? | 1 |
Icon | Is there an appropriate icon indicating the type and severity of the alert? | 1 |
Textual information | ||
Signal word | Is there a signal word indicating the type and severity of the alert (eg, ‘caution’)? | 1 |
Hazard | Is there a short description of the hazard (eg, ‘serious drug–drug interaction’)? | 1 |
Consequences | ||
Mechanism of the DDI | Is the mechanism of the DDI outlined (eg, ‘additive cardiotoxicity’, ‘inhibition of metabolism results in increased plasma concentrations’)? | 1 |
Potential ADR | Is the potentially resulting ADR named (eg, ‘increased risk for rhabdomyolysis’)? | 1 |
Instructions | ||
What to do | Are there general instructions on what to do available (eg, ‘reduce the dose’, ‘monitor concurrent use’)? | 1 |
How to do it | Are those instructions specified (eg, ‘reduce the dose by 50%’, ‘monitor potassium level 7 days after first dose, then every 4 weeks’)? | 1 |
Prioritization | ||
Visual effects | Do different alert levels have different visual presentations (eg, by color, shape)? | 1 |
Wording | Do different alert levels have different wordings (eg, contraindication versus information)? | 1 |
Required actions | Do different alert categories have different required actions (eg, interruptive versus non-interruptive alerts)? | 1 |
Total | 18 |
Each covariate scores 1 point if it was incorporated in the respective clinical decision support system.
ADR, adverse drug reaction; DDI, drug–drug interaction.
In this approach, each covariate was allocated a value of 1 if it was incorporated in the respective CDSS. By adding up the values of the covariates, each variable was assigned a score that mapped onto one of three categories—poor, moderate, or excellent (table 2). The quality of the prioritization and the display characteristics (stratified for each alert level) was categorized by two independent reviewers and inter-rater agreement (Cohen's κ) was assessed. In case of disagreement, consensus was achieved by discussion.
Table 2.
Parameter | Poor (score) | Moderate (score) | Excellent (score) |
Display characteristics | 0–3 | 4–6 | 7–9 |
Textual information | 0–2 | 3–4 | 5–6 |
Prioritization | 0–1 | 2 | 3 |
Allocation to different categories is based on different scores (calculated as the sum of points according to table 1).
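The score-to-category mapping can be sketched as follows. This is a minimal illustration; the function name and threshold dictionary are ours, with the cut-offs taken from table 2 and the binary (present = 1 point) covariate convention from table 1.

```python
# Sketch of the covariate scoring and categorization from tables 1 and 2.
# Thresholds follow table 2; all names are illustrative, not study code.
THRESHOLDS = {
    "display":        [(0, 3, "poor"), (4, 6, "moderate"), (7, 9, "excellent")],
    "textual":        [(0, 2, "poor"), (3, 4, "moderate"), (5, 6, "excellent")],
    "prioritization": [(0, 1, "poor"), (2, 2, "moderate"), (3, 3, "excellent")],
}

def categorize(parameter: str, covariates: list) -> str:
    """Sum binary covariate points (1 if incorporated) and map the score
    to the poor/moderate/excellent category defined in table 2."""
    score = sum(1 for present in covariates if present)
    for low, high, label in THRESHOLDS[parameter]:
        if low <= score <= high:
            return label
    raise ValueError(f"score {score} out of range for {parameter}")
```

For example, a display implementing five of the nine covariates would be rated ‘moderate’.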
Other factors influencing alert acceptance
Alert acceptance has been shown to be influenced by independent factors besides knowledge quality or human factors principles. We therefore controlled the analysis for the following known factors: (1) patient setting: a comparison of studies conducted in each setting indicates that acceptance rates might be lower for outpatients19; (2) patient age: previous studies have found that alert acceptance correlated with patient age20; (3) type of substance triggering the alert, as providers might be more likely to accept alerts referring to high risk substances20; (4) frequency of the alert, because in previous studies providers were reported to be more likely to override an alert if it is repeatedly presented21; (5) alert category referring to the potential risk for the patient, because acceptance rates have been higher for high level alerts compared to low level alerts8 22; and (6) whether the alert had to be acknowledged by the provider in order to continue with the prescription.21
Previous studies have also found that alert acceptance can be influenced by the training level or the specialty of the provider issuing the prescription.8 Since this was a secondary analysis of previously collected data and neither the training level nor the provider's specialty were documented, we could not evaluate the impact of these factors on alert acceptance.
Analysis of DDI alerts
Description of study sites and data collection
We retrospectively analyzed DDI alerts issued at three different sites (one outpatient (site A) and two inpatient sites (sites B and C)), each using a locally developed system. All alerts were issued between February 1, 2004 and February 1, 2005. Data were collected from Partners HealthCare teaching hospitals and the Partners outpatient prescribing system. Data collection at each study site was approved by the Partners HealthCare System Institutional Review Board. Each alert was characterized using the above mentioned variables.
Categorization of alert response
Alerts requiring acknowledgment were categorized as accepted if the provider (1) chose to cancel the prescription of either one of the interacting drugs or (2) kept the prescription but indicated they would adjust the dosage, administer the drugs with an appropriate time lag, or monitor concurrent use as recommended. Conversely, an alert was categorized as overridden if the provider entered a reason not indicating any modification in the prescription (eg, ‘aware’, ‘patient has already tolerated combination before’, ‘no reasonable alternatives’) or if they did not specify a reason at all. The acceptance of alerts that did not require acknowledgment could not be assessed.
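The categorization logic above can be illustrated with a short sketch. The action labels and function name are hypothetical assumptions for illustration, not the actual codes used by the study systems.

```python
from typing import Optional

# Illustrative sketch of the alert-response categorization described above.
# The action strings are hypothetical labels, not the systems' actual codes.
ACCEPT_ACTIONS = {
    "cancel_drug",    # cancelled either interacting drug
    "adjust_dosage",  # kept the prescription but adjusted the dose
    "time_shift",     # administered the drugs with an appropriate time lag
    "monitor",        # monitored concurrent use as recommended
}

def categorize_response(action: Optional[str]) -> str:
    """Accepted if the provider cancelled or modified as recommended;
    otherwise overridden (eg 'aware', 'tolerated before', or no reason)."""
    return "accepted" if action in ACCEPT_ACTIONS else "overridden"
```

An override reason that indicates no modification (or a missing reason) therefore maps to ‘overridden’.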
Inclusion and exclusion criteria
Alerts for all prescriptions issued for patients >18 years in one of the study sites during the study period were included (pediatric services were not included in any setting).
Only drug–drug combinations occurring in all three sites were included. The acceptance rates of alerts were descriptively analyzed for all three sites. A descriptive comparison of all three sites regarding alert acceptance was conducted in a subset of alerts which were identically integrated in the providers' workflow in all three settings.
For multinomial analysis and quantification of modulators of alert acceptance, alerts of all three levels issued in all three sites were included.
Statistical analysis
In univariate analyses, nominal variables were analyzed using Pearson's χ2 test, and ordinal and continuous data with the Wilcoxon test, or the Kruskal–Wallis test if more than two groups were compared. All variables reaching statistical significance in univariate analysis were included in the multivariate analysis. In multivariate analyses, modulators for alert acceptance were assessed in binary and multinomial logistic regression models, with alert acceptance as the dependent variable (binary: accepted vs overridden; multinomial: accepted by canceling the prescription vs accepted by modifying the prescription vs overridden). We included 10 independent variables in the model (table 3). The analysis was clustered for sites in order to adjust for specific site effects. Interaction terms were not considered. The association between independent variables and the dependent variable was expressed as ORs with 95% CIs. The quality of discrimination was assessed via C-statistics. A p value <0.05 was considered significant. All analyses were performed with SAS for Windows, v 9.1 and 9.2 (SAS Institute).
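For a single binary modulator, the OR and its 95% CI can be computed from a 2×2 table of acceptance counts. This is a stdlib-only sketch using the standard log-OR (Woolf) standard error; the counts below are invented for illustration and do not come from the study data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table:
         a = exposed & accepted,   b = exposed & overridden,
         c = unexposed & accepted, d = unexposed & overridden.
    CI via the Woolf (log-OR) standard error."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    low = math.exp(math.log(or_) - z * se)
    high = math.exp(math.log(or_) + z * se)
    return or_, low, high

# Invented counts for illustration only:
or_, low, high = odds_ratio_ci(300, 700, 150, 850)
```

A CI excluding 1 would mark the modulator as a significant correlate of acceptance.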
Table 3.
Allocation | Parameter | Data type | Categories |
Knowledge quality | Knowledge quality | Ordinal | Inappropriate, potentially inappropriate, appropriate |
Human factor | Display of the alert | Ordinal | Poor, moderate, good |
Textual information | Ordinal | Poor, moderate, good | |
Prioritization | Ordinal | Poor, moderate, good | |
Confounder | Setting | Nominal | Inpatient, outpatient |
Patient age | Continuous | – | |
Dose-dependent toxicity | Nominal | Yes, no | |
Frequency of the alert | Nominal | Rare, repeated (ie, alert frequency per user > the 75th percentile alert frequency in the respective site) |
Level of the alert | Ordinal | Low, moderate, high risk | |
Acknowledgment required | Nominal | Yes, no |
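The ‘repeated’ flag for alert frequency can be derived as sketched below: a stdlib-only illustration in which the function name, variable names, and example counts are ours, with the 75th percentile cut-off taken from table 3.

```python
from statistics import quantiles

def flag_repeated(per_user_counts: dict) -> dict:
    """Flag users whose per-user alert frequency exceeds the site's
    75th percentile (upper quartile), per the definition in table 3."""
    counts = list(per_user_counts.values())
    q75 = quantiles(counts, n=4)[-1]  # upper quartile of alert frequency
    return {user: count > q75 for user, count in per_user_counts.items()}

# Invented example counts for one site:
flags = flag_repeated({"u1": 2, "u2": 3, "u3": 5, "u4": 20})
```

Here only the outlying user exceeds the site's upper quartile and is flagged ‘repeated’.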
Results
Classification of alerts according to the quality of knowledge
For the current work, only DDI alerts referring to prescriptions with a route of administration precluding the DDI were classified as inappropriate (N=189). The timing of the administration, the dosage, and the patient's laboratory values were not documented in the analyzed dataset, which precluded complete assessment of the appropriateness of the alert. Such alerts were therefore categorized as potentially inappropriate (N=241). All other alerts were categorized as appropriate, as their inclusion in the database rests on a well proven and documented algorithm which ensures that only clinically relevant and correct information is displayed.23
Classification of human factors variables at sites studied
The presentation of the alerts was different at each site. Whereas the electronic prescribing settings into which the DDI warnings were implemented offered an advanced design and intuitive workflow integration in the outpatient setting (site A) and in one inpatient site (site B), alert presentation seemed to be less advanced in the second inpatient setting (site C). At sites A and B, DDI alerts were tiered into three severity categories requiring different user acknowledgment with mandatory canceling of either one drug for level 1 alerts (combination of the two drugs might be life threatening), canceling or entering a reason for keeping the prescription for level 2 alerts (combination of the two drugs might be potentially serious), and no mandatory action required for level 3 alerts (information only). Conversely, at site C, all DDI alerts had to be acknowledged by canceling or entering a reason for keeping the prescription.
For each site, the quality of prioritization and the display characteristics (stratified for each level) was categorized by two independent reviewers (inter-rater agreement, Cohen's κ=0.97; disagreements resolved by discussion): site A had a ‘moderate’ alert display and an ‘excellent’ prioritization, site B had a ‘moderate’ alert display and prioritization (its prioritization differed from site A in terms of visual effects), and site C had a ‘poor’ alert display and prioritization (ie, no prioritization at all). The quality of the textual information was evaluated individually for each DDI alert by one reviewer, who checked whether the text contained a signal word, the hazard, the mechanism of the DDI, the potential ADR, what to do with respect to the DDI, and how to do it.
Overall alert acceptance
In the 1-year study period, 281 distinct drug combinations were ordered in all three settings, and there were a total of 50 788 DDI alerts (10 329 at site A, 21 678 at site B, and 18 781 at site C) for 21 910 patients (table 4). Only 0.06% of alerts were level 1 alerts (potentially life-threatening combinations), namely 10 at site A, 16 at site B, and 5 at site C. Less than one third of alerts were categorized as level 2 alerts (2335 at site A, 4591 at site B, and 6363 at site C). For these level 2 alerts, the acceptance rates between different sites differed significantly and were highest at site B and lowest at site C. In the outpatient setting, an alert was significantly more frequently accepted by canceling either one of the interacting drugs. In contrast, in the inpatient setting, providers were more likely to keep both prescriptions but to monitor the concurrent administration as recommended. Moreover, the option to keep the prescription but to adjust the dosage was significantly more often chosen in the outpatient setting and at site B compared to site C (table 5).
Table 4.
Site A | Site B | Site C | |
Total number of patients | 6436 | 7451 | 8023 |
Age (mean±SD) | 56.1±17.4 | 63.6±15.8 | 59.7±16.8 |
% Female | 63% | 48% | 55% |
DDI, drug–drug interaction.
Table 5.
Parameter | Site A, N (%) | Site B, N (%) | Site C, N (%) |
Overall | |||
Total alert number | 10329 | 21678 | 18781 |
Accepted* | 1997 (19.3%) | 3967 (18.3%) | 8763 (46.7%) |
Overridden | 8332 (80.7%) | 17711 (81.7%) | 10018 (53.3%) |
Level 2 alerts | |||
Total alert number | 2335 | 4591 | 6363 |
Accepted* | 1629 (69.8%) | 3950 (86%) | 3556 (55.9%) |
Cancel/discontinue† | 708 (43.6%) | 1084 (27.4%) | 1565 (44.0%) |
Adjust dosage‡ | 289 (17.7%) | 898 (22.7%) | 12 (0.3%) |
Monitor as recommended§ | 632 (38.7%) | 1884 (47.7%) | 1684 (47.4%) |
Time shift administration | – | 84 (2.1%) | 295 (8.3%) |
Overridden | 706 (30.2%) | 641 (14%) | 2807 (44.1%) |
Site A: outpatient; site B: inpatient with advanced CDSS; site C: inpatient with basic CDSS.
*Acceptance rates differ significantly, with site B>site A>site C (Pearson's χ2, p<0.001).
†The rate of canceling or discontinuing an order is significantly higher at site A than at site B or C (Pearson's χ2, p<0.001).
‡Rates of keeping a prescription but adjusting the dosage are significantly higher at sites A and B compared to site C (Pearson's χ2, p<0.001).
§Rates of keeping a prescription but monitoring as recommended are significantly higher at sites B and C compared to site A (Pearson's χ2, p<0.001).
CDSS, clinical decision support systems.
Modulators of alert acceptance
All 10 variables showed significant impact on alert acceptance in the univariate analyses. However, if an alert did not require acknowledgment, it was accepted in only 1.4% of cases, indicating that alerts are almost always overridden if no acknowledgment is required. Accordingly, the fact that an alert was interruptive was a very strong predictor of alert acceptance. In order to assess the influence of the other factors, we only included interruptive alerts in the multivariate analyses.
In the binary logistic regression model, the display of the alert, the setting, the alert level, the fact that the drug was a critical dose drug, and the frequency of the alerts were identified as independent correlates of providers' alert acceptance. While in the univariate analyses alert acceptance correlated with knowledge quality (9.5% accepted if alerts were inappropriate, 21% accepted if alerts were potentially inappropriate, and 29% accepted if alerts were appropriate, p<0.001), knowledge quality was not an independent correlate in the binary logistic regression, nor was textual information. All results of the binary logistic regression model are presented in table 6.
Table 6.
Parameter | OR (95% CI) | p Value |
Knowledge quality | 1.05 (0.82 to 1.35) | 0.711 |
Display of the alert | 4.75 (3.87 to 5.84) | <0.001 |
Textual information | 1.04 (0.91 to 1.19) | 0.554 |
Setting (inpatients vs outpatients) | 2.63 (2.32 to 2.97) | <0.001 |
Patient age | 0.998 (0.996 to 0.999) | 0.011 |
Dose-dependent toxicity | 1.13 (1.07 to 1.21) | <0.001 |
Frequency of the alert | 1.30 (1.23 to 1.38) | <0.001 |
Level of the alert | 1.74 (1.63 to 1.86) | <0.001 |
Prioritization was found to be a linear combination of other variables and is therefore not reported.
The quality of the textual information did not significantly correlate with the provider's decision to accept or override an alert. However, in a multinomial logistic model the quality of the text did influence the mode of reaction: with higher quality text, the prescription was more likely to be kept and modified rather than overridden (table 7).
Table 7.
Parameter | Cancel the prescription versus override the alert | Keep and modify the prescription versus override the alert | ||
OR (95% CI) | p Value | OR (95% CI) | p Value | |
Knowledge quality | 0.76 (0.40 to 1.44) | 0.3983 | 1.26 (1.06 to 1.50) | 0.0080 |
Display of the alert | 4.23 (1.78 to 10.08) | 0.0011 | 4.78 (4.63 to 4.93) | <0.0001 |
Textual information | 0.81 (0.42 to 1.58) | 0.5342 | 1.22 (1.15 to 1.30) | <0.0001 |
Setting (inpatients vs outpatients) | 1.73 (1.64 to 1.83) | <0.0001 | 3.33 (3.28 to 3.38) | <0.0001 |
Patient age | 0.99 (0.992 to 0.996) | <0.0001 | 1.00 (0.996 to 1.004) | 0.9435 |
Dose-dependent toxicity | 0.59 (0.49 to 0.70) | <0.0001 | 1.47 (1.35 to 1.59) | <0.0001 |
Frequency of the alert | 0.91 (0.68 to 1.22) | 0.5386 | 1.50 (1.31 to 1.72) | <0.0001 |
Level of the alert | 2.80 (2.66 to 2.95) | <0.0001 | 1.38 (1.28 to 1.48) | <0.0001 |
Prioritization was found to be a linear combination of other variables and is therefore not reported.
Only for the textual information were different combinations of covariates available, allowing for calculation of the influence of distinct covariates (ie, signal word, hazard, mechanism of the DDI, potential ADR, what to do regarding the DDI, how to do it). Of all information provided, only whether the textual information contained detailed information on how to react to the DDI alert reached statistical significance: if no information was given on how to react to the alert, providers were less likely to keep and modify the prescription (compared to overriding the alert) (OR 0.84; 95% CI 0.74 to 0.94).
Discussion
Clinical decision support has the potential to improve care, but a key limitation of the success of CDSS has been that they often have low user acceptance rates, which have been described for different settings and types of alerts. In this study, we quantified the impact of nine variables which appear to act as modulators of providers' alert acceptance.
Many previous studies have evaluated the impact of clinical decision support.15 24–26 Some of the factors affecting whether or not decision support is accepted appear to include workflow issues, the intrusiveness and importance of the alert, whom it was displayed to, and characteristics of the system itself. However, many studies have focused more on whether decision support was effective overall, and have not explored in detail the factors underlying these effects.
In this study, we attempted to quantify the impact of some of the major human factors issues which may affect alert acceptance by specification of three variables—the display of the alert, the textual information, and the prioritization of the alerts—each of which summarizes specific covariates. According to the number of covariates integrated in the CDSS, the corresponding human factors variable may be classified as poor, moderate, or excellent. In our model, the alert display most strongly correlated with alert acceptance, reinforcing that the presentation of the alerts at the user interface is an important determinant of alert acceptance. Hence, to optimize current and future systems, knowledge of human factors and esthetics as well as the psychological aspects of human–computer interaction should be included in the development process. The textual information did not influence the frequency of alert acceptance; however, it did affect whether the provider would actually cancel an interacting drug or continue the prescription and modify it in order to account for the DDI. We were able to assess the influence of distinct covariates and found that providers were less likely to cancel a prescription, and more likely to keep and modify it, if they received detailed instructions on how to manage the interaction.
The factor with the largest impact on alert acceptance, which we therefore could not include in the logistic regression models, is whether acknowledgment of the alert is mandatory or not, that is, whether the provider is forced to interact with the system. Of course, all the alerts in the category in which no acknowledgment was required had been put in that category because they were felt to be relatively less important than the remainder. Making interaction with the system mandatory for a specific alert will increase the likelihood that the alert will be accepted—but it also creates risks: (1) if the provider disagrees with the presented knowledge but is forced to accept warnings, overall acceptance and user satisfaction might decrease; (2) users might demand all alert functions be turned off; or (3) users may over-rely on the alert and the presented information.27 Indeed, in the current study, acceptance of inappropriate alerts was lower than that of appropriate alerts, but still one in 10 inappropriate alerts was accepted (compared to three in 10 if the alert was appropriate). This underscores the importance of providing accurate and correct knowledge, taking into account as much clinical information from the individual patient context as possible, in particular if alert acknowledgment is mandatory.
Previous studies have suggested that alerts are more frequently overridden if they are repeatedly presented. However, we found that alerts were more often accepted, especially by keeping the prescription but modifying it, if they were presented more frequently per user of the system. This may correspond to findings from psychology which indicate that we are better able to handle information we already know.
The current approach has several limitations. We assessed only one specific domain, DDIs, and other issues may be found in other domains. Moreover, the overall quality of the studied DDI knowledge base was high, and only a small number of DDI alerts were classified as inappropriate. Thus, the impact of quality of knowledge on alert acceptance might be stronger when knowledge bases with a larger proportion of poor alerts are assessed. In such a case, a more detailed assessment of the clinical impact of the alerts might be required, and the patient's individual clinical context should be considered in order to determine the appropriateness of each alert. Providers might also have altered their decisions later, after the initial interaction, based on consultation with another provider or another reference, but we could not evaluate this. We included only CDSS for adult patients and did not assess pediatric populations. We also performed studies within only one integrated delivery system, although we studied several different applications. This approach should be tested in other systems and with other vendors. Although we included a large amount of data from three different sites with different CDSS applications (even though an identical DDI knowledge base was used) and patient populations, the data were still not diverse enough to assess each variable or covariate. In particular, the variable prioritization was not considered in the logistic regression model because multicollinearity was found. On the other hand, because of the large sample size, some results are statistically significant where the differences observed are probably not clinically significant. Moreover, except for the covariates of textual information, there were not enough combinations of different specifications of the covariates to model the influence of each covariate.
In order to conclusively quantify the impact of the covariates as well as the interplay of the display characteristics and the alert prioritization, further studies with different CDSS applications must be conducted. At that point the classification of distinct covariates may also be changed from binary (present/absent) to a graded scale.
We conclude that specific modulators may affect the likelihood that decision support will be effective and hence can be useful for improving patient safety. If validated in other settings, the model we developed might help predict the acceptance of CDSS along with factors characterizing the setting, the system itself, and the presented knowledge. If healthcare is to be improved with CDSS, it will be important to have approaches for predicting whether or not specific alerts, warnings, and suggestions are likely to be successful.
Footnotes
Funding: This work was supported in part by a fellowship within the postdoctoral program of the German Academic Exchange Service (DAAD), Bonn, Germany; the Health Information Technology Center for Education and Research in Therapeutics, supported by the Agency for Healthcare Research and Quality, Rockville, Maryland; and a grant from the Agency for Healthcare Research and Quality (no. HS11169-01; ‘Improving safety by computerizing outpatient prescribing’).
Competing interests: None.
Ethics approval: Partners HealthCare System Institutional Review Board approved this study.
Provenance and peer review: Not commissioned; externally peer reviewed.
References
1. Johnston D, Pan E, Walker J. The value of CPOE in ambulatory settings. JHIM 2003;18:5–8
2. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 2003;163:1409–16
3. Bobb A, Gleason K, Husch M, et al. The epidemiology of prescribing errors: the potential impact of computerized prescriber order entry. Arch Intern Med 2004;164:785–92
4. Schedlbauer A, Prasad V, Mulvaney C, et al. What evidence supports the use of computerized alerts and prompts to improve clinicians' prescribing behavior? J Am Med Inform Assoc 2009;16:531–8
5. Garg AX, Adhikari NK, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005;293:1223–38
6. Aarts J, Doorewaard H, Berg M. Understanding implementation: the case of a computerized physician order entry system in a large Dutch university medical center. J Am Med Inform Assoc 2004;11:207–16
7. Woods DD, Cook RI. Nine steps to move forward from error. Cogn Technol Work 2002;4:137–44
8. Weingart SN, Toth M, Sands DZ, et al. Physicians' decisions to override computerized drug alerts in primary care. Arch Intern Med 2003;163:2625–31
9. van der Sijs H, Aarts J, Vulto A, et al. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006;13:138–47
10. Oren E, Shaffer ER, Guglielmo BJ. Impact of emerging technologies on medication errors and adverse drug events. Am J Health Syst Pharm 2003;60:1447–58
11. Kuperman GJ, Bobb A, Payne TH, et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc 2007;14:29–40
12. Payne TH, Nichol WP, Hoey P, et al. Characteristics and override rates of order checks in a practitioner order entry system. Proc AMIA Symp 2002:602–6
13. van der Sijs H, Lammers L, van den Tweel A, et al. Time-dependent drug-drug interaction alerts in care provider order entry: software may inhibit medication error reductions. J Am Med Inform Assoc 2009;16:864–8
14. Seidling HM, Storch CH, Bertsche T, et al. Successful strategy to improve the specificity of electronic statin-drug interaction alerts. Eur J Clin Pharmacol 2009;65:1149–57
15. Phansalkar S, Edworthy J, Hellier E, et al. A review of human factors principles for the design and implementation of medication alerts in clinical information systems. J Am Med Inform Assoc 2010;17:493–501
16. NHS Connecting for Health. NHS Common User Interface. http://www.cui.nhs.uk/Pages/NHSCommonUserInterface.aspx
17. Sweidan M, Reeve JF, Brien JA, et al. Quality of drug interaction alerts in prescribing and dispensing software. Med J Aust 2009;190:251–4
18. Wickens C, Carswell C. The proximity compatibility principle: its psychological foundation and its relevance to display design. Hum Factors 1995;37:473–94
19. Isaac T, Weissman JS, Davis RB, et al. Overrides of medication alerts in ambulatory care. Arch Intern Med 2009;169:305–11
20. Seidling HM, Schmitt SP, Bruckner T, et al. Patient-specific electronic decision support reduces prescription of excessive doses. Qual Saf Health Care 2010;19:e15
21. Taylor LK, Tamblyn R. Reasons for physician non-adherence to electronic drug alerts. Stud Health Technol Inform 2004;107:1101–5
22. Paterno MD, Maviglia SM, Gorman PN, et al. Tiering drug-drug interaction alerts by severity increases compliance rates. J Am Med Inform Assoc 2009;16:40–6
23. Shah NR, Seger AC, Seger DL, et al. Improving acceptance of computerized prescribing alerts in ambulatory care. J Am Med Inform Assoc 2006;13:5–11
24. Black AD, Car J, Pagliari C, et al. The impact of eHealth on the quality and safety of health care: a systematic overview. PLoS Med 2011;8:e1000387
25. Sirajuddin AM, Osheroff JA, Sittig DF, et al. Implementation pearls from a new guidebook on improving medication use and outcomes with clinical decision support. Effective CDS is essential for addressing healthcare performance improvement imperatives. J Healthc Inf Manag 2009;23:38–45
26. Bates DW, Kuperman GJ, Wang S, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc 2003;10:523–30
27. Ash JS, Sittig DF, Poon EG, et al. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc 2007;14:415–23