Journal of the American Medical Informatics Association (JAMIA). 2011 Apr 12;18(Suppl 1):i51–i61. doi: 10.1136/amiajnl-2010-000053

Development and validation of a survey instrument for assessing prescribers' perception of computerized drug–drug interaction alerts

Kai Zheng 1,2, Kathleen Fear 2, Bruce W Chaffee 3,4, Christopher R Zimmerman 3,4, Edward M Karls 5, Justin D Gatwood 3, James G Stevenson 3,4, Mark D Pearlman 6
PMCID: PMC3241157  PMID: 21486876

Abstract

Objective

To develop a theoretically informed and empirically validated survey instrument for assessing prescribers' perception of computerized drug–drug interaction (DDI) alerts.

Materials and methods

The survey is grounded in the unified theory of acceptance and use of technology and an adapted accident causation model. Development of the instrument was also informed by a review of the extant literature on prescribers' attitude toward computerized medication safety alerts and common prescriber-provided reasons for overriding. To refine and validate the survey, we conducted a two-stage empirical validation study consisting of a pretest with a panel of domain experts followed by a field test among all eligible prescribers at our institution.

Results

The resulting survey instrument contains 28 questionnaire items assessing six theoretical dimensions: performance expectancy, effort expectancy, social influence, facilitating conditions, perceived fatigue, and perceived use behavior. Satisfactory results were obtained from the field validation; however, a few potential issues were also identified. We analyzed these issues accordingly and the results led to the final survey instrument as well as usage recommendations.

Discussion

High override rates of computerized medication safety alerts have been a prevalent problem. They are usually caused by, or manifested in, issues of poor end user acceptance. However, standardized research tools for assessing and understanding end users' perception are currently lacking, which inhibits knowledge accumulation and consequently forgoes improvement opportunities. The survey instrument presented in this paper may help fill this methodological gap.

Conclusion

We developed and empirically validated a survey instrument that may be useful for future research on DDI alerts and other types of computerized medication safety alerts more generally.

Keywords: Decision support systems, clinical (L01.700.508.300.190), medical order entry systems (L01.700.508.300.408.600), drug interactions (G07.690.812.240), psychology, social (F01.829), Questionnaires (L01.280.800), collaborative technologies, personal health records and self-care systems, developing/using clinical decision support (other than diagnostic) and guideline systems, systems supporting patient-provider interaction, human-computer interaction and human-centered computing, improving healthcare workflow and process efficiency, system implementation and management issues, social/organizational study, qualitative/ethnographic field study, pharmacy, clinical decision support, outcome, quality improvement, medications, surveys, business intelligence, patient satisfaction, employee satisfaction, quality, patient safety, machine learning, knowledge bases

Introduction

Computerized prescriber order entry (CPOE) holds great promise for reducing adverse drug events through generation of clinical decision-support (CDS) advisories such as drug–drug interaction (DDI) alerts.1–8 To realize this benefit, numerous experts and professional organizations have advocated the accelerated adoption and improved use of CPOE,9–14 and consumer groups such as Leapfrog have already made implementation of CPOE, and its CDS component in particular, part of their evaluation criteria in rating hospitals' patient safety performance.15 16 Despite this great potential, recent review studies have consistently shown that use of computerized medication safety alerts provided in CPOE (or ambulatory ePrescribing systems) is only effective in preventing certain types of prescribing errors, and no strong evidence exists suggesting that their use leads to significant improvements in actual patient safety outcomes.17–27

It has been widely acknowledged that this gap is caused by, or manifested in, poor acceptance by end users, which not only diminishes the value of computerized alerts but also suggests increased cognitive burden and decreased time efficiency.28–37 For example, two recent retrospective chart review studies showed that even in informatics-advanced institutions, the override rates of medication safety alerts remained extraordinarily high at over 80%.30 35 Moreover, the majority of clinicians participating in a qualitative study expressed the view that they wanted to turn off DDI alerts in order to reduce alert overload.34 Understanding the psychological factors underlying end users' decisions to skip or reject computerized medication safety alerts is therefore vital. Such an understanding may help us, for example, fine-tune the sensitivity level of alert issuing, identify more effective means to deliver CDS alerts, and introduce tailored training and incentive strategies to improve end user acceptance and adherence.

Through this research, we developed a survey instrument for assessing and understanding prescribers' perception of computerized DDI alerts. While the questionnaire was worded specifically for DDIs, the underlying constructs were derived from general social psychology and technology acceptance theories; therefore, it can be readily adapted for use in other settings. In the Materials and Methods section, we describe these theoretical models as well as review the extant literature that informed the development of the survey. We then present the results obtained from a two-stage empirical validation study, which led to the final survey instrument and usage recommendations.

Background and significance

Computerized medication safety alerts and issues of end user acceptance

Augmenting human cognition using the computational power of machines has been an enduring topic in health informatics research, starting with a proliferation of artificial intelligence based diagnostic systems developed from the 1960s to the 1980s.38 39 However, few of these systems made their way into everyday clinical practice due to numerous barriers that were not addressable at the time, such as the lack of integration of electronic patient data, misaligned incentive structures, and perceived regimentation of automated decision-making threatening clinicians' professional autonomy.40 41

In the past 2 decades, a new chapter in CDS research and practice has opened with rapid advances in information technology, the introduction of pay-for-performance models, and new generations of technologically savvy clinicians entering the workforce.41 Progress has also been encouraged more recently by strong government initiatives (eg, the HITECH Act)13 14 and support from payers, trade organizations (eg, HIMSS), and professional societies (eg, AMIA), as well as healthcare provider institutions. Of the many areas where CDS can be applied, its efficacy will most likely be demonstrated in the provision of computerized medication safety alerts, because: (1) medication orders placed through CPOE usually exist in a computable, codified format, eliminating the need for processing unstructured narrative data; (2) a significant proportion of prevalent adverse drug events are identifiable and preventable, offering great opportunities for producing tangible performance improvements42 43; and (3) medication safety alerts are provided natively in most commercially available CPOE systems as required by the certification criteria,44 and they are often powered by well-established and continuously updated medication lexicons.

The assumption about the efficacy of computerized medication safety alerts, however, does not seem to hold strongly. Only a limited number of empirical studies have shown that use of computerized medication safety alerts leads to significantly improved patient safety outcomes,17–27 and most of these studies were conducted at a handful of informatics-advanced academic institutions.17 18 A recent systematic review, for example, analyzed the literature published from 1998 to 2007 and found that ‘the evidence-base reporting the effectiveness of CPOE to reduce prescribing errors (among hospital inpatients) is not compelling and is limited by modest study sample sizes and designs.’24 Such performance is no better than that of computerized alerts or reminders used in other patient care areas (eg, preventive medicine), and neither measures up to expectations.29 What may account for this gap?

First, the software build quality of current generation commercially sold CPOE systems seems to be highly variable45–48 and their medication knowledge bases are not necessarily best poised for generating context-appropriate and clinically useful alerts.49 For example, a recent study found that CPOE systems used in a national sample of 62 hospitals performed poorly and inconsistently on generation of computerized alerts to prevent fatalities and other serious adverse drug events,47 and another study revealed substantial variability in the CDS capabilities provided in nine prevalently used commercial clinical information systems.45 These factors may be further aggravated by the variable quality of local customization, IT infrastructure and support, and adopting institutions' competency in managing complex changes.50 Additionally, even if CPOE software and the implementation processes are optimized as much as possible, resistance by end users can still cause a CPOE project to fail or the decision-support potential to be under-fulfilled.51 As van der Sijs et al showed in a review study, CPOE-provided medication safety alerts are overridden by clinician users in 49–96% of cases.34 Clearly, heavily invested CDS technologies cannot deliver on their promises if they are not used, and the alert fatigue phenomenon that may ensue, reflective of escalated cognitive load and possible negative affect, may be responsible for certain unintended adverse consequences associated with CPOE adoption that have been extensively documented in the literature.52–54

Little is known, however, about the psychological and sociotechnical factors underlying such end user acceptance issues.33 55 In addition, most prior empirical studies were conducted based on instruments developed ad hoc, which were not rigorously validated and did not leverage theoretical advances in understanding complex human behaviors in technology acceptance. These facts inhibit learning and knowledge accumulation, and consequently forgo improvement opportunities such as creating necessary facilitating conditions through the introduction of behavioral, societal, and organizational interventions. The objective of this research was to develop a theoretically informed and empirically validated survey instrument that may help address this methodological gap.

Theoretical frameworks

A significant branch of social psychology is devoted to studying the determinants underlying people's decision to conduct (or not to conduct) a behavior. Two prevalent theories, the theory of reasoned action and the theory of planned behavior, jointly postulate that the influence of various behavioral antecedents can be substantially modeled through three mediating constructs: attitudes toward the behavior (personal beliefs), subjective norms (normative beliefs), and perceived facilitating conditions (control beliefs).56 57 This theoretical postulation makes it possible to disentangle complex human behaviors using a relatively parsimonious set of variables.58 59

Building upon the theory of reasoned action and the theory of planned behavior, numerous models have been proposed to study end users' acceptance behavior of technological innovations. These include the model of personal computer utilization,60 the theory of task technology fit,61 the theory of interpersonal behavior,62 the technology acceptance model,63 64 and more recently the unified theory of acceptance and use of technology (UTAUT) which attempts to synthesize existing work.65 Comprehensive reviews of these models and their empirical applications can be found elsewhere.66–69

The survey instrument described in this paper is grounded in UTAUT, in addition to an adapted accident causation model proposed by van der Sijs et al which accounts for common unexpected prescriber reactions to computerized medication safety alerts (hence ‘accident’).32 70 UTAUT provides us with conceptual constructs at the theoretical level, and van der Sijs et al's model informs the contextual interpretation of these constructs relevant to this research.

Materials and methods

Conceptual constructs and measurement scales

Additional detail about UTAUT can be found in Venkatesh et al.65 Briefly, the model proposes that four constructs are most influential in determining an individual's technology acceptance behavior: performance expectancy (PE), effort expectancy (EE), social influence (SI), and facilitating conditions (FC).65 PE and EE assess the expected gains and costs associated with the behavior. Social influence captures the influence received from others, which may be conveyed via direct or indirect social interaction mechanisms such as persuasion (eg, demands by supervisors),71–73 reflected appraisal (eg, behaving in a certain way in expectation of social rewards or other incentives),73 74 and peer comparison (eg, imitating the behavior of ‘similar others’ in order to maintain one's social status).73 75 76 Finally, FC evaluates the perception of conditions facilitating or impeding the conduct of the behavior, such as adequacy of knowledge and technical assistance.

In van der Sijs et al's adapted accident causation model,32 70 the authors propose that prescribers' reaction to computerized medication safety alerts, the perception of alert fatigue in particular (perceived fatigue, PF), is a result of a concatenation of system, individual, and organizational factors. These factors include latent conditions such as training, as well as error producing or impeding conditions that may be found at the (1) environment/system level (eg, sensitivity and specificity of alerts and clarity of presentation), (2) task level (eg, appropriateness of alert handling in a given task context), (3) team level (eg, impact on clinical workflow and team coordination), and (4) individual level (eg, time, trust/distrust, and motivation).32 Combining these two models, we defined key constructs and measures to be accommodated in the survey (table 1).

Table 1.

Conceptual constructs and key measures

Construct Contextual interpretation and measures
Performance expectancy (PE) Expected performance gains that can be achieved by using DDI alerts; main measures include: (1) overall perceived usefulness, (2) appropriateness of specificity, sensitivity, and severity (determinants of alerts' accuracy, relevance, and importance), (3) associated benefits such as incidental learning (ie, increased knowledge about ADE as a result of reading and responding to computerized alerts),* (4) appropriateness of the volume of alerting (a key factor contributing to ‘distrust’ and consequently decreased perceived value), and (5) utility in reducing professional risks.*
Effort expectancy (EE) Expected time and effort associated with use of DDI alerts; main measures include: (1) perceived ease of use, (2) clarity of information content, (3) extra time required, (4) effort incurred when the same alerts need to be addressed repeatedly,* and (5) workflow integration.
Social influence (SI) Perceived behavioral influence received from others; main measures include: (1) reflected appraisal (to meet supervisors' expectations), (2) peer comparison (to imitate colleagues' behavior in order to be compliant with workspace norm), and (3) perceived impact on professional image.
Facilitating conditions (FC) Perceived facilitating (or impeding) conditions; main measures include: (1) adequacy of training, (2) adequacy of clinical knowledge for interpreting and acting upon the alerts presented, (3) provision of reasoning and reference information, (4) provision of suggestions for management alternatives, and (5) availability of assistance when problems occur.
Perceived fatigue (PF) Perceived alert fatigue principally caused by receiving an excessive number of alerts.
Perceived use behavior (UB) Perceived actual use of DDI alerts; main measures include frequency of: (1) reviewing the alerts presented, (2) providing reasons for accepting or rejecting, and (3) taking actions accordingly by revising the initial prescribing decisions.
*

Not originally included in UTAUT or van der Sijs et al's model but added later based on literature review results (see the Questionnaire development section).

Professional image differs from professional risks assessed in PE, in that professional image solely reflects one's perception of how one's performance and professionalism may be judged by others (patients or clinician peers), whereas professional risks are associated with foreseeable legal and financial consequences.

ADE, adverse drug events; DDI, drug–drug interaction; UTAUT, unified theory of acceptance and use of technology.

Questionnaire development

To maximally leverage research instruments that have already been validated or applied in the field, we conducted a literature review of prior work on: (1) prescribers' opinions of or experiences with computerized medication safety alerts, (2) prescriber-provided reasons for overrides (or adherence), and (3) clinicians' attitudes toward computer-based CDS technologies in general. The results also yielded additional measures that may not have been present in van der Sijs et al's model, which is exclusively focused on unexpected prescriber behaviors.

To identify relevant work, we first searched MEDLINE, EMBASE, and PsycINFO using various search term combinations consisting of ‘alert*,’ ‘alarm,’ ‘remind*,’ ‘prompt*,’ ‘opinion*,’ ‘view*,’ and ‘attitude*.’ Then, we expanded the search by examining citations contained in the papers initially retrieved. Note that the objective of the review was to identify seminal work that might provide insights into the survey development, rather than to perform an exhaustive analysis of all studies that have been conducted in related areas. Also note that editorials and commentaries were excluded, as were clinical trials that did not include a user evaluation component.

A total of 23 papers were deemed highly relevant. They represent six study types: (1) theoretical development, (2) questionnaire surveys, (3) focus groups and interviews, (4) observational studies, (5) CPOE log audits, and (6) meta-analyses and systematic reviews.29 33 77–97 Table 2 provides summaries of all papers that we reviewed. As shown in table 2, very few prior empirical studies were based on established theoretical frameworks, and almost none included a rigorous validation of the instrument used.

Table 2.

Summary of the existing empirical studies (presented in reverse chronological order)

Citation and PubMed identifier Study description Measures or themes identified Theory and validation Inclusion or exclusion in this study
van der Sijs et al,77 20171929 An experimental study observing how participants responded to CDS alerts, followed by structured interviews. Better training, improved, concise alert texts, and increased specificity were identified as facilitating factors. Loosely based on Reason's model of accident causation; validation does not apply. All critical facilitating factors were included.
Hor et al,78 20067624 A survey among GPs in Ireland regarding perceived benefits of and barriers to adopting CDS in ePrescribing. 27 questions related to value of CDS and barriers to adoption, such as high sensitivity of alerting. Self-developed survey instrument; underlying theory not indicated; validation not reported. All value- or barrier-related questions were included, except those at the practice level (eg, those related to standardized product software).
Vashitz et al,79 19000935 Development and validation of a conceptual model of clinicians' responses to CDS reminders related to cholesterol management. Conceptualized four principal types of user responses: compliance, reliance, spillover, and reactance. Response types derived from cognitive engineering concepts on end user responses to warning systems. The spillover effect is difficult to assess via self-reported surveys; a related perceptual measure, the incidental learning effect, was added instead.
Weingart et al,80 19786683 A survey among ambulatory care clinicians regarding their experiences in using drug–drug and drug–allergy alerts provided in an ePrescribing system. 42 items assessing perceived value, satisfaction, barriers, behavioral effects, and impact on safety, efficiency, and cost of care. Survey developed based on focus groups with practitioners; validation conducted but results not reported; underlying theory not indicated. Questions about the frequency of events related to behavioral alteration and impact were revised to a leveled scale.
Weingart et al,81 19395307 A focus group study leading to the survey instrument used in the paper above. Relevant themes included an excessive number of alerts of uncertain value, high sensitivity, trivial alerts interrupting workflow, and appropriate polypharmacy not acknowledged by CDS. The semi-structured facilitator guide was pilot-tested with an unknown number of physicians and nurses; underlying theory not indicated. All relevant themes were incorporated.
Mollon et al,82 19210782 A systematic review of prescribing decision-support systems to identify which features predict implementation success and changes in user behavior and patient outcomes. 41 papers independently assessed by two reviewers to study the association between outcomes and 28 predefined system features. Does not apply. All features were assessed to varying degrees.
Ko et al,83 17068346 A survey among VA prescribers and pharmacists regarding their opinions about and suggestions for DDI alerts. The prescriber survey (33 items) covered measures such as alert burden and outcomes; the pharmacist survey (39 items) covered additional measures such as their interactions with prescribers regarding alerts. Self-developed survey instrument; underlying theory not indicated; pilot-tested but detail not reported. Questions specific to the VA setting or only applicable to pharmacists were not included.
Mayo-Smith and Agrawal,84 16935025 An alert log review investigating the relationship between reminder response rates and practice (primary care facilities at a VA site), and provider and reminder characteristics, followed by a user survey. Various facilitating and impeding conditions at the practice, provider, and reminder levels; the user survey contained 13 questions assessing providers' perceived value of CDS reminders and adequacy of facilitating conditions. Self-defined characteristics measures and self-developed survey instrument; underlying theory not indicated; pilot-tested but no formal validation reported. Very specific characteristics, for example, minimization of keystrokes, were not included.
Grizzle et al,33 17927462 An alert log review investigating prescribers' rationales for overriding DDI alerts at six VA facilities. 14 categories of common prescriber-provided reasons for overriding, such as lack of relevance and availability of alternative management plans. Does not apply. All relevant categories were incorporated.
Graham et al,85 17617908 A survey among physicians from multiple specialties soliciting their perceptions of computerized decision aids and intention to use. 43 items on value of CDS for patients and clinicians, content/format, quality of implementation, and intention to use. Based on the Ottawa Model of Research Use, technology diffusion theories, and prior work by the research team; validation results reported. Patient-oriented questions and use intention questions were not included.
van der Sijs et al,32 16357358 A systematic review paper summarizing extant literature on alert overrides. Various facilitating or impeding conditions at the environment, task, team, and individual levels. A foundational paper of this study, proposing an adapted accident causation model to account for unexpected use behaviors by prescribers. All measures were incorporated.
Sittig et al,86 16451720 A survey delineating factors affecting primary care providers' acceptance of CDS reminders. Factors related to patient and provider characteristics, type and volumes of alerts, and configuration of use environments. Self-developed survey instrument based on a prior observational study conducted by the research team (Saleem et al, 2005).90 Questions specific to the primary care setting (eg, examination room layout) were not included.
Glassman et al,87 16501396 A survey at a VA facility regarding clinicians' knowledge about DDI (as a result of alert use) and their perception of and experiences with DDI alerts. Research methods based on Glassman et al, which included a 21-item survey soliciting perceived benefits of and barriers to using CDS alerts.96 Self-developed survey instrument; underlying theory not indicated; validation not reported. All questions were incorporated; increased knowledge about DDI was added as an additional measure of benefits (incidental learning).
Abarca et al,88 16602224 A national survey assessing community pharmacy managers' perception of DDI alerts. 34 questions on perceived value of alerts, meaningfulness, and facilitating conditions such as provision of additional information. Self-developed instrument; validation not reported; underlying theory not indicated. All questions were incorporated except for a few that specifically addressed pharmacists' work (eg, coordination with providers).
Niès et al,89 17238410 A systematic review characterizing common success factors of CDS functionality provided through CPOE systems. Included four success characteristics: system-initiated interventions, assistance without user control over output, automated data retrieval, and provision of corollary actions. Does not apply. Most success factors were incorporated.
Saleem et al,90 15802482 An observational study conducted at four VA facilities to assess barriers and facilitators related to use of preventive care and chronic disease management reminders. Five impeding conditions (eg, workload) and four facilitating conditions (eg, workflow integration). Ethnographically based observations. Most barriers and facilitators were incorporated.
Kawamoto et al,91 15767266 A meta-analysis investigating success factors of CDS systems. Four key success factors identified: (1) automatic provision of decision support as part of clinician workflow, (2) provision of recommendations rather than just assessments, (3) provision of decision-support at the time and location of decision-making, and (4) computer-based decision-support. Does not apply. Success factors (2) and (4) were not included because they do not usually apply in the research context that the survey instrument of this study is designed for.
Taylor and Tamblyn,92 15360983 A chart audit study assessing Canadian GPs' overrides of medication alerts and common reasons for overriding. Seven common reasons for physician non-adherence, such as alerts not clinically important and interaction already known. Does not apply. All seven reasons were assessed.
Patterson et al,93 14527974 Observations followed by semi-structured interviews at six VA sites to study human factors barriers to effective use of computerized reminders related to HIV screening, intervention, and progression monitoring. Six common human factors barriers such as workload, inapplicability of reminders, and limited training. Self-developed observation and interview protocols; underlying theory not indicated; validation not reported. All human factors barriers identified were incorporated to varying degrees.
Venkatesh et al65 A theory development study consolidating existing models related to technology adoption and acceptance. 16 relevant questionnaire items assessing the four conceptual constructs in addition to three questions assessing perceived adoption intention. A foundational paper of this study proposing the unified theory of acceptance and use of technology. Several questions specific to general business applications were excluded (eg, enabling me to accomplish tasks more quickly). Social influence measures were substantially revised based on relevant research in healthcare.72–75
Weingart et al,29 14638563 A chart review study examining primary care physicians' overrides of medication safety alerts. Eight categories of common reasons for overriding. Does not apply. General categories, such as ‘alerted interaction not clinically significant,’ were included, while context-specific ones such as ‘medication list out of date’ were not.
Ahearn and Kerr,94 12831382 A focus group study among GPs in Australia soliciting their opinions of pharmaceutical decision-support systems. Seven semantic themes ranged from GPs' reactions to computerized alerts to suggested improvements and attitudes toward evidence-based guidelines. Self-developed focus group protocol; detail not revealed. All themes were incorporated to varying degrees.
Magnus et al,95 12383140 A survey among GPs in the UK assessing their views about computerized alerts and perceived rates of override. Nine questions on perceived usefulness, applicability, relevance, and quality of information presentation; and six questions on main reasons for overriding. Self-developed survey instrument; underlying theory not indicated; validation not reported. All relevant categories were incorporated.
Glassman et al,96 12458299 A survey study conducted at a VA facility soliciting clinicians' knowledge about DDI alerts (as a result of alert use) as well as perceptions of and experiences with computerized alerting. A survey instrument consisting of 19 questions and 67 items; an adapted version was used in Glassman et al.87 Self-developed survey instrument; underlying theory not indicated; validation not reported. Most questions were incorporated.
Krall and Sittig,97 11825206 A survey among Kaiser Permanente primary care clinicians regarding the usability and usefulness of different approaches to presenting reminders and alerts, in addition to the desirability of six alert types. Seven characteristics contributing to user acceptance of computerized clinical alerts: number, priority, accuracy, subject domain, relevance, presentation mode, and usefulness. Self-developed survey instrument; underlying theory not indicated; validation not reported. All characteristics were incorporated to varying degrees.

CDS, clinical decision-support; CPOE, computerized prescriber order entry; DDI, drug–drug interaction; GP, general practitioner; VA, Veterans Affairs.

Based on the review results, we consolidated existing questionnaire items or qualitative themes and mapped them to the constructs and measures derived from UTAUT and van der Sijs et al's accident causation model. The wording deemed most appropriate was used with minor revisions to tailor the survey instrument to the context of this study. Supplementary online appendix 1 describes this questionnaire development process in detail.

Validation method

To validate the draft survey instrument, we first identified a convenience sample of 20 CPOE experts and super users who helped us pretest the survey to improve its content validity. Then, we invited all eligible prescribers at our institution (excluding the pilot testers) to use the refined survey to provide feedback about the DDI alerts implemented in our institutional CPOE system. The field validation was part of a larger randomized controlled trial study that aimed to evaluate the utility of computerized DDI alerts generated at distinct sensitivity levels.

The empirical setting is the University of Michigan Health System (UMHS), where a commercially sold CPOE system, Eclipsys Sunrise XA (formerly Eclipsys Corp., Atlanta, Georgia, USA), was deployed in 2006–2008 in all inpatient care services. The system uses Multum (Cerner, Kansas City, Missouri, USA) as its underlying medication lexicon and knowledge base for generating medication safety alerts.98 The survey validation process followed procedures described in the information systems field99 100 as well as those used in Cork et al.101 Both the pretest and the field validation surveys were electronically administered using Qualtrics, an online survey management tool (Qualtrics Labs, Provo, Utah, USA).

Results

The survey instrument

The survey instrument, consisting of 28 items in addition to an open-ended closing question, is presented in table 3. A full version formatted for paper-and-pencil administration is available in appendix 2 of the online supplemental data.

Table 3.

The questionnaire

Preamble
PRE.1 A. Please estimate, during an average week of your practice, how many Drug–Drug Interaction alerts you receive from [name of CPOE]? _____ (Please provide a numeric estimate)
PRE.2 B. Please estimate, of the Drug–Drug Interaction alerts you receive, what per cent do you read thoroughly? _____ %
PRE.3 C. Please estimate, of the Drug–Drug Interaction alerts you read, what per cent do you find relevant? _____ %
PRE.4 D. Please estimate, of the Drug–Drug Interaction alerts you find relevant, what per cent change your prescribing decisions? _____ %
Section 1 of 5

Please respond to the following statements based on your experience using [name of CPOE] at [name of institution]

(Scale: Strongly Disagree, Disagree, Agree, Strongly Agree, and Does not apply)

PE.1 1. Drug–Drug Interaction (DDI) alerts are useful in helping me care for my patients.
PE.2 2. DDI alerts are relevant to the individual patients for which they appear.
PE.3 3. DDI alerts capture all drug interaction instances for my patients.
PE.4 4. DDI alerts I receive are clinically important.
PE.5 5. DDI alerts help me better understand which drugs should not be used at the same time.
PE.6 6. DDI alerts help me improve the monitoring for and management of DDIs for my patients.
PE.7 7. DDI alerts help me reduce professional risk by preventing potential adverse events in my patients.
Section 2 of 5
EE.1/EU1* 8. I find Drug–Drug Interaction (DDI) alerts easy to understand.
EE.2/EU2* 9. The system makes it easy to respond to DDI alerts.
EE.3 10. Reading and responding to DDI alerts takes too much time.
EE.4 11. I repeatedly receive DDI alerts to which I have already responded.
EE.5 12. Reading and responding to DDI alerts interferes with my workflow.
SI.1 13. I read and respond to Drug–Drug Interaction (DDI) alerts because my colleagues read and respond to them.
SI.2 14. My supervisor (eg, attending physicians, nurse managers) encourages me to read and respond to DDI alerts.
SI.3 15. Reading and responding to DDI alerts helps to improve my professional image.
Section 3 of 5
FC.1 16. I received adequate training on how to read and respond to Drug–Drug Interaction (DDI) alerts.
FC.2 17. I have adequate clinical knowledge to understand DDI alerts.
FC.3 18. The system provides adequate explanations of clinical relevance for DDI alerts.
FC.4 19. The system provides adequate management alternatives for DDI alerts.
FC.5 20. If I have questions about DDI alerts, I always have someone to consult with.
Section 4 of 5
PF 21. During order entry, I receive too many Drug–Drug Interaction (DDI) alerts that I must read and respond to.
Section 5 of 5

Please respond to the following statements based on your experience using [name of CPOE] at [name of institution]

(Scale: Never, Rarely, Less than half the time, About half the time, More than half the time, Always, and Does not apply)

UB.1 22. I thoroughly read the Drug–Drug Interaction (DDI) alerts that I receive.
UB.2 23. I provide reasons for DDI alerts that I decide to override.
UB.3 24. DDI alerts presented to me during order entry change my prescribing decisions.
Open-ended closing
Please provide any additional comments you have regarding Drug–Drug Interaction alerts you receive from [name of CPOE]. Thank you for your time.
*

EE.1 and EE.2 should be treated as a standalone construct, ‘perceived ease of use’ (EU), according to field validation results; see the Validation results section for more detail.

The survey begins with four questions inviting respondents to estimate their level of interaction with DDI alerts. Besides helping respondents warm up for the survey, these questions also allow researchers to obtain a quantitative reference frame for certain perceptual measures, for example, approximately how many alerts would lead to the perception that alert handling ‘takes too much time.’ We believe that this design will not introduce common method biases102 because these questions are assessed at the beginning of the survey, while the psychometric measures that they may potentially affect, such as perceived fatigue and perceived use behavior, are presented several sections later, toward the end of the survey.

Sections 1–3 of the survey instrument consist of 20 items soliciting respondents' opinions of and experiences with DDI alerts, in addition to a single-question section, Section 4, that solicits their perception of alert fatigue. All these items are assessed on a four-level, forced choice Likert scale (from ‘Strongly Disagree’ to ‘Strongly Agree’). In Section 5, perceived use behavior is measured through three items assessed on a six-level frequency scale: ‘Never,’ ‘Rarely,’ ‘Less than half the time,’ ‘About half the time,’ ‘More than half the time,’ and ‘Always.’ All questionnaire items are provided with an exit option, ‘Does not apply,’ as their applicability may vary according to respondents' clinical roles. The survey closes with an open-ended question inviting additional thoughts and comments.
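To make this scoring scheme concrete, the sketch below shows one way responses to the instrument could be coded for analysis. It is an illustration only, not part of the validated instrument or the authors' analysis code; the pandas data layout and column names are assumptions. Agreement items map to a 1–4 scale, frequency items to a 1–6 scale, and ‘Does not apply’ is treated as missing data.

```python
# Minimal sketch: coding survey responses for analysis (assumed data layout).
import numpy as np
import pandas as pd

AGREEMENT = {"Strongly Disagree": 1, "Disagree": 2,
             "Agree": 3, "Strongly Agree": 4}
FREQUENCY = {"Never": 1, "Rarely": 2, "Less than half the time": 3,
             "About half the time": 4, "More than half the time": 5,
             "Always": 6}

def code_items(df: pd.DataFrame, items: list, scale: dict) -> pd.DataFrame:
    """Map text responses to numeric codes; 'Does not apply' becomes NaN."""
    coded = df[items].replace("Does not apply", np.nan)
    return coded.apply(lambda col: col.map(scale))

# Hypothetical usage, assuming a DataFrame `responses` with one column per item:
# pe = code_items(responses, [f"PE.{i}" for i in range(1, 8)], AGREEMENT)
# ub = code_items(responses, [f"UB.{i}" for i in range(1, 4)], FREQUENCY)
```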

It should be noted that: (1) the survey instrument presented in table 3 already reflects the changes made based on validation results (described in the next section); and (2) questions for collecting respondents' demographic data are not included but can be added as needed.

Validation results

Pretest

The pretest sample consisted of 10 physicians, five nurses, and five pharmacists. About two thirds of them were practicing clinicians who used the CPOE system on a daily basis; the remainder were members of technical or managerial teams who had abundant experience with the system as well as with user-submitted issues.

The feedback received in this stage was focused on question clarity, survey flow, and administration of the survey in the online tool. In particular, specific concerns were raised regarding several items that were initially worded negatively; reviewers suggested that when administered to busy practicing clinicians, negatively worded questions or statements could be confusing or misleading, thus defeating their purpose of enhancing the reliability of self-reported data. Based on these suggestions, the research team revised the survey instrument accordingly. The questionnaire presented in table 3 reflects the changes made during this step.

Field validation: study sample

In this phase, three rounds of email invitations were sent to the 3700 eligible medication prescribers at UMHS who had placed at least one medication order through the CPOE system. Of these, 1370 visited the survey website during a 4-week study period (June 2 to June 30, 2010); 1020 complete responses were received (‘Does not apply’ was deemed a valid response).

The statistical analyses reported in this paper, for the purpose of instrument validation, only used a subset of the sample, that is, those who indicated receiving at least one DDI alert during an average week of work (otherwise they might not be able to provide germane responses to all survey questions). The effective sample size was 814. Table 4 shows the sample characteristics.

Table 4.

Sample characteristics

Clinician type Eligible prescribers Complete responses received and response rate, N (%) Included in validation analyses, N (%)*
Advanced practice nurse 176 74 (42.0) 60 (82.2)
Nurse 2144 535 (25.0) 420 (78.5)
Pharmacist 87 49 (56.3) 44 (89.8)
Physician 1088 303 (27.8) 245 (81.1)
Physician assistant 111 44 (39.6) 38 (86.4)
Therapist 94 15 (16.0) 7 (46.7)
Total 3700 1020 (27.6) 814 (80.0)
*

For the purpose of instrument validation, this paper only analyzed a subset of the sample, that is, those who indicated receiving at least one drug–drug interaction (DDI) alert during an average week of work.

Field validation: statistical analysis results

Based on the field validation data, we inspected the reliability and construct validity of the survey instrument. Moore and Benbasat suggested that for early stages of survey research, reliabilities (ie, the extent to which items within each scale are correlated with one another) of 0.5–0.6 are sufficient.100 In this study, we chose 0.65 as the target level of acceptance. Table 5 shows the initial reliability test results. Three constructs (PE, FC, and UB) passed the test, while one (EE) fell below the target level and another (SI) was borderline.

Table 5.

Initial reliability test results

Construct Number of items Cronbach's α
Performance expectancy (PE) 7 0.89
Effort expectancy (EE) 5 0.49
Social influence (SI) 3 0.65
Facilitating conditions (FC) 5 0.71
Perceived use behavior (UB) 3 0.69
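For readers who wish to reproduce this style of reliability analysis, the following is a minimal sketch in Python (an illustration under assumed data structures, not the authors' original code, which was not published with the paper). It computes Cronbach's α for a construct from item-level responses using the standard formula; listwise deletion of incomplete rows is an assumption here.

```python
# Minimal sketch of the reliability test in table 5 (assumed data layout).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)."""
    items = items.dropna()  # listwise deletion (an assumption in this sketch)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical usage, with one coded DataFrame per construct:
# for name, frame in {"PE": pe, "EE": ee, "SI": si, "FC": fc, "UB": ub}.items():
#     print(f"{name}: alpha = {cronbach_alpha(frame):.2f}")
```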

Then, we deleted the questionnaire items one at a time from each of the constructs and re-performed the reliability test. If Cronbach's α increased as a result, then the question removed became a candidate for exclusion. Table 6 reports the results.

Table 6.

Reliability test results with item deletion

Effort expectancy (EE) Social influence (SI)
Item removed Cronbach's α Item removed Cronbach's α
EE.1 0.52 SI.1 0.55
EE.2 0.55 SI.2 0.49
EE.3 0.35 SI.3 0.59
EE.4 0.32
EE.5 0.34
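The item-deletion procedure behind table 6 can be sketched as a simple loop over a construct's items, reusing the hypothetical cronbach_alpha helper above: recompute α with each item dropped in turn, and flag items whose removal raises α as candidates for exclusion.

```python
# Sketch of the alpha-if-item-deleted procedure (illustrative only).
def alpha_if_deleted(items: pd.DataFrame) -> dict:
    """Return Cronbach's alpha recomputed with each item removed in turn."""
    return {col: cronbach_alpha(items.drop(columns=[col]))
            for col in items.columns}

# Hypothetical usage for the EE construct:
# for item, a in alpha_if_deleted(ee).items():
#     print(f"without {item}: alpha = {a:.2f}")
```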

The left-hand portion of table 6 shows the test results for EE, the construct that did not perform well initially. The reliability score improved after EE.1 or EE.2 was dropped, and thus, these two questions were candidates for exclusion. However, within-group pairwise correlation tests suggested that instead of eliminating them entirely from the survey, EE could be treated as two separate constructs, EE.1/2 and EE.3–5, each demonstrating distinct psychometric properties. EE.1 and EE.2 (‘I find DDI alerts easy to understand’ and ‘The system makes it easy to respond to DDI alerts’) emphasize usability (ease of use), whereas the remaining three EE questions address time and effort requirements in alert handling. Hence, we recommended dividing the original EE construct into two constructs: ‘perceived ease of use’ (EE.1/EE.2) and ‘effort expectancy’ (EE.3–5). The reliability test results after the split were 0.78 and 0.76, respectively; both are well above the target acceptance level.
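The within-group pairwise correlation check that motivated this split can likewise be sketched in a single line (same hypothetical data frame as above): if EE.1 and EE.2 correlate strongly with each other but weakly with EE.3–5, two separate constructs are indicated. Pearson correlations are used here for simplicity; polychoric correlations would be a more rigorous choice for ordinal Likert data.

```python
# Inspect the item-by-item correlation matrix for the original EE construct.
print(ee.corr(method="pearson").round(2))
```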

The reliability of the borderline construct, SI, was not considerably improved with item deletion, as shown in the right-hand portion of table 6. Because this construct is assessed with only three questions, dividing it up was not an option either. This finding is in agreement with the technology acceptance literature, which has shown that SI is a weak predictor of behavioral intention primarily due to issues in measurement: despite their best efforts, survey respondents may simply not be able to accurately recall those social interaction events that resulted in behavioral alteration, particularly when the effect of social influence is exerted via indirect mechanisms such as social comparison (eg, two residents may imitate each other in order to meet the expectations of their attending physician; they may, however, only identify the attending physician as the direct behavior modifier, rather than each other).56 59 75 103 To address this limitation, structural exploration methods such as social network analysis have been proposed as alternative approaches.73 75

To examine the construct validity of the instrument, we performed a confirmatory factor analysis using structural equation modeling (SEM), which allows for testing hypotheses about both the number of factors and the pattern of loadings, connecting theory with the specifications of the model.104 105 Use of SEM in instrument validation has also been suggested to provide a richer set of information than conventional methods.105 The analysis was performed with LISREL 8.8 (Scientific Software International). Table 7 reports the results.

Table 7.

Factor loading results

Construct Factor loading (all significant at the 0.05 level) Reliability Eigenvalue R² (%)
PE PE.1 PE.2 PE.3 PE.4 PE.5 PE.6 PE.7 0.89 4.13 18.0
0.81 0.80 0.33 0.81 0.84 0.85 0.80
EU EU.1 EU.2 0.78 1.30 5.6
0.80 0.81
EE EE.1 EE.2 EE.3 0.76 1.64 7.1
0.76 0.55 0.87
SI SI.1 SI.2 SI.3 0.65 1.12 4.9
0.56 0.61 0.66
FC FC.1 FC.2 FC.3 FC.4 FC.5 0.71 1.71 7.5
0.49 0.41 0.77 0.71 0.46
UB UB.1 UB.2 UB.3 0.69 1.26 5.5
0.72 0.60 0.62

The factor loadings shown in the table were produced using an iterative approach enabled by SEM. First, factors were loaded onto each theoretical construct. Then, significance of the results was assessed through model modification indices (factor loading values, error variances, and squared multiple correlations), which suggested potential cross-loading or model misspecification issues as well as alternative loading options for improving the model, until the optimal solution was reached.

The results confirmed that the factors of the refined model, with the split of the original EE, were satisfactorily loaded on their corresponding constructs. Even for factors that had relatively low scores (PE.3, EE.2, and several items in FC), loading them onto other constructs did not yield significantly better results. Additionally, all eigenvalues are greater than 1, suggesting that each factor should be retained, and the reliability measures are all above 0.65, the target acceptance level. As a whole, the model accounts for 48.6% of the response variance.

The SEM model fit statistics, reported in table 8, further confirm that the refined model represents a good fit to the empirical validation data.

Table 8.

Structural equation modeling (SEM) goodness of fit indices

Fit statistic Result Description Acceptable range
χ² 1127.71 (df=215), p<0.001 Overall measure of model fit based on discrepancies between the sample and model-implied covariance matrices106 107 Sample size dependent; the result suggests a reasonable fit in light of the study sample size and other goodness of fit measures
Comparative fit index (CFI) 0.96 Comparison of a restricted model to a null model108 >0.90
Root mean square error of approximation (RMSEA) 0.07 Measure of the discrepancy per degree of freedom109 <0.05: close fit
(0.05, 0.08): reasonable fit
(0.08, 0.10): acceptable fit
Standardized root mean square residual (SRMR) 0.06 Measure of standardized fitted residuals107 110 <0.08
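The authors ran the CFA in LISREL 8.8. Purely as an illustration, a comparable measurement model could be specified in Python with the open-source semopy package (our choice of tool, not the authors'; the item column names and the coded_responses DataFrame are hypothetical). The model below mirrors table 7, with the original EE construct split into EU and EE, and prints loadings and fit indices analogous to those in tables 7 and 8.

```python
# Illustrative CFA sketch with semopy (lavaan-style model syntax).
import semopy

MEASUREMENT_MODEL = """
PE =~ PE1 + PE2 + PE3 + PE4 + PE5 + PE6 + PE7
EU =~ EU1 + EU2
EE =~ EE1 + EE2 + EE3
SI =~ SI1 + SI2 + SI3
FC =~ FC1 + FC2 + FC3 + FC4 + FC5
UB =~ UB1 + UB2 + UB3
"""

model = semopy.Model(MEASUREMENT_MODEL)
model.fit(coded_responses)         # coded_responses: numeric item-level DataFrame (hypothetical)
print(model.inspect())             # parameter estimates, including factor loadings
print(semopy.calc_stats(model).T)  # fit indices, eg chi-square, CFI, RMSEA
```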

Field validation: qualitative analysis results

Among the 1020 respondents who provided complete responses to the survey, about one fifth entered narrative feedback in the open-ended closing section. We conducted an open, interpretive qualitative content analysis of the feedback collected.111 The objective was to derive additional insights into the validity of the survey or the way it was administered.

Most open-ended comments were consistent with the quantitative responses; nonetheless, two instances appeared potentially problematic. First, about 20 respondents estimated that they received fewer than five DDI alerts during an average workweek, yet in the open-ended feedback they complained about the ‘excessive’ number of alerts that had been presented to them. Within the scope of this study, we are unable to determine whether these prescribers might have a particularly low threshold of tolerance or if their perception might be influenced by sources of information other than their personal, hands-on experiences with the CPOE system. Second, when responding to the survey, a dozen or so respondents did not seem to differentiate between DDI and other types of alerts, even though DDI was clearly defined in the email invitations, the introduction, and the informed consent screens preceding the survey, as well as at the beginning of each survey section. This issue was clearly indicated by the alert examples they provided; for instance, some left lengthy comments about overdose prevention alerts that they had received from the system. It is unclear whether these survey respondents used the open-ended space to provide extra information regarding their experiences with other types of alerts, or if they responded to the entire survey based on their general perception of all types of computerized alerts that they had ever encountered (ie, not specific to DDI).

While the magnitude of these issues is minor relative to our sample size, these observations do raise questions about the reliability of self-reported data collected from busy clinicians who have only limited time and cognitive commitment to participating in survey studies. A respondent specifically commented: ‘alerts are great ideas, but we are now saturated with alerts and SURVEYS… making them all much less effective.’ Because the sample size of informatics studies is usually small, such issues could have a substantial impact and therefore should not be overlooked.

Discussion

The lack of end user acceptance of heavily invested CDS technologies is a widely acknowledged problem, which has raised great concerns regarding their practicability and value, as well as the unintended detrimental effects that they may bring.52–54 112 While the optimal methods of delivering computerized alerts are yet to be identified, the current widespread deployment of CPOE systems provides an unprecedented opportunity for researchers and practitioners to learn quickly from practice to identify deficiencies and improvement opportunities.113 114 Facilitating knowledge discovery and accumulation requires (1) use of theoretically informed research instruments to better assess and understand emerging issues, and (2) use of standardized tools to obtain comparative results across studies and across institutions. The survey development and validation work described in this paper represents such an attempt.

The development of the survey was based on social psychology and technology acceptance theories that have been extensively validated in their respective domains.58 59 66 69 According to Venkatesh et al, this family of models accounts for as much as 70% of behavioral intention variance, which ‘may be approaching the practical limits to explain individual acceptance and usage decisions in organizations.’65 van der Sijs et al's adapted accident causation model and the review of the extant literature further provided the survey with contextual measures and questionnaire items. This extension is essential because the original UTAUT instrument, developed with generic business IT applications in mind, may not be well suited for studying decision-support technologies used in complex healthcare environments. For example, while ‘improved time efficiency’ is an important UTAUT measure, it is less relevant in the context of this study given that DDI alerts are meant to improve quality of care and patient safety, oftentimes at a sacrifice of upfront time efficiency.112 Further, with the inclusion of multidimensional conditions and beliefs, the survey instrument may provide a useful tool for studying common behavioral antecedents underlying prescribers' decisions to adopt or not to adopt a CDS technology, that is, helping fill the ‘left side of the model’ by investigating behavioral or social forces that drive the behavior observed, as called for by other researchers.69

Lastly, as indicated in the qualitative analysis of the narrative feedback, some respondents did not seem to differentiate between DDI and other kinds of alerts (or were unable to), yet they provided complete responses to all survey questions. The reliability of self-reported data collected from busy practitioners is hence called into question, which adds to other common forms of measurement errors in survey research.115 Informatics studies may be particularly vulnerable to such data issues due to the smaller sample sizes they typically enroll. In addition to improving the communication with prospective research participants, whenever possible, alternative methods such as ethnographic observations and analyses of computer-recorded usage logs should be considered to triangulate results obtained from questionnaire surveys.

Conclusion

In this paper, we present the development and empirical validation of a survey instrument for assessing prescribers' perception of computerized DDI alerts. The survey is grounded in UTAUT and an adapted accident causation model. Development of the survey was also informed by a review of the extant literature on prescribers' attitudes toward computerized medication safety alerts and common prescriber-provided reasons for overriding. The empirical validation yielded satisfactory results. However, a few potential issues were also identified. We analyzed these issues accordingly and the results led to the final survey instrument as well as usage recommendations.

Footnotes

Funding: This project was supported in part by Grant # UL1RR024986 received from the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH) and NIH Roadmap for Medical Research.

Competing interests: None.

Ethics approval: The research protocol of this study was approved by the Medical School Institutional Review Board at the University of Michigan (IRB # HUM00038030).

Provenance and peer review: Not commissioned; externally peer reviewed.

References

  • 1. Bates DW, Leape LL, Cullen DJ, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA 1998;280(15):1311–16
2. Bates DW, Teich JM, Lee J, et al. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc 1999;6:313–21.
3. Nightingale PG, Adu D, Richards NT, et al. Implementation of rules based computerised bedside prescribing and administration: intervention study. BMJ 2000;320:750–3.
4. Teich JM, Merchia PR, Schmiz JL, et al. Effects of computerized physician order entry on prescribing practices. Arch Intern Med 2000;160:2741–7.
5. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC, USA: National Academy Press, 2001.
6. Bates DW, Gawande AA. Improving safety with information technology. N Engl J Med 2003;348:2526–34.
7. Kuperman GJ, Bobb A, Payne TH, et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc 2007;14:29–40.
8. Shamliyan TA, Duval S, Du J, et al. Just what the doctor ordered. Review of the evidence of the impact of computerized physician order entry system on medication errors. Health Serv Res 2008;43:32–53.
9. Gardner RM, Evans RS. Using computer technology to detect, measure, and prevent adverse drug events. J Am Med Inform Assoc 2004;11:535–6.
10. Teich JM, Osheroff JA, Pifer EA, et al.; CDS Expert Review Panel. Clinical decision support in electronic prescribing: recommendations and an action plan: report of the joint clinical decision support workgroup. J Am Med Inform Assoc 2005;12:365–76.
11. Osheroff JA, Teich JM, Middleton B, et al. A roadmap for national action on clinical decision support. J Am Med Inform Assoc 2007;14:141–5.
12. Congressional Budget Office. Costs and Benefits of Health Information Technology. Washington, DC, USA: Congressional Budget Office, 2008.
13. Blumenthal D. Stimulating the adoption of health information technology. N Engl J Med 2009;360:1477–9.
14. Blumenthal D, Tavenner M. The “meaningful use” regulation for electronic health records. N Engl J Med 2010;363:501–4.
15. The Leapfrog Group. What does Leapfrog Ask Hospitals? 2010. http://www.leapfroggroup.org/for_consumers/hospitals_asked_what (accessed 17 Aug 2010).
16. The Leapfrog Group. Fact Sheet: Computerized Physician Order Entry. 2008. http://www.leapfroggroup.org/media/file/Leapfrog-Computer_Physician_Order_Entry_Fact_Sheet.pdf (accessed 17 Aug 2010).
17. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 2003;163:1409–16.
18. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 2006;144:742–52.
19. Eslami S, Abu-Hanna A, de Keizer NF. Evaluation of outpatient computerized physician medication order entry systems: a systematic review. J Am Med Inform Assoc 2007;14:400–6.
20. Eslami S, de Keizer NF, Abu-Hanna A. The impact of computerized physician medication order entry in hospitalized patients—a systematic review. Int J Med Inform 2008;77:365–76.
21. Ammenwerth E, Schnell-Inderst P, Machan C, et al. The effect of electronic prescribing on medication errors and adverse drug events: a systematic review. J Am Med Inform Assoc 2008;15:585–600.
22. Wolfstadt JI, Gurwitz JH, Field TS, et al. The effect of computerized physician order entry with clinical decision support on the rates of adverse drug events: a systematic review. J Gen Intern Med 2008;23:451–8.
23. Schedlbauer A, Prasad V, Mulvaney C, et al. What evidence supports the use of computerized alerts and prompts to improve clinicians' prescribing behavior? J Am Med Inform Assoc 2009;16:531–8.
24. Reckmann MH, Westbrook JI, Koh Y, et al. Does computerized provider order entry reduce prescribing errors for hospital inpatients? A systematic review. J Am Med Inform Assoc 2009;16:613–23.
25. van Rosse F, Maat B, Rademaker CM, et al. The effect of computerized physician order entry on medication prescription errors and clinical outcome in pediatric and intensive care: a systematic review. Pediatrics 2009;123:1184–90.
26. Pearson SA, Moxey A, Robertson J, et al. Do computerised clinical decision support systems for prescribing change practice? A systematic review of the literature (1990–2007). BMC Health Serv Res 2009;9:154.
27. Shojania KG, Jennings A, Mayhew A, et al. Effect of point-of-care computer reminders on physician behaviour: a systematic review. CMAJ 2010;182:E216–25.
28. Payne TH, Nichol WP, Hoey P, et al. Characteristics and override rates of order checks in a practitioner order entry system. Proc AMIA Symp 2002:602–6.
29. Weingart SN, Toth M, Sands DZ, et al. Physicians' decisions to override computerized drug alerts in primary care. Arch Intern Med 2003;163:2625–31.
30. Hsieh TC, Kuperman GJ, Jaggi T, et al. Characteristics and consequences of drug allergy alert overrides in a computerized physician order entry system. J Am Med Inform Assoc 2004;11:482–91.
31. Spina JR, Glassman PA, Belperio P, et al.; Primary Care Investigative Group of the VA Los Angeles Healthcare System. Clinical relevance of automated drug alerts from the perspective of medical providers. Am J Med Qual 2005;20:7–14.
32. van der Sijs H, Aarts J, Vulto A, et al. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006;13:138–47.
33. Grizzle AJ, Mahmood MH, Ko Y, et al. Reasons provided by prescribers when overriding drug-drug interaction alerts. Am J Manag Care 2007;13:573–8.
34. van der Sijs H, Aarts J, van Gelder T, et al. Turning off frequently overridden drug alerts: limited opportunities for doing it safely. J Am Med Inform Assoc 2008;15:439–48.
35. Lin CP, Payne TH, Nichol WP, et al. Evaluating clinical decision support systems: monitoring CPOE order check override rates in the Department of Veterans Affairs' Computerized Patient Record System. J Am Med Inform Assoc 2008;15:620–6.
36. Isaac T, Weissman JS, Davis RB, et al. Overrides of medication alerts in ambulatory care. Arch Intern Med 2009;169:305–11.
37. van der Sijs H, Mulder A, van Gelder T, et al. Drug safety alert generation and overriding in a large Dutch university medical centre. Pharmacoepidemiol Drug Saf 2009;18:941–7.
38. Berner ES. Clinical Decision Support Systems: State of the Art. Rockville, MD, USA: The Agency for Healthcare Research and Quality (AHRQ), 2010.
39. Miller RA. Medical diagnostic decision support systems—past, present, and future: a threaded bibliography and brief commentary. J Am Med Inform Assoc 1994;1:8–27.
40. Engle RL Jr. Attempts to use computers as diagnostic aids in medical decision making: a thirty-year experience. Perspect Biol Med 1992;35:207–19.
41. Berner ES, Detmer DE, Simborg D. Will the wave finally break? A brief view of the adoption of electronic medical records in the United States. J Am Med Inform Assoc 2005;12:3–7.
42. Bates DW, O'Neil AC, Boyle D, et al. Potential identifiability and preventability of adverse events using information systems. J Am Med Inform Assoc 1994;1:404–11.
43. Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC, USA: National Academy Press, 2000.
44. Certification Commission for Health Information Technology. CCHIT Certified 2011 Inpatient EHR. 2010. http://www.cchit.org/certify/2011/cchit-certified-2011-inpatient-ehr (accessed 22 Aug 2010).
45. Wright A, Sittig DF, Ash JS, et al. Clinical decision support capabilities of commercially-available clinical information systems. J Am Med Inform Assoc 2009;16:637–44.
46. van der Sijs H, Lammers L, van den Tweel A, et al. Time-dependent drug-drug interaction alerts in care provider order entry: software may inhibit medication error reductions. J Am Med Inform Assoc 2009;16:864–8.
47. Metzger J, Welebob E, Bates DW, et al. Mixed results in the safety performance of computerized physician order entry. Health Aff (Millwood) 2010;29:655–63.
48. van der Sijs H, Bouamar R, van Gelder T, et al. Functionality test for drug safety alerting in computerized physician order entry systems. Int J Med Inform 2010;79:243–51.
49. Kuperman GJ, Reichley RM, Bailey TC. Using commercial knowledge bases for clinical decision support: opportunities, hurdles, and recommendations. J Am Med Inform Assoc 2006;13:369–71.
50. Eichner J, Das M; NORC at the University of Chicago. Challenges and Barriers to Clinical Decision Support (CDS) Design and Implementation Experienced in the Agency for Healthcare Research and Quality CDS Demonstrations. Rockville, MD, USA: The Agency for Healthcare Research and Quality (AHRQ), 2010.
51. Ford EW, Menachemi N, Peterson LT, et al. Resistance is futile: but it is slowing the pace of EHR adoption nonetheless. J Am Med Inform Assoc 2009;16:274–81.
52. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293:1197–203.
53. Campbell EM, Sittig DF, Ash JS, et al. Types of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc 2006;13:547–56.
54. Ash JS, Sittig DF, Poon EG, et al. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc 2007;14:415–23.
55. Vaziri A, Connor E, Shepherd I, et al. Are we setting about improving the safety of computerised prescribing in the right way? A workshop report. Inform Prim Care 2009;17:175–82.
56. Fishbein M, Ajzen I. Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research. Reading, MA, USA: Addison-Wesley, 1975.
57. Ajzen I. From intentions to actions: a theory of planned behavior. In: Kuhl J, Beckmann J, eds. Springer Series in Social Psychology. Heidelberg, Germany: Springer Berlin, 1985:11–39.
58. Sheppard BH, Hartwick J, Warshaw PR. The theory of reasoned action: a meta-analysis of past research with recommendations for modifications and future research. J Consum Res 1988;15:325–43.
59. Armitage CJ, Conner M. Efficacy of the Theory of Planned Behaviour: a meta-analytic review. Br J Soc Psychol 2001;40:471–99.
60. Thompson RL, Higgins CA, Howell JM. Personal computing: toward a conceptual model of utilization. MIS Quart 1991;15:125–43.
61. Goodhue DL, Thompson RL. Task-technology fit and individual performance. MIS Quart 1995;19:213–36.
62. Triandis HC. Values, attitudes, and interpersonal behavior. Nebr Symp Motiv 1980;27:195–259.
63. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart 1989;13:319–40.
64. Venkatesh V, Davis FD. A theoretical extension of the technology acceptance model: four longitudinal field studies. Manage Sci 2000;46:186–204.
65. Venkatesh V, Morris MG, Davis GB, et al. User acceptance of information technology: toward a unified view. MIS Quart 2003;27:425–78.
66. Lee Y, Kozar K, Larsen K. The technology acceptance model: past, present, and future. Comm Assoc Inform Syst 2003;12:752–80.
67. Zheng K, Padman R, Johnson MP, et al. Evaluation of healthcare IT applications: the user acceptance perspective. In: Kacprzyk J, ed. Studies in Computational Intelligence. Heidelberg, Germany: Springer Berlin, 2007;65:49–78.
68. Godin G, Bélanger-Gravel A, Eccles M, et al. Healthcare professionals' intentions and behaviours: a systematic review of studies based on social cognitive theories. Implement Sci 2008;3:36.
69. Holden RJ, Karsh BT. The technology acceptance model: its past and its future in health care. J Biomed Inform 2010;43:159–72.
70. Reason J. The contribution of latent human failures to the breakdown of complex systems. Philos Trans R Soc Lond B Biol Sci 1990;327:475–84.
71. Homans GC. The Human Group. New York City, NY, USA: Harcourt Brace, 1950.
72. Coleman J, Katz E, Menzel H. Medical Innovation: A Diffusion Study. 2nd edn. New York City, NY, USA: Bobbs-Merrill, 1966.
73. Rice RE, Aydin C. Attitudes toward new organizational technology: network proximity as a mechanism for social information processing. Adm Sci Q 1991;36:219–44.
74. Sullivan HS. Conceptions of Modern Psychiatry. New York City, NY, USA: W.W. Norton & Co., Inc, 1953.
75. Burt RS. Social contagion and innovation: cohesion versus structural equivalence. Am J Sociol 1987;92:1287–1335.
76. Zheng K, Padman R, Krackhardt D, et al. Social networks and physician adoption of electronic health records: insights from an empirical study. J Am Med Inform Assoc 2010;17:328–36.
77. van der Sijs H, van Gelder T, Vulto A, et al. Understanding handling of drug safety alerts: a simulation study. Int J Med Inform 2010;79:361–9.
78. Hor CP, O'Donnell JM, Murphy AW, et al. General practitioners' attitudes and preparedness towards Clinical Decision Support in e-Prescribing (CDS-eP) adoption in the West of Ireland: a cross sectional study. BMC Med Inform Decis Mak 2010;10:2.
79. Vashitz G, Meyer J, Parmet Y, et al. Defining and measuring physicians' responses to clinical reminders. J Biomed Inform 2009;42:317–26.
80. Weingart SN, Simchowitz B, Shiman L, et al. Clinicians' assessments of electronic medication safety alerts in ambulatory care. Arch Intern Med 2009;169:1627–32.
81. Weingart SN, Massagli M, Cyrulik A, et al. Assessing the value of electronic prescribing in ambulatory care: a focus group study. Int J Med Inform 2009;78:571–8.
82. Mollon B, Chong J Jr, Holbrook AM, et al. Features predicting the success of computerized decision support for prescribing: a systematic review of randomized controlled trials. BMC Med Inform Decis Mak 2009;9:11.
83. Ko Y, Abarca J, Malone DC, et al. Practitioners' views on computerized drug-drug interaction alerts in the VA system. J Am Med Inform Assoc 2007;14:56–64.
84. Mayo-Smith MF, Agrawal A. Factors associated with improved completion of computerized clinical reminders across a large healthcare system. Int J Med Inform 2007;76:710–16.
85. Graham ID, Logan J, Bennett CL, et al. Physicians' intentions and use of three patient decision aids. BMC Med Inform Decis Mak 2007;7:20.
86. Sittig DF, Krall MA, Dykstra RH, et al. A survey of factors affecting clinician acceptance of clinical decision support. BMC Med Inform Decis Mak 2006;6:6.
87. Glassman PA, Belperio P, Simon B, et al. Exposure to automated drug alerts over time: effects on clinicians' knowledge and perceptions. Med Care 2006;44:250–6.
88. Abarca J, Malone DC, Skrepnek GH, et al. Community pharmacy managers' perception of computerized drug-drug interaction alerts. J Am Pharm Assoc 2006;46:148–53.
89. Niès J, Colombet I, Degoulet P, et al. Determinants of success for computerized clinical decision support systems integrated in CPOE systems: a systematic review. AMIA Annu Symp Proc 2006:594–8.
90. Saleem JJ, Patterson ES, Militello L, et al. Exploring barriers and facilitators to the use of computerized clinical reminders. J Am Med Inform Assoc 2005;12:438–47.
91. Kawamoto K, Houlihan CA, Balas EA, et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005;330:765.
92. Taylor LK, Tamblyn R. Reasons for physician non-adherence to electronic drug alerts. Stud Health Technol Inform 2004;107:1101–5.
93. Patterson ES, Nguyen AD, Halloran JP, et al. Human factors barriers to the effective use of ten HIV clinical reminders. J Am Med Inform Assoc 2004;11:50–9.
94. Ahearn MD, Kerr SJ. General practitioners' perceptions of the pharmaceutical decision-support tools in their prescribing software. Med J Aust 2003;179:34–7.
95. Magnus D, Rodgers S, Avery AJ. GPs' views on computerized drug interaction alerts: questionnaire survey. J Clin Pharm Ther 2002;27:377–82.
96. Glassman PA, Simon B, Belperio P, et al. Improving recognition of drug interactions: benefits and barriers to using automated drug alerts. Med Care 2002;40:1161–71.
97. Krall MA, Sittig DF. Subjective assessment of usefulness and appropriate presentation mode of alerts and reminders in the outpatient setting. Proc AMIA Symp 2001:334–8.
98. Chaffee BW, Zimmerman CR. Developing and implementing clinical decision support for use in a computerized prescriber-order-entry system. Am J Health Syst Pharm 2010;67:391–400.
99. Straub DW. Validating instruments in MIS research. MIS Quart 1989;13:147–69.
100. Moore GC, Benbasat I. Development of an instrument to measure the perceptions of adopting an information technology innovation. Inform Syst Res 1991;2:192–222.
101. Cork RD, Detmer WM, Friedman CP. Development and initial validation of an instrument to measure physicians' use of, knowledge about, and attitudes toward computers. J Am Med Inform Assoc 1998;5:164–76.
102. Podsakoff PM, MacKenzie SB, Lee JY, et al. Common method biases in behavioral research: a critical review of the literature and recommended remedies. J Appl Psychol 2003;88:879–903.
103. Davis FD, Bagozzi RP, Warshaw PR. User acceptance of computer technology: a comparison of two theoretical models. Manage Sci 1989;35:982–1003.
104. Kaplan D. Structural Equation Modeling: Foundations and Extensions. Thousand Oaks, CA, USA: Sage Publications, 2009.
105. Boudreau MC, Gefen D, Straub DW. Validation in information systems: a state-of-the-art assessment. MIS Quart 2001;25:1–16.
106. Bagozzi RP. Structural equation models in marketing research: basic principles. In: Bagozzi RP, ed. Principles of Marketing Research. Oxford, England: Blackwell Publishers, 1994:317–85.
107. Hu LT, Bentler PM. Fit indices in covariance structure modeling: sensitivity to underparameterized model misspecification. Psychol Methods 1998;3:424–53.
108. Bentler PM. Comparative fit indexes in structural models. Psychol Bull 1990;107:238–46.
109. Browne MW, Cudeck R. Alternative ways of assessing model fit. In: Bollen KA, Long JS, eds. Testing Structural Equation Models. Thousand Oaks, CA, USA: Sage Publications, 1993:136–62.
110. Sousa KH, Kwok OM. Putting Wilson and Cleary to the test: analysis of a HRQOL conceptual model using structural equation modeling. Qual Life Res 2006;15:725–37.
111. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res 2005;15:1277–88.
112. Berger RG, Kichak JP. Computerized physician order entry: helpful or harmful? J Am Med Inform Assoc 2004;11:100–3.
113. McDonald CJ, Overhage JM, Mamlin BW, et al. Physicians, information technology, and health care systems: a journey, not a destination. J Am Med Inform Assoc 2004;11:121–4.
114. Lyman JA, Cohn WF, Bloomrosen M, et al. Clinical decision support: progress and opportunities. J Am Med Inform Assoc 2010;17:487–92.
115. Schwarz N, Oyserman D. Asking questions about behavior: cognition, communication, and questionnaire construction. Amer J Eval 2001;22:127–60.
