Journal of the American Medical Informatics Association (JAMIA). 2021 Aug 13;28(11):2514–2522. doi: 10.1093/jamia/ocab160

A theory-based meta-regression of factors influencing clinical decision support adoption and implementation

Siru Liu 1,✉,#, Thomas J Reese 2,#, Kensaku Kawamoto 1, Guilherme Del Fiol 1, Charlene Weir 1
PMCID: PMC8510321  PMID: 34387686

Abstract

Objective

The purpose of this study was to explore the theoretical underpinnings of effective clinical decision support (CDS) factors using results from comparative effectiveness studies.

Materials and Methods

We leveraged search results from a previous systematic literature review and updated the search to screen articles published from January 2017 to January 2020. We included randomized controlled trials and cluster randomized controlled trials that compared a CDS intervention with and without specific factors. We used random effects meta-regression procedures to analyze clinician behavior for the aggregate effects. The theoretical model was the Unified Theory of Acceptance and Use of Technology (UTAUT) model with motivational control.

Results

Thirty-four studies were included. The meta-regression models identified the importance of effort expectancy (estimated coefficient = −0.162; P = .0003); facilitating conditions (estimated coefficient = 0.094; P = .013); and performance expectancy with motivational control (estimated coefficient = 1.029; P = .022). Each of these factors had a significant effect on clinician behavior. The multivariate meta-regression model explained a large amount of the heterogeneity across studies (R2 = 88.32%).

Discussion

Three positive factors were identified: low effort to use, low controllability, and providing more infrastructure and implementation strategies to support the CDS. The multivariate analysis suggests that passive CDS could be effective if users believe the CDS is useful and/or social expectations to use the CDS intervention exist.

Conclusions

Overall, a modified UTAUT model that includes motivational control is an appropriate model to understand psychological factors associated with CDS effectiveness and to guide CDS design, implementation, and optimization.

Keywords: clinical decision support, unified theory of acceptance and use of technology, taxonomy, meta-regression

INTRODUCTION

Computerized clinical decision support (CDS) has been defined as systems that provide clinicians, staff, and patients with task-relevant knowledge and person-specific information that is intelligently filtered and presented at appropriate times.1 CDS includes a broad range of tools, from simple alerts and reminders to complex dashboards, info-buttons, documentation templates, and order sets. While the overall goals of CDS are to enhance decision-making and workflow and, ultimately, to improve healthcare, many CDS tools have failed to show improved quality of care.2–4 Efforts to understand why certain implementations succeed and others fail have not always included the user’s experience and behavior in the analysis and, as a result, often have fallen short of providing adequate and generalizable explanations.1,5,6 Because the target of many CDS interventions is user behavior, a closer examination of the user’s experience from a theoretical point of view provides an opportunity to investigate CDS factors that can successfully change clinician behavior. Such an examination would also support effective implementation strategy planning. The goal of this work is to add to the literature and science in developing an understanding of CDS success.

Three predictors commonly appear in systematic reviews as being related to CDS success: 1) the automatic provision of CDS as part of clinician workflow1,4–8; 2) the provision of CDS to both clinicians and patients1,4–8; and 3) requiring an override reason.1,3,6,8 Beyond these 3, most identified factors were unique to their respective reviews. We provide a table comparing previously identified CDS factors (see Supplementary Additional File 1). Overall, inconsistent agreement on CDS factors across reviews, together with small aggregated effect sizes, suggests an opportunity to formalize success factors through validated theoretical models.

The diversity of CDS factors identified in previous studies and the variability of successful implementations may partially be explained by the lack of theoretical underpinnings in both the study designs and the measurement metrics. Theories provide causal mechanisms and reproducible methods of operationalizing both the interventions and the outcomes. We posit that the lack of a theoretical foundation may lead to measurement inconsistencies, failure to generalize design and implementation strategies, and inconsistent interpretation of results. Theory-based literature reviews have been applied with good success to identify effective intervention factors in other healthcare areas, such as physical activity behavior change interventions.9,10 This approach allows for an in-depth exploration of the causal relationships between factors and a quantitative analysis of how factors increase or decrease the effectiveness of interventions.11 In addition, adequately measured constructs from theoretical models can be used as predefined explanatory factors in meta-regression models. Multivariate models control for the interactions between variables, thereby avoiding an excessive false positive rate. Moreover, theory-based factors grounded in user behavior could be used to refine the implementation strategies proposed by Powell et al.12 and to help researchers design new CDS applications. Studies have proposed that the design and evaluation of complex interventions in health should have a theoretical basis, which helps to formulate hypotheses focused on potential active ingredients and provides empirical evidence from previous studies.13 Michie et al. have shown the importance of theoretical constructs in the design of interventions for changing human behavior.14,15

This study uses the Unified Theory of Acceptance and Use of Technology (UTAUT) model,16 which consists of 4 constructs: performance expectancy, effort expectancy, social influence, and facilitating conditions. We adapted the UTAUT model to better characterize the CDS environment by adding a motivational control construct. The motivational control construct refers to the psychological constructs of autonomy and agency. CDS tools vary substantially in that dimension (eg, hard-stop alert vs passive reminder)16 and autonomy and/or perceived control has notable empirical impact. In prior work, we created a taxonomy based on these 5 theoretical constructs, which could reliably characterize CDS interventions (Cohen’s κ = 0.805).17,18

In this article, our goal was to evaluate the role of these constructs in CDS success. We used a meta-regression approach applied to a systematic review of comparative effectiveness studies on CDS. Comparative effectiveness studies compare at least 2 forms of the same intervention (ie, head-to-head trials) rather than comparing interventions to usual care.19 For each article, we characterized the CDS factors in terms of the UTAUT constructs because no CDS studies have directly tested the underlying psychological mechanisms. We sought to answer 2 research questions: 1) What is the current state of studies that have compared CDS with and without specific factors? 2) What are the effects of specific UTAUT constructs in a theoretical model on clinician behavior (eg, the proportion of appropriate antibiotics prescribed)? In sum, this study summarizes the current work of comparative randomized controlled trials (RCTs) in enhancing CDS effectiveness to help researchers design CDS following technology adoption theories.

MATERIALS AND METHODS

Study design

This study is a systematic review with a meta-regression analysis. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines to report data (Supplementary Additional File 2).20 The process of conducting meta-regression was in accordance with guidelines in the Cochrane handbook.21

Search strategy and inclusion criteria

Two approaches were used to identify relevant studies. First, we identified an earlier systematic review by Van de Velde et al.5 that focused on the same topic and reviewed the included studies. The inclusion and exclusion criteria for the present study were more restrictive; therefore, we only included a subset of the Van de Velde et al. studies after review. We excluded articles published before 2000 because our pilot search showed that CDS systems of that era often relied heavily on manual data entry and paper-based charts and could not automatically extract data from laboratories or other systems. To test the accuracy of this approach, we ran the full search strategy and screened studies from January 2015 to December 2016 to assess the inclusion overlap. Second, since the Van de Velde et al.5 search ended in December 2016, we updated it to include articles published from January 2017 to January 2020. The search strategy covered MEDLINE and EMBASE through the Ovid platform, the Cochrane Central Register of Controlled Trials, and CINAHL.

We adapted the Participants-Intervention-Comparison-Outcome framework22 to focus on clinician behavior. Table 1 provides the inclusion and exclusion criteria. Three reviewers (SL, TR, and CW) participated in the study selection using Covidence. Two reviewers (SL, TR) conducted a preliminary screening of titles and abstracts, followed by full-text screening. Disagreements were discussed and resolved in consultation with the third reviewer.

Table 1.

Inclusion and exclusion criteria

Participants (P) were defined as healthcare providers working in inpatient acute care hospitals and in outpatient primary and subspecialty care clinics. We excluded studies conducted at nursing homes or public health departments; studies based on simulated scenarios; and CDS that targeted only patients.
Intervention (I) included CDS tools that were displayed on a screen, integrated into the EHR, and automatically extracted at least some data from the EHR. All types of CDS were included (eg, alerts, reminders, clinical guidelines, order sets, dashboards, documentation templates, diagnostic support, and clinical workflow tools).23
Comparison (C) was the same CDS tool with 1 or more additional factors.
Outcomes (O) were restricted to measurements of clinician behavior.
Design: Only RCTs or cluster RCTs were included.

(CDS: clinical decision support; EHR: electronic health record; RCT: randomized controlled trial)

Data extraction

We extracted the study design, study duration, care settings, country, number of participants, and measurements of clinician behavior. We also extracted the sample sizes from each study arm, the adjusted effect size and the associated standard error as reported by authors in the manuscripts. If the adjusted values were not reported, then we extracted the intraclass correlation (ICC) value, mean, and standard deviation (for continuous measurements), or odds ratio (for dichotomous measurements). If any required information was missing, we contacted the study’s corresponding author to request the information.

Risk of bias assessment

The Cochrane Collaboration Risk of Bias Tool was used to evaluate the risk of bias in individual studies.24 For RCTs, 5 domains are assessed: 1) randomization, 2) deviations from intended interventions, 3) missing outcomes, 4) outcome measurement, and 5) selective reporting of results. For cluster RCTs, the timing of identification and recruitment of participants was added to the RCT domains.25 If all domains were deemed low risk, then the study was classified as having a low risk of overall bias. If the study was high risk in at least 1 domain, then it was rated as having a high risk of overall bias. Otherwise, the study was deemed medium risk. After calibration with the other raters, 1 reviewer rated the included studies.

Coding constructs

In our previous study, we developed a reliable coding schema to characterize CDS interventions (Cohen’s κ = 0.805).17 The coding rules were iteratively developed by 3 raters (SL, TR, and CW): independent reviews were conducted, disagreements discussed, and coding rules revised. This process continued until Cohen’s κ reached ≥0.80 and was maintained. The final round of independent review produced a high level of interrater agreement (Cohen’s κ = 0.805). Following the coding schema, we coded each arm of the included studies after establishing high reliability. Definitions of the constructs in the theoretical model are shown in Table 2. Except for facilitating conditions, which were listed narratively, the remaining constructs were assessed at 3 levels: high, medium, and low. The difference in coding between the 2 arms was the final code for that study. For example, if the level of a specific construct between the 2 study arms changed from high to medium, then the code for that construct would be −1. For facilitating conditions, we used the difference between the counts of conditions in the control arm and the intervention arm.

Table 2.

Definitions of constructs in the theoretical model

Construct Definition
Performance Expectancy The user’s belief that the CDS intervention can improve work performance.
Effort Expectancy The perceived ease of use, or perceived effort using the CDS intervention.
Social Influence The degree of social pressure or social expectations to use the CDS intervention.
Facilitating Conditions The infrastructure (both technical and organizational) and the strategies used to support and implement the CDS intervention.
Motivational Control The user’s degree of control over the CDS intervention.
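As an illustration, the arm-difference coding rule can be sketched as follows (the function and mapping names are hypothetical, not from the paper):

```python
# Hypothetical sketch of the arm-difference coding rule: construct
# levels are ordered low < medium < high, and a study's code is the
# intervention-arm level minus the control-arm level.
LEVEL = {"low": 0, "medium": 1, "high": 2}

def construct_delta(intervention_level: str, control_level: str) -> int:
    """E.g. a drop from high (control) to medium (intervention) codes as -1."""
    return LEVEL[intervention_level] - LEVEL[control_level]
```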

Data analysis

To determine the relationships between constructs with clinician behavior, we applied random effects meta-regression with 2 different models: 1) univariate analysis and 2) multivariate analysis. Before developing the models, we summarized effects across studies and investigated heterogeneity.

The effect size of interest was defined as the difference between the study arm that received a CDS intervention manipulation and the arm that received the CDS intervention alone, without the manipulation. For continuous measurements of clinician behavior, the effect size was measured as the standardized mean difference between the 2 groups (Cohen’s d) to account for different measurement scales.26 For dichotomous measurements, we calculated the odds ratio and then converted it to Cohen’s d. The effect size was weighted by its standard error, which is related to the sample size. If studies reported multiple measurements in 1 category, then the primary measurement was chosen. If the primary measurement was unknown, we used the average effect size across all measurements in that category. To interpret effect sizes, we used the following conventions: 0.2 small, 0.5 medium, and 0.8 large.26
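The paper does not state which odds-ratio-to-d conversion was used; a common choice (assumed here, not confirmed by the authors) is the logit-scale conversion, d = ln(OR) × √3 / π:

```python
import math

def odds_ratio_to_d(odds_ratio: float) -> float:
    """Logit-scale conversion of an odds ratio to Cohen's d: the
    log-odds divided by pi / sqrt(3), the standard deviation of the
    logistic distribution. An OR of 1 maps to d = 0 (no effect)."""
    return math.log(odds_ratio) * math.sqrt(3) / math.pi
```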

To avoid unit-of-analysis errors when including cluster RCTs in the meta-analysis, we corrected the analysis following a method suggested in the Cochrane handbook.25 We multiplied the standard error (unadjusted for clustering) by the square root of the design effect:

Design effect = 1 + (average cluster size − 1) × ICC

If the ICC for a specific study could not be obtained, we used 0.03 as an estimate based on the median of ICCs from other included studies. In addition, we conducted a sensitivity analysis for imputing ICC from 0.01 to 0.99 to evaluate the robustness of the model.
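Putting the correction together, the standard-error inflation might look like this (the function name is illustrative; the 0.03 default mirrors the median ICC imputed in this study):

```python
import math

def cluster_adjusted_se(se: float, avg_cluster_size: float, icc: float = 0.03) -> float:
    """Inflate an unadjusted standard error by the square root of the
    design effect, 1 + (average cluster size - 1) * ICC, per the
    Cochrane handbook correction for cluster RCTs."""
    design_effect = 1.0 + (avg_cluster_size - 1.0) * icc
    return se * math.sqrt(design_effect)
```

A cluster size of 1 leaves the standard error unchanged (design effect = 1), while larger clusters or higher ICCs widen it.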

The heterogeneity of studies was assessed using the omnibus homogeneity test (Q) and the I2 statistic: 0% to 40% heterogeneity might not be important; 30% to 60% may represent moderate heterogeneity; 50% to 90% may represent substantial heterogeneity; and 75% to 100% may represent considerable heterogeneity.27 We assessed publication bias using a funnel plot.28,29
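For reference, I2 follows directly from the Q statistic and its degrees of freedom (Higgins' formula; a minimal sketch):

```python
def i_squared(q: float, df: int) -> float:
    """Higgins' I^2 as a percentage: the share of total variability
    attributable to between-study heterogeneity rather than sampling
    error, floored at 0 when Q <= df."""
    if q <= df:
        return 0.0
    return (q - df) / q * 100.0
```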

Meta-regression provides regression coefficients as the measure of impact for each factor while controlling for the others, along with the associated P values and confidence intervals. The regression coefficient indicates the change in effect size for a 1-unit increase in the variable. For example, a positive coefficient for performance expectancy means that increasing the level of performance expectancy by 1 unit is associated with an increase in the effect size on clinician behavior equal to the coefficient. We reported R2 statistics to describe the variance explained. Factors were considered significant if P < .05. The meta-regression analysis was performed using the metafor package30 in R.
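The analysis itself was run with metafor in R; as a language-agnostic sketch of the underlying computation, an inverse-variance-weighted meta-regression reduces to weighted least squares (this simplified fixed-effect version omits the between-study variance component that a random-effects model such as metafor's `rma` estimates):

```python
import numpy as np

def weighted_meta_regression(effect_sizes, std_errors, covariates):
    """Inverse-variance-weighted meta-regression (fixed-effect
    simplification): solve (X'WX) beta = X'Wy with weights 1/se^2.
    Returns the intercept followed by one coefficient per covariate."""
    y = np.asarray(effect_sizes, dtype=float)
    X = np.column_stack([np.ones_like(y), np.asarray(covariates, dtype=float)])
    w = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    XtW = X.T * w  # broadcasting applies the weights column-wise
    return np.linalg.solve(XtW @ X, XtW @ y)
```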

RESULTS

Description of studies and data extraction

Out of 3,076 studies, we identified 34 based on the inclusion and exclusion criteria (Figure 1). The interrater reliability was 0.81. Most CDS interventions (n = 29, 85%) were in outpatient settings, with primary care being the most common (n = 20, 59%). Other studies were related to geriatrics, HIV, and pediatrics. Most of the included studies had 1 comparison (n = 22). The remaining 4 studies had 5 comparisons,31 3 comparisons,32,33 and 15 comparisons.34 For those 4 studies, we used the primary comparison. Supplementary Additional File 3 shows the change of construct along with brief descriptions of the control arm and the intervention arm for each included comparison. We excluded 6 studies35–42 because of a lack of sufficient data to calculate effect sizes.

Figure 1.

Flow diagram of included studies. (CDS: clinical decision support, EHR: electronic health record).

Risk of bias assessment

For 27 cluster RCTs, 6 studies (22%) were in the high risk of bias category, 18 studies (67%) had some concerns for bias, and the remaining 3 studies (11%) were rated as having a low risk of bias. For 7 RCT studies, 2 were in the high risk of bias category while the other 5 studies had some concerns of bias. Most studies had at least some concerns in the selective reporting bias because of the lack of published protocols. Figure 2 depicts the risk assessment of each domain.

Figure 2.

Risk of bias assessment for cluster randomized controlled trials (top) and randomized controlled trials (bottom).

Coding results

More than half of the comparisons between groups (n = 29, 56%) changed the levels of at least 2 constructs. Twenty-one comparisons (40%) changed only 1 construct, and 2 comparisons did not change any construct. Among the constructs, the most commonly changed was facilitating conditions (n = 37; 74%). Nineteen comparisons (38%) modified social influence, 21 comparisons (42%) adjusted the level of effort expectancy, 19 comparisons (38%) changed motivational control, and 11 comparisons (31%) affected the level of performance expectancy. The numbers of studies that changed 2 constructs simultaneously are listed in Table 3. Coding results for the included studies are listed in Supplementary Additional File 4.

Table 3.

Numbers of studies that changed 2 constructs simultaneously


Abbreviations: EE: effort expectancy; FC: facilitating conditions; MC: motivational control; PE: performance expectancy; SI: social influence.

Publication bias

The intercept from Egger’s test was 1.154 [−0.414, 2.722] with P = .143, indicating no significant publication bias. The funnel plot used to assess publication bias is shown in Supplementary Additional File 5.

Meta-regression

We used a multicollinearity test to assess whether multiple regression was feasible. The test found a maximum |r| value of 0.149, indicating small correlations; multiple regression was therefore feasible for this dataset. We developed 2 types of models: the first treated each construct separately, and the second considered the interactions between constructs. The sensitivity analysis is in Supplementary Additional File 6.

Results of the univariate analysis are reported in Table 4. The meta-regression model without interaction terms accounted for a moderate level of heterogeneity (R2 = 59.42%) (Supplementary Additional File 7). The model revealed that, among the theoretical constructs, effort expectancy (estimated coefficient = −0.162, P = .0003) was a significant predictor of clinician behavior. Motivational control approached significance (estimated coefficient = −0.089, P = .079). CDS interventions that required more effort or provided more control were associated with less impact on clinician behavior.

Table 4.

Results of the univariate analysis

Estimated coefficient 95% CI Standard error P value
Performance Expectancy 0.048 [−0.108, 0.204] 0.080 .547
Effort Expectancy −0.162 [−0.250, −0.074] 0.045 .0003
Social Influence 0.011 [−0.099, 0.122] 0.056 .839
Motivational Control −0.089 [−0.188, 0.010] 0.051 .079
Facilitating Conditions 0.038 [−0.039, 0.114] 0.039 .332

To explore the interactions between theoretical constructs, we also developed a multivariate meta-regression model (Table 5). The overall model explained a larger amount of the heterogeneity across studies (R2 = 88.32%). From this model, we found 2 significant terms: facilitating conditions (estimated coefficient = 0.094; P = .013) and performance expectancy with motivational control (estimated coefficient = 1.029; P = .022). Social influence with motivational control was close to statistical significance (estimated coefficient = 0.146; P = .064). Providing more facilitating conditions by itself increased the likelihood of successfully changing clinician behavior. When users were allowed greater motivational control, high performance expectancy and high social influence were associated with greater changes in clinician behavior.

Table 5.

Results of the multivariate analysis

Estimated coefficient 95% CI Standard error P value
Performance Expectancy 0.291 [−0.070, 0.652] 0.184 .114
Effort Expectancy −0.357 [−0.813, 0.099] 0.233 .125
Social Influence 0.256 [−0.081, 0.592] 0.172 .136
Motivational Control −0.114 [−0.349, 0.121] 0.120 .343
Facilitating Conditions 0.094 [0.020, 0.168] 0.038 .013
Performance Expectancy with Effort Expectancy 0.169 [−0.344, 0.682] 0.262 .520
Performance Expectancy with Social Influence 0.254 [−0.071, 0.580] 0.166 .126
Performance Expectancy with Motivational Control 1.029 [0.151, 1.906] 0.448 .022
Performance Expectancy with Facilitating Conditions −0.155 [−0.377, 0.066] 0.113 .170
Effort Expectancy with Social Influence 0.024 [−0.295, 0.342] 0.162 .885
Effort Expectancy with Motivational Control −0.091 [−0.475, 0.293] 0.196 .641
Effort Expectancy with Facilitating Conditions 0.142 [−0.093, 0.377] 0.120 .237
Social Influence with Motivational Control 0.146 [−0.008, 0.301] 0.079 .064
Social Influence with Facilitating Conditions −0.106 [−0.276, 0.065] 0.087 .224
Motivational Control with Facilitating Conditions −0.165 [−0.416, 0.086] 0.128 .198

DISCUSSION

We conducted a comparative effectiveness review of research that compared a baseline CDS against an enhancement for that CDS. To more fully understand the mechanisms and reasons why some of these interventions were successful and others were not, we applied the UTAUT model in a meta-regression and found a complex picture that goes beyond univariate explanations of effects. Although there were several significant univariate effects, specifically effort expectancy and facilitating conditions, the more complete explanations are visible in the interaction effects in terms of the specific manipulations in CDS formats. Overall, using the UTAUT model and motivational control could guide the CDS design and implementation to improve CDS effectiveness.

The interactions between the constructs can help explain the complexity of how CDS enhancements or changes become effective. First, for the interaction between performance expectancy (usefulness) and motivational control, we found that a high level of motivational control along with a high level of performance expectancy positively contributed to change in clinician behavior. This suggests that passive reminders (high level of motivational control) can be effective as long as providers believe that using the CDS will improve job performance. Ensuring that providers hold that belief requires substantial design testing as well as good marketing at the time of implementation. Second, the interaction between social influence and motivational control highlights how institutional and social factors, both within and outside the CDS, can affect behavior. We found that even when users could control their interaction with a CDS intervention (high level of motivational control), a high degree of social influence positively enhanced their behavior. This finding suggests that organizational influence, positive reinforcement, and strong leadership early on could encourage adoption. As an example from the included studies, Ziemer et al. found significantly improved clinician behavior after combining a passive dashboard intervention with 1-to-1 feedback (high levels of social influence and motivational control).38 In contrast, for CDS with low levels of motivational control, increasing the level of social influence or performance expectancy might still not increase CDS effectiveness. For example, a study by Feldstein et al. compared alerts plus an education session (a high level of social influence and a low level of motivational control) with alerts alone and found that adding education sessions did not improve the change in the proportion of prescriptions for heavily marketed medications.36

Characterization of CDS types in terms of the constructs

It is useful to discuss how the interactions of the constructs function across various CDS types. Types or categories of CDS include order sets, documentation templates, reminders, and alerts. Of these categories, order sets and documentation templates have historically had the most success.43,44 A direct comparison within each category was not in the scope of this study, but it is possible to speculate why certain interventions may have been more successful than others. Most studies have examined effort expectancy (ease of use) and performance expectancy (usefulness), but few have examined social influence (except when patients are targets of the CDS)5 or motivational control, and none have examined the interactions of these constructs. Order sets are among the most widespread CDS implementations and are also probably among the most successful in changing users’ behavior.43–46 One possible reason for their success is that, in terms of the model, order sets have a high level of motivational control, a low level of effort expectancy, and are likely to have a high level of performance expectancy. In studies where order sets were not adopted, the most common reason noted was low clinical relevance.47 Both order sets and documentation templates have been recommended by a previous expert panel study as more effective in facilitating providers’ work processes.48

Although passive reminders may have high levels of motivational control, they may not be adopted if they have low levels of social influence and/or performance expectancy. In this situation, adoption could be enhanced by increasing social influence, such as through education sessions, clinical champions, and timely feedback. In addition, if passive reminders are based on inaccurate data, the performance expectancy is likely to be low, and the effort expectancy will increase. Finally, interruptive alerts are used across many settings but have been widely criticized. They provide low levels of motivational control. Even though they often work to improve clinician behavior due to their automaticity, they also create a negative motivational environment as users resent and rebel against the loss of autonomy. Alert fatigue is a behavioral indication of this loss of autonomy, which can lead to patient safety risks. Given this concern, practitioners should not limit the delivery of CDS to alerts, but should also consider selecting CDS types that have a high level of motivational control while increasing social influence and performance expectancy.

Significance of comparative effectiveness studies

In contrast to many previous CDS reviews,3,4,7 our study used only head-to-head trials, which is the best design to address the study question and is highly appropriate for the current field of CDS. Because CDS interventions have become more widespread, comparing CDS to usual care will not provide much useful information for optimal design or implementation. In addition to our study, Kawamoto et al. conducted the first systematic review for comparative effectiveness studies and applied meta-regression in 2005.1 However, their selected literature was from almost 20 years ago. In addition, their meta-regression analysis used binary outcome measures instead of effect sizes, did not analyze potential interactions between factors, and did not use constructs from a theoretical model. Another systematic review that compared effectiveness of 2 CDS interventions with and without factors was published in 2018.5 However, they reported median values with interquartile ranges rather than using a meta-regression to calculate the statistical significance of each factor. This calculation treated each study equally, rather than weighting studies based on the sample size.

Attempts have been made to identify factors associated with CDS implementation or technology adoption by applying models or concepts. However, previous studies have not examined causal mechanisms of CDS effectiveness using a theoretical model of user behavior. For example, Kilsdonk et al. applied the human, organization, and technology-fit (HOT-fit) framework to map potential factors that might affect CDS implementation.49 The HOT-fit framework focuses on how to successfully implement information systems by examining factors related to the organization and the technology itself.50 The direct use of such concepts can provide some of the mechanisms explaining user acceptance of CDS, but it is limited in terms of theoretical consistency of measurements and mechanisms of action. Powell et al. proposed and refined implementation strategies through a modified Delphi discussion with experts.12 Many of these implementation strategies are related to psychological constructs, but that relationship was not made explicit. Simply using such concepts in isolation is likely to miss other mechanisms and to prevent a comprehensive analysis of CDS effectiveness. In summary, the UTAUT model is more appropriate for understanding the effectiveness of CDS.

Using the theoretical constructs in the UTAUT model could help improve the effectiveness of CDS in 3 ways: 1) it supports a complex approach that allows clinical, institutional, and regulatory concerns to be considered at the same time; 2) it minimizes cognitive load, burden, and alert fatigue; and 3) it generates hypotheses, which, hopefully, will promote more direct manipulation of factors to see what works. This study focused on the design of CDS (eg, content, display, rules of use). Specifically, we suggest that UTAUT constructs be incorporated into CDS design to maximize change in user behavior.

Limitations

The analysis was likely affected by incomplete reporting of clinician behavior measurements in primary studies. Eight cluster RCTs did not report ICC values, and their authors did not respond to e-mail requests, so we used the median value and imputed all possible ICC values in the sensitivity analysis. We excluded 8 studies because of insufficient data to calculate effect sizes. We therefore encourage researchers to report comprehensive results when conducting RCTs or cluster RCTs. Finally, the UTAUT constructs are not fully orthogonal and have some conceptual overlap, especially because the studies themselves did not base their designs on an explicit test of those constructs. Moreover, the UTAUT constructs were inferred from intervention descriptions rather than being directly categorized through surveys.

CONCLUSION

The aim of the present study was to understand factors contributing to CDS effectiveness by using the UTAUT model and motivational control. Three factors had positive impacts on clinician behavior: 1) low effort to use a CDS, 2) low controllability, and 3) provision of more infrastructure and implementation strategies to support the CDS. These univariate effects were modified in the multivariate analysis, which identified interactions between motivational control and 2 other constructs: performance expectancy and social influence. Therefore, a passive CDS could be effective if users believe the CDS is useful and/or social expectations to use the CDS intervention exist. The multivariate meta-regression model explained a large amount of the heterogeneity across studies (R2 = 88.32%). Overall, the UTAUT model combined with motivational control is an appropriate model for understanding factors associated with CDS effectiveness and for guiding the design, implementation, and optimization of CDS interventions.
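For readers unfamiliar with how such an interaction term is estimated, the sketch below shows a minimal inverse-variance weighted least squares meta-regression with an interaction between two binary study-level moderators. It is a fixed-effect simplification with entirely synthetic effect sizes, not our actual random-effects model or data; the moderator names echo the constructs discussed above purely for illustration.

```python
import numpy as np

# Synthetic effect sizes from 8 hypothetical studies (2 per moderator cell)
# and their sampling variances. pe = performance expectancy coded present,
# mc = motivational control coded present (eg, a passive CDS).
effect = np.array([0.10, 0.12, 0.30, 0.28, 0.00, 0.02, 0.70, 0.68])
var = np.array([0.02] * 8)
pe = np.array([0, 0, 1, 1, 0, 0, 1, 1])
mc = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Design matrix: intercept, main effects, and the pe x mc interaction.
X = np.column_stack([np.ones(8), pe, mc, pe * mc])
W = np.diag(1.0 / var)  # inverse-variance weights

# Weighted least squares: beta = (X'WX)^-1 X'W y
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effect)
# beta = [intercept, pe, mc, pe*mc]; a positive beta[3] means the pe effect
# is larger when mc is present, analogous to the interactions reported above.
```

A random-effects meta-regression additionally adds a between-study variance component to each study's weight, but the moderator and interaction structure is the same.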

LIST OF ABBREVIATIONS

CDS=Clinical decision support

UTAUT=Unified Theory of Acceptance and Use of Technology

RCTs=Randomized controlled trials

PRISMA=Preferred Reporting Items for Systematic Reviews and Meta-Analyses

ICC=Intraclass correlation

FUNDING

This study was supported by the University of Utah.

AUTHOR CONTRIBUTIONS

Substantial contributions to the conception and study design: CW, GDF, KK, TR, SL. Substantial contributions to data acquisition and data analysis: CW, TR, SL. Substantial contributions to draft the manuscript: CW, TR, SL. Substantial contributions to revise the manuscript: CW, GDF, KK, TR, SL. All authors have approved the final manuscript.

SUPPLEMENTARY MATERIAL

Supplementary material is available at Journal of the American Medical Informatics Association online.

DATA AVAILABILITY

The dataset supporting the conclusions of this article is included within the article (and its additional files).

CONFLICT OF INTEREST STATEMENT

KK reports honoraria, consulting, sponsored research, licensing, or codevelopment outside the submitted work in the past 3 years with McKesson InterQual, Hitachi, Pfizer, Premier, Klesis Healthcare, RTI International, Mayo Clinic, Vanderbilt University, the University of Washington, the University of California at San Francisco, MD Aware, and the US Office of the National Coordinator for Health IT (via ESAC and Security Risk Solutions) in the area of health information technology. KK was also an unpaid board member of the nonprofit Health Level Seven International health IT standard development organization; he is an unpaid member of the US Health Information Technology Advisory Committee; and he has helped develop a number of health IT tools which may be commercialized to enable wider impact. None of these relationships have direct relevance to the manuscript but are reported in the interest of full disclosure. Other authors do not have any conflicts of interest related to this study.


REFERENCES

  • 1. Kawamoto K, Houlihan CA, Balas EA, et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005; 330 (7494): 765.
  • 2. Moja L, Kwag KH, Lytras T, et al. Effectiveness of computerized decision support systems linked to electronic health records: a systematic review and meta-analysis. Am J Public Health 2014; 104 (12): e12–22.
  • 3. Roshanov PS, Fernandes N, Wilczynski JM, et al. Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials. BMJ 2013; 346: f657.
  • 4. Garg AX, Adhikari NKJ, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005; 293 (10): 1223–38.
  • 5. Van de Velde S, Heselmans A, Delvaux N, et al. A systematic review of trials evaluating success factors of interventions with computerised clinical decision support. Implement Sci 2018; 13 (1): 114.
  • 6. Kawamoto K, Lobach DF. Clinical decision support provided within physician order entry systems: a systematic review of features effective for changing clinician behavior. AMIA Annu Symp Proc 2003; 14: 361–5.
  • 7. Lobach D, Sanders GD, Bright TJ, et al. Enabling health care decisionmaking through clinical decision support and knowledge management. Evid Rep Technol Assess (Full Rep) 2012: 1–784.
  • 8. Fillmore CL, Rommel CA, Welch BM, et al. The perils of meta-regression to identify clinical decision support system success factors. J Biomed Inform 2015; 56: 65–8.
  • 9. Gilinsky AS, Dale H, Robinson C, et al. Efficacy of physical activity interventions in post-natal populations: systematic review, meta-analysis and content coding of behaviour change techniques. Health Psychol Rev 2015; 9 (2): 244–63.
  • 10. Dombrowski SU, Sniehotta FF, Avenell A, et al. Identifying active ingredients in complex behavioural interventions for obese adults with obesity-related co-morbidities or additional risk factors for co-morbidities: a systematic review. Health Psychol Rev 2012; 6 (1): 7–32.
  • 11. Atkins L, Francis J, Islam R, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci 2017; 12 (1): 77.
  • 12. Powell BJ, Waltz TJ, Chinman MJ, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci 2015; 10: 21.
  • 13. Campbell M, Fitzpatrick R, Haines A, et al. Framework for design and evaluation of complex interventions to improve health. BMJ 2000; 321 (7262): 694–6.
  • 14. Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci 2011; 6: 42.
  • 15. Michie S, Johnston M, Francis J, et al. From theory to intervention: mapping theoretically derived behavioural determinants to behaviour change techniques. Appl Psychol 2008; 57 (4): 660–80.
  • 16. Venkatesh V, Morris MG, Davis GB, et al. User acceptance of information technology: toward a unified view. MIS Q 2003; 27: 425.
  • 17. Liu S, Reese TJ, Kawamoto K, et al. A systematic review of theoretical constructs in CDS literature. BMC Med Inform Decis Mak 2021; 21 (1): 102.
  • 18. Liu S, Reese TJ, Kawamoto K, et al. Toward optimized clinical decision support: a theory-based approach. In: 2020 IEEE International Conference on Healthcare Informatics (ICHI). IEEE; 2020: 1–2. doi: 10.1109/ICHI48887.2020.9374346.
  • 19. Sox HC. Defining comparative effectiveness research. Med Care 2010; 48 (Suppl 6): S7–8.
  • 20. Shamseer L, Moher D, Clarke M, et al.; the PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ 2015; 349: g7647.
  • 21. Higgins JP, Green S. Cochrane Handbook for Systematic Reviews of Interventions. https://training.cochrane.org/handbook/current Accessed January 27, 2020.
  • 22. Fineout-Overholt E, Johnston L. Teaching EBP: asking searchable, answerable clinical questions. Worldviews Evid Based Nurs 2005; 2 (3): 157–60.
  • 23. Osheroff JA, Teich JM, Middleton B, et al. A roadmap for national action on clinical decision support. J Am Med Inform Assoc 2007; 14 (2): 141–5.
  • 24. Sterne JAC, Savović J, Page MJ, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ 2019; 366: l4898. doi:10.1136/bmj.l4898
  • 25. Higgins JPT, Eldridge S, Li T. Chapter 23: including variants on randomized trials. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, eds. Cochrane Handbook for Systematic Reviews of Interventions Version 6.2. Cochrane; 2021. www.training.cochrane.org/handbook.
  • 26. Cohen J. Statistical Power Analysis for the Behavioral Sciences. New York: Routledge; 2013.
  • 27. Higgins JPT, Thompson SG, Deeks JJ, et al. Measuring inconsistency in meta-analyses. BMJ 2003; 327 (7414): 557–60.
  • 28. Rothstein HR, Sutton AJ, Borenstein M. Publication bias in meta-analysis. In: Publication Bias in Meta-Analysis. Chichester, UK: John Wiley & Sons; 2006: 1–7.
  • 29. Sterne JAC, Sutton AJ, Ioannidis JPA, et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 2011; 343: d4002.
  • 30. Schwarzer G, Carpenter JR, Rücker G. Meta-Analysis with R. Cham: Springer International Publishing; 2015.
  • 31. Scheepers-Hoeks A-MJ, Grouls RJ, Neef C, et al. Physicians’ responses to clinical decision support on an intensive care unit: comparison of four different alerting methods. Artif Intell Med 2013; 59 (1): 33–8.
  • 32. Bosworth HB, Olsen MK, Dudley T, et al. Patient education and provider decision support to control blood pressure in primary care: a cluster randomized trial. Am Heart J 2009; 157 (3): 450–6.
  • 33. Leung S, Zheng WY, Sandhu A, et al. Feedback and training to improve use of an electronic prescribing system: a randomised controlled trial. Stud Health Technol Inform 2017; 239: 63–9.
  • 34. Meeker D, Linder JA, Fox CR, et al. Effect of behavioral interventions on inappropriate antibiotic prescribing among primary care practices: a randomized clinical trial. JAMA 2016; 315 (6): 562–70.
  • 35. Bloomfield HE, Nelson DB, Van Ryn M, et al. A trial of education, prompts, and opinion leaders to improve prescription of lipid modifying therapy by primary care physicians for patients with ischemic heart disease. Qual Saf Health Care 2005; 14 (4): 258–63.
  • 36. Feldstein AC, Smith DH, Perrin N, et al. Reducing warfarin medication interactions: an interrupted time series evaluation. Arch Intern Med 2006; 166 (9): 1009–15.
  • 37. Simon SR, Smith DH, Feldstein AC, et al. Computerized prescribing alerts and group academic detailing to reduce the use of potentially inappropriate medications in older people. J Am Geriatr Soc 2006; 54 (6): 963–8.
  • 38. Ziemer DC, Doyle JP, Barnes CS, et al. An intervention to overcome clinical inertia and improve diabetes mellitus control in a primary care setting: Improving Primary Care of African Americans with Diabetes (IPCAAD) 8. Arch Intern Med 2006; 166 (5): 507–13.
  • 39. Kenealy T, Arroll B, Petrie KJ. Patients and computers as reminders to screen for diabetes in family practice: randomized-controlled trial. J Gen Intern Med 2005; 20 (10): 916–21.
  • 40. McCarthy DM, Curtis LM, Courtney DM, et al. A multifaceted intervention to improve patient knowledge and safe use of opioids: results of the ED EMC2 randomized controlled trial. Acad Emerg Med 2019; 26 (12): 1311–25.
  • 41. Thomas KEH, Kisely S, Urrego F. Electronic heath record prompts may increase screening for secondhand smoke exposure. Clin Pediatr (Phila) 2018; 57 (1): 27–30. doi:10.1177/0009922816688261
  • 42. Werk LN, Diaz MC, Cadilla A, et al. Promoting adherence to influenza vaccination recommendations in pediatric practice. J Prim Care Community Health 2019; 10: 215013271985306.
  • 43. Payne TH, Hoey PJ, Nichol P, Lovis C. Preparation and use of preconstructed orders, order sets, and order menus in a computerized provider order entry system. J Am Med Inform Assoc 2003; 10 (4): 322–9.
  • 44. Wright A, Sittig DF, Carpenter JD. Order sets in computerized physician order entry systems: an analysis of seven sites. AMIA Annu Symp Proc 2010; 2010: 892–6.
  • 45. Mulvehill S, Schneider G, Cullen CM, et al. Template-guided versus undirected written medical documentation: a prospective, randomized trial in a family medicine residency clinic. J Am Board Fam Pract 2005; 18 (6): 464–9.
  • 46. Lorenzetti DL, Quan H, Lucyk K, et al. Strategies for improving physician documentation in the emergency department: a systematic review. BMC Emerg Med 2018; 18 (1): 36.
  • 47. Hulse NC, Del Fiol G, Bradshaw RL, et al. Towards an on-demand peer feedback system for a clinical knowledge base: a case study with order sets. J Biomed Inform 2008; 41 (1): 152–64.
  • 48. Wright A, Phansalkar S, Bloomrosen M, et al. Best practices in clinical decision support. Appl Clin Inform 2010; 1 (3): 331–45.
  • 49. Kilsdonk E, Peute LW, Jaspers MWM. Factors influencing implementation success of guideline-based clinical decision support systems: a systematic review and gaps analysis. Int J Med Inform 2017; 98: 56–64.
  • 50. Yusof MM, Kuljis J, Papazafeiropoulou A, et al. An evaluation framework for Health Information Systems: human, organization, and technology-fit factors (HOT-fit). Int J Med Inform 2008; 77 (6): 386–98.



Articles from Journal of the American Medical Informatics Association : JAMIA are provided here courtesy of Oxford University Press
