Implementation Science. 2015 Aug 7;10:109. doi: 10.1186/s13012-015-0295-0

Use of concept mapping to characterize relationships among implementation strategies and assess their feasibility and importance: results from the Expert Recommendations for Implementing Change (ERIC) study

Thomas J Waltz 1,2, Byron J Powell 3, Monica M Matthieu 4,5,10, Laura J Damschroder 2, Matthew J Chinman 6,7, Jeffrey L Smith 5,10, Enola K Proctor 8, JoAnn E Kirchner 5,9,10

Abstract

Background

Poor terminological consistency for core concepts in implementation science has been widely noted as an obstacle to effective meta-analyses. This inconsistency is also a barrier for those seeking guidance from the research literature when developing and planning implementation initiatives. The Expert Recommendations for Implementing Change (ERIC) study aims to address one area of terminological inconsistency: discrete implementation strategies involving one process or action used to support a practice change. The present report describes the second stage of the ERIC project, which provides initial validation of the compilation of 73 implementation strategies identified in the first stage.

Findings

Purposive sampling was used to recruit a panel of experts in implementation science and clinical practice (N = 35). These key stakeholders used concept mapping sorting and rating activities to place the 73 implementation strategies into groups of similar strategies and to rate each strategy’s relative importance and feasibility. Multidimensional scaling analysis provided a quantitative representation of the relationships among the strategies, all but two of which were found to be conceptually distinct from the others. Hierarchical cluster analysis supported organizing the 73 strategies into nine categories. The ratings data identify the strategies the panel regarded as most important and most feasible.

Conclusions

This study provides initial validation of the implementation strategies within the ERIC compilation as being conceptually distinct. The categorization and strategy ratings of importance and feasibility may facilitate the search for, and selection of, strategies that are best suited for implementation efforts in a particular setting.

Electronic supplementary material

The online version of this article (doi:10.1186/s13012-015-0295-0) contains supplementary material, which is available to authorized users.

Keywords: Concept mapping, Implementation research, Implementation strategies, Mental health, US Department of Veterans Affairs

Background

Concerns about poor consistency in implementation science (IS) terminology have led researchers to characterize the field as a “Tower of Babel” [1]. Inconsistent terminology complicates literature searches, and researchers have found that search strategy yield and precision indices for implementation and quality improvement studies are moderate at best [2, 3]. This limits meta-analytic and replication efforts aimed at rigorously evaluating the effectiveness of implementation strategies and the value of existing literature for those enacting implementation initiatives. The science and practice of implementation would be greatly facilitated by a parsimonious nomenclature of conceptually distinct implementation strategies [2, 4–7].

Recently, Powell et al. [8] reviewed the health and mental health literature (including 41 compilations and reviews) and proposed a compilation of 68 discrete implementation strategies involving one action or process. This compilation served as the starting point for a subsequent multi-stage project called Expert Recommendations for Implementing Change (ERIC) [9]. The ERIC project’s first stage involved expert panelists (N = 71) using a modified Delphi process to revise the compilation, which resulted in an updated compilation of 73 discrete implementation strategies [10].

The aim of the ERIC project’s second stage, presented here, was to obtain preliminary validation of the compilation of 73 implementation strategies by studying the relationships among the strategies and obtaining relative importance and feasibility ratings for each strategy. Studying the relationships among the strategies supports evaluating whether the strategies are conceptually distinct from one another as well as how they can be organized into conceptually relevant groupings. The latter also serves the practical purpose of making it easier for stakeholders to consider the range of implementation strategies by thematic cluster. The importance and feasibility ratings provide insight into the perceived applicability of the strategies, and it is of general interest which strategies experts rate relatively high or low.

Method

A purposive sampling procedure was used to recruit a panel of implementation science and clinical experts (N = 35) to participate in concept mapping and rating tasks [9, 10]. A detailed description of the procedures has been published [9]; a summary is provided here. Concept mapping is a mixed-method procedure for engaging stakeholder groups in a structured conceptualization process [11]. This process supports visually representing the relationships among a set of related concepts, empirically clustering them into conceptually distinct categories, and rating them on multiple dimensions.

The Concept Systems Global MAX™ [12] web platform was used for the panel’s sorting and rating tasks and data analysis. A more detailed introduction to concept mapping can be found in Trochim and Kane [13]. For the sorting task, participants were asked to sort virtual cards for each of the 73 strategies, accompanied by their definitions, into piles as they deemed appropriate. Participants were also asked to rate each strategy for importance and feasibility on scales ranging from 1 (relatively unimportant/not at all feasible) to 5 (extremely important/extremely feasible). These global ratings were prefaced by the following instructions: “Please select a number from 1 to 5 for each discrete implementation strategy to provide a rating in terms of how important (feasible) you think it is. Keep in mind that we are looking for relative importance (feasibility); use all the values in the rating scale to make distinctions.” Participants were able to select which set of activities they wanted to complete first and could work on the sorting and rating activities over multiple online sessions, at their convenience, before submitting their responses.
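For readers unfamiliar with how sorting data feed the subsequent analyses, the following minimal sketch shows how individual sorts aggregate into the similarity matrix that multidimensional scaling operates on. The pile data, variable names, and layout are illustrative assumptions, not the Concept Systems export format.

```python
import numpy as np

# Hypothetical sort data: one dict per panelist mapping pile names to
# the strategy numbers placed in that pile (illustrative, not real data).
sorts = [
    {"monitoring": [5, 26, 27], "payment": [66, 70]},
    {"feedback":   [5, 26],     "finance": [27, 66, 70]},
]
N = 73  # strategies are numbered 1..73 in Table 1

# sim[i, j] counts how many panelists sorted strategies i and j into the
# same pile; this similarity matrix is the input to the scaling analysis.
sim = np.zeros((N + 1, N + 1), dtype=int)
for panelist in sorts:
    for pile in panelist.values():
        for i in pile:
            for j in pile:
                if i != j:
                    sim[i, j] += 1

# Strategies always sorted together (like #66 and #70 in the study)
# reach the maximum possible count: the number of valid sorters.
print(sim[66, 70])  # -> 2 with the toy data above
```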

Multidimensional scaling and hierarchical cluster analyses were conducted to produce visual representations of the relationships among the strategies. Descriptive statistics for the importance and feasibility ratings were calculated, and each strategy’s importance and feasibility scores were plotted on a graph. The resulting scatterplot was divided into four quadrants or “Go-zones” (i.e., I, II, III, IV) using the mean of each dimension; for example, quadrant I contains strategies with values above the means on both dimensions. The Go-zone quadrants column in Table 1 reflects the combined relative importance and feasibility of each strategy.
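As an illustration of the scaling step, the sketch below projects a pairwise distance matrix into two dimensions with scikit-learn, producing coordinates like those behind the point map in Fig. 1. The random toy matrix and the choice of metric MDS are simplifying assumptions; concept mapping conventionally applies nonmetric MDS to the sorting-derived similarities.

```python
import numpy as np
from sklearn.manifold import MDS

# Toy stand-in for the 73 x 73 distance matrix derived from the sorting
# data; in the real analysis, entry (i, j) shrinks the more panelists
# sorted strategies i and j into the same pile.
rng = np.random.default_rng(0)
raw = rng.random((73, 73))
dist = (raw + raw.T) / 2      # symmetrize
np.fill_diagonal(dist, 0.0)   # zero self-distance

# dissimilarity="precomputed" tells sklearn we are passing distances
# directly rather than feature vectors.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)  # coords[k] = (x, y) position of strategy k
```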

Table 1.

A summary of the 73 implementation strategies, organized by cluster with mean importance and feasibility ratings

No.  Strategy  Importance  Feasibility  Go-zone quadrant
Use evaluative and iterative strategies 4.19 4.01
4  Assess for readiness and identify barriers and facilitators 4.60 4.57 I
5  Audit and provide feedback 4.40 4.13 I
56  Purposely reexamine the implementation 4.40 4.03 I
26  Develop and implement tools for quality monitoring 4.37 3.63 I
27  Develop and organize quality monitoring systems 4.33 3.37 I
23  Develop a formal implementation blueprint 4.30 4.47 I
18  Conduct local needs assessment 4.27 4.33 I
61  Stage implementation scale up 3.97 3.77 I
46  Obtain and use patients/consumers and family feedback 3.67 3.80 I
14  Conduct cyclical small tests of change 3.63 4.03 I
Provide interactive assistance 3.67 3.29
33  Facilitation 4.13 3.77 I
54  Provide local technical assistance 3.97 3.20 IV
53  Provide clinical supervision 3.83 3.10 IV
8  Centralize technical assistance 2.73 3.10 III
Adapt and tailor to context 3.59 3.30
63  Tailor strategies 4.37 4.00 I
51  Promote adaptability 3.90 3.57 I
67  Use data experts 3.23 3.13 III
68  Use data warehousing techniques 2.87 2.50 III
Develop stakeholder interrelationships 3.47 3.64
35  Identify and prepare champions 4.20 3.77 I
48  Organize clinician implementation team meetings 3.97 3.53 I
57  Recruit, designate, and train for leadership 3.93 3.20 IV
38  Inform local opinion leaders 3.90 4.03 I
6  Build a coalition 3.77 3.63 I
47  Obtain formal commitments 3.77 3.17 IV
36  Identify early adopters 3.70 3.70 I
17  Conduct local consensus discussions 3.63 4.07 I
7  Capture and share local knowledge 3.63 3.87 I
64  Use advisory boards and workgroups 3.40 3.87 I
65  Use an implementation advisor 3.30 3.70 I
45  Model and simulate change 3.30 3.20 II
72  Visit other sites 3.17 3.73 II
40  Involve executive boards 2.97 3.63 II
25  Develop an implementation glossary 2.87 4.57 II
24  Develop academic partnerships 2.83 3.40 II
52  Promote network weaving 2.70 2.77 III
Train and educate stakeholders 3.43 3.93
19  Conduct ongoing training 4.17 3.87 I
55  Provide ongoing consultation 4.17 3.63 I
29  Develop educational materials 3.80 4.83 I
43  Make training dynamic 3.67 4.00 I
31  Distribute educational materials 3.50 4.77 I
71  Use train-the-trainer strategies 3.33 3.50 I
15  Conduct educational meetings 3.27 4.50 I
16  Conduct educational outreach visits 3.10 4.07 II
20  Create a learning collaborative 3.10 3.43 II
60  Shadow other experts 2.87 3.37 II
73  Work with educational institutions 2.73 3.30 II
Support clinicians 3.23 3.06
32  Facilitate relay of clinical data to providers 4.17 3.43 I
58  Remind clinicians 3.23 3.77 II
30  Develop resource sharing agreements 3.07 3.13 III
59  Revise professional roles 3.00 2.30 III
21  Create new clinical teams 2.67 2.67 III
Engage consumers 3.25 2.95
41  Involve patients/consumers and family members 3.87 3.63 I
39  Intervene with patients/consumers to enhance uptake and adherence 3.50 3.07 IV
50  Prepare patients/consumers to be active participants 3.40 3.03 IV
37  Increase demand 3.30 2.33 II
69  Use mass media 2.17 2.70 III
Utilize financial strategies 2.86 2.09
34  Fund and contract for the clinical innovation 3.67 2.43 IV
1  Access new funding 3.57 2.40 IV
49  Place innovation on fee for service lists/formularies 3.40 2.10 IV
2  Alter incentive/allowance structures 3.17 2.23 III
42  Make billing easier 2.93 1.77 III
3  Alter patient/consumer fees 2.60 2.03 III
70  Use other payment schemes 2.30 1.87 III
28  Develop disincentives 2.17 2.13 III
66  Use capitated payments 1.97 1.80 III
Change infrastructure 2.40 2.01
44  Mandate change 3.23 2.63 III
12  Change record systems 2.83 2.23 III
11  Change physical structure and equipment 2.60 2.27 III
22  Create or change credentialing and/or licensure standards 2.23 1.47 III
13  Change service sites 2.20 2.20 III
9  Change accreditation or membership requirements 2.17 1.80 III
62  Start a dissemination organization 2.03 2.13 III
10  Change liability laws 1.87 1.33 III

Strategies are listed in rank order of mean importance rating, from highest to lowest, within each cluster. The importance rating scale ranged from 1 (relatively unimportant) to 5 (extremely important), and the feasibility scale ranged from 1 (not at all feasible) to 5 (extremely feasible). The rightmost column depicts the Go-zone quadrant into which each strategy falls based on the scale mean cutoffs (see Fig. 2). Go-zone quadrant I: Importance and feasibility ratings are both above the scale means. Go-zone quadrant II: Importance rating is below and feasibility rating is above the scale mean. Go-zone quadrant III: Importance and feasibility ratings are both below the scale means. Go-zone quadrant IV: Importance rating is above and feasibility rating is below the scale mean
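The quadrant logic described in this note is straightforward to express in code. Below is a minimal sketch; the function name, the toy rating values, and the use of the passed arrays’ own means as cutoffs are illustrative assumptions rather than the study’s actual analysis code.

```python
import numpy as np

def go_zone(importance, feasibility):
    """Assign each strategy to a Go-zone quadrant using the overall
    scale means as cutoffs (quadrant definitions as in the table note)."""
    imp = np.asarray(importance, dtype=float)
    fea = np.asarray(feasibility, dtype=float)
    imp_hi = imp > imp.mean()
    fea_hi = fea > fea.mean()
    return np.where(imp_hi & fea_hi, "I",
           np.where(fea_hi, "II",            # II: feasible but less important
           np.where(~imp_hi, "III", "IV")))  # IV: important but less feasible

# Tiny illustration with four made-up rating pairs (not the study data):
print(go_zone([4.6, 2.9, 2.2, 3.7], [4.6, 4.6, 1.8, 2.4]))
# -> ['I' 'II' 'III' 'IV']
```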

Results

Experts who participated in the concept mapping and rating tasks were affiliated with academic or healthcare institutions in the United States (n = 34) or Canada (n = 1). Thirty-two of the 35 experts provided valid sorts (>75 % of strategies sorted), and 30 provided importance and feasibility ratings for all strategies. Sixty-three percent of participants had expertise exclusively in IS, 29 % were experts in both IS and clinical practice, and 8 % indicated clinical practice expertise only. Sixty-nine percent of participants had some affiliation with the US Department of Veterans Affairs (VA), and most of these also held academic appointments in social science or health-related schools or departments.

Figure 1 presents a point map that visually represents the relationships among the 73 implementation strategies, with each point on the map representing a strategy. The strategies are numbered to aid in cross-referencing the spatial relationships of the points on the map with their labels enumerated in Table 1. All but two strategies were sorted as being conceptually distinct. Strategies #66 (Use capitated payments) and #70 (Use other payment schemes) were always sorted together. Two other strategies were proximal to one another despite being sorted together by only 4 of 32 panelists (#35 Identify and prepare champions and #57 Recruit, designate, and train for leadership), indicating that their proximity reflects similar patterns of relationships with the other strategies on the map rather than direct similarity to one another.

Fig. 1.

Fig. 1

Point and cluster map of all 73 strategies identified in the ERIC process. The map reflects the product of an expert panel (valid response n = 32) sorting 73 discrete implementation strategies into groupings by similarity. Each strategy is depicted by a yellow dot and numbered for cross-referencing with the strategies enumerated in Table 1. Spatial distances reflect how frequently the strategies were sorted together as similar: in general, the closer two points are, the more frequently those strategies were sorted together, and strategies distal from one another were infrequently, if ever, sorted together. These spatial relationships are relative to the sorting data obtained in this study; distances do not reflect an absolute relationship (i.e., a 5-mm distance in the present map does not reflect the same relationship as a 5-mm distance on a map from a different data set). The legend provides the label for each of the nine clusters of strategies. Dotted lines within the Develop stakeholder interrelationships cluster indicate where two separate clusters were merged into one large cluster due to conceptual similarity among their items. Dotted lines extending between other clusters archive the reassignment of strategies from their original cluster to a neighboring cluster with a better conceptual fit (i.e., strategies #48, #58, and #62)

The final clusters were developed over 3 weeks of deliberations by the ERIC investigative team. A 13-cluster starting point was selected because it is one standard deviation above the mean number of clusters typically obtained in concept mapping [14]; in this study, 69 % of respondents sorted statements into 13 or fewer piles. We sequentially reviewed cluster merges and reached consensus to merge down to nine conceptually distinct clusters. For example, two clusters shown in pale green at the center bottom of Fig. 1 (separated by dashed lines) were merged to form a single cluster labeled Develop stakeholder interrelationships, as the original clusters were judged insufficiently conceptually distinct.
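To make the merge-review process concrete, the sketch below extracts successive cluster solutions from a hierarchical clustering of the MDS coordinates with SciPy, walking from the 13-cluster starting point down to a 9-cluster solution. The random coordinates and Ward linkage are assumptions for illustration; the exact linkage used by the Concept Systems software is not specified here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# In concept mapping, hierarchical clustering is conventionally run on
# the 2-D MDS point coordinates rather than on the raw distance matrix.
# Random stand-in for the 73 x 2 point map behind Fig. 1:
rng = np.random.default_rng(0)
coords = rng.random((73, 2))

Z = linkage(coords, method="ward")  # merge history of all 73 strategies

# Review the candidate solutions from 13 clusters down to the final 9,
# examining which clusters each successive merge combines.
for k in range(13, 8, -1):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(f"{k} clusters -> sizes:", np.bincount(labels)[1:])
```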

When the team reviewed the clusters for conceptual clarity, three proposals were made to move individual strategies to neighboring clusters. First, #62 (Start a dissemination organization) was moved from the Engage consumers cluster to the Change infrastructure cluster, as it was judged more similar to infrastructure support for a practice change than to engaging consumers. Second, #48 (Organize clinician implementation team meetings) was moved from Adapt and tailor to context to the Develop stakeholder interrelationships cluster, as the latter has a greater interpersonal focus. Finally, #58 (Remind clinicians) was moved from Provide interactive assistance to the Support clinicians cluster because it is more administrative than interactive in focus. Unanimous consensus was reached on the final cluster arrangements. Additional file 1 provides a cluster-by-cluster visual tour of the concept map.

A multi-step process was used to determine labels for the final clusters. The list began with labels provided by expert panel members for their clusters that were most similar to the final cluster solutions. This list was supplemented with highly descriptive labels identified from the investigative team’s meeting minutes from cluster solution deliberations. Proposed criteria for developing cluster labels (Table 2) were introduced for team comment by one of the authors (LJD) along with suggested label revisions. These criteria were helpful in structuring iterative discussion among team members, the result of which was voted upon by the team and unanimously adopted.

Table 2.

Guidelines for cluster labels

1 Short and elegant; simpler is better.
2 Easier for users to remember.
3 No redundancies (e.g., labeling with “…the implementation process” which is redundant in mentioning implementation because all these are for implementation; and redundant also because implementation is a process).
4 Not too short; enough description to evoke the general purpose/intent/theme underlying the cluster of techniques that are included.
5 Short enough to make it clear to users that they must look at the individual techniques within the cluster/package to know/understand the activities. A fully descriptive title may lead users to believe the label says it all.
6 Begin with a verb.
7 Command structure (a type of sentence that gives advice or instructions, or expresses a request or command). These labels are not literal requests or commands, but they are words of action-oriented advice.
8 Use layperson terms to the extent possible.

Table 1 presents a summary of the 73 implementation strategies, organized by cluster with mean importance and feasibility ratings. There was a strong relationship (r = 0.7) between the feasibility and importance ratings, meaning that most strategies fell within either quadrant I (high importance and feasibility) or III (low importance/feasibility). However, there were still a number of strategies that were viewed as important but not as feasible (12 %, e.g., Access new funding), or feasible but less important (15 %, e.g., Remind clinicians). Clusters of strategies that are more immediate and concrete and are potentially more in the control of those tasked with supporting change (e.g., Use evaluative and iterative strategies, Train and educate stakeholders) tended to have higher importance and feasibility ratings. Clusters that are more strategic, but also potentially involve changing well-established systems (e.g., Change infrastructure, Utilize financial strategies), tended to have lower ratings. Figure 2 presents a graphic of the Go-zone data.

Fig. 2.

Fig. 2

Go-zone plot for all 73 strategies based on expert ratings. Note: the ranges of the x and y axes reflect the mean values obtained for all 73 discrete implementation strategies on each rating scale. The plot is divided into quadrants on the basis of the overall mean value for each rating scale; quadrant labels are depicted with Roman numerals next to the plot. Strategies in quadrant I fall above the mean for both the importance and feasibility ratings; these are the strategies with the highest consensus regarding their relatively high importance and feasibility. Conversely, quadrant III contains the strategies with consensus regarding their relatively low importance and feasibility. Quadrants II and IV contain strategies that were relatively high in feasibility or importance, respectively, but low on the other rating scale

Discussion

Results from this study provide initial validation for viewing the 73 implementation strategies as conceptually distinct. Cluster analyses of the concept mapping data support grouping strategies into nine clusters which have practical heuristic value for those looking to the ERIC compilation of implementation strategies for guidance. The importance and feasibility ratings for the strategies supported the formation of Go-zone quadrants that can be used to help decision makers prioritize which strategies to use when planning an implementation initiative.

While the concept mapping approach used in this study represents a strong methodological basis for evaluating whether the 73 implementation strategies are conceptually distinct and for organizing them by theme and potential applicability (i.e., the Go-zone analysis), there are notable limitations. Recruitment was restricted to time zones within the continental United States to minimize scheduling conflicts for elements of the ERIC project that required real-time interaction among participants. Thus, all but one of the 35 participants were from the United States, and 69 % had some affiliation with the VA. While concept maps with 30 or more participants are considered highly reliable [14], stakeholders from outside the United States, whose practice contexts may alter perceptions of the strategies’ interrelationships or ratings of their perceived importance and feasibility, might have produced different results.

Acknowledgements

This project is funded through the US Department of Veterans Affairs Veterans Health Administration Mental Health Quality Enhancement Research Initiative (QLP 55–025). The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the US government. Additionally, TJW received support from the VA Office of Academic Affiliations Advanced Fellowships Program in Health Services Research and Development at the Center for Mental Healthcare & Outcomes Research, and BJP received support from the National Institute of Mental Health (F31 MH098478), the Doris Duke Charitable Foundation (Fellowship for the Promotion of Child Well-Being), and the Fahs-Beck Fund for Research and Experimentation. MJC received support from the VISN 4 Mental Illness Research, Education, and Clinical Center at the VA Pittsburgh Healthcare System.

We would like to acknowledge the contributions of each member of the expert panel: Greg Aarons, University of California, San Diego; Mark Bauer, Harvard University and US Department of Veterans Affairs; Laurel Copeland, Scott and White Healthcare and US Department of Veterans Affairs; Torrey Creed, University of Pennsylvania; Laura Damschroder, University of Michigan and US Department of Veterans Affairs; Rani Elwy, Boston University and US Department of Veterans Affairs; Bradford Felker, University of Washington and US Department of Veterans Affairs; Hildi Hagedorn, University of Minnesota and US Department of Veterans Affairs; Alison Hamilton, University of California, Los Angeles and US Department of Veterans Affairs; Kathy Henderson, US Department of Veterans Affairs; Timothy Hogan, University of Massachusetts and US Department of Veterans Affairs; Kelly Koerner, Evidence-Based Practice Institute; Sarah Krein, University of Michigan and US Department of Veterans Affairs; Julie Kreyenbuhl, University of Maryland and US Department of Veterans Affairs; Sara Landes, University of Washington and US Department of Veterans Affairs; Cara Lewis, Indiana University-Bloomington; Julie Lowery, University of Michigan and US Department of Veterans Affairs; Aaron Lyon, University of Washington; Natalie Maples, University of Texas Health Science Center San Antonio; Alan McGuire, Indiana University-Purdue University Indianapolis and US Department of Veterans Affairs; Amanda Midboe, US Department of Veterans Affairs; Laura Murray, Johns Hopkins University; Princess Osei-Bonsu, US Department of Veterans Affairs; Richard Owen, University of Arkansas for Medical Sciences and US Department of Veterans Affairs; Louise Parker, University of Massachusetts Boston; Enola Proctor, Washington University in St. Louis; Mona Ritchie, US Department of Veterans Affairs; Angie Rollins, US Department of Veterans Affairs and Indiana University Purdue University Indianapolis; Anju Sahay, US Department of Veterans Affairs; Anne Sales, University of Michigan and US Department of Veterans Affairs; Sharon Straus, University of Toronto; Carol VanDeusen Lukas, Boston University and US Department of Veterans Affairs; Dawn Velligan, University of Texas Health Science Center San Antonio; Kara Zivin, University of Michigan and US Department of Veterans Affairs; and Cynthia Zubritsky, University of Pennsylvania.

Abbreviations

ERIC

Expert Recommendations for Implementing Change

VA

US Department of Veterans Affairs

Additional file

Additional file 1: (663.9KB, pdf)

A cluster-by-cluster visual tour of the concept map.

Footnotes

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors were involved in the conceptualization and design of this study. TJW and JEK are co-principal investigators of the ERIC project. JLS, MMM, MJC, and LJD are co-investigators. BJP and EKP are consultants. TJW led the data collection. TJW and BJP conducted the data analysis and drafted this manuscript. All authors read and approved the final manuscript.

Contributor Information

Thomas J. Waltz, Email: twaltz1@emich.edu

Byron J. Powell, Email: byronjpowell@gmail.com

Monica M. Matthieu, Email: mmatthie@slu.edu

Laura J. Damschroder, Email: laura.damschroder@va.gov

Matthew J. Chinman, Email: chinman@rand.org

Jeffrey L. Smith, Email: jeffrey.smith6@va.gov

Enola K. Proctor, Email: ekp@wustl.edu

JoAnn E. Kirchner, Email: joann.kirchner@va.gov

References

1. McKibbon KA, Lokker C, Wilczynski NL, Ciliska D, Dobbins M, Davis DA, Haynes RB, Straus S. A cross-sectional study of the number and frequency of terms used to refer to knowledge translation in a body of health literature in 2006: a Tower of Babel? Implement Sci. 2010;5:1–11. doi: 10.1186/1748-5908-5-1.
2. Hempel S, Rubenstein LV, Shanman RM, Foy R, Golder S, Danz M, Shekelle PG. Identifying quality improvement intervention publications: a comparison of electronic search strategies. Implement Sci. 2011;6:1–10. doi: 10.1186/1748-5908-6-1.
3. McKibbon KA, Lokker C, Wilczynski NL, Haynes RB, Ciliska D, Dobbins M, Davis DA, Straus SE. Search filters can find some but not all knowledge translation articles in MEDLINE: an analytic survey. J Clin Epidemiol. 2012;65:651–9. doi: 10.1016/j.jclinepi.2011.10.014.
4. Michie S, Fixsen DL, Grimshaw JM, Eccles MP. Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci. 2009;4:1–6. doi: 10.1186/1748-5908-4-1.
5. Mazza D, Bairstow P, Buchan H, Chakraborty SP, Van Hecke O, Grech C, Kunnamo I. Refining a taxonomy for guideline implementation: results of an exercise in abstract classification. Implement Sci. 2013;8:1–10. doi: 10.1186/1748-5908-8-1.
6. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:1–11. doi: 10.1186/1748-5908-8-1.
7. Colquhoun H, Leeman J, Michie S, Lokker C, Bragge P, Hempel S, McKibbon KA, Peters GY, Stevens KR, Wilson MG, Grimshaw J. Towards a common terminology: a simplified framework of interventions to promote and integrate evidence into health practices, systems, and policies. Implement Sci. 2014;9:1–6. doi: 10.1186/1748-5908-9-1.
8. Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, Glass JE, York JL. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69:123–57. doi: 10.1177/1077558711430690.
9. Waltz TJ, Powell BJ, Chinman MJ, Smith JL, Matthieu MM, Proctor EK, Damschroder LJ, Kirchner JE. Expert recommendations for implementing change (ERIC): protocol for a mixed methods study. Implement Sci. 2014;9:1–12. doi: 10.1186/1748-5908-9-1.
10. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, Proctor EK, Kirchner JE. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:1–14. doi: 10.1186/s13012-014-0195-8.
11. Kane M, Trochim WMK. Concept mapping for planning and evaluation. Thousand Oaks, CA: Sage; 2007.
12. Concept Systems Global MAX. http://www.conceptsystems.com/content/view/the-concept-system.html
13. Trochim WMK, Kane M. Concept mapping: an introduction to structured conceptualization in health care. Int J Qual Health Care. 2005;17:187–91. doi: 10.1093/intqhc/mzi038.
14. Rosas SR, Kane M. Quality and rigor of the concept mapping methodology: a pooled study analysis. Eval Program Plann. 2012;35:236–45. doi: 10.1016/j.evalprogplan.2011.10.003.
