Author manuscript; available in PMC: 2016 Sep 1.
Published in final edited form as: Adm Policy Ment Health. 2015 Sep;42(5):545–573. doi: 10.1007/s10488-014-0551-7

Measures for Predictors of Innovation Adoption

Ka Ho Brian Chor 1, Jennifer P Wisdom 2, Su-Chin Serene Olin 3, Kimberly E Hoagwood 4, Sarah M Horwitz 5
PMCID: PMC4201641  NIHMSID: NIHMS587131  PMID: 24740175

Abstract

Building on a narrative synthesis of adoption theories by Wisdom et al. (2013), this review identifies 118 measures associated with the 27 adoption predictors in the synthesis. The distribution of measures is uneven across the predictors and predictors vary in modifiability. Multiple dimensions and definitions of predictors further complicate measurement efforts. For state policymakers and researchers, more effective and integrated measurement can advance the adoption of complex innovations such as evidence-based practices.

Keywords: Measure, Adoption, Evidence-based treatments and practices, Organization, Innovation

Introduction

The concept of adoption, the complete or partial decision to proceed with the implementation of an innovation as a distinct process preceding but separate from actual implementation, is at an early stage of development among state policymakers, organizational directors, deliverers of services, and implementation researchers (Glisson & Schoenwald, 2005; Panzano & Roth, 2006; Schoenwald & Hoagwood, 2001). In health and behavioral health, adoption is a key implementation outcome (Proctor et al., 2011; Proctor & Brownson, 2012) because the latter cannot occur without the former, and implementation does not necessarily follow the contemplation, decision, and commitment to adopt an innovation such as an evidence-based practice (EBP). Adoption is a complex, multi-faceted decision-making process. Understanding this process may provide valuable insights for the development of strategies to facilitate effective uptake of EBPs or guide thoughtful de-adoption in order to avoid costly missteps in organizational efforts to improve care quality (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Saldana, Chamberlain, Bradford, Campbell, & Landsverk, in press; Wisdom, Chor, Hoagwood, & Horwitz, 2013). Research to date, however, has focused more on implementation, and less on adoption as a distinct outcome (Wisdom et al., 2013). Further, reviews of measures have focused more on general predictors of broad implementation outcomes, rather than specific predictors of adoption (Chaudoir, Dugan, & Barr, 2013; National Cancer Institute, 2013; Seattle Implementation Research Collaborative, 2013).

This review builds on the theoretical framework by Wisdom et al. (2013) that organizes 27 predictors of adoption by four contextual levels — external system, organization, innovation, and individual (see Figure 1 and Appendix 1). These four contextual levels are consistent with those in other theoretical frameworks such as the Advanced Conceptual Model of EBP Implementation (Aarons, Hurlburt, & Horwitz, 2011), the Practical Robust Implementation Sustainability Model (Feldstein & Glasgow, 2008), and the Consolidated Framework for Implementation Research (Damschroder et al., 2009). The goals of this review are to: (1) identify measures and their properties for the 27 adoption predictors described in Wisdom et al. (2013); (2) describe the measures' relationships to the predictors, to other related measures, and to adoption, especially EBP adoption; (3) highlight the challenges of measurement; and (4), where possible, propose ways to effectively integrate measures for key adoption predictors. Linking the 27 predictors of adoption with their measures will assist systems, organizations, and individuals in identifying and measuring critical predictors of adoption decision-making. Although understanding adoption is important in many areas (e.g., primary medical care), it is sorely needed in state mental health systems given the demands on quality and accountability in the Patient Protection and Affordable Care Act of 2010 (P.L. 111-148). As states expand efforts to improve the uptake of EBPs, state policy leaders and agency directors are faced with the challenge of selectively adopting innovations to improve the quality of services and de-adopting ineffective innovations. Depending on the financial resources, time, and staffing involved, this decision-making — adopt, not adopt, adopt later, and de-adopt — has important consequences for future implementation and sustainability of EBPs in state supported services.

Figure 1.

Figure 1

Theoretical framework for the 27 predictors of adoption organized by four contextual levels (Wisdom et al., 2013).

Methods

To identify measures for the factors associated with adoption, this review used the same search strategy as the narrative synthesis review of adoption theories (Wisdom et al., 2013). This approach ensures that the measures are consistent with the theoretical framework of adoption. Appendix 2 illustrates database searches in Ovid Medline, PsycINFO, and Web of Science that yielded 322 unique journal articles. For the synthesis, the articles were screened and rescreened to yield theories that formed the framework. These 322 journal articles were used to abstract measures associated with the 27 predictors of adoption in the following steps:

  1. Measures used in the theories comprising the framework by Wisdom et al. (2013) were included, in order to establish a one-to-one relationship between theory and measure.

  2. The articles not specifically used in the framework were reviewed to identify additional measures that could be mapped onto the theoretical framework.

  3. A snowball search of references of references and related themes (Pawson, Greenhalgh, Harvey, & Walshe, 2005) was conducted.

  4. After identifying the measures for inclusion, all authors independently mapped each measure first to each of the four levels of predictors, then to each of the 27 predictors contained in the theoretical framework.

  5. The final mapping was achieved by consensus brainstorming among the authors and consultation with field experts to resolve discrepancies in mapping.

  6. Measures with the most relevance to the predictors were included. The threshold of exhaustiveness and the final selection decision were determined by discussions among the authors and consultation with the field experts. A measure with multiple subscales can be mapped to more than one predictor.

  7. The final set of measures and references were further reviewed for the availability of psychometric data (e.g., reliability, validity), empirical adoption data (i.e., whether a measure associated with an adoption predictor was applied in an empirical study), and the modifiability of predictors associated with the measures (i.e., whether a predictor is more or less malleable). Final agreement on these categories (yes/no) was required to be 100% after resolving discrepancies among the authors, similar to Step 6 above.

Additional details regarding the literature search method are reported in Wisdom et al. (2013).

Results

Table 1 summarizes the 118 measures for the 27 adoption predictors, which are organized by the contextual levels: external system, organization, innovation, and individual (staff or client). For each measure, descriptive information is provided about its domains and items, its availability (i.e., whether the measure is accessible), its structure (e.g., single- vs. multi-items/domains, survey/interview, computation formula, etc.), its psychometric properties, the availability of supportive adoption data, and the modifiability of the corresponding adoption predictor(s). While Table 1 serves as a detailed compendium of measures, below we synthesize measures for each contextual level, describe how measures address similar or different aspects of the predictors, and highlight measures that are associated with EBP adoption.

Table 1.

Measures for the predictors of adoption.

Adoption Predictor | Measure Description | Measure Available and Accessible? | Type of Measure | Psychometric Properties Validated? | Empirical Adoption Data? | Measured Predictor(s) Modifiable?
1. External System
  • External Environment (n=10)

  • Community Wealth is measured by income of residents on a three-point scale (1=low to moderate income; 2=moderate to middle income; 3=high income) as used in the Reinventing Government (RG) and Alternative Service Delivery (ASD) surveys (Damanpour & Schneider, 2009).

Yes Single-item rating scale No Yes No
  • Income Growth is calculated by average increase in median family income based on zip codes obtained from the U.S. Census tract (Meyer & Goes, 1988).

Yes Computation No Yes No
  • Industry Concentration is measured by a single question: “What combined market share do the top 3 firms in your industry have?” (Gatignon & Robertson, 1989).

Yes Single-item rating scale No Yes No
  • Competitive Price Intensity is measured by two items (e.g., “How frequently does price-cutting take place in your industry?”) on a six-point Likert scale (1=never; 6=constantly) (Gatignon & Robertson, 1989).

Yes Multi-item rating scale No Yes No
  • Environment Hostility is measured by three items using a five-point Likert scale (1=strongly disagree; 5=strongly agree) regarding competitive intensity, industry change intensity, and the complexity of the industry (Peltier et al., 2009).

Yes Multi-item rating scale Yes Yes Yes (Long-term)
  • Environment Uncertainty/Complexity is measured by six items (e.g., “Customer needs are increasingly more complex”) regarding the difficulty in setting price, managing inventory, determining profit margins, and managing customers among small businesses, using a five-point Likert scale (1=strongly disagree; 5=strongly agree) (Peltier et al., 2009).

Yes Multi-item rating scale Yes Yes Yes (Long-term)
  • Environmental Dynamism uses a four-item scale to measure the extent to which an organization's services, its customers' tastes, and its technology are prone to change (Ravichandran, 2000).

Yes Multi-item rating scale Yes Yes Yes (Long-term)
  • Urbanization is:

    • Measured by an organization's location within a Metropolitan Statistical Area (MSA) as designated by the U.S. Office of Management and Budget (1=independent, city/county not located in an MSA; 2=suburban, city/county located in an MSA; 3=central, core city in an MSA) (Damanpour & Schneider, 2009).

Yes Single-item rating scale No Yes No

    • Measured by average population density and income growth within a service area (Meyer & Goes, 1988).

Yes Computation No Yes No
Yes Single-item dichotomous scale No Yes No
  • Government Policy and Regulation (n=3)

  • Review of State Mental Health Commissioner Policies/Guidelines and Statistics from National Association of State Mental Health Program Directors (NASMHPD) Research Institute (NRI) (Ganju, 2003).

No State documentation No Yes Yes (Long-term)
  • State Mental Health Authority Yardstick (SHAY) assesses state authority on mental health policy through seven domains of adoption and implementation of EBPs in community health: planning, financing, training, leadership, policies and regulations, quality improvement, and stakeholders. A five-point rating score is used for each item (Finnerty et al., 2009).

Yes Multi-domain and multi-item rating scale Yes Yes Yes (Long-term)
  • State Policy Environment is measured by survey and telephone interviews using qualitative and quantitative data. Odds of EBP adoption are reflected by the awareness of how an EBP is included in state contracts and Medicaid formulary (Knudsen & Abraham, 2012).

No Survey/Interview (open-ended) No Yes Yes (Long-term)
  • Reinforcing Regulation with Financial Incentives to Improve Quality Service Delivery (n=2)

  • Presence or Absence of Dedicated Funding Streams and Financial Incentives is measured by qualitative interviews with opinion leaders using a common core of questions and content analysis (Fitzgerald et al., 2002).

No Survey/Interview (open-ended) No Yes Yes (Long-term)
  • State Incentives on Participation in Quality Improvement are measured by state-level administrative data for three incentive conditions (mandate, fiscal incentive, and technical assistance) to predict adoption of a continuous quality improvement initiative for psychotropic prescribing practices (Finnerty et al., 2012).

No State documentation No Yes Yes (Long-term)
  • Social Network (Inter-systems) (n=3)

  • Centralization describes the extent to which a network is organized around its central points such as community leaders. Using open-ended nomination questions administered to community leaders, centralization is computed based on indegree (i.e., number of other individuals in a network with direct ties to the community leader) and outdegree (i.e., number of other individuals in a network with direct ties from the community leader) among members in a network, and ranges from 0 to 1, with higher values indicating a more centralized network (Fujimoto et al., 2009).

Yes Computation Yes Yes Yes
  • Density is a complementary measure of social network to Centralization. It describes the level of cohesion of a social network (e.g., among community leaders) and is computed based on the number of linkages in a network as a proportion of the maximum number of ties, the number of links divided by n(n-1) (Fujimoto et al., 2009).

Yes Computation Yes Yes Yes
  • Neighborhood Effect describes the level of adoption among neighboring counties and is calculated by a matrix variable that captures the number of adjacent counties and the average adoption rate of all neighboring counties (Rauktis et al., 2010).

Yes Computation Yes Yes Yes
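The indegree/outdegree and density computations described above can be sketched in a few lines. This is a minimal illustration of the arithmetic, assuming a directed tie list; the members and ties below are hypothetical data, not from Fujimoto et al. (2009).

```python
def degree_counts(edges, members):
    """Return (indegree, outdegree) dicts for a directed tie list."""
    indeg = {m: 0 for m in members}
    outdeg = {m: 0 for m in members}
    for src, dst in edges:
        outdeg[src] += 1  # tie *from* src
        indeg[dst] += 1   # tie *to* dst
    return indeg, outdeg

def density(edges, members):
    """Number of observed directed ties as a proportion of the
    n(n-1) possible ties, per the formula cited above."""
    n = len(members)
    return len(edges) / (n * (n - 1))

# Hypothetical four-member network with four directed ties.
members = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "A"), ("A", "C"), ("C", "D")]
indeg, outdeg = degree_counts(edges, members)
print(density(edges, members))  # 4 ties out of 4*3 possible
```

Higher density values (toward 1) indicate a more cohesive network; the degree counts feed the centralization index in the same way.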

2. Organization
  • Absorptive Capacity (n=6)

  • Knowledge Transformation is measured by the number of new product ideas or projects initiated by an organization (Zahra & George, 2002).

Yes Computation No No Yes (Long-term)
  • Knowledge Exploitation is measured by the output such as number of patents, new product announcements, and length of product development cycle (Zahra & George, 2002).

Yes Computation No No Yes (Long-term)
  • Workforce Professionalism is measured by the percentage of staff with at least a master's degree, certification, or license (Knudsen & Roman, 2004).

Yes Computation No Yes Yes (Long-term)
  • Research Development and Intensity, defined as organization-financed business-unit research and development expenditures, can be expressed as a percentage of business unit sales and transfers over a fixed time period (Cohen & Levinthal, 1990).

Yes Computation No Yes Yes (Long-term)
  • Environmental Scanning is measured by the extent to which staff's knowledge about treatment techniques originates from publications, professional development, professional associations, and communication with treatment organizations, using a five-point Likert scale (0=no extent; 5=very great extent) (Knudsen & Roman, 2004).

Yes Single-item rating scale No Yes Yes (Long-term)
  • Satisfaction Data are measured by two items: whether an organization administers a satisfaction survey for client referral sources, and whether an organization collects satisfaction data from third-party payers, using a dichotomous scale (0=no; 1=yes) (Knudsen & Roman, 2004).

Yes Multi-item dichotomous scale No Yes Yes (Long-term)
  • Leadership and Champion of Innovation (n=7)

  • Recency of Staff Education is measured by the median age of a hospital's active medical staff (Meyer & Goes, 1988).

Yes Computation No Yes Yes (Long-term)
  • CEO Advocacy is measured by the extent of CEO's support for adoption (support, opposition, or neutrality) and decision-making power (high, medium, or low), the ratings of which are derived from content analysis of qualitative interviews (Meyer & Goes, 1988).

Yes Survey/Interview (open-ended) and multi-item rating scale No Yes Yes
  • Texas Christian University (TCU) Survey of Transformational Leadership (STL-S) is a 96-item measure of transformational practices. Using a five-point Likert scale (0=not at all; 4=frequently, if not always), STL-S examines five domains: idealized influence (13 items), intellectual stimulation (16 items), inspirational motivation (23 items), individualized consideration (eight items), and empowerment (17 items) (Edwards et al., 2010).

Yes Multi-domain and multi-item rating scale Yes No Yes
  • Multi-Factor Leadership Questionnaire (MLQ) is a 45-item measure of transformational leadership (i.e., charismatic or visionary leadership) and transactional leadership (i.e., leadership based on exchanges between the leader and followers, in which recognition and reward for meeting specific goal or performance criteria are emphasized). Similar to the STL-S, transformational leadership consists of four domains rated on a five-point Likert scale (0=not at all; 4=a very great extent): Idealized Influence (eight items), Inspirational Motivation (four items), Intellectual Stimulation (four items), and Individual Consideration (four items); transactional leadership consists of four domains rated on the same five-point Likert scale: Contingent Reward (four items), Laissez-faire (four items), Active Management by Exception (four items), and Passive Management by Exception (four items) (Aarons, 2006; Bass et al., 1996).

Yes Multi-domain and multi-item rating scale Yes Yes Yes
  • Management Practice Survey Tool consists of 14 management practices grouped into four domains: Intake and Retention (strategies), Quality Monitoring and Improvement (tracking key performance indicators in the organization, including how data are collected and disseminated to employees), Targets (examining goals, realism, and transparency of corporate targets), and Employment Incentives (promotion criteria, pay and bonuses, and coping with underperforming employees). The 14 questions on management practices are scored between “1” and “5” for each question, with a higher score indicating a better management performance (McConnell et al., 2009).

Yes Multi-domain and multi-item rating scale No No Yes
  • Quality Orientation of the Host Organization is a six-item scale that measures top management's responsibilities in quality programs, the support of corporate executives for quality initiatives, and the appropriateness of an organization's technology infrastructure to carry out quality improvement programs (Ravichandran, 2000).

Yes Multi-item rating scale Yes Yes Yes
  • Management Support for Quality is a ten-item scale that measures an organization's chief executive's responsibility, evaluation, and support for quality improvement processes (Ravichandran, 2000).

Yes Multi-item rating scale Yes Yes Yes
  • Network with Innovation Developers and Consultants (n=2)

  • Frequency of Communication between innovation developers and innovation users is tracked using communication logs, averaging log data over multiple time periods (Lind & Zmud, 1991).

Yes Computation No Yes Yes
  • Richness of Communication Channels used in communication between innovation developers and innovation users can be measured by 14 items that represent 14 types of channels under four categories (interpersonal verbal, group verbal and voice messaging, printed reports, interpersonal written), ranging from high to low level of richness (Lind & Zmud, 1991).

Yes Multi-item rating Scale No Yes Yes
  • Norms, Values, and Cultures (n=7)

  • Organizational Culture Inventory (OCI) consists of 120 items that measure three types of culture — Constructive (40 items), Passive/Defensive (40 items), and Aggressive/Defensive (40 items). Items are rated on a five-point Likert scale (Cooke & Lafferty, 1994; Ingersoll et al., 2000).

Yes Multi-domain and multi-item rating scale Yes No Yes (Long-term)
  • Children's Services Survey assesses organizational culture in mental health services derived from the OCI. The two culture types are Constructive Culture (26 items) and Defensive Culture (52 items) (Aarons & Sawitzky, 2006).

Yes Multi-domain and multi-item rating scale Yes Yes Yes (Long-term)
  • Organizational Social Context (OSC) measures the effect of organizational social context on the adoption and implementation of EBPs. A key latent construct is Organizational Culture, which consists of 42 items for three domains — Rigidity, Proficiency, and Resistance (Glisson, Landsverk et al., 2008).

Yes Multi-item rating scale Yes Yes Yes (Long-term)
  • Practice Culture Questionnaire (PCQ) measures quality improvement culture in primary care using 25 items rated on a five-point scale (0=view that primary care practice has a very poor attitude towards quality improvement; 4=view that practice is definitively positive towards quality improvement) (Stevenson & Baker, 2005).

Yes Multi-item rating scale Yes No Yes (Long-term)
  • Competing Values Framework allows organizational team members to distribute 100 points across four sets of organizational statements, according to the descriptions that best fit their organization. The four culture types are: Group Culture, Developmental Culture, Hierarchical Culture, and Rational Culture (Shortell et al., 2004).

Yes Multi-domain and multi-item rating scale Yes No Yes (Long-term)
  • Pasmore Sociotechnical Systems Assessment Survey (STSAS) contains the Organizational Commitment/Energy domain that measures organizational commitment and staff's identification with their organization. This domain consists of 11 Likert items (e.g., “Few people here feel personally responsible for how well the organization does,” “Few people are willing to put in effort above the minimum required to help the organization succeed,” and “People are rewarded the same whether they perform well or not”) (Ingersoll et al., 2000).

Yes Multi-item rating scale Yes No Yes (Long-term)
  • Hospital Culture Questionnaire uses employees' viewpoints to measure organizational culture. It consists of 50 items rated on a five-point Likert scale (1=strongly agree; 5=strongly disagree) in eight domains: Supervision, Attitudes, Role Significance, Hospital Image, Competitiveness, Staff Benefits, Cohesiveness, and Workload (Sieveking et al., 1993).

Yes Multi-domain and multi-item rating scale Yes No Yes (Long-term)
  • Operational Size and Structure (n=9)

  • Organizational Size is measured by staff force, or the number of full-time employees, on a seven-point scale (1=fewer than 50, 2=50-99, 3=100-249; 4=250-499, 5=500-749, 6=750-999, 7=1000 or more), as used in the RG and ASD surveys (Damanpour & Schneider, 2009).

Yes Single-item rating scale No Yes Yes (Long-term)
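Mapping a raw full-time-employee count onto this seven-point band is a simple bucket lookup. The helper below is a hypothetical illustration of the banding, not part of the RG/ASD survey instruments.

```python
import bisect

# Lower bounds of bands 2 through 7 on the RG/ASD Organizational Size
# scale: 1 = fewer than 50, 2 = 50-99, 3 = 100-249, 4 = 250-499,
# 5 = 500-749, 6 = 750-999, 7 = 1000 or more.
CUTOFFS = [50, 100, 250, 500, 750, 1000]

def size_score(full_time_employees):
    """Return the 1-7 Organizational Size rating for an FTE count."""
    return bisect.bisect_right(CUTOFFS, full_time_employees) + 1

print(size_score(49), size_score(260), size_score(1500))  # 1 4 7
```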
  • Organizational Complexity is measured by the number of patient beds using log-transformation to adjust for curvilinearity (Meyer & Goes, 1988).

Yes Computation No Yes Yes (Long-term)
  • Organizational Characteristics and Program Adoption are measured by a brief close-ended survey asking the sources and amounts of organizational and prevention funding, number of full-time and part-time, paid and unpaid personnel, percentage of paid staff that were members of the target population, and number of clients served by the organization (Miller, 2001).

Yes Survey/Interview (close-ended) No Yes Yes (Long-term)
  • Organization's Economic Health is measured by a five-point Likert scale (1=poor; 5=excellent) as used in the RG and ASD surveys (Damanpour & Schneider, 2009).

Yes Single-item rating scale No Yes Yes (Long-term)
  • Texas Christian University (TCU) Organizational Readiness for Change (ORC) contains the Institutional Resources domain — Offices (four items), Staffing (six items), Training (four items), Equipment (seven items), and Internet (four items) — rated on a five-point Likert scale (1=strongly disagree; 5=strongly agree) (Lehman et al., 2011; Simpson et al., 2007).

Yes Multi-item rating scale Yes Yes Yes (Long-term)
  • Organizational Diversity is measured by the breadth of a medical organization's domain and horizontal differentiation with respect to two domains: client diversity (e.g., distribution of inpatient, outpatient, and emergency care) and functional diversity (e.g., presence of affiliation with medical school and size of clinical teaching program) (Burns & Wholey, 1993).

Yes Computation No Yes Yes (Long-term)
  • Texas Christian University (TCU) Survey of Structure and Operations (SSO) measures organizational structure and operations using seven domains — Structural Relationships (six items), Program Characteristics (16 items), Assessments (six items), Services (six items), Client Characteristics (eight items), Program Staff (three items), and Program Changes (11 items) (Lehman et al., 2011).

Yes Multi-domain and multi-item rating scale No No Yes (Long-term)
  • Organizational Structure is measured by the degree of formalization (explicit rules and procedures governing employee behavior) and centralization (degree to which authority and decision-making power are concentrated vs. dispersed) and measured by three scales: (1) Participation in Decision-making (eight items); (2) Hierarchy of Authority (four items); and (3) Procedural and Rule Specification (three items) (Schoenwald, Sheidow, Letourneau, & Liao, 2003).

Yes Multi-domain and multi-item rating scale Yes No Yes (Long-term)
  • Structural Complexity is measured by an organization's self-report of its organizational structural form — functional, product-based, or matrix (Ravichandran, 2000).

Yes Single-item rating scale Yes Yes Yes (Long-term)
  • Social Climate (n=3)

  • Work Environment Scale (WES) consists of 90 true-false items measuring 10 nine-item dimensions of social climate: Involvement, Peer Cohesion, Staff Support, Autonomy, Task Orientation, Work Pressure, Clarity, Control, Innovation, and Physical Comfort (Moos, 1981; Savicki & Cooley, 1987).

Yes Multi-domain and multi-item dichotomous scale Yes Yes Yes (Long-term)
  • Organizational Social Context (OSC) measures the effect of organizational social context on the adoption and implementation of EBPs. A key latent construct is Organizational Climate, which consists of 46 items for three domains — Stress, Engagement, and Functionality (Glisson, Landsverk et al., 2008).

Yes Multi-item rating scale Yes Yes Yes (Long-term)
  • Texas Christian University (TCU) Organizational Readiness for Change (ORC) contains the Organizational Climate domain — Mission (five items), Cohesiveness (six items), Autonomy (six items), Communication (five items), Stress (four items), and Change (five items) — rated on a five-point Likert scale (1=strongly disagree; 5=strongly agree) (Lehman et al., 2011; Simpson et al., 2007).

Yes Multi-domain and multi-item rating scale Yes Yes Yes (Long-term)
  • Social Network (Inter-organizations) (n=3)

  • Degree of Linkage measures three computed domains of social network among organizations: Degree Centrality (whether an organization occupies a central or peripheral position), Average Tie Strength (the intensity of the relationship, reciprocity, and interaction), and Boundary Intensity (the extent of affiliation between sub-networks) (Valente, 2005).

Yes Computation Yes No Yes
  • Interorganizational Network Location and Influence is measured by Prestige (academic reputation and visibility), Published Reports (number of reports of innovations appearing each year in journals and research), Cumulative Force of Adoption (percentage of hospitals in the same geographic region adopting the same innovation), and Cumulative Experience (number of years of adoption) (Burns & Wholey, 1993).

Yes Computation No Yes Yes
  • External Influence is measured by the number of influential sources an organization is connected to and can differentiate early adopters from late adopters, though it tends to be field-specific. For instance, it can be measured by subscription to journals in the medical field, cosmopolitan connection in agricultural innovations, or exposure to media campaign in family planning (Valente, 1996).

Yes Computation No Yes Yes
  • Training Readiness and Efforts (n=2)

  • Texas Christian University (TCU) Workshop Evaluation Form (WEVAL) contains the Program Resources Supporting the Training and Implementation domain that consists of three items: “your program has enough staff to implement material,” “your program has sufficient resources to implement material,” and “you have the time to do set-up work required to use this material,” using a five-point Likert scale (1=not at all; 5=very much) (Bartholomew et al., 2007).

Yes Multi-item rating scale Yes Yes Yes (Long-term)
  • Texas Christian University (TCU) Program Training Needs (PTN) measures training efforts and predicts types of innovation likely to be adopted through seven domains — Facilities and Climate (seven items), Computer Resources (five items), Training Needs (10 items), Training Content Preferences (eight items), Training Strategy Preferences (10 items), Barriers to Training (10 items), and Satisfaction with Training (four items) — using a five-point Likert scale (1=strongly disagree; 5=strongly agree) (Simpson et al., 2007).

Yes Multi-domain and multi-item rating scale Yes Yes Yes (Long-term)
  • Traits and Readiness for Change (n=2)

  • Texas Christian University (TCU) Organizational Readiness for Change (ORC) aggregates four major domains of organizational adoption readiness — Motivation for Change (23 items), Resources (25 items), Staff Attributes (25 items), and Organizational Climate (32 items) — using a five-point Likert scale (1=strongly disagree; 5=strongly agree) (Lehman et al., 2011; Simpson et al., 2007).

Yes Multi-domain and multi-item rating scale Yes Yes Yes (Long-term)
  • Pasmore Sociotechnical Systems Assessment Survey (STSAS) measures organizational readiness for change using 17 Likert items on the Innovativeness domain (e.g., rewards for innovation, extent to which the organization leaders and members maintain a futuristic orientation) and the Cooperation domain (e.g., Team work, flexibility) (Ingersoll et al., 2000).

Yes Multi-domain and multi-item rating scale Yes No Yes (Long-term)

3. Innovation
  • Complexity, Relative Advantage, and Observability (n=8)

  • Rogers's Adoption Questionnaire consists of nine items using a four-point Likert scale (1=strongly disagree; 4=strongly agree) that measure: Relative Advantage (four items), Complexity (three items), and Observability (two items) (Steckler et al., 1992).

Yes Multi-domain and multi-item rating scale No Yes Yes
  • Innovation Complexity is measured by the relative difficulty of adoption of each innovation using a five-point Likert scale (1=less difficult; 5=more difficult) (Damanpour & Schneider, 2009).

Yes Single-item rating scale No Yes Yes
  • Innovation Skills is measured by the manual or specialized training required by an innovation. The level of skill required was determined by an expert panel of medical college faculty members using a seven-point rating scale (Meyer & Goes, 1988).

Yes Expert panel rating No Yes Yes
  • Relative Advantages:

    • Measured by eight items using a four-point Likert scale (1=strongly disagree; 4=strongly agree) regarding the use of information and communication technology. It is the degree to which the innovation is perceived as a better idea and an enhancement to one's reputation with peers (Richardson, 2011).

Yes Multi-item rating scale Yes Yes Yes

    • Measured by ten items using a five-point Likert scale (1=strongly disagree; 5=strongly agree) focusing on the value that innovative customer relationship management systems have for effectively running a small business (Peltier et al., 2009).

Yes Multi-item rating scale Yes Yes Yes
  • Extra Benefit is defined as the value of an innovation with respect to clinical effectiveness and cost-effectiveness. Respondents rate the influence of Extra Benefit on innovation adoption on a seven-point Likert scale ranging from “stimulating” through “neutral” to “restraining” (Dirksen et al., 1996).

Yes Single-item rating scale No Yes Yes
  • Innovation Observability is measured by the impact of equipment on flows of patients and the degree to which the results of using the innovation are visible to organizational members. The level of observability was determined by an expert panel of medical college faculty members (Meyer & Goes, 1988).

Yes Expert panel rating No Yes Yes
  • Visibility is measured by six items using a four-point Likert scale (1=strongly disagree; 4=strongly agree) regarding the use of information and communication technology. It is the degree to which an innovation is visible (Richardson, 2011).

Yes Multi-item rating scale Yes Yes Yes
  • Cost-efficacy and Feasibility (n=6)

  • Cost of Implementing New Strategies (COINS) maps the cost components of an innovation to each stage of the implementation process, including the adoption or pre-implementation stage. Cost components include fees for purveyors and stakeholders, hours spent, and costing of clinical salary and full-time equivalent staff (Chamberlain et al., 2011; Saldana et al., in press).

Yes Computation Yes Yes Yes
  • Texas Christian University (TCU) Treatment Cost Analysis Tool (TCAT) is a Microsoft Excel-based workbook designed for program financial officers and directors to collect, allocate, and analyze accounting and economic costs of treatments (e.g., cost/counseling hour, group counseling hours, enrolled days, etc.). The TCAT is also used to forecast effects of future changes in staffing, client flow, and other resources. Once full costs are available, estimates can be generated for any changes in programming. In addition, the TCAT can be adapted for treatment intervention developers to put a price tag on new innovations (Flynn et al., 2009; Lehman et al., 2011).

Yes Computation Yes No Yes
  • Costing Behavioral Interventions consists of practical methods for assessing intervention costs retrospectively or prospectively related to decision makers' willingness to adopt and implement effective interventions, using an Excel template (Ritzwoller et al., 2009).

Yes Computation No No Yes
  • Cost-efficacy Ratio calculates the relative cost-efficacy across available EBPs by comparing mean costs per unit change of clinical improvement (e.g., how much it costs per unit improvement on a validated measure of depression). Medicare/Medicaid reimbursement rates and federal upper-limit reimbursement rates allow for the analysis of average national rates for tests and services and all costs (McHugh et al., 2007).

Yes Computation No No Yes
  • Innovation Cost, the relative financial expenditure associated with each new innovation, is measured by a five-point Likert scale (1=less expensive; 5=more expensive) (Damanpour & Schneider, 2009).

Yes Single-item rating scale No Yes Yes
  • Switching Cost of adopting an innovation (e.g., training, financial, and time costs) is measured by five items using a five-point Likert scale (1=strongly disagree; 5=strongly agree) (Peltier et al., 2009).

Yes Multi-item rating scale Yes Yes Yes
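The cost-efficacy ratio above reduces to simple arithmetic. As an illustration only (the EBP names and dollar figures below are fabricated, not drawn from the cited studies), the comparison McHugh et al. (2007) describe can be sketched in Python:

```python
# Hypothetical sketch of a cost-efficacy ratio: mean cost per unit of
# clinical improvement, used to compare candidate EBPs.

def cost_efficacy_ratio(mean_cost: float, mean_improvement: float) -> float:
    """Mean cost per unit change on a validated clinical measure."""
    if mean_improvement <= 0:
        raise ValueError("Improvement must be positive to compute a ratio.")
    return mean_cost / mean_improvement

# Illustrative (fabricated) figures: cost per episode of care and mean
# pre-post change on a validated depression measure.
ebps = {
    "EBP A": cost_efficacy_ratio(mean_cost=1200.0, mean_improvement=8.0),
    "EBP B": cost_efficacy_ratio(mean_cost=900.0, mean_improvement=5.0),
}
best = min(ebps, key=ebps.get)  # lowest cost per unit improvement
```

A lower ratio means more clinical improvement purchased per dollar, which is the sense in which the measure supports relative comparisons across available EBPs.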
  • Evidence and Compatibility (n=4)

  • Innovation Impact of each innovation on local government performance is measured by a five-point Likert scale (1=negative impact; 5=positive impact) (Damanpour & Schneider, 2009).

Yes Single-item rating scale No Yes Yes
  • Innovation Compatibility is measured by equipment's compatibility with pattern of medical staff specialization as judged by an expert panel of medical college faculty members (Meyer & Goes, 1988).

No Expert panel rating No Yes Yes
  • Compatibility is measured by three items using a four-point Likert scale (1=strongly disagree; 4=strongly agree) regarding the use of information and communication technology. It is the degree of perceived consistency with values, experiences, and needs of potential adopters (Richardson, 2011).

Yes Multi-item rating scale Yes Yes Yes
  • Open-ended Interview Protocol is based on the Diffusion of Innovations Theory. It was reviewed, pilot-tested, and revised. The final protocol addresses program resources and constraints, program values, program adoption and rejection experiences, important sources of influence on how participants design and implement prevention programs, and valuable sources of information about prevention. A grounded theory approach was used to derive a final set of codes that emerged from the transcribed interviews (Miller, 2001).

No Interview (open-ended) No Yes Yes
  • Innovation Fit with Users' Norms and Values (n=2)

  • Near-Term Consequences: Job Fit consists of six items rated on a five-point Likert scale (1=strongly disagree; 5=strongly agree): (1) Use of a personal computer (PC) will have no effect on the performance of my job (reverse scored); (2) Use of a PC can decrease the time needed for my important job responsibilities; (3) Use of a PC can significantly increase the quality of output of my job; (4) Use of a PC can increase the effectiveness of performing job tasks (e.g., analysis); (5) Use of a PC can increase the quantity of output for same amount of effort; and (6) Considering all tasks, the general extent to which use of PC could assist on job (Thompson et al., 1991).

Yes Multi-item rating scale No Yes Yes
  • Long-Term Consequences consists of six items rated on a five-point Likert scale (1=strongly disagree; 5=strongly agree): (1) Use of a personal computer (PC) will increase the level of challenge on my job; (2) Use of a PC will increase the opportunity for preferred future job assignments; (3) Use of a PC will increase the amount of variety on my job; (4) Use of a PC will increase the opportunity for more meaningful work; (5) Use of a PC will increase the flexibility of changing jobs; and (6) Use of a PC will increase the opportunity to gain job security (Thompson et al., 1991).

Yes Multi-item rating scale No Yes Yes
  • Risk (n=3)

  • Level of Risk of Injury, Death, or Malpractice Liability is measured by an expert panel of medical college faculty members on the safety and efficacy of a particular innovation using a seven-point rating scale (Meyer & Goes, 1988).

Yes Expert panel rating No Yes Yes
  • Personal Risk Orientation is measured by seven items using a five-point Likert scale (1=strongly disagree; 5=strongly agree) regarding personal propensity to accept risk and related psychosocial attributes (Peltier et al., 2009).

Yes Multi-item rating scale Yes Yes Yes
  • Survey of Risk measures risks associated with adopting EBPs. It consists of three major domains rated on a seven-point Likert scale (1=strongly disagree; 7=strongly agree): Perceived Risk (four items), Risk Management Capacity (five items), and Risk Propensity (five items) (Panzano & Roth, 2006).

Yes Multi-domain and multi-item rating scale Yes Yes Yes
  • Trialability, Relevance, and Ease (n=6)

  • Trialability is

    • Measured by prior exposure to the EBP (direct exposure vs. indirect exposure) in a clinical trial network (Ducharme et al., 2007).

    • Measured by the degree to which the innovation (communication technology) can be experimented with or practiced; based on two items using a four-point Likert scale (1=strongly disagree; 4=strongly agree) (Richardson, 2011).

Yes Single-item dichotomous scale No Yes Yes
Yes Multi-item rating scale No Yes Yes
  • Task Relevance and Task Usefulness measure the extent to which an innovation is relevant to the performance of the user and the extent to which the innovation contributes to improvement in task performance, respectively. Task relevance is derived from job-determined importance (two items) and task usefulness from a perceived usefulness scale (Yetton et al., 1999).

Yes Multi-item rating scale Yes Yes Yes
  • Texas Christian University (TCU) Workshop Evaluation (WEVAL) contains the Relevance of Training domain that consists of four items: “material is relevant to the needs of your clients,” “you expect things learned will be used in program soon,” “you were satisfied with the material and procedures,” and “you would feel comfortable using them in your program,” using a five-point Likert scale (1=not at all; 5=very much) (Bartholomew et al., 2007).

Yes Multi-item rating scale Yes Yes Yes
  • Ease of Use consists of six items rated on a seven-point Likert scale (1=strongly agree; 4=neutral; 7=strongly disagree) regarding an innovation: Easy to Learn, Controllable, Clear & Understandable, Flexible, Easy to Become Skillful, and Easy to Use (Davis, 1989).

Yes Multi-item rating scale Yes Yes Yes
  • Perceived Usefulness consists of six items rated on a seven-point Likert scale (1=strongly agree; 4=neutral; 7=strongly disagree) regarding an innovation: Work More Quickly, Job Performance, Increase Productivity, Effectiveness, Makes Job Easier, and Useful (Davis, 1989).

Yes Multi-item rating scale Yes Yes Yes

4. Individual
4a. Staff
  • Affiliation with Organizational Culture (n=1)

  • Organizational Culture Profile (OCP) consists of 54 value statements that capture the fit between individual and organizational values. Staff are asked to sort the 54 statements into nine categories (e.g., from most to least desirable or from most to least characteristic) based on their own preferences or their organizational culture. A person-organization fit score for each individual is calculated based on the correlation between the individual preference profile and the profile of the organization (O'Reilly et al., 1991).

Yes Multi-item rating scale and Computation Yes No Yes (Long-term)
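The OCP fit score is simply a correlation between two Q-sort profiles. A minimal sketch, assuming hypothetical category assignments for a handful of the 54 statements (the profile values below are invented for illustration):

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation, written out to avoid external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical Q-sort category assignments (1 = least, 9 = most characteristic)
# for a subset of OCP value statements.
individual_profile = [9, 7, 2, 5, 1, 8]     # how the staff member sorted them
organization_profile = [8, 6, 3, 5, 2, 7]   # how the organization was profiled

fit_score = pearson(individual_profile, organization_profile)  # higher = better fit
```

A fit score near 1.0 indicates close alignment between the individual's values and the organization's culture profile; scores near 0 or below indicate misfit.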
  • Attitudes, Motivation, Readiness towards Quality Improvement and Reward (n=4)

  • Evidence-Based Practice Attitude Scale (EBPAS) consists of 15 items grouped into four domains: Appeal (four items), Requirements (three items), Openness (four items), and Divergence (four items) (Aarons & Sawitzky, 2006).

Yes Multi-domain and multi-item rating scale Yes Yes Yes (Long-term)
  • San Francisco Treatment Research Center (SFTRC) Course Evaluation contains 10 items for two domains that assess clinician stage of change and clinician attitudes regarding EBPs rated on a five-point Likert scale (1=strongly disagree; 5=strongly agree) (Haug et al., 2008).

Yes Multi-domain and multi-item rating scale No Yes Yes (Long-term)
  • Readiness for Organizational Change measures readiness for organizational change at an individual level. Using a systematic item-development framework, the measure consists of 25 items for four domains: Appropriateness (10 items: that the proposed change is appropriate for the organization), Management Support (six items: that the leaders are committed to the proposed change), Change Efficacy (six items: that employees are capable of implementing a proposed change), and Personally Beneficial (three items: that the proposed change is beneficial to organizational members) (Holt, 2007).

Yes Multi-domain and multi-item rating scale Yes No Yes (Long-term)
  • Texas Christian University (TCU) Workshop Assessment Follow-Up (WAFU) form consists of eight items about individual attitudes towards training based on resources (e.g., time, availability of training) and procedural factors (e.g., already using similar materials, not my style) rated by individual adopters on a five-point Likert scale (1=not at all; 5=very much) (Bartholomew et al., 2007).

Yes Multi-item rating scale Yes Yes Yes (Long-term)
  • Feedback on Execution and Fidelity (n=1)

  • Alternative Stages of Concern Questionnaire (SoCQ) measures teachers' concerns about educational innovations. It contains the five-item Refocusing domain that measures feedback from target students to change the innovation (e.g., a curriculum), revise the innovation to improve its effectiveness or design, and supplement, enhance, or replace the innovation (Cheung et al., 2001).

Yes Multi-item rating scale Yes No Yes
  • Individual Characteristics (n=7)

  • Personal Innovativeness combines scores on three measures of agreement/disagreement with statements about the individual's tendency: (1) "To be…among the first to try new sales tools;" (2) "to use only sales tools that have a proven track record"; and (3) "to leave it to others to work out the 'bugs' in new sales tools before using them" (combined scale range: 3-15) (Leonard-Barton & Deschamps, 1988).

Yes Multi-item rating scale Yes Yes Yes
  • Subjective Importance of the Task is the sum of two rankings: (1) the importance of the task of configuring systems and the satisfaction derived from it, ranked relative to (2) the importance of and satisfaction with five other tasks: face-to-face selling, sales planning, answering technical questions, administrative work, and other internal corporation work (Leonard-Barton & Deschamps, 1988).

Yes Computation Yes Yes Yes
  • Alternative Stages of Concern Questionnaire (SoCQ) measures teachers' concerns about educational innovations. It contains the four-item Awareness domain that measures the knowledge, concern, and interest in learning about the innovation, and distraction with other tasks (Cheung et al., 2001).

Yes Multi-item rating scale Yes No Yes
  • Awareness-Concern-Interest Questionnaire is a 12-item measure of awareness and concern about prevention-type innovations rated on a four-point scale for three dimensions: Awareness (four items: awareness of a problem), Concern (four items: personally concerned about the problem), and Interest (four items: interest in doing something about the problem and learning how to teach prevention) (Steckler et al., 1992).

Yes Multi-domain and multi-item rating scale Yes No Yes
Yes Computation No Yes Yes
  • Configuration Skills (Task-Related) combines two responses: (1) Self-rating on a seven-point scale as a configurer (1=“Adequate, but Beginner”; 7=“Expert; as good as anyone can be”); (2) rank order of configuration among six tasks described above, according to respondent's relative skills (Leonard-Barton & Deschamps, 1988).

Yes Multi-item rating scale Yes Yes Yes
  • Product Class Knowledge consists of four items using a five-point Likert scale (1=strongly disagree; 5=strongly agree) regarding customer relationship management knowledge, computer knowledge, and analytic skills (Peltier et al., 2009).

Yes Multi-item rating scale Yes Yes Yes
  • Managerial Characteristics (n=6)

  • Manager Experience within a Product Class is measured by six items rated on a four-point scale (1=none; 4=extensive) on the extent of managers' experience with different types of computer software and languages and participating in the development of computerized information systems (Igbaria, 1993).

Yes Multi-item rating scale No Yes Yes
  • Manager Education is measured by a five-point scale in the RG and ASD surveys (1=less than 2 years of college; 2=four-year college degree; 3=MPA, MBA, or other graduate degrees; 4=JD or equivalent; 5=PhD or equivalent) (Damanpour & Schneider, 2009).

Yes Single-item rating scale No Yes Yes (Long-term)
  • Tenure is measured by the number of years managers have served in their current positions in the RG and ASD surveys (1=less than 2 years; 2=2-4 years; 3=5-9 years; 4=10-15 years; 5=more than 15 years) (Damanpour & Schneider, 2009).

Yes Single-item rating scale No Yes Yes (Long-term)
  • Manager Self-Concepts are measured by four dispositional traits (locus of control, generalized self-efficacy, self-esteem, and positive affectivity) on a five-point Likert scale (1=strongly disagree; 5=strongly agree) (Judge et al., 1999).

Yes Multi-item rating scale Yes Yes Yes
  • Manager Risk Tolerance is measured by three dispositional traits (openness to experience, tolerance for ambiguity, and low risk aversion) on a five-point Likert scale (1=strongly disagree; 5=strongly agree) (Judge et al., 1999).

Yes Multi-item rating scale Yes Yes Yes
  • Manager Pro-Innovation Attitude is measured by six items on managers' attitude towards competition and entrepreneurship in public services in the RG and ASD surveys. The items are rated on a five-point Likert scale (1=strongly disagree; 5=strongly agree) (Damanpour & Schneider, 2009).

Yes Multi-item rating scale No Yes Yes
  • Social Network (Individual's Personal Network) (n=7)

  • Social Factors consists of four items rated on a five-point Likert scale (1=strongly disagree; 5=strongly agree): (1) The proportion of their coworkers who regularly use a personal computer (PC); (2) The extent to which senior management of the business unit supports PC use; (3) The extent to which the individual's boss supports the use of PCs for the job; and (4) The extent to which the organization supports the use of PCs (Thompson et al., 1991).

Yes Multi-item rating scale Yes No Yes
  • Subjective Norm consists of two items rated on a seven-point Likert scale (1=strongly disagree; 7=strongly agree): “People who influence my behavior think that I should use the system”; “People who are important to me think that I should use the system” (Venkatesh & Davis, 2000).

Yes Multi-item rating scale Yes Yes Yes
  • Work Group Integration consists of four items that measure the extent to which respondents feel they are part of a work group, the ease of maintaining working relationships with their work group members, the ease of coordinating with them, and the smoothness of joint projects (Kraut et al., 1998).

Yes Multi-item rating scale No Yes Yes
  • Social Network is measured by a five-point Likert scale on the level of agreement with four statements about adoption: (1) Because others in the same discipline think that using an innovation is valuable; (2) Because other inter-related organizations also use it; (3) Because friends in other divisions are using it; (4) Because others in the same organization use it (Talukder & Quazi, 2011).

Yes Multi-item rating scale Yes Yes Yes
  • Subscribers to a System measures the total number of individuals who place at least one video telephone call on a system during a biweekly period (Kraut et al., 1998).

Yes Computation No Yes Yes
  • Subscribers within the Work Group measures the number of other individuals in the subject's work group placing at least one video telephone call on the system during a biweekly period, divided by the total number (less one) in the work group (Kraut et al., 1998).

Yes Computation No Yes Yes
  • Sociometric Data use numbers of peer social ties and peer nominations to measure perceptions of peer behavior or perceptions of peer influence. These numbers can be used to calculate contagion effects and network exposure scores (Valente, 2005).

Yes Computation No Yes Yes
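As an illustrative sketch (the names, ties, and adopter set below are invented), a network exposure score in the spirit of Valente (2005) can be computed as the proportion of a person's direct ties who have already adopted the innovation:

```python
# Hypothetical sociometric data: each person's peer nominations.
ties = {
    "ana": ["ben", "cal", "dia"],
    "ben": ["ana", "cal"],
    "cal": ["dia"],
    "dia": ["ana"],
}
adopters = {"cal", "dia"}  # who has already adopted the innovation

def exposure(person: str) -> float:
    """Proportion of a person's direct ties who are adopters."""
    peers = ties[person]
    return sum(p in adopters for p in peers) / len(peers)

scores = {person: exposure(person) for person in ties}
```

Higher exposure scores flag individuals whose personal networks are saturated with adopters, which is the mechanism behind the contagion effects the sociometric measures are designed to capture.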
4b. Client
  • Readiness for Change/Capacity to Adopt (n=1)

  • Texas Christian University (TCU) Client Evaluation of Self and Treatment (CEST) includes 14 scales measuring client motivation and readiness for treatment, psychological and social functioning, and treatment engagement. In particular, Treatment Needs/Motivation Scales address problem recognition, desire for help, treatment readiness, pressures for treatment, treatment needs, and accuracy, and Treatment Engagement Scales address treatment participation, treatment satisfaction, counseling rapport, and peer support (Simpson et al., 2007).

Yes Multi-domain and multi-item rating scale Yes No Yes (Long-term)

  • Broad-based, Multi-level Predictors (n=5)

  • Management of Ideas in the Creating Organizations measures multi-level EBP facilitators and barriers. It uses the 5Cs organizational rubric as the basis of an in-depth, semi-structured interview guide: Characteristics (individual traits and disposition toward innovation), Competencies (skills and styles needed for the job at hand), Conditions (organizational strategies to carry out the task at hand), Context (organizational structure and beliefs), and Change (individual desire for improvement) (Gioia & Dziadosz, 2008).

Yes Interview (semi-structured) No Yes Yes (Long-term)
  • Concept System is a software tool that maps and organizes statements generated by county officials, organizations, staff, and clients about EBP facilitators and barriers. The mapping results in 14 multidimensional clusters of 105 items that can be rated for importance (0=not at all; 4=extremely important) and changeability (0=not at all; 4=extremely changeable). The 14 clusters are: Consumer Values and Marketing, Consumer Concerns, Impact on Clinical Practices, Clinical Perceptions, Evidence-Based Practice Limitations, Staff Development and Support, Staffing Resources, Agency Compatibility, Costs of Evidence-Based Practices, Funding, Political Dynamics, System Readiness and Compatibility, Beneficial Features of Evidence-Based Practices, and Research and Outcomes Supporting Evidence-Based Practices (Aarons, Wells, Zagursky, Fettes, & Palinkas, 2009).

No Interview (open-ended) and Multi-domain, multi-item rating scale Yes Yes Yes (Long-term)
  • Reinventing Government (RG) and Alternative Service Delivery (ASD) Surveys use six separate rating scales to assess external environment (community wealth), organizational characteristics (size and economic health), and individual managerial characteristics (education, tenure, and pro-innovation attitudes) (Damanpour & Schneider, 2009).

Yes Multi-domain and multi-item rating scale No Yes Yes (Long-term)/No
  • San Francisco Treatment Research Center (SFTRC) Course Evaluation contains three domains that have been used to understand innovation adoption: one domain contains six items that measure organizational barriers to adopting EBPs, and two domains contain 10 items that assess individual staff readiness for change and attitudes towards EBPs rated on a five-point Likert scale (1=strongly disagree; 5=strongly agree) (Haug et al., 2008).

Yes Multi-domain and multi-item rating scale No Yes Yes (Long-term)
  • Texas Christian University (TCU) Workshop Evaluation Form (WEVAL) contains two domains that have been used to understand innovation adoption: one domain contains three items to measure organizational training readiness and efforts, and another domain contains four items to measure the relevance of an innovation rated on a five-point Likert scale (1=not at all; 5=very much) (Bartholomew et al., 2007).

Yes Multi-domain and multi-item rating scale Yes Yes Yes (Long-term)

1. External System

Overview of Measures

Eighteen measures are related to the external system. All measures are supported by empirical adoption data such as census data (Damanpour & Schneider, 2009; Meyer & Goes, 1988). Measure types vary: nine measures are rating scales, five measures are derived from computation of frequency data, and four measures are either based on state documentation or open-ended surveys/interviews. The 18 measures assess discrete aspects of the external system and complement one another.

External Environment (n=10)

Of the 10 measures, seven address economic considerations of the market/industry (i.e., community wealth, income growth, industry concentration, competition, hostility, complexity, and dynamism) (Damanpour & Schneider, 2009; Gatignon & Robertson, 1989; Meyer & Goes, 1988; Peltier, Schibrowsky, & Zhao, 2009; Ravichandran, 2000); two measures assess population-based characteristics (urbanization, density) (Damanpour & Schneider, 2009; Meyer & Goes, 1988); and one measure focuses on political influence (e.g., mayor) (Damanpour & Schneider, 2009). Although the economic measures were used to study the adoption of innovations such as management in the public sector (e.g., government) and technology in the private sector (e.g., retail), their relevance to EBP adoption has yet to be tested.

Government Policy and Regulation (n=3)

State policies and regulations can mandate innovation adoption. Their specificity and complexity necessitate diverse measurement methods, including reviews of state guidelines (Ganju, 2003) and open-ended surveys and interviews with state directors about EBP integration in state contracts (Knudsen & Abraham, 2012). Most relevantly, a multi-dimensional scoring system known as the State Health Authority Yardstick (SHAY) measures a state's planning policies and regulations. The development of the measure was based on the adoption and implementation of EBPs in eight states (Finnerty et al., 2009).

Reinforcing Regulation with Financial Incentives to Improve Quality Service Delivery (n=2)

Innovation adoption in the public mental health systems often relies on incentive funding from the government. Through open-ended surveys/interviews with state opinion leaders, the specific incentive conditions can be identified (Fitzgerald, Ferlie, Wood, & Hawkins, 2002). A more in-depth approach requires the review of state documentation on incentive conditions (i.e., mandate vs. fiscal vs. technical assistance) and the use of administrative data (e.g., Medicaid claims) to measure the statewide adoption of large-scale quality improvement initiatives (Finnerty et al., 2012). Financial incentives have a consistent and significant positive effect on adoption.

Social Network (Inter-Systems) (n=3)

Social connections among state agencies and community leaders (e.g., mental health and substance abuse) can have a top-down effect on organizational and individual adoption. The three measures of inter-systems social networks are based on complex computations involving the identification of the appropriate unit (e.g., community leader, county) and linkages among them. Centralization and density are complementary metrics that account for the number of ties linked to and from an identified community leader. A higher centralization value indicates a more centralized network; a higher density value indicates a more connected network among all possible ties (Fujimoto, Valente, & Pentz, 2009). Centralization, in particular, was significantly associated with the increased adoption of substance abuse prevention programs across system stakeholders (Fujimoto et al., 2009). Neighborhood Effect measures the influence of the adoption level of neighboring counties. It was associated with the increased adoption of Family Group Decision Making in child welfare systems (Rauktis, McCarthy, Krackhardt, & Cahalane, 2010).
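As a minimal sketch (the network of "community leaders" below is invented for illustration), the density and degree-centralization metrics discussed above can be computed for a small undirected network:

```python
# Hypothetical undirected network: ties among four community leaders.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]
nodes = {"A", "B", "C", "D"}

n = len(nodes)
# Density: proportion of all possible ties that are actually present.
density = 2 * len(edges) / (n * (n - 1))

# Count each node's degree (number of ties).
degree = {v: 0 for v in nodes}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Freeman degree centralization: how dominated the network is by its
# most-connected node (1.0 for a perfect star, 0.0 when all degrees are equal).
max_deg = max(degree.values())
centralization = sum(max_deg - d for d in degree.values()) / ((n - 1) * (n - 2))
```

In this toy network both values come out around 0.67: a fairly connected network that is moderately centralized around leader A, the kind of pattern the inter-systems measures use to predict top-down diffusion.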

2. Organization

Overview of Measures

Organizational measures represent the majority of the measures in this review. The nine organizational predictors of adoption are assessed by 41 measures, all of which are available and accessible. Key organizational predictors such as leadership, culture, climate, readiness for change, and operational size and structure are multi-dimensional. The predictive utilities of the majority of these measures are supported by data linking these predictors with adoption of EBPs and other psychosocial interventions. Consistent with the literature, most measures suggest that organizational predictors are potentially modifiable, although these predictors may take considerable time to change.

Absorptive Capacity (n=6)

Absorptive capacity indicates the extent to which an organization utilizes knowledge and research. Based on the six measures, absorptive capacity is broadly measured in three ways: by productivity such as the number of projects and patents (Zahra & George, 2002), the number of staff with advanced education/certification/licensure (Knudsen & Roman, 2004), and the ratio of research and development expenditure to organizational revenue (Cohen & Levinthal, 1990); by the assessment of staff's collective knowledge of innovations (Knudsen & Roman, 2004); or by the use of client satisfaction data to improve organizational knowledge utilization (Knudsen & Roman, 2004). Knudsen and Roman (2004) found that absorptive capacity was associated with increased adoption of EBPs.

Leadership and Champion of Innovation (n=7)

All but one leadership measure (i.e., median age) (Meyer & Goes, 1988) focus on the different dimensions of leadership. The six multi-domain, multi-item measures address specific leadership styles (i.e., the Texas Christian University [TCU] Survey of Transformational Leadership [STL-S] and the Multi-Factor Leadership Questionnaire [MLQ]) (Aarons, 2006; Bass, Avolio, & Atwater, 1996; Edwards, Knight, Broome, & Flynn, 2010), leadership management practices that are associated with quality improvement (i.e., Management Practice Survey Tool, Quality Orientation of the Host Organization, Management Support for Quality) (McConnell, Hoffman, Quanbeck, & McCarty, 2009; Ravichandran, 2000), or the combination of leadership support of adoption and the level of decision-making power (Meyer & Goes, 1988). Empirical findings using these measures showed that transformational and transactional leadership were associated with positive attitudes towards adoption of EBPs (Aarons, 2006). Similarly, a quality improvement leadership orientation was associated with the swiftness and intensity of the adoption of management innovations, though the association was not maintained when adjustments for additional factors were made (Ravichandran, 2000).

Network with Innovation Developers and Consultants (n=2)

Two measures assess the quantity and quality of communication between an organization and innovation developers and consultants. Convergence between innovation developers and innovation users can be measured by the frequency of communication and the richness of the communication, both predictive of technology adoption (Lind & Zmud, 1991). Tracking the frequency and the type of communication (e.g., phone consultation) is especially important for complex innovations that require ongoing assistance, such as organizational EBP training.

Norms, Values, and Cultures (n=7)

Organizational norms, values, and cultures are multidimensional and require multi-domain measures with valid psychometric properties. Three measures identify distinct culture types in health and behavioral health organizations that are related to EBP adoption. The Organizational Culture Inventory (OCI) identifies three culture types and an organization's proximity to an ideal culture. Specifically, a constructive culture promotes innovations and quality improvement (Cooke & Lafferty, 1994; Ingersoll, Kirsch, Merk, & Lightfoot, 2000). The Children's Services Survey confirmed that a constructive organizational culture (derived from the OCI) enhanced positive attitudes towards EBPs (Aarons & Sawitzky, 2006). The Organizational Social Context (OSC) showed that a "good" organizational culture profile, defined by a high level of proficiency and low levels of rigidity and resistance to change, was linked to adoption, improved service quality, satisfaction, work attitudes, and decreased staff turnover (Glisson et al., 2008).

Four measures define organizational norms, values, and cultures based on employees' affiliation with and opinions of their organizations, though they have not been directly linked to adoption data. The Practice Culture Questionnaire (PCQ) measures primary care teams' attitudes toward quality improvement (QI) (e.g., whether QI improves staff performance and client satisfaction) as the basis for organizational attitudes toward QI (Stevenson & Baker, 2005). It suggests a relationship between organizational culture and resistance toward QI based on the median and spread of each team's score. The Competing Values Framework (Shortell et al., 2004) identifies four organizational culture types based on the distribution of endorsement points by employees for each culture type. The Pasmore Sociotechnical Systems Assessment Survey (STSAS) measures employees' perceptions of organizational culture that can influence the employees' commitment and ideals (e.g., responsibility, helping an organization succeed) (Ingersoll et al., 2000). Similarly, the Hospital Culture Questionnaire measures organizational culture based on employees' viewpoints on organizational functions (e.g., workload), self-concept (e.g., hospital image, role significance), and support (e.g., benefits, supervision) (Sieveking, Bellet, & Marston, 1993).

Operational Size and Structure (n=9)

Operational size and structure refer to measurable characteristics such as organizational volume (e.g., staffing), portfolios of services provided, or the power and decision-making structure. The nine measures involve the use of quantitative and qualitative data.

Four measures report the number of staff and clients, amount of organizational funding, and self-assessment of organizational economic health (Damanpour & Schneider, 2009; Meyer & Goes, 1988; Miller, 2001). In particular, the Organizational Characteristics and Program Adoption survey was used to distinguish low, moderate, and high adopters among substance abuse treatment organizations (Miller, 2001).

The Texas Christian University (TCU) Organizational Readiness for Change (ORC) has a specific domain that assesses physical organizational resources (e.g., office space, equipment, internet), which was associated with greater openness to and adoption of innovations in substance abuse treatment (Lehman, Simpson, Knight, & Flynn, 2011; Simpson, Joe, & Rowan-Szal, 2007).

Two measures, Organizational Diversity and the TCU Survey of Structure and Operations (SSO), focus on the diversity of organizational services, clients served, and resources supported by functional affiliation with other entities (e.g., training program, medical school) (Burns & Wholey, 1993; Lehman et al., 2011). Organizational Diversity was empirically linked with the adoption of management programs (Burns & Wholey, 1993).

Two measures, Organizational Structure and Structural Complexity, provide information similar to an organizational chart in terms of task division, hierarchy, rules, procedures, and participation in decision-making (Ravichandran, 2000; Schoenwald, Sheidow, Letourneau, & Liao, 2003). Structural complexity based on organizational self-report was associated with the adoption of management support innovations (Ravichandran, 2000).

Social Climate (n=3)

Social climate generally refers to the multi-dimensional social environment among staff within an organization. The Work Environment Scale (WES) measures 10 such dimensions (Moos, 1981; Savicki & Cooley, 1987). Specifically, supportive and goal-directed work environments significantly predicted an increase in staff's adoption of substance abuse treatment innovations (Moos & Moos, 1998). The Organizational Social Context (OSC), which measures organizational culture as described above, has a distinct domain on organizational climate (Glisson et al., 2008). Organizational climate aggregates staff-level psychological climate ratings. Poor organizational climate characterized by high depersonalization, emotional exhaustion, and role conflict was associated with perceptions of EBPs as not clinically useful and less important than clinical experience (Aarons & Sawitzky, 2006). The TCU Organizational Readiness for Change (ORC) measures five dimensions of organizational climate and showed that clarity of mission, cohesion, and openness to change were associated with positive perceptions of substance abuse treatment adoption (Lehman et al., 2011; Simpson et al., 2007).

Social Network (Inter-Organizations) (n=3)

The three measures quantify either the degree of linkage among organizations or the degree of an organization's exposure to influential adoption sources. First, the degree of linkage among organizations is indicated by three dimensions: the relative position of an organization within a network (central vs. peripheral), the directionality and intensity of the inter-organizational relationships, and the number of inter-organizational ties among all possible ties (Valente, 2005). These dimensions are similar to measures for system-level social networks (e.g., community leaders and counties) described above, but assess these factors from the perspective of an organization.
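Two of these dimensions — an organization's relative position and the proportion of observed to possible ties — are simple to compute from a tie list. A minimal sketch, assuming a small hypothetical undirected network (the organization names and ties are illustrative, not from any cited study):

```python
# Sketch: two network-linkage dimensions from a hypothetical undirected
# tie list among four illustrative organizations ("A" through "D").
ties = {("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")}
orgs = {"A", "B", "C", "D"}

def degree_centrality(org):
    """Share of possible partners an organization is actually tied to
    (1.0 = central, tied to every other organization)."""
    degree = sum(1 for tie in ties if org in tie)
    return degree / (len(orgs) - 1)

# Network density: observed ties over all possible ties.
density = len(ties) / (len(orgs) * (len(orgs) - 1) / 2)

print(degree_centrality("A"))   # "A" holds a central position
print(round(density, 3))        # 4 of 6 possible ties exist
```

For larger networks, a dedicated library such as NetworkX provides equivalent centrality and density functions, but the underlying quantities are exactly these ratios.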

Other measures operationalize influential adoption sources based on experience in adoption among networked organizations (e.g., the number of hospitals in the same region adopting the same innovation), perceived prestige of an organization in the network (e.g., reputation), and connections to field-specific innovations (e.g., organizational subscription to journals, exposure to media campaign) (Burns & Wholey, 1993; Valente, 1996). Empirical data indicated that by accounting for these influential sources, early adopters were differentiated from late adopters in medical, farming, and family planning innovations (Valente, 1996), and the adoption of management innovations improved (Burns & Wholey, 1993).

Training Readiness and Efforts (n=2)

Organizational training readiness and efforts can be measured by the availability of physical resources that support training and training foci such as strategy and content. The Texas Christian University (TCU) Workshop Evaluation Form (WEVAL) uses a multi-item rating scale to measure the availability of staffing, time, and other program resources to set up training. It showed that increased program resources were linked to increased adoption of substance abuse treatment innovations (Bartholomew, Joe, Rowan-Szal, & Simpson, 2007). The TCU Program Training Needs (PTN) is a focused measure of three training dimensions — efforts, needs, and resources. In a two-year study of 60 treatment programs, staff attitudes about training needs and past experiences were predictive of their adoption of new treatments in the following year (Simpson et al., 2007).

Traits and Readiness for Change (n=2)

Organizational propensity to change and innovate is multi-dimensional, and is usually predicated on past experience with change. Of the four TCU Organizational Readiness for Change (ORC) domains, program needs/pressure for change identifies specific organizational drivers and motivations for change (e.g., client or staff needs) (Lehman et al., 2011; Simpson et al., 2007). The Pasmore Sociotechnical Assessment Survey (STSAS) measures key organizational readiness constructs such as organizational flexibility and maintenance of a futuristic orientation. When organizational change such as innovation adoption was perceived positively, employees were more likely to commit to the work and the innovative mission of the organization (Ingersoll et al., 2000).

3. Innovation

Overview of Measures

These 29 measures represent the second largest group of measures in this review. Driven by the principles of Rogers's (2003) Diffusion of Innovation Theory, they measure innovation characteristics using mainly multi-item or single-item rating scales. Innovation characteristics are either primary characteristics intrinsic to an innovation (e.g., cost) or characteristics that depend on the interaction between the adopter and the innovation (e.g., ease of use) (Damanpour & Schneider, 2009; Richardson, 2011). These more complex innovation characteristics such as level of skill required for innovation use and visibility of an innovation's impact may require expert panel ratings. Because adopter and innovation characteristics interact, their malleability is likely dependent on this interaction. Twenty-six of the 29 measures are supported by empirical adoption data, with measures of an innovation's cost-efficacy and risk assessment being especially relevant to EBP adoption.

Complexity, Relative Advantage, and Observability (n=8)

The eight measures originate from fields ranging from public health and medicine to technology. The Rogers Adoption Questionnaire simultaneously measures three dimensions: Relative Advantage, Complexity, and Observability. It was used to demonstrate the adoption of health promotion innovations and can be modified for other innovations in different settings (Steckler, Goodman, McLeroy, Davis, & Koch, 1992).

The remaining seven measures assess single dimensions of innovation characteristics. Two measures are related to the perceived complexity and the specialized skills required to use an innovation (Damanpour & Schneider, 2009; Meyer & Goes, 1988). Three measures are related to the perceived relative advantage of adopting an innovation over existing practice based on perceived effectiveness and benefits (Dirksen, Ament, & Go, 1996; Peltier et al., 2009; Richardson, 2011). Two measures are related to the observability, or the degree to which an innovation's impact is deemed visible and evaluable (Meyer & Goes, 1988; Richardson, 2011). Empirical studies showed that these measures were positively associated with the decision to adopt technology innovations (Dirksen et al., 1996; Meyer & Goes, 1988; Peltier et al., 2009; Richardson, 2011).

Cost-efficacy and Feasibility (n=6)

All six measures require objective computation or subjective ratings of the cost associated with an innovation, provided that each cost component can be delineated. Cost data allow potential adopters to assess the feasibility of adoption given their fiscal status. Three measures, the Cost of Implementing New Strategies (COINS), the TCU Treatment Cost Analysis Tool (TCAT), and the Costing Behavioral Interventions tool, use an Excel platform to identify cost components retrospectively or prospectively at each stage of the adoption process (Chamberlain, Brown, & Saldana, 2011; Flynn et al., 2009; Lehman et al., 2011; Ritzwoller, Sukhanova, Gaglio, & Glasgow, 2009; Saldana et al., in press). Specifically, the COINS was used to illustrate the costs of adopting and implementing Multidimensional Treatment Foster Care in a two-state randomized controlled trial (Saldana et al., in press). A fourth approach uses administrative data on service reimbursement rates to compare the cost-efficacy ratios across EBPs to aid adoption decision-making (McHugh et al., 2007). These techniques for estimating costs are recommended for policymakers, researchers, and potential adopters (Ritzwoller et al., 2009). The remaining two measures involve subjective ratings on the cost of the new innovation or of switching to the new innovation (Damanpour & Schneider, 2009; Peltier et al., 2009).
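The cost-efficacy comparison can be illustrated with a toy calculation. The sketch below assumes total cost is derived from per-session reimbursement rates and efficacy from a published effect-size estimate; all figures and EBP names are hypothetical and are not drawn from McHugh et al. (2007):

```python
# Sketch: comparing hypothetical cost-efficacy ratios across two EBPs.
# Cost = sessions x reimbursement rate; efficacy = effect-size estimate.
# All numbers are illustrative assumptions, not published data.
ebps = {
    "EBP-X": {"sessions": 12, "rate_per_session": 90.0, "effect_size": 0.60},
    "EBP-Y": {"sessions": 20, "rate_per_session": 75.0, "effect_size": 0.75},
}

def cost_per_effect(ebp):
    """Dollars spent per unit of treatment effect (lower is better)."""
    total_cost = ebp["sessions"] * ebp["rate_per_session"]
    return total_cost / ebp["effect_size"]

for name, ebp in sorted(ebps.items()):
    print(name, round(cost_per_effect(ebp), 2))
```

Here EBP-Y is more effective in absolute terms but less cost-efficient per unit of effect, which is the kind of trade-off such ratios surface for adoption decision-making.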

Perceived Evidence and Compatibility (n=4)

The perceived evidence that an innovation works and is compatible operationally with an organization can influence adoption decision-making (Damanpour & Schneider, 2009; Richardson, 2011). Three measures are unidimensional rating scales on innovation impact (i.e., evidence) and compatibility with staff knowledge, experiences, and needs (Damanpour & Schneider, 2009; Meyer & Goes, 1988; Richardson, 2011). An open-ended interview protocol based on Rogers's Diffusion of Innovation Theory can identify adoption and rejection experiences with innovations, and the compatibility of innovation characteristics with an organization's philosophy (Miller, 2001).

Innovation Fit with Users' Norms and Values (n=2)

The goodness-of-fit between an innovation and personal values goes beyond technical compatibility between an innovation and an individual adopter. Derived from the information technology literature, the two measures operationalize this “fit” using multi-item rating scales to assess the perceived job fit (i.e., how an innovation fits with job performance) and perceived long-term consequences (i.e., how an innovation will produce long-term changes in personal meanings and career opportunities) (Thompson, Higgins, & Howell, 1991). These two measures can be applied to other types of innovations.

Risk (n=3)

The perceived risk of adopting an innovation may refer to physical risk, propensity to risk-taking, and capacity for risk management. The three measures address these aspects using a combination of simple and in-depth approaches. In medical innovations, subjective level of injury and risk can be measured by an organizational rating scale to ensure an innovation's viability in hospital settings (Meyer & Goes, 1988). The Personal Risk Orientation uses multiple items to assess individual psychological attributes of risk-taking (e.g., competitiveness, creativity, confidence, achievement orientation) (Peltier et al., 2009). The Survey of Risk measures three organizational dimensions of risk. Specifically, perceived risk significantly differentiated adopters from non-adopters of EBPs, and both expected capacity to manage risk and past propensity to take risks were positively related to the propensity to adopt an EBP (Panzano & Roth, 2006).

Trialability, Relevance, and Ease (n=6)

All six measures emphasize adopters' perceptions of an innovation's relevance, ease of use, and trialability, whether the innovation is a medication, an EBP, or a technology. They consist of rating scales that are discretely mapped to these innovation characteristics.

Two measures focus on trialability. One measure defines trialability as an organization's participation status in a clinical trial network and whether the organization has an opportunity to experiment with a medication treatment offered by the network (Ducharme, Knudsen, Roman, & Johnson, 2007). Results showed that the trialability of an evidence-based medication significantly increased subsequent adoption of that same medication. A second measure uses two items to assess the degree to which information and communication technology skills (e.g., computer and software) can be experimented with or practiced prior to full adoption. Through discriminant analysis, however, trialability contributed less than voluntariness of use in discriminating five adopter groups (early adopters, late adopters, adopters who reinvented the innovation, de-adopters, and rejecters) based on usage rates of the skills (Richardson, 2011).

Two multi-item measures are related to relevance. The Task Relevance and Task Usefulness items measure an innovation's relevance to the day-to-day job performance of the user, which significantly contributed to the adoption of information system innovations (Yetton, Sharma, & Southon, 1999). The TCU Workshop Evaluation (WEVAL) is more expansive. It measures an innovation's relevance to client needs and staff comfort level (Bartholomew et al., 2007). In one study, higher ratings of relevance on the TCU WEVAL were related to greater future trial usage of an EBP (Bartholomew et al., 2007).

Ease can be measured by two multi-item measures. Ease of Use defines ease in multiple dimensions (e.g., ease of learning and controlling an innovation) (Davis, 1989). Perceived Usefulness defines ease based on user experience at their job (e.g., an innovation making the job easier and increasing productivity) (Davis, 1989). Empirical studies showed that Ease of Use served as an antecedent to Perceived Ease of Use, which in turn was significantly associated with current and self-predicted future adoption of information technology innovations (Davis, 1989).

4a. Individual: Staff

Overview of Measures

Twenty-six measures are related to six individual predictors of adoption, ranging from personal characteristics such as knowledge, skills, and attitudes to more external influences such as social connections with other adopters. Dimensions of individual predictors are mainly measured by multi-domain or multi-item scales.

Affiliation with Organizational Culture (n=1)

Since staff's affiliation with their organization is the building block of organizational culture (Glisson et al., 2008), staff who view their organizational culture as welcoming or exploring innovations are likely to adopt or explore the use of EBPs (Aarons, Hurlburt, & Horwitz, 2011). Organizational Culture Profile (OCP) uses a Q-sort approach to measure staff-culture fit by correlating staff's preferences with organizational values; this fit was characterized by an overlap in the “innovation” dimension (O'Reilly, Chatman, & Caldwell, 1991).

Attitudes, Motivation, Readiness towards Quality Improvement and Reward (n=4)

Using multi-domain and multi-item scales, the first three measures capture staff's attitudes towards innovations such as EBPs, QIs, and general organizational changes that may lead to perceived benefits. The Evidence-Based Practice Attitude Scale (EBPAS) and the SFTRC Course Evaluation measure clinicians' general attitudes towards EBPs based on their clinical experience, clinical innovativeness, and perceived appeal of EBPs (Aarons & Sawitzky, 2006; Haug, Shopshire, Tajima, Gruber, & Guydish, 2008). In one study, the SFTRC Course Evaluation was used to differentiate individual clinicians' readiness for change (e.g., pre-contemplation, preparation, action) (Haug et al., 2008). Applicable to diverse types of innovations, Readiness for Organizational Change measures individual readiness for change based on staff's opinions of their ability to innovate, their organizational support for change, and the appropriateness and benefits of change (Holt, 2007). In addition, the Texas Christian University (TCU) Workshop Assessment Follow-Up (WAFU) measures individual attitudes towards training based on resource and procedural factors, which were related to trial adoption of substance abuse treatment innovations (Bartholomew et al., 2007).

Feedback on Execution and Fidelity (n=1)

Adoption theories indicate that feedback to staff about their alignment with or deviation from the proper usage of innovations can influence adoption prior to full implementation (Wisdom et al., 2013). Measures specific to the adoption stage are lacking compared with those for the implementation stage. Nevertheless, the Alternative Stages of Concern Questionnaire (SoCQ) measures the importance of student feedback on improving teachers' awareness, concern, and adoption of educational innovations, for example, by identifying areas of an innovation that need change and enhancement (Cheung, Hattie, & Ng, 2001).

Individual Characteristics (n=7)

The seven measures focus on individual adopters' awareness, knowledge of innovations, and competence in current practice. These measures involve a variety of self-ratings by individuals.

Four measures are related to awareness. Personal Innovativeness consists of rating scales that assess an individual's propensity to innovate quickly and scientifically in the workplace (Leonard-Barton & Deschamps, 1988). Subjective Importance of the Task requires an individual to rank innovations based on perceived importance, which was associated with management support prior to adoption (Leonard-Barton & Deschamps, 1988). The Alternative Stages of Concern Questionnaire (SoCQ) (Cheung et al., 2001) and the Awareness-Concern-Interest Questionnaire (Steckler et al., 1992) measure individual awareness, concern, and interest about public health or educational innovations at each adoption stage.

The remaining three measures address knowledge/skill and competence in current practice, with consistent association with adoption. Job performance benchmarks (e.g., project goal reached) (Leonard-Barton & Deschamps, 1988) can be complemented by subjective ratings of competence (from beginner to expert) on a particular task (Leonard-Barton & Deschamps, 1988) and ratings on a class of knowledge about innovations (Peltier et al., 2009).

Managerial Characteristics (n=6)

Managers play a key role in staff's motivation to change and innovate. Of the six self-ratings, three measure qualifications, such as experience within a product class (Igbaria, 1993), and education and tenure at current position (Damanpour & Schneider, 2009). The other three measures identify managerial traits such as self-concepts (e.g., efficacy, esteem) and risk-taking dispositions (Judge, Thoresen, Pucik, & Welbourne, 1999), and entrepreneurial and innovative attitudes (Damanpour & Schneider, 2009). Empirical studies of technology or government management innovations showed that all six measures were positively associated with adoption and receptiveness to change, except for tenure, which had an inverted U-shaped relationship with adoption (Damanpour & Schneider, 2009).

Social Network (Individual's Personal Network) (n=7)

Unlike measures for inter-systems and inter-organizational social networks, measures for intra-organizational social networks among staff focus on person-to-person influence on adoption. Of the seven measures, four measures are rating scales that assess the positive impact of social influence on adoption, based on organizational or personal hierarchy. A Social Factors measure assesses influence from colleagues and superiors (Thompson et al., 1991), while Subjective Norm measures the impact of important others on the use of an innovation (Venkatesh & Davis, 2000). Work Group Integration measures the sense of belonging to a work group as an additional source of social influence on adoption (Kraut, Rice, Cool, & Fish, 1998). Social Network further accounts for the influence of peers in the same discipline or in similar organizations who are adopting an innovation (Talukder & Quazi, 2011).

The other three measures quantify peer influence on adoption within an organization. Subscribers within the Work Group counts the number of colleagues in the same work group who use an innovation, from which the proportion of adopters within the work group can be calculated (Kraut et al., 1998). Sociometric data on peer influence can be used to calculate contagion effects and network exposure scores, both a function of the number of social ties and nominations associated with an individual (Valente, 2005). These measures are useful in analyzing time until adoption and estimating the extent of peer influence on adoption, given the correct denominator of adopters.
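A network exposure score of the kind Valente describes reduces to the adopter share among an individual's nominated ties. A minimal sketch, with hypothetical names, ties, and adoption statuses:

```python
# Sketch: network exposure score = proportion of an individual's social
# ties who have already adopted the innovation (after Valente, 2005).
# Names, nominations, and adoption statuses below are hypothetical.
ties = {
    "alice": ["bob", "carol", "dan"],
    "bob": ["alice", "carol"],
}
adopters = {"carol", "dan"}

def exposure(person):
    """Share of a person's nominated contacts who are adopters."""
    contacts = ties.get(person, [])
    if not contacts:
        return 0.0
    return sum(1 for c in contacts if c in adopters) / len(contacts)

print(round(exposure("alice"), 2))  # 2 of alice's 3 ties have adopted
print(round(exposure("bob"), 2))    # 1 of bob's 2 ties has adopted
```

Tracking how exposure scores change as colleagues adopt is what makes these measures useful for time-to-adoption analyses, provided the denominator of potential adopters is correct.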

4b. Individual: Client

Readiness for Change/Capacity to Adopt (n=1)

Although the adoption literature on individual predictors mainly focuses on the innovation user's perspective (e.g., the clinician's), the perspective of the innovation's recipient — the client — is equally important. A potential measure is the Texas Christian University (TCU) Client Evaluation of Self and Treatment (CEST), which contains specific domains on client motivation and readiness for change (Simpson et al., 2007).

Measures for Multiple Levels of Predictors

Five measurement systems are notably broader in that they simultaneously assess predictors of adoption across multiple contextual levels. Two measurement systems employ both qualitative and quantitative approaches to characterize facilitators and barriers of EBP adoption. The Management of Ideas in the Creating Organizations is a system that measures facilitators and barriers of EBP adoption at the individual (individual provider characteristics, competencies, and desire for change) and organizational (conditions and contexts) levels (Gioia & Dziadosz, 2008). It can be re-administered to show changes in attitudes towards EBP adoption over time (Gioia & Dziadosz, 2008). The Concept System is an expansive approach that measures facilitators and barriers based on 14 clusters of 105 statements related to the individual (staff and consumers), innovation, organization, and external system. Its ratings can differentiate the relative importance and changeability of each cluster, such as insufficient EBP training as a barrier, or provision of ongoing EBP supervision as a facilitator (Aarons, Wells, Zagursky, Fettes, & Palinkas, 2009).

Three measures, previously noted within each contextual level above, are described here again because they contain distinct domains that cut across multiple levels of adoption predictors. The Reinventing Government (RG) and Alternative Service Delivery (ASD) surveys include a set of rating scales that assess the external environment, organizational size and structure, and individual managerial characteristics that are associated with adoption (Damanpour & Schneider, 2009). Two other measures are training evaluations that have been used to understand the relationships between organizational-level, innovation-level, and/or individual-level variables and innovation uptake: The San Francisco Treatment Research Center (SFTRC) Course Evaluation contains domains that measure both organizational barriers to adoption, as well as individual staff readiness for change and attitudes towards EBPs (Haug et al., 2008); and the Texas Christian University (TCU) Workshop Evaluation Form (WEVAL) contains domains that measure organizational training readiness and efforts to support innovation adoption, as well as the relevance of an innovation to staff's work (Bartholomew et al., 2007).

Discussion

This review identifies 118 measures for the 27 predictors of innovation adoption associated with a theoretical framework (Wisdom et al., 2013) and augments related reviews of measures, such as the work of Chaudoir et al. (2013), as well as large-scale measurement databases such as the Seattle Implementation Research Collaborative (SIRC) (2013) Instrument Review Project founded on the Consolidated Framework for Implementation Research (http://www.wiki.cf-ir.net/index.php?title=Main_Page) (Damschroder et al., 2009) and the Grid-Enabled Measures (GEM) (National Cancer Institute, 2013), which target broad implementation outcomes (e.g., acceptability, feasibility, fidelity, penetration, sustainability). Although there are conceptual overlaps (e.g., leadership) between predictors of adoption and predictors of broad implementation outcomes, this review shows that adoption-specific predictors and their measures are distinct from those associated with broad implementation outcomes.

The 118 measures are not distributed equally among the 27 predictors. Measures for predictors at the organizational, innovation, and individual staff levels make up the majority of the measures. Specific predictors such as organizational size, discrete innovation characteristics, and individual staff characteristics appear easier to measure based on the number of available measures. A key understudied predictor is client characteristics (Chaudoir et al., 2013), which has direct implications for the adoption of EBPs in mental health services, as client motivation and receptiveness to change could potentially deter or promote the initial adoption of EBPs by clinicians and by organizations.

Measures from outside the fields of health and mental health may need to be tailored for use in these areas. For instance, it may be enough to measure inter-organizational relationships in the corporate world based on market share. However, to measure relationships among non-profit organizations and their social impact on EBP adoption, we need to know which non-profits to include, their sources of knowledge about EBPs, and the greater political environment that could potentially impact such relationships. Further, the specificity of innovation characteristics means factors influential to adoption may also differ by the type of innovation. Overall, innovations that require fewer specialized skills and show clear and measurable benefit are more strongly related to adoption. This review suggests that some measures can be directly imported (e.g., measures for organizational culture), while other measures (e.g., rating scales for ease of use) will need to have their content modified and their psychometric properties re-established. Importantly, regardless of measurement issues, whether predictors of adoption from fields other than health continue to be important drivers of adoption of mental health EBPs will need to be established.

Multi-dimensional predictors are more challenging to measure because they require in-depth measures or diverse methods that range from surveys/semi-structured interviews of key informants to expert panel ratings. These predictors include organizational culture, climate, readiness for change, leadership, operational size and structure, facilitators and barriers of adoption, and risks associated with an innovation. Measures that are linked to empirical adoption data can guide the selection of measures. Having a specific dimension of interest (e.g., leadership style vs. leadership decision-making) that is hypothesized to influence adoption can also help prioritize measures.

Multiple definitions of adoption predictors pose additional challenges to the measurement field. For instance, what are the key characteristics of an effective leader who fosters adoption of EBPs? While many leadership measures have been developed, few existing measures have been compared to one another. Without direct comparisons of leadership measures that assess different aspects of leadership (e.g., the MLQ, Quality Orientation of the Host Organization, Management Practice Survey Tool), it is difficult to determine which leadership characteristics are important for adoption of EBPs (Chaudoir et al., 2013). A related challenge is the lack of comparisons between related predictors, for example, between organizational culture and other basic organizational characteristics (e.g., leadership). Similar predictors may also present potential redundancy in measurement — cost vs. risk, fit with individual work demands vs. task relevance and usefulness of an innovation, etc. To resolve such redundancy, it is important to gain a clear understanding of the level of analysis (e.g., individual data versus aggregating individual data to form organizational descriptors) and the nuances between similar predictors that may have different theoretical and empirical bases.

Adoption is multifaceted and involves many predictors that may relate to one another in complex ways (Aarons et al., 2011; Wisdom et al., 2013). For example, innovation characteristics inherently reflect individual perceptions of those characteristics (Damanpour & Schneider, 2009; Peltier et al., 2009; Richardson, 2011). An EBP can objectively be characterized as long-term based on its length of treatment, or as individual vs. group treatment based on its modality. Conversely, its trialability, relevance, and observability reflect an adopter's perceptions and evaluations of the EBP (i.e., innovation fit). Thus, innovation characteristics (e.g., modality, skills required, target clients) and individual characteristics (e.g., adopter's skill level, innovativeness, appraisal of the EBP) are likely inter-related and analyses of adoption should reflect these potential relationships as well as evaluate under what circumstances (e.g., contextual variability or adoption stage) predictors relate to one another. In practice, while it is neither feasible nor even desirable to measure all potential predictors, selection of measures should be thoughtfully guided by existing or local knowledge about the context of adoption and variables that are theorized to have the biggest impact on adoption.

Some measures appear to have a circular relationship with adoption, such as the TCU WEVAL and the EBPAS. The WEVAL measures organizational support and resources, which can simultaneously represent adoption predictors or adoption outcomes. Similarly, the EBPAS measures clinician attitudes toward EBPs. Clinicians with positive attitudes are more likely to adopt an EBP, although adopting an EBP can subsequently increase positive attitudes toward EBPs. To avoid circularity, researchers must be clear on the intended purpose and timing of measures that have more than one application.

The modifiability of predictors varies across the four levels of predictors. External system predictors are generally not amenable to modification (e.g., urbanicity). In addition, organizational and individual predictors can be difficult to change, and innovation predictors may be limited in their modifiability because of their interactions with individual predictors. Clearly not all predictors can change and not all modifiable predictors change at the same pace. To improve adoption, research must focus on measuring predictors that are modifiable and such measures can then be used to assess change over time (e.g., organizational culture, relevance of an innovation). Future research must also advance the measurement of these predictors (Chaudoir et al., 2013) to make their measures accessible, affordable, and useful across multiple adoption efforts.

Implications for EBP Adoption — A State System Perspective

This critique of measures for 27 possible predictors of adoption outlined in Wisdom et al. (2013) suggests the following implications for states' efforts to improve the adoption of EBPs:

1. Valid, reliable, and theory-driven measures are critical to understanding characteristics related to EBP adoption

Introducing EBPs is more complex than installing discrete innovations such as computer software or a hand-washing practice (Comer & Barlow, 2013). Measuring predictors of EBP adoption is correspondingly complex, because adoption is multi-level (organization, staff, client), multi-phasic (initial training, supervision, and ongoing consultation), and costly due to training, staff time, and clinical revenue lost while staff are being trained. Appropriate measures help to accurately determine the drivers of EBP adoption, as well as how those drivers differ by the specific EBP (e.g., learning a new treatment vs. modifying an existing treatment) and by the adopter environment (e.g., organizational social context, operational structure).

2. Improving measurement efforts enhances development of adoption interventions

A state can potentially characterize and differentiate non-adopters from adopters, based on measurable differences in adoption facilitators. EBP roll-out strategies can then be customized to distinct adopter profiles. To illustrate, since 2011, the New York State Office of Mental Health (NYSOMH) has been offering technical assistance in business and clinical practices for child-serving mental health clinics through the Clinic Technical Assistance Center (CTAC). To understand possible drivers of clinic adoption patterns of the trainings (e.g., by type, number, and intensity of trainings), state administrative and fiscal data can be used to describe clinic operational size and structure (e.g., gain/loss per unit of service, productivity), clinic type (e.g., hospital vs. community-based), and client need (e.g., proportion of clients with persistent and severe mental illness). Findings will help the state allocate resources to improve adoption when appropriate, for example, by encouraging non-adopters who are financially strained to access free or affordable training options on EBPs. Characterizing non-adopters through better measurement can also help the state tailor future roll-outs more efficiently.
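As an illustration of the kind of descriptive comparison such administrative data could support, the sketch below contrasts adopters and non-adopters on a fiscal indicator. All clinic records, field names, and values are invented for illustration and are not drawn from NYSOMH data.

```python
# Hypothetical sketch: compare adopters vs. non-adopters on an
# administrative indicator. Records and field names are invented.
import statistics

clinics = [
    {"type": "hospital",  "margin_per_service": -2.1, "smi_share": 0.42, "adopted": True},
    {"type": "community", "margin_per_service":  1.4, "smi_share": 0.18, "adopted": False},
    {"type": "community", "margin_per_service": -0.6, "smi_share": 0.35, "adopted": True},
    {"type": "hospital",  "margin_per_service":  0.9, "smi_share": 0.22, "adopted": False},
]

def profile(records, field):
    """Mean of `field` for adopters (True) vs. non-adopters (False)."""
    groups = {True: [], False: []}
    for r in records:
        groups[r["adopted"]].append(r[field])
    return {adopted: statistics.mean(vals) for adopted, vals in groups.items()}

# In this invented sample, adopters average a negative fiscal margin,
# suggesting (hypothetically) where free training options might be targeted.
profile(clinics, "margin_per_service")
```

A real analysis would of course use the state's administrative and fiscal data files and adjust for clinic type and client need rather than comparing raw means.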

3. Equal emphasis on measuring predictors of adoption, non-adoption, delayed-adoption, and de-adoption

A glaring omission in the adoption measurement literature is the absence of any focus on failure to adopt and on de-adoption (Panzano & Roth, 2006). Indiscriminate adoption is not always an ideal outcome. If an organization experiences innovation fatigue or financial challenges, or is understaffed, it may exercise caution by resisting adoption in order to reduce costs and maintain productivity. Important lessons can be learned from such failures to adopt. To understand the rational decision-making behind non-adoption, delayed adoption, or de-adoption, a state needs appropriate measures for the features related to each of these actions.

Conclusion

As healthcare reform increasingly demands accountability and quality measures for tracking change processes, states will need to increase the use of EBPs in their state-funded mental health systems. To do this efficiently will require a clear understanding of the drivers of adoption and the ability to measure these drivers accurately. Although a reasonably “young” scientific discipline (Stamatakis, Norton, Stirman, Melvin, & Brownson, 2013), implementation science needs to focus on measurement to create a common language and to develop interventions that promote strategic uptake of EBPs.

Acknowledgments

This study was funded by the National Institute of Mental Health (P30 MH090322: Advanced Center for State Research to Scale up EBPs for Children, PI: Hoagwood).

Appendix 1. Descriptions of 27 adoption predictors (Wisdom et al., 2013)

Predictors and descriptions by contextual level

1. External System
  • External Environment: Extra-organizational environment's influences on adoption.

  • Government Policy and Regulation: Government policy and regulation enacted that have direct implications for adoption.

  • Regulation with Financial Incentives: Regulation associated with financial incentives and reward systems for adoption.

  • Social Network (Inter-systems): Social linkages among external systems that influence adoption.

2. Organization
  • Absorptive Capacity: Organizational capacity to utilize innovative and existing knowledge.

  • Leadership and Champion of Innovations: Organizational leadership style that champions innovation adoption.

  • Network with Innovation Developers and Consultants: Organizational relationships and collaboration with innovation developers and consultants.

  • Norms, Values, and Cultures: Norms, values, and cultures that define an organization.

  • Operational Size and Structure: Organizational operation resources, size, and structure.

  • Social Climate: Social climate of learning and pressure.

  • Social Network (Inter-organizations): Social linkages from one organization to another.

  • Training Readiness and Efforts: Organizational training and efforts related to adoption.

  • Traits and Readiness for Change: Organizational traits and readiness for change related to adoption.

3. Innovation
  • Complexity, Relative Advantage, and Observability: Perceived complexity, relative advantage over other innovations or existing practice, and the visibility of an innovation.

  • Cost-efficacy and Feasibility: Financial costs and feasibility associated with an innovation.

  • Evidence and Compatibility: Perceived evidence that an innovation works and that it is compatible with existing practice.

  • Facilitators and Barriers: Perceived facilitators and barriers, which can apply to external system, organization, innovation, and/or individual levels.

  • Innovation Fit with Users' Norms and Values: Perceived goodness-of-fit between an innovation and one's norms and values.

  • Risk: Perceived risk involved in adopting an innovation.

  • Trialability, Relevance, and Ease: Perceived flexibility of an innovation to be experimented with, relevance to one's practice, and ease of adoption.

4. Individual
4a. Staff
  • Affiliation with Organizational Culture: Fit between individual staff and organizational culture.

  • Attitudes, Motivations, and Readiness towards Quality Improvement and Reward: Individual attitudes, motivations, and readiness towards change, adoption, quality improvement, and associated reward.

  • Feedback on Execution and Fidelity: Individualized feedback on the execution and fidelity of adopting an innovation.

  • Individual Characteristics: Individual characteristics such as awareness of innovations, skills, knowledge, and experience with adoption.

  • Managerial Characteristics: Manager characteristics such as education, knowledge, and innovative attitudes that influence adoption.

  • Social Network (Individual's Personal Network): Social linkages fostered among individual staff.

4b. Client
  • Readiness for Change/Capacity to Adopt: Readiness and capacity of a client/consumer, the recipient of an innovation, to adopt.

Appendix 2. Databases and search strategy (Wisdom et al., 2013)

Ovid Medline, PsycINFO, and Web of Science were the major electronic databases used for Medical Subject Heading (MeSH) and article keyword searches.

  1. Ovid Medline provided the first and primary source of literature. First, exploratory searches were conducted using these individual MeSH terms:

    • DIFFUSION OF INNOVATION (13,774 hits);

    • EVIDENCE-BASED PRACTICE (51,940 hits);

    • EVIDENCE-BASED MEDICINE (47,305 hits), a subset of EVIDENCE-BASED PRACTICE;

    • MODELS, THEORETICAL (1,120,350 hits).

    Next, the following MeSH terms were combined with the AND Boolean operator:

    • DIFFUSION OF INNOVATION AND EVIDENCE-BASED PRACTICE (1,781 hits);

    • DIFFUSION OF INNOVATION AND MODELS, THEORETICAL (1,299 hits);

    • DIFFUSION OF INNOVATION AND EVIDENCE-BASED MEDICINE (1,397 hits).

    Because theoretical frameworks are the focus of this review, further combinations of the following MeSH terms were searched using the AND Boolean operator:

    • DIFFUSION OF INNOVATION AND EVIDENCE-BASED PRACTICE AND MODELS, THEORETICAL (320 hits);

    • DIFFUSION OF INNOVATION AND EVIDENCE-BASED MEDICINE AND MODELS, THEORETICAL (237 hits).

    Since EVIDENCE-BASED MEDICINE is a subset of EVIDENCE-BASED PRACTICE in the MeSH grouping, the search narrowed down to DIFFUSION OF INNOVATION AND EVIDENCE-BASED PRACTICE AND MODELS, THEORETICAL.

  2. We used PsycINFO to supplement the original pool of literature, using a similar MeSH search logic:

    • ADOPTION (15,535 hits);

    • EVIDENCE BASED PRACTICE (8,940 hits);

    • INNOVATION (3,995 hits);

    • MODELS (65,923 hits);

    • THEORIES (91,148 hits).

    Since ADOPTION as a MeSH term in PsycINFO is quite broad and often refers to the child welfare taxonomy, we used the AND Boolean operator for the following combinations:

    • ADOPTION AND EVIDENCE BASED PRACTICE (291 hits);

    • ADOPTION AND INNOVATION (339 hits).

    Next, to add a theoretical focus to this pool, we used the AND Boolean operator for the following combinations:

    • ADOPTION AND EVIDENCE BASED PRACTICE AND INNOVATION (10 hits);

    • ADOPTION AND EVIDENCE BASED PRACTICE AND MODELS (11 hits);

    • ADOPTION AND EVIDENCE BASED PRACTICE AND THEORIES (9 hits);

    • ADOPTION AND INNOVATION AND MODELS (18 hits);

    • ADOPTION AND INNOVATION AND THEORIES (6 hits).

    All articles from these last searches in PsycINFO were screened for overlaps.

  3. To gain a broader perspective on other fields, we used Web of Science to expand the pool of literature obtained from Ovid Medline and PsycINFO, using the following topic searches:

    • ADOPTION (45,440 hits);

    • DIFFUSION (425,401 hits);

    • EVIDENCE-BASE (47,294 hits);

    • INNOVATION (76,818 hits);

    • MODEL (3,894,846 hits);

    • THEORY (1,172,347 hits).

Given these large yields, we used the Boolean operator AND to combine the following topic searches:

  • ADOPTION AND EVIDENCE-BASE (899 hits);

  • ADOPTION AND INNOVATION (4,209 hits);

  • ADOPTION AND DIFFUSION (3,059 hits);

  • ADOPTION AND EVIDENCE-BASE AND INNOVATION (135 hits).

To narrow the focus on theoretical models, further combinations using the Boolean operator AND were used:

  • ADOPTION AND INNOVATION AND MODEL (48 hits);

  • ADOPTION AND INNOVATION AND THEORY (194 hits);

  • ADOPTION AND EVIDENCE-BASE AND MODEL (23 hits);

  • ADOPTION AND EVIDENCE-BASE AND THEORY (80 hits);

  • ADOPTION AND INNOVATION AND MODEL AND THEORY (439 hits);

  • ADOPTION AND EVIDENCE-BASE AND MODEL AND THEORY (42 hits).

The last-step searches from these three databases formed the preliminary pool of literature. All articles were screened for overlaps within and between databases, which yielded 332 unique hits. To systematically zero in on specific adoption theories and their measures, we screened article keywords so that at least one of the following keywords was included:

  • ADOPTION;

  • ADOPTION OF INNOVATION;

  • CONCEPTUAL MODEL;

  • DIFFUSION;

  • EVIDENCE-BASED INTERVENTIONS;

  • EVIDENCE-BASED PRACTICE;

  • FRAMEWORK;

  • INNOVATION;

  • THEORY.

In addition, article titles were screened so that at least one of the following words was included:

  • ADOPTION;

  • DIFFUSION;

  • DIFFUSION OF INNOVATION;

  • DIFFUSION OF INNOVATIONS;

  • EVIDENCE-BASED INTERVENTIONS;

  • EVIDENCE-BASED MENTAL HEALTH TREATMENTS;

  • EVIDENCE-BASED PRACTICE;

  • FRAMEWORK;

  • INNOVATION;

  • INNOVATION ADOPTION;

  • MODEL;

  • MULTILEVEL;

  • RESEARCH-BASED PRACTICE;

  • THEORETICAL MODELS.

Finally, we screened the titles of the adoption theories within the texts of the articles so that at least one of the following words was included:

  • ADOPTION;

  • DIFFUSION;

  • EVIDENCE-BASED;

  • EVIDENCE-BASED PRACTICE;

  • FRAMEWORK;

  • INNOVATION;

  • MODEL.
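The three screening stages above (article keywords, article titles, and theory titles in the text) can be sketched as a simple conjunctive filter. The article representation below (dictionary fields `keywords`, `title`, `theory_titles`) is hypothetical, and the review's actual screening was performed by the authors rather than by code.

```python
# Sketch of the three-stage screening logic described above.
# The article fields are invented for illustration.

KEYWORD_TERMS = {
    "adoption", "adoption of innovation", "conceptual model", "diffusion",
    "evidence-based interventions", "evidence-based practice",
    "framework", "innovation", "theory",
}

TITLE_TERMS = {
    "adoption", "diffusion", "diffusion of innovation",
    "diffusion of innovations", "evidence-based interventions",
    "evidence-based mental health treatments", "evidence-based practice",
    "framework", "innovation", "innovation adoption", "model",
    "multilevel", "research-based practice", "theoretical models",
}

THEORY_TITLE_TERMS = {
    "adoption", "diffusion", "evidence-based", "evidence-based practice",
    "framework", "innovation", "model",
}

def passes_screen(article):
    """Return True if an article survives all three screening stages."""
    keywords = {k.lower() for k in article["keywords"]}
    title = article["title"].lower()
    theory_titles = [t.lower() for t in article["theory_titles"]]
    return (
        bool(keywords & KEYWORD_TERMS)                  # stage 1: article keywords
        and any(term in title for term in TITLE_TERMS)  # stage 2: article title
        and any(term in t                               # stage 3: theory titles
                for t in theory_titles
                for term in THEORY_TITLE_TERMS)
    )

article = {
    "keywords": ["Diffusion", "Evidence-Based Practice"],
    "title": "Innovation adoption in mental health services",
    "theory_titles": ["Diffusion of Innovations Theory"],
}
passes_screen(article)  # → True
```

An article must pass all three stages to enter the final pool; failing any one stage (e.g., no qualifying keyword) excludes it.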

Contributor Information

Ka Ho Brian Chor, New York University Child Study Center, New York University School of Medicine.

Jennifer P. Wisdom, George Washington University.

Su-Chin Serene Olin, New York University Child Study Center, New York University School of Medicine.

Kimberly E. Hoagwood, New York University Child Study Center, New York University School of Medicine.

Sarah M. Horwitz, New York University Child Study Center, New York University School of Medicine.

References

  1. Aarons GA. Transformational and transactional leadership: Association with attitudes toward evidence-based practice. Psychiatric Services. 2006;57(8):1162–1169. doi: 10.1176/appi.ps.57.8.1162. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Aarons GA, Hurlburt M, Horwitz S. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(1):4–23. doi: 10.1007/s10488-010-0327-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Aarons GA, Sawitzky AC. Organizational culture and climate and mental health provider attitudes toward evidence-based practice. Psychological Services. 2006;3(1):61–72. doi: 10.1037/1541-1559.3.1.61. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Aarons GA, Wells RS, Zagursky K, Fettes DL, Palinkas LA. Implementing evidence-based practice in community mental health agencies: A multiple stakeholder analysis. American Journal of Public Health. 2009;99(11):2087–2095. doi: 10.2105/ajph.2009.161711. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Bartholomew NG, Joe GW, Rowan-Szal GA, Simpson DD. Counselor assessments of training and adoption barriers. Journal of Substance Abuse Treatment. 2007;33(2):193–199. doi: 10.1016/j.jsat.2007.01.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Bass BM, Avolio BJ, Atwater L. The transformational and transactional leadership of men and women. Applied Psychology. 1996;45(1):5–34. doi: 10.1111/j.1464-0597.1996.tb00847.x. [DOI] [Google Scholar]
  7. Burns LR, Wholey DR. Adoption and abandonment of matrix management programs: Effects of organizational characteristics and interorganizational networks. Academy of Management Journal. 1993;36(1):106–138. [PubMed] [Google Scholar]
  8. Chamberlain P, Brown CH, Saldana L. Observational measure of implementation progress in community based settings: The Stages of implementation completion (SIC) Implementation Science. 2011;6 doi: 10.1186/1748-5908-6-116. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Chaudoir SR, Dugan AG, Barr CH. Measuring factors affecting implementation of health innovations: A systematic review of structural, organizational, provider, patient, and innovation level measures. Implementation Science. 2013;8(1):22. doi: 10.1186/1748-5908-8-22. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Cheung D, Hattie J, Ng D. Reexamining the Stages of Concern Questionnaire: A test of alternative models. The Journal of Educational Research. 2001;94(4):226–236. [Google Scholar]
  11. Cohen WM, Levinthal DA. Absorptive capacity - A new perspective on learning and innovation. Administrative Science Quarterly. 1990;35(1):128–152. [Google Scholar]
  12. Comer JS, Barlow DH. The occasional case against broad dissemination and implementation: Retaining a role for specialty care in the delivery of psychological treatments. American Psychologist, Advance online publication. 2013 doi: 10.1037/a0033582. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Cooke RA, Lafferty JC. Organizational culture inventory. Plymouth, MI: Human Synergistics International; 1994. [Google Scholar]
  14. Damanpour F, Schneider M. Characteristics of innovation and innovation adoption in public organizations: Assessing the role of managers. Journal of Public Administration Research and Theory. 2009;19(3):495–522. doi: 10.1093/jopart/mun021. [DOI] [Google Scholar]
  15. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science. 2009;4(50) doi: 10.1186/1748-5908-4-50. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. Management Information Systems Quarterly. 1989;13(3):319–340. [Google Scholar]
  17. Dirksen CD, Ament AH, Go PMN. Diffusion of six surgical endoscopic procedures in the Netherlands. Stimulating and restraining factors. Health Policy. 1996;37(2):91–104. doi: 10.1016/s0168-8510(96)90054-8. [DOI] [PubMed] [Google Scholar]
  18. Ducharme LJ, Knudsen HK, Roman PM, Johnson JA. Innovation adoption in substance abuse treatment: Exposure, trialability, and the Clinical Trials Network. Journal of Substance Abuse Treatment. 2007;32(4):321–329. doi: 10.1016/j.jsat.2006.05.021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Edwards JR, Knight DK, Broome KM, Flynn PM. The development and validation of a transformational leadership survey for substance use treatment programs. Substance Use & Misuse. 2010;45(9):1279–1302. doi: 10.3109/10826081003682834. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Feldstein AC, Glasgow RE. A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. Joint Commission Journal on Quality and Patient Safety. 2008;34(4):228–243. doi: 10.1016/s1553-7250(08)34030-6. [DOI] [PubMed] [Google Scholar]
  21. Finnerty M, Leckman-Westin E, Kealey E, Wisdom JP, Olin S, Horwitz S, Hoagwood K. Impact of state incentives on mental health agency decisions to participate in a large state CQI initiative. Paper presented at the 5th Annual NIH Conference on the Science of Dissemination and Implementation; Bethesda, MD. 2012. [Google Scholar]
  22. Finnerty M, Rapp C, Bond G, Lynde D, Ganju V, Goldman H. The State Health Authority Yardstick (SHAY) Community Mental Health Journal. 2009;45(3):228–236. doi: 10.1007/s10597-009-9181-z. [DOI] [PubMed] [Google Scholar]
  23. Fitzgerald L, Ferlie E, Wood M, Hawkins C. Interlocking interactions, the diffusion of innovations in health care. Human Relations. 2002;55(12):1429–1449. doi: 10.1177/001872602128782213. [DOI] [Google Scholar]
  24. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network; 2005. [Google Scholar]
  25. Flynn PM, Broome KM, Beaston-Blaakman A, Knight DK, Horgan CM, Shepard DS. Treatment Cost Analysis Tool (TCAT) for estimating costs of outpatient treatment services. Drug Alcohol Depend. 2009;100(1–2):47–53. doi: 10.1016/j.drugalcdep.2008.08.015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Fujimoto K, Valente TW, Pentz MA. Network structural influences on the adoption of evidence-based prevention in communities. Journal of Community Psychology. 2009;37(7):830–845. doi: 10.1002/Jcop.20333. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Ganju V. Implementation of evidence-based practices in state mental health systems: Implications for research and effectiveness studies. Schizophr Bull. 2003;29(1):125–131. doi: 10.1093/oxfordjournals.schbul.a006982. [DOI] [PubMed] [Google Scholar]
  28. Gatignon H, Robertson TS. Technology diffusion: An empirical test of competitive effects. The Journal of Marketing. 1989;53(1):35–49. [Google Scholar]
  29. Gioia D, Dziadosz G. Adoption of evidence-based practices in community mental health: A mixed-method study of practitioner experience. Community Mental Health Journal. 2008;44(5):347–357. doi: 10.1007/s10597-008-9136-9. [DOI] [PubMed] [Google Scholar]
  30. Glisson C, Landsverk J, Schoenwald S, Kelleher K, Hoagwood KE, Mayberg S, Green P. Assessing the organizational social context (OSC) of mental health services: Implications for research and practice. Adm Policy Ment Health. 2008;35(1-2):98–113. doi: 10.1007/s10488-007-0148-5. [DOI] [PubMed] [Google Scholar]
  31. Glisson C, Schoenwald SK. The ARC organizational and community intervention strategy for implementing evidence-based children's mental health treatments. Ment Health Serv Res. 2005;7(4):243–259. doi: 10.1007/s11020-005-7456-1. [DOI] [PubMed] [Google Scholar]
  32. Haug NA, Shopshire M, Tajima B, Gruber V, Guydish J. Adoption of evidence-based practices among substance abuse treatment providers. Journal of Drug Education. 2008;38(2):181–192. doi: 10.2190/De.38.2.F. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Igbaria M. User acceptance of microcomputer technology: An empirical test. Omega. 1993;21(1):73–90. doi: 10.1016/0305-0483(93)90040-r. [DOI] [Google Scholar]
  34. Ingersoll GL, Kirsch JC, Merk SE, Lightfoot J. Relationship of organizational culture and readiness for change to employee commitment to the organization. Journal of Nursing Administration. 2000;30(1):11–20. doi: 10.1097/00005110-200001000-00004. [DOI] [PubMed] [Google Scholar]
  35. Judge TA, Thoresen CJ, Pucik V, Welbourne TM. Managerial coping with organizational change: A dispositional perspective. Journal of Applied Psychology. 1999;84(1):107–122. doi: 10.1037/0021-9010.84.1.107. [DOI] [Google Scholar]
  36. Knudsen HK, Abraham AJ. Perceptions of the state policy environment and adoption of medications in the treatment of substance use disorders. Psychiatric Services. 2012;63(1):19–25. doi: 10.1176/appi.ps.201100034. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Knudsen HK, Roman PM. Modeling the use of innovations in private treatment organizations: The role of absorptive capacity. Journal of Substance Abuse Treatment. 2004;26(1):51–59. doi: 10.1016/s0740-5472(03)00158-2. [DOI] [PubMed] [Google Scholar]
  38. Kraut RE, Rice RE, Cool C, Fish RS. Varieties of social influence: The role of utility and norms in the success of a new communication medium. Organization Science. 1998;9(4):437–453. [Google Scholar]
  39. Lehman WEK, Simpson DD, Knight DK, Flynn PM. Integration of treatment innovation planning and implementation: Strategic process models and organizational challenges. Psychology of Addictive Behaviors. 2011;25(2):252–261. doi: 10.1037/a0022682. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Leonard-Barton D, Deschamps I. Managerial influence in the implementation of new technology. Management Science. 1988;34(10):1252–1265. [Google Scholar]
  41. Lind MR, Zmud RW. The influence of a convergence in understanding between technology providers and users on information technology innovativeness. Organization Science. 1991;2(2):195–217. [Google Scholar]
  42. McConnell KJ, Hoffman KA, Quanbeck A, McCarty D. Management practices in substance abuse treatment programs. Journal of Substance Abuse Treatment. 2009;37(1):79–89. doi: 10.1016/j.jsat.2008.11.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. McHugh RK, Otto MW, Barlow DH, Gorman JM, Shear MK, Woods SW. Cost-efficacy of individual and combined treatments for panic disorder. Journal of Clinical Psychiatry. 2007;68(7):1038–1044. doi: 10.4088/jcp.v68n0710. [DOI] [PubMed] [Google Scholar]
  44. Meyer AD, Goes JB. Organizational assimilation of innovations - A multilevel contextual analysis. Academy of Management Journal. 1988;31(4):897–923. [Google Scholar]
  45. Miller RL. Innovation in HIV prevention: Organizational and intervention characteristics affecting program adoption. American Journal of Community Psychology. 2001;29(4):621–647. doi: 10.1023/A:1010426218639. [DOI] [PubMed] [Google Scholar]
  46. Moos RH. Work Environment Scale manual. Palo Alto, CA: Consulting Psychologists Press; 1981. [Google Scholar]
  47. Moos RH, Moos BS. The staff workplace and the quality and outcome of substance abuse treatment. J Stud Alcohol. 1998;59(1):43–51. doi: 10.15288/jsa.1998.59.43. [DOI] [PubMed] [Google Scholar]
  48. National Cancer Institute. Grid-Enabled Measure Database. Measures. 2013 Retrieved January 3, 2014, from https://www.gem-beta.org/public/Home.aspx?cat=0.
  49. O'Reilly CA, Chatman J, Caldwell DF. People and organizational culture: A profile comparison approach to assessing person-organization fit. Academy of Management Journal. 1991;34(3):487–516. doi: 10.2307/256404. [DOI] [Google Scholar]
  50. Panzano PC, Roth D. The decision to adopt evidence-based and other innovative mental health practices: Risky business? Psychiatric Services. 2006;57(8):1153–1161. doi: 10.1176/appi.ps.57.8.1153. [DOI] [PubMed] [Google Scholar]
  51. Patient Protection and Affordable Care Act of 2010, Pub L No 111-148, 124 Stat 119–1205 (2010). [Google Scholar]
  52. Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review - A new method of systematic review designed for complex policy interventions. Journal of Health Services Research and Policy. 2005;10(Suppl 1):21–34. doi: 10.1258/1355819054308530. [DOI] [PubMed] [Google Scholar]
  53. Peltier JW, Schibrowsky JA, Zhao YS. Understanding the antecedents to the adoption of CRM technology by small retailers entrepreneurs vs owner-managers. International Small Business Journal. 2009;27(3):307–336. doi: 10.1177/0266242609102276. [DOI] [Google Scholar]
  54. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Hensley M. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(2):65–76. doi: 10.1007/s10488-010-0319-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Proctor EK, Brownson RC. Measurement issues in dissemination and implementation research. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and Implementation Research in Health: Translating Science to Practice. New York, NY: Oxford University Press; 2012. pp. 261–280. [Google Scholar]
  56. Rauktis ME, McCarthy S, Krackhardt D, Cahalane H. Innovation in child welfare: The adoption and implementation of Family Group Decision Making in Pennsylvania. Children and Youth Services Review. 2010;32(5):732–739. doi: 10.1016/j.childyouth.2010.01.010. [DOI] [Google Scholar]
  57. Ravichandran T. Swiftness and intensity of administrative innovation adoption: An empirical study of TQM in information systems. Decision Sciences. 2000;31(3):691–724. doi: 10.1111/j.1540-5915.2000.tb00939.x. [DOI] [Google Scholar]
  58. Richardson JW. Technology adoption in Cambodia: Measuring factors impacting adoption rates. Journal of International Development. 2011;23(5):697–710. doi: 10.1002/Jid.1661. [DOI] [Google Scholar]
  59. Ritzwoller DP, Sukhanova A, Gaglio B, Glasgow RE. Costing behavioral interventions: A practical guide to enhance translation. Annals of Behavioral Medicine. 2009;37(2):218–227. doi: 10.1007/s12160-009-9088-5. [DOI] [PubMed] [Google Scholar]
  60. Saldana L, Chamberlain P, Bradford WD, Campbell M, Landsverk J. The Cost of Implementing New Strategies (COINS): A method for mapping implementation resources using the Stages of Implementation Completion. Children and Youth Services Review. doi: 10.1016/j.childyouth.2013.10.006. in press. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Savicki V, Cooley E. The relationship of work environment and client contact to burnout in mental health professionals. Journal of Counseling & Development. 1987;65(5):249. [Google Scholar]
  62. Schoenwald SK, Hoagwood K. Effectiveness, transportability, and dissemination of interventions: what matters when? Psychiatric Services. 2001;52(9):1190–1197. doi: 10.1176/appi.ps.52.9.1190. [DOI] [PubMed] [Google Scholar]
  63. Schoenwald SK, Sheidow AJ, Letourneau EJ, Liao JG. Transportability of multisystemic therapy: Evidence for multilevel influences. Ment Health Serv Res. 2003;5(4):223–239. doi: 10.1023/a:1026229102151. [DOI] [PubMed] [Google Scholar]
  64. Seattle Implementation Research Collaborative. Instrument Review Project: A comprehensive review of dissemination and implementation science instruments. 2013 Retrieved January 3, 2014, from http://www.seattleimplementation.org/sirc-projects/sirc-instrument-project/
  65. Shortell SM, Marsteller JA, Lin M, Pearson ML, Wu SY, Mendel P, Rosen M. The role of perceived team effectiveness in improving chronic illness care. Medical Care. 2004;42(11):1040–1048. doi: 10.1097/00005650-200411000-00002. [DOI] [PubMed] [Google Scholar]
  66. Sieveking N, Bellet W, Marston RC. Employees' views of their work experience in private hospital. Health Services Management Research. 1993;6(2):129–138. doi: 10.1177/095148489300600207. [DOI] [PubMed] [Google Scholar]
  67. Simpson DD, Joe GW, Rowan-Szal GA. Linking the elements of change: Program and client responses to innovation. Journal of Substance Abuse Treatment. 2007;33(2):201–209. doi: 10.1016/j.jsat.2006.12.022. [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Stamatakis K, Norton W, Stirman S, Melvin C, Brownson R. Developing the next generation of dissemination and implementation researchers: Insights from initial trainees. Implementation Science. 2013;8(1):29–35. doi: 10.1186/1748-5908-8-29. [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Steckler A, Goodman RM, McLeroy KR, Davis S, Koch G. Measuring the diffusion of innovative health promotion programs. American Journal of Health Promotion. 1992;6(3):214–224. doi: 10.4278/0890-1171-6.3.214. [DOI] [PubMed] [Google Scholar]
  70. Stevenson K, Baker R. Investigating organisational culture in primary care. Quality in Primary Care. 2005;13(4):191–200. [Google Scholar]
  71. Talukder M, Quazi A. The impact of social influence on individuals' adoption of innovation. Journal of Organizational Computing and Electronic Commerce. 2011;21(2):111–135. doi: 10.1080/10919392.2011.564483. [DOI] [Google Scholar]
  72. Thompson RL, Higgins CA, Howell JM. Personal computing: Toward a conceptual model of utilization. Management Information Systems Quarterly. 1991;15(1):125–143. [Google Scholar]
  73. Valente TW. Social network thresholds in the diffusion of innovations. Social Networks. 1996;18(1):69–89. doi: 10.1016/0378-8733(95)00256-1. [DOI] [Google Scholar]
  74. Valente TW. Network models and methods for studying the diffusion of innovations. In: Carrington PJ, Scott J, Wasserman S, editors. Models and Methods in Social Network Analysis. Cambridge, U.K.: Cambridge University Press; 2005. [Google Scholar]
  75. Venkatesh V, Davis FD. A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science. 2000;46(2):186–204. [Google Scholar]
  76. Wisdom JP, Chor KHB, Hoagwood KE, Horwitz SM. Innovation adoption: A review of theories and constructs. Administration and Policy in Mental Health and Mental Health Services Research. 2013 doi: 10.1007/s10488-013-0486-4. Advance online publication. [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Yetton P, Sharma R, Southon G. Successful IS innovation: The contingent contributions of innovation characteristics and implementation process. Journal of Information Technology. 1999;14(1):53–68. doi: 10.1080/026839699344746. [DOI] [Google Scholar]
  78. Zahra SA, George G. Absorptive capacity: A review, reconceptualization, and extension. Academy of Management Review. 2002;27(2):185–203. doi: 10.5465/amr.2002.6587995. [DOI] [Google Scholar]
