Implement Sci. 2015 Nov 14;10:160. doi: 10.1186/s13012-015-0348-4

Table 1. Theoretical constructs, measures, data sources, and data collection timing

| Construct | Measures | Source | Timing |
|---|---|---|---|
| Implementation support | Frequency, duration, and mode of PF contacts | PF contact logs | I |
| | Frequency, duration, and mode of academic detailing contact | Contact, webinar logs | I |
| | Attendance at regional collaborative meetings | Attendance logs | I |
| Practice capacity for QI | Change Process Capacity Questionnaire | KI survey | B, E, F |
| | Adaptive Reserve Questionnaire | Provider/staff survey | B, E, F |
| Organizational readiness | Organizational Readiness for Change Questionnaire (ORIC) | Provider/staff survey | B |
| Implementation policies and practices (IPPs) | Key Driver Implementation Scales | PF ratings | I |
| | Implementation barriers, facilitators, and IPPs | KI interview | B, E |
| Implementation climate | Implementation climate questionnaire | Provider/staff survey | E, F |
| Implementation effectiveness | ABCS measures/clinical measures | HIE | B, I, E, F |
| | Acceptability of implementation support | KI interview | E |
| Innovation effectiveness | Patient outcomes (communication, shared decision-making) | Patient survey | B, E |
| | Patient outcomes (healthcare utilization and mortality) | Claims data | B, E, F |
| | Practice outcomes (e.g., financial benefits) | KI interview | E, F |
| Inner context | Practice characteristics, patient population, EMR capabilities | KI survey, PF contact logs | B, E, F |
| Outer context | External policies and incentives, market conditions | KI survey, KI interview, PF contact logs | B, E, F |

Source: PF, practice facilitator; KI, key informant. Timing: B, baseline; I, intervention; E, end of intervention; F, 6 and 12 months post-intervention

Because readiness is conceived as an organization-level construct, we will test whether sufficient inter-rater reliability and inter-rater agreement exist to justify aggregating individual responses to the practice level [22–26]. If these tests do not justify aggregation, we will use a measure of intra-practice variability in readiness, rather than a practice-level mean, in our analysis [15, 16].
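
The protocol does not name the specific aggregation statistics it will use. As an illustrative sketch only, the fragment below computes two statistics commonly used for this purpose: ICC(1) as an index of inter-rater reliability and r_wg as an index of inter-rater agreement. The data, function names, and the 0.70 agreement threshold mentioned in the comments are assumptions for illustration, not values taken from the protocol.

```python
import numpy as np

def icc1(groups):
    """ICC(1) from a one-way random-effects ANOVA on individual scores grouped by practice."""
    n_groups = len(groups)
    n_total = sum(len(g) for g in groups)
    k = n_total / n_groups                               # average practice size
    grand_mean = np.mean(np.concatenate(groups))
    ms_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups) / (n_groups - 1)
    ms_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups) / (n_total - n_groups)
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def rwg(scores, n_options):
    """Within-group agreement (r_wg) for a single item against a uniform null distribution."""
    expected_var = (n_options ** 2 - 1) / 12.0           # variance of a discrete uniform null
    return 1 - np.var(scores, ddof=1) / expected_var

# Hypothetical readiness scores (1-5 scale) from respondents in two practices.
practices = [np.array([4.0, 4.0, 5.0, 4.0]), np.array([2.0, 5.0, 1.0, 4.0])]

# Aggregate to a practice-level mean only if agreement/reliability is adequate
# (e.g., r_wg >= 0.70 in every practice); otherwise analyze within-practice variability.
agreement = [rwg(p, n_options=5) for p in practices]
print("r_wg per practice:", [round(a, 2) for a in agreement])
print("ICC(1):", round(icc1(practices), 2))
```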
