Abstract
Few questionnaires have been developed to screen for potentially poor implementers of school-based interventions. This study combines teacher characteristics, perceptions and teaching/training experiences to develop a short screening tool that can identify potential “low-performing” or “high-performing” teachers pre-implementation. Data were gathered from 208 teachers and 4411 students who participated in the national implementation of an evidence-based HIV intervention in The Bahamas. Sensitivity and specificity were evaluated for the detection of “low-performing” and “high-performing” teachers. The validity of the screening tool was assessed using receiver operating characteristic (ROC) analysis. The School Pre-implementation Screening Tool consists of seven predictive factors: duration as a teacher, working site, attendance at training workshops, training in interactive teaching, perceived importance of the intervention, comfort in teaching the curriculum, and program priority. The sensitivity and specificity were 74% and 57% for identifying “low-performing” teachers and 81% and 65% for identifying “high-performing” teachers. The screening tool demonstrated acceptable to good validity [the area under the ROC curve (AROC) was 0.68 for “low-performing” teachers and 0.78 for “high-performing” teachers]. Our brief screening tool can facilitate teacher training and the recruitment of engaged teachers in the implementation of school-based interventions.
Keywords: Implementation, evidence-based intervention, sensitivity, specificity, screening tool, The Bahamas
Over the past decade, the research field of implementation science has been transformed from one with minimal guidance through explanatory structures (Eccles, Grimshaw, Walker, Johnston, & Pitts, 2005) to one with an abundance of theories, models and frameworks (Tabak, Khoong, Chambers, & Brownson, 2012). Despite this increase in the number of theories relevant to implementation science, many implementation studies are still being conducted without a guiding theoretic model (Davies, Walker, & Grimshaw, 2010). To allow the field to benefit from these explanatory/guiding models, investigators have identified clusters of theories from which researchers can make selections based on research intention (Nilsen, 2015).
A related issue has been the variability in identification of the underlying theoretic constructs on which data collection tools and measures used in implementation research have been based. Constructs purported to impact the implementation process and/or outcomes are frequently only loosely identified or defined. Moreover, the psychometric properties of such instruments frequently have not been assessed and/or are weak (Chaudoir, Dugan, & Barr, 2013; Squires et al., 2011). A review of 62 measures used in implementation research revealed that 30 (48%) of the measures did not report criterion validity, defined by the authors as one or more of five implementation outcomes (adoption, fidelity, cost, penetration or sustainability). Among the 32 measures reporting criterion validity, “adoption” was the most commonly assessed (90%) while only 5 (16%) reported fidelity. None of the remaining outcomes were reported as having been assessed (Chaudoir et al., 2013).
Implementation science offers the opportunity for enormous health gains, including those to be derived from the prevention and treatment of HIV/AIDS (Chang et al., 2013). School systems across the globe have the potential for enormous reach to generations of children and youth with strong, evidence-based prevention programs (Forman et al., 2013). Recently we described characteristics of teachers which are associated with subsequent variation in implementation of an evidence-based HIV prevention intervention among grade-6 youth and the association of these differences in implementation with student outcomes (Wang et al., 2015). The pre-implementation questionnaire contains 21 questions assessing teachers’ individual characteristics, training and teaching experiences, and perceptions of the importance of the prevention program. The internal consistency of the pre-implementation perception items was 0.75. While useful as a research tool, the questionnaire assessing teacher characteristics was lengthy. We speculated that a shorter instrument with good predictive properties would be more useful to school systems seeking to identify beforehand low-performing teachers (who might require more training and support) and high-performing teachers (who might be enlisted in efforts to increase the performance of other teachers). Accordingly, in this brief report, we describe the development and evaluation of such a teacher performance-risk “screening tool” with strong psychometric properties.
Methods
Background
Focus on Youth in the Caribbean (FOYC) is an evidence-based HIV-prevention intervention that was shown, through a randomized, controlled trial conducted among 1360 grade-6 Bahamian youth followed over 36 months, to reduce HIV risk factors and behaviors and to increase protective factors (Chen et al., 2010). In response, the Bahamian Ministry of Education implemented FOYC among all grade-6 students attending government schools throughout The Bahamas, offering the opportunity to examine patterns of implementation and relate them to student outcomes.
The implementation evaluation is organized around three frameworks: a combined version of Chaudoir’s “Multi-level framework predicting implementation outcomes” (Chaudoir et al., 2013) and Proctor’s “Types of outcomes in implementation research” (Proctor et al., 2011) models, as illustrated in Supplemental Figure A, and Durlak’s “Ecological framework for understanding effective implementation,” from which we identified a series of community factors, provider characteristics, innovation characteristics, organizational capacity within the school system, and training-support factors that have been shown in a range of studies to influence implementation fidelity and effectiveness (Durlak & DuPre, 2008). (See Table 1 for a listing of those factors and characteristics that were included in the screening measure.)
Table 1.
Association between teachers’ pre-implementation characteristics and perceptions and degree of implementation
| Variables | Overall | Low implementation (0–8 core activities) | Moderate implementation (9–22 core activities) | High implementation (23–30 core activities) | χ² | p |
|---|---|---|---|---|---|---|
| Number of teachers (n)† | (170) | (35) | (92) | (43) | | |
| Demographic characteristics | | | | | | |
| Education level | | | | | | |
| Associate degree/teaching certificate | 12.7% | 14.7% | 13.3% | 9.5% | 3.19 | 0.5263 |
| Bachelor degree | 74.1% | 73.5% | 70.0% | 83.3% | | |
| Master degree | 13.2% | 11.8% | 16.7% | 7.1% | | |
| Total years as a teacher or guidance counselor | | | | | | |
| 1–5 years | 16.5% | 20.0% | 16.3% | 13.9% | 7.45 | 0.1137 |
| 6–10 years | 26.5% | 22.9% | 20.7% | 41.9% | | |
| >10 years | 57.1% | 57.1% | 63.0% | 44.2% | | |
| Islands | | | | | | |
| Capital island | 65.9% | 45.7% | 68.5% | 76.7% | 8.87 | 0.0119 |
| Other family islands | 34.1% | 54.3% | 31.5% | 23.3% | | |
| Training and teaching experiences | | | | | | |
| Attendance at training workshop | | | | | | |
| Did not attend training workshop | 47.7% | 51.3% | 51.1% | 37.2% | 3.76 | 0.0525 |
| Attended part of a workshop | 19.4% | 25.6% | 18.2% | 16.3% | | |
| Fully attended a training workshop | 32.9% | 23.1% | 30.7% | 46.5% | | |
| Training in interactive teaching | | | | | | |
| A little/none | 41.8% | 48.6% | 47.8% | 23.3% | 8.11 | 0.0173 |
| A lot/some | 58.2% | 51.4% | 52.2% | 76.7% | | |
| Prior experience of teaching FOYC | | | | | | |
| No | 86.5% | 91.4% | 88.0% | 79.1% | 2.94 | 0.2296 |
| Yes | 13.5% | 8.6% | 12.0% | 20.9% | | |
| Prior experience of teaching other HIV prevention programs | | | | | | |
| No | 77.1% | 65.7% | 78.3% | 83.7% | 3.70 | 0.1570 |
| Yes | 22.9% | 34.3% | 21.7% | 16.3% | | |
| Perceptions | | | | | | |
| Importance of Focus on Youth for Grade 6 youth in your school | | | | | | |
| Somewhat important/not at all | 13.5% | 22.9% | 13.0% | 7.0% | 4.05 | 0.0442 |
| Very important | 86.5% | 77.1% | 87.0% | 93.0% | | |
| Comfort level in teaching the FOYC lessons | | | | | | |
| Somewhat/not at all | 44.4% | 55.9% | 43.7% | 36.6% | 2.72 | 0.0988 |
| Very comfortable | 55.6% | 44.1% | 56.3% | 63.4% | | |
| Having other priorities (than teaching FOYC) | | | | | | |
| No | 62.4% | 62.9% | 48.9% | 90.7% | 21.80 | 0.0001 |
| Yes | 37.6% | 37.1% | 51.1% | 9.3% | | |
| FOYC curriculum is a Bahamian curriculum | | | | | | |
| Somewhat/not at all | 44.2% | 53.1% | 39.8% | 46.3% | 0.22 | 0.6405 |
| Very much | 55.8% | 46.9% | 60.2% | 53.7% | | |
| Compared to the time spent teaching reading skills in grade 6, the time spent teaching FOYC was: | | | | | | |
| Less important | 17.1% | 20.7% | 13.6% | 21.4% | 0.06 | 0.8058 |
| About the same/more important | 82.9% | 79.3% | 86.4% | 78.6% | | |
In order to develop a tool that would be useful to school systems (e.g., one posing a minimal response burden), we sought to build upon our prior analyses to determine whether a screening tool requiring fewer questions could, prior to teaching of the curriculum, reasonably identify teachers at high risk for poor implementation as well as teachers likely to be high implementers.
Measures
Factors associated with implementation
The pre-implementation questionnaire used in the prior analyses (Stanton et al., 2015; Wang et al., 2015) contains 21 questions assessing teachers’ level of formal education; years as a teacher/guidance counselor; islands where the teachers worked; attendance at the FOYC training workshop; training in interactive teaching; prior experience of teaching FOYC or other HIV prevention programs; perceptions of the importance of prevention programs, HIV prevention and the FOYC intervention; comfort level in teaching the FOYC intervention; sense of “ownership” of the curriculum (e.g., a belief that the intervention addresses a local issue and reflects Bahamian values and input); whether the teacher had competing priorities other than teaching FOYC; and the relative importance of the time spent teaching FOYC compared to the time spent teaching reading skills in grade 6 (see Supplemental Table A). Responses to the perception and comfort items were based on a three-point Likert scale: perception of program importance: 1=very important, 2=somewhat important, 3=not at all important; comfort level: 1=very comfortable, 2=somewhat comfortable, 3=not at all comfortable. These questions were based on the theoretic constructs described by Durlak as important for implementation, described above (Durlak & DuPre, 2008), and on a literature search of variables found to be associated with implementation (Beets et al., 2008; Dusenbury et al., 2003). The Cronbach’s alpha for perceptions of program importance (five items) was 0.75, and for comfort level in teaching FOYC, CImPACT and role plays (three items) it was 0.87. One hundred seventy-six teachers submitted their pre-implementation questionnaires.
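For reference, the internal-consistency coefficients reported above follow the standard Cronbach’s alpha formulation for a scale of $k$ items:

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_{Y_i}^{2}}{\sigma_{X}^{2}}\right),$$

where $\sigma_{Y_i}^{2}$ is the variance of item $i$ and $\sigma_{X}^{2}$ is the variance of the total scale score.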
Implementation dose
Teachers were asked to complete a Teacher Implementation Checklist (available upon request from the authors) specific to each of the eight sessions of FOYC after they had taught the session. The checklist includes all 46 activities in the FOYC curriculum, 30 of which were identified by the developers as “core activities,” that is, activities believed to be critical to the effectiveness of the intervention (McKleroy et al., 2006). The teachers indicated which activities they had and had not taught in each session. Two hundred eight teachers submitted Teacher Implementation Checklists. Implementation fidelity (dose) was defined as the number of core activities (from among the total of 30) actually taught. In the absence of an evidence base to direct cut-offs between “probable low-implementers” and “high-implementers,” we stratified the 170 teachers with complete measures into the lowest quartile (those completing the fewest of the 30 core activities), the highest quartile (those completing the highest number of core activities), and the middle 50% of teachers, who completed an “average” number of core activities. Accordingly, in bivariate analysis, teachers’ implementation was categorized as “low” (taught 0–8 core activities, equivalent to fewer than two sessions), “moderate” (9–22 core activities), and “high” (23–30 core activities, equivalent to 7–8 sessions).
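As a minimal illustration (not the original SAS code; the column name `core_activities_taught` is hypothetical), the categorization of implementation dose described above can be expressed as:

```python
import pandas as pd

def classify_dose(core_activities_taught: int) -> str:
    """Map the number of core activities taught (0-30) to the three
    implementation strata used in the bivariate analysis."""
    if core_activities_taught <= 8:
        return "low"        # roughly the lowest quartile (< 2 sessions)
    if core_activities_taught <= 22:
        return "moderate"   # middle 50%
    return "high"           # highest quartile (7-8 sessions)

# Example with hypothetical checklist totals for three teachers.
teachers = pd.DataFrame({"core_activities_taught": [5, 14, 27]})
teachers["implementation"] = teachers["core_activities_taught"].apply(classify_dose)
print(teachers)
```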
Student outcomes
An anonymous curricular assessment instrument was administered to students at the beginning of grade-6 before receipt of FOYC and again at the end of the school-year. At baseline, 4411 students completed the curriculum assessment; 4168 students completed the follow-up assessment. The instrument includes a scale of 15 true/false statements to assess level of HIV/AIDS knowledge (“knowledge”) (Cronbach’s α=0.85); a six-item adaptation of the Condom-use Skills Checklist (Cronbach’s α=0.83) (Stanton et al., 2009) to assess reproductive health skills (“functionality”); a three-item self-efficacy scale regarding pregnancy/STI prevention methods (“self-efficacy”) (Cronbach’s α=0.81); and, one question assessing the youth’s likelihood of using a condom if he/she were to engage in sexual intercourse within the next six months (“intention to use protection”) [five-point Likert scale ranging from 1 (very unlikely) through 5 (very likely)].
Analysis
Our analysis included three steps. First, bivariate analyses (Pearson’s χ² and Cochran-Mantel-Haenszel tests) were conducted to examine the association of teachers’ individual characteristics, teaching and training experience, and pre-implementation perceptions with their level of implementation (i.e., “low”, “moderate” and “high” implementation).

Second, multinomial logistic regression analysis was performed to identify questionnaire items discriminating between “low-implementers” (i.e., “at-risk” teachers) or “high-implementers” (i.e., “high-performing” teachers) and other teachers (Ramayah et al., 2010). The model included factors that were associated with teachers’ degree of implementation in the bivariate analysis (p<0.10). Items found to be statistically significant in the logistic regression analyses, plus two variables recognized in the literature as highly relevant to implementation science (i.e., comfort level in teaching the intervention curriculum and training in interactive teaching) (Han & Weiss, 2005; Stigler, Neusel, & Perry, 2011), were retained in the final predictive model (“screening tool”). A composite risk index was created by summing the number of predictive factors present, separately for “at-risk” teachers (range 0–6) and “high-performing” teachers (range 0–6). Five of the six items for identifying “at-risk” and “high-performing” teachers are identical; the remaining item differs (“comfort level in teaching FOYC lessons” for the “at-risk” group and “other competing priorities” for the “high-performing” group). Teachers received one point for each of the six items for which they were “positive,” yielding a total score of 0–6. Receiver operating characteristic (ROC) analyses were used to investigate the predictive accuracy of the screening tool. The area under the ROC curve (AROC), sensitivity, specificity, and positive and negative predictive values for identifying an “at-risk” teacher or a “high-performing” teacher were calculated for the final questionnaire. Although the cut-offs are somewhat arbitrary, AROC values of 0.60 to 0.70, 0.70 to 0.90, and above 0.90 are generally associated with an acceptable, good, and excellent discriminant test, respectively (Swets, 1988).

Finally, the association of teachers’ degree of implementation (“fidelity”) with student (“client”) outcomes [HIV/AIDS knowledge, preventive reproductive health skills (“functionality”), self-efficacy and intention to use protection (symptomatology)] was examined using mixed-effects modeling, adjusting for age, gender, baseline differences, and the clustering effects of classroom and/or school. The anonymous student questionnaires were not linked at the level of the individual student; however, they were linked to the teacher (classroom) and school. School and classroom were included as random-effects variables in the mixed models. All statistical analyses were performed using the SAS 9.4 statistical software package (SAS Institute Inc., Cary, NC, USA).
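A minimal sketch of the composite-score and ROC steps is shown below. It is illustrative only (the published analyses were conducted in SAS 9.4); the indicator column names and the synthetic data are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic teacher-level data: six binary "at-risk" indicators plus an
# outcome flag for membership in the lowest implementation quartile.
rng = np.random.default_rng(0)
n = 170
risk_items = ["long_duration", "family_island", "no_full_training",
              "little_interactive_training", "low_perceived_importance",
              "low_comfort"]
teachers = pd.DataFrame({item: rng.integers(0, 2, n) for item in risk_items})
teachers["at_risk"] = rng.integers(0, 2, n)  # placeholder outcome

# Composite risk index: one point per "positive" item (range 0-6).
teachers["risk_score"] = teachers[risk_items].sum(axis=1)

# Discrimination of the composite score: AROC, plus the sensitivity/
# specificity trade-off at each possible score threshold.
auc = roc_auc_score(teachers["at_risk"], teachers["risk_score"])
fpr, tpr, thresholds = roc_curve(teachers["at_risk"], teachers["risk_score"])
print(f"AROC = {auc:.2f}")
for thr, sens, spec in zip(thresholds, tpr, 1 - fpr):
    print(f"score >= {thr}: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

Examining the ROC curve threshold by threshold in this way is how cut-offs balancing sensitivity and specificity (such as those reported in Table 3) can be compared.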
In our study, sensitivity was calculated based on “true positive rates”; that is, how well the screening tool, when administered during the pre-implementation phase, identified teachers who were at increased likelihood of (1) performing poorly or (2) performing very well. Sensitivity was defined as the proportion of “at-risk” teachers who responded “no” to the measures (e.g., did not attend or attended only part of a training workshop) or the proportion of “high-performing” teachers who responded “yes” to the measures (e.g., perceiving that “the FOYC intervention was very important for grade six students”). Specificity was defined as the proportion of “not at-risk” teachers (including the moderate and high implementation groups) who responded “yes” to the measures (e.g., fully attended a training workshop) or the proportion of “not high-performing” teachers (including the low and moderate implementation groups) who responded “no” to the measures (e.g., perceiving that “the FOYC intervention was not important for grade six students”). Although there is no consensus in the literature, following De Luca Canto et al. (2015) we used the following cut-offs for sensitivity and specificity: >80% excellent, 70–80% good, 60–69% fair and <60% poor. Positive predictive value (PPV) was defined as the proportion of teachers who responded “no” to the measures and truly were “at-risk” teachers, or who responded “yes” to the measures and truly were “high-performing” teachers. Negative predictive value (NPV) was defined as the proportion of teachers who responded “yes” to the measures and truly were “not at-risk” teachers, or who responded “no” to the measures and truly were “not high-performing” teachers.
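In standard screening notation, with TP, FP, FN and TN denoting true positives, false positives, false negatives and true negatives, these quantities are:

$$\text{Sensitivity} = \frac{TP}{TP+FN}, \qquad \text{Specificity} = \frac{TN}{TN+FP},$$

$$PPV = \frac{TP}{TP+FP}, \qquad NPV = \frac{TN}{TN+FN}.$$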
Results
Factors associated with teachers’ degree of implementation
As shown in Table 1, attendance at the FOYC training workshop, training in interactive teaching, perception of the importance of the FOYC intervention for grade six youth, program priority and the islands where the teachers worked were significantly associated with teachers’ implementation. Comfort level in teaching the FOYC lessons was marginally associated with teachers’ implementation (p<0.10), while prior experience of teaching FOYC or other HIV prevention programs, teachers’ level of education and program ownership were not associated with teachers’ implementation.
The results of the multinomial logistic regression analyses indicated that duration as a teacher, the islands where the teachers worked, competing priorities and attendance at a training workshop were associated with teachers’ degree of implementation (Wald test: χ²=39.59, p=0.0003; goodness-of-fit test: χ²=1.07, p=0.3029). Compared to “moderate-performing” teachers, “high-performing” teachers were three times less likely to have worked more than 10 years as a teacher or guidance counselor (OR=0.30, 95% CI: 0.12~0.77; p=0.0121), three times less likely to have worked in the family islands (OR=0.35, 95% CI: 0.12~1.02; p=0.0551), and ten times less likely to report that they had competing priorities other than teaching FOYC (OR=0.10, 95% CI: 0.03~0.32; p=0.0001). “High-performing” teachers were also more likely to have fully attended a training workshop (OR=2.56, 95% CI: 0.97~6.78; p=0.0587). Compared to “moderate-performing” teachers, “at-risk” teachers were three times more likely to have worked in the family islands (OR=2.83, 95% CI: 1.17~6.86; p=0.0210) (Supplemental Table B).
Sensitivity and specificity of individual predictor or combinations of key predictors of “low-performing” and “high-performing” teachers
As shown in Table 2, no single item for “low-performing” teachers demonstrated both high sensitivity and specificity. No or incomplete attendance at FOYC training workshop had high sensitivity but low specificity while perceiving that “the FOYC intervention was not important for grade six students” and working in the “family islands” had low sensitivity but high specificity. Similarly, no single item for “high-performing” teachers demonstrated both high sensitivity and specificity. Perceiving the FOYC intervention to be important and reporting no other competing priorities in daily work had very high sensitivity but low specificity, while attending the full FOYC training workshop had low sensitivity but high specificity (72%).
Table 2.
Sensitivity and specificity of individual predictor of teacher implementation in predicting “at-risk” or “high performing” teachers
| Variables | Sensitivity (95% CI) | Specificity (95% CI) | PPV | NPV |
|---|---|---|---|---|
| Predictors for “at-risk” teachers | | | | |
| 1) Long duration of experience as a teacher (>10 years) | 57.1% (39.4%~73.7%) | 43.0% (34.5%~51.8%) | 20.6% | 79.5% |
| 2) Working in the “family islands” (other than the capital island) | 54.3% (36.7%~71.2%) | 71.1% (62.7%~78.6%) | 32.8% | 85.7% |
| 3) No or incomplete attendance at FOYC training workshops | 74.3% (56.7%~87.5%) | 34.8% (26.8%~43.5%) | 22.8% | 83.9% |
| 4) None/little training in interactive teaching | 48.6% (31.4%~66.0%) | 60.0% (51.2%~68.3%) | 23.9% | 81.8% |
| 5) Perceiving that the FOYC intervention was not important for grade six students | 22.9% (10.4%~40.1%) | 88.9% (82.3%~93.7%) | 34.8% | 81.6% |
| 6) Low levels of comfort in teaching FOYC lessons | 55.9% (37.9%~72.8%) | 58.6% (49.6%~67.2%) | 26.4% | 83.3% |
| Predictors for “high-performing” teachers | | | | |
| 1) Short duration of experience as a teacher (≤10 years) | 55.8% (39.9%~70.9%) | 61.4% (52.4%~69.9%) | 32.9% | 80.4% |
| 2) Working in the capital island | 76.7% (61.4%~88.2%) | 37.8% (29.4%~46.8%) | 29.5% | 82.8% |
| 3) Complete attendance at FOYC training workshops | 46.5% (31.2%~62.4%) | 71.7% (63.0%~79.3%) | 35.7% | 79.8% |
| 4) A lot of training in interactive teaching | 76.7% (61.4%~88.2%) | 48.0% (39.1%~57.1%) | 33.3% | 85.9% |
| 5) Perceiving that the FOYC intervention was very important for grade six students | 93.0% (80.9%~98.5%) | 15.8% (9.9%~23.3%) | 27.2% | 87.0% |
| 6) No other competing priorities than teaching FOYC | 90.7% (77.9%~97.4%) | 47.2 % (38.3%~56.3%) | 36.8% | 93.8% |
Note: PPV: positive predictive value; NPV: negative predictive value.
Table 3 summarizes the performance of the questionnaire score in identifying “low-performing” and “high-performing” teachers across multiple score levels. The absence or presence of the six variables was tested to determine the best cut-offs for sensitivity and specificity. The presence of at least one variable (1 point) or at least two variables (2 points) yielded 97% and 89% sensitivity, respectively, for “low-performing” teachers; higher scores produced higher specificity. Most “low-performing” teachers responded “yes” to multiple items; only 22.9% (eight teachers) had a positive response to just one or two questionnaire items. At a cut-off score of 3 points, this six-item questionnaire had a sensitivity of 74.3% and a specificity of 57%. None of the teachers without any of these six variables (0 points) were “high-performing” teachers, and most “high-performing” teachers responded “yes” to multiple items. The presence of at least four variables (4 points) had a sensitivity of 81% and a specificity of 65%.
Table 3.
Sensitivity and specificity of combinations of key predictors of teacher implementation in predicting “at-risk” or “high performing” teachers
| Variables | Sensitivity (95% CI) | Specificity (95% CI) | PPV | NPV |
|---|---|---|---|---|
| Combinations of all six predictors (score) for “at-risk” teachers | | | | |
| 1 | 97.1% | 3.0% | 20.6% | 80.0% |
| 2 | 88.6% | 22.2% | 22.8% | 88.2% |
| 3 | 74.3% | 57.0% | 31.0% | 89.5% |
| 4 | 40.0% | 83.0% | 37.8% | 84.2% |
| 5–6 | 14.3% | 94.1% | 38.5% | 80.9% |
| Combinations of all six predictors (score) for “high-performing” teachers | | | | |
| 1 | 100% | 0.8% | 21.7% | 100% |
| 2 | 97.7% | 6.3% | 26.1% | 88.9% |
| 3 | 97.7% | 25.2% | 30.7% | 97.0% |
| 4 | 81.4% | 64.6% | 43.8% | 91.1% |
| 5 | 51.2% | 85.0% | 52.5% | 83.1% |
| 6 | 14.0% | 100% | 100% | 77.4% |
Note: PPV: positive predictive value; NPV: negative predictive value.
The negative predictive values, 90% for a cut-off score of 3 points in identifying “low-performing” teachers and 91% for a cut-off score of 4 points in identifying “high-performing” teachers, are both high, while the corresponding positive predictive values (31% and 44%, respectively) are low.
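For illustration, a school system applying these cut-offs to a completed screening questionnaire might flag teachers as sketched below; this is a sketch only, following the composite index described in the Analysis section, and the function and parameter names are hypothetical.

```python
def screen_teacher(at_risk_score: int, high_performing_score: int) -> dict:
    """Apply the cut-offs reported in Table 3: at least 3 of 6 'at-risk' items
    flags a teacher for additional training/support; at least 4 of 6
    'high-performing' items flags a potential high implementer (to be used
    as only one criterion among several)."""
    return {
        "flag_at_risk": at_risk_score >= 3,
        "flag_high_performing": high_performing_score >= 4,
    }

# Example: a teacher positive on 4 at-risk items and 2 high-performing items.
print(screen_teacher(at_risk_score=4, high_performing_score=2))
```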
ROC curve analysis revealed that the c statistic (AROC) was 0.68 for “low-performing” teachers and 0.78 for “high-performing” teachers, indicating that our screening tool provided acceptable discrimination of “low-performing” teachers and good discrimination of “high-performing” teachers (Figures 1 and 2).
Figure 1.

Receiver operating characteristic curve for predictors of “at-risk” teachers (area under the curve: 0.68)
Figure 2.

Receiver operating characteristic curve for predictors of “high-performing” teachers (area under the curve: 0.78)
Association between teachers’ degree of implementation (“low”, “moderate” and “high”) and student outcomes
The results of the mixed-effects models indicate that teachers’ degree of implementation (“fidelity”) was significantly related to improvement in three of the four student outcome measures. At follow-up, compared to students whose teachers were “low” implementers, students whose teachers were “high” implementers demonstrated higher levels of HIV/AIDS knowledge (β=1.33, SE=0.27, t=5.03, p<0.001), reproductive health skills (β=0.42, SE=0.10, t=4.22, p=0.001), and intention to use protection if they were to engage in sex (β=0.30, SE=0.14, t=2.20, p<0.05); students whose teachers were “moderate” implementers demonstrated higher levels of HIV/AIDS knowledge (β=1.02, SE=0.23, t=4.41, p<0.001) and reproductive health skills (β=0.20, SE=0.09, t=2.27, p<0.05). Teachers’ degree of implementation was not associated with students’ self-efficacy.
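A minimal sketch of the kind of mixed-effects model described in the Analysis section is shown below; the published models were fit in SAS 9.4, and the variable names and synthetic data here are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic student-level data: follow-up knowledge score regressed on the
# teacher's implementation level, adjusting for age, gender and baseline
# score, with a random intercept for school. (The published models also
# included classroom as a random effect.)
rng = np.random.default_rng(1)
n = 600
students = pd.DataFrame({
    "knowledge_fu": rng.normal(9, 2, n),
    "knowledge_base": rng.normal(7, 2, n),
    "age": rng.integers(10, 13, n),
    "gender": rng.integers(0, 2, n),
    "implementation": rng.choice(["low", "moderate", "high"], n),
    "school": rng.integers(0, 30, n),
})

model = smf.mixedlm(
    "knowledge_fu ~ C(implementation, Treatment(reference='low')) + age + gender + knowledge_base",
    data=students,
    groups=students["school"],  # random intercept for school
)
print(model.fit().summary())
```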
Discussion
Implementation of evidence-based interventions faces multiple challenges in real-world settings. Many effective interventions are delivered with poor quality (Lundgren, Amodeo, Cohen, Chassler, & Horowitz, 2011), resulting in intervention decay or a lack of any intervention effect (Feldman, Silapaswan, Schaefer, & Schermele, 2014). Studies report that providers often selectively implement program components, drop core elements and/or significantly modify the interventions (Galbraith et al., 2009), which impairs program outcomes. To promote high-quality delivery and achieve program goals, it is important to identify potentially poor implementers at the pre-implementation stage and provide them with intensive training and the necessary technical assistance. The present study describes the development and preliminary psychometric examination of a simple, effective “screening tool” for identifying “low-performing” or “high-performing” teachers in the delivery of school-based HIV prevention interventions.
Our short questionnaire (consisting of seven questions requiring five minutes to complete), which combines teachers’ characteristics, teaching/training experience and pre-implementation perceptions, has good diagnostic accuracy, demonstrated by sensitivities of 74% and 81%, specificities of 57% and 65%, and high negative predictive values (approximately 90%) for the screening of “low-performing” and “high-performing” teachers, respectively. Although the sensitivity of 74% for “low-performing” teachers is not as high as might be desired, and the specificity of 57% is on the cusp of fair and poor according to De Luca Canto et al. (2015), these values are comparable with those of previous questionnaires such as the Diabetes Medication Risk Screening tool, which showed a sensitivity of 76.5% and specificity of 59.5% (Claydon-Platt et al., 2014). Since the purpose of identifying potential low implementers is to provide them with intensive training, sensitivity is more important than specificity in assessing the predictive value of the screening tool for identifying “at-risk” teachers (as we want to minimize the number of false-negative screens for “at-risk” teachers) (Polito et al., 2015). However, for “high-performing” teachers, as we do not want to mistakenly identify teachers as “high-performing” and ask them to provide support for other teachers, both sensitivity and specificity are important (Castellanos-Ryan et al., 2013). Therefore, this measure should be used with caution (e.g., as only one criterion) in identifying teachers to serve as tutors or aides for others.
Our study reveals that the degree of implementation was significantly related to student outcomes. Students whose teachers were “high” implementers demonstrated the greatest gains (improvements in knowledge, skills and intention), and students whose teachers were “moderate” implementers showed improvements in knowledge and skills, compared to students whose teachers were “low” implementers. These findings are consistent with previous research suggesting that implementation dose and implementation fidelity influence program outcomes (Durlak & DuPre, 2008).
There are several potential limitations to this study. First, all the measures used in this analysis were based on teachers’ and students’ self-reports; it is possible that teachers misreported their level of implementation of the intervention curriculum. Second, 32 teachers did not submit their pre-implementation questionnaires, which might have introduced bias, as these teachers taught fewer core activities than those who submitted their questionnaires (11.7 vs. 16.2, t=3.04, p=0.0027). Third, in developing the composite score for all six predictors, we assigned the same weight to each predictor, although arguably some predictors may be less important than others. Despite these limitations, this study addresses an important gap in the implementation literature. Our brief screening tool can facilitate teacher training and the recruitment of engaged teachers in the implementation of school-based prevention programs in other countries.
Supplementary Material
Table 4.
Brief School Pre-implementation Screening Tool for “at-risk” or “high-performing” teachers

| # | Question | Response options |
|---|---|---|
| 1 | How long did you work as a teacher or guidance counselor? | a) 1–5 years; b) 6–10 years; c) >10 years |
| 2 | On which island were you working? | a) Capital island; b) Other family islands |
| 3 | Did you attend a training workshop? | a) Did not attend a training workshop; b) Attended part of a workshop; c) Fully attended a training workshop |
| 4 | How much training have you received in interactive teaching (such as role plays, discussions about sensitive topics)? | a) A lot (at least 2 teacher workshops and/or a full course on the topic); b) Some (1 workshop or several lectures but not a full course); c) A little (part of a workshop or 1 lecture); d) None |
| 5 | How important do you think Focus on Youth in the Caribbean (FOYC) is for the grade 6 students in your school? | a) Very important; b) Somewhat important; c) Not at all |
| 6 | How comfortable do you think you will feel in teaching the materials in FOYC? | a) Very comfortable; b) Somewhat comfortable; c) Not at all |
| 7 | Do you have other competing priorities than teaching FOYC? | a) Yes; b) No |
Acknowledgments
The research on which this article is based was supported by the National Institute of Child Health and Human Development (R01HD064350). We thank program staff at the Bahamas Ministries of Health and Education for their participation in field data collection.
References
- Beets MW, Flay BR, Vuchinich S, Acock AC, Li KK, Allred C. School climate and teachers’ beliefs and attitudes associated with implementation of the Positive Action Program: A diffusion of innovations model. Prevention Science. 2008;9:264–275. doi: 10.1007/s11121-008-0100-2.
- Castellanos-Ryan N, O’Leary-Barrett M, Sully L, Conrod P. Sensitivity and specificity of a brief personality screening instrument in predicting future substance use, emotional, and behavioral problems: 18-month predictive validity of the Substance Use Risk Profile Scale. Alcoholism, Clinical and Experimental Research. 2013;37(Suppl 1):E281–290. doi: 10.1111/j.1530-0277.2012.01931.x.
- Chang LW, Serwadda D, Quinn TC, Wawer MJ, Gray RH, Reynolds SJ. Combination implementation for HIV prevention: moving from clinical trial evidence to population-level effects. The Lancet Infectious Diseases. 2013;13:65–76. doi: 10.1016/S1473-3099(12)70273-6.
- Chaudoir SR, Dugan AG, Barr CH. Measuring factors affecting implementation of health innovations: a systematic review of structural, organizational, provider, patient, and innovation level measures. Implementation Science. 2013;8:22. doi: 10.1186/1748-5908-8-22.
- Chen X, Stanton B, Gomez P, Lunn S, Deveaux L, Brathwaite N, Harris C. Effects on condom use of an HIV prevention programme 36 months postintervention: a cluster randomized controlled trial among Bahamian youth. International Journal of STD & AIDS. 2010;21:622–630. doi: 10.1258/ijsa.2010.010039.
- Claydon-Platt K, Manias E, Dunning T. Development and evaluation of a screening tool to identify people with diabetes at increased risk of medication problems relating to hypoglycaemia and medication non-adherence. Contemporary Nurse. 2014;48:10–25. doi: 10.1080/10376178.2014.11081922.
- Davies P, Walker AE, Grimshaw JM. A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implementation Science. 2010;5:14. doi: 10.1186/1748-5908-5-14.
- De Luca Canto G, Pachêco-Pereira C, Aydinoz S, Major PW, Flores-Mir C, Gozal D. Diagnostic capability of biological markers in assessment of obstructive sleep apnea: a systematic review and meta-analysis. Journal of Clinical Sleep Medicine. 2015;11:27–36. doi: 10.5664/jcsm.4358.
- Durlak JA, DuPre EP. Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology. 2008;41:327–350. doi: 10.1007/s10464-008-9165-0.
- Dusenbury L, Brannigan R, Falco M, Hansen WB. A review of research on fidelity of implementation: implications for drug abuse prevention in school settings. Health Education Research. 2003;18:237–256. doi: 10.1093/her/18.2.237.
- Eccles M, Grimshaw J, Walker A, Johnston M, Pitts N. Changing the behavior of healthcare professionals: the use of theory in promoting the uptake of research findings. Journal of Clinical Epidemiology. 2005;58:107–112. doi: 10.1016/j.jclinepi.2004.09.002.
- Feldman MB, Silapaswan A, Schaefer N, Schermele D. Is there life after DEBI? Examining health behavior maintenance in the diffusion of effective behavioral interventions initiative. American Journal of Community Psychology. 2014;53:286–313. doi: 10.1007/s10464-014-9629-3.
- Forman SG, Shapiro ES, Codding RS, Gonzales JE, Reddy LA, Rosenfield SA, Stoiber KC. Implementation science and school psychology. School Psychology Quarterly. 2013;28:77–100. doi: 10.1037/spq0000019.
- Galbraith JS, Stanton B, Boekeloo B, King W, Desmond S, Howard D, Carey JW. Exploring implementation and fidelity of evidence-based behavioral interventions for HIV prevention: lessons learned from the Focus on Kids diffusion case study. Health Education & Behavior. 2009;36:532–549. doi: 10.1177/1090198108315366.
- Lundgren L, Amodeo M, Cohen A, Chassler D, Horowitz A. Modifications of evidence-based practices in community-based addiction treatment organizations: a qualitative research study. Addictive Behaviors. 2011;36:630–635. doi: 10.1016/j.addbeh.2011.01.003.
- McKleroy VS, Galbraith JS, Cummings B, Jones P, Harshbarger C, Collins C, ADAPT Team. Adapting evidence-based behavioral interventions for new settings and target populations. AIDS Education and Prevention. 2006;18:59–73. doi: 10.1521/aeap.2006.18.supp.59.
- Nilsen P. Making sense of implementation theories, models and frameworks. Implementation Science. 2015;10:53. doi: 10.1186/s13012-015-0242-0.
- Polito CC, Isakov A, Yancey AH 2nd, Wilson DK, Anderson BA, Bloom I, Sevransky JE. Prehospital recognition of severe sepsis: development and validation of a novel EMS screening tool. American Journal of Emergency Medicine. 2015;33:1119–1125. doi: 10.1016/j.ajem.2015.04.024.
- Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health. 2011;38:65–76. doi: 10.1007/s10488-010-0319-7.
- Ramayah T, Ahmad NH, Halim HA, Zainal SRM, Lo MC. Discriminant analysis: An illustrated example. African Journal of Business Management. 2010;4:1654–1667.
- Squires JE, Estabrooks CA, O’Rourke HM, Gustavsson P, Newburn-Cook CV, Wallin L. A systematic review of the psychometric properties of self-report research utilization measures used in healthcare. Implementation Science. 2011;6:83. doi: 10.1186/1748-5908-6-83.
- Stanton B, Deveaux L, Lunn S, Yu S, Brathwaite N, Li X, Marshall S. Condom-use skills checklist: a proxy for assessing condom-use knowledge and skills when direct observation is not possible. Journal of Health, Population, and Nutrition. 2009;27:406–413. doi: 10.3329/jhpn.v27i3.3383.
- Stanton B, Wang B, Deveaux L, Lunn S, Rolle G, Mortimer A, Li X, Marshall S, Poitier M, Adderley R. Teachers’ patterns of implementation of an evidence-based intervention and their impact on student outcomes: results from a nationwide dissemination over 24-months follow-up. AIDS and Behavior. 2015. Published online. doi: 10.1007/s10461-015-1110-2.
- Stigler MH, Neusel E, Perry CL. School-based programs to prevent and reduce alcohol use among youth. Alcohol Research & Health. 2011;34:157–162.
- Swets JA. Measuring the accuracy of diagnostic systems. Science. 1988;240:1285–1293. doi: 10.1126/science.3287615.
- Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. American Journal of Preventive Medicine. 2012;43:337–350. doi: 10.1016/j.amepre.2012.05.024.
- Wang B, Stanton B, Deveaux L, Poitier M, Lunn S, Koci V, Rolle G. Factors influencing implementation dose and fidelity thereof and related student outcomes of an evidence-based national HIV prevention program. Implementation Science. 2015;10:44. doi: 10.1186/s13012-015-0236-y.