Access to high-quality school services for students with autism is critical, as over 90% of children with autism are primarily served in public schools (Snyder et al., 2019; Brookman-Frazee et al., 2009). In California’s public schools, autism is now the third largest qualifying disability for special education services, with 132,359 students (16% of the total population of students with disabilities) receiving services (California Department of Education, n.d.). Federal legislation specifies that school practices must be supported by scientifically based evidence and professional wisdom (Every Student Succeeds Act, 2015; Individuals with Disabilities Education Act, 2004). Systematic literature reviews identify several evidence-based practices (EBPs) for students with autism (Steinbrenner et al., 2020). Unfortunately, these interventions have not historically been incorporated into classroom practice (e.g., Hess et al., 2008; Morrier et al., 2010; Stahmer & Ingersoll, 2004; Suhrheinrich, 2011). In recent studies, 50-97% of teachers self-reported using at least one EBP (Brock et al., 2020; Dynia et al., 2020). However, even when teachers attempt to use EBPs for autism, they often show low levels of fidelity, or adherence to the intervention procedures (Suhrheinrich et al., 2007; Suhrheinrich et al., 2013).
The field of implementation science has developed with a focus on identifying methods to promote the adoption and integration of EBPs into routine care (Eccles & Mittman, 2006). In contrast to intervention research, which focuses on how specific EBPs improve student outcomes, targets of implementation research include acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration (spread or access within the organization), and sustainability of the practice (Proctor et al., 2011). Individual provider-level factors have been identified as key influences on EBP implementation and are highlighted as inner-context factors within multiple implementation science frameworks (Aarons et al., 2011; Beidas et al., 2014). Within autism implementation research there is some evidence of specific factors linked to teacher EBP use, including teacher knowledge and perceived “social validity” (McNeill, 2019) and teacher ratings of training quality and “ease of use” of the practice (Suhrheinrich et al., 2020).
Scaling up the use of EBPs across educators, schools, districts, and regions presents an additional challenge. At a system level, education programs targeting implementation strategies or drivers at both the organizational and the individual provider level report greater success than those without implementation plans (Fixsen et al., 2007). For example, intentionally targeting competency drivers such as staff selection, effective training and coaching, and leadership support improves the success of implementation efforts and EBP fidelity. However, most state-wide systems have very limited capacity for monitoring these drivers and scaling up interventions in ways that lead to meaningful improvements in student outcomes (Fixsen et al., 2013). States have rarely scaled up EBPs successfully, indicating that additional tools and processes are needed to support effective implementation.
We have employed implementation frameworks to describe factors related to the initial development and continued growth of the California Autism Professional Training and Information Network (CAPTAIN; Suhrheinrich et al., 2020). Although multiple discrete implementation strategies (Powell et al., 2015) are incorporated within CAPTAIN activities, we have directly targeted interagency collaboration, use of evidence-based training and coaching practices, leader engagement, and data-driven continuous improvement cycles (Suhrheinrich et al., 2020). Here we extend that work to provide a preliminary evaluation of the CAPTAIN model as a potential implementation strategy to support state-wide scale up.
The CAPTAIN Model
In 2008, the California Inter-agency Autism Planning Group (IAPG) was created to align efforts and develop a common training curriculum for autism EBPs. The IAPG participated in a school-based technical assistance project through the National Professional Development Center on Autism Spectrum Disorder (NPDC-ASD; Odom et al., 2013). In California, fidelity to the target EBP increased by 63% on average (44%-85%) and exceeded 80% for four EBPs. In addition, all participating students (n = 18) made progress on annual goals based on Goal Attainment Scaling (Ruble et al., 2012), with 44% exceeding expected progress (Suhrheinrich et al., 2020). Overall program quality, measured by the Autism Program Environment Rating Scale (APERS; Odom et al., 2018), also increased. Based on these positive outcomes, the IAPG expanded to include additional service sectors and was renamed the California Autism Professional Training and Information Network (CAPTAIN, www.captain.ca.gov), with a focus on disseminating and implementing EBPs for individuals impacted by autism.
CAPTAIN is organized into 17 regional teams across the state that develop regional plans for disseminating information about autism and EBPs, promoting effective EBP implementation, and coordinating cross-agency regional collaboration. CAPTAIN currently has over 400 members (called Cadre) representing the special education, developmental disabilities, and family support service sectors as well as university programs. All CAPTAIN Cadre are required to provide training to increase awareness of autism and knowledge of EBPs, and CAPTAIN Cadre representing special education services have the additional expectation of providing EBP-specific training and implementation coaching to at least three providers or programs per year.
In California, special education services are funded through regional special education local plan areas (SELPAs). SELPAs provide special education compliance monitoring as well as training and technical assistance to the over 1,100 local education agencies within their respective catchment areas, serving students ages 3 to 22 years. Each of the 132 SELPAs was offered a designated number of CAPTAIN Cadre positions based on the number of students served who qualified for special education services under the autism category (one Cadre member per 500 identified students with autism). Selected Cadre are individuals within the special education system who possess a strong knowledge base about autism and have the capacity to train and coach others within their SELPA catchment area. During the 2018-2019 academic year, 92% of SELPAs participated in CAPTAIN, with a total of over 200 school-based Cadre actively participating in the required training and coaching activities.
Based on recent data collected for CAPTAIN monitoring and quality improvement purposes, we can estimate the impact of CAPTAIN Cadre in terms of reach (see Suhrheinrich et al., 2020, for a methodological description of the annual survey). The majority of the 223 school-based CAPTAIN Cadre reported that they met or exceeded expectations during 2018-19. Overall, Cadre reported completing over 1,500 trainings and over 350 hours of coaching. On average, they trained 88.21 (SD = 96.32) providers (including special educators, paraeducators, general educators, and other direct service providers) and coached 30.32 (SD = 58.54) providers. Based on these self-reported data, we estimate annual totals of 19,495 providers receiving training and 6,701 receiving EBP coaching (Brookman-Frazee et al., in press).
Preliminary data on CAPTAIN impact are encouraging and suggest the value of further exploring implementation outcomes associated with the CAPTAIN model. Specifically, we are interested in potential impacts of CAPTAIN at the provider level. The current study aims to 1) evaluate differences in attitudes toward EBPs and use of EBPs between direct service providers who had, or had not, been trained by CAPTAIN members, and 2) evaluate differences in EBP knowledge, EBP fidelity, and overall classroom quality between teachers who had, or had not, been trained by CAPTAIN members.
Community Involvement
This study was conducted with the California Autism Professional Training and Information Network (CAPTAIN) as a community partner. All participants were community-based service providers within school programs. Additionally, co-author PS is a certified behavior analyst and community service provider.
Methods
This work aimed to evaluate differences in implementation outcomes related to EBPs for autism by comparing providers who received training from CAPTAIN members and providers who had not received training from CAPTAIN. Survey data were collected from providers statewide, across two phases, with unique aims.
Participants
Phase 1 data drew from a statewide survey of administrators and providers serving autistic students in public schools (early intervention to post-secondary) throughout California. The subsample analyzed for this study included only direct service providers (n = 1,543). As seen in Table 1, the majority of participants were Special Education teachers (n = 838; 54%) and paraprofessionals (n = 252; 16%). The majority of the sample held a Master’s degree (n = 874; 57%), followed by a Bachelor’s degree (n = 461; 30%). Most participants reported extensive hands-on experience working with students with autism (n = 836; 54%). A portion of the sample had received EBP training from a CAPTAIN member (n = 326; CAPTAIN trained providers), while the majority had not received training from a CAPTAIN member or were unsure if they had (n = 1,217; non-CAPTAIN trained providers).
Table 1.
Demographics of Survey Participants
|  | Statewide Survey N (%) | Follow-Up Survey N (%) |
|---|---|---|
| Job Title | ||
| Special Education Teacher (serving on single school site) | 823 (53.3) | 220 (98.2) |
| Paraprofessional | 252 (16.3) | 0 (0) |
| SLP/SLPA | 152 (9.9) | 0 (0) |
| Psychologist | 150 (9.7) | 0 (0) |
| Itinerant Special Education Teacher (serving on multiple school sites) | 34 (2.2) | 2 (.9) |
| General Education Teacher | 33 (2.1) | 2 (.9) |
| OT/OTA | 32 (2.1) | 0 (0) |
| Mental Health Counselor/Social Worker | 18 (1.2) | 0 (0) |
| Special Education Teacher (serving on multiple school sites) | 15 (1.0) | 0 (0) |
| Specialist (e.g., Behavior Specialist, Autism Specialist) | 4 (.3) | 0 (0) |
| Physical Therapist | 3 (.2) | 0 (0) |
| Case Manager | 1 (.1) | 0 (0) |
| Teacher on Special Assignment (TOSA) | 2 (.1) | 0 (0) |
| Not Reported | 24 (1.6) | 0 (0) |
| Education | ||
| High School | 32 (2.1) | 0 (0) |
| Associate degree | 118 (7.6) | 0 (0) |
| Bachelor’s Degree | 461 (29.9) | 82 (36.6) |
| Master’s Degree | 874 (56.6) | 142 (63.4) |
| Doctorate | 34 (2.2) | 0 (0) |
| Not Reported | 24 (1.6) | 0 (0) |
| ASD Experience | ||
| Extensive | 836 (54.2) | 145 (64.7) |
| Moderate | 428 (27.7) | 55 (24.6) |
| Some Recent | 221 (14.3) | 20 (8.9) |
| Some Distant | 20 (1.3) | 4 (1.8) |
| Little to None | 14 (.9) | 0 (0) |
| Not Reported | 24 (1.6) | 0 (0) |
Phase 2 participants included only teachers. Teachers who agreed to be contacted for further research received a follow-up survey asking specifically about classroom characteristics/quality, EBP fidelity, and EBP knowledge. The Phase 2 sample included 224 teachers, including some who had received training from a CAPTAIN member (n = 55; CAPTAIN trained teachers) and some who had not or were unsure (n = 169; non-CAPTAIN trained teachers). The majority were female (n = 179, 80%), White (n = 162, 72%), Special Education teachers (n = 220; 98%), held a Master’s degree (n = 142; 63%), and reported extensive hands-on experience working with students with autism (n = 145; 65%). See Table 1.
Procedures
This study was approved by the University of California, Davis Institutional Review Board. During Phase 1, a survey was administered via Qualtrics and distributed through CAPTAIN social media (Facebook and Instagram), email, and recruitment postcards (n = 4,500). CAPTAIN Cadre were asked to distribute the email invitation and recruitment postcards to their school sites and professional organizations and to teachers and other service providers they worked with directly. Average survey completion time was 30 to 45 minutes. Upon completing the statewide survey, participants were entered into a drawing with a 1-in-20 chance of winning a $50 gift card. Recruitment began in May 2018 and ended in March 2020.
A Phase 2 follow-up online survey was sent to participants who selected “teacher” as their job title (e.g., special education teacher, general education teacher, itinerant teacher) on the statewide survey and indicated they were willing to be contacted for future research. This survey asked teachers to report on their classroom practices/quality, knowledge of EBPs, and fidelity of EBP implementation for the EBPs they primarily used in their classroom. Average survey completion time was 60 minutes. All participants who completed the follow-up survey received a $50 gift card. Recruitment began in November 2019 and ended in March 2020.
Measures
Phase 1 - Statewide Survey Measures
EBP Attitude.
Respondents completed the Evidence-Based Practice Attitude Scale (EBPAS; Aarons, 2004), a 15-item measure that assesses four general attitudes toward adoption of EBPs: appeal, requirements, openness, and divergence. The EBPAS assesses provider attitudes toward adoption of EBPs in public sector service settings and has been used in mental health, medical, school, and social service settings. Items are scored on a 5-point Likert scale (0-4) with answers ranging from “Not at all” to “Very great extent.” Domain scores were calculated by averaging the item scores in each domain. The EBPAS demonstrates good internal consistency (α = .76) as well as concurrent and predictive validity.
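To make the scoring concrete, the sketch below computes per-respondent domain scores as simple item averages. It assumes survey responses in a pandas DataFrame with one 0-4 column per item; the item-to-domain groupings shown are hypothetical placeholders rather than the published EBPAS scoring key (see Aarons, 2004).

```python
# Minimal sketch of EBPAS-style domain scoring, assuming one 0-4 Likert
# column per item. The item-to-domain mapping below is a hypothetical
# placeholder; use the published scoring key (Aarons, 2004) in practice.
import pandas as pd

DOMAINS = {  # hypothetical groupings for illustration only
    "openness": ["item_01", "item_02", "item_04", "item_08"],
    "divergence": ["item_03", "item_05", "item_06", "item_07"],
    "appeal": ["item_09", "item_10", "item_11", "item_12"],
    "requirements": ["item_13", "item_14", "item_15"],
}

def ebpas_domain_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Average each respondent's item ratings within each domain."""
    return pd.DataFrame(
        {name: responses[items].mean(axis=1) for name, items in DOMAINS.items()}
    )
```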
Report of EBP Use.
Participants were asked to select all the EBPs they had used in the past week (out of 27 EBPs; Wong et al., 2015) as well as the primary EBP they had used in the past week with an autistic student. They reported 1) the number of days the EBP was used in the past week (0-5 days); 2) whether they collected fidelity data on their EBP use (Yes; I meant to but didn’t have time; No); 3) whether they collected student outcome data (Yes; I meant to but didn’t have time; No); and 4) the number of students with autism with whom they used the EBP (One student; Some students; Most or all students). The report of use measure was developed by members of the research team.
Implementation Outcomes.
Participants completed an adapted version of the Evidence-Based Practice Outcomes Scale (Ehrhart et al., 2015), which asked the extent to which they 1) use all components of their primary EBP, 2) have adapted their primary EBP, 3) feel competent implementing their primary EBP, and 4) feel knowledgeable explaining their primary EBP. Participants self-rated on a 5-point Likert scale (0-4) with answers ranging from “Not at all” to “Very great extent.”
Phase 2 - Follow-Up Survey Measures
Fidelity of EBP Implementation.
Participants reported their fidelity to the components of their primary EBP by completing an implementation checklist. Implementation checklists were drawn from the Autism Focused Intervention Resources & Modules (AFIRM) on the NPDC website (National Professional Development Center on Autism Spectrum Disorder, n.d.). The total number of items on the implementation checklists varied by EBP, but all were divided into three stages of implementation: Planning, Using, and Monitoring. Participants were asked to check off whether they completed each component on the checklist for their primary EBP. Fidelity was calculated as the percentage of the total number of components completed.
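As a concrete illustration, this fidelity score reduces to a simple proportion of completed components. The sketch below assumes a checklist represented as booleans grouped under the three AFIRM stages; the items shown are placeholders, not actual AFIRM checklist content.

```python
# Minimal sketch of the fidelity calculation described above: completed
# checklist components as a percentage of all components. Items are grouped
# by the three implementation stages; the entries below are placeholders.
def fidelity_percent(checklist: dict[str, list[bool]]) -> float:
    """Return completed components as a percentage of all components."""
    completed = sum(sum(stage) for stage in checklist.values())
    total = sum(len(stage) for stage in checklist.values())
    return 100 * completed / total

example = {
    "Planning": [True, True, False],
    "Using": [True, True, True, False],
    "Monitoring": [True, False],
}
print(f"{fidelity_percent(example):.1f}% fidelity")  # prints "66.7% fidelity"
```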
EBP Knowledge.
Participants answered true/false and multiple-choice questions assessing their knowledge of their primary EBP. Knowledge surveys were drawn from the Autism Focused Intervention Resources & Modules (AFIRM) on the NPDC website (https://afirm.fpg.unc.edu/afirm-modules; National Professional Development Center on Autism Spectrum Disorder, 2011). The total number of knowledge survey items varied across EBPs. The percentage of correct responses was used for analysis.
Classroom Quality.
Participants completed a self-report version of the Autism Program Environment Rating Scale (APERS), designed to assess the overall quality of program environments for students with autism. The APERS Self-Assessment Tool (National Professional Development Center on Autism Spectrum Disorder, 2011) consists of 64 items across 11 domains for the preschool/elementary form and 66 items across 11 domains for the middle school/high school form. Participants completed the version that aligned with the primary age group they taught. Domains include Learning Environments, Classroom Structure and Schedule, Positive Learning Climate, Assessment, Curriculum and Instruction, Communication, Social Competence, Personal Independence and Competence, Functional Behavior, Family Involvement, and Teaming. Items are scored on a 5-point scale with three anchors: 1 (“This is a challenge for our program”), 3 (“This element is consistently in place, but we still have some work to do”), and 5 (“This is a real strength for our program”). Classroom quality was calculated as the average item score within each domain.
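The domain score is again a simple average, here over anchor values. The sketch below assumes the three anchors map to scores of 1, 3, and 5, consistent with the 5-point range reported in Table 2; the domain and responses shown are hypothetical.

```python
# Minimal sketch of APERS self-assessment domain scoring, assuming the three
# response anchors map to scores of 1, 3, and 5. Responses are placeholders.
ANCHOR_SCORES = {
    "challenge": 1,  # "This is a challenge for our program"
    "in_place": 3,   # "Consistently in place, but we still have some work to do"
    "strength": 5,   # "This is a real strength for our program"
}

def apers_domain_score(responses: list[str]) -> float:
    """Average the anchor scores for one domain's items."""
    return sum(ANCHOR_SCORES[r] for r in responses) / len(responses)

# e.g., a four-item domain:
print(apers_domain_score(["strength", "in_place", "strength", "challenge"]))  # 3.5
```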
Data Analysis
Data were analyzed using IBM SPSS Statistics. Statistical analysis varied depending on the form of the data. Data were examined for normality, and appropriate transformations were applied to highly skewed and kurtotic variables based on the recommendations of Tabachnick and Fidell (2018). Means, standard deviations, and frequencies were calculated to describe the data. Independent t-tests and chi-square analyses were used to examine group differences between CAPTAIN trained and non-CAPTAIN trained providers in report of use. Independent t-tests were used to examine group differences between CAPTAIN trained and non-CAPTAIN trained providers in EBP attitudes, implementation outcomes, classroom outcomes, fidelity of EBP implementation, and knowledge of EBPs.
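For readers who prefer open-source tooling, the same two families of tests can be run outside SPSS. The sketch below uses scipy with hypothetical column names ("captain_trained", "openness", "collects_fidelity_data") and omits the normality screening and transformations described above; it is an illustration, not the authors’ actual analysis script.

```python
# Illustrative group comparisons analogous to those described above, using
# scipy in place of SPSS. Column names and the input file are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv")  # hypothetical data file
trained = df[df["captain_trained"] == 1]
untrained = df[df["captain_trained"] == 0]

# Independent-samples t-test for a continuous outcome (e.g., EBPAS Openness).
t_stat, t_p = stats.ttest_ind(
    trained["openness"].dropna(), untrained["openness"].dropna()
)

# Chi-square test of independence for a categorical outcome
# (e.g., whether providers collect fidelity data: Yes / Meant to / No).
table = pd.crosstab(df["captain_trained"], df["collects_fidelity_data"])
chi2, chi2_p, dof, _expected = stats.chi2_contingency(table)
```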
Results
Overall, results indicated variability across measures, with some significant differences between CAPTAIN trained and non-CAPTAIN trained providers.
Phase 1 - Statewide Survey Results
EBP Attitude
Overall, participants reported moderate scores on the Openness (mean = 3.13, SD = .76), Appeal (mean = 3.32, SD = .72), and Requirements (mean = 2.99, SD = .96) subscales of the EBPAS, and relatively lower scores on the Divergence subscale (mean = 2.30, SD = .67). There was a significant group difference in EBP Openness between CAPTAIN trained and non-CAPTAIN trained providers, t (579) = −3.29, p = .001. Specifically, CAPTAIN trained providers reported significantly higher EBP Openness (mean = 3.24, SD = .68) than non-CAPTAIN trained providers (mean = 3.09, SD = .77). No significant differences were found for the Appeal, Divergence, and Requirements subscales (p > .05). See Table 2.
Table 2.
Outcomes in CAPTAIN Trained and Non-CAPTAIN Trained Providers
| Provider outcomes | CAPTAIN Trained: Yes | CAPTAIN Trained: No | Statistics | p |
|---|---|---|---|---|
| Direct service providers and teachers | ||||
| Report of Use | ||||
| Use primary EBP with most or all students | 69.4% | 62.0% | χ2 (2) = 11.4 | .003 |
| Days/week use of primary EBP | Mean 3.6 | Mean 3.5 | t (570) = 1.8 | .078 |
| Collect fidelity data on primary EBP | 42.6% | 35.6% | χ2 (2) = 10.9 | .004 |
| Collect student data on EBP response | 55.2% | 44.3% | χ2 (2) = 14.2 | .001 |
| Implementation Outcomes Scale | ||||
| Use all components of primary EBP (max 4) | Mean 2.8 | Mean 2.6 | t (555) = −2.9 | .004 |
| Competence implementing primary EBP (max 4) | Mean 2.9 | Mean 2.7 | t (527) = −3.7 | <.001 |
| Knowledge explaining primary EBP (max 4) | Mean 2.6 | Mean 2.4 | t (533) = −3.6 | <.001 |
| Adapted primary EBP (max 4) | Mean 2.4 | Mean 2.3 | t (1443) = 1.6 | .111 |
| EBPAS | ||||
| Openness (max 4) | Mean 3.2 | Mean 3.1 | t (579) = −3.3 | .001 |
| Appeal (max 4) | Mean 3.4 | Mean 3.3 | t (563) = −1.6 | .118 |
| Divergence (max 4) | Mean 2.3 | Mean 2.3 | t (548) = −1.6 | .114 |
| Requirements (max 4) | Mean 3.0 | Mean 3.0 | t (1282) = −0.7 | .486 |
| Teachers | ||||
| NPDC EBP Knowledge Assessments % correct for primary EBP | 67.2% | 56.9% | t (222) = −2.5 | .012 |
| Self report fidelity of implementation checklist % fidelity for primary EBP | 88.9% | 86.3% | t (219) = 0.5 | .583 |
| APERS Learning Environment (max 5) | Mean 3.9 | Mean 3.7 | t (202) = −2.1 | .041 |
Notes: APERS = Autism Program Environment Rating Scale; EBP = evidence-based practice; EBPAS = Evidence-Based Practice Attitude Scale; NPDC = National Professional Development Center on Autism Spectrum Disorder.
Report of EBP Use
Regarding the total number of EBPs used, CAPTAIN trained providers reported on average using 9.29 EBPs (SD = 5.02) and non-CAPTAIN trained providers reported 9.02 EBPs (SD = 5.05), with no significant difference between the two groups, t (1,541) = .87, p = .38. The most frequently used primary EBP reported by CAPTAIN trained providers was Visual Supports (22%), followed by Reinforcement (20%), and Prompting (10%); whereas the most frequently used primary EBP reported by providers not trained by CAPTAIN was Reinforcement (20%), followed by Prompting (16%), and Visual Supports (14%).
The majority of the providers reported using their primary EBP 4-5 days in the past week (75%) while about 12% reported 3 days, 7% reported 2 days, about 5% reported 1 day, and 1% reported 0 days in the last week. About a third (36%) of the providers reported collecting fidelity data. Close to half of the providers (46%) reported collecting student data, and 62% reported using their primary EBP with most or all students.
When investigating the impact of being trained by CAPTAIN, results indicated that a larger proportion of CAPTAIN trained providers reported collecting fidelity data [χ2 (2, N = 1,191) = 10.95, p = .004], collecting student data [χ2 (2, N = 1,185) = 14.19, p = .001], and using their primary EBP with “most or all students” [χ2 (2, N = 1,514) = 11.41, p = .003] than providers not trained by CAPTAIN. Frequency of primary EBP use did not differ significantly between CAPTAIN and non-CAPTAIN trained providers, t (570) = −1.76, p = .08. See Table 2.
Implementation Outcomes
The implementation outcomes associated with providers’ primary EBP were moderate across all items: use of all components of EBP (mean = 2.63, SD = .97), adaptation of EBP (mean = 2.34, SD = 1.06), feeling competent in implementing EBP (mean = 2.70, SD = .97), and feeling knowledgeable explaining EBP (mean = 2.45, SD = 1.07).
There were significant group differences between CAPTAIN trained providers and non-CAPTAIN trained providers for several implementation outcomes. Specifically, CAPTAIN trained providers reported higher levels of using all components of the primary EBP (mean = 2.76, SD = .88) than non-CAPTAIN trained providers (mean = 2.59, SD =.98, t (555) = −2.91, p = .004); CAPTAIN trained providers reported higher levels of feeling competent implementing EBPs (mean = 2.88, SD = .92) than non-CAPTAIN trained providers (mean = 2.65, SD = .98, t (527) = −3.75, p <.001); CAPTAIN trained providers reported higher levels of feeling knowledgeable explaining their primary EBP (mean = 2.63, SD = 1.01) than providers not trained by CAPTAIN (mean = 2.40, SD = 1.08, t (533) = −3.60, p <.001). See Table 2.
No significant differences were found between CAPTAIN trained providers and non-CAPTAIN trained providers on the adaptation of EBP item (p >.05).
Phase 2 - Follow-Up Survey Results
Fidelity of EBP Implementation
The mean percentage of specific EBP fidelity for teachers’ primary EBP was 87% (N = 221; SD = 15.31%). There was no significant group difference in the self-reported fidelity of teachers’ primary EBP between CAPTAIN trained and non-CAPTAIN trained teachers, t (219) = .55, p = .583. See Table 2.
EBP Knowledge
The mean percentage of correct responses on specific EBP knowledge for teachers’ primary EBP was 59.42% (N = 224; SD = 26.26%). CAPTAIN trained teachers had significantly higher EBP knowledge (mean = 67.15%, SD = 25.53%) than teachers not trained by CAPTAIN (mean = 56.90%, SD = 26.08%), t (222) = −2.54, p = .012.
Classroom Quality
There was a significant group difference between CAPTAIN trained and non-CAPTAIN trained teachers on the Learning Environments subscale. Specifically, CAPTAIN trained teachers reported higher learning environment quality (CAPTAIN trained: mean = 3.92, SD = .77; non-CAPTAIN trained: mean = 3.65, SD = .82), t (202) = −2.05, p = .041. No significant differences were found for the other APERS subscales or the overall score (p > .05).
Discussion
The use of research-based practices is mandated by IDEA and ESSA and has been linked to improved outcomes for students with autism, which highlights effective implementation and scale-up of EBPs in schools as a critical priority. The growing literature on factors that support the implementation process indicates key drivers that can be targeted by implementation interventions to improve implementation outcomes. In this study, we explored implementation outcomes at the direct service provider level and evaluated differences between CAPTAIN trained and non-CAPTAIN trained providers in one of the first large-scale, state-wide examinations across multiple levels of the special education service system. Overall, outcomes indicate that CAPTAIN trained providers and teachers report more favorable attitudes toward EBPs, better implementation outcomes related to data collection and use with students, higher knowledge of their primary EBP, and better ratings of the learning environment. These findings show great promise for CAPTAIN as a model to support statewide scale-up of EBPs for autism and are discussed in more detail below.
One factor that has been linked to positive implementation outcomes is individual provider attitudes toward EBPs (Aarons, 2004; Aarons et al., 2011; Reding et al., 2014). This makes intuitive sense, as providers must be open to learning about and using EBPs before adoption and effective implementation can occur. Multiple studies have found that provider attitudes before training, especially openness to the use of evidence-based practice and perceptions of the appeal of the practice, are linked to fidelity to the intervention after training (Aarons et al., 2011; Beidas et al., 2014). Furthermore, attitudes toward a specific practice have been linked to reported use of that practice (Reding et al., 2014), and negative beliefs about a practice may be a barrier to adoption (e.g., Harned et al., 2013). Additionally, it has been suggested that measures of provider attitude, such as the EBPAS, could be applied in the education sector to examine the impact of implementation interventions (Cook et al., 2018). Our findings reveal that CAPTAIN trained providers were significantly more open to EBP use than providers who were not CAPTAIN trained. This may be a result of their interaction with a CAPTAIN trainer, or those who were more open initially may have sought out the type of training offered by CAPTAIN. Openness to EBPs may lead these teachers to seek out additional EBP trainings in their future professional development, further expanding their use of effective practices. Determining whether openness can be influenced by interactions like those with CAPTAIN trainers is an area for further study.
One of the primary goals of CAPTAIN is to increase provider knowledge of EBPs as an initial step toward implementation. It is encouraging that CAPTAIN trained teachers not only scored significantly higher on EBP knowledge assessments but also felt confident explaining the EBPs to others. Passing on EBP knowledge to team members who may have limited access to professional development, such as paraeducators, is essential and can help promote the spread and scale-up of EBPs. EBP use can be conceptualized as adherence to protocol (fidelity) as well as dosage (frequency of use) and reach (number of students receiving the intervention). Using EBPs with high fidelity and moderate to high dosage across many students will likely maximize student impact, so these are important measures to consider. CAPTAIN trained providers reported using fidelity checklists to monitor their own implementation at higher rates than other providers and also reported greater use of all components of their primary EBPs. Self-monitoring of implementation fidelity could be an effective way to prevent the implementation drift that often occurs following an initial period of high fidelity to an EBP. Because using EBP checklists to monitor fidelity is an essential component of the CAPTAIN model of coaching, this may have influenced providers to continue self-monitoring their fidelity after coaching ended. There is some research to suggest that self-monitoring of fidelity provides a practical and effective approach to maintaining EBP fidelity over time (e.g., Nelson et al., 2015). However, additional research is needed to examine the validity and accuracy of self-recorded EBP fidelity as well as the role of this process in sustainment. A greater proportion of CAPTAIN trained providers reported using their primary EBP with most or all students, suggesting that they may generalize use of EBPs classroom-wide, across students. In addition, their ratings on the learning environment classroom quality indicator from the APERS were significantly higher.
This study suggests that the CAPTAIN model can support scale-up of EBPs in special education and meet the significant needs of teachers supporting autistic students. Next steps for improving scale-up efforts to meet state needs will involve better understanding the mechanisms by which CAPTAIN improves educators’ EBP fidelity and use. For example, data indicate that engaging with CAPTAIN trainers may provide important social support networks for educators attempting to implement EBPs (McGhee Hassrick et al., 2020). Additionally, it may be helpful to better understand how training in EBP fidelity tools supports EBP fidelity. While all educators had access to the AFIRM fidelity tools, those trained by CAPTAIN received explicit instruction in how to use the tools, which may have supported fidelity by increasing understanding of the expectations for using the EBP.
One concerning finding is the generally poor climate for implementation of innovation in special education systems. This is consistent with other studies that have found poorer ratings of implementation climate and leadership in autism special education compared to public mental health systems (Jobin et al., 2018), and it has implications for state-level needs to improve implementation leadership and climate in special education. A positive implementation climate and the use of support strategies, such as training availability and ongoing monitoring of performance, have been linked to better sustainment of innovation, improved child outcomes, and decreased staff burnout and turnover (Novins et al., 2013). When leaders provide clear guidance during implementation and facilitate support from co-workers and administration for effective implementation, trainees report an increased sense of competence and satisfaction (Green et al., 2014). Statewide training in implementation leadership and the use of implementation support strategies across the system would likely improve any scale-up efforts throughout the state.
These promising findings support the potential use of the CAPTAIN model for successful EBP scale-up; however, there are several limitations to the current study. One limitation relates to characterizing the reach of recruitment efforts and the representativeness of the sample. Our primary recruitment strategies involved social media distribution and broad email distribution, with requests that the recruitment information be forwarded to others within the educational sector. Therefore, accurate measurement of the response rate and the representativeness of the sample was not feasible. Relatedly, the current data set did not allow for analysis of outcomes at the individual practice level; that is, there was not sufficient power to compare results across individual EBPs. Another primary limitation is that data were collected through provider self-report. This may contribute to the lack of significant differences on specific EBP fidelity measures, in that self-report scores were very high overall. Future studies should assess fidelity using more objective measures and could examine whether self-monitoring of EBP fidelity is accurate and helps sustain EBP use over time. Additional objective measures, such as an independent evaluator conducting an APERS assessment of a program, could help further evaluate the impacts of CAPTAIN on overall EBP use and classroom quality. Finally, the associations presented should be interpreted cautiously, with the understanding that no provider measures were available prior to CAPTAIN training.
In summary, these preliminary findings show promise for the efficacy of the CAPTAIN model in increasing dissemination and implementation of EBPs at the classroom level. Future research will involve objective assessment of teacher and student outcomes resulting from CAPTAIN participation.
Acknowledgments
This work was supported by grants from the National Institute of Mental Health (K01MH109574; Suhrheinrich), the Institute of Education Sciences (R324A170063; Stahmer), and the California Department of Education.
Footnotes
The authors declare that they have no conflict of interest to report.
References
- Aarons GA (2004). Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research, 6(2), 61–74. 10.1023/b:mhsr.0000024351.12294.65
- Aarons GA, Hurlburt M, & Horwitz SM (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research, 38(1), 4–23. 10.1007/s10488-010-0327-7
- Beidas RS, Edmunds J, Ditty M, Watkins J, Walsh L, Marcus S, & Kendall P (2014). Are inner context factors related to implementation outcomes in cognitive-behavioral therapy for youth anxiety? Administration and Policy in Mental Health and Mental Health Services Research, 41(6), 788–799. 10.1007/s10488-013-0529-x
- Brock ME, Dynia JM, Dueker SA, & Barczak MA (2020). Teacher-reported priorities and practices for students with autism: Characterizing the research-to-practice gap. Focus on Autism & Other Developmental Disabilities, 1–12. 10.1177/1088357619881217
- Brookman-Frazee L, Baker-Ericzén M, Chan J, Dickson K, Rieth S, Haine-Schlagel R, Stadnick N, Stahmer A, & Suhrheinrich J (in press). Applying dissemination and implementation science to facilitate community implementation of evidence-based interventions. In Handbook of Autism and Pervasive Developmental Disorders. Springer.
- Brookman-Frazee L, Baker-Ericzén M, Stahmer A, Mandell D, Haine RA, & Hough RL (2009). Involvement of youths with autism spectrum disorders or intellectual disabilities in multiple public service systems. Journal of Mental Health Research in Intellectual Disabilities, 2(3), 201–219. 10.1080/19315860902741542
- California Department of Education (n.d.). California’s Longitudinal Pupil Achievement Data System. Retrieved August 17, 2020, from https://www.calpads.org
- Cook CR, Davis C, Brown EC, Locke J, Ehrhart MG, Aarons GA, Larson M, & Lyon AR (2018). Confirmatory factor analysis of the Evidence-Based Practice Attitudes Scale with school-based behavioral health consultants. Implementation Science, 13(1). 10.1186/s13012-018-0804-z
- Dingfelder HE, & Mandell DS (2011). Bridging the research-to-practice gap in autism intervention: An application of diffusion of innovation theory. Journal of Autism and Developmental Disorders, 41(5), 597–609. 10.1007/s10803-010-1081-0
- Dynia JM, Walton K, Brock ME, & Tiede G (2020). Early childhood special education teachers’ use of evidence-based practices with children with autism spectrum disorder. Research in Autism Spectrum Disorders, 77. 10.1016/j.rasd.2020.101606
- Ehrhart MG, Aarons GA, & Farahnak LR (2015). Going above and beyond for implementation: The development and validity testing of the Implementation Citizenship Behavior Scale (ICBS). Implementation Science, 10, 65. 10.1186/s13012-015-0255-8
- Every Student Succeeds Act (2015). Public Law 114-95. https://www.govinfo.gov/app/details/PLAW-114publ95
- Fixsen DL, Blase KA, Timbers GD, & Wolf MM (2007). In search of program implementation: 792 replications of the Teaching-Family Model. The Behavior Analyst Today, 8(1), 96–110. 10.1037/h0100104
- Fixsen D, Blase K, Metz A, & Van Dyke M (2013). Statewide implementation of evidence-based programs. Exceptional Children. 10.1177/001440291307900206
- Green AE, Albanese BJ, Shapiro NM, & Aarons GA (2014). The roles of individual and organizational factors in burnout among community-based mental health service providers. Psychological Services, 11(1), 41–49. 10.1037/a0035299
- Hess K, Morrier M, Heflin L, & Ivey M (2008). Autism treatment survey: Services received by children with autism spectrum disorders in public school classrooms. Journal of Autism and Developmental Disorders, 38, 961–971. 10.1007/s10803-007-0470-5
- Individuals with Disabilities Education Act (2004). Public Law 108-446. https://www.govinfo.gov/app/details/CREC-2004-11-20/CREC-2004-11-20-pt1-PgS11669
- Jobin A, Stahmer A, Dickson KS, Nahmias A, Chlebowski C, & Brookman-Frazee L (2018, November). Characterization of inner context implementation factors related to the implementation of evidence-based interventions for children with ASD: Comparing implementation climate and leadership in education and mental health settings. In Dickson KS (Chair), Examination of implementation leadership and climate on implementation in schools and community mental health services [Symposium]. 52nd Annual Convention of the Association for Behavioral and Cognitive Therapies, Washington, DC.
- McGhee Hassrick E, Freidman C, Schetter P, Melgarejo M, Nahmias A, Li J, Suhrheinrich J, & Stahmer A (2020, January). The impact of social networks on scaling up evidence-based autism practices in schools [Poster presentation]. Annual Principal Investigators Meeting for the Institute of Education Sciences, Washington, DC.
- McNeill J (2019). Social validity and teachers’ use of evidence-based practices for autism. Journal of Autism and Developmental Disorders. 10.1007/s10803-019-04190-y
- Morrier MJ, Hess KL, & Heflin LJ (2010). Teacher training for implementation of teaching strategies for students with autism spectrum disorders. Teacher Education and Special Education. 10.1177/0888406410376660
- Moullin JC, Dickson KS, Stadnick NA, Rabin B, & Aarons GA (2019). Systematic review of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Implementation Science, 14(1), 1. 10.1186/s13012-018-0842-6
- National Professional Development Center on Autism Spectrum Disorder (2011). Retrieved July 14, 2021, from https://autismpdc.fpg.unc.edu/national-professional-development-center-autism-spectrum-disorder
- Nelson JR, Oliver RM, Hebert MA, & Bohaty J (2015). Use of self-monitoring to maintain program fidelity of multi-tiered interventions. Remedial and Special Education, 36(1), 14–19. 10.1177/0741932514544970
- Novins DK, Green AE, Legha RK, & Aarons GA (2013). Dissemination and implementation of evidence-based practices for child and adolescent mental health: A systematic review. Journal of the American Academy of Child and Adolescent Psychiatry, 52(10), 1009–1025.e18. 10.1016/j.jaac.2013.07.012
- Odom SL, Cox AW, & Brock ME (2013). Implementation science, professional development, and autism spectrum disorders. Exceptional Children, 79(2), 233–251.
- Odom SL, Cox A, Sideris JH, Hume KA, Hedges S, Kucharczyk S, Shaw E, Boyd BA, Reszka SS, & Neitzel J (2018). Assessing quality of program environments for children and youth with autism: Autism Program Environment Rating Scale (APERS). Journal of Autism and Developmental Disorders. 10.1007/s10803-017-3379-7
- Reding MEJ, Chorpita BF, Lau AS, & Innes-Gomberg D (2014). Providers’ attitudes toward evidence-based practices: Is it just about providers, or do practices matter, too? Administration and Policy in Mental Health and Mental Health Services Research, 41(6), 767–776. 10.1007/s10488-013-0525-1
- Ruble L, McGrew JH, & Toland MD (2012). Goal attainment scaling as an outcome measure in randomized controlled trials of psychosocial interventions in autism. Journal of Autism and Developmental Disorders, 42(9), 1974–1983. 10.1007/s10803-012-1446-7
- Snyder TD, De Brey C, & Dillow SA (2019). Digest of Education Statistics 2018 (NCES 2020-009) (54th ed.). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education. https://nces.ed.gov/pubs2020/2020009.pdf
- Stahmer AC, & Ingersoll B (2004). Inclusive programming for toddlers with autism spectrum disorders. Journal of Positive Behavior Interventions, 6(2), 67–82. 10.1177/10983007040060020201
- Steinbrenner JR, Hume K, Odom SL, Morin KL, Nowell SW, Tomaszewski B, Szendrey S, McIntyre NS, Yücesoy-Özkan S, & Savage MN (2020). Evidence-based practices for children, youth, and young adults with autism. The University of North Carolina at Chapel Hill, Frank Porter Graham Child Development Institute, National Clearinghouse on Autism Evidence and Practice Review Team.
- Suhrheinrich J, Rieth SR, Dickson KS, & Stahmer AC (2020). Exploring associations between inner-context factors and implementation outcomes. Exceptional Children, 86(2), 155–173. 10.1177/0014402919881354
- Suhrheinrich J, Stahmer AC, & Schreibman L (2007). A preliminary assessment of teachers’ implementation of Pivotal Response Training. The Journal of Speech and Language Pathology – Applied Behavior Analysis, 2(1), 1–13. 10.1037/h0100202
- Suhrheinrich J (2011). Training teachers to use pivotal response training with children with autism: Coaching as a critical component. Teacher Education and Special Education, 34(4), 339–349. 10.1177/0888406411406553
- Suhrheinrich J, Stahmer A, Reed S, Schreibman L, Reisinger EM, & Mandell DS (2013). Implementation challenges in translating Pivotal Response Training into community settings. Journal of Autism and Developmental Disorders, 43, 2970–2976. 10.1007/s10803-013-1826-7
- Suhrheinrich J, Schetter P, England A, Melgarejo M, Nahmias AS, Dean M, & Yasuda P (2020). Statewide interagency collaboration to support evidence-based practice scale up: The California Autism Professional Training and Information Network (CAPTAIN). Evidence-Based Practice in Child and Adolescent Mental Health, 5(4), 468–482. 10.1080/23794925.2020.1796545
- Tabachnick B, & Fidell L (2018). Using multivariate statistics (7th ed.). Pearson.
- Wong C, Odom SL, Hume KA, Cox AW, Fettig A, Kucharczyk S, Brock ME, Plavnick JB, Fleury VP, & Schultz TR (2015). Evidence-based practices for children, youth, and young adults with autism spectrum disorder: A comprehensive review. Journal of Autism and Developmental Disorders, 45(7), 1951–1966. 10.1007/s10803-014-2351-z
