Implementation Research and Practice. 2023 Sep 6;4:26334895231199465. doi: 10.1177/26334895231199465

Implementation readiness for evidence-based autism practices in school systems

Aubyn C Stahmer 1,3, Jessica Suhrheinrich 2,3, Yue Yu 1, Melina Melgarejo 2,3, Patricia Schetter 1,4, Greg A Young 1
PMCID: PMC10486229  PMID: 37790182

Abstract

Background

The increase in the number of autistic children being identified has led to increased demand on public schools to provide high-quality services. Effectively scaling up evidence-based practice (EBP) use for autistic students is challenging, given the complicated organization of special education. Teachers face significant challenges implementing autism EBPs with fidelity. Factors such as implementation leadership and climate and attitudes toward EBP are linked to successful EBP use and may vary at different levels of the education system. Examining mechanisms of successful implementation is a critical step to support scale-up.

Method

In this observational study, conducted from September 2018 to March 2020, California school personnel (n = 2273) at multiple levels of the system completed surveys related to implementation climate, leadership, and attitudes toward EBP. Data were collected throughout California at the Special Education Local Plan Areas, County Office of Education, and district and school levels from educators and administrators working in public schools supporting autistic students. Multi-level modeling was conducted to characterize implementation readiness.

Results

Overall, implementation climate and leadership scores are low across levels with regional levels rated more positively than districts or schools. Attitudes toward EBP were moderate, with those working in schools having the poorest ratings and specialists/trainers and related service providers (e.g., speech-language pathologists) having the highest ratings.

Conclusions

Outcomes provide a unique opportunity to compare implementation factors across organizational levels with a large, statewide sample. These data provide guidance for developing implementation interventions at multiple levels of the education system to increase readiness for effective scale-up of autism EBP in schools. Personnel and leaders at different organizational levels may need differentiated training targeting improved implementation climate and leadership. Personnel within districts and schools may experience a particular benefit from leadership support for EBP implementation.

Keywords: autism, education system, scale-up, readiness, implementation climate, leadership, evidence-based

Plain Language Summary

The number of autistic children being identified in schools is increasing. To address this, schools are trying to do a better job of using high-quality practices based on research. However, teachers have had difficulty using research-based strategies for autistic students the way the manuals indicate they should be used. This might be due to the complexity of the strategies or to limited support from special education leadership and infrastructure. Research shows that leaders can be very important in helping teachers use effective strategies. Over 2,200 school personnel in California, including administrators, professional development providers, teachers, and paraprofessionals, completed surveys asking how their leaders, schools, districts, and regions supported the use of research practices for autistic students. Overall, limited support is provided in special education, with regional agencies providing more support than districts or schools. These data suggest that school and district leaders need training in how to support educators in using autism-specific strategies.

Introduction

Currently, one in 44 children in the United States has autism spectrum disorder (ASD; Maenner et al., 2021), and the number of children with autism served by schools has grown more than eight-fold, from 93,000 (0.4% of students) in 2000 to 803,029 (1.6% of students) in 2020 (Irwin et al., 2021). Since the education system is the primary service provider for autistic children (Brookman-Frazee et al., 2009), this escalation has increased demand on public schools to improve service quality by scaling up the use of autism-specific evidence-based practices (EBPs).

Although schools are required to provide research-supported intervention (ESSA, 2015; IDEA, 2004), implementation is difficult. Even when teachers are aware of EBPs and are attempting to use them, they often have low levels of fidelity, which is the degree to which an intervention is implemented as intended (Suhrheinrich et al., 2007; Suhrheinrich et al., 2013). This is troubling because research indicates that stronger EBP fidelity leads to better child outcomes (Durlak & DuPre, 2008; Zitter et al., 2021).

One factor that may impact EBP implementation in schools is the structure of the special education system and the limited capacity most state systems have for scaling up EBPs (IDEA, 2004). There is some evidence of differences in leader perceptions of culture and climate between general and special education (Moore et al., 2021). This may be because special education has a more distributed leadership structure than general education, with less authority at the school site level and more at the centralized district level. In California, 1,037 school districts are divided into 132 regional consortiums for the provision of special education (California Department of Education, 2022a). These Special Education Local Plan Areas (SELPAs) work with districts to meet students' educational needs (Petek, 2019). Other states have similar regional entities that provide supports for special education (Moran & Sullivan, 2015). For example, Pennsylvania has Intermediate Units (Joint State Government Commission, 1997), and New York has Boards of Cooperative Educational Services (https://www.boces.org/).

Special education decision-making about EBP use, including training and supervision resources, requires extensive collaboration between regional teams, school staff, district leaders, classroom staff, and families. Regional administrators often lead efforts to meet special education requirements across multiple districts and control provision of EBP training and coaching resources. Additionally, County Offices of Education (COEs) may provide special education programming for students with complex needs or direct educational services for smaller districts. District-level directors (e.g., special education directors) are responsible for special education programming, including curricula, staffing decisions, and resource allocation within a district (Bray & Dickey, 2020).

Administrators at all levels have a strong influence on the culture and climate related to EBP implementation and resource allocation for scale-up (e.g., Durand et al., 2016; Melgarejo et al., 2022; Rohrbach et al., 2005; Williams et al., 2022). Districts and SELPAs may also have specialists (e.g., autism coordinators, behavior specialists) who provide support to educators. School principals play an important leadership role at school sites and influence the culture and climate related to EBP implementation (Stadnick et al., 2019; Williams et al., 2021). However, most principals have limited training, knowledge, and confidence in special education leadership (Crockett, 2002; Sun & Xin, 2020). Although principals directly supervise special educators, these educators often perceive special education directors as the decision-makers for autism programs. While strong school leadership is associated with better implementation climate (Melgarejo et al., 2020), it is unclear how principals directly impact EBP implementation.

Given these complexities, it is imperative to examine how factors including implementation climate, leadership, and provider characteristics affect successful EBP implementation in schools across organizational levels. Although few studies examine EBP implementation across levels (Grol et al., 2007), leadership support plays a key role in EBP implementation success (Fixsen et al., 2007; Odom et al., 2020; Odom et al., 2022).

One organizational factor that can determine the outcome of implementation efforts is implementation climate or the extent to which an innovation or EBP is expected, supported, and rewarded (Weiner et al., 2011). Implementation climate has been linked to increased EBP sustainment, decreased staff burnout, and improved child outcomes (Ehrhart et al., 2014; Locke et al., 2019; Lyon et al., 2018; Novins et al., 2013). In fact, implementation climate predicts fidelity of EBP in public school classrooms (Dingfelder & Mandell, 2011).

Another consideration is implementation leadership, which refers to specific behaviors that demonstrate leader support for EBP implementation and is linked to success of EBP use (Aarons et al., 2014; Aarons et al., 2017a). Furthermore, implementation leadership supports better implementation climate (Aarons et al., 2015; Melgarejo et al., 2020), which in turn is associated with higher EBP fidelity in schools (Williams et al., 2022). Staff report higher EBP competency and satisfaction when leaders provide implementation support (Green et al., 2014).

Teachers, paraprofessionals, and related service professionals (RSPs) provide direct services to students. Their education, experience, and attitudes can affect EBP use (Suhrheinrich et al., 2007). For example, provider attitudes, including openness to new EBPs, predict EBP use (Aarons et al., 2011; Aarons, 2004; Reding et al., 2014). Conversely, negative attitudes toward a practice can be a barrier to adoption (Harn et al., 2013). Provider openness to EBP and perceptions of EBP appeal are also linked to EBP fidelity (Augustsson et al., 2015; Beidas et al., 2014).

This study aims to describe potential implementation mechanisms (i.e., implementation climate, implementation leadership, and attitudes toward EBP) across multiple levels of a statewide education system, examining both organization type and provider role. Understanding these factors across levels will lead to recommendations to improve EBP scale-up in the current educational context.

Method

This observational study describes implementation mechanisms (i.e., implementation climate and leadership and attitudes toward EBP) across multiple levels of a statewide education system, examining both organization type and provider role. School personnel across California completed a survey assessing implementation climate and implementation leadership in their organization and their attitudes toward autism EBP.

Study Sample

Data were collected throughout California at the SELPA, COE, and district and school levels from educators and administrators working in public schools supporting autistic students. Teachers, paraprofessionals, and other RSPs (together referred to as direct service providers) had to be working with at least one autistic student at the time of the survey. Participants in an administrative or training role needed to provide support (resources, training, or decision-making about EBP use) to programs serving autistic students.

Study Procedures

Recruitment

A cascade approach was used for recruitment beginning with state SELPA directors. All 132 SELPA directors received study information at a state directors’ meeting and were asked to complete the surveys themselves and then distribute to their staff, districts, and COEs using an email template with a survey link. Directors nominated program specialists and special education directors from each district and county in their SELPA to participate (mean districts per SELPA = 8; min = 1; max = 48). Special education directors then distributed survey information to all autism or behavior coordinators and principals at elementary, K-12, and middle and high school campuses, and in turn principals distributed study information to their teachers, paraprofessionals, and RSPs serving autistic students in their schools.

In addition, recruitment was conducted at the California Autism Professional Training and Information Network (CAPTAIN) annual summit. CAPTAIN provides statewide professional development in autism EBP, and its membership includes over 300 educators. CAPTAIN members were invited to complete the survey and to assist with disseminating information to eligible educators. Recruitment postcards provided information about the study, including a survey link. Study information was also sent to professional educator organizations (e.g., California Association for Special Educators, California Association of School Psychologists) and distributed through social media. Because of the wide distribution of study information, we could not calculate a response rate.

The survey was available between May 2018 and March 2020, and participants responded based on the 2018–2019 school year. The survey remained open to increase participation and was closed immediately prior to COVID-19-related school closures. Participants provided consent before completing the survey and were entered into an opportunity drawing in which one in 20 received a $50 gift card.

Measures

SELPA-level measures were completed by SELPA directors and program specialists employed by the SELPA; district- and COE-level measures were completed by special education directors, autism/behavior specialists, and RSPs employed by a district or COE. School-level measures were completed by principals, teachers, and paraprofessionals. The measures varied in wording based on the participants’ role and organizational level.

Implementation Climate Scale (ICS)

This study used a combination of the original ICS (Ehrhart et al., 2014) and a version adapted for schools (S-ICS; Thayer et al., 2022) measuring perceptions of the policies, practices, procedures, and behaviors that are expected, rewarded, and supported to facilitate effective EBP implementation. Participants rated the extent to which they agreed with statements about EBP values and priorities from 0 (not at all) to 4 (very great extent). Participants completed six scales from the S-ICS: (1) focus, (2) educational support, (3) recognition, (4) rewards, (5) use of data, and (6) existing supports to deliver EBP. They completed two ICS scales: (1) selection for EBP and (2) selection for openness. The mean of the subscales was computed to create the total ICS score. The ICS subscales demonstrate strong internal consistency (α = 0.81–0.91; Ehrhart et al., 2014). Subscale internal consistency was also strong in the S-ICS (α = 0.85–0.97; Lyon et al., 2018). Direct service providers (teachers, paraprofessionals, and RSPs) completed the ICS about both their district and their school sites.

School-Implementation Leadership Scale (S-ILS)

Participants completed four scales of the S-ILS (Lyon et al., 2018) assessing the degree to which a leader is (1) knowledgeable about EBP, (2) supportive about the use of EBP, (3) proactive about the use of EBP, and (4) perseverant in implementing EBP. Participants rated leadership of their identified organizational level. Direct service providers selected and rated their primary leader for autism EBP in their district. Participants in a leadership role (e.g., principals, directors) also rated their own leadership. This will be called the ILS referent (self vs. other) in the analyses and results. Participants rated the extent to which they agreed with statements from 0 (not at all) to 4 (very great extent). The mean of the subscales was computed to create the total ILS score. The S-ILS demonstrates excellent internal consistency (α = .99) and convergent and discriminant validity.

Evidence-Based Practice Attitude Scale (EBPAS)

The 15-item EBPAS (Aarons, 2004) measures attitudes toward the adoption of EBPs. The EBPAS is composed of four subscales: Appeal, Requirements, Openness, and Divergence. The Appeal scale represents the extent to which the provider would adopt an EBP if it were intuitively appealing or was being used by colleagues. The Requirements scale assesses the extent to which the provider would adopt an EBP if it were required. The Openness scale assesses the extent to which the provider is open to trying new interventions. The Divergence scale assesses the extent to which the provider perceives EBPs as not clinically useful and less important than clinical experience. The total score represents one's global attitude toward EBP adoption. Participants rated each question on a 5-point scale ranging from 0 (not at all) to 4 (to a very great extent). Subscale scores are calculated by taking the mean across the questions in that subscale, and the total score is the sum of the subscale scores. Participants were given the following definition of EBP for the purposes of completing the survey: EBPs for ASD are defined as practices that have sufficient scientific evidence to confidently determine positive outcomes or effects for students with ASD; the scientific evidence includes well-designed research studies published in peer-reviewed journals that have been determined to meet high standards of scientific methods. The overall Cronbach's alpha reliability is good (α = .79), and subscale alphas range from .66 to .93 (Aarons et al., 2007).
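
As a small illustration of the scoring just described, the sketch below (in R) computes subscale scores as item means and the total score as the sum of the four subscales. The item column names and item-to-subscale groupings are placeholders chosen for illustration, not the actual EBPAS item assignments.

# Sketch of EBPAS scoring; item column names and groupings are hypothetical.
subscale_items <- list(
  appeal       = c("ebpas_item01", "ebpas_item02", "ebpas_item03", "ebpas_item04"),
  requirements = c("ebpas_item05", "ebpas_item06", "ebpas_item07"),
  openness     = c("ebpas_item08", "ebpas_item09", "ebpas_item10", "ebpas_item11"),
  divergence   = c("ebpas_item12", "ebpas_item13", "ebpas_item14", "ebpas_item15")
)
# Each subscale score is the mean of its items (rated 0-4); the total is the sum of the subscales.
for (s in names(subscale_items)) {
  ebpas_data[[s]] <- rowMeans(ebpas_data[, subscale_items[[s]]], na.rm = TRUE)
}
ebpas_data$ebpas_total <- rowSums(ebpas_data[, names(subscale_items)])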

Statistical Analyses

Each measure (ICS, ILS, EBPAS) was analyzed separately, and a similar model building approach was employed: measure subscale score was treated as a repeated measure and multi-level modeling accounted for subject-level dependencies for these repeated measures. For example, for the ICS, data were structured such that each subject had multiple rows, each corresponding to a different subscale (i.e., eight subscales = eight rows of data per participant). Each subscale score appeared under a single “ICS score” variable, with “ICS subscale type” as the repeated measure variable. Subject-level dependencies were accounted for with random intercepts, and subscale type was evaluated as a fixed effect. This allowed for efficient testing of explanatory variables, such as the organizational level, on overall measure score (i.e., the main effect of the organizational level on the total mean score; in models without the presence of an interaction term between the explanatory variable and subscale type) and on each separate subscale (in models with the presence of an interaction term between the explanatory variable and subscale type, in which case, a significant interaction is interpreted as a differential effect of the explanatory variable on individual subscale types). Additional layers of dependency were handled in a similar way, where ILS referent (self vs. other) became an additional repeated measure, again affording the opportunity to examine differential effects as a function of the main effects of the explanatory variables of interest. This modeling approach was utilized as the most parsimonious way of testing our hypotheses since it avoided (a) building numerous, independent models for each subscale within a given measure, (b) the unnecessary increase in family-wise error such an approach would entail, and (c) the loss of power any adjustments to significance testing would require (e.g., Bonferroni corrections). Given the nested nature of the data within SELPAs, SELPA was also included as a random effect in all models.
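
For illustration, the following R sketch shows how data of this kind might be restructured from one row per participant into the long format described above. The data frame and column names (ics_wide, id, selpa_final, and the eight ICS subscale columns) are assumptions chosen to match the variable names used in the model formulas below, not the study's actual variable names.

# A minimal sketch, assuming a wide data frame ics_wide with one row per participant,
# an identifier (id), a SELPA identifier (selpa_final), an organizational_level column,
# and one column per ICS subscale score (column names are hypothetical).
library(tidyr)

ics_long <- pivot_longer(
  ics_wide,
  cols = c(focus, education_support, recognition, rewards,
           selection_ebp, selection_open, existing_support, use_of_data),
  names_to = "ics_scale_type",   # repeated-measure indicator: which subscale the row refers to
  values_to = "ics_score"        # the single score variable used as the DV
)
# Each participant now contributes eight rows, one per ICS subscale.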

Modeling proceeded by testing successively more complex models against simpler models to obtain omnibus tests of overall effects. For each measure, modeling began with a baseline model that included subscale as a repeated-measure fixed effect. More complex models then added variables of interest (e.g., other repeated measures such as referent in the ILS, or organizational level), and each of these nested models was compared to the simpler model without that effect as a test of the effect of interest. In Aim 2, the dependent variable (DV) was the ICS score variable described above, and the independent variables (IVs) were the organizational level and ICS subscale type. To assess the main effect of the organizational level on ICS, the following two models were compared (presented in R code):

Model 1: ics_score ∼ ics_scale_type + (1 | id) + (1 | selpa_final)

Model 2: ics_score ∼ ics_scale_type + organizational_level + (1 | id) + (1 | selpa_final)

The main effect of the organizational level in Model 2 was interpreted as the effect of the organizational level across ICS subscale types, which is mathematically equivalent to the effect of the organizational level on the total (or average) ICS score. To examine differences between organizational levels on each of the subscales, the next model tested the interaction between ICS subscale type and organizational level (presented in R code):

Model 3: ics_score ∼ ics_scale_type × organizational_level + (1 | id) + (1 | selpa_final)

The interaction term in Model 3 was examined for significance (by adding it to the main effects in Model 2). When the interaction was not significant, the simpler main-effect model (i.e., Model 2) supported the conclusion that the effect of the organizational level was similar across subscale types. When the interaction term was significant, the conclusion was that the effect of the organizational level depended on which subscale was examined. This was then followed by tests of the organizational level at each level of subscale type to determine the specific nature of the effect. That is, when the interaction effect was significant, we examined simple comparisons between organizational levels for each subscale, using Tukey's method for multiple comparisons. Because only direct service providers completed the ICS, we did not examine provider role as an IV.
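
A minimal sketch of how this model comparison might be carried out in R with lme4 is shown below. It assumes the long-format data frame (ics_long) and variable names from the model formulas above, and it uses anova() for the likelihood ratio tests and the emmeans package for the Tukey-adjusted simple comparisons; the emmeans package is an assumption, as the paper does not name the package used for post hoc contrasts.

# Sketch only: fit the nested models from above and compare them.
library(lme4)
library(emmeans)

m1 <- lmer(ics_score ~ ics_scale_type + (1 | id) + (1 | selpa_final),
           data = ics_long, REML = FALSE)                       # Model 1 (baseline)
m2 <- update(m1, . ~ . + organizational_level)                  # Model 2 (main effect)
m3 <- update(m2, . ~ . + ics_scale_type:organizational_level)   # Model 3 (interaction)

anova(m1, m2)   # likelihood ratio test for the main effect of organizational level
anova(m2, m3)   # likelihood ratio test for the subscale-by-level interaction

# If the interaction is significant, compare organizational levels within each subscale
# using Tukey-adjusted pairwise contrasts.
emm <- emmeans(m3, ~ organizational_level | ics_scale_type)
pairs(emm, adjust = "tukey")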

For ILS, the DV was overall ILS score, and the IVs were ILS referent, organizational level, and ILS subscales. Model building followed a similar path as for the ICS analyses. To assess the main effect of referent, the following two models were compared:

Model 4: ils_score ∼ (1 | id) + (1 | selpa_final)

Model 5: ils_score ∼ ils_referent + (1 | id) + (1 | selpa_final)

To assess the main effect of ILS subscale, the following two models were compared (presented in R code):

Model 5: ils_score ∼ ils_referent + (1 | id) + (1 | selpa_final)

Model 6: ils_score ∼ ils_referent + ils_scale + (1 | id) + (1 | selpa_final)

To assess the interaction between the two variables (referent × ILS subscale), the following model was used:

Model 7: ils_score ∼ ils_referent × ils_scale + (1 | id) + (1 | selpa_final)

To examine the three-way interaction between referent, ILS subscales, and organizational level, the following model was used:

Model 8: ils_score ∼ ils_referent × ils_scale × organizational_level + (1 | id) + (1 | selpa_final)

For EBPAS, the DV was overall EBPAS score, and the IVs were organizational level, EBPAS subscales, and provider roles. Model comparisons were conducted using likelihood ratio tests of model fit (−2 log-likelihood values), distributed as a chi-square statistic with degrees of freedom equal to the difference in the number of estimated parameters between the models. Descriptives of the measures’ total and subscale scores are reported as estimated marginal means (EMMs), which adjust for other variables in the model. All simple comparisons examining main effects or interaction effects were corrected for family-wise error rates using Tukey's method for multiple comparisons. All analyses were conducted in R, Version 4.0.2, using the lme4 package (Bates et al., 2014).
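
As a sketch of how the EBPAS likelihood ratio tests and the reported estimates might be obtained, the R code below mirrors the approach described above; the data frame and variable names (ebpas_long, ebpas_score, ebpas_scale, provider_role) are hypothetical, organizational level would be added analogously, and the use of emmeans for the Tukey-corrected contrasts is an assumption.

# Sketch: nested-model tests and EMMs for the EBPAS analysis (variable names are hypothetical).
# ebpas_long holds one row per participant per EBPAS subscale.
library(lme4)
library(emmeans)

e1 <- lmer(ebpas_score ~ ebpas_scale + (1 | id) + (1 | selpa_final),
           data = ebpas_long, REML = FALSE)
e2 <- update(e1, . ~ . + provider_role)              # add provider role as an IV
e3 <- update(e2, . ~ . + ebpas_scale:provider_role)  # role-by-subscale interaction

anova(e1, e2)   # chi-square LRT: main effect of provider role on overall EBPAS score
anova(e2, e3)   # chi-square LRT: role-by-subscale interaction

# Estimated marginal means (adjusted for other model terms) and Tukey-corrected contrasts
emmeans(e2, ~ provider_role)                        # overall EMMs by role
pairs(emmeans(e3, ~ provider_role | ebpas_scale),   # role comparisons within each subscale
      adjust = "tukey")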

Results

A total of 2,438 participants provided data. Participants represented 1,379 unique schools, with an average of 1.21 (SD = 0.65) participants per school. Schools were nested within 473 unique districts, with an average of 1.17 (SD = 0.60) schools per district. Districts and COEs were nested within 132 unique SELPAs, with an average of 4.17 (SD = 6.93) per SELPA. The majority of participants were White (87%), non-Hispanic (82%), and female (86%); 70% had a master's or doctorate degree, and 61% had a supervisory role. A total of 158 participants reported working for a COE, 695 for a district, 1,347 for a school, and 98 for a SELPA. Participants included administrators (e.g., principal, program specialist, assistant principal; n = 270), directors (e.g., SELPA director, superintendent, director of special education; n = 153), teachers (n = 873), paraprofessionals (n = 231), RSPs (e.g., speech therapist, occupational therapist, psychologist; n = 398), and specialists/trainers (i.e., assigned to provide training to educators; n = 373). See Table 1 for more details.

Table 1.

Participants’ Characteristics (n = 2438)

Organizational level: COE (n = 158) % District (n = 695) % School (n = 1347) % SELPA (n = 98) %
Role
Administrator (n = 270) 8.89 32.59 50.74 7.78
Director (n = 153) 8.50 79.09 7.84 4.58
Teacher (n = 873) 0.12 4.93 93.47 1.48
Paraprofessional (n = 231) 0.00 15.58 82.68 1.73
Related service professional (n = 398) 19.35 45.73 32.16 2.76
Specialist/trainer (n = 373) 11.53 60.32 16.89 11.26
Sex
Female (n = 1683) 3.86 33.75 57.22 5.17
Male (n = 274) 2.92 29.20 65.69 2.19
Other (n = 5) 0.00 0.00 100.00 0.00
Education level
High school (n = 33) 6.06 12.12 78.79 3.03
AA (n = 124) 3.23 20.97 75.00 8.07
BA (n = 540) 3.70 14.26 80.00 2.04
Master's degree (n = 1497) 7.88 35.87 50.97 5.28
Doctorate degree (n = 103) 13.59 48.54 32.04 5.83
Supervisory role
No (n = 328) 10.98 58.84 17.07 13.11
Yes (n = 468) 9.40 51.50 31.20 5.77
Race
Native American (n = 19) 5.26 47.37 42.11 5.26
Asian (n = 82) 4.88 40.24 52.44 2.44
African American/Black (n = 36) 2.78 36.11 52.78 8.33
Pacific Islander (n = 9) 0.00 11.11 88.89 0.00
White (n = 1496) 4.01 32.75 58.02 5.21
More than one race (n = 68) 2.94 30.88 61.77 4.41
Ethnicity
Hispanic (n = 335) 2.69 33.73 60.60 2.99
Non-Hispanic (n = 1555) 3.99 32.61 58.26 5.15

Note. AA = Associate of Arts; BA = Bachelor of Arts; COE = County Office of Education; SELPA = Special Education Local Plan Areas.

To assess the representativeness of our sample, we compared the demographics of the teachers in this sample (a major component of our sample) to public school teachers in California (2018–2019 school year; California Department of Education, 2022b) and in the United States (2017–2018 school year; Institute of Education Sciences). Among the 873 teachers in this study who provided their sex, 74% identified as female (compared to 73% in CA, 76% nationwide) and 12.7% identified as male (compared to 17% in CA, 24% nationwide); 59% had a master's degree or higher (not available for CA, 58% nationwide), 15% identified as Hispanic (21%, 9%), 67% were White (61%, 79%), 1.5% were Black (4%, 7%), 4% were Asian (6%, 2%), 3% were two or more races (1%, 2%), 1% were American Indian/Alaska Native (5%, 1%), and 0.4% were Pacific Islander (3%, <1%).

Implementation Climate Scale

The main effect of the organizational level on the overall ICS score was significant (χ2 = 138.29, df = 3, p < .001). Total ICS EMM scores ranged from 13.40 to 19.40 across organizational levels. Specifically, personnel rated SELPAs (EMM = 19.40, SE = 0.47, 95% CI = [18.48–20.32]) significantly higher than COEs (EMM = 17.30, SE = 0.62, 95% CI = [16.08–18.52]), districts (EMM = 13.40, SE = 0.32, 95% CI = [12.77–14.03]), and schools (EMM = 13.90, SE = 0.53, 95% CI = [12.86–14.94]), and rated COEs higher than schools and districts (see Table 2).

Table 2.

Estimated Marginal Means, SEs, 95% CI, and Post Hoc Comparisons of ICS Total and Subscale Scores Across Organizational Levels

COE EMM (SE), 95% CI District EMM (SE), 95% CI School EMM (SE), 95% CI SELPA EMM (SE), 95% CI Significant post hoc comparisons (p < .05)
Focus 2.67 (.09),[2.49–2.85] 1.92 (.04),[1.84–2.00] 1.94 (.08),[1.78–2.10] 2.83 (.07),[2.69–2.97] SELPA > district, school; COE > district, school
Education support 2.44 (.10),[2.24–2.64] 1.90 (.04),[1.82–1.98] 1.64 (.08),[1.48–1.8] 2.84 (.07),[2.70–2.98] SELPA > COE, district, school; COE > district, school; district > school
Recognition 2.32 (.09),[2.14–2.50] 1.85 (.04),[1.77–1.93] 1.83 (.08),[1.67–1.99] 2.53 (.07),[2.39–2.67] SELPA > district, school; COE > district, school
Rewards 1.09 (.09),[0.91–1.27] 0.85 (.04),[0.77–0.93] 0.89 (.08),[0.73–1.05] 2.40 (.07),[2.26–2.54] SELPA > district, school
Selection EBP 2.19 (1.01),[0.21–4.17] 1.77 (1.10),[−0.39 to 3.93] 1.60 (1.17),[−0.69 to 3.89] 2.41 (1.08),[0.29–4.53] SELPA > district, school; COE > district, school
Selection open 2.70 (0.09),[2.52–2.88] 2.30 (.04),[2.22–2.38] 2.85 (.08),[2.69–3.01] 2.97 (.07),[2.83–3.11] SELPA > district; COE > district; school > district
Existing support 2.06 (.09),[1.88–2.24] 1.55 (.04),[1.47–1.63] 1.49 (.08),[1.33–1.65] 2.45 (.07),[2.31–2.59] SELPA > COE, district, school; COE > district, school
Use of data 1.89 (.09),[1.71–2.07] 1.44 (.04),[1.36–1.52] 1.62 (.08),[1.46–1.78] 2.00 (.07),[1.86–2.14] SELPA > district, school; COE > district
Total score 17.30 (.62),[16.08–18.52] 13.40 (.32),[12.77–14.03] 13.90 (.53),[12.86–14.94] 19.40 (.47),[18.48–20.32] SELPA > COE > district, school

Note. COE = County Office of Education; SELPA = Special Education Local Plan Areas; ICS = Implementation Climate Scale; EMM = estimated marginal mean; CI = confidence interval; EBP = evidence-based practice. Subscale score range 0–4.

A significant interaction between ICS subscale and organizational level was found (χ2 = 278.42, df = 1, p < .001): ICS subscale scores varied by organizational level. SELPAs consistently had the highest ratings across all subscales, typically significantly higher than districts and schools, and higher than COEs for educational support and existing support. For example, for the educational support subscale, personnel rated SELPAs (EMM = 2.84, SE = 0.07, 95% CI = [2.70–2.98]) significantly higher than COEs (EMM = 2.44, SE = 0.10, 95% CI = [2.24–2.64]), districts (EMM = 1.90, SE = 0.04, 95% CI = [1.82–1.98]), and schools (EMM = 1.64, SE = 0.08, 95% CI = [1.48–1.80]). Selection for openness was the only subscale on which SELPAs were not rated higher than schools; personnel rated SELPAs (EMM = 2.97, SE = 0.07, 95% CI = [2.83–3.11]) and schools (EMM = 2.85, SE = 0.08, 95% CI = [2.69–3.01]) significantly higher than districts (EMM = 2.30, SE = 0.04, 95% CI = [2.22–2.38]). COEs were rated significantly higher than districts and schools except on rewards and use of data. Districts and schools were rated equally on most subscales, except educational support, where districts were rated more highly, and selection for openness, where schools were rated more highly (see Tables 2 and 3).

Table 3.

Simple Comparisons Between the Organizational Levels for Each ICS Subscale

Focus EBP Education support Recognition Rewards
Estimate 95% CI z p Estimate 95% CI z p Estimate 95% CI z p Estimate 95% CI z p
COE—district 0.76 [0.56, 0.96] 7.58 <.001 0.54 [0.35, 0.74] 5.39 <.001 0.48 [0.27, 0.68] 4.63 <.001 0.26 [0.06, 0.46] 2.50 0.060
COE—school 0.73 [0.50, 0.97] 6.12 <.001 0.79 [0.56, 1.03] 6.63 <.001 0.48 [0.24, 0.72] 3.94 0.001 0.20 [−0.04, 0.44] 1.64 0.359
COE—SELPA −0.17 [−0.39, 0.05] −1.49 0.444 −0.42 [−0.65, −0.20] −3.68 0.001 −0.23 [−0.46, 0.00] −1.99 0.190 −0.15 [−0.38, 0.08] −1.28 0.577
District—school −0.03 [−0.20, 0.15] −0.32 0.989 0.25 [0.08, 0.42] 2.84 0.023 0.00 [−0.17, 0.18] 0.05 >0.99 −0.06 [−0.23, 0.12] −0.66 0.913
District—SELPA −0.93 [−1.09, −0.77] −11.39 <.001 −0.97 [−1.13, −0.80] −11.76 <.001 −0.71 [−0.87, −0.55] −8.46 <.001 −0.41 [−0.57, −0.24] −4.87 <.001
School—SELPA −0.90 [−1.11, −0.70] −8.62 <.001 −1.22 [−1.42, −1.01] −11.60 <.001 −0.71 [−0.92, −0.51] −6.69 <.001 −0.35 [−0.56, −0.14] −3.30 0.005
Selection EBP Selection open Existing support Use of data
Estimate 95% CI z p Estimate 95% CI z p Estimate 95% CI z p Estimate 95% CI z p
       
COE—district 0.44 [0.23, 0.64] 4.23 0.001 0.41 [0.22, 0.61] 4.10 0.002 0.53 [0.32, 0.73] 5.11 <.0001 0.45 [0.25, 0.65] 4.43 0.001
COE—school 0.56 [0.32, 0.8] 4.61 <.001 −0.15 [−0.39, 0.08] −1.29 0.570 0.56 [0.33, 0.80] 4.64 <.0001 0.26 [0.02, 0.5] 2.17 0.133
COE—SELPA −0.23 [−0.46, 0] −1.94 0.213 −0.28 [−0.51, −0.06] −2.44 0.070 −0.41 [−0.64, −0.18] −3.47 0.003 −0.13 [−0.36, 0.10] −1.13 0.672
District—school 0.13 [−0.05, 0.3] 1.41 0.493 −0.57 [−0.74, −0.40] −6.46 <.001 0.04 [−0.14, 0.21] 0.43 0.974 −0.19 [−0.37, −0.02] −2.13 0.143
District—SELPA −0.66 [−0.83, −0.50] −7.93 <.001 −0.70 [−0.86, −0.53] −8.42 <.001 −0.93 [−1.1, −0.77] −11.23 <.001 −0.58 [−0.75, −0.42] −7.00 <.001
School—SELPA −0.79 [−1, −0.58] −7.43 <.001 −0.13 [−0.33, 0.08] −1.22 0.617 −0.97 [−1.18, −0.76] −9.18 <.001 −0.39 [−0.6, −0.19] −3.71 0.001

Note. COEs = County Offices of Education; SELPA = Special Education Local Plan Areas; ICS = Implementation Climate Scale; CI = confidence interval; EBP = evidence-based practice.

Implementation Leadership Scale

In terms of implementation leadership, there was a significant main effect of rater: leaders rated themselves (EMM = 11.36, SE = 0.27, 95% CI = [10.83–11.89]) significantly higher than their employees rated them (EMM = 10.43, SE = 0.26, 95% CI = [9.92–10.94]; χ2 = 685.06, df = 1, p < .001). Leader self-rated ILS EMMs ranged from 9.72 (SE = 0.16) to 12.36 (SE = 0.34) across organizational levels, whereas employee-rated ILS EMMs ranged from 8.08 (SE = 0.17) to 10.43 (SE = 0.40) across organizational levels (see Table 4).

Table 4.

Estimated Marginal Means, SEs, 95% CI, and Post Hoc Comparisons of Self-Rated and Employee-Rated ILS Total and Subscale Scores Across Organizational Levels

COE EMM (SE), 95% CI District EMM (SE), 95% CI School EMM (SE), 95% CI SELPA EMM (SE), 95% CI Significant post hoc comparisons (p < .05)
Employee rating
Proactive 2.33 (0.10), [2.13–2.53] 1.74 (0.05), [1.64–1.84] 2.09 (0.04), [2.01–2.17] 2.18 (0.12), [1.94–2.42] SELPA or COE or school > district
Knowledgeable 2.68 (0.10), [2.48–2.88] 1.97 (0.05), [1.87–2.07] 2.63 (0.04), [2.55–2.71] 2.64 (0.12), [2.40–2.88] SELPA or COE or school > district
Supportive 2.72 (0.09), [2.54–2.90] 2.27 (0.05), [2.17–2.37] 2.51 (0.04), [2.43–2.59] 2.91 (0.12), [2.67–3.15] COE > district; SELPA > school > district
Perseverant 2.59 (0.10), [2.39–2.79] 2.09 (0.05), [1.99–2.19] 2.44 (0.04), [2.36–2.52] 2.67 (0.12), [2.43–2.91] SELPA or COE or school > district
Total 10.33 (0.33), [9.68–10.98] 8.08 (0.17), [7.75–8.41] 9.66 (0.14), [9.39–9.93] 10.43 (0.40), [9.65–11.21] COE > school > district; SELPA > district
Leader self-rating
Proactive 2.54 (0.10), [2.34–2.74] 2.22 (0.05), [2.12–2.32] 1.98 (0.05), [1.88–2.08] 2.54 (0.12), [2.30–2.78] COE > district > school; SELPA > school
Knowledgeable 3.21 (0.10), [3.01–3.41] 2.81 (0.05), [2.71–2.91] 2.25 (0.05), [2.15–2.35] 3.20 (0.12), [2.96–3.44] SELPA or COE > district > school
Supportive 3.43 (0.10), [3.23–3.63] 3.15 (0.05), [3.05–3.25] 2.82 (0.05), [2.72–2.92] 3.40 (0.12), [3.16–3.64] COE > district > school, SELPA > school
Perseverant 3.18 (0.10), [2.98–3.38] 2.95 (0.05), [2.85–3.05] 2.68 (0.05), [2.58–2.78] 3.09 (0.12), [2.85–3.33] SELPA or COE or district > school
Total 12.36 (0.34), [11.69–13.03] 11.12 (0.17), [10.79–11.45] 9.72 (0.16), [9.41–10.03] 12.24 (0.40), [11.46–13.02] COE > SELPA > district > school

Note. COEs = County Offices of Education; SELPA = Special Education Local Plan Areas; ILS = Implementation Leadership Scale; EMM = estimated marginal mean; CI = confidence interval. Subscale score range 0–4.

There was a significant interaction effect between rater (self or employee) and organizational level on ILS total score (χ2 = 543.25, df = 3, p < .001). Specifically, for employee ratings, COE, SELPA, and school were rated significantly higher than district (ps < .0001). For leader self-ratings, COE and SELPA were rated significantly higher than district (ps < .01 and .05, respectively), and COE, SELPA, and district were rated significantly higher than school (ps < .0001; see Tables 4 and 5).

Table 5.

Simple Comparisons Between the Organizational Level by Rater (Self or Employee)

Rater Contrast Estimate 95% CI z p
Employee rating COE—district 2.25 [1.56, 2.93] 6.47 <.001
COE—school 0.66 [0.01, 1.32] 1.99 0.191
COE—SELPA −0.11 [−1.09, 0.88] −0.21 0.997
District—school −1.58 [−1.94, −1.22] −8.64 <.001
District—SELPA −2.35 [−3.17, −1.53] −5.61 <.001
School—SELPA −0.77 [−1.57, 0.03] −1.89 0.234
Leader self-rating COE—district 1.23 [0.53, 1.94] 3.44 0.003
COE—school 2.64 [1.95, 3.33] 7.54 <.001
COE—SELPA 0.11 [−0.89, 1.11] 0.22 0.996
District—school 1.41 [1.02, 1.80] 7.09 <.001
District—SELPA −1.12 [−1.95, −0.29] −2.66 0.032
School—SELPA −2.53 [−3.34, −1.71] −6.10 <.001

Note. COEs = County Offices of Education; SELPA = Special Education Local Plan Areas; CI = confidence interval.

There was also a significant three-way interaction between rater (self or employee), organizational level, and ILS scale (χ2 = 118.70, df = 9, p < .001; see Table 6). Specifically, when rated by employees, district leaders were rated as having lower implementation leadership than other organizational-level leaders on all subscales (ps < .01). There were no significant differences between SELPA, COE, and school for all subscales, except supportive, where SELPAs (EMM = 2.91, SE = 0.12, 95% CI = [2.67–3.15]) were higher than schools (EMM = 2.51, SE = 0.04, 95% CI = [2.43–2.59], estimate = −.40, SE = 0.12, 95% CI = [−.64 to −.17], p < .005). Leader self-ratings were more variable across subscales with school leaders rating themselves significantly lower than the other leaders on most scales. Specifically, COE leaders rated themselves significantly higher than district leaders rated themselves in all subscales, except perseverance, where no significant difference was found between COE and district. COE, SELPA, and district leaders rated themselves significantly higher than school leaders rated themselves in all subscales. SELPA leader self-ratings were significantly higher than district leader self-ratings only for the knowledgeable subscale. In sum, employees rated their district leaders low in implementation leadership, compared to other organizational levels. The three-way interaction showed that this was true across all subscales of ILS. School leaders rated themselves low in implementation leadership, compared to other organizational levels, and this was true for all ILS subscales.

Table 6.

Simple Comparisons Between the Organizational Levels by Rater (Self or Employee) for ILS Subscales

Employee rating Leader self-rating
Knowledgeable Estimate 95% CI z p Estimate 95% CI z p
COE—district 0.72 [0.52, 0.92] 6.98 <.001 0.40 [0.19, 0.61] 3.72 0.001
COE—school 0.05 [−0.14, 0.25] 0.53 0.952 0.96 [0.75, 1.17] 9.00 <.001
COE—SELPA 0.04 [−0.25, 0.34] 0.28 0.992 0.00 [−0.30, 0.30] 0.02 1.00
District—school −0.66 [−0.77, −0.56] −12.13 <.001 0.56 [0.44, 0.68] 9.05 <.001
District—SELPA −0.67 [−0.92, −0.43] −5.33 <.001 −0.40 [−0.64, −0.15] −3.17 0.008
School—SELPA −0.01 [−0.25, 0.23] −0.08 >0.999 −0.96 [−1.20, −0.71] −7.68 <.001
Perseverant
COE—district 0.49 [0.29, 0.70] 4.82 <.001 0.23 [0.02, 0.44] 2.13 0.144
COE—school 0.15 [−0.04, 0.34] 1.52 0.424 0.50 [0.29, 0.71] 4.63 <.001
COE—SELPA −0.08 [−0.38, 0.21] −0.56 0.945 0.09 [−0.22, 0.39] 0.55 0.946
District—school −0.34 [−0.45, −0.24] −6.29 <.001 0.27 [0.14, 0.39] 4.27 0.001
District—SELPA −0.58 [−0.83, −0.33] −4.56 <.001 −0.15 [−0.39, 0.10] −1.15 0.656
School—SELPA −0.23 [−0.48, 0.01] −1.89 0.231 −0.41 [−0.66, −0.17] −3.28 0.006
Proactive
COE—district 0.58 [0.38, 0.78] 5.71 <.001 0.32 [0.11, 0.53] 2.93 0.018
COE—school 0.24 [0.05, 0.43] 2.46 0.066 0.56 [0.35, 0.77] 5.19 <.001
COE—SELPA 0.15 [−0.15, 0.44] 0.98 0.762 0.00 [−0.30, 0.30] 0.01 >0.999
District—school −0.34 [−0.45, −0.24] −6.31 <.001 0.24 [0.12, 0.36] 3.85 0.007
District—SELPA −0.44 [−0.68, −0.19] −3.50 0.003 −0.32 [−0.57, −0.07] −2.50 0.060
School—SELPA −0.10 [−0.33, 0.14] −0.78 0.862 −0.56 [−0.80, −0.31] −4.41 0.001
Supportive
COE—district 0.45 [0.25, 0.65] 4.44 0.001 0.28 [0.07, 0.5] 2.60 0.046
COE—school 0.21 [0.02, 0.40] 2.15 0.137 0.61 [0.40, 0.82] 5.73 <.001
COE—SELPA −0.19 [−0.48, 0.10] −1.29 0.570 0.03 [−0.27, 0.34] 0.22 0.996
District—school −0.24 [−0.35, −0.14] −4.46 <.001 0.33 [0.21, 0.45] 5.36 <.001
District—SELPA −0.64 [−0.89, −0.40] −5.21 <.001 −0.25 [−0.50, 0.00] −1.96 0.204
School—SELPA −0.40 [−0.64, −0.17] −3.34 0.005 −0.58 [−0.83, −0.33] −4.61 <.001

Note. COEs = County Offices of Education; SELPA = Special Education Local Plan Areas; ILS = Implementation Leadership Scale; CI = confidence interval.

Evidence-Based Practice Attitude Scale

Attitudes toward EBP were moderate overall, with total scale EMMs ranging from 11.50 (SE = 0.07) to 12.40 (SE = 0.26) across organizational levels. There was a significant main effect of the organizational level on the overall EBPAS score (χ2 = 22.87, df = 3, p < .001). Those working in schools (EMM = 11.50, SE = 0.07, 95% CI = [11.36–11.64]) reported significantly less positive attitudes than district (EMM = 12.00, SE = 0.10, 95% CI = [11.80–12.20]) or SELPA (EMM = 12.40, SE = 0.26, 95% CI = [11.89–12.91]) staff (ps < .05). There was no significant interaction between the organizational level and subscales (ps > .05), indicating that the same pattern of differences between organizational levels seen in the total EBPAS score was also seen in the subscale scores.

The total scale EMMs ranged from 11.40 (SE = 0.15) to 12.10 (SE = 0.13) across roles. There was a significant main effect of professional role on the overall EBPAS score (χ2 = 22.28, df = 4, p < .001). Specifically, other RSPs (EMM = 11.90, SE = 0.12, 95% CI = [11.66–12.14]) scored significantly higher than administrators (EMM = 11.40, SE = 0.15, 95% CI = [11.11–11.69]), and specialists/trainers (EMM = 12.10, SE = 0.13, 95% CI = [11.85–12.35]) scored significantly higher than administrators, teachers (EMM = 11.60, SE = 0.08, 95% CI = [11.44–11.76]), and paraprofessionals (EMM = 11.50, SE = 0.17, 95% CI = [11.17–11.83]; ps < .05). A test of the interaction between roles and subscales was significant (χ2 = 47.06, df = 12, p < .001). When compared across roles, specialists/trainers had the highest EBPAS scores on most subscales, although the statistically significant differences by role varied. Specifically, for Required, specialists/trainers had significantly higher scores than teachers (p < .0001) and paraprofessionals (p = .05); for Appealing, RSPs and specialists/trainers scored significantly higher than paraprofessionals; and for Openness, specialists/trainers scored significantly higher than teachers and administrators (ps < .05; see Table 7).

Table 7.

Estimated Marginal Means, SEs, 95% CI, and Post Hoc Comparisons of Roles Across EBPAS Subscales

Admin EMM (SE), 95% CI RSP EMM (SE), 95% CI Para EMM (SE), 95% CI Specialist EMM (SE), 95% CI Teacher EMM (SE), 95% CI Significant post hoc comparisons (p < .05)
Required 3.00 (0.05), [2.90–3.10] 3.09 (0.04), [3.01–3.17] 2.99 (0.06), [2.87–3.11] 3.18 (0.04), [3.10–3.26] 2.92 (0.03), [2.86–2.98] Other RSP or specialists > teacher
Appealing 3.18 (0.05), [3.08–3.28] 3.39 (0.04), [3.31–3.47] 3.11 (0.06), [2.99–3.23] 3.35 (0.04), [3.27–3.43] 3.31 (0.03), [3.25–3.37] Other RSP > admin; other RSP or specialist or teacher > para
Openness 2.97 (0.05), [2.87–3.07] 3.15 (0.04), [3.07–3.23] 3.14 (0.06), [3.02–3.26] 3.24 (0.04), [3.16–3.32] 3.10 (0.03), [3.04–3.16] Other RSP or specialist > admin; specialist > teacher
Divergence 2.20 (0.05), [2.10–2.30] 2.28 (0.04), [2.20–2.36] 2.28 (0.06), [2.16–2.40] 2.38 (0.04), [2.30–2.46] 2.30 (0.03), [2.24–2.36] No significant differences
Total 11.40 (0.15), [11.11–11.69] 11.90 (0.12), [11.66–12.14] 11.50 (0.17), [11.17–11.83] 12.10 (0.13), [11.85–12.35] 11.60 (0.08), [11.44–11.76] Other RSP > admin; specialist > admin or para or teacher

Note. Admin = administrator; RSP = related service professional; para = paraprofessional; specialist = specialist/trainer; EBPAS = Evidence-Based Practice Attitude Scale; EMM = estimated marginal mean; CI = confidence interval. Subscale score range 0–4.

When comparing the four subscales, Divergence was scored significantly lower than the other three subscales across all professional roles. Appealing was scored significantly higher than the other three subscales by administrators, other RSPs, and teachers. See Table 7 for a detailed breakdown of subscales by provider role.

Discussion

This study examined implementation climate and leadership and attitudes toward autism EBP across special education organization type and provider role to understand implementation readiness in a state education system. Outcomes provide a unique opportunity to compare implementation factors across organizational levels with a statewide sample. Overall, participants rated implementation leadership as relatively low, and implementation climate and attitudes toward EBP fell in the moderate range compared to validation samples. Outcomes inform identification of strengths and areas for targeted intervention for scaling up EBP from a special education systems’ perspective.

SELPA personnel rated implementation climate the strongest, and district- and school-level personnel rated implementation climate the lowest. These results are not unexpected given that, within this sample, the SELPA is the only educational organization exclusively focused on special education. This suggests that the organizational focus, and perhaps staff expertise in delivery of special education programming, may be related to implementation climate. SELPA employees likely have professional credentials and a service delivery background in special education and greater familiarity with EBP to support autistic students, which may impact their leadership of autism initiatives. Leaders and educators at the district and school levels have multiple competing priorities related to overall educational initiatives, including, but not limited to, special education. Another consideration is that SELPA staff may have less contact with students on a day-to-day level than district or school personnel and a greater focus on EBP training. SELPA personnel may also be more involved in decisions about providing EBP supports and thus more aware of factors impacting higher implementation climate ratings.

Results related to implementation leadership revealed similar differences by organization type, with SELPA and COE consistently rated higher than districts and schools, and leaders’ self-ratings were higher than employee ratings of leaders within their organization. The differences between leaders and employees could be interpreted as staff not being aware of leaders’ activities to support EBP implementation. Alternatively, leaders may overestimate the benefit and impact of their support of staff during implementation activities which may be associated with lower effectiveness as a leader (Atwater et al., 1995). This is important because discrepancies in perception of implementation supports may affect climate for things such as performance feedback, which is critical to EBP coaching (Chaffin et al., 2012). Additionally, leaders who rate themselves lower than followers tend to have a better organizational climate for EBP which seems to be a necessary component of effective EBP implementation (Aarons et al., 2017b). Finally, prior studies found a significant relationship between implementation leadership and attitudes toward EBP (Meza et al., 2021).

School personnel reported relatively positive attitudes toward the use of EBP for autistic students. Teacher EBP attitudes were similar to those from a California sample over a decade ago (Stahmer & Aarons, 2009), potentially indicating limited change in attitudes about EBPs for autism over time. The lower divergence scores on the EBPAS across all roles indicate that educators generally perceive research to be impactful, which is important since IDEA and the ESSA both support research-informed practices. The positive attitude toward the use of EBPs for autism within California public schools is a promising finding that can be built upon to improve delivery, increase equitable access, and improve student outcomes.

When comparing EBPAS outcomes across personnel categories, RSPs and specialists/trainers consistently reported the most positive EBP attitudes. Specialized staff may more directly observe the beneficial impact of EBP on student learning or have more time and resources to use EBP effectively, as opposed to general education staff, who have a broader focus on initiatives that affect all students. Similarly, the job responsibilities of specialists/trainers may be more aligned with EBP use. For example, specialists/trainers may be specifically hired to support EBP implementation; therefore, openness to EBP and understanding of requirements for EBP use may be part of the selection criteria for these positions. This is promising given that they are likely to provide professional development regarding autism EBP and may “champion” these efforts across implementation phases.

In summary, while attitudes toward EBP were moderate to good compared to samples across other service sectors, participant ratings of implementation climate and implementation leadership were comparatively low. This is concerning because positive implementation climate and leadership have been linked with higher EBP fidelity use (Williams et al., 2022). Targeted implementation interventions at the organizational and leader levels may result in increased implementation leadership and climate, thereby facilitating increased EBP use.

Recent research highlights the importance of tailored multi-level capacity building to support effective professional development (Metz et al., 2022). Our findings indicate personnel and leaders at different organizational levels may need differentiated support or training targeting improved implementation climate and leadership. Personnel within districts and schools may experience a particular benefit given their lower ratings. One such intervention tested in community mental health and school settings is an adaptation of the Leadership and Organizational Change for Implementation intervention (LOCI; Aarons et al., 2015; Brookman-Frazee & Stahmer, 2018), which provides implementation leadership training, coaching, and data-driven strategic planning to support development of a positive EBP climate (Aarons et al., 2017a). Preliminary data suggest that the intervention improves implementation leadership and climate in districts to support autism EBP use (Jobin et al., 2021). Because implementation leadership and climate were rated significantly higher at the SELPA level, building the capacity of regional entities to provide EBP supports for district and school site leadership may improve implementation outcomes and site leader knowledge and confidence in supporting autistic students. Innovative methods to increase implementation leadership and climate are likely to facilitate scale-up and improve student outcomes.

There are several limitations to the current exploratory study. All data were self-reported and based on respondent perceptions about working with or supporting autistic students, their own attitudes toward EBP, and organizational implementation climate and implementation leadership. These constructs are necessarily measured through the perceptions of the respondent; however, they may not represent the specific behaviors and activities related to implementation within their organization. Although this was a large sample of respondents across organizational types and roles within special education services, it was drawn from only one state in the United States and from a group who self-selected to respond. Therefore, outcomes may not be representative of national or global trends. Additionally, these data were collected immediately prior to a significant shift in school services due to COVID-19 restrictions. Changes to service delivery and implementation supports may remain despite a return to in-person education. Despite these limitations, these data provide an important contribution to the literature on implementation mechanisms and a first look at how implementation leadership, climate, and attitudes toward EBP vary across educational structures and organizational roles.

Conclusions

This study provides a unique opportunity to explore implementation leadership, climate, and attitudes across organizational levels within a large statewide sample of educators supporting autistic students. Data indicate that implementation leadership and climate are limited in special education compared to other service sectors and that implementation support varies by organizational level. Regional programs focused on special education may have the best capacity for supporting EBP implementation. Leaders in public education systems would benefit from training in implementation leadership and strategies to build implementation climate. Outcomes extend the knowledge of factors influencing implementation leadership, climate, and attitudes generally and inform targeted intervention opportunities in state school systems.

Acknowledgments

The authors would like to thank the CAPTAIN leadership team for the collaboration on all aspects of this project and the many educators who completed the study surveys.

Footnotes

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This project was funded by the Institute for Education Sciences, Office of Special Education Programs, and Office of Special Education and Rehabilitative Services (grant number R324A170063).

