Author manuscript; available in PMC: 2024 Mar 1.
Published in final edited form as: Psychol Sch. 2022 Sep 16;60(3):743–760. doi: 10.1002/pits.22800

Practice Makes Proficient: Evaluation of Implementation Fidelity Following COMPASS Consultation Training

Lisa Ruble 1, Lindsey Ogle 2, John McGrew 3
PMCID: PMC9937020  NIHMSID: NIHMS1834613  PMID: 36816883

Abstract

Objective:

To test a training package for COMPASS, a multi-level consultation and coaching intervention for improved educational outcomes of students with ASD.

Method:

Using a Hybrid Type 3 design with an emphasis on implementation and a multidimensional approach to evaluating implementation outcomes, we tested a training package with community-based consultant trainees (CTs) unfamiliar with COMPASS and evaluated acceptability, appropriateness, feasibility, and fidelity from multiple sources (trainees, teachers, and parents).

Results:

Results confirm that COMPASS-naïve CTs can be successfully trained. At least one feedback session was needed to achieve proficiency. Initial fidelity ratings between researchers and CTs were disparate, suggesting that self-report alone may not be adequate. Four feedback opportunities were required to achieve proficiency in writing intervention plans, an activity particularly challenging for CTs. Teachers and parents perceived COMPASS as acceptable, appropriate, and feasible. CTs' knowledge of EBPs increased significantly following training; however, positive attitudes toward EBPs did not.

Conclusions:

The implementation outcomes suggest that the training package was effective for training CTs; however, additional practice with writing intervention plans is warranted.

Keywords: Autism, COMPASS, Consultation, Fidelity, Training


Understanding effective implementation strategies for improving the use of evidence-based practices (EBP) for students with autism spectrum disorders (ASD) in public schools is an understudied, yet critical and growing, area of research (Dickson & Suhrheinrich, 2021; Odom et al., 2013; Odom et al., 2020; Stahmer et al., 2019; Stahmer et al., 2018; Suhrheinrich et al., 2013). Targeting schools to study interventions to improve implementation outcomes is especially important because schools are required by federal law to provide services to all children with disabilities, including those with ASD, who represent an estimated 1 in 44 children (Maenner et al., 2021) and 803,029 students in U.S. public schools during the 2019–2020 school year (National Center for Education Statistics, 2020; U.S. Department of Education, 2020). If we can leverage best-practice ASD services in our public schools, we can have a significant impact on the educational and social-emotional outcomes of students with ASD broadly (Dale et al., 2022).

The Collaborative Model for Promoting Competence and Success (COMPASS) utilizes an evidence-based practice in psychology (EBPP) approach to provide individualized, ecologically informed goal setting that is matched to evidence-based instructional planning. COMPASS works by embedding teacher training and support within a parent-teacher consultation and coaching framework consistent with empirically supported practices for professional development (Dunst et al., 2015; McLeod et al., 2018). COMPASS has been tested successfully in three randomized controlled trials (RCTs) for students with ASD in general, resource, or special education classrooms, in preschool through elementary (Ruble et al., 2010; Ruble et al., 2013) and high school (Ruble et al., 2018) settings, including when provided via web-based technology (Ruble et al., 2013). COMPASS is based on a shared decision-making approach that occurs over two distinct activities. During the initial consultation with the parent and teacher, the consultant (a) facilitates discussion of parent and teacher perspectives using the COMPASS profile, a joint summary report that aggregates parent and teacher perceptions of the child's behavioral, social, communicative, and adaptive skills at home, school, and in the community (available for public use at www.compassforautism.org), providing a 360-degree view of the child and allowing for a holistic assessment and understanding; (b) facilitates development of goals guided by the COMPASS profile, reflecting best-practice recommendations for critical areas of intervention and social emotional learning for students with ASD (NRC, 2001), i.e., social, communication, and independent learning skills; and (c) assists with writing personalized intervention plans utilizing evidence-based practices that are ecologically informed and contextualized to account for the unique strengths and challenges of the teacher, teaching situation, and student (McGrew et al., 2016). This last activity is particularly crucial: while teachers report that social and communication skills are priority areas of instruction (Knight et al., 2019; Dynia et al., 2020), they also indicate that they are often ill prepared to provide such instruction (Knight et al., 2019).

Once the personalized goals and intervention plans have been developed, consultants provide four follow-up teacher coaching sessions with contingent, outcome-based performance feedback to help ensure faithful implementation of intervention plans, modifying the plans with the teacher as necessary to meet the changing needs of the student. These activities are rooted in evidence-based coaching (Brock & Carter, 2017; DiGennaro et al., 2007; Noell et al., 2005), reflective of (a) the development of mastery as outlined by Dunst et al. (2013) at both the intervention (teacher) and implementation (consultant) levels and (b) teachers' reported sources of decision-making, which consider student characteristics, professional experience, and strategies that work within their settings (Sulek et al., 2019). Coaching sessions follow a protocol that includes video review of teachers' implementation of instructional plans with students, progress monitoring using student goal attainment scales (Ruble et al., 2012a), and discussion of issues and next steps. Teachers and parents receive a report from the consultant of the student's progress on the three pivotal goals following each coaching session. Both the initial consultation and the four coaching sessions follow a manualized protocol that incorporates fidelity assessment (Ruble et al., 2012a). Consultants meet with the teachers approximately 7 hours in total over the course of the school year.

To date, however, studies of COMPASS have been limited to using the expert consultants who designed the intervention; thus, interpretation of effectiveness conflates consultant expertise with the COMPASS intervention itself, resulting in a science-practice gap. Consequently, it is imperative to establish that COMPASS-naïve individuals can be trained to deliver COMPASS with equivalent outcomes. Because research on consultation training is limited in implementation science (Fan et al., 2021; McLeod et al., 2018), the primary aim of the current study was to develop and evaluate implementation outcomes of a COMPASS training package. In the research available, consultant trainees are often graduate students (Hubel et al., 2020; Newell & Newell, 2018; Newmann et al., 2021; Sheridan et al., 2017a; Sheridan et al., 2017b; Sheridan et al., 2018). There are few examples of training community-based consultants (e.g., Fan et al., 2021), and even fewer that explore implementation outcomes. Of the studies that are available, strategies that have proven effective in training school-based consultants include direct instruction training workshops, modeling of consultation skills, role-playing activities, and discussion of cases or vignettes (Fan et al., 2021; McLeod et al., 2018). Consultation training packages that include components such as role-playing, video examples, and individualized feedback have additionally proven beneficial in school settings (Sheridan et al., 2017a; Sheridan et al., 2017b; Sheridan et al., 2018).

The training program was designed to teach COMPASS-naïve consultants to deliver the intervention using a Hybrid Type 3 design (Curran et al., 2012). Hybrid designs, as proposed by Curran et al. (2012), are offered as approaches to advance the translation of research into practice more rapidly than stand-alone clinical effectiveness or implementation studies. They outline three types of design: (a) Type 1 primarily evaluates the effectiveness of a clinical intervention, with a secondary goal of observing implementation strategies; (b) Type 2 evaluates the effectiveness of the clinical intervention and the implementation strategies equally; and (c) Type 3 focuses primarily on the effectiveness of the implementation strategy, with a secondary goal of observing the clinical intervention's effectiveness.

Implementation outcomes as defined by Proctor et al. (2011) are "the effects of deliberate and purposive actions to implement new treatment, practices, and services" (p. 65). Studies of implementation outcomes are critical for determining implementation success because observed outcomes may be explained by the quality or extent of implementation of the intervention or by other factors unrelated to the intervention. That is, assessment of implementation is essential for understanding whether poor outcomes are due to inadequate implementation or to problems of intervention theory (Allen et al., 2012). Implementation outcomes are proximal markers of implementation processes and key intermediate outcomes in intervention effectiveness research (Proctor et al., 2011). COMPASS is a complex, multi-level intervention with empirical evidence that changes at the consultant implementation level impact changes at the teacher intervention level, which, in turn, produce changes at the student level that result in improved student IEP outcomes (Dunst & Trivette, 2009; Ruble & McGrew, 2015). Wong et al. (2018) generated empirical evidence of this relationship using serial mediation: the indirect impact of consultation and coaching fidelity on student IEP outcomes was mediated by teaching quality and student engagement.

Building on prior work (Wong et al., 2018) and a need for more research on implementation outcomes of school-based consultation, we applied a multidimensional assessment approach and evaluated four different features of implementation outcomes: acceptability, feasibility, appropriateness, and fidelity (adherence and quality of delivery; Figure 1). Acceptability refers to satisfaction with an intervention as rated by stakeholders with direct experience with the intervention (Proctor et al., 2011). Appropriateness concerns the perceived fit or compatibility of the intervention within a given setting. It differs from acceptability because the intervention may be a good fit, but not acceptable (Proctor et al., 2011). We also included a measure of therapeutic alliance (Johnson et al., 2000) as an additional measure of appropriateness. Therapeutic alliance assesses the degree to which there is shared agreement on goals, on the interventions to address those goals, and on the quality of the bond between the consultant and consultee (Bordin, 1994). Elevated alliance implies both consultee satisfaction with the consultant and intervention and a satisfactory fit of the intervention, as shown by the ability of the consultant and consultee to form shared agreements on intervention goals and tasks. Feasibility refers to whether the intervention can be provided successfully within a specific setting. Feasibility differs from appropriateness because an intervention may be appropriate for a certain setting, but not feasible due to resource or training needs (Proctor et al., 2011). Fidelity refers to the degree to which the intervention is implemented as intended by the developers. For this study, we examined two types of fidelity: adherence and quality of delivery. Adherence is defined as the extent to which program components are implemented (Dane & Schneider, 1998). Quality of delivery, on the other hand, evaluates practitioners' process skills in delivering the intervention. In addition to implementation outcomes, we also examined cognitive and attitudinal outcomes. McLeod et al. (2018) identified cognitive- and skill-based training outcomes rooted in cognitive and attitude-based mechanisms that may change as a result of training. Thus, for cognitive mechanisms we evaluated CT knowledge of ASD EBPs; for attitude-based mechanisms, we assessed attitude change in the use of EBPs.

Figure 1.

Implementation Outcomes

To test the effectiveness of our COMPASS training package with naïve community-based consultants, we conducted a descriptive study and asked the following questions: (1) Can we develop a training package for ASD consultants that results in acceptable levels of consultant trainee (CT) implementation outcomes (appropriateness, feasibility, fidelity)? (2) How much follow-up/feedback is necessary to achieve acceptable implementation outcomes? (3) Would parents and teachers rate COMPASS, when delivered by school consultants, as acceptable, appropriate, and feasible? A secondary question evaluated (4) the impact of the training package on CT EBP knowledge and EBP attitudes.

Method

Participants

Consultant trainees (CTs), special education teachers, parents of children with ASD, and students with ASD participated. Teachers were included if they had a student with ASD on their caseload. Children were eligible if they had ASD as verified using a two-step process: (1) parents and teachers confirmed that the child had a formal diagnosis of autism and was receiving special education services under the category of autism, and (2) the child's score on the parent-reported Social Communication Questionnaire (SCQ) – Current (Rutter, Bailey, & Lord, 2003) met the cutoff criterion for an ASD diagnosis (≥11). In total, 105 participants were recruited for the study (see Table 1). The study was approved by the first author's Institutional Review Board.
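For illustration only, the two-step eligibility rule described above can be expressed as a simple decision function. This is a minimal sketch in Python; the function and parameter names are hypothetical, not taken from the study materials.

```python
def eligible_for_study(has_formal_autism_diagnosis: bool,
                       receives_services_under_autism_category: bool,
                       scq_current_score: int,
                       scq_cutoff: int = 11) -> bool:
    """Two-step eligibility screen (sketch of the rule described above)."""
    # Step 1: parents and teachers confirm a formal autism diagnosis and
    # special education services under the autism category.
    step_1 = has_formal_autism_diagnosis and receives_services_under_autism_category
    # Step 2: parent-reported SCQ-Current score meets the ASD cutoff (>= 11).
    step_2 = scq_current_score >= scq_cutoff
    return step_1 and step_2
```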

Table 1.

Participants

                      Pre-Pilot   Wave 1   Wave 2   Total

Consultant-Trainees       3          2        7       12
Teachers                  3          8       20       31
Parents                   3          8       20       31
Students with ASD         3          8       20       31
Total                    12         26       67      105

COMPASS Consultant-Trainees (CTs).

In total, 12 consultants were trained. All CTs were white and female and ranged in age from 24 to 60 years. Ten were experienced consultants or coaches with 1 to 30 years (M = 12.5, SD = 7.03) of experience, and two were advanced doctoral students in clinical or school psychology who worked as school consultants as part of an ASD treatment program provided by their universities (different from the researchers' universities), which were located in rural areas and often accessed for diagnostic and treatment services. Of the 10 school-based CTs, seven were integrated into a specific school system (i.e., internal consultants) while three were regional school consultants responsible for multiple school systems (i.e., external consultants). Two CTs (17%) had education specialist degrees in school psychology, seven (58%) had master's degrees in special education, two (17%) had master's degrees in clinical psychology, and one (8%) had master's degrees in both school counseling and educational leadership. Eight CTs (67%) had special education teaching experience. One CT was certified in applied behavior analysis, but all reported having taken coursework and attended professional development trainings specific to autism. Five CTs had received formal training in consultation or coaching prior to being trained in COMPASS.

Teachers.

In total, 31 licensed public-school special education teachers participated. All teachers were white and 29 were female. They ranged in age from 22 to 56 years (M = 37.96, SD = 9.82) and had an average of 11.14 years (SD = 8.14) of teaching experience, with an average of 9.50 years (SD = 7.08) of experience teaching students with ASD. They reported teaching between 1 and 9 students with autism during the study. Ten teachers (32%) reported attending more than five professional development opportunities (e.g., in-service training, coursework, supervised field work, coaching, workshops/conferences) specific to ASD, while three (10%) had no prior professional development experiences specific to ASD.

Students with Autism and Parents.

In addition to receiving special education services under the educational category of autism (Individuals with Disabilities Education Act, 2004), all 31 students met the SCQ screening criterion for autism based on parent report (cutoff score ≥ 11; M = 21.79, SD = 6.68; Barnard-Brak et al., 2016; Moody et al., 2017). Students ranged in age from 4 to 13 years, with a mean of 7.39 years (SD = 2.67). Eighty-four percent of participants were male; 80% were white, 6% black, 3% multiracial, and 10% did not report race. Students had a mean composite standard score on the Vineland-II Teacher Form (Sparrow et al., 2005) of 54.79 (SD = 19.43), indicating a relatively low level of adaptive functioning. Students' families reported a total annual household income of less than $10,000 (7%), $10,000-$24,999 (13%), $25,000-$49,999 (26%), $50,000-$100,000 (36%), $100,000 or more (3%), or unknown (16%).

Sampling Procedures

Participants were recruited in a multistep fashion from a mid-southern state. After permission was granted by their supervisors, CTs were contacted directly and recruited via phone call. After the first training session, CTs identified teachers of students formally identified with autism in their schools and obtained the teachers' permission for the recruitment team to contact them about the study. Once teachers were recruited, they provided the initials of the students with ASD on their caseload, and one student was randomly chosen if they had more than one. The teacher then contacted the parent for permission to share their information with the research team, who then contacted the parent directly.

Measures

Measures of implementation outcomes described by Dane and Schneider (1998) and expanded by Proctor et al. (2011) included acceptability, appropriateness, feasibility, and fidelity. Table 2 shows when the implementation outcome measures were collected and by whom.

Table 2.

Implementation Measures Collected by Sequence of Activity

Activity                                 Measure                             Completed by

COMPASS Training Package (before consultation)
  Acceptability                          CT Consultation Training            CT
                                         CT Coaching Training                CT
                                         CANVAS Website                      CT
                                         Feedback                            CT

Following Initial Consultation Session (beginning of the year, after training)
  Fidelity: Adherence                    Consultation Adherence              Researcher, CT
                                         Intervention Plan Quality Scale     Researcher, CT
  Fidelity: Quality of Delivery          Consultation Process Skills         Researcher, CT
  Acceptability                          Teacher/Parent Satisfaction         Teacher, Parent

Following Coaching Session(s) (after consultation)
  Fidelity: Adherence                    Coaching Adherence                  Researcher, CT
  Fidelity: Quality of Delivery          Coaching Process Skills             Researcher, CT
  Appropriateness                        Intervention Plan Feedback          Teacher
                                         Session Rating Scale                Teacher
  Acceptability                          Coaching Feedback                   Teacher

End of School Year Assessment
  Feasibility                            Usability, Feasibility, Burden      Teacher

Note: CT = consultant trainee

CT Characteristics

CTs completed a background measure and two additional measures, at baseline and at the end of the school year, regarding knowledge of ASD EBPs and attitudes toward evidence-based practices (EBPs) for students with autism. The background measure asked about years of experience as an ASD consultant, experience teaching children with ASD and number of years, and training in ASD. The EBP Knowledge Scale, developed by the researchers (α = .76), asked CTs to rate their knowledge (1 'Not at all knowledgeable' to 3 'Very knowledgeable') of the 27 EBPs for students with autism identified by the National Professional Development Center on Autism Spectrum Disorders (autismpdc.fpg.unc.edu/evidence-based-practices). The EBP Attitudes Scale (Aarons, 2004; α = .80) asked CTs to rate their agreement (1 'Not at all' to 5 'To a very great extent') with a series of statements assessing attitudes toward new or empirically validated practices (e.g., "I like to use new types of interventions").
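The internal-consistency coefficients reported for these scales (e.g., α = .76, α = .80) are Cronbach's alpha values. A minimal illustrative sketch of the standard computation follows, assuming item-level responses arranged as a respondents-by-items NumPy matrix; the data below are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 12 CTs rating the 27 EBPs on the 1-3 knowledge scale
rng = np.random.default_rng(0)
ratings = rng.integers(1, 4, size=(12, 27))
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```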

Acceptability

Two levels of acceptability were assessed. The first level concerned CT acceptability of the COMPASS training package. The second level concerned parent and teacher acceptability of the COMPASS intervention as delivered by the CT.

CTs' ratings of acceptability were obtained at three timepoints: after each of the two in-person training days and at the end of the school year. For the first two in-person training days, ratings were obtained immediately following the consultation training (10 items; α = .84; 1 'Strongly disagree' to 5 'Strongly agree') and the coaching training (10 items; α = .72; 1 'Strongly disagree' to 5 'Strongly agree') (see Table 2) using a measure developed by the researchers. Example items were "The topics covered were relevant to me," "The training experience will be useful in my work," and "The content was organized and easy to follow."

The final feedback on acceptability was obtained at the end of the school year, when CTs rated the usefulness/helpfulness of the entire training platform for the COMPASS training package. The training package combined a web-based CANVAS platform for asynchronous online learning with face-to-face training. The CANVAS platform is free for educators and was set up with four modules: two provided background information and general resources, and two focused on the initial consultation and the coaching activities. The content of the training was based in part on focus group feedback from ASD consultants and school administrators on how best to train ASD consultants and on the features of effective consultation (Love et al., 2021). Consultant acceptability of the training was assessed for (a) the CANVAS website (11 items; 1 'Not at all helpful' to 5 'Extremely helpful'; α = .88; e.g., "How helpful were (a) PowerPoint lectures with audio; (b) examples of completed COMPASS forms; (c) case study examples"), and (b) the performance feedback sessions (8 items; 1 'Not at all useful' to 5 'Extremely useful'; α = .92; e.g., "How useful do you find: (a) self-reflections through process forms and fidelity; (b) the feedback summary document used during the session; and (c) the Zoom meeting and verbal feedback").

Teacher and parent acceptability of COMPASS was assessed using three measures developed and tested by the researchers in prior studies (Ruble et al., 2010; 2013) related to participants’ satisfaction with and acceptability of the intervention components. Acceptability of the initial consultation was assessed independently by parents (α =.92) and teachers (α =.95) using a 25-item Likert-type response scale (1 ‘strongly disagree’ to 5 ‘strongly agree’). Example items included “I felt involved during the consultation and able to express my views; The consultant was able to adapt recommendations / suggestions based on my particular situation / classroom.” Teacher acceptability of coaching was assessed with the COMPASS Coaching Feedback Form (Ruble et al., 2012a) which consisted of 12 Likert-type items (α =.91; 1 ‘Not very much/well’ to 4 ‘Very much/well’) also related to participants’ satisfaction and reception of the intervention. Example items included the coaching sessions “Supported me to help the child reach his/her IEP goals” and “Supported me to document progress.” A third measure of teacher acceptability of COMPASS consisted of a 4-item scale and was rated using a 4-point Likert scale (α=.96; 1 ‘not at all likely’ to 4 ‘very likely’). Example items included “How likely would you be to use COMPASS for all your students” and “How likely would you be to recommend use of COMPASS for all teachers of special education students.”

Appropriateness

Appropriateness as rated by teachers was assessed using two measures, one for the coaching sessions and another for the intervention plans developed during the initial consultation. For the former, the Session Rating Scale (SRS), a measure of therapeutic alliance (Johnson, Miller, & Duncan, 2000), was used. The four items on the SRS are rated on a 10-point Likert-type response scale (α = 1.0; scaled from 1 to 10, with 10 being the most positive). Example items were "We worked on and talked about what I wanted to work on and talk about" and "The consultant's approach is a good fit for me." The six-item Intervention Plan Feedback Scale (α = 1.0), developed by the researchers, focuses on the social importance and compatibility of treatment goals, procedures, and outcomes, and was completed by the teacher following each coaching session. Items were rated on a 4-point Likert scale (1 'Not at all' to 4 'Very') assessing how much the intervention plans were clear, relevant to the goals, realistic to implement, appealing, consistent with values and teaching philosophy, and effective.

Feasibility, Usability and Burden

COMPASS feasibility, usability, and burden were assessed by teachers at the end of the school year using a 22-item Likert-type survey developed by the researchers with guidance from Lewis et al. (2015), with items covering three domains: usability (13 items; α = .91; 1 'Strongly disagree' to 4 'Strongly agree'), feasibility (6 items; α = .84; 1 'Very hard' to 4 'Not at all hard'), and burden (4 items; α = .71; 1 'Very burdensome' to 4 'Not at all burdensome'). Example questions for feasibility included how hard it was to make time for "completing the initial consultation," "implementing the intervention plans," and "the coaching sessions." Example items for usability were "the COMPASS assessment forms used for the initial planning were very useful," "the goal attainment scale forms were very useful," and "the intervention plans were very useful." For burden, example items asked how burdensome in terms of time and effort were the "initial consultation," "coaching sessions," and "data collection activities."

Fidelity

To establish interrater reliability of the fidelity measures, the trainer coder (first author) trained a primary coder (second author) and a graduate student research assistant using consultations conducted in the pre-pilot phase of the study until 80% agreement was achieved. Both the primary coder and the research assistant required two consultation and two coaching sessions to reach agreement with the trainer coder. The primary coder completed all the fidelity assessments. The research assistant independently coded a randomly selected sample of consultation and coaching sessions in Wave 1 and Wave 2 for fidelity agreement. In addition, at each wave, the primary and trainer coders independently coded two consultations and two coaching sessions to ensure maintenance of agreement and no drift. The percent agreement between independent coders is reported for each of the fidelity measures below.
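The 80% criterion used here is simple percent agreement over item-level codes. A minimal sketch follows, assuming two coders' yes/no codes are stored as parallel arrays (illustrative only, not the study's actual scoring procedure):

```python
import numpy as np

def percent_agreement(coder_a, coder_b) -> float:
    """Percentage of item-level codes on which two coders agree."""
    a, b = np.asarray(coder_a), np.asarray(coder_b)
    if a.shape != b.shape:
        raise ValueError("Both coders must rate the same set of items.")
    return 100.0 * float(np.mean(a == b))

# Hypothetical example: 25 yes/no adherence items coded by two raters
codes_a = [1, 1, 0, 1, 1] * 5
codes_b = [1, 1, 0, 1, 0] * 5
print(f"{percent_agreement(codes_a, codes_b):.0f}% agreement")  # prints 80%
```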

Consultation and Coaching Adherence.

CT adherence was assessed with three measures developed by the researchers: one focused on the initial consultation, a second on the intervention plan, and a third on the follow-up coaching sessions (Table 2). Consultant adherence to the COMPASS consultation protocol was assessed with a 25-item, yes-no checklist (Ruble et al., 2012a) completed independently by both CTs (self-assessment) and researchers (KR-20 = .82). Example items included whether (a) goals included those suggested by the home and family, (b) the process included a description of the student at home, in the community, and at school, (c) goals were described in behavioral terms, and (d) at least three priority concerns were identified and prioritized. Intervention plans were assessed using the Intervention Plan Quality Scale (IPQS), a 16-item yes-no checklist completed by both the CT and the researcher (KR-20 = .72) following the consultation. The checklist assessed whether the CT adhered to evidence-based principles of a well-developed intervention plan (e.g., SMART goals, clear and specific plans, utilization of EBPs, prompting, plans for reinforcement; Ruble et al., 2020). Mean sample interrater reliability (percent agreement) for 39% of all Wave 1 and Wave 2 consultations was 93% for consultation adherence and 94% for the IPQS.

Consultant adherence to the COMPASS coaching protocol was assessed following each coaching session with a 14-item, yes-no checklist completed by both CTs (self-assessment) and researchers (KR-20 = .88). Example items included whether the CT (a) reviewed the most current intervention plan and updated it for each goal, (b) observed the child's progress for each goal, (c) asked teachers for their input after observing each skill, and (d) discussed at least one idea from the intervention plan either to maintain or change. Mean sample interrater reliability (percent agreement) for 50% of all coaching sessions in Wave 1 and Wave 2 was 93% for coaching adherence.
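The KR-20 coefficients reported for these yes-no checklists are the dichotomous-item analogue of Cronbach's alpha. A minimal sketch of the standard formula, assuming a sessions-by-items binary matrix (hypothetical data; variance conventions vary slightly across sources):

```python
import numpy as np

def kr20(items: np.ndarray) -> float:
    """Kuder-Richardson 20 for a sessions-by-items matrix of 0/1 scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of checklist items
    p = items.mean(axis=0)                          # proportion scored 'yes' per item
    q = 1.0 - p
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of session totals
    return (k / (k - 1)) * (1 - (p * q).sum() / total_variance)

# Hypothetical example: 20 sessions scored on a 14-item checklist whose
# items mostly track a common underlying pass/fail tendency
rng = np.random.default_rng(1)
base = rng.integers(0, 2, size=(20, 1))            # session-level tendency
flips = (rng.random((20, 14)) < 0.15).astype(int)  # occasional item-level flips
scores = base ^ flips                              # XOR: flip ~15% of items
print(f"KR-20 = {kr20(scores):.2f}")
```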

Consultation and coaching quality of delivery.

Initial consultation process skills were assessed independently by the CTs (self-assessment) and researchers using a 27-item yes-no checklist (KR-20 = .86) developed by the researchers. Process skills are generalized communication skills common across consultation approaches and are not necessarily unique to COMPASS; nevertheless, specific activities were important to complete, such as checking for understanding, including all participants, and moving through the information effectively and efficiently while summarizing it as the profile was discussed. Coaching process skills were assessed using a 12-item Likert-type scale (1 'Attempted' to 3 'Superior') by both CTs and researchers (α = .97). Example items from both measures included whether the CT (a) asked open-ended questions, (b) attended to the time involved, (c) summarized concerns, and (d) checked everyone's understanding. Interrater reliability resulted in a mean sample agreement of 91% for consultation process skills and 90% for coaching process skills.

Procedures

Development of the COMPASS Training Package

The COMPASS training package was developed using an iterative process. First, focus groups were conducted with 31 school-based stakeholders (special education directors and school principals, special education classroom teachers, ASD school consultants, and parents of children with ASD) to obtain information on effective consultation processes and outcomes (Love et al., 2021). The focus group results were used to inform the development of the training package to ensure that stakeholder views were included in the training objectives, training plans, and outcome analysis. For example, to train effective consultants, stakeholders reported a need for consultants who can (a) develop egalitarian, collaborative relationships, as opposed to hierarchical expert relationships; (b) tailor consultation to student, parent, and teacher preferences and needs; (c) empower teachers through joint problem solving and reflective and guided questioning; (d) provide timely, reliable, and responsive communication; and (e) facilitate parent-teacher interactions. They also indicated that new interventions such as COMPASS should be complementary and consistent with current approaches and philosophies. Information on preferred modality of training was also obtained, such as web-based versus in-person training, use of case studies, and preferences for in-class activities. Based on the focus group feedback, a hybrid preliminary training package that could be delivered remotely and in person was developed and included: (a) an online training platform featuring "homework" modules on consultation and coaching in general and for COMPASS specifically that were completed prior to in-person training; (b) two 8-hour in-person training days, focused on consultation and coaching respectively, that followed a tell (didactics on the efficacy of consultation/coaching, elements of consultation, etc.), show (case study analysis using written examples and audio and video recordings of actual COMPASS consultation and coaching sessions), and do (practice sessions for conducting the consultation and coaching sessions) approach; and (c) 1-hour feedback sessions provided online following each of the consultation and coaching sessions.

The training package was then tested initially in a small pre-pilot sample with three consultants who each delivered an abbreviated form of COMPASS consisting of an initial consultation and one coaching session to one teacher-parent dyad. Feedback and results from the pre-pilot were used to refine the training package for the Wave 1 pilot group who implemented the full COMPASS intervention the next school year. Feedback and results from Wave 1 were then used to inform Wave 2, which was implemented the following school year and formed the basis for the final version of the COMPASS training package. Feedback from the pre-pilot group primarily focused on ways to streamline information using brief one-page checklists to help organize the different activities for each part of the consultation and coaching sessions. These checklists were further enhanced following the Wave 1 pilot and resulted in a more comprehensive COMPASS Toolkit with written step-by-step checklists, forms, and scripts added and tested in the final iteration of the training package.

Consultant Feedback Following Training

Following each consultation and coaching session, CTs received data-driven feedback via video conferencing and written performance feedback. The researcher-trainers received the following information after each consultation and coaching session to facilitate feedback: (1) an audio recording of the entire consultation/coaching session, (2) a copy of the student's COMPASS profile discussed during the consultation (initial consultation only), (3) a consultation/coaching summary report with the student's individualized goals and intervention plans and any edits made to them during the coaching session, (4) CT self-report ratings of consultation/coaching adherence, (5) ratings of teacher/parent acceptability and feasibility, (6) CT self-report ratings of adherence to writing intervention plans (consultation only), and (7) teacher assessment of intervention plan appropriateness. Table 2 provides a description of the implementation measures, including when they were obtained and who completed them.

Initially, two researcher-trainers reviewed all the materials and listened to the audio recordings separately. After listening to the audio recording of the consultation or coaching session and reviewing the written materials, the two researcher-trainers completed a feedback form that compiled data reflective of consultation and coaching fidelity (adherence, quality of delivery) and teacher/parent ratings of feasibility and acceptability, and made edits and comments on the goals and intervention plans described in the consultation or coaching summary report as needed. After completing this procedure separately, the trainers met to compare the forms, discuss differences in observations, and reconcile those differences until 80% agreement was reached; it took four consultations to achieve agreement. The lead trainer was also consulted on challenging cases and, after listening to the audio recording of the session, offered feedback prior to the feedback session with the CT. CT feedback sessions generally lasted one hour, during which the researcher-trainer guided the CT in reflection by reviewing the feedback form and highlighting aspects the CT did well along with areas for improvement.

Data Analysis

Descriptive statistics were used to answer the first three research questions regarding the implementation outcomes of acceptability, feasibility, appropriateness, and fidelity; the amount of feedback necessary for acceptable fidelity (>80% adherence); parent and teacher acceptability; and the impact of dosage and coaching component (performance feedback alone vs. performance feedback with consultant-assisted problem solving). Paired samples t-tests were used to compare pre-post training ratings of CT EBP attitudes and EBP knowledge (question 4).
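The pre-post comparisons can be reproduced with a standard paired-samples t-test. A minimal sketch using SciPy follows; the pre/post vectors below are hypothetical stand-ins for the CTs' actual scale scores (the df of 8 reported in Table 5 implies nine paired observations).

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post EBP Knowledge Scale means for nine CTs
pre = np.array([2.1, 2.4, 2.0, 2.6, 2.3, 2.5, 2.2, 2.7, 2.6])
post = np.array([2.5, 2.7, 2.4, 2.8, 2.6, 2.7, 2.6, 2.9, 2.7])

t_stat, p_value = stats.ttest_rel(pre, post)  # paired-samples t-test
print(f"t({len(pre) - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```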

Results

Can we develop a training package for novice ASD consultants that results in acceptable levels of CT acceptability, feasibility, appropriateness, and fidelity (consultation and coaching adherence, and quality of delivery)?

CTs' general satisfaction with the COMPASS training package was high for both days of training (M = 4.71 and M = 4.63 out of 5). CTs reported that the training was acceptable (M = 3.46), usable (M = 3.57), mostly feasible (M = 2.95), and of relatively low burden (M = 3.25). Researcher ratings of CT consultation adherence were 76% after the first consultation and increased to 100% for those who completed four consultations. Table 3 provides information on the number of consultants who completed 1, 2, 3, or 4 consultations and the increase in adherence over time with practice. As noted, adherence scores steadily improved with each consultation and feedback session. Interestingly, however, in contrast to the researchers' ratings, CTs' self-reported adherence remained relatively elevated (90–94%) and constant over the four consultation sessions. This mismatch between researcher and CT self-reported adherence was particularly pronounced in the first consultation, where CT self-reported adherence was 13% higher than the researcher rating (Table 3). Nevertheless, the overall mean adherence across all conditions was similar: 89.5% by CT self-report and 88.25% by researcher report. Direct comparisons between CT and researcher ratings of initial consultation quality of delivery (process skills) indicated some discordance, with CTs' ratings averaging 8% higher (81%) than researcher ratings (73%). Analysis of researcher-based process skill fidelity ratings showed steady improvement from 73% for the first consultation to 94% for CTs who implemented at least four consultations. In contrast, and similar to the findings for consultation adherence, CT self-reported process skills remained roughly stable over time (80–83%).

Table 3.

Consultation Adherence, Process Skills, Intervention Plan Quality

Consultation Order          Consultation Adherence   Consultation Process Skills   Intervention Plan Adherence
                            CT       Researcher      CT       Researcher           CT       Researcher

Consultation 1 (n=12 CTs)   90%      76%             81%      73%                  80%      55%
Consultation 2 (n=9 CTs)    87%      83%             80%      83%                  74%      55%
Consultation 3 (n=6 CTs)    87%      94%             88%      88%                  77%      68%
Consultation 4 (n=4 CTs)    94%      100%            83%      92%                  81%      80%

Overall Mean                90%      88%             83%      84%                  78%      65%

Note: CT = ratings from consultant trainees; Researcher = ratings from researchers

Analysis of adherence to evidence-based principles in the intervention plans generated during the initial consultation revealed that CTs initially rated the quality of their intervention plans higher (80%) than did the researchers (52%), a finding similar to their self-assessment of adherence in the first consultation session. Although the CTs' self-reported ratings stayed relatively consistent even as they gained experience across consultations (74–78%), the researcher ratings steadily improved from an average of 52% to 80% as the CTs gained experience in writing plans over the four consultations. However, unlike consultation and coaching adherence (reported below), CTs required four opportunities to achieve acceptable (i.e., > 80%) adherence to writing intervention plans.

For coaching adherence, results are reported based on number of coaching opportunities (range from 1 to 6) and order or sequence of coaching (Table 4). Overall, like consultation adherence, coaching adherence increased with repeated practice from 79% to 100% according to researcher ratings with steady improvement over time. When evaluated by order, coaching adherence increased over time, with a notable increase from the first to the second coaching session. Interestingly, assessment of coaching process skills revealed consistently higher scores from researcher ratings compared to CT self-report ratings.

Table 4.

Mean Coaching Adherence and Process Skills at Last Coaching by Number of Coaching Sessions Completed by CT

Number of Coaching Sessions Completed   Coaching Adherence   Process Skills (1-3)

1 Coaching (n=4 CTs)                    76%                  2.25
2 Coaching (n=2 CTs)                    88%                  2.04
3 Coaching (n=2 CTs)                    82%                  2.63
6 Coaching (n=2 CTs)                    97%                  2.84

Note: Rated by researchers; process skills scored from 1 ('Attempted') to 3 ('Superior').

How much feedback was necessary to achieve acceptable adherence?

Analysis of the dosage of feedback needed to obtain adherence greater than 80% for the initial consultations and coaching sessions revealed that one consultation and one coaching session with feedback were sufficient for CTs to reach 80% researcher-rated adherence, as demonstrated by the second consultation or coaching session. That is, except for the first session, researcher-rated coaching adherence was 80% or greater for all coaching sessions. Thus, CTs achieved criterion levels of coaching adherence (greater than 80%) nearly from the very start.

Would parents and teachers report COMPASS when delivered by school consultants as acceptable, appropriate, and feasible?

Teacher (M = 4.74, SD = .40) and parent (M = 4.73, SD = .36) acceptability scores for the initial consultation were high (maximum score of 5). Teacher acceptability of coaching was similarly strong (M = 3.96, SD = .09). Appropriateness, as measured by teacher ratings of therapeutic alliance, was also high (M = 9.9, SD = 2.8, out of 10). Teacher feedback on the intervention plans generated following the initial consultation received the highest possible score (M = 4.0, SD = .00, out of 4). For consultation and coaching, teacher reports of feasibility, usability, and burden indicated above-average scores for the initial consultation (range of 3.2 to 3.7 out of 4.0) and for coaching (range of 3.4 to 3.5 out of 4; Figure 2).

Figure 2.

Teacher Report of Mean COMPASS Acceptability, Usability, Feasibility, and Burden by Follow-up Received After the Initial Consultation

Secondary question:

What was the impact of training on pre-post ratings of CT EBP attitudes and EBP knowledge? At baseline, CTs rated themselves as somewhat to very knowledgeable about 24 of the 27 identified EBPs for students with autism (M = 2.38 out of a maximum of 3) and held favorable attitudes toward the use and promotion of evidence-based practices (M = 4.01 out of a maximum of 5). Pre-post paired t-tests (Table 5) indicated a significant increase in EBP knowledge, but not in EBP attitudes. However, whereas EBP knowledge left room for growth at pre-test (M = 2.38 out of 3), EBP attitudes were already relatively elevated at pre-test (M = 4.01 out of 5), which may explain the difference in findings.

Table 5.

Comparison of Pre-Post Means of CT Variables

CT Variable      Pre-Test M (SD)   Post-Test M (SD)   t (df)      p

EBP Knowledge    2.38 (.39)        2.66 (.23)         −3.21 (8)   .012
EBP Attitudes    4.01 (.41)        4.19 (.16)         −.134 (8)   .897

Discussion

Despite the number of accessible online training materials on EBPs for students with ASD available today (Sam et al., 2020; OCALI, 2021), teachers continue to report being underprepared and lacking confidence in how best to instruct students with ASD (Brock et al., 2020; Brock et al., 2014; Knight et al., 2019). This finding is not surprising, as the gap between science and practice extends across disciplines and is often attributed to issues of training (Lyon et al., 2011; McHugh & Barlow, 2010; McLeod et al., 2018).

Consultation is an ideal method for embedding EBPs into teachers' repertoires and student instruction. Although we have recognized for decades that the "consult and hope" approach to improving the use of research-supported interventions in our classrooms is ineffective (Erchul et al., 2010), empirical evidence on effective training of consultants lags behind training on evidence-based practices and is considered one of the reasons for the research-practice gap (Lyon et al., 2011). Our evaluation of the COMPASS training package both revealed new information and confirmed prior knowledge: (a) overall, the training package was successful, as indicated by acceptable implementation outcomes; (b) at least one feedback session was necessary to achieve adequate fidelity for the initial consultation and coaching session; (c) repeated opportunities for practice resulted in higher quality consulting, as measured not only by higher fidelity but also by better intervention plans, which required at least four opportunities for feedback; and (d) CT self-report of fidelity may not be an appropriate substitute for initial researcher ratings of fidelity. We review each area below.

First, the training package was effective for training COMPASS-naïve consultants to implement COMPASS with proficiency (>80%; Wilczynski & Christian, 2008), as assessed using a multidimensional approach to implementation outcomes. Ratings of adherence and quality of delivery for CT-implemented consultation were notably similar to those for researcher-implemented consultation in prior research (Ruble et al., 2013). Teacher and parent acceptability of the COMPASS intervention was positive. Not only were parents and teachers satisfied, but teachers also perceived the intervention as usable, feasible, acceptable, and relatively nonburdensome.

Because we provided opportunities for CTs to implement COMPASS with performance feedback, we learned that at least one feedback session was required to obtain >80% adherence for both the initial consultation and coaching sessions. Thus, not surprisingly, effective consultation, like effective professional development in general, requires follow-up coaching (Brock & Carter, 2017; Kraft et al., 2018) with attention to critical intervention elements (e.g., performance feedback). Nevertheless, the notable lack of studies of the effectiveness of implementation outcomes of training in consultation and coaching creates a challenging problem for ensuring a teaching workforce that can provide EBPs with positive student outcomes (McLeod et al., 2018).

Moreover, our results showed that researcher-trainer ratings of adherence and quality of delivery were important and cannot be reliably replaced with self-report ratings, at least for the first feedback session. However, after reviewing performance feedback, CTs' self-reports of adherence became more concordant with researcher ratings. Dickson and Suhrheinrich (2021) evaluated the concordance between research coders (researchers), supervisors (trainers), and providers (teachers) and found similarities between supervisors and coders, a finding consistent with our results. They also found that research coders' scores more frequently exceeded provider self-report ratings, a finding also generally consistent with our results. While the CTs in our study overestimated their initial levels of consultation and coaching adherence and the use of evidence-based principles in their written intervention plans, they then overadjusted and tended to underestimate adherence for the remaining sessions after receiving one feedback session. Feedback, in which these discrepancies were discussed and reflected upon, was an important aspect of our training package. This finding has important implications for future research and training: reliance on self-report is not sufficient to train novice consultants in a complex consultation intervention, and opportunities for growth through at least one feedback session were important. Moreover, researcher-rated adherence increased over time, indicating true improvement rather than mere unreliability of self-report.

Unlike adherence to the consultation and coaching sessions, which required only one feedback session to achieve acceptable levels of proficiency, consultants needed at least four sessions to achieve adherence in writing intervention plans using evidence-based principles (Ruble et al., 2020; Wilczynski, 2015). Intervention plans were rated on common elements of effective teaching sequences (Ruble et al., 2020), a measure of adherence to evidence-based principles of a well-developed intervention plan (e.g., SMART goals, clear and specific plans, utilization of EBPs, prompting, plans for reinforcement). We were surprised that four sessions were required to achieve adequate adherence. However, this finding sheds light on the difficulty consultants and teachers have in generating individualized intervention plans using an EBPP approach, despite teachers' reports that student characteristics and professional judgment are used in decision-making for intervention plans (Knight et al., 2019). These findings may also reflect challenges arising from the need for multiple EBPs that require adaptation to the student and their context; analysis of COMPASS intervention plans revealed that, on average, five EBPs were included in the teaching plans (Ruble et al., 2022). Future training should include more opportunities for writing intervention plans customized to the needs, preferences, and strengths of students, with adherence feedback.

In addition, the COMPASS training package produced further benefits for consultants. Specifically, the training resulted in increased knowledge of ASD EBPs but had no impact on attitudes toward EBPs. The lack of findings for attitudes is likely due to scores being near ceiling at baseline. Nevertheless, it was surprising that knowledge of ASD EBPs increased, given that our CTs were the designated ASD trainers/consultant specialists for their schools. We believe this result may be due to the use of an EBPP framework, in which teacher, student, and EBP factors are considered together for decision making, in contrast to a sole focus on the EBP alone.

Limitations and Future Directions

There were several limitations to the study. Because of the small sample of consultants who participated, we view these results as preliminary, yet promising, and in need of further verification with a larger, more heterogeneous sample to fully understand cultural considerations and implementation outcomes. We purposely sampled consultants who were the designated ASD trainers/consultants for their school systems; thus, we cannot say whether the training would result in similar outcomes for less knowledgeable and skilled consultants. We believe knowledge of ASD EBPs and consultation skills are important and may help explain our results. Another limitation is that many of the implementation outcome measures were developed by the researchers and are specific to COMPASS. At the time of study initiation, we wanted to compare results from this study with our prior RCTs of COMPASS using similar measures so that we could directly compare implementation outcomes. When the original COMPASS RCTs were conducted more than 15 years ago, few implementation fidelity measures were available compared to today (e.g., Weiner et al., 2017).

In sum, training for COMPASS consultation is promising. We outlined a process for developing a training package and testing its effectiveness that can be applied to other consultation frameworks. While COMPASS is focused on ASD, the components of effective consultation within the model can be generalized to other student populations. In their metasynthesis of teacher professional development research, Dunst et al. (2015) identified the common elements of effective professional development that resulted in changes in teacher and student outcomes. As mentioned, core effective ingredients include activities that increase knowledge (introduction, demonstration, and explanation of benefits); authentic learning experiences, self-reflection, and coaching/mentoring with feedback; and follow-up of sufficient frequency and intensity to produce notable teacher and student outcomes. Future consultation research that considers these active mechanisms will inform which aspects are most critical for effective consultation implementation outcomes.

Highlights:

School-based autism consultants can be trained to provide high quality COMPASS consultation.

Consultants needed at least one feedback session to achieve proficiency in delivery of the initial consultation and coaching session.

Writing intervention plans was most difficult for consultants; four feedback sessions were necessary to achieve proficiency.

Acknowledgements:

We wish to thank school administrators who allowed their consultants and teachers the extra time and effort to help us learn about COMPASS and how to support the adults who teach the children. We are also grateful to parents who generously gave their time to help us learn. We would also like to thank Abigail Love for helping with the initial training package.

Grant information

This work was supported by Grant R34MH111783 from the National Institute of Mental Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Mental Health or the National Institutes of Health.

Footnotes

The authors have no competing interests related to this publication.

Contributor Information

Lisa Ruble, Department of Special Education, Ball State University.

Lindsey Ogle, Department of Special Education, Ball State University.

John McGrew, Department of Psychology, Indiana University-Purdue University.

References

  1. Aarons GA (2004). Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research, 6(2). 10.1023/b:Mhsr.0000024351.12294.65 20 U.S.C., § 1401 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Azad GF, Minton KE, Mandell DS, & Landa RJ (2021). Partners in school: An implementation strategy to promote alignment of evidence-based practices across home and school for children with autism spectrum disorder. Administration and Policy in Mental Health and Mental Health Services Research, 48(2), 266–278. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Barnard-Brak L, Brewer A, Chesnut S, Richman D, & Schaeffer AM (2016). The sensitivity and specificity of the social communication questionnaire for autism spectrum with respect to age. Autism Research, 9(8), 838–845. [DOI] [PubMed] [Google Scholar]
  4. Beidas RS, Edmunds JM, Marcus SC, & Kendall PC (2012). Training and consultation to promote implementation of an empirically supported treatment: A randomized trial. Psychiatric Services (Washington, D.C.), 63(7), 660–665. 10.1176/appi.ps.201100401 [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Bordin ES (1994). Theory and research on the therapeutic working alliance: New directions. In Horvath AO & Greenberg LS (Eds.), The working alliance: Theory, research, and practice (pp. 13–37). John Wiley & Sons. [Google Scholar]
  6. Brock ME, & Carter EW (2017). A meta-analysis of educator training to improve implementation of interventions for students with disabilities. Remedial and Special Education, 38(3), 131–144. [Google Scholar]
  7. Brock ME, Dynia JM, Dueker SA, & Barczak MA (2020). Teacher-reported priorities and practices for students with autism: characterizing the research-to-practice gap. Focus on Autism and Other Developmental Disabilities, 35(2), 67–78. [Google Scholar]
  8. Brock ME, Huber HB, Carter EW, Juarez AP, & Warren ZE (2014). Statewide assessment of professional development needs related to educating students with autism spectrum disorder. Focus on Autism and Other Developmental Disabilities, 29(2), 67–79. [Google Scholar]
  9. Carroll C, Patterson M, Wood S, Booth A, Rick J, & Balain S (2007). A conceptual framework for implementation fidelity. Implementation science, 2(1), 1–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Curran GM, Bauer M, Mittman B, Pyne JM, & Stetler C (2012). Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Medical Care, 50(3), 217–226. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Dale B, Rispoli K, & Ruble L (2022). Social emotional learning in young children with autism spectrum disorder. Perspectives on Early Childhood Psychology and Education. Manuscript in press.
  12. Dane AV, & Schneider BH (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18(1), 23–45.
  13. Dickson KS, & Suhrheinrich J (2021). Concordance between community supervisor and provider ratings of fidelity: Examination of multi-level predictors and outcomes. Journal of Child and Family Studies, 30(2), 542–555.
  14. DiGennaro FD, Martens BK, & Kleinmann AE (2007). A comparison of performance feedback procedures on teachers’ treatment implementation integrity and students’ inappropriate behavior in special education classrooms. Journal of Applied Behavior Analysis, 40(3), 447–461. 10.1901/jaba.2007.40-447
  15. Dunst CJ, & Trivette CM (2009). Using research evidence to inform and evaluate early childhood intervention practices. Topics in Early Childhood Special Education, 29(1), 40–52. 10.1177/0271121408329227
  16. Dunst CJ, & Trivette CM (2012). Meta-analysis of implementation practice research. In Kelly B & Perkins D (Eds.), Handbook of implementation science for psychology in education (pp. 68–91). Cambridge University Press.
  17. Dunst CJ, Trivette CM, & Raab M (2013). An implementation science framework for conceptualizing and operationalizing fidelity in early childhood intervention studies. Journal of Early Intervention, 35(2), 85–101. 10.1177/1053815113502235
  18. Dusenbury L, Brannigan R, Falco M, & Hansen WB (2003). A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research, 18(2), 237–256. 10.1093/her/18.2.237
  19. Dynia JM, Walton KM, Brock ME, & Tiede G (2020). Early childhood special education teachers’ use of evidence-based practices with children with autism spectrum disorder. Research in Autism Spectrum Disorders, 77, 101606.
  20. Erchul WP, DuPaul GJ, Grissom PF, Junod REV, Jitendra AK, Mannella MC, Tresco KE, Flammer Rivera LM, & Volpe RJ (2007). Relationships among relational communication processes and consultation outcomes for students with attention deficit hyperactivity disorder. School Psychology Review, 36(1), 111–129.
  21. Erchul WP, & Martens BK (2010). School consultation: Conceptual and empirical bases of practice (3rd ed.). Springer. 10.1007/978-1-4419-5747-4
  22. Fan CH, Juang YT, Yang NJ, & Zhang Y (2021). An examination of the effectiveness of a school-based behavioral consultation workshop. Consulting Psychology Journal: Practice and Research, 73(1), 88.
  23. Joyce BR, & Showers B (1988). Student achievement through staff development. Longman.
  24. Joyce BR, & Showers B (2002). Student achievement through staff development (3rd ed.). Association for Supervision and Curriculum Development.
  25. Knight VF, Huber HB, Kuntz EM, Carter EW, & Juarez AP (2019). Instructional practices, priorities, and preparedness for educating students with autism and intellectual disability. Focus on Autism and Other Developmental Disabilities, 34(1), 3–14. 10.1177/1088357618755694
  26. Knoche LL, Sheridan SM, Edwards CP, & Osborn AQ (2010). Implementation of a relationship-based school readiness intervention: A multidimensional approach to fidelity measurement for early childhood. Early Childhood Research Quarterly, 25(3). 10.1016/j.ecresq.2009.05.003
  27. Kratochwill T, Altschaefl M, & Bice-Urbach B (2008). Best practices in school-based problem-solving consultation: Applications in prevention and intervention systems. In Best practices in school psychology V (pp. 1673–1688). National Association of School Psychologists.
  28. Lewis CC, Fischer S, Weiner BJ, Stanick C, Kim M, & Martinez RG (2015). Outcomes for implementation science: An enhanced systematic review of instruments using evidence-based rating criteria. Implementation Science, 10(1), 1–17.
  29. Maenner MJ, Shaw KA, Bakian AV, Bilder DA, Durkin MS, Esler A, … & Cogswell ME (2021). Prevalence and characteristics of autism spectrum disorder among children aged 8 years—Autism and Developmental Disabilities Monitoring Network, 11 sites, United States, 2018. MMWR Surveillance Summaries, 70(11), 1–16.
  30. McGrew J, Ruble L, & Smith I (2016). Autism spectrum disorder and evidence-based practice in psychology. Clinical Psychology: Science and Practice, 23(3), 239–255.
  31. McLeod BD, Cox JR, Jensen-Doss A, Herschell A, Ehrenreich-May J, & Wood JJ (2018). Proposing a mechanistic model of clinician training and consultation. Clinical Psychology: Science and Practice, 25(3), e12260.
  32. McNeill J (2019). Social validity and teachers’ use of evidence-based practices for autism. Journal of Autism and Developmental Disorders, 49(11), 4585–4594.
  33. Moody EJ, Reyes N, Ledbetter C, Wiggins L, DiGuiseppi C, Alexander A, … & Rosenberg SA (2017). Screening for autism with the SRS and SCQ: Variations across demographic, developmental and behavioral factors in preschool children. Journal of Autism and Developmental Disorders, 47(11), 3550–3561.
  34. National Center for Education Statistics. (2021, February 4). Table 204.50. Children 3 to 21 years old served under Individuals with Disabilities Education Act (IDEA), Part B, by age group and sex, race/ethnicity, and type of disability: 2019–20. https://nces.ed.gov/programs/digest/d20/tables/dt20_204.50.asp
  35. National Research Council. (2001). Educating children with autism. National Academy Press. http://www.nap.edu/books/0309072697/html/
  36. Noell GH, Witt JC, Slider NJ, Connell JE, Gatti SL, Williams KL, Koenig JL, Resetar JL, & Duhon GJ (2005). Treatment implementation following behavioral consultation in schools: A comparison of three follow-up strategies. School Psychology Review, 34(1), 87–106.
  37. OCALI. (2021, August 1). Autism Internet Modules. https://autisminternetmodules.org/
  38. O’Donnell CL (2008). Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K-12 curriculum intervention research. Review of Educational Research, 78(1), 33–84. 10.3102/0034654307313793
  39. Odom S, Cox A, & Brock M (2013). Implementation science, professional development, and autism spectrum disorders. Exceptional Children, 79(2), 233–251.
  40. Odom SL, Hall LJ, & Suhrheinrich J (2020). Implementation science, behavior analysis, and supporting evidence-based practices for individuals with autism. European Journal of Behavior Analysis, 21(1), 55–73. 10.1080/15021149.2019.1641952
  41. Parsons LD, Miller H, & Deris AR (2016). The effects of special education training on educator efficacy in classroom management and inclusive strategy use for students with autism in inclusion classes. Journal of the American Academy of Special Education Professionals, 7–16.
  42. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, … & Hensley M (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38(2), 65–76.
  43. Ruble L, Dalrymple N, & McGrew J (2010). The effects of consultation on Individualized Education Program outcomes for young children with autism: The Collaborative Model for Promoting Competence and Success. Journal of Early Intervention, 32(4), 286–301.
  44. Ruble L, Dalrymple N, & McGrew J (2012a). The Collaborative Model for Promoting Competence and Success for Students with ASD. Springer.
  45. Ruble L, McGrew J, Johnson L, & Pinkman K (2022). Matching autism interventions to goals with planned adaptations using COMPASS. Remedial and Special Education. Manuscript in press.
  46. Ruble L, McGrew J, & Toland M (2012b). Goal attainment scaling as outcome measurement for randomized controlled trials. Journal of Autism and Developmental Disorders, 42(9), 1974–1983.
  47. Ruble LA, Love AM, Wong VW, Grisham-Brown JL, & McGrew JH (2020). Implementation fidelity and common elements of high quality teaching sequences for students with autism spectrum disorder in COMPASS. Research in Autism Spectrum Disorders, 71, 101493.
  48. Ruble LA, McGrew JH, Toland M, Dalrymple N, Adams M, & Snell-Rood C (2018). Randomized control trial of COMPASS for improving transition outcomes of students with autism spectrum disorder. Journal of Autism and Developmental Disorders, 48(10), 3586–3595.
  49. Ruble LA, McGrew JH, Toland MD, Dalrymple NJ, & Jung LA (2013). A randomized controlled trial of COMPASS web-based and face-to-face teacher coaching in autism. Journal of Consulting and Clinical Psychology, 81(3), 566–572. 10.1037/a0032003
  50. Rutter M, Bailey A, & Lord C (2003). The Social Communication Questionnaire: Manual. Western Psychological Services.
  51. Sam AM, Cox AW, Savage MN, Waters V, & Odom SL (2020). Disseminating information on evidence-based practices for children and youth with autism spectrum disorder: AFIRM. Journal of Autism and Developmental Disorders, 50(6), 1931–1940.
  52. Schwartz IS, & Baer DM (1991). Social validity assessments: Is current practice state of the art? Journal of Applied Behavior Analysis, 24(2), 189–204.
  53. Schwartz IS, & Sandall SR (2010). Is autism the disability that breaks Part C? A commentary on “Infants and Toddlers with Autism Spectrum Disorder: Early Identification and Early Intervention,” by Boyd, Odom, Humphreys, and Sam. Journal of Early Intervention, 32(2), 105–109.
  54. Sheridan SM, Kratochwill TR, & Bergan JR (1996). Conjoint behavioral consultation: A procedural manual. Plenum Press.
  55. Sparrow SS, Balla DA, & Cicchetti DV (2005). Vineland Adaptive Behavior Scales: Survey forms manual. AGS Publishing.
  56. Stahmer AC, Collings NM, & Palinkas LA (2005). Early intervention practices for children with autism: Descriptions from community providers. Focus on Autism and Other Developmental Disabilities, 20(2), 66–79. 10.1177/10883576050200020301
  57. Stahmer AC, Dababnah S, & Rieth SR (2019). Considerations in implementing evidence-based early autism spectrum disorder interventions in community settings. Pediatric Medicine, 2. 10.21037/pm.2019.05.01
  58. Stahmer AC, Suhrheinrich J, Schetter PL, & McGhee Hassrick E (2018). Exploring multi-level system factors facilitating educator training and implementation of evidence-based practices (EBP): A study protocol. Implementation Science, 13(1), 3. 10.1186/s13012-017-0698-1
  59. Suhrheinrich J, Stahmer AC, Reed S, Schreibman L, Reisinger E, & Mandell D (2013). Implementation challenges in translating pivotal response training into community settings. Journal of Autism and Developmental Disorders, 43(12), 2970–2976. 10.1007/s10803-013-1826-7
  60. Sulek R, Trembath D, Paynter J, & Keen D (2019). Factors influencing the selection and use of strategies to support students with autism in the classroom. International Journal of Disability, Development and Education, 1–17.
  61. Trivette CM, Dunst CJ, Hamby DW, & O’Herin CE (2009). Characteristics and consequences of adult learning methods and strategies (Vol. 2). Winterberry Press.
  62. U.S. Department of Education. (2020, April 9). OSEP Fast Facts: Children Identified with Autism. https://sites.ed.gov/idea/osep-fast-facts-children-with-autism-20/
  63. Wilczynski S, & Christian L (2008). The national standards project: Promoting evidence-based practice in autism spectrum disorders. In Luiselli J, Russo D, Christian W, & Wilczynski S (Eds.), Effective practices for children with autism: Educational and behavior support interventions that work (pp. 37–60). Oxford University Press.
  64. Wong V, Ruble LA, McGrew JH, & Yu Y (2018). An empirical study of multidimensional fidelity of COMPASS consultation. School Psychology Quarterly, 33(2), 251–263.
