Abstract
Evidence-based practice (EBP) reviews abound in early childhood autism intervention research. These reviews seek to describe and evaluate the evidence supporting the use of specific educational and clinical practices, but give little attention to evaluating intervention outcomes in terms of the extent to which they reflect change that extends beyond the exact targets and contexts of intervention. We urge consideration of these outcome characteristics, which we refer to as ‘proximity’ and ‘boundedness’, as key criteria in evaluating and describing the scope of change effected by EBPs, and provide an overview and illustration of these concepts as they relate to early childhood autism intervention research. We hope this guidance will assist future researchers in selecting and evaluating intervention outcomes, as well as in making important summative determinations of the evidence base for this population.
Keywords: Children, Early Intervention, Evidence-Based Practice, Outcome Assessment, Autism
Lay Summary
Recent reviews have come to somewhat different conclusions regarding the evidence base for interventions geared towards autistic children, perhaps because such reviews vary in the degree to which they consider the types of outcome measures used in past studies testing the effects of treatments. Here, we provide guidance regarding characteristics of outcome measures that research suggests are particularly important to consider when evaluating the extent to which an intervention constitutes “evidence-based practice.”
Evidence-Based Practice Reviews of Autism Interventions
Recently, researchers seeking to classify treatments for autistic1 children as evidence-based practices (EBPs) have conducted broad systematic reviews of early childhood interventions (French & Kennedy, 2018; Reichow et al., 2012, 2018; Sandbank et al., 2020; Weitlauf et al., 2014), parent-mediated interventions (Nevill et al., 2018; Oono et al., 2013), and educational interventions (Bond et al., 2016), as well as broader reviews of interventions applied across a wide range of settings, age groups, and outcomes (National Autism Center, 2015; Steinbrenner et al., 2020; Wong et al., 2015). Although these reviews differ (in some cases substantially) in their inclusion criteria, methodology, quality standards, and conclusions, they are similar in that their intent is to guide practice selection for clinicians and educators working with this population. Among the methodological differences across the EBP reviews to date is their varied attention to the evaluation of intervention study outcomes: there is often a lack of critical appraisal of the specific measures used to judge intervention effectiveness for young children on the spectrum (Bottema-Beutel & Crowley, 2020).
A Relative Lack of Outcome Evaluation
A key problem with the field’s reliance on these broad reviews as determiners of EBPs is that most of them (with the exception of our own recent review; Sandbank et al., 2020) fail to provide any serious scrutiny or description of intervention outcomes in terms of the extent to which they reflect change that extends beyond the specific targets or contexts of intervention. Although we are not the first scholars to prioritize these outcome attributes, we refer to them, respectively, as boundedness, which characterizes the extent to which an outcome reflects change that is bound to, versus generalized beyond, the context of intervention, and proximity, which characterizes the extent to which an outcome reflects learning or development in areas that are distal to, or extend beyond, the exact targets of intervention (Sandbank et al., 2020; Yoder et al., 2013). These outcome characteristics are integral to evaluations of the meaningfulness of intervention effects. That is, while proximal, context-bound change may reflect important, specific, incremental learning, change that extends beyond the targets and contexts of intervention is arguably more likely to reflect development: highly meaningful change that is more likely to be maintained and to generate cascading improvements in developmental trajectories. Unfortunately, primary studies do not usually specify where on these two continua their outcome measures lie. Indeed, many researchers discuss all intervention effects as if they were distal and generalized, even when outcomes were not measured in a way that would permit this interpretation. In response, some recent, focused meta-analyses and reviews have foregrounded evaluations of these outcome characteristics (Carruthers et al., 2020; Fuller & Kaiser, 2020; Fuller et al., 2020), and federal granting agencies have begun to embed these considerations in evaluations of grant applications (US Department of Education, Institute of Education Sciences, What Works Clearinghouse, 2016).
However, despite the increasing attention given to these outcome characteristics in some reviews, they are rarely evaluated or described in the identification of EBPs. The result of this oversight is that such reviews give equivalent designations to interventions that have been shown to effect highly proximal, context-bound change and to interventions that have been shown to effect generalized, distal change. As an example, according to EBP standards from the National Clearinghouse on Autism Evidence and Practice (NCAEP; Steinbrenner et al., 2020), an intervention that has been shown to facilitate acquisition of one to three targeted communicative responses within specific contexts highly similar to treatment could receive the same designation of ‘evidence-based intervention for supporting communication’ as an intervention that has been shown in multiple randomized controlled trials (RCTs) to facilitate clinically and statistically significant gains on global assessments of language and communication. This example is not hypothetical, as the 2020 NCAEP report designates both functional communication training and naturalistic intervention as evidence-based practices for supporting communication in autistic children. The former intervention was designed to reduce challenging behavior by teaching one or more specific communicative responses that serve the same behavioral function, while the latter might be better described as a central aspect of many comprehensive interventions designed to support broader linguistic and communicative development.
Although each of these practices certainly has an evidence base that supports its use for improving communication-related outcomes, what is lost in their indiscriminate designation as EBPs is the scope of change supported by the available evidence. Importantly, for any given outcome, the scope of change that it represents is often inversely associated with the magnitude of the intervention effect: intervention effects are likely to be larger for outcomes that are highly proximal and context-bound than for outcomes that are generalized and distal (Sandbank et al., 2020). Thus, studies of interventions that exclusively effect specific, incremental change are likely to be overrepresented in the literature that informs EBP reviews. When systematic reviews fail to describe outcomes in terms of their boundedness and proximity, clinicians relying on such reports to identify useful practices may mistakenly give equal weight to interventions that are widely disparate in terms of their demonstrated effects on children.
Overview of Findings Underscoring the Importance of Boundedness and Proximity
While prior systematic reviews have called attention to the varying quality standards employed in evaluations of the autism-focused intervention literature (Gates et al., 2017; French & Kennedy, 2018), few reviews to date have thoroughly considered, in their interpretation of the weight of evidence, the degree to which characteristics of outcome measures influence findings in favor of treatment effects. That is, few reviews provide a clear summary of the extent to which interventions have been shown to support generalized and distal change in children on the autism spectrum. This may be due to the widely varied terms and definitions that have been used to describe these constructs, and to the few available resources offering guidance for their evaluation. Thus, the remainder of this paper provides an overview and illustration of the concepts of boundedness and proximity that can guide outcome measure selection in future primary studies evaluating the efficacy of candidate interventions, as well as outcome evaluation in future EBP reviews and meta-analyses.
Overview of Boundedness and Proximity
Learning and development that extends beyond the specific targets and context of intervention has long interested scholars in education, psychology, communication sciences and disorders, and related fields. In cognitive psychology, learning is often described in terms of its transfer from controlled experimental conditions to more applied contexts (i.e., near versus far transfer). Similarly, developmental psychologists conceptualize learning in terms of its proximity to the teaching focus of intervention (i.e., proximal versus distal). Behaviorist scholars have described these concepts as setting/stimulus and response generalization, while education scholars tend to describe any learning that extends beyond the direct teaching focus and setting/materials of intervention simply as ‘generalization.’ We believe learning that extends beyond the intervention context is related to, but distinct from, learning that extends beyond the intervention targets and, therefore, conceptualize these two separate continua of extension as boundedness and proximity, respectively.
Boundedness
Boundedness refers to the extent to which learning facilitated by an intervention is potentially bound to the context or conditions of teaching, as opposed to being highly generalized across many contextually important dimensions. The degree of boundedness may be judged by the number of dimensions of the assessment context that are meaningfully different from those of the intervention context. Prior conceptualizations of relevant dimensions of difference have traditionally included the intervention setting (e.g., a classroom, the home environment, the community), the interventionist or interaction partner (e.g., a teacher, an unfamiliar clinician/researcher, a caregiver), and the materials (e.g., toys, curricula; Stokes & Osnes, 1980). We include these dimensions in our own definition and add a fourth, interaction style, which we define as the nature of the interaction in which the outcome was measured (e.g., child-led free play vs. a structured adult-led assessment). Thus, the boundedness of an outcome can be thought of as existing on a continuum, where assessment contexts that closely mirror those of intervention reflect learning that is potentially ‘context-bound,’ and contexts with greater degrees of difference increasingly reflect learning that is ‘generalized.’
Illustration.
To illustrate this concept, let us consider a hypothetical study testing the efficacy of focused stimulation for increasing the use of targeted expressive language forms in young children. Focused stimulation involves repeated conversational modeling of a small set of specific linguistic targets (e.g., grammatical morphemes, new expressive vocabulary) within a brief naturalistic play interaction (Fey, 1986). The hypothetical study setting of this intervention is a child-friendly clinic. The relevant hypothetical materials include toys that allow natural repeated modeling of linguistic targets (e.g., handheld toys that easily lend themselves to functional and symbolic play). The hypothetical interventionist is a trained clinician (e.g., a speech-language pathologist). Finally, the hypothetical intervention interaction style is a one-on-one play-based interaction in which the adult and child share control of the interaction’s focus and pace.
If we wished to choose an outcome measure that demonstrated the efficacy of focused stimulation within contexts similar to the intervention, an observational measure of child use of targeted language forms derived from a naturalistic play-based interaction with a trained clinician equipped with similar toys in an unfamiliar but child-friendly clinical setting would suffice. Even if this measure was derived from interactions during which the clinician was not modeling target forms, the similarity of the intervention and assessment contexts across all relevant contextual dimensions increases the likelihood that any change detected would be context-bound. We would also consider an outcome context-bound if the measurement context only superficially differed from that of intervention (e.g., if the interaction partner was a trained clinician who was not the interventionist, the setting was a different room in the same clinic, the materials were a different but nearly identical set of toys with the same functional and symbolic play affordances). To capture whether the intervention facilitated learning that extended at least slightly beyond the immediate context of intervention, we might derive the same measure from an observation of a naturalistic play-based interaction with a trained clinician who was not the interventionist2 in the home setting with similar toys.
In the aforementioned case, the interaction style, type of interaction partner, and materials mirror the context of the intervention, but the setting has meaningfully changed from one that is relatively similar to the treatment context to one that is more clearly dissimilar. The degree of separation between the intervention and assessment contexts is still small, so any detected change is likely still bound to contexts that are highly similar to that of the intervention. If we wished to capture increasingly generalized intervention effects, we might measure child use of targeted language forms during a play-based interaction with an untrained caregiver in the home. In this case, the setting, interaction partner, and arguably interaction style would differ from the context of intervention. Thus, change captured by this measure would more likely reflect generalized learning. Finally, highly generalized change might be detected in an observational measure derived from interactions with peers during free-play in a preschool classroom, where the setting, interaction partner, interaction style, and materials all meaningfully differ from their intervention counterparts.
Table 1 illustrates varying degrees of context boundedness, and we note that the order of varied dimensions (e.g., setting, partner, materials, interaction) is arbitrary. For the purposes of illustrating that the boundedness of change is distinct from its proximity, we have relied on examples that reflect the assessment of a highly proximal outcome across increasingly different contexts. However, in the hypothetical test of focused stimulation, we would also expect that a standardized developmentally-scaled assessment of language would capture highly generalized change, because the context of its administration (i.e., a structured, adult-led interaction with standardized materials) would differ from the intervention context across many dimensions.
Table 1. Degrees of meaningful difference between the hypothetical intervention context and increasingly dissimilar assessment contexts.

| Classification | Setting | Partner | Materials | Interaction | Degrees of Meaningful Difference |
|---|---|---|---|---|---|
| Intervention context | Child-friendly clinic | Trained clinician (interventionist) | Handheld functional and symbolic play toys | Naturalistic play-based interaction | N/A |
| Context-bound outcome | Same | Same | Same | Same | 0 |
| Context-bound outcome | Different room, same clinic | Trained clinician, not the interventionist | Superficially different toys with the same functional and symbolic play affordances | Same | 0 |
| Likely context-bound outcome | Home | Same | Same | Same | 1 |
| Likely generalized outcome | Home | Untrained caregiver | Same | Same | 2 |
| Highly generalized outcome | Preschool classroom | Untrained peers | Classroom toys | Unstructured play with peers in centers | 3–4 |
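For research teams or systematic reviewers coding many outcomes against this rubric, the decision rule illustrated in Table 1 can be expressed compactly. The following Python sketch is purely illustrative: the dimension names and category labels mirror Table 1, but deciding whether a given difference is ‘meaningful’ rather than superficial remains a coder judgment, and the example dictionaries are hypothetical.

```python
# Illustrative sketch of the Table 1 decision rule: count the contextual
# dimensions on which the assessment context meaningfully differs from the
# intervention context, then map that count onto a rough boundedness label.
# Whether a difference is "meaningful" (vs. superficial) is a coder judgment;
# here, any differing value is assumed to encode a meaningful difference.

DIMENSIONS = ("setting", "partner", "materials", "interaction")

def degrees_of_difference(intervention: dict, assessment: dict) -> int:
    """Number of dimensions on which the two contexts meaningfully differ."""
    return sum(intervention[dim] != assessment[dim] for dim in DIMENSIONS)

def boundedness_label(degrees: int) -> str:
    """Rough label paralleling the rows of Table 1."""
    if degrees == 0:
        return "context-bound"
    if degrees == 1:
        return "likely context-bound"
    if degrees == 2:
        return "likely generalized"
    return "highly generalized"

intervention_context = {
    "setting": "child-friendly clinic",
    "partner": "trained clinician (interventionist)",
    "materials": "functional and symbolic play toys",
    "interaction": "naturalistic play",
}
assessment_context = {
    "setting": "home",
    "partner": "untrained caregiver",
    "materials": "functional and symbolic play toys",
    "interaction": "naturalistic play",
}

d = degrees_of_difference(intervention_context, assessment_context)
print(d, boundedness_label(d))  # -> 2 likely generalized
```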
Proximity
While boundedness characterizes the extent to which intervention-related change extends beyond the intervention context, proximity characterizes the extent to which change extends beyond the specific targets of the intervention. The proximity of an intervention outcome may be judged based on its similarity to the specific skills or behaviors that were modeled or taught during the intervention. Like boundedness, proximity may be best conceptualized as a continuum, where intervention outcomes that exactly match intervention targets are considered highly proximal, or ‘over-aligned,’ and are likely to yield highly inflated estimates of intervention effectiveness (US Department of Education, Institute of Education Sciences, What Works Clearinghouse, 2016). Outcomes consisting of items/skills that are similar to intervention targets might also be considered proximal. Intervention outcomes that reflect broader change in related skills within the developmental domain targeted in the intervention could be considered distal, and those that reflect change in untargeted developmental domains would be very distal. Implicit in the concept of distal change is the notion that development has likely occurred. That is, change in outcomes that are highly dissimilar from intervention targets suggests that the intervention’s targeting of proximal outcomes may have triggered a developmental cascade that facilitated growth within and across domains (though intervention researchers who wish to make such assertions should, ideally, employ mediation analyses to demonstrate that effects of treatment on proximal outcomes translated into change in more distal outcomes; see Pickles et al., 2015, Watson et al., 2017, and Yoder et al., 2021 for examples).
Illustration.
Returning to our hypothetical study of a focused stimulation intervention allows us to illustrate the concept of proximity. Focused stimulation always involves practitioner selection of a specific and finite set of linguistic targets that are repeatedly modeled (but never prompted) during naturalistic play. In our hypothetical study, the intervention targets are a set of 10 expressive vocabulary words. A highly proximal (and over-aligned) outcome would be children’s use of the exact intervention targets. An outcome that indexed child use of any words (e.g., the number of different words spoken during an observation, inclusive of targeted words) would also be considered proximal. In contrast, a developmentally scaled standardized assessment of expressive vocabulary or broader spoken language would index distal change, and a standardized assessment of cognition would index highly distal change. Table 2 summarizes this illustration.
Table 2. The proximity continuum, illustrated for a hypothetical focused stimulation intervention targeting 10 expressive vocabulary words.

| | Highly Proximal | Proximal | Distal | Highly Distal |
|---|---|---|---|---|
| Relation to intervention targets | Exact target | Similar nontarget | Broader development in targeted domain | Broader development in nontargeted domains |
| Example outcome measure | Observational measure of child use of the 10 targeted words | Observational measure of child use of similar nontarget words | Score from a developmentally scaled vocabulary or broader language assessment | Score from a developmentally scaled cognitive assessment |
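As noted above, researchers who wish to claim that effects on proximal outcomes cascaded into the more distal outcomes summarized in Table 2 should ideally test that claim directly. The sketch below is a generic, minimal illustration of one such test, a product-of-coefficients indirect effect with a percentile bootstrap confidence interval, using simulated placeholder data; it is not the analytic approach used in the trials cited above (e.g., Pickles et al., 2015), and all variable names and values are hypothetical.

```python
# Generic sketch of a product-of-coefficients mediation test asking whether a
# treatment effect on a distal outcome operates through a proximal outcome.
# Data are simulated placeholders; real trials require preregistered models,
# covariates, and assumption checks well beyond this illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 120
treatment = rng.integers(0, 2, n).astype(float)                  # 1 = intervention, 0 = control
proximal = 0.5 * treatment + rng.normal(size=n)                  # proximal outcome (candidate mediator)
distal = 0.4 * proximal + 0.1 * treatment + rng.normal(size=n)   # distal outcome

def indirect_effect(t, m, y):
    """a*b: a = effect of treatment on mediator; b = effect of mediator on outcome, adjusting for treatment."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(t), t]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(t), t, m]), y, rcond=None)[0][2]
    return a * b

estimate = indirect_effect(treatment, proximal, distal)

# Percentile bootstrap over cases for a 95% confidence interval.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(treatment[idx], proximal[idx], distal[idx]))
low, high = np.percentile(boot, [2.5, 97.5])
print(f"indirect (proximal-mediated) effect = {estimate:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```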
Recommendations for Future Research
These outcome distinctions have numerous implications for future research, and their thorough consideration at every stage of intervention development and evaluation will require collaborative efforts from developmental researchers, measurement experts, external funders, intervention researchers, and meta-analysts. Developmental research is needed to understand the developmental trajectories that inform theories of change, and to facilitate identification of the potential proximal and distal outcomes that can be influenced by intervention. Measures that adequately and reliably reflect these constructs in autistic populations, and that are sensitive to change over time, are needed to index intervention effects. Single case design studies, which are suited primarily to examining intervention effects on proximal and context-bound outcomes (Ledford & Gast, 2018), and feasibility studies can be used to refine the parameters of intervention in ways that maximize proximal effects that are linked to distal effects.
Recommendations for the Conduct of Clinical Trials
Intervention researchers conducting clinical trials should gauge intervention effects on outcomes that span the continua of boundedness and proximity, clearly describe each of their outcome measures in terms of these characteristics, and plan to test theorized mechanisms of change or cascading effects (which link proximal to distal outcomes) by embedding mediation tests with clear directional hypotheses in their analysis plans (and powering their studies accordingly). Mediation analyses can also be used to identify potential ‘active ingredients’ of the intervention, if these are theoretically and specifically linked to proximal and/or distal effects. An important issue here is that the magnitude of the effect size estimates used to conduct power analyses is linked to the boundedness and proximity of outcomes. Thus, researchers seeking to determine the sample sizes needed to detect effects on distal and generalized outcomes should anticipate modest effects and power their studies conservatively.
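As a simple illustration of this last point, the sketch below uses statsmodels to compare the per-group sample sizes required to detect a two-arm difference under effect sizes one might plausibly anticipate for proximal versus distal outcomes. The specific values of d are assumptions chosen for illustration only, not estimates drawn from any particular study or synthesis.

```python
# Illustrative power calculation for a two-arm trial (two-sided t-test,
# alpha = .05, power = .80). The assumed effect sizes are hypothetical:
# larger for a proximal, context-bound outcome and smaller for a distal,
# generalized outcome, consistent with the pattern described in the text.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for label, d in [("proximal, context-bound outcome (assumed d = 0.60)", 0.60),
                 ("distal, generalized outcome (assumed d = 0.25)", 0.25)]:
    n_per_group = solver.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"{label}: ~{n_per_group:.0f} children per group")
```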
Recommendations for Reporting and Interpretation of Intervention Effects
Authors of individual intervention studies should cautiously interpret intervention effects on context-bound and proximal outcomes and avoid implying that such effects will generalize or facilitate further development in the absence of evidence supporting this conclusion. Similarly, EBP reviews and meta-analyses of interventions for autistic children should describe intervention effects in terms of magnitude and scope. Systematic reviewers should code outcomes in terms of boundedness and proximity to evaluate and specify whether any designated EBPs have been demonstrated to effect specific, incremental change, or broader developmental change (see Sandbank et al., 2020 for example decision trees that can aid reliable coding of boundedness and proximity). Meta-analysts should endeavor to summarize intervention effects separately for context-bound and generalized outcomes, and for proximal and distal outcomes. Emphasizing these distinctions can help prevent misinterpretations regarding the potential of interventions to effect developmental change, and will further aid clinicians and teachers in selecting interventions based on the individual needs of children and the overarching objectives of treatment.
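Where enough effects are available, the separate summaries we recommend can be produced with standard random-effects machinery. The sketch below pools hypothetical effect sizes with a DerSimonian-Laird model, once for context-bound/proximal outcomes and once for generalized/distal outcomes; all effect sizes and sampling variances are invented for illustration, and real syntheses would also need to address dependence among effects from the same study.

```python
# Illustrative DerSimonian-Laird random-effects summaries computed separately
# for two hypothetical outcome groupings. Effect sizes (g) and sampling
# variances (v) are made up for illustration only.
import numpy as np

def random_effects_summary(g, v):
    """Pooled estimate and 95% CI under a DerSimonian-Laird random-effects model."""
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1.0 / v
    fixed = np.sum(w * g) / np.sum(w)                 # fixed-effect pooled estimate
    q = np.sum(w * (g - fixed) ** 2)                  # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)           # between-study variance
    w_re = 1.0 / (v + tau2)
    pooled = np.sum(w_re * g) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

outcome_groups = {
    "context-bound / proximal": ([0.90, 0.70, 1.10], [0.04, 0.06, 0.05]),
    "generalized / distal":     ([0.20, 0.30, 0.10], [0.04, 0.05, 0.06]),
}
for label, (g, v) in outcome_groups.items():
    est, low, high = random_effects_summary(g, v)
    print(f"{label}: g = {est:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```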
We close with a final note regarding the interpretation of small or null effects of interventions on distal and/or generalized outcome measures. There is a temptation in autism intervention research to treat such effects as indicative of the limits of autistic children’s potential. Indeed, the phrase ‘failure to generalize’ has been used to describe an inherent feature of autism (Frith, 1989; Swettenham, 1996), and is meant to convey that autistic children are unable to take what they have learned and flexibly apply it to other contexts. On this view, generalized and distal outcomes may be considered out of reach for many autistic children, leading researchers to encourage stakeholders’ investment in interventions that exclusively promote context-bound and proximal effects. We strongly encourage researchers to reject these characterizations, and to instead consider small or null effects on distal or generalized outcomes as illustrative of the potential of the intervention being studied and/or of the measures researchers selected to index intervention effects, and not of the potential of autistic children. If interventions are unable to promote highly meaningful outcomes, or if measures are ill-equipped to capture such outcomes, researchers should continue to refine and innovate rather than lower their standards for what interventions should be expected to achieve.
Footnotes
1. We recognize that there is a range of opinions regarding the terminology used to describe people on the spectrum in research (Kenny et al., 2016; Vivanti, 2020). We have elected to use identity-first rather than person-first language, following recent recommendations from both autistic and nonautistic researchers (Botha et al., 2021; Bottema-Beutel et al., 2020).
2. We have suggested that a trained clinician who is not the interventionist assess outcomes to avoid the introduction of detection bias, as the interventionist is aware of the child’s receipt of intervention and may subtly and unconsciously shift their behavior to influence child word use. Ideally, the trained clinician assessing this outcome would be masked to participant group assignment.
References
- Bond C, Symes W, Hebron J, Humphrey N, Morewood G, & Woods K (2016). Educational interventions for children with ASD: A systematic literature review 2008–2013. School Psychology International, 37(3), 303–320.
- Botha M, Hanlon J, & Williams G (2021). Does language matter? Identity-first versus person-first language use in autism research: A response to Vivanti. Journal of Autism and Developmental Disorders. https://doi.org/10.1007/s10803-020-04858-w
- Bottema-Beutel K, & Crowley S (2020). Synthesizing classroom intervention effects for autistic students: Commentary on Watkins et al., 2019. Research in Autism Spectrum Disorders, 76, 101586.
- Bottema-Beutel K, Kapp SK, Lester JN, Sasson NJ, & Hand BN (2020). Avoiding ableist language: Suggestions for autism researchers. Autism in Adulthood. https://doi.org/10.1089/aut.2020.0014
- Carruthers S, Pickles A, Slonims V, Howlin P, & Charman T (2020). Beyond intervention into daily life: A systematic review of generalisation following social communication interventions for young children with autism. Autism Research, 13(4), 506–522.
- Fey M (1986). Language intervention with young children. Austin, TX: Pro-Ed.
- French L, & Kennedy EM (2018). Annual Research Review: Early intervention for infants and young children with, or at-risk of, autism spectrum disorder: A systematic review. Journal of Child Psychology and Psychiatry, 59(4), 444–456.
- Frith U (1989). Autism: Explaining the enigma. Oxford: Blackwell.
- Fuller EA, & Kaiser AP (2020). The effects of early intervention on social communication outcomes for children with autism spectrum disorder: A meta-analysis. Journal of Autism and Developmental Disorders, 50(5), 1683–1700.
- Fuller EA, Oliver K, Vejnoska SF, & Rogers SJ (2020). The effects of the Early Start Denver Model for children with autism spectrum disorder: A meta-analysis. Brain Sciences, 10(6), 368.
- Gates JA, Kang E, & Lerner MD (2017). Efficacy of group social skills interventions for youth with autism spectrum disorder: A systematic review and meta-analysis. Clinical Psychology Review, 52, 164–181.
- Kenny L, Hattersley C, Molins B, Buckley C, Povey C, & Pellicano E (2016). Which terms should be used to describe autism? Perspectives from the UK autism community. Autism, 20(4), 442–462.
- Ledford JR, & Gast DL (2018). Single case research methodology: Applications in special education and behavioral sciences. Routledge.
- National Autism Center. (2015). Findings and conclusions: National standards project, phase 2.
- Nevill RE, Lecavalier L, & Stratis EA (2018). Meta-analysis of parent-mediated interventions for young children with autism spectrum disorder. Autism, 22(2), 84–98.
- Oono IP, Honey EJ, & McConachie H (2013). Parent-mediated early intervention for young children with autism spectrum disorders (ASD). Cochrane Database of Systematic Reviews, (4), CD009774. https://doi.org/10.1002/14651858.CD009774.pub2
- Pickles A, Harris V, Green J, Aldred C, McConachie H, Slonims V, & PACT Consortium (2015). Treatment mechanism in the MRC preschool autism communication trial: Implications for study design and parent-focussed therapy for children. Journal of Child Psychology and Psychiatry, 56(2), 162–170.
- Reichow B, Barton EE, Boyd BA, & Hume K (2012). Early intensive behavioral intervention (EIBI) for young children with autism spectrum disorders (ASD). Cochrane Database of Systematic Reviews, (10).
- Reichow B, Hume K, Barton EE, & Boyd BA (2018). Early intensive behavioral intervention (EIBI) for young children with autism spectrum disorders (ASD). Cochrane Database of Systematic Reviews, (5), CD009260.
- Sandbank M, Bottema-Beutel K, Crowley S, Cassidy M, Dunham K, Feldman JI, & Woynaroski TG (2020). Project AIM: Autism intervention meta-analysis for studies of young children. Psychological Bulletin, 146(1), 1–29.
- Steinbrenner JR, Hume K, Odom SL, Morin KL, Nowell SW, Tomaszewski B, & Savage MN (2020). Evidence-based practices for children, youth, and young adults with autism. The University of North Carolina at Chapel Hill, Frank Porter Graham Child Development Institute, National Clearinghouse on Autism Evidence and Practice Review Team.
- Swettenham J (1996). Can children with autism be taught to understand false belief using computers? Journal of Child Psychology and Psychiatry, 37(2), 157–165.
- US Department of Education, Institute of Education Sciences, What Works Clearinghouse. (2016). What Works Clearinghouse: Procedures and standards handbook (Version 3.0).
- Vivanti G (2020). Ask the editor: What is the most appropriate way to talk about individuals with a diagnosis of autism? Journal of Autism and Developmental Disorders, 50(2), 691–693.
- Watson LR, Crais ER, Baranek GT, Turner-Brown L, Sideris J, Wakeford L, & Nowell SW (2017). Parent-mediated intervention for one-year-olds screened as at-risk for autism spectrum disorder: A randomized controlled trial. Journal of Autism and Developmental Disorders, 47(11), 3520–3540.
- Weitlauf AS, McPheeters ML, Peters B, Sathe N, Travis R, Aiello R, & Warren Z (2014). Therapies for children with autism spectrum disorder: Behavioral interventions update. Comparative Effectiveness Review No. 137. (Prepared by the Vanderbilt Evidence-based Practice Center under Contract No. 290-2012-00009-I.) AHRQ Publication No. 14-EHC036-EF. Rockville, MD: Agency for Healthcare Research and Quality. Available from: www.effectivehealthcare.ahrq.gov/reports/final.cfm
- Wong C, Odom SL, Hume KA, Cox AW, Fettig A, Kucharczyk S, & Schultz TR (2015). Evidence-based practices for children, youth, and young adults with autism spectrum disorder: A comprehensive review. Journal of Autism and Developmental Disorders, 45(7), 1951–1966.
- Yoder PJ, Bottema-Beutel K, Woynaroski T, Chandrasekhar R, & Sandbank M (2013). Social communication intervention effects vary by dependent variable type in preschoolers with autism spectrum disorders. Evidence-Based Communication Assessment and Intervention, 7(4), 150–174.
- Yoder PJ, Stone WL, & Edmunds SR (2021). Parent utilization of ImPACT intervention strategies is a mediator of proximal then distal social communication outcomes in younger siblings of children with ASD. Autism, 25(1), 44–57.