Abstract
The onset of the COVID-19 pandemic affected the delivery of early intensive behavioral intervention (EIBI) services. As a result, many EIBI service providers shifted to either temporarily or permanently providing some or all of their services via telehealth. Most published research on behavior analytic approaches to telehealth has focused on training others to implement behavior analytic interventions in an in-person setting. In contrast, relatively few researchers have evaluated direct EIBI service delivery via telehealth (i.e., professionals directly providing behavior analytic interventions to clients/learners using technology). Little is known about the effectiveness of behavior analytic interventions delivered remotely to learners via telehealth compared to standard in-person intervention delivery. The purpose of the present study was to directly compare the effectiveness of discrete trial training delivered remotely via telehealth and in-person on the acquisition of labeling occupations for children diagnosed with autism spectrum disorder in an EIBI program. The results and implications of the effectiveness of the different teaching modalities and observed generalization and maintenance will be discussed.
Evaluating the effectiveness of behavior analytic interventions delivered remotely via telehealth compared to standard in-person delivery could help increase access to services for those in need.
We found little difference in the acquisition of labeling occupations across the two modalities for all three participants.
Future researchers should consider how to incorporate strategies to promote generalization into direct telehealth services.
Future researchers should evaluate how learners without previous exposure to discrete trial training may perform during direct telehealth services.
Keywords: Telehealth, In-person, Discrete trial training, Expressive object labeling, Preschool
Early intensive behavioral intervention (EIBI) is an evidence-based strategy that uses behavior analytic concepts and interventions to teach skills (e.g., imitation, receptive and expressive language, and play skills) to children diagnosed with autism spectrum disorder (ASD; Gould et al., 2011). EIBI programs consist of individualized interventions, grounded in behavior analytic principles, that target student-specific skill domains. EIBI programs are also characterized by the intensity of these individualized behavioral interventions, with 20–30 hr a week typically being the norm (Reichow et al., 2018). In a meta-analysis of the EIBI literature, Eldevik et al. (2009) examined 34 EIBI studies, 9 of which included a control group against which the performance of children with ASD could be compared. They found that EIBI produced large to moderate effect size changes in IQ and Vineland Adaptive Behavior Composite scores relative to control conditions. Based on these results, the authors concluded that EIBI is an effective intervention for many children on the autism spectrum.
One common teaching approach in EIBI programs is discrete trial training (DTT). DTT is a highly structured, instructor-led method of teaching that involves repeatedly presenting small units of instruction, called discrete trials, each lasting only a few seconds (Smith, 2001). Five components make up the discrete trial. The first is presenting a discriminative stimulus (instruction), which may be verbal and/or nonverbal. Next, when necessary, the instructor prompts the learner to respond correctly to the discriminative stimulus. The third component is the learner’s response following the discriminative stimulus or prompt, which can be categorized as correct, incorrect, or no response. Fourth, the instructor delivers a differential consequence depending on the learner’s response. For example, if the learner responds correctly, the instructor delivers a reinforcer; if the learner responds incorrectly or gives no response, the instructor implements a brief error correction procedure. Finally, an intertrial interval of a few seconds follows the delivery of the consequence before the next trial begins. Researchers have found DTT to be effective in teaching novel behaviors, discriminations, imitation, receptive and expressive language, social skills, and the use of alternative communication systems (Smith, 2001).
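For readers who find a procedural summary helpful, the five components just described can be sketched as a simple routine. The following Python sketch is purely illustrative; the function and variable names are hypothetical and are not drawn from any study software.

```python
# Purely illustrative sketch of the five discrete-trial components described above
# (hypothetical names; not software used in this or any cited study).
import time

def run_discrete_trial(target, learner_response, needs_prompt=False, intertrial_interval_s=2):
    """Simulate one discrete trial and return how the learner's response was scored."""
    print(f'1. Discriminative stimulus: "What is it?" [picture of {target}]')
    if needs_prompt:
        print(f'2. Prompt (when necessary): instructor models "{target}"')
    correct = learner_response == target            # 3. learner response: correct, incorrect, or none
    if correct:
        print("4. Consequence: reinforcer delivered")
    else:
        print("4. Consequence: brief error correction")
    time.sleep(intertrial_interval_s)               # 5. intertrial interval before the next trial
    return "correct" if correct else "incorrect"

# Example: three trials on one target with canned learner responses
results = [run_discrete_trial("barber", r, intertrial_interval_s=0)
           for r in ["barber", "nurse", "barber"]]
print(results)  # ['correct', 'incorrect', 'correct']
```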
The onset of the COVID-19 pandemic affected the delivery of EIBI services. As a result, many EIBI service providers shifted to either temporarily or permanently providing some or all of their services via telehealth. Telehealth, as defined by the Health Resources and Services Administration (n.d.), is the “use of electronic information and telecommunication technologies to support and promote long-distance clinical health care, patient and professional health-related education, public health, and health administration” (p. 1). The move to telehealth-based interventions brought on by the COVID-19 pandemic spurred the Council of Autism Service Providers (CASP) to put forth practice parameters for implementing behavior analytic services, such as EIBI, via telehealth. CASP describes one modality for service delivery as synchronous telehealth, in which providers use real-time audio and video communication technology to deliver services (Council of Autism Service Providers, 2021). This technology allows providers to continue delivering EIBI services to consumers remotely. In the context of EIBI, telehealth might mean using Zoom to teach parents how to implement behavioral interventions at home, or delivering behavioral interventions directly to consumers with the aid of online platforms such as Google Classroom or Google Slides, which allow teaching materials to be shared directly with consumers. With caregiver assistance, behavior analysts could use this technology to implement DTT via telehealth.
Most published research on behavior analytic approaches to telehealth has focused on using telehealth to train others to implement behavior analytic interventions in an in-person setting. In a recent paper reviewing behavior analytic approaches to telehealth, Schieltz and Wacker (2020) noted that researchers have used telehealth to train caregivers and staff to conduct functional analyses (Barretto et al., 2006; Frieder et al., 2009; Machalicek et al., 2009; Martens et al., 2019; Wacker et al., 2013b) and to treat challenging behavior through functional communication training (Benson et al., 2018; Dimian et al., 2018; Gibson et al., 2010; Hoffmann et al., 2019; Lindgren et al., 2016, 2020; Monlux et al., 2019; Schieltz et al., 2018; Simacek et al., 2017; Suess et al., 2014, 2016; Tsami et al., 2019; Wacker et al., 2013a). In their article evaluating telehealth and in-person (i.e., instructor and participant in the same physical location) training outcomes, Sump et al. (2018) noted that telehealth has been effective in training staff to deliver early intervention services (Barkaia et al., 2017; Fisher et al., 2014), conduct preference assessments (Higgins et al., 2017), and implement discrete trial training (Hay-Hansson & Eldevik, 2013; Sump et al., 2018). Thus, telehealth may be an effective means of remotely training caregivers and staff to implement behavior analytic interventions.
In contrast, relatively few researchers have evaluated direct EIBI service delivery via telehealth (i.e., professionals directly providing behavior analytic interventions to clients/learners using technology). Although only a small number of studies have evaluated direct intervention via telehealth, the initial results are promising. For example, Ferguson et al. (2020) evaluated the effectiveness of DTT and instructive feedback delivered via telehealth to teach expressive object labeling (EOL) relations to six children diagnosed with ASD. Instructive feedback involves the instructor presenting extra vocal and/or visual stimuli, often referred to as secondary targets, in the instruction or consequence portion of the DTT sequence. When implementing instructive feedback, the learner is not required to respond to the secondary target; it is simply presented to the learner. Ferguson et al. used telehealth-based DTT to teach labeling superheroes as the primary target and instructive feedback in the form of a verbal statement of each superhero’s power as the secondary target. The authors used HIPAA-compliant Zoom and PowerPoint presentations to conduct their research sessions and included dyads (i.e., two participants within the same session), each with individualized target stimuli. The researchers found that five of the six participants successfully learned their primary and secondary targets; one limitation was that the remaining participant learned only the primary targets and not the secondary targets. The authors hypothesized that this could be due to the telehealth modality.
Nohelty et al. (2021) conducted a study evaluating the effectiveness of direct telehealth delivery to children diagnosed with ASD. The authors used DTT and natural environment training (NET) to teach different skill acquisition targets to seven participants. The telehealth sessions began when a behavior technician and participant, accompanied by a caregiver, connected to a video communication system (i.e., Zoom). The behavior technician implemented DTT and NET procedures during these sessions to teach client-specific skills. Researchers presented materials electronically by sharing a screen, holding physical stimuli to the computer camera, or using stimuli in the participant’s environment. The authors found that all participants met mastery and maintenance criteria for the skills taught (e.g., language, adaptive, and social skills). Although the reported outcomes are positive, the study is not without limitations. Nohelty et al. (2021) did not compare the targets taught through telehealth to typical in-person DTT instruction. Without a direct comparison, the conclusions about the effectiveness of telehealth services compared to in-person services are limited. A comparison of the different modalities and their effects on target acquisition would offer additional information about the use of telehealth as a strategy for delivering direct therapy.
To address these limitations, Knopp et al. (2023) compared direct telehealth and in-person delivery of DTT to teach EOL targets to three children with ASD. Participants were 4, 5, and 10 years old. Two participants received 15 hr a week of applied behavior analysis services, whereas one participant received 32.5 hr a week. In addition, all participants had received at least 2 years of behavior analytic services and had previous experience with DTT. In this study, the authors taught participants to expressively label comic book characters. These targets were selected based on reported interest and hypothesized increases in social interactions with peers.
Knopp et al. (2023) delivered DTT in person and via telehealth to compare the effects of these modalities on the acquisition of the EOL targets. The authors collected data on participant responses during probe sessions, the number of stimulus sets that met the mastery criteria, and differences in efficiency. During the in-person condition, the authors sat next to the participant and used a laptop with PowerPoint to share teaching materials. In the telehealth condition, the authors met with the participants via Zoom and shared their screen, displaying the teaching materials directly to the participant. During baseline and maintenance, the authors presented a single probe and then measured participants’ responses. During intervention, the authors presented eight trials for each condition and delivered praise or corrective feedback contingent on correct or incorrect responses. For both telehealth and in-person conditions, the authors found that all participants successfully met mastery criteria and maintained the skill over time. In addition, the researchers found little difference in efficiency across the two modalities. These results suggest that using DTT to teach EOL targets via telehealth produced outcomes similar to in-person sessions.
One limitation of Knopp et al. (2023) concerns the telehealth condition. Participants met with the researchers via Zoom; however, some participants were still located in the same building as the researchers. This arrangement may not capture the environmental conditions that service providers experience when delivering behavior analytic interventions remotely via telehealth. For example, interruptions may occur during remote telehealth sessions, such as siblings distracting the client, clients walking away from the computer or tablet, or parents intervening at inopportune times. As such, additional research may be necessary to make this direct comparison in a more natural telehealth environment.
Initial support for the effectiveness of synchronous telehealth services is promising. Preliminary evidence suggests there may be little difference in performance across instructional delivery modalities, but the research is not without limitations. Little is known about the effectiveness of behavior analytic interventions delivered remotely to children in their homes via telehealth compared to standard in-person intervention delivery. To date, only Knopp et al. (2023) have directly compared implementing DTT via telehealth and in-person to determine the effects of these modalities on skill acquisition. However, as mentioned above, the children in the study did not experience the telehealth condition in a traditional telehealth environment (i.e., while children are in their homes); rather, they were located in the same building as the researchers. Therefore, the present study aims to fill this gap in the research literature by comparing the effectiveness of DTT delivered remotely to children located in their homes via telehealth and traditional in-person DTT instruction on the acquisition of labeling occupations for children diagnosed with ASD in an EIBI program.
Method
Participants
Three young children with a diagnosis of ASD participated in this study. All participants received in-person behavior analytic services from a university-based early intervention program. All participants and their families had access to a computer or tablet and a stable internet connection, and a caregiver was present for all telehealth sessions of the study. Researchers informed parents that both telehealth and in-person research sessions would last between 5 and 10 min. To be eligible, a child had to (1) have a diagnosis of ASD; (2) be between the ages of 3 and 5; (3) be able to remain in a designated area; and (4) engage in 1–2-word vocalizations. We also asked that at least one caregiver be present during the 5–10-min telehealth sessions. Before starting baseline sessions, each child and caregiver met with the first author to verify that they met all criteria for inclusion in the study.
Eddy was a 4-year-old Asian American boy diagnosed with ASD. Eddy received 12 hr of weekly small-group behavior analytic services at the university-based program and had received 1 year of behavior analytic services at the time of the study. Eddy was also enrolled in an inclusion classroom at his preschool throughout the study. Eddy scored 74.5 on the Verbal Behavior Milestones Assessment and Placement Program (VB-MAPP), which is considered below age appropriate. Eddy primarily communicated using short 1–2-word phrases. Eddy’s mother and father also participated in this study. Both parents spoke English at home and had graduated college with a bachelor’s degree. The socioeconomic status of the family was described as upper middle class. Both parents lived at home with Eddy and his two siblings. Eddy’s father typically worked during the day, and his mother stayed home with Eddy’s siblings. Both of Eddy’s parents had previous experience participating in behavior analytic services in the home setting.
Lloyd was a 5-year-old white male diagnosed with ASD. Lloyd received 12 hr a week of small-group behavior analytic intervention at the university-based program and had received 1.5 years of behavior analytic services at the time of the study. Lloyd was also enrolled in an inclusion classroom at his preschool throughout the study. Lloyd scored 122.5 on the VB-MAPP, again considered below age appropriate. Lloyd frequently engaged in spontaneous verbal behavior and used whole sentences to communicate with those around him. Lloyd’s mother and father also participated in this study. Both parents spoke English at home. Lloyd’s mother had graduated with her master’s degree, and his father had graduated with his bachelor’s degree. The socioeconomic status of the family was middle class at the time of the study. Both parents lived at home with Lloyd and his sibling. Lloyd’s mother worked during the day, and his father stayed home with Lloyd’s sibling. Both parents had previous experience participating in behavior analytic services at home.
Suzy was a 4-year-old white female diagnosed with ASD. Suzy received 20 hr a week of behavior analytic intervention at the university-based program and had received 7 months of behavior analytic services at the time of her participation in the study. Suzy was not enrolled in any other preschool during the study. Suzy scored 120.5 on the VB-MAPP, also considered below age appropriate, and communicated using whole sentences. Suzy’s mother also participated in this study. Suzy’s mother and father both spoke English at home and were working toward their master’s degrees at the time of the study. The socioeconomic status of the family was described as middle class. Both parents lived at home with Suzy. Both parents had previous experience with behavior analytic services in the home setting.
Settings
Telehealth
Researchers conducted telehealth sessions remotely using HIPAA-compliant Zoom, with audio and video, to deliver instruction. Participants connected to research sessions from their homes using a computer or tablet, and researchers connected from the university clinic or the researcher’s home. Before beginning the study, researchers asked caregivers to find a location in their home with minimal distractions. Researchers presented materials embedded in Microsoft PowerPoint presentations via the share-screen option in Zoom. A caregiver sat near the participant for all sessions to deliver edible reinforcers for correct responding as instructed by the researcher. Caregivers also provided redirection to attend to the materials as necessary and as instructed by the researcher (discussed later).
In-Person
Researchers conducted in-person sessions in a 2 m x 3 m research room in the university’s early intervention clinic. The room included a small table, two chairs for the participant and researcher, a plate with edible reinforcers, and a camera that recorded all sessions.
Materials
Materials for telehealth included a PowerPoint presentation with a 3-in x 5-in picture of the target stimulus on each slide. We used physical materials consisting of 3-in x 5-in laminated flashcards during the in-person sessions. Researchers included paper flashcards during the in-person condition to mimic the typical instructional variables that the participants experienced at the university-based clinic. Researchers delivered edibles identified as highly preferred through an informal preference assessment in both telehealth and in-person sessions. Researchers sent the edible reinforcers home with the participant’s caregiver to use throughout the study. Materials also included a data sheet and pencil for the researcher to collect data.
Identification of Stimuli
Researchers identified stimuli using the results of each participant’s VB-MAPP assessment. For all three participants, an area needing support was labeling occupations. The stimuli used throughout the study are listed in Table 1.
Table 1.
Participant Targets
|  | Eddy |  | Lloyd |  | Suzy |  |
|---|---|---|---|---|---|---|
|  | Telehealth | In Person | Telehealth | In Person | Telehealth | In Person |
| Stimulus 1 | Cleaner | Barber | Barber | Cashier | Barber | Cleaner |
| Stimulus 2 | Lawyer | Cashier | Cleaner | Florist | Cashier | Lawyer |
| Stimulus 3 | Plumber | Nurse | Lawyer | Soldier | Nurse | Plumber |
Researchers conducted a preassessment to determine stimuli that could be included in the stimulus sets. Researchers selected candidate stimuli based on the participant’s intervention goals (e.g., expressive object labeling). The researchers conducted the preassessment by presenting each stimulus four times in a semirandom order. Specifically, the researcher said, “What is it?” while simultaneously presenting a picture of the target stimulus. Researchers delivered a neutral statement (e.g., “thank you”) regardless of correct or incorrect responses during the preassessment. A stimulus labeled incorrectly on all four presentations was incorporated as a target.
Stimulus Sets
All conditions and phases of the study included three target stimuli presented four times during each session, for a total of 12 trials per session. Each condition (i.e., telehealth vs. in-person) had a unique stimulus set of three target stimuli, presented in random order four times during each session of that condition. We taught these two sets of three stimuli during the intervention phase. We assigned stimuli to sets following best practice recommendations from Cariveau et al. (2021) based on visual appearance (i.e., stimuli within the same set differed in visual appearance), number of syllables (i.e., a similar number of syllables across sets), and rhyming (i.e., words that rhymed were spread across sets). In addition, we distributed any stimuli that contained similar beginning, middle, or end sounds across stimulus sets.
Measurement
The primary dependent variable was trials to criterion for each stimulus set, defined as the number of trials presented before participants met the mastery criterion of three consecutive sessions above 90% correct. A trial was defined as the presentation of a verbal and nonverbal discriminative stimulus (i.e., “What is it?” + [target stimulus]) and the response that followed, whether a correct response or an incorrect/no response. Researchers implemented an error correction procedure following an incorrect/no response, which began a new trial. Exposure to the stimuli in each set was the same across conditions (i.e., each of the three stimuli in a set was presented four times per session).
We also collected data on the number of targets acquired by counting the targets that met the predetermined criterion. Researchers collected data on the number of correct, prompted correct, prompted incorrect, and incorrect responses that occurred during the DTT sequence. A correct response was defined as vocalizing, within 5 s, the label corresponding to the discriminative stimulus presented. An incorrect response was defined as vocalizing anything other than the label corresponding to the discriminative stimulus presented, or providing no response within 5 s. A prompted correct response was defined as engaging in the target response within 5 s of a verbal model from the researcher. A prompted incorrect response was defined as any behavior other than the target behavior following a verbal model from the researcher, including no response after 5 s had elapsed.
To measure efficiency, researchers collected data on the average session duration for each condition in both modalities. Researchers used session recordings to determine the length of each research session and calculated the average session duration by summing the durations of all sessions in a condition and dividing by the total number of sessions in that condition.
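For clarity, the calculation described above can be written as

\[
\bar{D} = \frac{1}{n}\sum_{i=1}^{n} d_i,
\]

where \(d_i\) is the duration of session \(i\) and \(n\) is the number of sessions conducted in that condition.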
Interobserver Agreement (IOA)
Two independent observers recorded data on correct, incorrect, prompted correct, and prompted incorrect responses. In this study, the independent observers were trained undergraduate and graduate-level research assistants. All sessions were video-recorded, and the independent observers reviewed these recordings to collect interobserver agreement data for at least 33% of sessions for each participant in each condition. Researchers selected sessions for IOA scoring using an online random number generator. We defined an agreement as both observers scoring the same response for the same trial and used trial-by-trial agreement to determine the percentage of interobserver agreement. We calculated IOA by dividing the number of trials with agreement by the number of trials with agreement plus disagreement and then multiplying by 100 to obtain a percentage.
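Stated as a formula, the trial-by-trial IOA described above is

\[
\text{IOA} = \frac{\text{trials with agreement}}{\text{trials with agreement} + \text{trials with disagreement}} \times 100\%.
\]

As a hypothetical illustration (not a value taken from the data set), if two observers agreed on 10 of the 12 trials in a session, IOA for that session would be (10/12) × 100 ≈ 83%.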
We assessed IOA across 46% of Eddy’s sessions with a mean agreement of 98.8% (range: 83%–100%). There was an agreement of 100% during baseline, 100% during intervention, 91.5% (range: 83%–100%) during generalization probes, and 100% during maintenance. Researchers collected IOA data across 52.4% of Lloyd’s sessions with a mean agreement of 100% across all phases of the study. Our independent observers collected IOA data across 52.7% of Suzy’s sessions with a mean agreement of 98.6% (range: 83%–100%). There was an agreement of 100% during baseline, 95.8% (range: 83%–100%) during intervention, 100% during generalization probes, and 100% during maintenance.
Treatment Integrity
Researchers assessed treatment integrity for at least 33% of sessions for each participant in each condition. Trained undergraduate and graduate-level research assistants independently observed the session recordings and collected data according to the steps outlined in Table 2. The independent observer scored “+” if the researcher completed the step correctly, “−” if the researcher completed the step incorrectly or did not complete it at all, and “N/A” if the step was not applicable to the trial. Treatment integrity was calculated by dividing the number of steps performed correctly during the session by the total number of steps and multiplying by 100 to obtain a percentage.
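In formula form (assuming, as the scoring system implies, that steps marked “N/A” are excluded from the total), treatment integrity for a session is

\[
\text{Treatment integrity} = \frac{\text{steps scored “+”}}{\text{steps scored “+”} + \text{steps scored “−”}} \times 100\%.
\]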
Table 2.
Steps Scored per Condition for Treatment Integrity
Baseline, Generalization, and Maintenance

1. The researcher conducted an informal preference assessment
2. The researcher presented the correct targeted stimulus and antecedent verbal stimulus for each trial
3. The researcher allowed up to 5 s for the participant to respond for each trial
4. The researcher and caregiver provided no programmed consequence for correct or incorrect responding
5. The researcher presented mastered targets every third trial throughout the session
6. A reinforcer was delivered contingent on a correct response to the mastered target

Intervention

1. The researcher conducted an informal preference assessment
2. The researcher presented the correct targeted stimulus and antecedent verbal stimulus for each trial
3. The researcher allowed up to 5 s for the participant to respond for each trial
4. Contingent on an independent correct response, praise and an edible reinforcer were delivered
5. Contingent on an incorrect response, the trial was re-presented until the participant echoed the correct response
For Eddy, researchers coded treatment integrity for 71.6% of all sessions. For Eddy, treatment integrity was 98.9% (range: 97%–100%) for baseline, 99.3% (range: 97%–100%) for intervention, 99.3% (range: 97%–100%) for generalization probes, and 100% for maintenance sessions. For Lloyd, researchers collected treatment integrity for 52.4% of research sessions. For Lloyd, treatment integrity was 100% for baseline, 98.5% (range: 97%–100%) for intervention, 100% for generalization probes, and 98% (range: 96%–100%) for maintenance sessions. For Suzy, we analyzed treatment integrity for 69.5% of sessions. For Suzy, treatment integrity was 99.6% (range: 97%–100%) for baseline, 98% (range: 94%–100%) for intervention, 98.5% (range: 97%–100%) for generalization probes, and 98.5% (range: 97%–100%) for maintenance sessions.
Experimental Design
We used an adapted alternating treatments design embedded in a nonconcurrent multiple baseline across participants design to evaluate the effects of the different modalities on the acquisition of expressive labels across participants. The first author conducted all research sessions. The order of conditions was counterbalanced across participants to control for sequence effects. To further strengthen the study's internal validity, we counterbalanced the stimuli included in the study across conditions and participants. Each participant experienced both conditions in a single day (i.e., telehealth in the morning and in-person in the afternoon). For Suzy and Lloyd, some sessions occurred across multiple days (e.g., telehealth on Monday and in-person on Wednesday) due to illness.
Participants moved from baseline to intervention after at least five baseline data points in each condition. Consistent with the multiple baseline design, intervention starts were staggered for Lloyd and Suzy: each began intervention only after a clear treatment effect had been demonstrated for the previous participant.
Procedures
Before beginning sessions, the first author met with the parents individually via Zoom to review the study procedures. The researcher used verbal instructions to explain to the parents how to assess preference, when to deliver reinforcers, and how to redirect their child (i.e., verbally prompting the child to look at the computer screen) if necessary. During the initial telehealth sessions, the researcher provided verbal prompts to the parents to engage in these behaviors when necessary.
We conducted sessions two to three times a week. Each session began with an informal preference assessment for edible reinforcers. In the telehealth condition, this preference assessment involved the caregiver asking, “What do you want to work for?” Participants then selected one type of edible from five different options and could switch their selection at any time during the session by vocal request. We followed the same procedure in the in-person condition, with the researcher implementing the informal preference assessment.
Teaching sessions were nearly identical across modalities. The only differences were that, in the telehealth condition, caregivers rather than the researcher presented the edible reinforcers, and the materials were presented via PowerPoint presentations rather than paper flashcards. All other components of the teaching sessions were identical.
Baseline
Researchers conducted baseline sessions by presenting the three target stimuli, in a semi-randomized order, four times in each condition. The researcher provided a verbal discriminative stimulus (SD; e.g., “What is it?” + [target stimulus]) and allowed 5 s for the participant to respond. Researchers delivered a neutral statement (e.g., “thank you”) regardless of correct or incorrect responses and then presented the subsequent trial. To promote continued responding during baseline, after every third trial, researchers presented an unrelated, participant-specific, previously mastered target (e.g., “Touch your nose”) and delivered edible reinforcers contingent on the participant correctly responding to the mastered trial.
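To illustrate the trial arrangement just described (three targets presented four times each, with a previously mastered target interspersed after every third trial), the following Python sketch shows one way such a sequence could be generated. The function and default names are hypothetical and do not reflect the software actually used in the study.

```python
# Illustrative sketch of a 12-trial baseline sequence (hypothetical names; a full
# implementation might also block immediate repeats to keep the ordering
# "semi-random" rather than fully random).
import random

def build_baseline_sequence(targets, presentations_per_target=4, mastered_target="Touch your nose"):
    """Return a trial list: each target presented four times in random order,
    with a previously mastered target interspersed after every third trial."""
    trials = targets * presentations_per_target        # 3 targets x 4 presentations = 12 trials
    random.shuffle(trials)
    sequence = []
    for i, trial in enumerate(trials, start=1):
        sequence.append(("probe", trial))               # neutral statement, no programmed consequence
        if i % 3 == 0:
            sequence.append(("mastered", mastered_target))  # edible contingent on a correct response
    return sequence

print(build_baseline_sequence(["barber", "cashier", "nurse"]))
```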
Treatment
Treatment sessions introduced differential consequences for correct and incorrect responding and an error correction procedure. As in baseline, the researcher presented the three target stimuli, in a semi-randomized order, four times in each condition. Researchers presented the verbal SD “What is it?” and the picture of the target stimulus. Contingent on a correct response, the researcher delivered praise and an edible reinforcer and waited 10 s for the participant to consume the edible before presenting the subsequent trial. If the participant responded incorrectly or gave no response, the researcher implemented an error correction procedure. During the error correction procedure, the researcher withheld praise, withheld the edible reinforcer, and removed all instructional stimuli; in the telehealth condition, this involved presenting a blank PowerPoint slide. The researcher then re-presented the SD “What is it?” and the picture of the target stimulus and provided a verbal model of the correct response at a 0-s delay. Contingent on a correct prompted response, the researcher provided a neutral statement (e.g., “that is the [target]”). Following this neutral statement, the researcher re-presented the SD “What is it?” and the picture of the target stimulus for the participant to respond independently. Contingent on a correct independent response following the error correction procedure, the researcher provided praise and moved to the next target stimulus. If the participant responded incorrectly to the prompt, researchers repeated the error correction procedure until the participant responded correctly. Only trials in which the participant responded correctly to the initial SD were graphed as correct for each session.
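The contingencies in a treatment trial, including the error correction loop, can be summarized with the following illustrative sketch. It is a minimal approximation under stated assumptions (hypothetical names; canned responses stand in for the child; we assume error correction repeats if the independent response after a correct prompted response is again incorrect), not the study's actual procedure code.

```python
# Illustrative sketch of the treatment-trial contingencies described above.
def run_treatment_trial(target, get_response):
    response = get_response("What is it? [picture]")     # initial SD, 5-s response window
    if response == target:
        return "independent correct"                      # praise + edible, 10 s to consume
    while True:                                           # error correction procedure
        # Stimuli removed (blank slide in telehealth), then SD re-presented with a
        # 0-s delay vocal model of the correct response.
        prompted = get_response(f'What is it? [picture] ... "{target}"')
        if prompted != target:
            continue                                      # prompted error: repeat error correction
        # Neutral statement ("that is the ..."), then SD re-presented for an independent response.
        independent = get_response("What is it? [picture]")
        if independent == target:
            return "corrected"  # praise; not graphed as an independent correct response

# Example with canned responses standing in for the child (error, prompted correct, then correct):
answers = iter(["plumber", "nurse", "nurse"])
print(run_treatment_trial("nurse", lambda sd: next(answers)))  # -> corrected
```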
Generalization
Researchers conducted generalization probes following baseline and intervention. Generalization probes followed the same procedures as baseline, except that researchers presented one novel image of each occupation for each participant. Generalization stimuli also included a background displaying stimuli relevant to that occupation (e.g., a construction worker displayed in front of a bulldozer). Researchers selected these stimuli based on treatment goals for each participant.
Maintenance
Researchers conducted maintenance sessions following the same procedures as baseline. Maintenance sessions occurred 1, 2, and 3 weeks after the participant met the mastery criterion during intervention. Each maintenance session began by presenting the three target stimuli, in a semi-randomized order, four times in each condition. The researcher provided the verbal SD “What is it?” + [target stimulus] and allowed 5 s for the participant to respond. Researchers delivered a neutral statement (e.g., “thank you”) regardless of correct or incorrect responses and then presented the subsequent trial. To promote continued responding during maintenance, researchers presented an unrelated, participant-specific, previously mastered target after every third trial and delivered edible reinforcers contingent on correct responses to the mastered targets.
Results
Figure 1 displays the percentage of independent correct responses during baseline, intervention, generalization probes, and maintenance phases. Figure 2 represents the sessions to mastery criteria for each condition and participant. During baseline, Eddy engaged in no correct responding for both instructional and generalization targets. Once we introduced the intervention, his responding increased to 100% correct for three consecutive sessions for both the telehealth and in-person conditions. For the telehealth condition, his responding in intervention averaged 78.2% correct (range: 8%–100%), and the average for the in-person condition was 88.2% correct (range: 58%–100%). He mastered all targets for both conditions in five sessions. After intervention, we conducted our generalization probes. Eddy performed at 0% correct for in-person and 25% correct for telehealth conditions. However, he maintained his intervention levels of responding at 100% correct during the 1-, 2-, and 3-week probes across both conditions.
Fig. 1.
Participants’ Results. Note. Graphic representation of the percentage of independent correct responses across participants and modalities. Open squares denote the in-person condition. Closed circles denote the telehealth condition. Triangles denote generalization probes
Fig. 2.
Sessions to Criterion. Note. The black bar represents the telehealth condition. The gray bar represents the in-person condition
Likewise, during baseline, Lloyd responded at 0% correct across instructional and generalization targets. During intervention, his performance increased across both conditions, successfully meeting the mastery criteria of three consecutive sessions above 90% correct. For the telehealth condition, he averaged 86.3% correct (range: 42%–100%) and for the in-person condition, he averaged 70.8% correct (range: 0%–100%). It took Lloyd six sessions to reach the mastery criteria for both conditions. During our intervention generalization probes, he performed at 0% correct for in-person and 25% correct for telehealth conditions. During maintenance, Lloyd averaged 94.7% correct (range: 92%–100%) for the in-person condition. Responding in maintenance during the telehealth condition was 92% correct.
Suzy did not engage in any correct responses to any instructional or generalization targets during baseline. In intervention, she met the mastery criteria for both conditions. Like Lloyd, Suzy met the mastery criteria for each condition in six sessions. Her average percentage correct was 83.5% (range: 58%–100%) for the telehealth condition and 80.7% (range: 50%–100%) for the in-person condition. During the post-intervention generalization probes, she performed at 25% correct in the in-person condition and 17% correct in the telehealth condition. At 1-, 2-, and 3-week follow-ups, Suzy maintained 100% correct responding for all stimuli across both instructional modalities.
Table 3 displays data on efficiency for each participant. For all three participants, on average, the telehealth condition was more efficient during baseline but the in-person condition was more efficient during intervention. On average, both conditions were similar in efficiency during maintenance, although in-person sessions were shorter by 13 s.
Table 3.
Average Session Duration (min:s)
| Participant | Baseline |  | Treatment |  | Maintenance |  |
|---|---|---|---|---|---|---|
|  | In-Person (Avg.) | Telehealth (Avg.) | In-Person (Avg.) | Telehealth (Avg.) | In-Person (Avg.) | Telehealth (Avg.) |
| Eddy | 5:27 | 3:55 | 5:05 | 5:39 | 2:40 | 3:11 |
| Lloyd | 4:12 | 3:26 | 5:32 | 6:20 | 3:30 | 3:27 |
| Suzy | 3:43 | 3:40 | 5:22 | 6:28 | 3:57 | 4:08 |
| Total (Avg.) | 4:27 | 3:40 | 5:19 | 6:09 | 3:22 | 3:35 |
For Eddy, during baseline, the telehealth condition was more efficient by 1:32. During intervention the in-person condition was slightly more efficient by 34 s. During maintenance, the in-person condition was more efficient by 31 s.
For Lloyd, the in-person condition was more efficient during intervention; for example, telehealth intervention sessions lasted on average 48 s longer than in-person sessions. Maintenance sessions were similar in duration across the two conditions. However, the largest difference for Lloyd occurred during baseline, in which telehealth sessions were on average 52 s shorter than in-person sessions.
Likewise, for Suzy, the in-person condition was more efficient during intervention and maintenance sessions, by 1:12 and 12 s, respectively. During baseline, the in-person and telehealth conditions were nearly equal in duration.
Discussion
The purpose of the present study was to compare the effectiveness of DTT delivered remotely via telehealth and in-person on the acquisition of labeling occupations for children diagnosed with ASD in an EIBI program. Our results demonstrate that all participants met mastery criteria for labeling occupations in both the telehealth and in-person conditions. In addition, all participants required the same number of training sessions to meet the mastery criteria in both conditions. Further, we found the telehealth condition was more efficient during baseline sessions, the in-person condition was more efficient during intervention sessions, and both had similar efficiency during maintenance. Overall, our results suggest that telehealth instruction was as effective as in-person instruction for teaching this particular skill to these particular participants. Further, participants in the present study maintained their performance over 3 weeks at an average of 97% in the telehealth condition and 98% in the in-person condition. These results add to the growing body of research on the effectiveness of telehealth-based behavior analytic instruction.
Our results are consistent with the findings of Knopp et al. (2023). In both studies, researchers found little difference in acquisition across telehealth and in-person conditions. The current results and those found by Knopp et al. (2023) suggest that using direct telehealth to deliver EOL instruction to children with ASD may be a viable strategy. Compared to in-person instruction, there was minimal difference in acquisition across conditions. In addition, along with Knopp et al., we also found that all participants maintained the skills over time. However, one variable to consider is the participants’ history with DTT. Across both studies, participants previously experienced DTT instruction prior to their participation. Their previous experience with DTT may have contributed to their rapid acquisition of targets across both conditions. Future researchers should consider including participants without prior DTT experience to determine how this variable affects their acquisition of skills in the telehealth format.
Given the impact of the COVID-19 pandemic, these results offer some potential practical implications. First, the current findings suggest that direct, remote telehealth service provision can improve skill repertoires in children with ASD. Although the research is still limited, the evidence suggests that teaching EOL targets remotely via telehealth may be as effective as in-person instruction. Behavior analysts can use telehealth technologies to promote skill acquisition from a distance while adhering to local and state requirements. This could also be helpful outside of pandemic contexts for families living in rural areas or areas with limited behavior analytic services. Future researchers should continue to evaluate the effectiveness of behavior analytic interventions delivered via telehealth. Additional research could provide further support for the telehealth modality as a tool for accessing behavioral services remotely.
Second, the environmental conditions of the telehealth condition in this study may more accurately reflect what behavior analysts experience when delivering telehealth interventions than those present in Knopp et al. (2023). For example, Suzy often joined sessions from the couch in her living room and frequently lay across or sat on her mother's lap during reinforcement breaks or before instruction began. Likewise, Eddy’s sessions occasionally needed to be paused so his mother could attend to requests from a younger sibling. Finally, Lloyd's sessions were occasionally interrupted by the family dog, but a verbal redirection from his caregivers regained his attention before instruction resumed. Even with these environmental variables at play, the results demonstrate that telehealth was as effective as in-person instruction in teaching expressive labels to these children.
Third, using PowerPoint to display instructional materials during DTT may help ease the burden of material creation. Behavior analytic services often require many physical stimuli (e.g., flashcards, reinforcers), and PowerPoint might serve as one way to manage these materials. We found that using PowerPoint to display materials allowed for quick creation and easy manipulation of the presentation order of the stimuli. In addition, all files can be saved directly to a computer and easily accessed later, which might help prevent losing or damaging materials.
Although the present study demonstrates little difference in the effectiveness of DTT delivered via telehealth and in-person on EOL targets, it is not without limitations. Participant availability limited the degree to which we could rapidly alternate between conditions. Due to illness, Lloyd and Suzy could not always receive intervention sessions in a rapidly alternating fashion. For example, while ill, Lloyd received the telehealth intervention but could not receive the in-person instruction; in-person instruction resumed once Lloyd returned. However, given the rapid acquisition in both conditions, this variable is unlikely to have significantly influenced their responding.
We attempted to extend previous research by testing for generalization across conditions. However, the lack of stimulus generalization we observed is an unsurprising yet notable limitation of this study. Although producing stimulus generalization was not a primary goal of the current study, we did not produce it to a meaningful degree. In baseline, participants responded at an average of 0% across telehealth and in-person conditions. During intervention, participants responded at an average of 22.3% in the telehealth condition and 8.3% in the in-person condition. A likely reason we failed to observe stimulus generalization is that we did not explicitly program for it in our DTT procedures. That is, during intervention, the same depiction of an occupation was presented four times during each session (e.g., the same cashier was presented four times during each teaching session) across all teaching sessions. Because we did not teach multiple exemplars of each stimulus, this may have contributed to the lack of stimulus generalization in the current study. Stokes and Baer (1977) recommend teaching sufficient exemplars to promote stimulus generalization rather than relying on the “train and hope” method that we employed. Given the limited research in this area, we focused primarily on evaluating the acquisition of EOL targets under DTT delivered via telehealth and in-person rather than on generalization of the skill, and therefore did not employ multiple exemplar training. Future researchers should consider how to incorporate multiple exemplar training or other methods to promote generalization into direct telehealth services and evaluate their impact on generalization.
An additional limitation is that parents were available during all telehealth sessions, which may not reflect typical service environments. Parents often cannot participate directly in behavior analytic services due to a number of variables (e.g., work or lack of childcare), which affects their ability to deliver reinforcers or redirect their child during telehealth-based sessions. Therefore, it is important for future researchers to evaluate how to minimize reliance on caregivers during telehealth sessions.
Further, the parents who participated in this study all had previous experience implementing behavior analytic interventions in their homes, which likely influenced their ability to support the telehealth sessions. For example, in this study, the researcher used verbal instructions to explain what behaviors the caregiver needed to engage in during telehealth sessions. For families just beginning services, verbal instructions may not be sufficient to produce the desired behaviors. In these situations, more rigorous training procedures, such as behavioral skills training, may be required (Schieltz & Wacker, 2020).
Our preliminary results suggest that DTT delivered via telehealth and in-person to teach EOL of occupations to children with ASD produced similar effects. Although this line of research adds to the literature on telehealth, additional research is needed. Researchers should continue to evaluate which interventions can be effectively delivered via telehealth and compare those results to what is typically done in-person. This line of research may open the door to those who might benefit from behavior analysis but cannot receive services in-person. In addition, future researchers may consider the impact that previous exposure to DTT may have on the acquisition of these skills. Individuals who have not experienced in-person DTT may respond differently to the telehealth-based DTT than our participants did. Research in this area could provide information about how behavior analysts can effectively provide services to this population of learners.
Future researchers may also consider the role caregivers play in telehealth-based sessions. In this study, the parents had previous experience implementing behavioral interventions and were available to sit in on all sessions. It may not be feasible for all parents to participate in every telehealth-based session. Future research evaluating ways to support child independence via telehealth is therefore warranted. This line of research could support families who would benefit from telehealth services but are unable to participate in all sessions.
Finally, future research may also consider the social validity of participant or caregiver preference for intervention modality. Evaluating preference for intervention modality would allow behavior analysts to take into consideration clients’ and stakeholders’ choice of treatment and to continue promoting socially valid behavioral intervention delivery. In addition, if participants do prefer telehealth-based interventions, it would be worthwhile to assess potential barriers to caregivers’ participation in telehealth-based services. Assessing these barriers could allow behavior analysts to identify solutions that support families in accessing interventions via telehealth (e.g., teaching children the prerequisite skills necessary to participate in telehealth services).
All in all, we found that both DTT delivered remotely via telehealth and in-person produced similar effects in acquisition, maintenance, and efficiency for all participants. Our results were congruent with Knopp et al. (2023). Our findings add to the existing literature on interventions delivered via telehealth compared to in-person delivery and add support to using telehealth as a modality for teaching EOL targets to children with ASD. We believe it is important for researchers to continue to evaluate the effectiveness of interventions delivered via telehealth. Doing so would allow our field to best support consumers who could greatly benefit from behavior analytic interventions.
Data Availability
The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics Approval
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional review board of the university and with the 1964 Helsinki declaration and its later amendments.
Informed Consent
Written informed consent was obtained from legal guardians for all participants included in the study.
Conflict of Interest
The authors have no conflicts of interest to declare.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Barkaia, A., Stokes, T. F., & Mikiashvili, T. (2017). Intercontinental telehealth coaching of therapists to improve verbalizations by children with autism. Journal of Applied Behavior Analysis, 50, 582–589. 10.1002/jaba.391
- Barretto, A., Wacker, D. P., Harding, J., Lee, J., & Berg, W. K. (2006). Using telemedicine to conduct behavioral assessments. Journal of Applied Behavior Analysis, 39(3), 333–340. 10.1901/jaba.2006.173-04
- Benson, S. S., Dimian, A. F., Elmquist, M., Simacek, J., McComas, J. J., & Symons, F. J. (2018). Coaching parents to assess and treat self-injurious behavior via telehealth. Journal of Intellectual Disability Research, 62(12), 1114–1123. 10.1111/jir.12456
- Cariveau, T., Batchelder, S., Ball, S., & La Cruz Montilla, A. (2021). Review of methods to equate target sets in the adapted alternating treatments design. Behavior Modification, 45(5), 695–714. 10.1177/0145445520903049
- Council of Autism Service Providers. (2021). Practice parameters for telehealth-implementation of applied behavior analysis: Second edition. Wakefield, MA: Author.
- Dimian, A. F., Elmquist, M., Reichle, J., & Simacek, J. (2018). Teaching communicative responses with a speech-generating device via telehealth coaching. Advances in Neurodevelopmental Disorders, 2(1), 86–99. 10.1007/s41252-018-0055-7
- Eldevik, S., Hastings, R. P., Hughes, J. C., Jahr, E., Eikeseth, S., & Cross, S. (2009). Meta-analysis of early intensive behavioral intervention for children with autism. Journal of Clinical Child & Adolescent Psychology, 38(3), 439–450. 10.1080/15374410902851739
- Ferguson, J. L., Majeski, M. J., McEachin, J., Leaf, R., Cihon, J. H., & Leaf, J. B. (2020). Evaluating discrete trial teaching with instructive feedback delivered in a dyad arrangement via telehealth. Journal of Applied Behavior Analysis, 53(4), 1876–1888. 10.1002/jaba.773
- Fisher, W. W., Luczynski, K. C., Hood, S. A., Lesser, A. D., Machado, M. A., & Piazza, C. C. (2014). Preliminary findings of a randomized clinical trial of a virtual training program for applied behavior analysis technicians. Research in Autism Spectrum Disorders, 8(9), 1044–1054. 10.1016/j.rasd.2014.05.002
- Frieder, J. E., Peterson, S. M., Woodward, J., Crane, J., & Garner, M. (2009). Teleconsultation in school settings: Linking classroom teachers and behavior analysts through web-based technology. Behavior Analysis in Practice, 2(2), 32–39. 10.1007/BF03391746
- Gibson, J. L., Pennington, R. C., Stenhoff, D. M., & Hopper, J. S. (2010). Using desktop videoconferencing to deliver interventions to a preschool student with autism. Topics in Early Childhood Special Education, 29(4), 214–225. 10.1177/0271121409352873
- Gould, E., Dixon, D. R., Najdowski, A. C., Smith, M. N., & Tarbox, J. (2011). A review of assessments for determining the content of early intensive behavioral intervention programs for autism spectrum disorders. Research in Autism Spectrum Disorders, 5(3), 990–1002. 10.1016/j.rasd.2011.01.012
- Hay-Hansson, A. W., & Eldevik, S. (2013). Training discrete trials teaching skills using videoconference. Research in Autism Spectrum Disorders, 7(11), 1300–1309. 10.1016/j.rasd.2013.07.022
- Health Resources & Services Administration. (n.d.). What is telehealth? Retrieved November 8, 2022, from https://www.hrsa.gov/rural-health/topics/telehealth/what-is-telehealth
- Higgins, W. J., Luczynski, K. C., Carroll, R. A., Fisher, W. W., & Mudford, O. C. (2017). Evaluation of a telehealth training package to remotely train staff to conduct a preference assessment. Journal of Applied Behavior Analysis, 50, 238–251. 10.1002/jaba.370
- Hoffmann, A. N., Bogoev, B. K., & Sellers, T. P. (2019). Using telehealth and expert coaching to support early childhood special education parent-implemented assessment and intervention procedures. Rural Special Education Quarterly, 38(2), 95–106. 10.1177/8756870519844162
- Knopp, K., Ferguson, J. L., Piazza, J., Weiss, M. J., Lee, M., Cihon, J. H., & Leaf, J. B. (2023). A comparison between direct telehealth and in-person methods of teaching expressive labels to children diagnosed with autism spectrum disorder. Behavior Modification, 47(2), 432–453.
- Lindgren, S., Wacker, D., Suess, A., Schieltz, K., Pelzel, K., Kopelman, T., Lee, J., Romani, P., & Waldron, D. (2016). Telehealth and autism: Treating challenging behavior at lower cost. Pediatrics, 137(Suppl 2), S167–S175. 10.1542/peds.2015-2851O
- Lindgren, S., Wacker, D., Schieltz, K., Suess, A., Pelzel, K., Kopelman, T., Lee, J., Romani, P., & O’Brien, M. (2020). A randomized controlled trial of functional communication training via telehealth for young children with autism spectrum disorder. Journal of Autism & Developmental Disorders, 50, 4449–4462. 10.1007/s10803-020-04451-1
- Machalicek, W., O’Reilly, M., Chan, J. M., Lang, R., Rispoli, M., Davis, T., Shogren, K., Sigafoos, J., Lancioni, G., Antonucci, M., Langthorne, P., Andrews, A., & Didden, R. (2009). Using videoconferencing to conduct functional analysis of challenging behavior and develop classroom behavioral support plans for students with autism. Education & Training in Developmental Disabilities, 44(2), 207–217. http://www.jstor.org/stable/24233495
- Martens, B. K., Baxter, E. L., McComas, J. J., Sallade, S. J., Kester, J. S., Caamano, M., Dimian, A., Simacek, J., & Pennington, B. (2019). Agreement between structured descriptive assessments and functional analyses conducted over a telehealth system. Behavior Analysis: Research & Practice, 19(4), 343–356. 10.1037/bar0000153
- Monlux, K. D., Pollard, J. S., Bujanda Rodriguez, A. Y., & Hall, S. S. (2019). Telehealth delivery of function-based behavioral treatment for problem behavior exhibited by boys with fragile X syndrome. Journal of Autism & Developmental Disorders, 49(6), 2461–2475. 10.1007/s10803-019-03963-9
- Nohelty, K., Bradford, C. B., Hirschfeld, L., Miyake, C., & Novack, M. N. (2021). Effectiveness of telehealth direct therapy for individuals with autism spectrum disorder. Behavior Analysis in Practice, 15, 643–658. 10.1007/s40617-021-00603-6
- Reichow, B., Hume, K., Barton, E. E., & Boyd, B. A. (2018). Early intensive behavioral intervention (EIBI) for young children with autism spectrum disorders (ASD). The Cochrane Database of Systematic Reviews, 5(5), CD009260. 10.1002/14651858.CD009260.pub3
- Schieltz, K. M., & Wacker, D. P. (2020). Functional assessment and function-based treatment delivered via telehealth: A brief summary. Journal of Applied Behavior Analysis, 53(3), 1242–1258. 10.1002/jaba.742
- Schieltz, K. M., Romani, P. W., Wacker, D. P., Suess, A. N., Huang, P., Berg, W. K., Lindgren, S. D., & Kopelman, T. G. (2018). Single-case analysis to determine reasons for failure of behavioral treatment via telehealth. Remedial & Special Education, 39(2), 95–105. 10.1177/0741932517743791
- Simacek, J., Dimian, A. F., & McComas, J. J. (2017). Communication intervention for young children with severe neurodevelopmental disabilities via telehealth. Journal of Autism & Developmental Disorders, 47(3), 744–767. 10.1007/s10803-016-3006-z
- Smith, T. (2001). Discrete trial training in the treatment of autism. Focus on Autism and Other Developmental Disabilities, 16(2), 86–92. 10.1177/108835760101600204
- Stokes, T. F., & Baer, D. M. (1977). An implicit technology of generalization. Journal of Applied Behavior Analysis, 10(2), 349–367.
- Suess, A. N., Romani, P. W., Wacker, D. P., Dyson, S. M., Kuhle, J. L., Lee, J. F., Lindgren, S. D., Kopelman, T. G., Pelzel, K. E., & Waldron, D. B. (2014). Evaluating the treatment fidelity of parents who conduct in-home functional communication training with coaching via telehealth. Journal of Behavioral Education, 23(1), 34–59. 10.1007/s10864-013-9183-3
- Suess, A., Wacker, D., Schwartz, J. E., Lustig, N., & Detrick, J. (2016). Preliminary evidence in the use of telehealth in an outpatient behavior clinic. Journal of Applied Behavior Analysis, 49(3), 686–692. 10.1002/jaba.305
- Sump, L. A., Richman, D. M., Schaefer, A. M., Grubb, L. M., & Brewer, A. T. (2018). Telehealth and in-person training outcomes for novice discrete trial training therapists. Journal of Applied Behavior Analysis, 51(3), 466–481. 10.1002/jaba.461
- Tsami, L., Lerman, D., & Toper-Korkmaz, O. (2019). Effectiveness and acceptability of parent training via telehealth among families around the world. Journal of Applied Behavior Analysis, 52(4), 1113–1129. 10.1002/jaba.645
- Wacker, D. P., Lee, J. F., Padilla Dalmau, Y. C., Kopelman, T. G., Lindgren, S. D., Kuhle, J., Pelzel, K. E., Dyson, S., Schieltz, K. M., & Waldron, D. B. (2013a). Conducting functional communication training via telehealth to reduce the problem behavior of young children with autism. Journal of Developmental & Physical Disabilities, 25(1), 35–48. 10.1007/s10882-012-9314-0
- Wacker, D. P., Lee, J. F., Padilla Dalmau, Y. C., Kopelman, T. G., Lindgren, S. D., Kuhle, J., Pelzel, K. E., & Waldron, D. B. (2013b). Conducting functional analyses of problem behavior via telehealth. Journal of Applied Behavior Analysis, 46(1), 31–46. 10.1002/jaba.29