Behavior Analysis in Practice. 2021 Jan 19;14(1):120–130. doi: 10.1007/s40617-020-00507-x

Using Transfer Trials to Teach Tacting to Children With Autism Spectrum Disorder

Alexandria R Dell’Aringa 1, Jessica F Juanico 1,2, Kelley L Harrison 1,2
PMCID: PMC7900303  PMID: 33732582

Abstract

Transfer trials are a component of discrete-trial training in which the therapist re-presents the initial instruction following a prompted trial to provide an opportunity for the learner to answer independently. Transfer trials may expedite the transfer of stimulus control, are commonly used by practitioners and researchers, and are often recommended as best practice by applied behavior analysis organizations. However, there is little research comparing the efficiency and efficacy of transfer trials to more traditional teaching procedures. The current study evaluated and compared transfer trials to a nontransfer trial procedure for two-component tacting with three children diagnosed with an autism spectrum disorder. Results indicated that both procedures were effective and efficient for teaching two-component tacts for all learners, supporting the inclusion of transfer trials in discrete-trial training.

Keywords: behavior analysis, autism spectrum disorder, discrete-trial training, transfer trials, skill acquisition, tact training


Discrete-trial training (DTT) is a method for simplifying and individualizing instruction to promote skill acquisition. DTT is commonly used to teach new skills to individuals with autism spectrum disorder (ASD) and other developmental disabilities and is composed of discrete trials. Discrete trials are small units of instruction, each consisting of five parts: a cue (or instruction), a prompt, a response, a consequence, and an intertrial interval (Smith, 2001). For example, a therapist may perform a listener-responding discrete trial by laying out three pictures, saying “Point to cow” (instruction), and pointing to the cow picture (prompt). The learner might respond by pointing to the cow picture (response) and receive praise and a small candy (consequence). The therapist may shuffle or remove the cards before presenting the next discrete trial (intertrial interval).

Prompts, one of the five parts of a discrete trial (Smith, 2001), occur concurrently with or immediately after the presentation of the instruction and involve presenting a stimulus (i.e., prompt) that already controls a behavior (Coon & Miguel, 2012). Prompts are gradually and systematically faded to transfer stimulus control from the prompt to the instruction. Procedures that are used to transfer stimulus control from the prompt to the instruction include constant and progressive time delays, graduated guidance, most-to-least prompting, least-to-most prompting (Wolery & Gast, 1984), and transfer trials (e.g., Carbone, 2016; Carbone et al., 2006; Valentino et al., 2015).

Transfer trials are recommended as a procedure to include during skill acquisition by multiple organizations, including the Carbone Clinic (Carbone, 2016), the PaTTAN Autism Initiative (Hozella & Ampuero, 2014), and the Pennsylvania Verbal Behavior Project (2009). Transfer trials occur after prompted trials and give the learner a chance to respond independently. Carbone (2016) described transfer trials as a component of errorless teaching. After the instruction and an immediate prompt are presented, Carbone suggested re-presenting the instruction, allowing an opportunity for the individual to respond independently, and removing the opportunity for an error following the initial instruction. The second presentation of the instruction is referred to as the transfer trial. That is, the therapist models the response and provides an opportunity for the student to respond. Transfer trials have also been implemented following correct prompted responses in error correction (e.g., Frampton & Shillingsburg, 2018; Valentino et al., 2012). It is possible that the additional practice trial in the absence of an error increases the rate of skill acquisition and long-term maintenance. Similarly, the addition of the transfer trial may allow the response to come under stimulus control of the instruction rather than the prompt (Valentino et al., 2015). Although it appears that transfer trials may enhance the transfer of stimulus control, there have been few demonstrations of this effect in the literature as compared to other procedures. Therefore, it is important that researchers evaluate the effects of transfer trials on skill acquisition programs.

Although there is little research that specifically targets the efficacy of transfer trials as compared to more common DTT procedures, several studies have programmed transfer trials within their skill acquisition programming and have demonstrated acquisition of tacts (e.g., Carbone et al., 2006; Frampton et al., 2016; Frampton et al., 2019), mean length of utterances (e.g., Shillingsburg et al., 2020), intraverbal behavior (e.g., Valentino et al., 2012; Valentino et al., 2015), and other skills (e.g., Frampton & Shillingsburg, 2018). For example, Carbone et al. (2006) used transfer trials in their comparison of two approaches for tact training for one individual with ASD. Transfer trials were part of the “stimulus control transfer procedure” in both the vocal-alone and total communication training conditions. During teaching trials, the therapist conducted a prompted trial with a model (i.e., presented a picture and said “What is it? Broom.”), followed by a transfer trial. They re-presented the image, asked “What is it?” and waited 3 s for the learner to respond independently. Incorrect responses led to error correction, and correct responses were followed by praise, then two to three mastered tasks and a second transfer trial (referred to as a “test trial”). Results of the study suggest that the total communication training was more effective than the vocal-alone training for the acquisition of tacts. Although transfer trials were included in both conditions, it is unclear why one training procedure was more effective than the other, which may suggest that transfer trials enhance the efficacy of some training procedures, but not all.

Similarly, Valentino et al. (2015) used transfer trials in their study on intraverbal storytelling with three children with ASD. Specifically, transfer trials were included to enhance the transfer of stimulus control from the text to the instruction. A transfer trial was conducted following prompted trials. In the prompted trial, the experimenter presented the instruction (i.e., “Tell me a story about . . .”) and opened the book to the target segment, which included both the text and pictures. After the prompted trial, the experimenter implemented a transfer trial and covered target segments with a blank page, requiring the learners to independently respond to the instruction (i.e., “Tell me a story about . . .”) by describing the next story segment. Incorrect responses led to error correction, and correct responses were praised. The results suggest that the training procedures were effective in teaching storytelling for all three children, suggesting that transfer trials may enhance the effectiveness and efficiency of some training procedures. Although it has not been directly evaluated, it is possible that the transfer trials enhanced the transfer of stimulus control from the prompt (i.e., story text) to the instruction (i.e., “Tell me a story about . . .”).

The results of these studies and others (e.g., Carbone et al., 2006; Valentino et al., 2015) suggest that the inclusion of transfer trials may expedite the transfer of stimulus control from prompted to unprompted conditions; however, it is unclear whether transfer trials increase the efficiency of these DTT procedures. Because transfer trials are recommended by organizations (e.g., Carbone, 2016; Hozella & Ampuero, 2014; The Pennsylvania Verbal Behavior Project, 2009) and used by both practitioners and researchers (e.g., Carbone et al., 2006; Valentino et al., 2015), it is important to conduct research on the efficacy of transfer trials. Therefore, the purpose of this study was to evaluate and compare the effectiveness and efficiency of transfer trials to a more traditional DTT procedure for two-component tacting. Determining if transfer trials increase the rate of skill acquisition may provide support as to whether practitioners and researchers should include transfer trials during DTT.

Method

Participants

We recruited learners from a clinic-based applied behavior analysis (ABA) program. To be included in the study, learners were required to (a) echo phrases with three to five syllables (e.g., “girl running,” “unicorn painting”); (b) tact animals, people, and verbs; and (c) sit at a table for DTT for up to 15 min. We excluded learners with severe problem behavior that delayed or interrupted instruction. Based on our inclusion and exclusion criteria, three 4-year-old boys with a diagnosis of ASD participated. All three learners had a common deficit in two-component tacting, scored between 85.5 and 141 on the Verbal Behavior Milestones Assessment and Placement Program (VB-MAPP; Sundberg, 2008), and had experience with transfer trial and nontransfer trial procedures within the context of DTT.

Logan scored 141 on the VB-MAPP, Steve scored 106.5, and Peter scored 85.5. All three learners were able to tact at least 100 items, mand using two or more words, fill in the blanks for simple intraverbal phrases (e.g., “1, 2, . . .”), follow one-step instructions, match identical objects in a messy array of eight items, echo short phrases with three to five syllables, and spontaneously imitate tasks in the natural environment. Steve and Logan were also able to identify some letters and numbers, count to five, engage in 10 min of play without adult intervention, select items based on color/shape, and match nonidentical items in a messy array of 10.

Therapists and Data Collectors

Therapists and data collectors were seven Registered Behavior Technicians and one Board Certified Behavior Analyst. They were selected based on their experience, proficiency with DTT programming, recommendations from the clinical team, and their client caseload. Therapists who regularly worked with the learners selected for the study were given priority over therapists who did not have clients in the study. Selected therapists had been working at the ABA clinic for between 6 months and 2 years. Alexandria R. Dell’Aringa trained all therapists and data collectors prior to implementing sessions or collecting data. Additionally, treatment integrity measures were taken to ensure high procedural fidelity of the therapists’ behavior throughout the study (see Treatment Integrity).

Setting and Materials

Trained therapists conducted sessions at the learners’ assigned workspaces in the ABA clinic. Peter’s sessions sometimes took place at a workspace in the hallway of his preschool where he received ABA services 2 days per week. Regardless of location, session materials included a table, two chairs, nine 2D picture cards (three cards assigned to each of the three conditions), paper data sheets, and two highly preferred edible items (identified by a paired-stimulus preference assessment). Up to six sessions were conducted each day, with no more than two sessions per condition per day. Each session lasted approximately 3 to 5 min, but session length varied based on the condition and the learner’s responses.

Response Measurement

Trained data collectors collected data on participant behavior on a trial-by-trial basis using paper and pencil. For each trial, data collectors recorded the presence or absence of a correct response, prompted response, error response, or no response for participant behavior. A correct response was defined as an independent response within 3 s of the initial instruction, prompted response as a correct response following a model prompt, error response as any response other than the correct response within 3 s of the initial instruction, and no response as failing to respond within 3 s of the initial instruction. The main dependent variable was the percentage of correct responding. We determined the percentage of correct responding by calculating the total number of trials with a correct response, dividing by the total number of trials, and multiplying by 100. We also collected data on correct, incorrect, and no responses following a transfer trial; however, these data were not included in the data analysis.
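
To make the calculation concrete, the sketch below computes percentage correct from a hypothetical trial-by-trial record; data in the study were collected with paper and pencil, so this is only an illustration of the arithmetic, not software used by the authors.

```python
# Hypothetical sketch of the percentage-correct calculation described above.
def percentage_correct(trial_codes):
    """trial_codes: one code per trial, e.g., 'correct', 'prompted', 'error', or 'no_response'."""
    if not trial_codes:
        return 0.0
    independent_correct = sum(code == "correct" for code in trial_codes)
    return independent_correct / len(trial_codes) * 100

# Example: a 12-trial session with 9 independent correct responses yields 75% correct.
session = ["correct"] * 9 + ["error", "prompted", "no_response"]
print(percentage_correct(session))  # 75.0
```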

Treatment Integrity

Trained data collectors also collected data on therapist behavior on a trial-by-trial basis using paper and pencil. For each trial, data collectors recorded whether transfer trial, reinforcement, and error correction were implemented correctly. A correct implementation of a transfer trial was defined as re-presenting the initial instruction and allowing the learner 3 s to answer independently following a 0-s delay trial in the transfer trial condition, the absence of a transfer trial in the nontransfer trial or control condition, or the absence of a transfer trial following a 3-s delay trial in the transfer trial condition. A correct implementation of reinforcement was defined as providing praise and a small edible after a correct response or not providing praise or edibles after an error response across all conditions. A correct implementation of error correction was defined as conducting a 0-s prompt-delay trial immediately following any error response in the nontransfer trial condition, conducting a 0-s prompt-delay trial immediately following an error response to the transfer trial in the transfer trial condition, not conducting error correction following a correct response in any condition, or not conducting error correction in the control condition. For all three therapist behaviors, we calculated treatment integrity by dividing the total number of correct trials by the total number of trials and multiplying by 100 for a given session. The mean treatment integrity was 99.76% for Steve (range 94.44%–100%), 99.89% for Logan (range 97.22%–100%), and 99.96% for Peter (range 97.22%–100%).

Interobserver Agreement

A second independent observer collected data for at least 30% of all sessions for each participant. Interobserver agreement (IOA) was calculated across all participant behavior (i.e., correct, prompted, error, and no response) and therapist behavior (i.e., correct implementation of a transfer trial, reinforcement, and error correction) using trial-by-trial agreement. An agreement was defined as both observers scoring the presence or absence of the same behavior on the same trial for participant behavior and a correct or incorrect implementation of the same therapist behavior on the same trial for therapist behavior. For each behavior, we divided the number of agreements by the total number of trials and multiplied by 100. The mean IOA for participant behavior for Steve was 99.83% (range 91.67%–100%) and was collected during 31.58% of sessions, for Peter was 99.90% (range 91.67%–100%) and was collected during 41.67% of sessions, and for Logan was 98.53% (range 83.33%–100%) and was collected during 33.33% of sessions. The mean IOA for therapist behavior for Steve was 98.29% (range 75%–100%) and was collected during 31.58% of sessions, for Peter was 100% and was collected during 41.67% of sessions, and for Logan was 100% and was collected during 33.33% of sessions.
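
The trial-by-trial agreement calculation can be sketched the same way; the observer records below are hypothetical and stand in for the paper-and-pencil data sheets used in the study.

```python
# Hypothetical sketch of trial-by-trial interobserver agreement (IOA) for one behavior.
def trial_by_trial_ioa(primary_record, secondary_record):
    """Each record holds one code per trial; an agreement is the same code scored on the same trial."""
    assert len(primary_record) == len(secondary_record), "Both observers must score the same trials."
    agreements = sum(a == b for a, b in zip(primary_record, secondary_record))
    return agreements / len(primary_record) * 100

primary = ["correct", "error", "correct", "no_response", "prompted", "correct"]
secondary = ["correct", "error", "correct", "error", "prompted", "correct"]
print(round(trial_by_trial_ioa(primary, secondary), 2))  # 83.33
```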

Prestudy Assessments

Paired-Stimulus Preference Assessment

A paired-stimulus preference assessment (Fisher et al., 1992) was conducted using eight edible items that were readily available in the center. After presession exposure, all items were presented in pairs systematically such that each item appeared with every other item once. Items were ranked from most to least preferred based on the frequency of selection, with the item that was chosen most frequently as the most preferred and the item that was chosen least frequently as the least preferred. A choice of two of the top three items was offered for correct responses during treatment. However, following Session 18, Logan requested that his most highly preferred item no longer be included. Therefore, we provided a choice of the second and third most highly preferred items for Logan for subsequent sessions.
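
The pairing and ranking logic of the paired-stimulus format can be sketched as follows; the edible names and recorded choices are placeholders rather than the study's items or data.

```python
from collections import Counter
from itertools import combinations

# Placeholder edibles and hypothetical choices for illustration only.
items = ["chip", "gummy", "cracker", "raisin"]
pairs = list(combinations(items, 2))  # each item is presented with every other item exactly once

recorded_choices = ["chip", "chip", "chip", "gummy", "gummy", "cracker"]  # one selection per pair

selection_counts = Counter(recorded_choices)
ranking = sorted(items, key=lambda item: selection_counts[item], reverse=True)
print(ranking)      # most to least preferred
print(ranking[:3])  # top three items; a choice of two of these preceded each session
```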

Pretests

A bank of fifty 2D stimuli was created to select nine unknown two-component tacts for each learner (i.e., noun–verb combination; e.g., “girl running”). Therapists conducted three probe trials for each stimulus to identify unknown two-component tacts. During each probe trial, therapists presented a picture and asked the learner, “What’s happening?” Feedback was not provided for correct or incorrect responses. Nine tacts with no correct responses across all three probes were selected for the study, and each tact was randomly assigned to an experimental condition (control, nontransfer trial, or transfer trial) such that each experimental condition was associated with three different tacts (Table 1). The number of syllables in each tact was held constant across conditions, and no nouns or verbs repeated between conditions. For two of the learners (Steve and Peter), a second set of two-component tacts was selected. These were identified by conducting a second pretest identical to the first pretest, except that previously mastered stimuli were not included in the second pretest.

Table 1.

Target Responses for Each Participant by Condition

Participant Set Condition Target stimuli
Steve A Control pigs riding, kids sledding, baby building
Transfer trial cats climbing, boy hiding, people clapping
Nontransfer trial dog drinking, woman crying, girl bouncing
B Control bears fighting, unicorn painting, baby waving
Transfer trial monkeys hugging, bird hatching, animals playing
Nontransfer trial rabbit sweeping, goats surfing, elephant eating
Peter A Control pigs eating, baby calling, man drinking
Transfer trial cats sleeping, people waving, girl jumping
Nontransfer trial dog sitting, woman riding, kids running
B Control tiger reading, bird singing, kangaroo boxing
Transfer trial rabbit brushing, goat watching, koala teaching
Nontransfer trial turtle waving, sheep knitting, animals sledding
Logan A Control dog jumping, man waving, baby climbing
Transfer trial pigs cooking, boy bouncing, people dancing
Nontransfer trial cats hiding, kids swimming, woman shopping
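
A rough sketch of the target-selection logic described above appears below; the probe counts are hypothetical, and the syllable-matching and noun/verb non-repetition constraints that the authors applied are not enforced in this simplified version.

```python
import random

# Hypothetical probe results: correct responses across three probe trials per candidate tact.
probe_results = {
    "girl running": 0, "dog drinking": 0, "man waving": 1,
    "cats climbing": 0, "baby building": 0, "kids sledding": 0,
    "boy hiding": 0, "people clapping": 0, "woman crying": 0,
    "girl bouncing": 0, "pigs riding": 0,
}

# Keep only tacts with no correct responses across all three probes.
unknown = [tact for tact, correct in probe_results.items() if correct == 0]

# Randomly assign nine unknown tacts such that each condition receives three.
random.shuffle(unknown)
selected = unknown[:9]
assignment = {
    "control": selected[0:3],
    "nontransfer trial": selected[3:6],
    "transfer trial": selected[6:9],
}
print(assignment)
```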

Procedures

Prior to the start of each session, the therapist allowed the learner to select between two of the top three preferred items, as identified in the paired-stimulus preference assessment. That is, the therapist presented two highly preferred items to the participant and allowed the participant to select and consume one item (DeLeon et al., 2001). The selected item was used during the session as a consequence following all correct responses. We attempted to always offer the top two preferred items; however, sometimes one of these items was unavailable due to the nature of the ABA center (e.g., another client was using the item, the center was out of the preferred food). In these cases, the third most highly preferred item was substituted for the unavailable item.

All sessions were composed of 12 target trials (i.e., four presentations of three different targets) and five mastered trials (i.e., five presentations of previously mastered tasks). All trials began when the therapist presented the instruction “What’s happening?” Throughout all sessions, the therapist interspersed mastered trials after every two to three target trials. For all mastered trials, praise and the chosen edible were delivered for correct responses, and error correction was conducted if the learner responded incorrectly (i.e., the therapist immediately re-presented the mastered task and delivered a prompt that resulted in the correct response). However, learners rarely responded incorrectly to mastered trials. Across all sessions, the therapist ignored problem behavior. The mastery criterion for target trials was defined as 90% independent responding across three consecutive sessions (Fuller & Fienup, 2018).
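
To make the session structure concrete, the sketch below builds one possible 17-trial order (12 target trials with five interspersed mastered trials) and includes the mastery-criterion check; the mastered tasks are placeholders, and the exact interspersal pattern in the study likely varied from session to session.

```python
import random

# Hypothetical sketch of one session's trial order: 12 target trials (four presentations of
# three targets) with five mastered trials interspersed after every two to three target trials.
targets = ["cats climbing", "boy hiding", "people clapping"]  # example target set from Table 1
mastered_tasks = ["mastered 1", "mastered 2", "mastered 3", "mastered 4", "mastered 5"]  # placeholders

target_trials = targets * 4
random.shuffle(target_trials)

run_lengths = [2, 2, 2, 3, 3]  # five runs of two to three target trials sum to twelve
random.shuffle(run_lengths)

sequence, index = [], 0
for run_length, mastered in zip(run_lengths, mastered_tasks):
    for _ in range(run_length):
        sequence.append(("target", target_trials[index]))
        index += 1
    sequence.append(("mastered", mastered))

def met_mastery(session_percentages):
    """90% or greater independent responding across three consecutive sessions."""
    return len(session_percentages) >= 3 and all(pct >= 90 for pct in session_percentages[-3:])

print(len(sequence))                     # 17 trials
print(met_mastery([91.67, 100, 91.67]))  # True
```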

Baseline

During baseline, we rapidly alternated the three conditions (i.e., control, nontransfer trial, and transfer trial). For all baseline sessions, the therapist presented the instruction “What’s happening?” and waited 3 s. If the learner responded correctly, the therapist delivered praise and the chosen edible before moving to the next trial. If the learner did not respond or responded incorrectly, the therapist moved on to the next trial.

Treatment

Following baseline, we continued to rapidly alternate the three conditions, but treatment was introduced during the nontransfer and transfer trial sessions (see Figure 1 for a flowchart of these two treatment conditions). The control condition remained identical to baseline. That is, for all control sessions, the therapist presented the instruction “What’s happening?” and waited 3 s. If the learner responded correctly, the therapist delivered praise and the chosen edible before moving on to the next trial. If the learner did not respond or responded incorrectly, the therapist moved on to the next trial.

Figure 1. Flowchart Outlining the Sequence of 0-s Delay Trials. Note. Nontransfer trial conditions are illustrated at the top of the figure, and transfer trial conditions are at the bottom.

Nontransfer Trial

During the first two nontransfer trial sessions, the therapist used a 0-s prompt delay (see Figure 1). That is, the therapist presented a picture and immediately provided the instruction ("What's happening?") and a full vocal prompt (e.g., “Girl running.”). If the learner correctly repeated the prompted response, then the therapist delivered praise and restated the noun–verb tact (e.g., “Good job! The girl is running.”) while delivering the preferred edible identified for the session. If the learner did not respond or responded incorrectly, the therapist implemented error correction. That is, the therapist re-presented the trial once and immediately presented the full vocal prompt to the learner at a 0-s prompt delay. If the learner emitted a correct response following error correction, the therapist delivered a neutral statement (e.g., “That’s girl running.”). If the learner emitted a second incorrect response or no response, the therapist moved on to the next trial.

Following two sessions using a 0-s prompt delay, the therapist switched to a 3-s prompt delay, which was used for the remainder of the sessions. That is, the therapist presented a picture, immediately presented the instruction (“What’s happening?”), but then waited 3 s for a response. All other procedures remained the same as the first two sessions for correct, incorrect, and no responses.
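
The contingencies of a single nontransfer trial at the 3-s prompt delay can be summarized in the minimal sketch below. The say() and learner_response() helpers are simple console stand-ins for the therapist's and learner's behavior, not part of any clinical software, and the restated tact is abbreviated.

```python
# Hypothetical sketch of one nontransfer trial at the 3-s prompt delay.
def say(text):
    print(f"THERAPIST: {text}")

def learner_response():
    return input("LEARNER (type a response or press Enter for no response): ").strip().lower()

def nontransfer_trial(target):
    say("What's happening?")             # initial instruction; wait up to 3 s for a response
    if learner_response() == target:
        say(f"Good job! The {target}.")  # praise, restated tact, and the chosen edible
        return "correct"
    # Error correction: re-present the trial once with an immediate (0-s) full vocal prompt.
    say(f"What's happening? {target}.")
    if learner_response() == target:
        say(f"That's {target}.")         # neutral statement, no edible
        return "prompted"
    return "error"                       # second incorrect or no response: move to the next trial

# Example: nontransfer_trial("girl running")
```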

Transfer Trial

During the first two transfer trial sessions, all trials began with a 0-s prompt delay that was followed by a transfer trial (see Figure 1). That is, following the initial instruction ("What's happening?") and 0-s full vocal prompt (e.g., “Girl running.”), the therapist immediately presented the instruction again (i.e., transfer trial; “What’s happening?”), but then waited 3 s for a response. If the learner responded correctly and within 3 s of the transfer trial, the therapist delivered praise and restated the noun–verb tact (e.g., “Good job! The girl is running.”) while delivering the preferred edible. If the learner did not respond or responded incorrectly, the trial was re-presented and error correction was implemented. If the learner emitted a correct response following error correction, the therapist delivered a neutral statement (e.g., “That’s girl running.”) and moved on to the next trial. If the learner emitted a second incorrect response or no response, the therapist moved on to the next trial.

Following two sessions using a 0-s prompt delay, the therapist switched to a 3-s prompt delay, which was used for the remainder of the sessions. That is, the therapist presented a picture, immediately presented the instruction (“What’s happening?”), and waited 3 s for a response. If the learner did not respond or responded incorrectly, the therapist implemented error correction. That is, the therapist re-presented the trial once and immediately presented the full vocal prompt to the learner at a 0-s prompt delay. If the learner emitted a correct response following error correction, the therapist delivered a neutral statement (e.g., “That’s girl running.”). If the learner emitted a second incorrect response or no response, the therapist moved on to the next trial. No transfer trial was conducted during the 3-s prompt delay.
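
The distinctive element of this condition, the instruction re-presented immediately after the 0-s prompted trial, can be sketched in the same style; the helpers are redefined so the sketch stands alone, and they remain console stand-ins rather than the authors' materials.

```python
# Hypothetical sketch of a 0-s delay prompted trial followed by a transfer trial.
def say(text):
    print(f"THERAPIST: {text}")

def learner_response():
    return input("LEARNER (type a response or press Enter for no response): ").strip().lower()

def prompted_trial_with_transfer(target):
    say(f"What's happening? {target}.")  # 0-s delay: instruction with an immediate full vocal prompt
    learner_response()                   # learner echoes the prompted response
    say("What's happening?")             # transfer trial: instruction re-presented; wait up to 3 s
    if learner_response() == target:
        say(f"Good job! The {target}.")  # praise, restated tact, and the chosen edible
        return "correct"
    # Incorrect or no response to the transfer trial: re-present the trial with a 0-s prompt.
    say(f"What's happening? {target}.")
    if learner_response() == target:
        say(f"That's {target}.")         # neutral statement, then the next trial
        return "prompted"
    return "error"

# Example: prompted_trial_with_transfer("girl running")
```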

Maintenance

For Logan and Steve, maintenance probes were conducted 6–7 weeks following treatment. During maintenance, we again rapidly alternated the three experimental conditions. Procedures were identical to baseline for all three conditions.

Experimental Design

We used a multielement design to demonstrate experimental control by alternating control, nontransfer trial, and transfer trial sessions. Sessions were conducted in a quasirandom order such that no more than two of the same conditions were conducted in a row.
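
One hypothetical way to generate such an order is simple rejection sampling, sketched below; the study does not report how its quasirandom order was actually produced.

```python
import random

# Hypothetical sketch: reshuffle until no condition appears more than twice in a row.
def quasirandom_order(conditions, sessions_per_condition):
    pool = conditions * sessions_per_condition
    while True:
        random.shuffle(pool)
        if all(not (pool[i] == pool[i + 1] == pool[i + 2]) for i in range(len(pool) - 2)):
            return list(pool)

order = quasirandom_order(["control", "nontransfer trial", "transfer trial"], 4)
print(order)
```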

Social Validity

To assess social validity, we developed a survey regarding nontransfer and transfer trials as teaching procedures. At the conclusion of the study, Alexandria R. Dell’Aringa gave therapists a three-item questionnaire to complete regarding preference for the procedures (i.e., “Which protocol did you like better?”), ease of the procedures (i.e., “Which protocol was easier to conduct?”), and efficiency of the procedures (i.e., “Which protocol was more efficient to implement?”). For all questions, therapists selected either transfer trials or nontransfer trials.

Results

Figures 2, 3, and 4 depict the percentage of correct responses during baseline, treatment, and maintenance trials (Steve and Logan only) for Steve, Peter, and Logan, respectively. Data from two sets of stimuli are presented for Steve and Peter, and data from one set of stimuli are presented for Logan.

Figure 2. Steve’s Percentage Correct Across Phases and Conditions. Note. Percentage correct for Steve is illustrated across baseline, treatment, and maintenance phases for Set A and Set B stimuli in the control (closed circles), transfer trial (open squares), and nontransfer trial (closed triangles) conditions.

Figure 3. Peter’s Percentage Correct Across Phases and Conditions. Note. Percentage correct for Peter is illustrated across baseline and treatment phases for Set A and Set B stimuli in the control (closed circles), transfer trial (open squares), and nontransfer trial (closed triangles) conditions.

Figure 4. Logan’s Percentage Correct Across Phases and Conditions. Note. Percentage correct for Logan is illustrated across baseline, treatment, and maintenance phases for stimuli in the control (closed circles), transfer trial (open squares), and nontransfer trial (closed triangles) conditions.

During baseline for Set A (Figure 2, top panel), Steve responded correctly during 0% of trials across all three conditions. During treatment, an increase in correct responding was immediately observed for the control condition; however, correct responding maintained at 33.3% for the remainder of the phase, suggesting chance responding. After two treatment sessions, Steve demonstrated an increase in responding during both the transfer trial and the nontransfer trial phases, and correct responding increased with each subsequent session. Steve met the mastery criterion (i.e., 90% correct across three sessions) in five sessions (see Table 2) in the nontransfer trial condition and in six sessions (see Table 2) in the transfer trial condition. During maintenance probes, Steve correctly responded on 100%, 91.67%, and 33.33% of trials during nontransfer trial, transfer trial, and control conditions, respectively.

Table 2.

Sessions to Mastery and Percentage of Correct Responses During Maintenance Probes for Each Participant

Participant Set Condition Sessions to mastery Maintenance (% correct)
Steve A Transfer trial 6 91.67
Nontransfer trial 5 100
B Transfer trial 5 83.33
Nontransfer trial 5 91.67
Peter A Transfer trial 6
Nontransfer trial 7
B Transfer trial 5
Nontransfer trial 7
Logan A Transfer trial 12 91.67
Nontransfer trial 11 91.67
Average All Transfer trial 6.8 88.89
Nontransfer trial 7.0 94.45

Steve’s responding with Set B (Figure 2, bottom panel) was similar to that of Set A. We observed low levels of correct responding for all sessions during baseline. During treatment, we saw an increase in correct responding during both nontransfer trial and transfer trial sessions. However, unlike with Set A, we did not see any correct responding during the control condition. Steve met the mastery criterion in five sessions (see Table 2) in both the nontransfer and transfer trial conditions. During maintenance, correct responding increased during the control condition (25%) but remained low in relation to the nontransfer trial and transfer trial conditions, which maintained at 91.67% and 83.33%, respectively.

During baseline for Set A (Figure 3, top panel), Peter responded correctly during 0% of trials across all three conditions. During treatment, an increase in correct responding was immediately observed for the control condition; however, correct responding maintained at 33.3% for the remainder of the phase, suggesting chance responding. After the first two sessions, Peter demonstrated increases in correct responding during both the transfer trial and the nontransfer trial phases. Peter met the mastery criterion (i.e., 90% correct across three sessions) in six sessions (see Table 2) in the transfer trial condition and in seven sessions (see Table 2) in the nontransfer trial condition. Maintenance probes were not conducted for Peter.

Peter’s responding with Set B (Figure 3, bottom panel) was similar to that of Set A. We observed 0% correct responding for all sessions during baseline. During treatment, an increase in correct responding was immediately observed for the control condition; however, correct responding maintained at 75% for the remainder of the phase. When treatment began, we saw an increase in correct responding during both nontransfer and transfer trial sessions. Peter met the mastery criterion in five sessions (see Table 2) in the transfer trial condition and in seven sessions (see Table 2) in the nontransfer trial condition.

During baseline, Logan responded correctly during 0% of trials across control and transfer trial conditions and had one correct response (8.33%) during one session of the nontransfer trial condition (Figure 4). After the first two sessions of treatment, Logan demonstrated increases in correct responding during both the transfer trial and the nontransfer trial phases. Logan met the mastery criterion (i.e., 90% correct across three sessions) in the nontransfer trial condition in 11 sessions (see Table 2) and in the transfer trial condition in 12 sessions (see Table 2). During maintenance probes, Logan correctly responded on 91.67%, 91.67%, and 0% of trials during the nontransfer trial, transfer trial, and control conditions, respectively.

Overall, learners met the mastery criterion in an average of 6.8 sessions and 7.0 sessions in the transfer trial and nontransfer trial conditions, respectively. Although some correct responding was observed in the control condition, no learners met the mastery criterion. During maintenance probes, learners responded correctly on 88.89% and 94.45% of trials in the transfer trial and nontransfer trial conditions, respectively.
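
For reference, these means follow directly from Table 2: transfer trial, (6 + 5 + 6 + 5 + 12) / 5 = 6.8 sessions to mastery and (91.67 + 83.33 + 91.67) / 3 = 88.89% correct during maintenance; nontransfer trial, (5 + 5 + 7 + 7 + 11) / 5 = 7.0 sessions to mastery and (100 + 91.67 + 91.67) / 3 = 94.45% correct during maintenance.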

Of the seven therapists who participated in the study, four submitted the social validity survey. Four out of four therapists selected “transfer trials” for “Which protocol did you like better?” Three out of four therapists selected “nontransfer trials” for “Which protocol was easier to conduct?” Finally, three out of four therapists selected “nontransfer trials” for “Which protocol was more efficient to implement?”

Discussion

To our knowledge, this is the first study to directly compare transfer trials to a more traditional DTT procedure, and its results support the previous use of transfer trials in research and clinical practice (e.g., Carbone et al., 2006; Frampton et al., 2016; Frampton et al., 2019; Hozella & Ampuero, 2014; Shillingsburg et al., 2020; Valentino et al., 2012; Valentino et al., 2015). We evaluated whether transfer trials would increase the rate of skill acquisition compared to nontransfer trial DTT instruction in two-component tact training with three learners. All learners met the mastery criterion (i.e., 90% correct across three sessions) during both transfer trial and nontransfer trial conditions, and differences in the number of sessions to mastery were minimal. Interestingly, responding maintained at a higher level in the nontransfer trial condition as compared to the transfer trial condition for one learner (Steve). Overall, the results of the current study extend the literature by demonstrating that the inclusion of transfer trials in two-component tact training is an effective teaching procedure but does not necessarily increase the rate of skill acquisition when compared to a more traditional DTT procedure (i.e., nontransfer trial condition).

The results of the current study suggest that both nontransfer trial and transfer trial conditions were effective teaching procedures and that neither teaching procedure is necessarily more efficient than the other. It could be assumed that the addition of the transfer trial helps transfer stimulus control from the prompt (e.g., “girl running”) to the instruction (i.e., “What’s happening?”) by allowing the learner to contact reinforcement for independent responding. It is also possible that the addition of the transfer trial could decrease sessions to criterion due to the repetition it provides. That is, during the 0-s delay sessions in the transfer trial condition, learners heard each instruction twice, whereas in the nontransfer trial condition, learners heard each instruction and prompt once. Therefore, it seems reasonable that the additional exposure to the instruction may result in more efficient acquisition for some learners. However, this was not the case in the current study. The differences in the number of sessions to mastery during transfer trial conditions as compared to nontransfer trial conditions were minimal, with learners meeting the mastery criterion in an average of 6.8 sessions and 7.0 sessions in the transfer trial and nontransfer trial conditions, respectively. Previous research on skill acquisition procedures such as prompting (e.g., Libby et al., 2008; Seaver & Bourret, 2014) and error correction (e.g., Rodgers & Iwata, 1991; Smith et al., 2006) has yielded similar results. For example, in a study on error correction, Rodgers and Iwata (1991) found that one participant performed better in the baseline condition, with no error correction and only differential reinforcement. Two participants performed better in a practice condition, three participants performed better in the avoidance condition, and the final participant performed equally across all conditions. Rodgers and Iwata attributed the variability across conditions to differences in individual learning histories. It is possible that any difference in sessions to criterion between the transfer and nontransfer trial conditions in the current study may simply be due to individual differences in the learning histories of our learners.

Although neither procedure appears more efficient than the other, therapists clearly preferred the transfer trial procedure. Despite three out of four surveyed therapists identifying nontransfer trials as both easier and more efficient to implement, all four therapists preferred the transfer trial procedure. Follow-up questions were not presented, so it is not known why transfer trials were preferred over nontransfer trials. One explanation may be that therapists appreciated the extra practice provided by the transfer trials or preferred to reinforce independent responses instead of prompted ones. Regardless of the reasoning, the current research suggests that transfer and nontransfer trials are similarly effective teaching procedures.

There are several limitations worth noting. First, the stimuli selected for the two-component tact program included common noun–verb phrases like “girl reading,” “dog drinking,” and “man running.” It is possible that some of these common stimuli were incidentally acquired outside of the study. This may explain why correct responding was observed in the control condition for some learners (e.g., Steve and Peter) despite a lack of formal teaching. It is also possible that the individual components of the two-component tacts were already in the individual’s repertoire, thus resulting in recombinative generalization of two-component tacts across both procedures (e.g., Goldstein & Mousetis, 1989). Therefore, it would be important for researchers to assess the individual components of tacts in addition to the two-component tacts to control for this confound in future studies. Second, there were only three learners in the current study, resulting in five data sets. It is possible that running the procedures with additional learners may have led to different patterns of responding across teaching conditions. However, current skill acquisition research suggests that the effects of some procedures, such as prompting hierarchies (e.g., Libby et al., 2008; Seaver & Bourret, 2014) and error correction (e.g., Rodgers & Iwata, 1991; Smith et al., 2006), are idiosyncratic and depend on the individual learner. A final limitation may be the lack of control for the participants’ histories with either teaching procedure. Coon and Miguel (2012) suggested that the prompt method most recently used during teaching is more likely to result in fewer trials to mastery as compared to a prompt method that has not been used recently. This suggests that a participant’s most recent history with either the transfer trial procedure or the nontransfer trial procedure outside of the current study may influence the future effectiveness of each teaching procedure. Although it is likely that all participants had a learning history with both procedures, this was not a variable for which we explicitly controlled.

One future area of study is to evaluate the amount of time needed to complete a nontransfer trial as compared to a transfer trial. The transfer trial sequence has the potential to more than double the length of each prompted trial, resulting in longer teaching sessions. In the social validity survey, three out of four therapists selected the nontransfer trial condition when asked which protocol was more efficient to implement; however, the length of each session was not recorded in the current study. Therefore, it is unclear whether the transfer trial condition resulted in longer teaching sessions than a more traditional DTT procedure. Knowing the length of sessions for each teaching procedure is important because session duration affects the overall efficiency of the procedure. For example, although Peter met the mastery criterion in fewer sessions under the transfer trial condition, it is unclear how much time was spent teaching in each condition. That is, fewer sessions using transfer trials may still require equal or more therapist time as compared to a traditional DTT procedure; however, this is unclear from the current study, as no data were collected on the duration of teaching sessions. Thus, researchers may consider collecting data on the length of sessions to better analyze the efficiency of transfer trial teaching procedures.

Future researchers may also consider examining the use of transfer trials with other skill acquisition programs (e.g., intraverbals, receptive skills) as compared to more traditional DTT procedures such as error correction. It is possible that when teaching novel or complex skills, transfer trials may be a more effective teaching procedure. In fact, several studies have included transfer trials when teaching more complex skills and demonstrated acquisition of those skills (e.g., Shillingsburg et al., 2020; Valentino et al., 2012; Valentino et al., 2015). For example, Shillingsburg et al. (2020) increased the mean length of utterances of six children with ASD. Another route for future research may be determining whether modifications to the transfer trial procedure can increase its efficacy and efficiency. For example, researchers might add distractor trials between the prompted trial and the transfer trial or repeat the transfer trial multiple times. In a study on error correction for increasing sight word acquisition, Worsdell et al. (2005) found that six adults with developmental disabilities acquired an average of 33% more sight words under the multiple-response repetition condition (i.e., trial repeated five times) when compared to the single-response repetition condition (i.e., trial repeated once). Therefore, it is possible that similar modifications to transfer trials could be made to enhance their effects. Finally, future researchers may also consider comparing these two procedures without the inclusion of a time-delay procedure. Interestingly, for all participants, correct responses began to increase immediately once the prompt delay was increased from 0 s to 3 s, regardless of the teaching procedure. Thus, the time delay may have played a significant role and may be a necessary component of both teaching procedures. If it were removed, researchers might find larger differences in effectiveness or efficiency when comparing transfer and nontransfer trial teaching procedures.

The current study is an initial step toward determining whether transfer trials are an effective and efficient teaching method that should be included during DTT. Although the current study suggests that transfer trials are similarly effective to more traditional DTT teaching procedures, it remains unclear whether they are more efficient. Therefore, we suggest that researchers continue to evaluate the effects of transfer trials in different combinations and across different skills to further our knowledge on their effectiveness and efficiency for skill acquisition programming.

Author Note

The authors wish to thank Mikaela Martinez, Bailey Richeson, and Alexandra Ross for their work as instructors and data collectors during this project.

Compliance with Ethical Standards

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Footnotes

Research Highlights

This paper:

• evaluates a commonly recommended clinical practice (i.e., transfer trials);

• provides support as to whether practitioners and researchers should include transfer trials during discrete-trial training;

• provides a comparison between the effects of transfer trials and those of a more traditional teaching procedure on the acquisition of two-component tacts; and

• provides researchers with several ideas for future studies on the effectiveness and efficiency of transfer trials.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Barbetta PM, Heron TE, Heward WL. Effects of active student response during error correction on the acquisition, maintenance, and generalization of sight words by students with developmental disabilities. Journal of Applied Behavior Analysis. 1993;26:111–119. doi: 10.1901/jaba.1993.26-111. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Carbone, V. J. (2016, December). Overview of operants and teaching procedures [Paper presentation]. Meeting of the European Institute for the Study of Human Behavior, Parma, Italy. http://carboneclinic.com/portal/conferences/files/IESCUM%20Course%20Dec%202016/1.%20Thursday%20%20Dec%201,%202016/Power%20Point%20Slides/IESCUM%20Operants%20&TEACHING%20PROCEDURES2016.pdf
  3. Carbone VJ, Lewis L, Sweeney-Kerwin EJ, Dixon J, Louden R, Quinn S. A comparison of two approaches for teaching VB functions: Total communication vs. vocal-alone. Journal of Speech and Language Pathology – Applied Behavior Analysis. 2006;1(3):181–192. doi: 10.1037/h0100199. [DOI] [Google Scholar]
  4. Coon JT, Miguel CF. The role of increased exposure to transfer-of-stimulus-control procedures on the acquisition of intraverbal behavior. Journal of Applied Behavior Analysis. 2012;45:657–666. doi: 10.1901/jaba.2012.45-657. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. DeLeon IG, Fisher WW, Rodriguez-Catter V, Maglieri K, Herman K, Marhefka JM. Examination of relative reinforcement effects of stimuli identified through pretreatment and daily brief preference assessments. Journal of Applied Behavior Analysis. 2001;34(4):463–473. doi: 10.1901/jaba.2001.34-463. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Drevno GE, Kimball JW, Possi MK, Heward WL, Gardner R, III, Barbetta PM. Effects of active student response during error correction on the acquisition, maintenance, and generalization of science vocabulary by elementary students: A systematic replication. Journal of Applied Behavior Analysis. 1994;27:179–180. doi: 10.1901/jaba.1994.27-179. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Fisher W, Piazza CC, Bowman LG, Hagopian LP, Owens JC, Slevin I. A comparison of two approaches for identifying reinforcers for persons with severe and profound disabilities. Journal of Applied Behavior Analysis. 1992;25:491–498. doi: 10.1901/jaba.1992.25-491. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Frampton SE, Shillingsburg MA. Teaching children with autism to explain how: A case for problem solving? Journal of Applied Behavior Analysis. 2018;51:236–254. doi: 10.1002/jaba.445. [DOI] [PubMed] [Google Scholar]
  9. Frampton SE, Thompson TM, Bartlett BL, Hansen C, Shillingsburg MA. The use of matrix training to teach color-shape tacts to children with autism. Behavior Analysis in Practice. 2019;12:320–330. doi: 10.1007/s40617-018-00288-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Frampton SE, Wymer SC, Hansen B, Shillingsburg MA. The use of matrix training to promote generative language with children with autism. Journal of Applied Behavior Analysis. 2016;49:869–883. doi: 10.1002/jaba.340. [DOI] [PubMed] [Google Scholar]
  11. Fuller JL, Fienup DM. A preliminary analysis of mastery criterion level: Effects on response maintenance. Behavior Analysis in Practice. 2018;11:1–8. doi: 10.1007/s40617-017-0201-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Goldstein H, Mousetis L. Generalized language learning by children with severe mental retardation: Effects of peers’ expressive modeling. Journal of Applied Behavior Analysis. 1989;22:245–259. doi: 10.1901/jaba.1989.22-245. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Hozella, W., & Ampuero, M. (2014, August). Mand training basics [Paper presentation]. National Autism Conference, State College, PA, United States. https://autism.outreach.psu.edu/sites/default/files/37%20Presentation.pdf
  14. Libby ME, Weiss JS, Bancroft S, Ahearn WH. A comparison of most-to-least and least-to-most prompting on the acquisition of solitary play skills. Behavior Analysis in Practice. 2008;1(1):37–43. doi: 10.1007/bf03391719. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Rodgers TA, Iwata BA. An analysis of error-correction procedures during discrimination training. Journal of Applied Behavior Analysis. 1991;24:775–781. doi: 10.1901/jaba.1991.24-775. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Seaver JL, Bourret JC. An evaluation of response prompts for teaching behavior chains. Journal of Applied Behavior Analysis. 2014;47:777–792. doi: 10.1002/jaba.159. [DOI] [PubMed] [Google Scholar]
  17. Shillingsburg, M. A., Frampton, S. E., Schenk, Y. A., Bartlett, B. L., Thompson, T. M., & Hansen, B. (2020). Evaluation of a treatment package to increase mean length of utterances for children with autism. Behavior Analysis in Practice. Advance online publication. 10.1007/s40617-020-00417-y [DOI] [PMC free article] [PubMed]
  18. Smith T. Discrete trial training in the treatment of autism. Focus on Autism and Other Developmental Disabilities. 2001;16:86–92. doi: 10.1177/108835760101600204. [DOI] [Google Scholar]
  19. Smith T, Mruzek DW, Wheat L, Hughes C. Error correction in discrimination training for children with autism. Behavioral Interventions. 2006;21:245–263. doi: 10.1002/bin.223. [DOI] [Google Scholar]
  20. Sundberg, M. L. (2008). Verbal Behavior Milestones Assessment and Placement Program: The VB-MAPP. AVB Press.
  21. The Pennsylvania Verbal Behavior Project. (2009). A beginning guide to the intensive teaching process of the verbal operants. http://www.pattan.net/category/Resources/Instructional%20Materials/Browse/Single/?id=4de79eb6cd69f98019560000
  22. Valentino AL, Conine DE, Delfs CH, Furlow CM. Use of a modified chaining procedure with textual prompts to establish intraverbal storytelling. The Analysis of Verbal Behavior. 2015;31(1):39–58. doi: 10.1007/s40616-014-0023-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Valentino AL, Shillingsburg MA, Call NA. Comparing the effects of echoic prompts and echoic prompts plus modeled prompts on intraverbal behavior. Journal of Applied Behavior Analysis. 2012;45:431–435. doi: 10.1901/jaba.2012.45-431. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Wolery M, Gast DL. Effective and efficient procedures for the transfer of stimulus control. Topics in Early Childhood Special Education. 1984;4:52–77. doi: 10.1177/027112148400400305. [DOI] [Google Scholar]
  25. Worsdell AS, Iwata BA, Dozier CL, Johnson AD, Neidert PL, Thomason JL. Analysis of response repetition as an error-correction strategy during sight-word reading. Journal of Applied Behavior Analysis. 2005;38(4):511–527. doi: 10.1901/jaba.2005.115-04. [DOI] [PMC free article] [PubMed] [Google Scholar]
