Abstract
The purpose of the present study was to compare the effects of an online stimulus equivalence procedure to those of an assigned reading when learning Skinner’s taxonomy of verbal behavior. Twenty-six graduate students participated via an online learning management system. One group was exposed to an online stimulus equivalence procedure (equivalence group) that was designed to teach relations among the names, antecedents, consequences, and examples of each elementary verbal operant. A comparison group (reading group) read a chapter from a popular textbook. Tests for the emergence of selection-based and topography-based intraverbal responses were then conducted, as were tests for generalization and maintenance. Overall, results suggest that the online equivalence procedure was not significantly more effective in promoting topography-based responses than the assigned reading. However, performance on selection-based tests was enhanced by the online equivalence procedure, as was performance on topography-based tests when participants were required to provide operant names in response to consequences or examples. On average, the equivalence group performed at a level that was 10 percentage points (i.e., a full letter grade) above that of the reading group. The viability of the equivalence-based procedure is discussed in relation to the assigned reading.
Keywords: Selection-based responding, Stimulus equivalence, Topography-based responding, Verbal operant
Skinner’s (1958) teaching machines were an attempt to develop technology that could arrange the optimal conditions for learning. Learners worked at their own pace through sets of interrelated “frames” for a particular concept, making few if any errors in the process. One benefit of such an approach is that it eliminates students’ arbitrary decision making with regard to their mastery of material. One technology that lends itself well to this approach is stimulus equivalence.
Stimulus equivalence (Sidman 1971, 1994) refers to the emergence of untaught stimulus relations following a history of reinforcement for relating certain stimuli in finite ways. Among the many computer-based applications of stimulus equivalence, there exist only a handful of studies dedicated to improving teaching methods employed in higher education. For example, studies have demonstrated the success of stimulus equivalence procedures in the teaching of advanced mathematical functions (e.g., Fields et al. 2009; Ninness et al. 2005, 2006, 2009) and brain region-behavior relations (Fienup, Covey, and Critchfield 2010).
Research has also shown that skills resulting from selection-based (SB) instructional procedures may generalize to novel response topographies when using stimulus equivalence procedures. For example, Walker et al. (2010) taught SB intraverbal relations to promote identification of topography-based (TB) disability-related terminology (i.e., disability names-definitions, disability names-primary causes, and primary causes-treatments). The authors used multiple-choice worksheets as the instructional mode and showed that untaught vocal intraverbal responses and untaught written intraverbal responses (e.g., naming a disability when given a common treatment or service) emerged during standard classroom posttest measures. In addition, Lovett et al. (2011) compared the effects of an online stimulus equivalence procedure to a traditional undergraduate lecture format in teaching the identification of single-subject experimental design concepts (i.e., design names-definitions, design names-design graphs, and design names-clinical vignettes). The results showed that the equivalence procedure was more effective in promoting the emergence of untaught TB tact responses (e.g., naming a single-subject design when shown a graphical depiction of the design). In a subsequent extension, Walker and Rehfeldt (2012) used an online equivalence procedure to teach similar relations and showed the emergence of typed TB intraverbals and generalization to novel graphs and novel clinical vignettes. The latter study was unique in that it involved a typical classroom exercise conducted in an online environment.
The results of the aforementioned studies are not surprising given the active and engaged nature of equivalence-based instruction and the removal of arbitrary decision making with regard to a student’s mastery of the material. Simply assigning students material to read leaves the duration of study and the identification of mastery criteria up to the student and requires less preparation time on the part of instructors than equivalence-based instruction. However, one disadvantage of relying on assigned readings may be that students develop inaccurate perceptions about the duration of study or criteria necessary to master material. In comparison, equivalence-based procedures define mastery criteria for the student and thus determine the necessary duration of study based on student performance. One drawback might be that equivalence procedures can be time consuming, both in their development and implementation.
The primary purpose of the present study was to compare the effects of a structured online stimulus equivalence procedure to those of an assigned-reading study method when learning Skinner’s (1957) elementary verbal operants. Skinner’s taxonomy of verbal behavior was selected as the subject matter because it may be difficult for some students to learn: the operants are defined by different antecedents but similar consequences.
Method
Participants
Twenty-six graduate students at a large, public university in the US Midwest participated in the experiment. This was the students’ first semester in a master’s program in behavior analysis and therapy. All were recruited from a course on single-subject research methodology and received extra course credit for their participation. To ensure equal group size, assignment to either the equivalence or reading group was alternated through quasi-random assignment according to the order in which signed informed consent documents were received. Each group comprised 13 participants. Eighteen of the 26 students completed a follow-up probe: 8 from the equivalence group and 10 from the reading group.
Setting and Stimuli
Participants completed all tests in one session on their own time and in a setting of their choosing. All sessions were conducted through Desire2Learn (D2L), the university’s learning management system. Experimental stimuli consisted of the following: names of verbal operants (A stimuli), antecedents for the operants (B stimuli), consequences for the operants (C stimuli), and examples of the verbal operants (D stimuli). Thus, a four-member stimulus class was conceptualized for each of the following operants: echoic (1), mand (2), tact (3), intraverbal (4), textual (5), and transcription (6). Antecedents (B stimuli) included the controlling antecedent events (e.g., a verbal stimulus with point-to-point correspondence and formal similarity). Consequences (C stimuli) included one of two types of reinforcement (i.e., generalized conditioned reinforcement or specific reinforcement). Examples (D stimuli) were short vignettes that included the antecedent, response, and reinforcement contacted for the corresponding operant. Stimuli were presented during instruction in order to establish four-member (i.e., A, B, C, and D) stimulus equivalence classes for each of the six operants. Vignettes were constructed such that the length of each description was similar. A template (see Table 1) was used to arrange experimental stimuli into the six four-member stimulus classes. A total of 96 experimental stimuli (available from the authors upon request) were created by filling the blanks in the template with one of four word sets: water (stimulus set 1), daddy (stimulus set 2), oranges (stimulus set 3), and chair (stimulus set 4). All trials were automatically sourced from a randomized question bank written by the authors and stored on D2L. For the reading group, instructional materials included six pages (pp. 529–534) on Skinner’s (1957) elementary verbal operants found in Cooper, Heron, and Heward (2007). The reading provided plain-English definitions, paragraph-length descriptions, and multiple examples of the antecedents and consequences for each verbal operant. Prior to the experiment, both groups were also exposed to a PowerPoint presentation (30 slides total) and passed a nine-question multiple-choice quiz on the definitions of point-to-point correspondence and formal similarity as described by Skinner (1957). SB trials required participants to click the correct comparison stimulus in response to a sample stimulus (i.e., multiple choice), while TB tasks required participants to type their response (i.e., fill in the blank).
Table 1.
Template for six four-member stimulus classes consisting of operant name, antecedent, consequence, and example
| | Name (A) | Antecedent (B) | Consequence (C) | Example (D) |
|---|---|---|---|---|
| (1) | Echoic (say “___”) | Verbal stimulus with point-to-point correspondence and formal similarity (hear “___”) | Generalized conditioned reinforcement (GCR) (“yes, you said ___”) | An individual hears the word “___”, responds by saying “___”, and receives generalized conditioned reinforcement such as “yes, you said ___” |
| (2) | Mand (“___, please”) | Motivating operation (___) | Specific reinforcement (given ___) | An individual ___, responds by saying “___, please”, and receives specific reinforcement when given ___ |
| (3) | Tact (say “___”) | Nonverbal stimulus (see ___) | Generalized conditioned reinforcement (“yes, that’s ___”) | An individual sees ___, a nonverbal stimulus, responds by saying “___”, and receives generalized conditioned reinforcement such as “yes, that’s ___” |
| (4) | Intraverbal (say “___”) | Verbal stimulus without point-to-point correspondence or formal similarity (hear “___”) | Generalized conditioned reinforcement (“yes, ___”) | An individual hears the words “___”, responds by saying “___”, and receives generalized conditioned reinforcement such as “yes, ___” |
| (5) | Textual (say “___”) | Verbal stimulus with point-to-point correspondence and without formal similarity (see ___ written) | Generalized conditioned reinforcement (“yes, you read ___”) | An individual sees ___ as a written word, responds by saying “___”, and receives generalized conditioned reinforcement such as “yes, you read ___” |
| (6) | Transcription (write ___) | Verbal stimulus with point-to-point correspondence and without formal similarity (hear “___”) | Generalized conditioned reinforcement (“yes, you wrote ___”) | An individual hears the word “___”, responds by writing ___, and receives generalized conditioned reinforcement such as “yes, you wrote ___” |
Blanks were filled with corresponding stimuli from each of the following sets to create a total of 96 experimental stimuli: stimulus set 1 (water), stimulus set 2 (daddy), stimulus set 3 (oranges), or stimulus set 4 (chair)
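To make the arithmetic of the stimulus bank concrete (6 operant classes × 4 class members × 4 fill-word sets = 96 stimuli), the sketch below shows one way such a template could be filled programmatically. The template strings, dictionary structure, and `build_stimuli` helper are illustrative assumptions, not the authors’ actual D2L materials.

```python
# Illustrative sketch of filling a class template with four stimulus sets.
# Template text is abbreviated and hypothetical, not the authors' materials.

FILL_WORDS = {1: "water", 2: "daddy", 3: "oranges", 4: "chair"}  # stimulus sets 1-4

# One abbreviated template per operant; A = name, B = antecedent,
# C = consequence, D = example; "{w}" marks the blank to be filled.
TEMPLATES = {
    "echoic": {
        "A": 'Echoic (say "{w}")',
        "B": 'Verbal stimulus with point-to-point correspondence and formal similarity (hear "{w}")',
        "C": 'Generalized conditioned reinforcement ("yes, you said {w}")',
        "D": 'An individual hears "{w}", says "{w}", and receives generalized conditioned reinforcement.',
    },
    "mand": {
        "A": 'Mand ("{w}, please")',
        "B": "Motivating operation ({w})",
        "C": "Specific reinforcement (given {w})",
        "D": 'An individual wants {w}, says "{w}, please", and is given {w}.',
    },
    # ... tact, intraverbal, textual, and transcription follow the same pattern
}

def build_stimuli(templates, fill_words):
    """Return a dict keyed by (operant, member, set_number) -> stimulus text."""
    stimuli = {}
    for operant, members in templates.items():
        for member, text in members.items():             # A, B, C, D
            for set_number, word in fill_words.items():  # sets 1-4
                stimuli[(operant, member, set_number)] = text.format(w=word)
    return stimuli

bank = build_stimuli(TEMPLATES, FILL_WORDS)
# With all six operant templates supplied, len(bank) == 6 * 4 * 4 == 96.
print(len(bank), bank[("mand", "B", 1)])   # -> 32 Motivating operation (water)
```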
Experimental Design
A between-group design was used to compare the effects of each condition, with the following condition sequence followed in each: pretest-instruction-posttest-generalization. The equivalence group completed, in order, a TB pretest; the PowerPoint presentation and quiz described previously; the online equivalence procedure; TB and SB posttests; and tests for generalization. The reading group completed an identical sequence with the only exception being the substitution of an assigned reading for the online equivalence procedure. Follow-up probes were conducted 7 weeks after the last participant completed the final posttest. Efforts were made to ensure that participants were not exposed to the instructional material outside the study by checking the syllabi of concurrent courses and confirming the absence of the relevant content.
Dependent Measures and Interobserver Agreement
The primary dependent measure was the percentage of correct responses on TB probe trials. This measure allowed comparisons to be made between the emergence of TB responses following the online stimulus equivalence procedure and following the assigned reading. Secondary dependent measures were (1) the percentage of correct responses on SB probe trials, (2) response generalization during novel TB probe trials, and (3) response generalization during novel SB probe trials. Interobserver agreement (IOA) was calculated for 45 % of TB trials across all trial types for each group using an item-by-item method: for each participant and trial type, the number of agreements was divided by the number of agreements plus disagreements and multiplied by 100 %. Group mean IOA scores for each trial type were used to calculate an overall group IOA score. The resulting IOA score was 98 % (range, 83–100 %) for the equivalence group and 98 % (range, 89–100 %) for the reading group. SB probe trials were automatically scored by D2L.
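For readers implementing this type of scoring check, the item-by-item IOA computation described above can be expressed in a few lines; the scoring vectors below are hypothetical and are used only to illustrate the calculation.

```python
# Minimal sketch of item-by-item interobserver agreement (IOA) for TB trials.
# The scoring vectors below are hypothetical, for illustration only.

def ioa_percent(observer1, observer2):
    """Item-by-item IOA: agreements / (agreements + disagreements) * 100."""
    if len(observer1) != len(observer2):
        raise ValueError("Scorers must rate the same number of trials.")
    agreements = sum(a == b for a, b in zip(observer1, observer2))
    return 100.0 * agreements / len(observer1)

# Hypothetical correct/incorrect (1/0) scores for one participant and trial type.
primary   = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1]
secondary = [1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1]
print(f"IOA = {ioa_percent(primary, secondary):.0f}%")  # -> IOA = 92%
```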
Procedure
TB Pretest Probes
The pretest assessed five relations—example-consequence (D-C), example-antecedent (D-B), example-name (D-A), antecedent-name (B-A), and consequence-name (C-A)—and consisted of six trials per relation (i.e., one per verbal operant) for a total of 30 trials from stimulus set 1. This order ensured that the content of sample stimuli did not inform responses to subsequent trials. Trials within each subgroup were randomized. A trial ended when the participant clicked the next page button located directly below the response field. Only one trial was presented on screen at any given time, and participants were not able to view or edit previous trials. SB pretests were not conducted due to the risk of exposing participants to sample and comparison stimuli contained within the TB trials. No feedback was provided for pretest responses.
Instruction
Both groups were instructed to first complete the PowerPoint presentation and quiz. After passing the quiz, participants in the reading group were automatically granted access to pages 529–534 of Cooper et al. (2007) through the D2L system and instructed to study the material thoroughly before completing the posttests. Participants in the equivalence group did not have access to the Cooper et al. reading and were instead instructed to complete the online equivalence procedure.
Online Equivalence Procedure
The purpose of the online equivalence procedure was to teach three relations: name-antecedent (A-B), name-consequence (A-C), and name-example (A-D). Experimental stimuli for the online equivalence procedure included stimulus set 1 and stimulus set 2. Each trial block comprised 12 randomized trials for a particular relation and included two trials for each of the six verbal operants. Trial onset was indicated by the presentation of a sample stimulus (e.g., A1) and six comparison stimuli (e.g., B1–6) from which to choose. The order of comparison stimuli was automatically randomized, and a selection was made by clicking on small circles located to the left of each comparison stimulus. Trials were otherwise conducted in a similar fashion to the pretest: a trial ended when the participant clicked the next page button located directly below the comparison stimuli, only one trial was presented onscreen at any given time, and participants were not able to view or edit previous trials. Onscreen corrective feedback was provided by D2L following the completion of each 12-trial block. Feedback included a representation of all trials with participant responses shown and correct responses highlighted. The mastery criterion was set at 92 % (i.e., 11 correct responses out of 12), and participants were allowed 10 attempts per test. Access to tests for symmetry was not granted until criterion was met on the corresponding taught relation; that is, mastery of the name-antecedent (A-B) relation granted access to antecedent-name (B-A) tests. Following mastery of each taught relation, a corresponding 12-trial test for symmetry—antecedent-name (B-A), consequence-name (C-A), or example-name (D-A)—was conducted. The criterion was set at 92 % for each of these three tests. In the event that criterion was not met, participants were required to retake the instructional trial block. Two tests for transitivity were conducted in a similar fashion to the tests for symmetry. These contained either example-antecedent (D-B) or example-consequence (D-C) relations and were conducted following mastery of all tests for symmetry. Tests for transitivity did not include a mastery criterion, nor were participants provided their score or corrective feedback by D2L.
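The contingencies just described—12-trial blocks, an 11-of-12 mastery criterion, up to 10 attempts per test, symmetry tests unlocked only after mastery of the corresponding taught relation, and retaking instruction after a failed symmetry test—can be summarized in the following sketch. This is a hypothetical outline of the logic, not the authors’ D2L configuration; `score_fn` stands in for whatever routine administers and scores one 12-trial block.

```python
# Hypothetical outline of the trial-block contingencies described above;
# score_fn(relation) is assumed to administer one 12-trial block and return
# the proportion of correct responses.

MASTERY = 11 / 12      # 92 % criterion (11 of 12 trials correct)
MAX_ATTEMPTS = 10      # attempts allowed per test

TAUGHT = ["A-B", "A-C", "A-D"]                        # name-antecedent, name-consequence, name-example
SYMMETRY = {"A-B": "B-A", "A-C": "C-A", "A-D": "D-A"}
TRANSITIVITY = ["D-B", "D-C"]                          # no criterion, no feedback

def teach_to_criterion(relation, score_fn):
    """Repeat instructional blocks until the mastery criterion or attempt limit is met."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if score_fn(relation) >= MASTERY:
            return attempt            # trials to criterion for this relation
    return None                       # criterion not met within the allowed attempts

def run_procedure(score_fn):
    for taught in TAUGHT:
        teach_to_criterion(taught, score_fn)             # instruction on the taught relation
        while score_fn(SYMMETRY[taught]) < MASTERY:      # symmetry test unlocked after mastery
            teach_to_criterion(taught, score_fn)         # failed symmetry test -> retake instruction
    for relation in TRANSITIVITY:                        # transitivity tests: scored silently,
        score_fn(relation)                               # no mastery criterion, no feedback
```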
SB and TB Posttest Probes
SB posttest probes were conducted for the following relations: A-B, B-A, A-C, C-A, A-D, D-A, D-B, and D-C. TB posttest probes were conducted for the same relations as were assessed at pretest: D-C, D-B, D-A, B-A, and C-A. TB probes were presented in an identical fashion to the pretest probes, with the addition of an equal number of trials from stimulus set 2; no feedback was provided. SB posttest probes were presented in an identical fashion to those in the online equivalence procedure.
Generalization Posttest Probes
Both TB and SB generalization posttest probes were identical to the posttests except that they were conducted using only example-name (D-A) relations from stimulus set 3 and stimulus set 4. No feedback was presented, and participants had not previously been exposed to any of the stimuli in these sets.
Generative test probes consisted of six TB trials during which participants were asked to provide a novel example of each verbal operant complete with antecedent, response, and consequence.
Follow-Up Probes
All participants were asked to complete follow-up probes approximately 7 weeks following their final experimental session. Additional course credit was offered for completion. Participants were again instructed to complete this test in one session. Follow-up probes were identical to the SB, TB, and generalization posttest probes and comprised 108 total trials.
Results and Discussion
Trials to criterion ranged from one to four for instructed relations. A Pearson product-moment correlation coefficient was computed to assess the relationship between trials to criterion during the online equivalence procedure and TB posttest scores for corresponding symmetry relations. There was a negative correlation between the two variables, r = −.471, n = 39, p = .002. This suggests that participants who struggled during the online equivalence procedure had lower TB posttest scores for symmetry relations.
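A correlation of this sort is straightforward to reproduce with one’s own data. The sketch below uses scipy with made-up trials-to-criterion and posttest values; it illustrates the analysis only and does not use the study’s data.

```python
# Hypothetical sketch: correlating trials to criterion with TB posttest scores.
from scipy.stats import pearsonr

# Made-up data: trials to criterion per taught relation and the corresponding
# TB symmetry posttest score (percent correct); these are not the study's data.
trials_to_criterion = [1, 1, 2, 3, 1, 4, 2, 1, 3, 2, 1, 1, 2]
tb_posttest_scores  = [92, 83, 75, 58, 100, 50, 67, 92, 58, 75, 83, 92, 67]

r, p = pearsonr(trials_to_criterion, tb_posttest_scores)
print(f"r = {r:.3f}, p = {p:.3f}")  # a negative r indicates more attempts, lower scores
```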
A multivariate analysis of covariance (MANCOVA) was performed across five dependent variables—TB antecedent-name (B-A), consequence-name (C-A), example-name (D-A), example-antecedent (D-B), and example-consequence (D-C) relations—with the pretest entered as a covariate. Multivariate outcomes were evaluated by computing a group × covariate (2 × 2) MANCOVA design. Between-subjects outcomes failed to yield a significant familywise effect of group, Wilks’s Λ = .734, F(5, 20) = 1.452, p = .249, η² = .266. Mauchly’s test, which assesses the degree of proportionality between the hypothesized and observed variance/covariance matrices, was nonsignificant, suggesting no evidence of violations of the multivariate sphericity assumption. In summary, the online equivalence procedure was not significantly more effective in promoting TB responses than reading a section from Cooper et al. (2007).
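A multivariate model with a group factor and a pretest covariate can be specified, for example, with statsmodels’ MANOVA formula interface. The sketch below is a generic illustration with randomly generated scores and hypothetical column names (ba, ca, da, db, dc for the five TB relations); it is not the authors’ analysis script.

```python
# Hypothetical sketch of a MANCOVA-style model: five TB posttest relations as
# dependent variables, group as the factor, and pretest score as a covariate.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 26
df = pd.DataFrame({
    "group":   ["equivalence"] * 13 + ["reading"] * 13,
    "pretest": rng.integers(0, 40, n),     # hypothetical pretest percentages
    "ba": rng.integers(40, 101, n),        # hypothetical posttest scores per relation
    "ca": rng.integers(40, 101, n),
    "da": rng.integers(40, 101, n),
    "db": rng.integers(40, 101, n),
    "dc": rng.integers(40, 101, n),
})

# Five dependent variables on the left; group and the pretest covariate on the right.
model = MANOVA.from_formula("ba + ca + da + db + dc ~ group + pretest", data=df)
print(model.mv_test())   # reports Wilks' lambda, F, and p for each term
```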
Group scores for SB and TB tests are displayed in Table 2. On TB posttests (Fig. 1), the equivalence group met criterion (i.e., 92 %) on two of the five TB relations, consequence-name (C-A) and example-name (D-A), while the reading group did not meet criterion on any of the five relations tested. This difference is analogous to the equivalence group receiving a C (i.e., 70–79 %) and the reading group an F (i.e., 0–59 %) on a paper-and-pencil examination. Fisher’s exact tests revealed that the difference in the number of participants who met criterion between the equivalence and reading groups was significant for the C-A relation (p = .002) and the D-A relation (p = .037). Thus, performance was enhanced by the online equivalence procedure when participants were required to provide operant names in response to consequences and examples. These results provide some support for those of Walker et al. (2010), who showed that untaught written intraverbal responses emerged after equivalence procedures. At follow-up, neither the equivalence group nor the reading group met criterion on any of the relations tested; in this case, both groups would have received an F on a short-answer paper-and-pencil examination. SB data and all follow-up data are excluded from the figures.
Table 2.
Mean and standard deviations of SB and TB tests for the equivalence and reading groups
| | | Equivalence | | Reading | |
|---|---|---|---|---|---|
| | Test | Mean | SD | Mean | SD |
| SB | Posttest | 95ᵃ | 6 | 88 | 13 |
| | Follow-up | 93 (−2)ᵃ | 5 (−1) | 84 (−4) | 14 (+1) |
| | Generalization | 94ᵃ | 16 | 87 | 18 |
| | Follow-up | 90 (−4) | 18 (+2) | 80 (−7) | 22 (+4) |
| TB | Pretest | 12 | 17 | 16 | 18 |
| | Posttest | 72 (+60) | 16 (−1) | 57 (+41) | 23 (+5) |
| | Follow-up | 41 (−31) | 14 (−2) | 35 (−22) | 18 (−5) |
| | Generalization | 94ᵃ | 16 | 82 | 27 |
| | Follow-up | 76 (−18) | 25 (+9) | 64 (−18) | 25 (−2) |
| | Generative | 67 | 38 | 73 | 41 |
| | Follow-up | 71 (+4) | 39 (+1) | 53 (−20) | 47 (+8) |
Values within parentheses indicate increases and decreases from pretest to posttest and from posttest to follow-up
ᵃ Met mastery criterion of 92 % or greater
Fig. 1.

The percentage of correct responses during pretests (open circles) and posttests (closed circles) for untaught TB relations in the equivalence (left) and reading (right) groups. Dashed horizontal lines represent the mastery criterion, while solid lines represent pretest (lower) and posttest (upper) means
On SB posttests, however, the equivalence group met criterion on all eight SB relations tested, whereas the reading group met criterion on only two of the eight relations. Fisher’s exact tests revealed that the difference in the number of relations at criterion between the equivalence and reading groups was significant (p = .003). Thus, performance on SB tests was enhanced by the online equivalence procedure. At follow-up, the equivalence group maintained criterion-level performance on five of the eight relations, whereas the reading group reached criterion on only one of the eight relations. Fisher’s exact tests revealed that the difference in the number of relations at criterion between the equivalence and reading groups at follow-up was not significant (p = .059). The difference in scores between groups, at both posttest and follow-up, can be seen as the difference between receiving an A (i.e., 90–100 %) and a B (i.e., 80–89 %) grade on a multiple-choice examination.
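Each Fisher’s exact test reported here compares the two groups on a dichotomous outcome (criterion met vs. not met) arranged as a 2 × 2 table. The sketch below uses hypothetical counts to illustrate the computation; the numbers are not the study’s data.

```python
# Hypothetical sketch of a Fisher's exact test on criterion attainment.
# Counts are made up for illustration and are not the study's data.
from scipy.stats import fisher_exact

#        met criterion, did not meet
table = [[11, 2],                 # equivalence group (13 participants)
         [ 4, 9]]                 # reading group (13 participants)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```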
On tests for generalization (Fig. 2) to novel example-name (D-A) relations, the equivalence group met criterion on both SB and TB posttests, whereas the reading group did not meet criterion on either SB or TB tests. Fisher’s exact tests revealed that the difference in the number of participants who met criterion between the equivalence and reading groups was not significant for the SB relation (p = .322) or the TB relation (p = .08). Again, the difference is equivalent to one letter grade on a multiple-choice (SB) and a paper-and-pencil (TB) examination, respectively. Neither the equivalence nor the reading group met criterion on SB or TB tests for generalization at follow-up, nor did either group meet criterion on TB generative posttests or follow-up probes.
Fig. 2.

Percentage of correct TB responses during generalization (left) and generative (right) tests for untaught relations in the equivalence (black bars) and reading (white bars) groups. The dashed line represents the mastery criterion
Although the equivalence group scored higher on all but one test (the TB generative posttest), performance was comparable to that of the reading group. These results are similar to those of Lovett et al. (2011) and suggest that an online stimulus equivalence procedure may be as effective as the study methods students may bring to assigned readings (e.g., massed practice, mnemonic devices, and acronyms). Between-group differences are therefore discussed in terms of the advantages and disadvantages of each approach.
There is an obvious advantage in preparation time for instructors when simply requiring students to read material. However, the results suggest that the reading group may have had inaccurate perceptions about their mastery of the material and could have performed at a higher level had they spent as much time studying as the equivalence group did. The equivalence group spent an average of 46 min (SD = 27) completing the stimulus equivalence procedure, while the reading group spent an average of 15 min (SD = 13) viewing the six pages from the chapter on verbal behavior. It is unclear why participants 7 and 13 did not access the reading. Pearson product-moment correlation coefficients were computed to assess the relationship between instruction/reading time and SB, TB, and overall posttest scores. No significant correlations were found.
Of particular interest was the high accuracy of SB responses but the relative absence of TB example-antecedent (D-B) and example-consequence (D-C) responses in both groups. Whether these results point to a limitation of the present online procedure in promoting lengthy TB responses or to a limitation of both assigned readings and equivalence procedures in general remains unclear. What can be said is that this phenomenon further emphasizes the distinction between SB and TB responding. The former requires only an effective scanning repertoire and a conditional discrimination, while the latter involves the strengthening of a distinguishable topography given some specific controlling variable, as well as point-to-point correspondence between the response form and the response product (Michael 1985). This point seems worth stressing, as many higher education classrooms incorporate SB (i.e., multiple-choice) examinations, the results of which may not generalize to relevant response topographies. Future investigations might evaluate a technique employed by O’Neill and Rehfeldt (2014) and examined further by O’Neill et al. (2015), wherein a TB component is included in the procedure by asking participants to read responses aloud during SB instruction.
Skinner’s taxonomy of verbal behavior is often challenging for students and practitioners due to the complexity of the definitions involved. Like the teaching machines of Skinner (1958), stimulus equivalence procedures have the benefit of controlling for arbitrary decision making on the part of the student with regard to their level of preparation for testing. Results suggest that stimulus equivalence procedures might be used to learn, with minimal errors, the foundational material that will allow instructors to focus direct instruction on honing the student’s TB repertoire. In addition, stimulus equivalence procedures could be employed in conjunction with other behavior-analytic approaches that combine classroom instruction with online learning, such as the computer-aided personalized system of instruction (Pear and Martin 2004), in an effort to promote a self-paced learning environment.
Contributor Information
John O’Neill, Phone: (407) 267-6880, Email: joneill@siu.edu.
Ruth Anne Rehfeldt, Email: rehfeldt@siu.edu.
Chris Ninness, Email: cninness@suddenlink.net.
Bridget E. Muñoz, Email: bridgemunoz@siu.edu.
James Mellor, Email: jmellor@siu.edu.
References
- Cooper JO, Heron TE, Heward WL. Applied behavior analysis. 2nd ed. Upper Saddle River: Pearson Prentice Hall; 2007.
- Fields L, Travis R, Yadlovker ED, Roy D, Aguiar-Rocha L, Sturmey P. Equivalence class formation: a method for teaching statistical interactions. Journal of Applied Behavior Analysis. 2009;42:575–593. doi: 10.1901/jaba.2009.42-575.
- Fienup DM, Covey DP, Critchfield TS. Teaching brain–behavior relations economically with stimulus equivalence technology. Journal of Applied Behavior Analysis. 2010;43:19–33. doi: 10.1901/jaba.2010.43-19.
- Lovett S, Rehfeldt R, Garcia Y, Dunning J. Comparison of a stimulus equivalence protocol and traditional lecture for teaching single-subject designs. Journal of Applied Behavior Analysis. 2011;44:819–833. doi: 10.1901/jaba.2011.44-819.
- Michael J. Two kinds of verbal behavior plus a possible third. The Analysis of Verbal Behavior. 1985;3:2–5. doi: 10.1007/BF03392802.
- Ninness C, Rumph R, McCuller G, Harrison C, Ford AM, Ninness SK. A functional analytic approach to computer-interactive mathematics. Journal of Applied Behavior Analysis. 2005;38:1–22. doi: 10.1901/jaba.2005.2-04.
- Ninness C, Barnes-Holmes D, Rumph R, McCuller G, Ford AM, Payne R, et al. Transformations of mathematical and stimulus functions. Journal of Applied Behavior Analysis. 2006;39:299–321. doi: 10.1901/jaba.2006.139-05.
- Ninness C, Dixon M, Barnes-Holmes D, Rehfeldt RA, Rumph R, McCuller G, et al. Constructing and deriving reciprocal trigonometric relations: a functional analytic approach. Journal of Applied Behavior Analysis. 2009;42:191–208. doi: 10.1901/jaba.2009.42-191.
- O’Neill J, Rehfeldt RA. Selection-based responding and the emergence of topography-based responses to interview questions. The Analysis of Verbal Behavior. 2014;30:178–183. doi: 10.1007/s40616-014-0013-z.
- O’Neill J, Blowers AP, Henson L, Rehfeldt RA. Further analysis of selection-based instruction, lag reinforcement schedules, and the emergence of topography-based responses to interview questions. The Analysis of Verbal Behavior. 2015;31:126–136. doi: 10.1007/s40616-015-0031-5.
- Pear JJ, Martin TL. Making the most of PSI with computer technology. In: Moran DJ, Malott RW, editors. Evidence-based educational methods. San Diego: Elsevier & Academic Press; 2004. pp. 223–243.
- Sidman M. Reading and auditory-visual equivalences. Journal of Speech and Hearing Research. 1971;14:5–13. doi: 10.1044/jshr.1401.05.
- Sidman M. Equivalence relations and behavior: a research story. Boston: Authors Cooperative; 1994.
- Skinner BF. Verbal behavior. Cambridge: Prentice Hall; 1957.
- Skinner BF. Teaching machines. Science. 1958;128:969–977. doi: 10.1126/science.128.3330.969.
- Walker B, Rehfeldt RA. An evaluation of the stimulus equivalence paradigm to teach single-subject design to distance education students via Blackboard. Journal of Applied Behavior Analysis. 2012;45:329–344. doi: 10.1901/jaba.2012.45-329.
- Walker B, Rehfeldt RA, Ninness CF. Using the stimulus equivalence paradigm to teach course material in an undergraduate rehabilitation course. Journal of Applied Behavior Analysis. 2010;43:615–633. doi: 10.1901/jaba.2010.43-615.
