The process of teaching job interview skills to adults with intellectual disability can be time-consuming and arduous for vocational development centers. On review of the existing literature, Schloss et al. (1988) recognized that although the social validity of interview skills has been well documented, few studies have attempted to validate effective strategies for teaching such behaviors. Behavioral skills training (BST) has been shown to be effective (Hall et al. 1980; Kelly et al. 1980; Schloss et al. 1988) but requires repeated instruction, modeling, rehearsal, and feedback. One strategy that might attenuate this issue has been the subject of conceptual analysis in recent years: the potential of selection-based responding to promote the emergence of topography-based responding.
Michael (1985) identified two types of verbal behavior: selection-based and topography-based. The former requires an effective scanning repertoire and a subsequent conditional discrimination between stimuli, and involves no point-to-point correspondence between the response form and the response product (e.g., choosing the correct answer during a multiple-choice examination). In comparison, the latter involves an increase in the strength of a distinguishable topography given some specific controlling variable, with point-to-point correspondence between the response form and the response product (e.g., providing the correct answer during an oral examination). Since Michael's seminal paper on the topic, researchers have continued to clarify the distinction between selection-based and topography-based verbal behavior. Potter and Brown (1997) suggested that topography-based responses might promote the acquisition of selection-based responses in some contexts, especially in participants with extensive verbal repertoires. In examining the role of such verbal behavior, Potter et al. (1997) found that participants preferred selection-based tasks that incorporated a topography-based component when taught relations between sample stimuli consisting of flag-like patterns and comparison stimuli consisting of dot patterns. These researchers also found that participants engaged in consistent vocal-verbal responding (i.e., problem-solving) during both selection-based tasks and selection-based tasks with a topography-based component. This finding was seen as support for the notion that some selection-based conditional discriminations, and emergent equivalence relations (see Walker et al. 2010; Lovett et al. 2011; Walker and Rehfeldt 2012), are promoted by topography-based vocal-verbal responding in individuals with extensive verbal repertoires. Indeed, it is likely that typically functioning adults engage in covert topography-based responses during selection-based tasks (e.g., multiple-choice examinations). Polson and Parsons (2000) commented on the concerns associated with ignoring the selection-based versus topography-based distinction and cautioned researchers about the use of selection-based match-to-sample (MTS) tasks. The current study sought to validate the use of a selection-based instructional protocol that included a topography-based component to teach intraverbal responses (i.e., verbal operants evoked by a verbal discriminative stimulus, lacking point-to-point correspondence with that stimulus, and followed by generalized conditioned reinforcement) to interview questions in two individuals with a learning disability.
Method
Participants
Participants included two individuals recruited from a vocational development center at a local Midwestern university. Mike was a 23-year-old male diagnosed with autism and a learning disability not otherwise specified. Carla was a 20-year-old female diagnosed with a learning disability not otherwise specified. Both participants' verbal repertoires were sufficient to communicate their vocational interests and experience. The director of the center identified both participants as computer literate and as having some exposure to formal interview instructional programs. Each was in need of additional instruction before the center would consider them ready for community interviews.
Setting, Apparatus, and Stimuli
The experiment was conducted in a private meeting room in the vocational development center as well as in the living area of each participant's residence. The experimental areas consisted of two chairs, a small table, and a laptop computer. The participant and experimenter sat facing each other on opposite sides of the table during mock interviews. Sessions lasted 3–5 min and were conducted four to six times per week depending on availability. On any particular day, a maximum of two instructional sessions was completed, with a short break in between. The apparatus was programmed in Microsoft® PowerPoint® and viewed on the laptop computer in full-screen presentation mode. Presentations consisted of 6–8 slides, one of which was used to redirect repetitive responses and another to redirect inaccurate responses. An interview question was presented in a box at the top center of a green screen upon start-up. This was followed by a 3-s delay, after which 4–5 response options appeared below the question on the same green screen. The number of comparisons varied according to the number of accurate responses identified by the director of the vocational development center. These options appeared simultaneously on separate lines spaced approximately 17.5 mm apart. The order of response options was rotated on a trial-by-trial basis using 4–6 slides, depending on the number of response options for a particular question. An example of a slide is shown in Fig. 1. All text appeared in black Arial font, size 28–30.
Fig. 1. Example of an instructional program containing six total slides. Slides 1–4 (green background) are identical except for random rotation of response options. Slide 5 (orange background) and Slide 6 (red background) were used to redirect responses that did not meet the Lag 1 criterion and inaccurate responses, respectively
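For readers who want a procedural summary of the apparatus, the following minimal Python sketch approximates the trial-presentation logic described above (question first, a 3-s delay, then response options in a freshly rotated order). It is not the PowerPoint program used in the study; the question text is drawn from the protocol, but the response options, function names, and console-based display are illustrative placeholders.

```python
import random
import time

# Illustrative placeholders; the actual response options were drawn from the
# study's instructional protocol, not from this sketch.
QUESTION = "What motivates you?"
RESPONSE_OPTIONS = ["Option A", "Option B", "Option C", "Option D"]

def present_trial(question, options, delay_s=3):
    """Show the question, wait approximately 3 s, then show the response
    options in a freshly rotated (randomized) order, as on each slide."""
    print(question)
    time.sleep(delay_s)  # 3-s delay before the options appear
    rotated = random.sample(options, k=len(options))  # trial-by-trial rotation
    for position, option in enumerate(rotated, start=1):
        print(f"  {position}. {option}")
    return rotated

if __name__ == "__main__":
    present_trial(QUESTION, RESPONSE_OPTIONS)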
Design
A multiple-baseline probe design across a set of three questions was employed with each participant. This design allowed for staggered exposure to the intervention across three behaviors (i.e., responding to three different questions) while controlling for major threats to internal validity.
Dependent Measure, Interobserver Agreement, and Treatment Integrity
The primary dependent variables were accurate and inaccurate responses. Accurate responses were defined as vocal-verbal responses related to the work environment and the participant's employment history. Inaccurate responses were defined as vocal-verbal responses unrelated to the work environment and the participant's employment history. No answer was defined as the vocal-verbal response "no" or a functionally equivalent response (e.g., "not at this time"). Non-vocal responses (e.g., shaking the head) in the absence of a vocal-verbal response were not anticipated and did not occur. All sessions were video-recorded. Interobserver agreement and treatment integrity were scored for 33 % of instructional sessions and 80 % of pre- and posttest mock interviews by two independent observers. Interobserver agreement was calculated by dividing the number of sessions with 100 % agreement by the total number of sessions scored and multiplying by 100 %. Resulting interobserver agreement was 92 % for mock interview sessions and 80 % for instructional sessions. Treatment integrity was calculated by dividing the number of trials with 100 % implementation by the total number of trials scored and multiplying by 100 %. Resulting treatment integrity was 100 % for mock interview sessions and 95 % for instructional sessions.
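As a concrete illustration of these calculations, the short sketch below expresses the session-level agreement and trial-level integrity formulas in code. The counts passed in are placeholders for illustration, not the study's raw data.

```python
def percent_with_full_agreement(num_perfect, num_scored):
    """Units (sessions or trials) with 100% agreement or 100% implementation,
    divided by the total number of units scored, multiplied by 100."""
    return 100.0 * num_perfect / num_scored

# Placeholder counts for illustration only (not the study's raw data).
print(percent_with_full_agreement(num_perfect=9, num_scored=10))   # 90.0
print(percent_with_full_agreement(num_perfect=19, num_scored=20))  # 95.0
```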
Procedure
Topography-Based Pretests
The first author sat at the table, facing the participant, and asked seven interview questions, one at a time. The director of the program had identified each of the seven questions as being particularly crucial to the interview process. The seven questions were as follows:
What can you tell me about yourself?
Why do you want to work here?
What are your strengths?
How do you handle pressure and stress?
What motivates you?
Tell me how you can work in a team.
Do you have any questions?
All seven questions were asked during all pretests. After the participant's initial response to each question, a general acknowledgement (e.g., "okay" or "alright") was provided, followed immediately by the verbal prompt "anything else?" No other feedback was provided during baseline. When performance on pretest probes was deemed visually stable, accurate and inaccurate answers were identified by the director of the vocational development center. Of the seven initial questions, three were selected for instruction because the participant provided (a) inaccurate answers or (b) no answers. Some accurate and inaccurate responses from pretest were retained for inclusion in the instructional protocol, but no more than one of each was retained for any particular question. Question 1 was "What can you tell me about yourself?" Question 2 was "What motivates you?" Question 3 was "Do you have any questions?" Because the same three questions were selected for both participants, the order of presentation during intervention differed across participants to control for potential diffusion of the intervention.
Selection-Based Instruction
The instructional protocol operated on a Lag 1 reinforcement schedule for accurate answers. This required participants to select a response different from the immediately preceding selection in order to contact a generalized conditioned reinforcer (GCR) at the end of each trial. In addition, the program provided (a) an audio recording of a single interview question when the question box was clicked, (b) audio feedback in the form of a GCR (e.g., "great answer", "very interesting") for accurate selections that also met the Lag 1 reinforcement schedule, (c) visual redirection to the previous screen ("good answer but click here to try again" on an orange background) for accurate selections that did not meet the Lag 1 criterion, and (d) a generalized conditioned punisher plus redirection to the previous screen ("Not appropriate, click here to try again" on a red background) for inaccurate selections. No audio feedback was provided during redirection to a previous screen.
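To make the per-trial contingency concrete, the following minimal Python sketch expresses the Lag 1 decision rule as we read it from the description above. It is an interpretation, not the software used in the study; the feedback strings are taken from the protocol description, and the function and parameter names are illustrative.

```python
def lag1_feedback(selection, accurate_options, previous_selection):
    """Consequence for a single trial under the Lag 1 contingency: reinforce
    accurate selections that differ from the immediately preceding selection;
    redirect accurate repeats and all inaccurate selections."""
    if selection not in accurate_options:
        # Inaccurate selection: generalized conditioned punisher plus
        # redirection to the previous screen (red background).
        return "Not appropriate, click here to try again"
    if selection == previous_selection:
        # Accurate but identical to the preceding selection: fails the Lag 1
        # criterion, so redirect without a reinforcer (orange background).
        return "good answer but click here to try again"
    # Accurate and different from the preceding selection: meets Lag 1,
    # so deliver a generalized conditioned reinforcer.
    return "great answer"
```

Note that only the third branch ends the trial with audio reinforcement; both redirection branches return the participant to the previous screen without audio feedback, consistent with the description above.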
Participants sat at the table with the laptop computer in front of them. Before the start of each session, participants were instructed to complete the following steps on every trial: (a) click on the question box at the top of the screen and wait for the audio recording to play, (b) state aloud the chosen answer (i.e., a textual response), and (c) click on the chosen answer. Twenty trials of an individual question were completed per session. Mastery criteria stipulated that the Lag 1 criterion be met on 90 % of trials (i.e., 18 out of 20 trial responses differed from the immediately preceding trial's response). In addition, participants were required to complete three consecutive instructional sessions at or above the 90 % Lag 1 criterion before moving to posttest for any particular question. These mastery criteria ensured that (a) different accurate responses were selected multiple times but not consecutively within a session, and (b) inaccurate responses were not selected more than twice per session before moving to posttest. Booster sessions were identical to selection-based instruction.
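The session-level mastery rules can be summarized in the same spirit. The sketch below assumes simple lists of per-trial and per-session outcomes and is illustrative only; it is not part of the study's materials.

```python
def session_meets_criterion(trial_met_lag1, criterion=0.90):
    """A 20-trial session passes if at least 90% of trials met the Lag 1
    criterion (i.e., at least 18 of 20 selections differed from the
    immediately preceding selection)."""
    return sum(trial_met_lag1) / len(trial_met_lag1) >= criterion

def ready_for_posttest(session_passed, consecutive_required=3):
    """Advance to posttest only after three consecutive sessions at or
    above the 90% Lag 1 criterion."""
    streak = 0
    for passed in session_passed:
        streak = streak + 1 if passed else 0
        if streak >= consecutive_required:
            return True
    return False
```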
Topography-Based Posttests and Generalization Probes
Posttest sessions were identical to pretest sessions, and all seven questions were asked during all posttests. The criteria for moving from posttest to intervention on the next question were (a) zero inaccurate topography-based vocal-verbal responses, (b) an increase in the number of accurate topography-based vocal-verbal responses, and (c) stable patterns of topography-based vocal-verbal responding to all questions remaining in pretest. Generalization probes were identical to pretest except that they were conducted in a different room at the vocational development center by an experienced staff member who had no involvement with the intervention.
Results and Discussion
Both participants exhibited an increase in the number of accurate topography-based vocal-verbal responses to three interview questions after exposure to the selection-based instructional protocol. In addition, inaccurate topography-based vocal-verbal responses were reduced to zero or near-zero levels. As shown in Fig. 2, Mike provided a low level of accurate responses to all three questions (range, 0 to 2) and a higher level of inaccurate responses (range, 0 to 3) during pretests. Figure 3 shows that Carla also provided a low level of accurate responses to all three questions (range, 0 to 2) and a higher level of inaccurate responses (range, 0 to 4). During the instructional phase, both participants met or surpassed the Lag 1 criterion within two sessions and maintained criterion-level responding for at least three consecutive sessions before advancing to posttest. Carla took an unexpected 8-day break during this phase to travel to a family event. During the first posttest session, both participants showed an immediate increase in the level of accurate responses across all three questions (range, 2 to 5). In addition, both participants' inaccurate responses decreased to zero. Although accurate responding decreased gradually during follow-up posttests and inaccurate responses began to reemerge for Mike on question 2 and for Carla on question 1, a single booster session with the instructional protocol was sufficient to reverse these effects. Novel inaccurate responses did not occur during any posttest. A follow-up probe conducted 1 week later by an experienced staff member provided further evidence of generalization in Mike's case. As mentioned previously, this staff member had no prior involvement with the study. In summary, each participant was exposed, across 4 weeks, to less than 1 h of a selection-based instructional protocol with a topography-based component to teach relations between intraverbal stimuli. Both participants were then able to provide multiple topography-based intraverbal responses in the absence of the instructional stimuli.
Fig. 2. Responses provided across three questions by Mike. Open data points represent accurate responses, and closed data points represent inaccurate responses. Squares indicate vocal-verbal responses during mock interviews, circles indicate the percentage of responses that met the Lag 1 criterion during instructional sessions, and triangles represent vocal-verbal responses during generalization probes conducted by a staff member
Fig. 3. Responses provided across three questions by Carla. Open data points represent accurate responses, and closed data points represent inaccurate responses. Squares indicate vocal-verbal responses during mock interviews, and circles indicate the percentage of responses that met the Lag 1 criterion during instructional sessions
Further inquiry is warranted to determine the limits as well as the critical components of selection-based instructional protocols. For example, it is unclear whether similar results would have been obtained in the absence of participants' overt textual responding during instructional trials. The possibility remains that covert textual responding could have sufficed for generalization of topography-based intraverbal responses to occur. However, as Polson and Parsons (2000) cautioned, without the accompaniment of a topography-based component, instructors run the risk of responses coming under the control of a unique property of a stimulus arrangement. In the current example, responses might have come under the control of a unique property (i.e., a single word) within any particular response option. It seems plausible that, under those conditions, posttest responses might have been considerably shorter if not inaccurate, owing to a lack of point-to-point correspondence between the choice stimuli and overt topography-based intraverbal responses. Additionally, responding might have come under the control of the Lag 1 reinforcement schedule alone. In that case, non-verbal selection-based responses could have occurred if participants responded on the basis of some physical property rather than the verbal content of the response options. In fact, anecdotal evidence suggests that location discrimination may have occurred during some later sessions, but posttest scores provide evidence against such a simple discrimination. Our results complement those of Potter and Brown (1997) in suggesting that selection-based responses promote the acquisition of topography-based responses, at least in the context of MTS protocols. In addition, evidence of discrimination between accurate and inaccurate stimuli was suggested by both participants' behavior (question 2 for Mike and questions 1 and 2 for Carla). In fact, it seems that convergent multiple control over any one response may have been exerted by (a) textual stimuli (i.e., response options) and vocal-verbal stimuli (i.e., recorded questions) presented by the instructional protocol, and (b) covert discriminative responses (e.g., identifying an accurate response) and self-generated rules. In combination, these sources of control promoted the establishment of divergent multiple control by the target question over multiple accurate responses. The current findings provide support for the proposition by Potter et al. (1997) that some selection-based conditional discriminations, and emergent equivalence relations, are promoted by topography-based vocal-verbal responding in individuals with extensive verbal repertoires. Further investigation into the multiple control of verbal behavior and the distinction between selection-based and topography-based responding is warranted.
One limitation of the current study is that the responses were relatively short and simple relative to the demands of the interview process. Each response consisted of approximately five words, so answers would likely sound scripted during an interview unless the individual was prepared with follow-up statements or questions. Another important limitation with regard to social validity is that assessment was not conducted in the natural setting. In vivo interview recordings, such as those obtained by Kelly et al. (1980), were not possible in the current study but would likely yield crucial information in future research.
Acknowledgements
The authors thank Andrew P. Blowers and Bridget E. Munoz for their assistance with reliability data.
References
- Hall C, Sheldon-Wildgen J, Sherman JA. Teaching job interview skills to retarded participants. Journal of Applied Behavior Analysis. 1980;13(3):433–442. doi: 10.1901/jaba.1980.13-433.
- Kelly JA, Wildman BG, Berler ES. Small group behavioral training to improve the job interview skills repertoire of mildly retarded adolescents. Journal of Applied Behavior Analysis. 1980;13(3):461–471. doi: 10.1901/jaba.1980.13-461.
- Lovett S, Rehfeldt RA, Garcia Y, Dunning J. Comparison of a stimulus equivalence protocol and traditional lecture for teaching single-subject designs. Journal of Applied Behavior Analysis. 2011;44:819–833. doi: 10.1901/jaba.2011.44-819.
- Michael J. Two kinds of verbal behavior plus a possible third. The Analysis of Verbal Behavior. 1985;3:2–5. doi: 10.1007/BF03392802.
- Polson DAD, Parsons JA. Selection-based versus topography-based responding: an important distinction for stimulus equivalence? The Analysis of Verbal Behavior. 2000;17:105–128. doi: 10.1007/BF03392959.
- Potter B, Brown DL. A review of studies examining the nature of selection-based and topography-based verbal behavior. The Analysis of Verbal Behavior. 1997;14:85–104. doi: 10.1007/BF03392917.
- Potter B, Huber S, Michael J. The role of mediating verbal behavior in selection-based responding. The Analysis of Verbal Behavior. 1997;14:41–56. doi: 10.1007/BF03392915.
- Schloss PJ, Santoro C, Wood CE, Bedner MJ. A comparison of peer-directed and teacher-directed employment interview training for mentally retarded adults. Journal of Applied Behavior Analysis. 1988;21(1):97–102. doi: 10.1901/jaba.1988.21-97.
- Walker BD, Rehfeldt RA. An evaluation of the stimulus equivalence paradigm to teach single-subject design to distance education students via Blackboard. Journal of Applied Behavior Analysis. 2012;45:329–344. doi: 10.1901/jaba.2012.45-329.
- Walker BD, Rehfeldt RA, Ninness C. Using the stimulus equivalence paradigm to teach course material in an undergraduate rehabilitation course. Journal of Applied Behavior Analysis. 2010;43:615–633. doi: 10.1901/jaba.2010.43-615.
