Teaching job interview skills is an important part of vocational training, but few studies have attempted to validate strategies for teaching such skills to adults with disabilities. Behavioral skills training (BST), for example, has been shown to be effective (Hall et al. 1980; Kelly et al. 1980; Mozingo et al. 1994; Schloss et al. 1988). However, despite the efficacy of BST, its treatment components of repeated instruction, modeling, rehearsal, and feedback might be time-consuming and arduous for some vocational development centers. One approach to increasing efficiency lies in the potential of selection-based (SB) instructional protocols to promote the emergence of topography-based (TB) verbal behavior (e.g., Lovett et al. 2011; Walker and Rehfeldt 2012; Walker et al. 2010). According to Michael (1985), SB verbal behavior can be defined as a conditional discrimination in which a stimulus or an establishing operation alters the control exerted by another stimulus over a nondistinctive response (e.g., pointing or touching). SB verbal behavior requires an effective scanning repertoire, a subsequent conditional discrimination between stimuli, and no point-to-point correspondence between the response form and the response product (e.g., selecting the right answer to a multiple-choice question). TB verbal behavior involves a distinguishable topography given some specific controlling variable and point-to-point correspondence between the response form and the response product (e.g., providing the right answer to a fill-in-the-blank question). Polson and Parsons (2000) cautioned against ignoring the SB-TB verbal behavior distinction, while others have suggested that TB responses might in fact promote the acquisition of SB conditional discriminations in some contexts (Potter and Brown 1997; Potter et al. 1997).
In a recent investigation, O’Neill and Rehfeldt (2014) found that two individuals with a learning disability were able to provide intraverbal responses (a type of TB verbal behavior) after only 1 h of exposure to an SB instructional protocol. During each trial, participants clicked an on-screen question box and listened to an audio recording of an interview question. Participants then stated aloud (i.e., TB rehearsal) a response selected from an array of 4 to 5 on-screen response options and then clicked the corresponding button (i.e., SB rehearsal). The instructional protocol operated on a lag 1 reinforcement schedule for accurate answers, which required the selection of a different response on each trial. Programmed stimuli consisted of audio recordings of interview questions, audio feedback for accurate responses that met the lag 1 criterion, visual redirection for accurate selections that did not meet the lag 1 criterion, and negative feedback plus redirection for inaccurate responses. Although the protocol was shown to be efficacious, its critical components were unclear. Thus, the purpose of the current study was to identify which of the protocol’s components might be necessary and sufficient to promote the emergence of TB responses to interview questions. We evaluated the additive effects of the SB rehearsal, TB rehearsal, and audio components of the O’Neill and Rehfeldt protocol with three novel individuals diagnosed with a learning disability.
Method
Participants
Three participants were recruited from a university vocational development center. The director of the center selected individuals who were computer literate, had minimal prior exposure to formal interview instructional programs, and were considered in need of additional instruction in interviewing for a job. Dave was a 27-year-old male diagnosed with a learning disability not otherwise specified. Jack was a 19-year-old male diagnosed with a specific learning disorder not otherwise specified. Ernesto was a 22-year-old male diagnosed with a traumatic brain injury and learning disability not otherwise specified. All participants were able to state their vocational interests and experience through vocal communication.
Setting and Stimuli
The experiment was conducted in a classroom at the vocational development center. The room contained tables, chairs, and a laptop computer during sessions. Instructional sessions lasted 3 to 5 min and were conducted 2 to 6 times per week in the same room. During these sessions, the experimenter sat approximately 2 m behind the participant so that the computer screen was visible to both. A maximum of two instructional sessions were completed in 1 day, with a short break between sessions. Microsoft PowerPoint was used to conduct instruction and was displayed on the laptop computer in presentation mode. A diagram of the PowerPoint presentation is provided in Fig. 1 of O’Neill and Rehfeldt (2014). All but two slides were green and consisted of one interview question with accurate and inaccurate response selections rotated throughout the various positions. An orange slide read “good but click here to try again” and was used as redirection for repetitive selections, and a red slide read “Not appropriate, click here to try again” and was used as redirection for inaccurate selections. Upon start-up, a box containing the relevant interview question appeared at the top-center of the screen on a green background. Four to five response options, one of which could be inaccurate (see TB pretests for more detail), appeared below the question box approximately 3 s later. Response options appeared simultaneously, stacked approximately 17.5 mm apart in quasi-random order. All text was displayed in 28- to 30-point Arial font. The program consisted of 6 to 8 total slides, depending on the number of quasi-random response-order variations needed.
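To make the arrangement of programmed stimuli concrete, the following is an illustrative sketch of how one question’s slide set might be represented. The field names and the audio file name are hypothetical and are not taken from the original PowerPoint files; the response texts are drawn from Table 1.

```python
# Illustrative representation of one question's programmed stimuli; field names
# and the audio file name are hypothetical (response texts are from Table 1).
question_slide_set = {
    "question": "How do you handle pressure and stress?",
    "question_audio": "pressure_and_stress.wav",  # hypothetical file name; used in SA/SAT only
    "accurate_options": [
        "I ask a coworker for help.",
        "I talk with my supervisor.",
        "I do one thing at a time.",
    ],
    "inaccurate_option": "I'll help you in a minute.",  # no more than one per question
    "repetition_redirect": "good but click here to try again",          # orange slide
    "inaccurate_redirect": "Not appropriate, click here to try again",  # red slide
    "option_delay_s": 3,          # options appear about 3 s after the question box
    "font": "Arial, 28-30 pt",
}
```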
Fig. 1 The number of TB responses provided across three interview questions by Dave (left y-axis) and the percentage of lag criteria met during training (right y-axis)
During pretests and posttests, the researcher and participant sat on opposite sides of a table, facing one another. The experimenter recorded participant responses to eight interview questions on the same laptop computer. These sessions also lasted 3 to 5 min. The director of the vocational development center identified each of the eight questions as being particularly crucial to the interview process. The eight questions were as follows: (1) What can you tell me about yourself? (2) Why do you want to work here? (3) What are your strengths? (4) What are your weaknesses? (5) How do you handle pressure and stress? (6) What motivates you? (7) Tell me how you can work in a team. (8) Do you have any questions? Once performance on pretest probes was deemed visually stable, the director of the center classified participants’ pretest responses as accurate or inaccurate. In order to expand participants’ existing repertoires, accurate and inaccurate responses recorded during pretest were included in the instructional protocol; however, no more than one of each was included for any particular question. Three of the initial eight questions were selected for inclusion in the instructional protocol based on pretest probes; questions that evoked inaccurate answers or no answers were selected. For Dave, the first question exposed to the instructional protocol was “Why do you want to work here?” For Ernesto and Jack, it was “What can you tell me about yourself?” For all three participants, the second question was “How do you handle pressure and stress?” and the third was “Do you have any questions?” See Table 1 for examples of accurate and inaccurate responses to the target questions.
Table 1.
Examples of accurate and inaccurate responses to interview questions
| Question | Accurate | Inaccurate |
|---|---|---|
| (1) Dave: Why do you want to work here? | I want the experience. I like this company. I think it would be a good fit. | I need the money. My parents will buy me a car. I heard you need help. |
| (1) Ernesto and Jack: What can you tell me about yourself? | I graduated high school. I went to junior college. I have work experience. | I used to play football. I am a quiet person. I like to play video games. |
| (2) How do you handle pressure and stress? | I ask a coworker for help. I talk with my supervisor. I do one thing at a time. | I don’t let it get to me. I’ll help you in a minute. Try to deal with it. |
| (3) Do you have any questions? | What will you expect of me? What are the hours? Can I follow-up in a week? | How old are you? Do you live nearby? Do you have children? |
Dependent Variables and Measurement
The current study employed the same dependent variables as O’Neill and Rehfeldt (2014). Dependent measures were the number of accurate and inaccurate TB intraverbal responses to each interview question (i.e., participants could provide multiple responses to each question) and the percentage of lag criteria met during SB instruction. Accurate responses were defined as vocal-verbal responses related to the participant’s employment history and the workplace environment. Inaccurate responses were defined as vocal-verbal responses unrelated to the participant’s employment history and the workplace environment. A nonresponse was defined as a vocal-verbal response such as “no” or other functionally equivalent responses (e.g., “not right now”). Nonvocal responses (e.g., shaking the head) in the absence of a vocal-verbal response were not anticipated and did not occur. Percentage of lag criteria met was defined as the number of SB responses that differed from the previous n responses, where n equals the number of immediately preceding responses from which the current response must differ in order to contact reinforcement. This number was converted to a percentage by dividing it by 20 (the total number of trials per session) and multiplying by 100 %. Interobserver agreement and treatment integrity were scored via video recordings for 51 % of instructional sessions and 47 % of pretests and posttests by two independent observers. Interobserver agreement was calculated for each participant’s sessions by dividing the smaller observer total by the larger and multiplying by 100 %. Resulting interobserver agreement was 99 % (range 99 to 100 %) for instructional sessions and 90 % (range 85 to 100 %) for pretests and posttests. Treatment integrity was calculated by dividing the number of sessions with 100 % implementation accuracy by the total number of sessions and multiplying by 100 %. Resulting treatment integrity was 88 % (range 63 to 100 %) for instructional sessions and 79 % (range 75 to 100 %) for pretests and posttests.
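The following is a minimal sketch of the measurement calculations described above, assuming hypothetical session data; the function and variable names are ours and are not part of the published protocol.

```python
# Illustrative sketch of the measurement calculations; names and data are hypothetical.

def percentage_lag_criteria_met(selections, n=1, trials_per_session=20):
    """Percentage of SB selections differing from each of the previous n selections."""
    met = sum(
        1 for i, response in enumerate(selections)
        if all(response != prior for prior in selections[max(0, i - n):i])
    )
    return met / trials_per_session * 100

def interobserver_agreement(observer_a_count, observer_b_count):
    """Smaller count divided by the larger count, multiplied by 100."""
    smaller, larger = sorted([observer_a_count, observer_b_count])
    return 100.0 if larger == 0 else smaller / larger * 100

def treatment_integrity(sessions_fully_implemented, total_sessions):
    """Sessions implemented with 100 % accuracy divided by total sessions, times 100."""
    return sessions_fully_implemented / total_sessions * 100

# Example: a 20-trial session in which only the fifth selection repeats its predecessor.
session = ["A", "B", "A", "C", "C"] + ["A", "B"] * 7 + ["C"]
print(percentage_lag_criteria_met(session))  # 95.0
print(interobserver_agreement(59, 60))       # ~98.3
print(treatment_integrity(7, 8))             # 87.5
```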
Experimental Design
A multiple-probe design across behaviors (three questions) was employed to evaluate the training package for each participant. Although the experimenter asked all eight questions during each TB test, only responses to the three target questions were graphed. To control for potential diffusion of the intervention across participants, the order of instruction for questions differed across participants. The general sequence of conditions was as follows: pretest, fixed ratio 1, SB instruction (S condition), TB posttest, SB instruction with audio (SA condition), TB posttest, SB instruction with audio and TB component (SAT condition), TB posttest, generalization, and follow-up. There were three sets of mastery criteria in this design. First, the mastery criterion for moving from training to posttest was met when 18 out of 20 responses differed from the response selected during the previous trial (i.e., 90 % of trials met the lag 1 criterion). Before moving to posttest for any given question, participants had to meet the 90 % lag 1 criterion for three consecutive sessions. The purpose of this mastery criterion was to ensure that, prior to posttest, neither identical consecutive accurate selections nor inaccurate selections would occur more than twice during a given 20-trial session. Second, the criterion for moving from posttest to the next instructional condition (i.e., from S to SA or from SA to SAT) was that not all accurate SB response options had yet emerged as TB vocal-verbal responses for a particular question. Third, the criteria for moving from posttest to instruction for the next question were (a) zero inaccurate TB vocal-verbal responses, (b) an increase in the number of accurate TB vocal-verbal responses, and (c) stable patterns of TB vocal-verbal responding to all questions remaining in pretest.
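A minimal sketch of the first mastery criterion (training to posttest) follows: at least 90 % of trials in each of three consecutive sessions must meet the lag 1 criterion. The data structures and names are ours and are used only for illustration.

```python
# Hypothetical check of the training-to-posttest mastery criterion.

def ready_for_posttest(session_percentages, consecutive=3, threshold=90.0):
    """True once the most recent `consecutive` sessions each reach the threshold."""
    if len(session_percentages) < consecutive:
        return False
    return all(p >= threshold for p in session_percentages[-consecutive:])

# Example: a recent session at 85 % prevents advancement to posttest.
print(ready_for_posttest([95.0, 90.0, 85.0]))         # False
print(ready_for_posttest([85.0, 90.0, 95.0, 100.0]))  # True
```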
Procedure
TB Pretests
An experimenter sat at the table, facing the participant, and asked each of the eight interview questions identified above, one at a time. A general acknowledgment (e.g., “okay” or “alright”) followed each response, and one vocal-verbal prompt of “anything else?” was provided immediately following the participant’s initial response to each question. No other feedback was provided.
Instructional Protocol
First, the experimenter asked the participants to sit at a table with the laptop computer in front of them. The experimenter then provided the participants with presession instruction to read the question presented at the top of the screen for each trial and then select an appropriate answer. The instructional protocol operated on a lag 1 reinforcement schedule for accurate answers, which required that, on a given trial, a response different from the immediately preceding response be selected in order to contact reinforcement. A lag 2 reinforcement schedule required the selection of a response different from the two immediately preceding responses in order to contact reinforcement. Each session involved 20 trials on the same question. The following sequence of conditions was identical for all three participants.
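A minimal sketch of the lag n contingency just described is shown below. This is not the software used in the study (which was implemented in PowerPoint), and the names are ours, for illustration only.

```python
# Hypothetical implementation of the lag n reinforcement contingency.

def meets_lag_criterion(current_selection, previous_selections, n=1):
    """A selection contacts reinforcement only if it differs from each of the
    previous n selections (lag 1 compares against the immediately preceding
    selection; lag 2 compares against the last two)."""
    return all(current_selection != prior for prior in previous_selections[-n:])

history = []
for selection in ["A", "B", "B", "C"]:
    outcome = "reinforced" if meets_lag_criterion(selection, history, n=1) else "redirected"
    print(selection, outcome)   # the second "B" is redirected because it repeats
    history.append(selection)
```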
Fixed Ratio 1 (FR-1 Condition)
Each participant was first exposed to five trials on an FR-1 reinforcement schedule at the beginning of their first and only their first instructional session. This measure was taken in order to assess variability of responding in the absence of a lag reinforcement schedule for the first five trials. Responses resulted in the program advancing to the next trial without feedback, regardless of the response option selected.
SB Instruction (S Condition)
The program provided (a) textual feedback (“good answer but click here to try again”) on an orange background contingent on accurate responses that did not meet the lag 1 reinforcement schedule and (b) textual feedback (“Not appropriate, click here to try again”) on a red background for inaccurate responses. No vocal-verbal audio feedback was presented at any time during the SB instruction condition.
SB Instruction with Audio (SA Condition)
This instructional protocol was identical to the S condition with the addition of auditory stimuli. In this condition, participants were provided (a) prerecorded audio clips of the relevant interview question when the question box at the top of the screen was clicked and (b) vocal-verbal feedback (e.g., “great answer”, “very interesting”) delivered via audio recording contingent on accurate responses that met the lag 1 reinforcement schedule. No vocal-verbal feedback was provided for incorrect responses.
SB Instruction with Audio and TB Component (SAT Condition)
This instructional protocol was identical to the SA condition with the addition of a TB component. Before each session began, participants were instructed to complete each of the following steps for every question presentation: (a) click the box containing the question at the top of the screen, (b) wait for the audio clip to play, (c) read the chosen answer aloud, and (d) click the chosen answer. Booster sessions were identical to the SAT condition and were only completed for Dave.
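A hypothetical sketch of the feedback contingencies across the S, SA, and SAT conditions, reconstructed from the descriptions above, is shown below; the function name and return strings are ours (the actual protocol was delivered via PowerPoint slides and prerecorded audio clips).

```python
# Reconstructed feedback logic for a single SB selection; strings are illustrative.

def programmed_consequence(accurate, meets_lag, condition):
    """Consequence programmed for a single SB selection under a given condition."""
    if not accurate:
        return "red slide: 'Not appropriate, click here to try again'"
    if not meets_lag:
        return "orange slide: 'good answer but click here to try again'"
    if condition in ("SA", "SAT"):
        return "audio praise (e.g., 'great answer') and advance to the next trial"
    return "advance to the next trial"  # S condition: no audio feedback at any time

# Example: an accurate but repeated selection is redirected regardless of condition.
print(programmed_consequence(accurate=True, meets_lag=False, condition="SAT"))
```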
TB Posttests and Generalization Probes
Posttest sessions were identical to pretest sessions. The criteria for moving from a posttest to the next instructional condition, or to instruction for the next question, were as described under Experimental Design. Stimulus generalization probes were identical to pretests except that they were conducted by the director of the vocational development center and took place in the director’s office.
Maintenance and Follow-Up Probes
Maintenance probes were conducted during instruction, and follow-up probes were conducted in the weeks following instruction. Both were identical to pretest sessions.
Results and Discussion
The current study evaluated the additive effects of the S, SA, and SAT components of the SB instructional program employed by O’Neill and Rehfeldt (2014). During TB pretests, all participants provided a low level of accurate responses to all three questions (range 0–1) and a higher level of inaccurate responses (range 0–4). Consistent with the O’Neill and Rehfeldt study, inaccurate responses decreased to zero or near zero for all participants during TB posttests. This finding provides evidence of response transfer from SB to TB responding. Dave (Fig. 1) required SAT for the first question targeted (Q3) and only S for Q2 and Q1. Jack (Fig. 2) also required SAT for the first question (Q1), but only S for Q3 and Q2. Ernesto (Fig. 3) required SAT for the first question (Q2), only S for the second question (Q1), and SAT for Q3. In other words, exposure to SAT for one question resulted in learning under S alone for two subsequent questions for two participants (Dave and Jack). However, during the S posttest for the first question targeted, only Jack responded with an increase in the level of accurate TB responses. The SA and SAT conditions generally resulted in successively higher levels of accurate TB responses. To reiterate, the criterion for moving from one instructional condition to the next (i.e., from S to SA or from SA to SAT) was that not all accurate SB response options had yet emerged as TB vocal-verbal responses. For example, although Dave provided three accurate responses to Q3 during SA, the SAT condition was implemented because there were additional SB response options that might emerge as TB responses. These findings demonstrate that although SB responses were readily acquired during the S condition, the SB instructional component alone may not be sufficient to promote the emergence of TB intraverbal responses. Support is provided for Polson and Parsons’ (2000) recommendation to incorporate a TB component when employing SB instruction.
Fig. 2 The number of TB responses provided across three interview questions by Jack (left y-axis) and the percentage of lag criteria met during training (right y-axis)
Fig. 3 The number of TB responses provided across three interview questions by Ernesto (left y-axis) and the percentage of lag criteria met during training (right y-axis)
An explanation for the increased effectiveness of the SA and SAT conditions may be found in an analysis of the multiple control of verbal behavior. Michael et al. (2011) define two types of multiple control: convergent multiple control is the control of a single response by more than one variable, whereas divergent multiple control is the strengthening of more than one response by a single variable. Similar to the findings of O’Neill and Rehfeldt (2014), divergent multiple control was apparent in that each target question came to evoke multiple accurate responses, which were promoted by the lag reinforcement schedules. During the S condition, convergent multiple control over SB responses was exerted by the accurate response options, resulting in a narrowly defined response class. The auditory stimuli generated by the instructional protocol (i.e., recorded questions and feedback) may have added further sources of control during the SA condition. Additional control over SB responses was exerted during the SAT condition by requiring participants to provide a TB response immediately before each SB response. In fact, audio-recorded questions during the SA and SAT conditions may have set the occasion for covertly produced stimuli (e.g., self-generated rules or problem solving) to exert control over SB responses. This notion is supported by Potter et al. (1997), who performed a protocol analysis (Ericsson and Simon 1993) of transcripts recorded during an SB task and found that participants mediated the task through consistent vocal-verbal responding (i.e., problem solving). The potential for TB responses to promote the acquisition of SB responses has also been addressed by Potter and Brown (1997). Of particular interest was the emergence of accurate TB responses to Q3 after the SA condition for Dave. In this case, auditory stimuli (i.e., recorded questions) may have set the occasion for covert TB responding. However, it is not clear whether this increase was due solely to the addition of the audio stimuli or simply to the accumulated effect of the SB component across the S and SA conditions. The picture is further clouded by the fact that Jack provided fewer accurate responses after the SA condition, with the reemergence of multiple inaccurate responses to Q1. This might be seen as evidence of the emergence of inaccurate self-generated rules or problem solving. However, the S condition was sufficient during instruction for subsequent questions for all participants. It may be the case that the SAT condition for the first question promoted covert TB responding during later S conditions, which in turn promoted overt TB responses during posttest.
Some interesting observations were gleaned from the manipulation of reinforcement schedules. During the FR-1 condition, for example, only Ernesto selected the same response on each of the five trials, indicating that the lag schedule of reinforcement might be necessary to evoke variable responding. During Q1 instruction for Jack, however, responses rotated between only two response options. For this reason, a lag 2 reinforcement schedule was implemented and resulted in multiple selections of all three accurate response options. Unfortunately, Jack took an unexpected 12-day break from the vocational development center and, upon return, was not able to meet criterion during the S condition with the lag 2 schedule. Both Ernesto and Dave were able to meet the lag 1 criterion on all questions before moving to posttest. These results suggest that lag reinforcement schedules can be useful in promoting variable responding during SB instruction. Furthermore, this study extends the literature by suggesting (1) that lag reinforcement schedules are an effective method for teaching variable responses to interview questions and (2) that SB instruction for teaching responses to interview questions should include a TB component.
There was evidence of response generalization, stimulus generalization, and maintenance in this study. The occurrence of novel accurate responses, but not novel inaccurate responses, during some sessions suggests response generalization as well as effective discrimination between accurate and inaccurate responses. Probes conducted by the director of the vocational development center for Ernesto and Dave provided evidence of stimulus generalization to a novel person and setting. Follow-up probes provided some evidence of long-term maintenance. An important part of the current analysis was the efficiency of the training. Participants in the current study required an average total instructional duration of 1 h 45 min, and accurate TB responses were provided after an average of seven sessions due to the additive nature of the study. For example, Ernesto did not provide any accurate responses until the SAT condition, despite six instructional sessions in the S condition and three in the SA condition. Overall, O’Neill and Rehfeldt (2014) and the present study reflect the efficiency of computer-based instructional protocols in teaching interview skills and suggest that, with such a protocol in practice, staff time and resources could be allocated to the more arduous task of fine-tuning a client’s interviewing repertoire. Future research might examine the effects of a BST protocol consisting of instructional video presentations, video modeling, TB rehearsal, and feedback. The protocol might include SB instruction if TB rehearsal is not sufficient, as well as instruction for follow-up statements or questions that expand upon initial responses. If SB instruction is shown to enhance BST, then such a protocol might allow vocational development centers to reallocate further staff resources.
One limitation of the current study is that the relatively short responses taught (approximately five words) would likely sound scripted during an interview. Future research might examine the effects of varied response-option lengths or varied numbers of response options on the production of novel and recombined responses.
Finally, we addressed social validity in the current study by consulting the director of the vocational development center during the design of the instructional material and during generalization probes. However, in vivo interview assessments would provide a superior measure of social validity.
Acknowledgments
The authors thank Shawna K. McPherson and Steven J. Anbrow for their assistance with data collection.
Conflict of Interest
The authors declare that they have no conflict of interest.
References
- Ericsson KA, Simon HA. Protocol analysis: verbal reports as data. Cambridge: MIT Press; 1993.
- Hall C, Sheldon-Wildgen J, Sherman JA. Teaching job interview skills to retarded participants. Journal of Applied Behavior Analysis. 1980;13:433–442. doi: 10.1901/jaba.1980.13-433.
- Kelly JA, Wildman BG, Berler ES. Small group behavioral training to improve the job interview skills repertoire of mildly retarded adolescents. Journal of Applied Behavior Analysis. 1980;13:461–471. doi: 10.1901/jaba.1980.13-461.
- Lovett S, Rehfeldt RA, Garcia Y, Dunning J. Comparison of a stimulus equivalence protocol and traditional lecture for teaching single-subject designs. Journal of Applied Behavior Analysis. 2011;44:819–833. doi: 10.1901/jaba.2011.44-819.
- Michael J. Two kinds of verbal behavior plus a possible third. The Analysis of Verbal Behavior. 1985;3:2–5. doi: 10.1007/BF03392802.
- Michael J, Palmer DC, Sundberg ML. The multiple control of verbal behavior. The Analysis of Verbal Behavior. 2011;27:3–22. doi: 10.1007/BF03393089.
- Mozingo D, Ackley GB, Bailey JS. Training quality job interviews with adults with developmental disabilities. Research in Developmental Disabilities. 1994;15:389–410. doi: 10.1016/0891-4222(94)90024-8.
- O’Neill J, Rehfeldt RA. Selection-based responding and the emergence of topography-based responses to interview questions. The Analysis of Verbal Behavior. 2014;30:178–183. doi: 10.1007/s40616-014-0013-z.
- Polson DAD, Parsons JA. Selection-based versus topography-based responding: an important distinction for stimulus equivalence? The Analysis of Verbal Behavior. 2000;17:105–128. doi: 10.1007/BF03392959.
- Potter B, Brown DL. A review of studies examining the nature of selection-based and topography-based verbal behavior. The Analysis of Verbal Behavior. 1997;14:85–104. doi: 10.1007/BF03392917.
- Potter B, Huber S, Michael J. The role of mediating verbal behavior in selection-based responding. The Analysis of Verbal Behavior. 1997;14:41–56. doi: 10.1007/BF03392915.
- Schloss PJ, Santoro C, Wood CE, Bedner MJ. A comparison of peer-directed and teacher-directed employment interview training for mentally retarded adults. Journal of Applied Behavior Analysis. 1988;21:97–102. doi: 10.1901/jaba.1988.21-97.
- Walker BD, Rehfeldt RA. An evaluation of the stimulus equivalence paradigm to teach single-subject design to distance education students via Blackboard. Journal of Applied Behavior Analysis. 2012;45:329–344. doi: 10.1901/jaba.2012.45-329.
- Walker BD, Rehfeldt RA, Ninness C. Using the stimulus equivalence paradigm to teach course material in an undergraduate rehabilitation course. Journal of Applied Behavior Analysis. 2010;43:615–633. doi: 10.1901/jaba.2010.43-615.
