Journal of the Experimental Analysis of Behavior. 2008 May;89(3):333–340. doi: 10.1901/jeab.2008-89-333

Instructional Effects on Performance in a Matching-to-Sample Study

Chad E. Drake, Kelly G. Wilson
PMCID: PMC2373764  PMID: 18540218

Abstract

Conducting studies with an undergraduate participant pool is fraught with difficulties. Among them are problems with adequately motivating subjects both to come to the study and, once there, to actively engage the experimental task. Thirty-one college students participated in a matching-to-sample (MTS) study involving substantial training, testing, retraining, and retesting of conditional discriminations and equivalence relations among four 4-member classes of nonsensical words. The study was conducted toward the end of the semester, when performance had often been observed to be poorer than at other points in the semester. Eleven of the participants, in addition to standard instructions about the task, received additional instructions specifying molar consequences for high rates of “correct” responses throughout the procedure. This subset of participants displayed markedly improved performance compared to those who did not receive the additional instructions. Results suggest that specification of molar contingencies improves participants' sensitivity to molecular contingencies within the study. Instructions that specify and increase the consequential functions of feedback provided during MTS trials may be one means of reducing unwanted variability in human MTS performance.

Keywords: rule-governed behavior, instructional control, stimulus equivalence, matching-to-sample, human operant research, computer task, adult humans


In order to conduct operant research, experimenters must organize contingencies among antecedent stimuli, the behavior of interest, and consequences. For the contingencies to generate orderly patterns of behavior, the experimenter also must establish the motivational functions of the consequence (Dougher & Hackbert, 2000; Michael, 1982). When using animals in research, this is typically and readily done by depriving the organism of food or water. Human participants pose a challenge for the experimenter in this regard, since ethical and resource limitations make motivation a more complicated issue. Consider, for example, research on stimulus equivalence using the matching-to-sample (MTS) procedure with undergraduates who receive academic credit for their participation.

A common MTS preparation consists of a series of conditional discriminations designed to train a partial repertoire of responses among a set of arbitrary stimuli (e.g., “DAK” to “VEB” and “DAK” to “JOR”). The discriminations are consequated with verbal stimuli such as “correct” or “wrong.” After achieving a criterion for performance during this training, participants encounter novel discriminations (e.g., “VEB” to “DAK”, “JOR” to “DAK”, “VEB” to “JOR”, and “JOR” to “VEB”) presented in extinction. At the University of Mississippi, where the authors conduct research with the MTS procedure, participants are compensated for their participation with credit for their time in the study. This credit is later included in the tabulation of students' final grades in a psychology course. The availability of this credit is effective at recruiting participants into the lab, but it is sometimes less effective in organizing their behavior while they are in the lab. We have observed numerous instances in which participants produced response patterns that appear to be at the level of chance. We have witnessed this not just during testing phases that contained trials presented in extinction, but also during training trials where feedback was provided for every response. In the latter case, the consequences provided during these conditional discriminations appeared to have little impact on subsequent behavior.
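The trial structure just described can be sketched in a few lines of Python. The sketch is illustrative only: the authors' software was written in Visual Basic.NET, and the `run_trial` helper and the random stand-in for the participant's choice are assumptions of this example, not the authors' code.

```python
import random

def run_trial(sample, comparisons, correct, give_feedback=True):
    """One matching-to-sample trial: a sample stimulus is shown with two
    comparison stimuli, and the participant selects one comparison.
    The selection is simulated with random.choice purely to show the
    trial's structure; a real trial would wait for a mouse click."""
    choice = random.choice(comparisons)
    is_correct = (choice == correct)
    if give_feedback:
        # training trials are consequated with verbal stimuli
        print("Correct. Good job!" if is_correct else "Wrong.")
    # test trials run in extinction: no feedback either way
    return is_correct

# A trained discrimination with feedback (e.g., "DAK" goes with "VEB") ...
run_trial("DAK", ["VEB", "KIF"], correct="VEB")
# ... and a derived (symmetry) test trial presented in extinction.
run_trial("VEB", ["DAK", "JOR"], correct="DAK", give_feedback=False)
```

Withholding the `give_feedback` consequence is what distinguishes a test trial from a training trial; the trial display itself is otherwise identical.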

Given our lack of any means to provide more direct consequences for responding to the MTS trials (such as food, shock, or money), we looked for previous research on the use of instructions to improve performance in MTS studies. Our search produced few relevant articles. A few studies (Dymond & Barnes, 1998; Green, Sigurdardottir, & Saunders, 1991; Sigurdardottir, Green, & Saunders, 1990) reported that uninstructed participants responded appropriately to conditional discriminations throughout an MTS task just as well as instructed participants. Ribes-Inesta and Martinez-Sanchez (1990) and Martinez and Tamayo (2005) demonstrated that providing inaccurate instructions can disrupt performance on MTS tasks. A number of studies have shown that instructions can make participants relatively insensitive to changes in the contingencies within a study (Galizio, 1979; Hayes, Brownstein, Haas, & Greenway, 1986; Shimoff, Catania, & Matthews, 1981; Shimoff, Matthews, & Catania, 1986), but none of these involved MTS preparations. Our search uncovered a single study conducted with children (Pilgrim, Jackson, & Galizio, 2000) indicating that instructions facilitate accurate and consistent responding. However, that study used food as a consequence for correct responding. The motivational currency available to us, credit for participation, was not manipulable because of ethical considerations: participants must be given full credit for the time they committed to the study regardless of their performance on the task or the actual amount of time spent in the experiment.

Critchfield, Schlund, and Ecott (2000) developed a point system that effectively reinforced responding across a variety of experimental procedures. Their research, like the current study, was conducted with college students participating for class credit. Critchfield et al. implemented a minor deception in their preexperimental instructions, one that required approval by an Institutional Review Board: participants were told that their credit was dependent upon their performance at the task rather than their time in the study. Although this statement was untrue, performance was calculated in such a way that the credit usually was roughly consistent with participants' time investment, which satisfied the IRB. Although shown to be an effective means of motivating participants, the procedure required a reasonably accurate estimate of the number of reinforceable trials within the experiment; this estimate permitted a calculation of performance that usually approximated the participants' actual time in the study. Given the potentially high degree of variability in responding to our procedures, we were unable to implement the point system developed by Critchfield et al.

Thus, our options were limited to free and readily available resources compatible with our methods. Given the absence of relevant data, we decided to investigate a means of verbally specifying and establishing the reinforcing functions of compliance to our standard MTS instructions.

We first made a conceptual analysis of the students' behavior. Our undergraduate participants volunteered for the study in order to acquire credit redeemable in a class. This credit was proportional to the amount of time advertised as potentially necessary to complete the study. Although the full 1.5 hr of credit was provided to all participants who completed the study regardless of the time required to finish, an inspection of the wording on our informed consent document revealed that credit was “for your time participating in the study”. It did not distinguish specifically between time allotted for the study and actual time in the study. Thus, we considered the possibility that participants were trying to maximize their time in the study in order to maximize their credit. Such behavior could account for our puzzlingly low rates of correct responding, since it could involve ignoring the contingencies within the study itself. We decided to conduct an experimental analysis to examine our conceptual analysis by using verbal antecedents to increase the consequential functions of feedback within the MTS procedure. We devised an additional set of instructions for participants specifying the molar consequences of making “correct” selections during the MTS procedure.

Method

Participants

Thirty-one participants volunteered for the study. All were between 18 and 24 years of age (M = 19.65). Twenty-two (71%) were female. There were 20 freshmen (64.5%), 3 sophomores (9.7%), 6 juniors (19.4%), and 2 seniors (6.5%). Twenty-two identified themselves as Caucasian (71%), 7 as African-American (22.6%), 1 as Asian (3.2%), and 1 as Hispanic (3.2%). Experimental sessions were conducted individually during the final month of a spring semester. Participants read and signed a consent form, engaged an MTS task, and then were debriefed and given credit for their participation.

Apparatus and Procedure

The MTS program was written in Visual Basic.NET and ran on a Dell computer. All participants engaged the same MTS procedure. All stimuli presented by the program were nonsense syllables and nonsensical “words” (see Table 1). Participants responded by clicking on the words with a mouse. Before the matching-to-sample trials began, all participants encountered the same set of instructions and practice trials provided by the computer. The experimenter provided two additional instructions to participants 12–22 as they sat down in front of the computer. Prior and subsequent participants did not receive these additional instructions, so the intervention resembled what occurs in a time-series ABA design. However, because the instructions could not be withdrawn effectively, the study involved a between-groups comparison rather than the within-subjects comparison normally associated with time-series designs. The additional instructions were:

  1. You will receive the full 1.5 hr of credit for participating even if you finish before 1.5 hr have elapsed.

  2. The way to finish early is to make as many correct selections as possible.

Table 1.

Stimuli

A1 DAK
A2 VEB
A3 KIF
A4 JOR
B1a SLECH
B1b DIMURB
B1c GABORDY
B2a TIBALOR
B2b KEPEL
B2c MALSET
B3a JUKAL
B3b ROLDEAT
B3c ANPLUF
B4a BOCKITY
B4b NORDLE
B4c ENKAL

As participants sat in front of the computer, the experimenter delivered the additional instructions if applicable. Otherwise, the experimenter asked the participant to read the instructions displayed by the computer and, when informed of the end of the experiment by the computer, to exit the room. The experimenter then left the room and waited until the participant finished the study or until 1.5 hr elapsed.

Participants began the study with the following information displayed on the screen:

HELLO. Thank you for taking part in this experiment. Your instructions are very simple. One box will be displayed at the top of the screen and two along the bottom. You must choose one of the two along the bottom. Click on the PRACTICE button below for a few examples.

Clicking the PRACTICE button resulted in three practice trials containing stimuli unrelated to the experimental tasks. Correct selections resulted in the words “Correct. Good job!” appearing on the screen, whereas incorrect selections resulted in the appearance of the word “Wrong.” Upon completion of these trials, the following message appeared:

WELL DONE! During some trials you will receive feedback. During others you will not. These tasks might be confusing at times. Just do the best you can. Try to make the correct choices throughout the experiment whether you are being told you are choosing correctly or not. Work through the trials as quickly as you can. Before you begin, type your first name in the box below. Then click on the BEGIN EXPERIMENT button to get started.

Clicking the BEGIN EXPERIMENT button resulted in a long series of training and testing phases. Feedback during training phases was identical to that used in practice trials, whereas testing phases contained no feedback. These phases (see Table 2) and the stimulus classes generated by them (see Figures 1 and 2) closely resembled those used in Experiment 2 of Saunders, Saunders, Kirby, and Spradlin (1988). Because specific details about the phases are unessential to the purpose of the current article, only a brief, general description is provided below.

Table 2.

Procedural Details

Initial Training
  Phase 1: A1-B1a, A1-B1b, A1-B1c (x3); 9 trials; criterion 9
  Phase 2: A3-B3a, A3-B3b, A3-B3c (x3); 9 trials; criterion 9
  Phase 3: all trials from Phases 1 and 2; 18 trials; criterion 18
  Phase 4: A2-B2a, A2-B2b, A2-B2c (x3); 9 trials; criterion 9
  Phase 5: A4-B4a, A4-B4b, A4-B4c (x3); 9 trials; criterion 9
  Phase 6: all trials from Phases 4 and 5; 18 trials; criterion 18
  Phase 7: all trials from Phases 1, 2, 4, and 5; 36 trials; criterion >34
Initial Testing
  Phase 8: B1a-B1b, B1a-B1c, B1b-B1a, B1b-B1c, B1c-B1a, B1c-B1b, B2a-B2b, B2a-B2c, B2b-B2a, B2b-B2c, B2c-B2a, B2c-B2b, B3a-B3b, B3a-B3c, B3b-B3a, B3b-B3c, B3c-B3a, B3c-B3b, B4a-B4b, B4a-B4c, B4b-B4a, B4b-B4c, B4c-B4a, B4c-B4b (x3); 72 trials; criterion >64
Subsequent Training
  Phase 9: A1-B2a, A3-B4a (x3); 6 trials; criterion 6
  Phase 10: both trials from Phase 9 (x18); 36 trials; criterion 36
Subsequent Testing
  Phase 11: B1a-B2a, B1b-B2b, B1c-B2c, B2a-B1a, B2b-B1b, B2c-B1c, B3a-B4a, B3b-B4b, B3c-B4c, B4a-B3a, B4b-B3b, B4c-B3c (x3); 36 trials; no criterion
* Trials are listed as SAMPLE STIMULUS (hyphen) CORRECT COMPARISON. Trials were presented in pseudorandom order.

** Each phase was repeated until the participant met criterion, except for Phases 7 and 8, where failure resulted in a return to Phase 1.

Fig 1.

Fig 1

Conditional discriminations and equivalence relations in Initial Training and Testing. Trained relations are connected by solid arrows; tested relations are connected by dashed arrows.

Fig 2.

Fig 2

Conditional discriminations and equivalence relations in Subsequent Training and Testing. Trained relations are connected by solid arrows; tested relations are connected by dashed arrows.

Initial Training

Initial Training occurred in Phases 1–7. Each phase included various systematic combinations of 12 different conditional discriminations. Each of the first six phases contained a subset of the discriminations and would recur until the participant made 100% correct selections. Phase 7 contained a mixture of the 12 discriminations from Phases 1–6 presented three times each. Failure to choose correctly on more than 34 of these 36 trials (i.e., greater than 90%) resulted in the participant beginning at Phase 1 again.

Initial Testing

Initial Testing occurred in Phase 8. It contained tests for equivalence among the comparison stimuli used throughout Initial Training. Each of 24 different trials was presented three times for a total of 72 trials. No feedback was provided. Failure to derive the appropriate relations on more than 64 trials (i.e., greater than 90%) resulted in the participant beginning at Phase 1 again.

Subsequent Training

Subsequent Training occurred in Phases 9 and 10. Both phases consisted of two new conditional discriminations among the stimuli used in Initial Training and Testing. In Phase 9, each trial was presented three times, totaling six trials. In Phase 10, each trial was presented 18 times, totaling 36 trials. Each phase repeated until participants demonstrated 100% correct responses.

Subsequent Testing

The 11th and final phase tested equivalence relations based on the conditional discriminations from Phases 9 and 10 in combination with those from Phases 1–7. Phase 11 contained 12 unique trial types, each presented three times for a total of 36 trials, and each a test for equivalence among the comparison stimuli used throughout Initial Training. There was no criterion for correct responding.
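Taken together, the phase criteria above define a simple control flow: Phases 1–6 and 9–10 repeat until passed, failure in Phase 7 or 8 restarts the whole sequence, and Phase 11 always completes. The Python sketch below is an illustration only; the original program was written in Visual Basic.NET, `run_phase` is a hypothetical callable that presents one phase's trials and returns the number of correct selections, and `max_attempts` stands in for the 1.5-hr session limit.

```python
def run_session(run_phase, max_attempts=100):
    """Progress a participant through Phases 1-11 per the stated criteria.
    Returns True if all 11 phases are completed, False if the attempt
    budget (a stand-in for the 1.5-hr limit) runs out first."""
    # minimum correct selections needed to advance; Phase 11 has no criterion
    passmark = {1: 9, 2: 9, 3: 18, 4: 9, 5: 9, 6: 18,
                7: 35, 8: 65, 9: 6, 10: 36, 11: 0}
    phase, attempts = 1, 0
    while phase <= 11:
        attempts += 1
        if attempts > max_attempts:
            return False
        if run_phase(phase) >= passmark[phase]:
            phase += 1                # criterion met: advance
        elif phase in (7, 8):
            phase = 1                 # failing Phase 7 or 8 restarts training
        # otherwise the same phase simply repeats
    return True
```

For example, a simulated participant who answers every trial correctly completes the session, while one who never meets criterion in an early phase does not.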

Completion of the MTS task resulted in the following message appearing on the computer screen:

DONE! Thank you for your participation in this experiment. Please contact the experimenter for a debriefing about the experiment before you go.

Upon exiting the room, participants met the experimenter, read and signed a debriefing form, and received credit for their participation. Participants who failed to emerge from the room after 1.5 hr were interrupted and informed that the experiment was over.

Results

Participants who received the final message from the computer before 1.5 hr were classified as having completed the study. In order to display each participant's progression through the experiment, figures were devised that charted a participant's total number of phases completed (via achievement of the criterion for each experimental phase) by the number of phases encountered (via completion and/or repetition of phases). Figures 3–5 display these data, each figure containing a collection of participants. Five of the first 11 participants (45%; participants 2, 7, 8, 10, and 11) completed all 11 phases of the MTS experiment (Figure 3). Nine of the next 11 participants (82%; participants 12, 13, 14, 16, 17, 19, 20, 21, and 22), who were the only ones who received the additional instructions at the start of the experiment, completed the experiment (Figure 4). Four of the final 9 participants (44%; participants 23, 25, 26, and 28) completed the experiment (Figure 5).
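As an arithmetic check, the completion rates reported above follow directly from the group counts:

```python
# completers / group size, from the counts reported in the text
groups = {
    "participants 1-11 (standard instructions)": (5, 11),
    "participants 12-22 (additional instructions)": (9, 11),
    "participants 23-31 (standard instructions)": (4, 9),
}
for label, (completed, n) in groups.items():
    print(f"{label}: {completed}/{n} = {completed / n:.0%}")
# prints 45%, 82%, and 44% for the three groups, respectively
```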

Fig 3.

Fig 3

Progression through MTS phases by participants 1–11. Attempted Phases represents the number of phases engaged by the participant whereas Completed MTS Phases shows how far the participant progressed through the programmed procedure.

Fig 4.

Fig 4

Progression through MTS phases by participants 12–22. See Figure 3 caption for details.

Fig 5.

Fig 5

Progression through MTS phases by participants 23–31. See Figure 3 caption for details.

Figure 6 displays the percentage of participants in each group who met performance criterion for each phase of the MTS task. The 2 participants who failed to finish the study from the group receiving additional instructions never met criterion in Phase 8 (Initial Testing). Participants who did not receive the additional instructions exhibited failures to meet criterion for phases throughout the task (except for Phase 11, Subsequent Testing, which had no criterion for performance).

Fig 6.

Fig 6

Percentage of participants 1–11, 12–22, and 23–31 who completed each phase of the MTS procedure.

Discussion

The current results suggest that specifying the molar consequence for correct selections increased participants' sensitivity to the feedback provided for conditional discriminations within the study. Participants who received instructions about these molar consequences showed a markedly higher completion rate on an extended MTS task. Furthermore, the 2 participants in this group who failed to complete the study were unable to meet criterion in the longest and arguably most difficult phase of the experiment (Phase 8, Initial Testing). In contrast, participants who did not receive the additional instructions, whether they took part before or after the recipient group, failed to meet criterion across a variety of phases and, overall, displayed lower rates of completion.

Some researchers have expressed concern about the provision of instructions in operant research (for discussion, see Baron & Perone, 1998) because instructions may add an unwanted source of contextual control over the behavior of interest. However, this potential limitation of the current work could instead be regarded as a worthy avenue of research. To date, only a small body of empirical literature is available on classes of instructional control such as pliance, tracking, and augmenting (Hayes, Devany, Kohlenberg, Brownstein, & Shelby, 1987; Hayes, Kohlenberg, & Hayes, 1991; for review, see Hayes, Zettle, & Rosenfarb, 1989). The current analysis of these functional categories of rule following therefore warrants a commensurate degree of skepticism. Future researchers may wish to expand the literature on instructions through use of the current methods. The paradigm used in the present study may offer a means of examining the influence of verbal antecedents on operant behavior. For example, systematic manipulations of the instructions, and their resulting effects on behavior, may suggest additional refinements in defining and distinguishing tracks and augmentals. Such efforts may also further illuminate the limitations and potential of providing instructions to participants in operant research.

Unless an experimenter has the financial resources to pay participants for their performance, the consequences for engaging in an MTS procedure itself are very indirect. Most humans probably encounter experiences that reinforce being “correct” and punish being “wrong,” but contexts likely vary greatly in the extent to which this contingency is effective. Some studies do indicate that participants prefer consistency and coherence in the way they respond to MTS paradigms. For example, Pilgrim and Galizio (1995) found that participants who successfully formed equivalence classes among an array of stimuli were later resistant to direct efforts to disrupt those classes. Wilson and Hayes (1996) likewise found that former equivalence patterns resurged after subsequent patterns were extinguished. The current data, in contrast, show that demonstrations of equivalence were less likely unless appetitive consequences for producing them were verbally specified. This insensitivity to the available contingencies may be common in any MTS experiment that does not compensate participants with respect to their trial-by-trial performance.

Human operant researchers have commiserated about the difficulties in producing orderly data with the MTS procedure and with human subjects more generally, though this has mostly occurred in private communications and during conferences rather than in print. Some researchers have developed alternative methods of data collection in part because of these difficulties (Barnes-Holmes, Barnes-Holmes, Smeets, Cullinan, & Leader, 2004). A number of difficulties remain with the MTS procedure, but this study may represent one step toward reducing a significant source of unwanted variability with it. When future research merits the use of the procedure, experimenters may benefit from considering the use of instructions that specify the consequences of engaging the procedure itself.

References

Barnes-Holmes, D., Barnes-Holmes, Y., Smeets, P. M., Cullinan, V., & Leader, G. (2004). Relational frame theory and stimulus equivalence: Conceptual and procedural issues. International Journal of Psychology & Psychological Therapy, 4, 181–214.

Baron, A., & Perone, M. (1998). Experimental design and analysis in the laboratory study of human operant behavior. In K. A. Lattal & M. Perone (Eds.), Handbook of research methods in human operant behavior (pp. 45–91). New York: Plenum.

Critchfield, T. S., Schlund, M., & Ecott, C. (2000). A procedure for using bonus course credit to establish points as reinforcers for human subjects. Experimental Analysis of Human Behavior Bulletin, 18, 15–18.

Dougher, M. J., & Hackbert, L. (2000). Establishing operations, cognition, and emotion. The Behavior Analyst, 23, 11–24. doi:10.1007/BF03391996

Dymond, S., & Barnes, D. (1998). The effects of prior equivalence testing and verbal instructions on derived self-discrimination transfer: A follow-up study. The Psychological Record, 48, 147–170.

Galizio, M. (1979). Contingency-shaped and rule-governed behavior: Instructional control of human loss avoidance. Journal of the Experimental Analysis of Behavior, 31, 53–70. doi:10.1901/jeab.1979.31-53

Green, G., Sigurdardottir, Z. G., & Saunders, R. R. (1991). The role of instructions in the transfer of ordinal functions through equivalence classes. Journal of the Experimental Analysis of Behavior, 55, 287–304. doi:10.1901/jeab.1991.55-287

Hayes, S. C., Brownstein, A. J., Haas, J. R., & Greenway, D. E. (1986). Instructions, multiple schedules, and extinction: Distinguishing rule-governed from schedule-controlled behavior. Journal of the Experimental Analysis of Behavior, 46, 137–147. doi:10.1901/jeab.1986.46-137

Hayes, S. C., Devany, J. M., Kohlenberg, B. S., Brownstein, A. J., & Shelby, J. (1987). Stimulus equivalence and the symbolic control of behavior. Mexican Journal of Behavior Analysis, 13, 361–374.

Hayes, S. C., Kohlenberg, B. S., & Hayes, L. J. (1991). The transfer of specific and general consequential functions through simple and conditional equivalence classes. Journal of the Experimental Analysis of Behavior, 56, 119–137. doi:10.1901/jeab.1991.56-119

Hayes, S. C., Zettle, R. D., & Rosenfarb, I. (1989). Rule following. In S. C. Hayes (Ed.), Rule-governed behavior: Cognition, contingencies, and instructional control (pp. 191–220). New York: Plenum.

Martinez, H., & Tamayo, R. (2005). Interactions of contingencies, instructional accuracy, and instructional history in conditional discrimination. The Psychological Record, 55, 633–646.

Michael, J. (1982). Distinguishing between discriminative and motivational functions of stimuli. Journal of the Experimental Analysis of Behavior, 37, 149–155. doi:10.1901/jeab.1982.37-149

Pilgrim, C., & Galizio, M. (1995). Reversal of baseline relations and stimulus equivalence: I. Adults. Journal of the Experimental Analysis of Behavior, 63, 225–238. doi:10.1901/jeab.1995.63-225

Pilgrim, C., Jackson, J., & Galizio, M. (2000). Acquisition of arbitrary conditional discriminations by young normally developing children. Journal of the Experimental Analysis of Behavior, 73, 177–193. doi:10.1901/jeab.2000.73-177

Ribes-Inesta, E., & Martinez-Sanchez, H. (1990). Interaction of contingencies and rule instructions in the performance of human subjects in conditional discrimination. The Psychological Record, 40, 565–586.

Saunders, R. R., Saunders, K. J., Kirby, K. C., & Spradlin, J. E. (1988). The merger and development of equivalence classes by unreinforced conditional selection of comparison stimuli. Journal of the Experimental Analysis of Behavior, 50, 145–162. doi:10.1901/jeab.1988.50-145

Shimoff, E., Catania, A. C., & Matthews, B. A. (1981). Uninstructed human responding: Sensitivity of low-rate performance to schedule contingencies. Journal of the Experimental Analysis of Behavior, 36, 207–220. doi:10.1901/jeab.1981.36-207

Shimoff, E., Matthews, B. A., & Catania, A. C. (1986). Human operant performance: Sensitivity and pseudosensitivity to contingencies. Journal of the Experimental Analysis of Behavior, 46, 149–157. doi:10.1901/jeab.1986.46-157

Sigurdardottir, Z. G., Green, G., & Saunders, R. R. (1990). Equivalence classes generated by sequence training. Journal of the Experimental Analysis of Behavior, 53, 47–63. doi:10.1901/jeab.1990.53-47

Wilson, K. G., & Hayes, S. C. (1996). Resurgence of derived stimulus relations. Journal of the Experimental Analysis of Behavior, 66, 267–281. doi:10.1901/jeab.1996.66-267
