Journal of Applied Behavior Analysis. 2011 Winter;44(4):973–991. doi: 10.1901/jaba.2011.44-973

CAN AN UNDERSTANDING OF BASIC RESEARCH FACILITATE THE EFFECTIVENESS OF PRACTITIONERS? REFLECTIONS AND PERSONAL PERSPECTIVES

Murray Sidman
Editor: Michael Kelley
PMCID: PMC3251303  PMID: 22219551

Abstract

I have written before about the importance of applied behavior analysis to basic researchers. That relationship is, however, reciprocal; it is also critical for practitioners to understand and even to participate in basic research. Although applied problems are rarely the same as those investigated in the laboratory, practitioners who understand their basic research background are often able to place their particular problem in a more general context and thereby deal with it successfully. Also, the procedures of applied behavior analysis are often the same as those that characterize basic research; the scientist-practitioner will appreciate the relation between what he or she is doing and what basic experimenters do and, as a consequence, will be able to apply therapeutic techniques more creatively and effectively.

Keywords: basic behavior analysis, applied behavior analysis, scientist-practitioner

Why Pay Attention to Basic Research?

I have pointed out before, in several contexts, why it is advantageous and even necessary for basic researchers to recognize and value the accomplishments of applied behavior analysts, and to understand the problems applied workers face (e.g., Sidman, 2005, 2008). I believe, however, that the relation here is reciprocal; it is also advantageous to practitioners for them to understand their basic research background and even to participate in basic research themselves.

I have found experience with applied research, too, to facilitate effective behavior-analytic practice, but I shall stress basic research here because I believe that many practitioners may be unaware of its relevance to what they do. Although applied problems are rarely the same as basic problems investigated in the laboratory or even in the field, practitioners who understand their basic research background may often be able to place their particular problem in a more general context and thereby deal with it more successfully. I have no quantitative data to back up this point of view, but I believe my own experience is relevant. Before entering the worlds of applied research and practice, I spent approximately 10 years intensively involved in basic behavioral research in the laboratory, mostly with nonhumans as subjects. Then, almost as soon as I started to work with people who had suffered strokes or who displayed severe learning and other behavioral deficiencies, I realized that the preceding 10 years had constituted a period of apprenticeship for me. It turned out to have been an effective apprenticeship. By applying principles and investigative techniques I had learned in the laboratory, I found that I could communicate nonverbally with people who could not speak, that I could teach the supposedly unteachable, and that I could often successfully revise ineffective therapeutic procedures.

It is probably accurate to characterize the earliest of my new activities as translational research (e.g., Mace & Critchfield, 2010), although that term had not yet come into general use. I found myself applying basic research principles and techniques to nonlaboratory problems with which I had had little or no experience. The success of those principles and techniques in guiding my applied research and applications has maintained their influence on my own behavior ever since.

I quickly came to recognize that the procedures of applied behavior analysis, both research and practice, are often the same as those that characterize basic research. As has been true of my own experience with practical problems, the scientist-practitioner who appreciates the relation between what he or she is doing and what basic experimenters do will, as a consequence, be able to apply therapeutic techniques more creatively and effectively. I do not take the extreme position that all practice not grounded in basic science is less effective. It is my belief, however, that all practitioners will experience occasions on which knowledge of basic research findings and principles will provide solutions to seemingly intractable problems. I shall elaborate on this point later, but the following example may be instructive here.

Most behavioral practitioners are aware of the importance of consequences in determining the likelihood of behavior. That was our starting point with a group of boys who resided in a state institution and displayed severe behavioral deficits. When we first became acquainted with them, they were lined up naked around a large bare room, unattended to except when they had to be cleaned up after urinating or defecating on the floor. (At that time, such a scene was common in state residential facilities for people with severe behavioral deficiencies. It was easier for the largely untrained staff to respond to emergencies than to take preventive measures.) When we started by providing candy as reinforcers, we quickly found that the boys were capable of much more behavior than they ever had displayed before. Within a few months we had them dressed, playing games, and taking part in various learning programs that we instituted. We accomplished much with just candy and food as reinforcers.

Clearly, however, although we did demonstrate the effectiveness of reinforcement in generating and maintaining new behavior, just as we had learned in the laboratory, a life based on food and candy reinforcement was neither a desirable nor a generally applicable solution to the behavioral deficiencies that characterized these boys and others in similar situations.

It turned out that a more thorough understanding of reinforcement permitted us to solve this problem and to prepare many of the boys for life outside. Skinner's original research (Skinner, 1938) had not only demonstrated the importance of identifying and applying reinforcers but also showed how to create new reinforcers (conditioned reinforcers) and how to make reinforcers independent of particular deprivations and environments (generalized reinforcers). The creation of new or generalized reinforcers remains a practically untouched area in modern applied research or practice (but see Ayllon & Azrin, 1968; Girardeau & Spradlin, 1961; Hanley, Iwata, Roscoe, Thompson, & Lindberg, 2003). Nor has it received sufficient attention from basic researchers. The possibilities are unknown even to many academically trained behavior analysts, let alone those who perhaps understand only enough of these basic principles to pass certification exams. Even at that early stage in our work, however, we had learned enough from both basic and applied research to institute a system in which the boys had to earn tokens with which to buy their candy and food. We then were able to generalize the value of the tokens by teaching and permitting the boys to purchase many items, activities, and privileges that were otherwise unavailable to them.

This was an instance of basic research laying the groundwork for the enrichment of lives that otherwise would have remained impoverished. I have had an increasingly strong feeling, however, that the comprehension of basic research by those doing practical work has been diminishing, that an appreciation of the relevance of basic research to current practices has become less and less a part of the training of practitioners. I wonder, for example, how behavior-analytic practitioners these days would react if I asked, “Does the name Pavlov ring a bell?”

Well, most practitioners probably do remember Pavlov. After all, he did initiate the study of behavior as a natural science by demonstrating that the laws of nature apply also to what we do, that is, to our behavior (Pavlov, 1927). Still, what about the potential for behavior-analytic practitioners who have learned only enough to pass exams that qualify them for certification? Knowledge and understanding of basic behavioral science may, to a great extent, be missing from the original training of many applied workers, even of excellent applied workers. You might well ask, “In what ways does it matter?”

Conditioning

Let us start with Pavlov, who formulated what has been characterized as a stimulus–response psychology, which since has been criticized widely and generally dismissed as a mechanical and superficial account of human learning. The basic phenomenon that he discovered and investigated in great detail has come to be known as Pavlovian conditioning. For example, show a dog a piece of steak and the dog naturally salivates; ring a bell at the same time you show the steak and eventually the bell alone becomes conditioned to elicit salivation. About 40 years later, Skinner came along and showed that new behavior could be created by providing appropriate consequences, which were stimuli that did not precede but rather followed responses. Unfortunately, he named his basic procedure operant conditioning. Because of this terminological similarity, the public did not look into what Skinner actually did but instead equated his methodology with Pavlov's.

Why is it important for a behavior-analytic practitioner to know about the differences between Pavlovian and operant conditioning? Isn't that just a basic research problem? Not quite. I am sure that most practitioners are aware that their work is not accepted universally, but many do not realize that the hostility they encounter is often a result of a widespread misinterpretation of what they are doing. Pavlovian conditioning is widely regarded as a mechanistic, degrading interpretation of behavioral development. For the public, all conditioning is the same. It is therefore important that practitioners know enough about the basic research to be able to counteract this criticism and to educate the public about what they really are doing.

In stressing the differences between Pavlovian and operant conditioning, I do not intend to belittle Pavlov's real contributions to our understanding of behavior. Pavlovian conditioning does provide an inadequate account of the creation and maintenance of that behavior by means of which we interact with the world. Such behavior turns out to be the province of operant conditioning. Pavlov, however, provided a basis for understanding the creation and maintenance of what we call positive or negative, passionate or cold, emotions and feelings. Emotions and feelings are, of course, important accompaniments of operant behavior, but even basic research has done little to clarify the relations between the two. Further discussion of Pavlovian conditioning in the present context would, therefore, provide more distraction than clarification. For that reason, I recommend here that applied workers become acquainted with Pavlovian conditioning if only so that they can defend themselves and their profession from criticisms that are based on Pavlov's work rather than on Skinner's.

Punishment

The role of punishment is another source of public confusion. Many mistakenly believe that punishment plays a basic role in behavior analysis. “The carrot or the stick” is a common metaphor for behavior-analytic practice. In fact, behavior-analytic practice discourages the use of the stick (e.g., Latham, 1994; Sidman, 2000; Skinner, 1953, 1971). Many practitioners, however, may be completely unaware of the basic research on the devastating consequences of aversive behavioral control, and thus may be unable to explain to others why they go to great lengths to avoid the use of punishment (see, e.g., Sidman, 1964). Many years ago, I and several colleagues were developing a token economy for a group of institutionalized boys with severe intellectual deficiencies (Sidman, 1998). At the time, there were no training programs for practitioners, and we had to train our own workers from the beginning. One day, the director of the project asked me to help her with a problem: Would I give a lecture to our young workers and explain to them why they were not to use punishment in working with the boys? That lecture grew into my book, Coercion and Its Fallout (Sidman, 2000). Even with that book, and with the tremendously effective noncoercive teaching tools developed by Latham, many of today's applied workers still may be unable to cite research findings to justify their noncoercive practices. This lack of acquaintance with relevant basic research may hinder them from justifying their methods to a skeptical public, and may stand in the way of their own acceptance.

Reinforcers for Participating in Basic Research

Later, I will note some of the reasons why a firsthand appreciation of basic research can make one a more effective practitioner. Besides its potential relevance to practice, however, the conduct of basic research also generates immense reinforcers. Clinical practice, too, can produce reinforcers more general than the specific behavioral changes that clients show, but clinical workers might often be unaware that basic research, too, can generate reinforcers that go well beyond the cold, dispassionate numbers that describe experimental results. Let me give some examples from my own experience.

Younger behavior analysts often ask me why I entered the field in the first place. I have to tell them that I never did enter the field. There was no field of behavior analysis to enter; it just did not exist at the time. Not only was applied behavior analysis nonexistent but so was the basic science. The brilliant seminal work of Skinner was, of course, known, but only a few had begun to follow his lead. So much remained to be found out that almost everything we did in the laboratory produced some new knowledge about the origins and maintenance of behavior. Since then, the science has advanced to such an extent that it would no longer be correct if I were to suggest that by doing some basic research, you are likely to be in on the start of a significant new scientific development. But because of that context, I immediately was able to experience a new set of emotional reinforcers—joy, exhilaration, thrills. These are the types of reinforcers that the discovery of new knowledge generates (Sidman, 2007).

That is just how I got started. During the following 60 or more years, I have learned that the reinforcers attendant on basic research do not require that one be in on the beginnings of a new science. Nor do they require that the basic research be carried out in a laboratory. Our research methodologies have developed to the point where fundamental questions can be answered by translational research and other behavioral investigations carried out in the world outside the laboratory. Some potentially basic areas actually demand nonlaboratory studies, as for example, conflict resolution, coercion-generated countercontrol, and the development and transmission of cultures. Every experiment, whether carried out within or outside the laboratory, has the potential to generate the thrill of discovery, the personal satisfaction of knowing that one has produced knowledge that nobody has ever seen before, knowledge that may lead others to modify the way they approach problems that they are trying to solve. For me, that is the bottom line of successful research. When experimental data bring about changes in the behavior of others—researchers, practitioners, and sometimes, even the nonprofessional public—then the research has been successful. I wish that all new students of behavior analysis experience that kind of personal fulfillment while they are in the process of learning how to practice their profession. Whenever and wherever you do it, conducting your own research will give you a whole new slant on behavior analysis. The experience will help place what you are doing in a context of intellectual achievement much wider than your own particular accomplishments.

Now, however, such reinforcers may be unknown to many practitioners. Young people now often may come into the field because they have been told it is an easily entered, respectable, and an income-producing profession. In addition, the practice of that profession also makes it possible to help to rectify some serious and widespread personal and social deficiencies that keep people from living their lives to whatever levels they might be capable of. Even many of those who enter the profession through academically approved training programs, however, might never have the opportunity to carry out research, to add even a small bit to our store of scientifically valid knowledge. They will have missed what I look at as the thrill of a lifetime.

In addition to becoming involved in basic research and thereby creating opportunities for some unique personal satisfactions, an acquaintance with the basic research literature also can provide justification for particular lines of applied work, as well as for the methods used in clinical analysis (see Mace, 1994). For example, my own dissertation research and much of my experimentation during my first 10 post-PhD years were concerned with the aversive control of behavior (e.g., Sidman, 1953, 1966). I started in that field because I already was convinced that many of people's personal problems (learning difficulties, conflicts with others, neuroses, depression, hostility, marriage failures, school dropouts, and many others) come about as a consequence of the almost universally applied coercive behavioral control that I saw in the world around me. Although my research did not address any of those specific problems directly, it did succeed in demonstrating the immense destructive power of coercive control and its often debilitating consequences (Sidman, 2000). With that research as a background, practitioners then were able to demonstrate that the elimination of specific kinds of coercion makes for happier, more constructive, safer, and more productive social environments.

The satisfaction an experimenter gains from such fundamental research is more general than that from any particular application. In addition, an acquaintance with the basic research provides the clinical practitioner with a wider understanding of his or her own place in the general scheme of things, and establishes a context within which many specific applied problems can be seen to have characteristics in common. As an elementary example, basic research on the reinforcement contingency led directly to the generalization that most, if not all, behavior is generated and maintained by its consequences. This principle leads directly to the practice, applicable to many examples of clinically undesirable problem behavior: First, find the behavior's consequences. The widely effective applied principle called functional analysis (e.g., Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994) grew directly from knowledge gained from basic research on reinforcement.

Can There Be a Natural Science of Behavior?

For many years, we all did our basic behavior-analytic research with nonhuman subjects. We were convinced, however, that what we discovered with nonhumans in the laboratory was generalizable also to humans outside the laboratory. Eventually, a few brave souls did some studies with human subjects. Those first studies turned out so successfully that in spite of our professed faith in the generality of our research, we still were astonished. Many basic researchers then moved into laboratory studies with human subjects. Both inside and outside the laboratory, basic and applied researchers discovered that the same variables that produced new behavior inside the laboratory also were relevant outside. Practitioners then made the same discovery; they found that, by changing consequences and other factors in their patients' environment, they could get their patients to behave differently, even to replace problem behavior with adaptive behavior. The behavior laboratory turned out to be not an isolated domain but rather a part of the real world.

The rejection of self-determination

This discovery, that what people did was determined by what happened in their physical environment, was a historical development. Those who have not themselves experienced the lawfulness of behavior within a scientifically valid framework may not appreciate or even believe it possible. Many people, even thoughtful, intelligent, successful, scientifically enlightened people, dismiss the notion that there can ever be a natural science of behavior. A common belief is that our behavior is self-determined, that we can negate any supposed general law of behavior by deciding to act differently than the law predicts. In reply to this skepticism, behavior analysts advance the notion that current and historical events within one's environment determine whether or not one will decide to act in a seemingly unpredictable fashion. Such decisions themselves are determined by the same kinds of variables that determine other behavior.

The rejection of self-determination does require a major reorientation of one's self-picture. A historical analogy was the belief that the earth is the center of the universe. Many unbelievers in the centrality of the earth were tortured and put to death because of their skepticism. Today, behavior analysts reject the centrality of human will as the ultimate determiner of behavior. Although they are not in danger of being put to death because of their disbelief in self-determination, they sometimes are ridiculed, scorned, and worst of all, ignored. To be ignored is worst of all because it means that many serious human problems might go unsolved. The notion of self-determination precludes any attempt to change the behavior that defines many particular problems. Conflict resolution, for example, requires changes from conflict to cooperation. If one believes that the sources of conflict come from within, then one also must confess to an inability to accomplish any reduction of those sources. If, however, one sees the sources of conflict in the environment, then one can often arrange changes in that environment that will bring about the desired behavioral changes. Unfortunately, behavior analysis so far has come to receive only a grudging acceptance, and then mainly when it is applied to those with presumably impaired intelligence, to people who are “incapable” of self-determination.

I believe that those involved in basic laboratory research are more aware of their place in this major intellectual revolution than are those who have never seen the basic laws of behavior in all their precise and quantitative glory. Such awareness is, of course, not necessary for successful clinical practice, but it can add considerably to one's pride in and satisfaction with the course of one's own life. Effective applied behavior analysis does generate its own kinds of personal satisfaction, but I believe that the appreciation of one's positive contribution to a major change in our conception of our place in the universe of thought will bring about additional feelings of fulfillment. That certainly has been my own experience. I recommend that everyone try it.

Increasing the Effectiveness of Practice

To return to the question of how an understanding of basic research can increase the effectiveness of applied behavior analysis, here are some relevant examples that I have experienced myself and in whose development I even played some role.

Research with individual subjects

You will find that in the laboratory, experimentation takes place with individuals as subjects. Experimental behavior analysis does not require the statistical comparison of experimental and control groups. Instead of securing a small amount of data from each of a large number of subjects and then averaging across subjects, we obtain a large amount of data from individuals. Single-subject methodology is fundamental in basic behavior-analytic research; that aspect of the methodology makes the science immediately applicable to behavior therapy, which always involves attempts to change the behavior of individuals.

Many applied behavior analysts never have been made aware of this methodological difference between what they are doing and what clinical psychologists usually do. Ignorance of the rationale for single-subject methodology leads to ignorance of the special importance of some specific techniques on which the validity of a therapeutic procedure may depend. For example, steady-state baselines are necessary for evaluating the success or failure of an experimental or a therapeutic procedure. If you want to know whether what you have done has made any difference in a particular individual's behavior, then you must in some way measure that individual's behavior both before and after you have applied the treatment. That is what we mean by establishing a behavioral baseline. It is critical to measure the individual's behavior not just after you have applied the treatment, but before, also. Otherwise, how can you not only prove to others but be sure yourself that what you have done has made any difference? Did a change come about because of what you, the therapist, did, or would the change have taken place even if you had done nothing?

How do you answer this question? Instead of comparing a group that has received the treatment with a control group that has remained untreated, you allow the treated persons to provide their own control data. You measure the behavior of interest before you apply the treatment and then see whether the behavior changes during or after the treatment. Thus, you compare the same behavior from the same individual both before and after you apply the treatment. The pretreatment measurements constitute the baseline. You evaluate the treatment by observing whether it produced changes in the individual's baseline.

It always is reassuring to the experimenter or practitioner to return to pretreatment conditions and recover the baseline behavior, which is the classic ABA design. Such recovery would strengthen the conclusion that the observed behavioral change had been brought about by the treatment and not something else. Behavioral changes, however, do not always prove to be reversible; once behavior changes, it may be impossible to return it to its pretreatment state. Also, it may be undesirable, even unethical, to return a client to behavior that would be countertherapeutic. Multiple baselines of various sorts can often resolve the problem of irreversibility of a behavioral change (Baer, Wolf, & Risley, 1968). These may involve, for example, maintaining simultaneous baselines of several different behavioral contingencies for an individual client and then testing a therapeutic program by changing one contingency at a time. Or one kind of behavioral baseline may be maintained for several clients simultaneously, with a particular therapeutic program being instituted at different times for each client. Such multiple baseline procedures allow the therapist to determine whether any observed behavioral changes can be attributed to the therapeutic procedures and not to some uncontrolled factor.
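As a purely illustrative aside, the logic of a multiple baseline design can be sketched in a few lines of Python. Everything in the sketch is invented for the example (the behaviors, the session counts, and the crude change criterion); it is meant only to show how staggered treatment onsets let each baseline serve as a control for the others.

```python
# Hypothetical multiple-baseline-across-behaviors illustration.
# Behaviors, counts, and the change criterion are invented for this sketch.
from statistics import mean

# Weekly frequencies of three problem behaviors for one client; treatment
# for each behavior begins at a different (staggered) session.
observations = {
    "hitting":  [9, 10, 9, 8, 9, 3, 2, 1, 1, 0, 0, 0],
    "yelling":  [7, 8, 7, 7, 8, 8, 7, 2, 1, 1, 0, 0],
    "throwing": [5, 6, 5, 6, 5, 6, 5, 6, 5, 1, 1, 0],
}
treatment_start = {"hitting": 5, "yelling": 7, "throwing": 9}  # session index

def shows_effect(data, start, drop_ratio=0.5):
    """Crude check: mean after treatment is well below the pretreatment mean."""
    baseline, treatment = data[:start], data[start:]
    return mean(treatment) <= drop_ratio * mean(baseline)

for behavior, data in observations.items():
    start = treatment_start[behavior]
    print(f"{behavior}: change follows its own staggered onset -> "
          f"{shows_effect(data, start)}")
```

If the change in each behavior appears only after its own treatment onset, and not when the other behaviors are treated, the therapist has evidence that the procedure, rather than some uncontrolled factor, produced the change.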

You must understand the necessary characteristics of a useful baseline. For example, it must be stable before you institute your therapeutic procedure. But how do you define stability? If the behavior of concern continues to show a steady increase or decrease before you have applied your procedure, then you either will be unable to attribute a continuing change to anything you have done, or you will be unable to specify how much change was caused by your therapy. If the baseline shows great variability, then you may be unable to claim that your attempted therapy had any effect at all, although a stable pattern of variability can still be useful as a baseline. The need for stable behavioral baselines and for the definition of stability is fundamental in experimental behavior-analytic research. Without a satisfactory specification of stability, colleagues will ignore your findings; you just as well might never have done the work. Unfortunately, applied work often is judged not only by informed colleagues but by administrators, publicists, and special interest groups to whom considerations of treatment validity are unknown, ignored, or irrelevant to their own agendas. It is therefore incumbent on practitioners to establish and maintain their own high standards. Basic research on stability criteria is directly relevant to behavior-analytic practice. There is no better place than the basic research laboratory in which to become aware of those standards and of how to use them to evaluate one's own work.
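As one hypothetical way of turning a stability criterion into something explicit, the short sketch below flags a baseline as unstable if it shows either a marked trend or excessive variability. The numeric thresholds are arbitrary illustrative choices, not standards drawn from this article or from any published criterion.

```python
# Hypothetical stability check for a behavioral baseline.
# The trend and variability thresholds are illustrative only.
from statistics import mean, pstdev

def baseline_is_stable(counts, max_trend=0.1, max_cv=0.2):
    """True if the baseline shows neither a marked trend nor high variability.

    max_trend: largest tolerated per-session change, as a fraction of the mean.
    max_cv:    largest tolerated coefficient of variation (sd / mean).
    """
    n, avg = len(counts), mean(counts)
    if avg == 0:
        return False  # nothing to measure against
    xs = range(n)
    x_bar = mean(xs)
    # Simple least-squares slope of count against session number.
    slope = sum((x - x_bar) * (y - avg) for x, y in zip(xs, counts)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return abs(slope) / avg <= max_trend and pstdev(counts) / avg <= max_cv

print(baseline_is_stable([12, 11, 13, 12, 12]))  # steady -> True
print(baseline_is_stable([4, 6, 9, 12, 15]))     # rising trend -> False
```

Whatever the particular numbers, the point is the same as in the text: the definition of stability has to be stated before the treatment begins, so that the treatment effect is judged against a baseline everyone can agree was stable.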

Two-way interactions in research and practice

Experimental behavior analysis consists of two-way interactions between subject and experimenter. Unlike traditional experimental psychology, behavior-analytic methodology calls for changes in the experimenter's behavior as a function of what the experimental subject does. Such flexibility also helps make the science compatible with practice. Effective behavior therapy, too, requires two-way interactions between therapist and client.

Ideally, the client's behavior will change in response to therapeutic measures, but sometimes the client's behavior does not change or an observed change may be therapeutically undesirable. The therapist therefore must know how to change his or her therapeutic procedures on the basis of what the client does. Successful behavior-analytic practice does not depend on a set of fixed rules or immutable procedures but consists of options that the practitioner can apply in response to what the client does.

When a behavior-analytic treatment fails, it may well be necessary to refine the kind of baseline from which to measure treatment effects. For example, should the therapist be concerned just with the frequency of the undesirable behavior, or should observations of when that behavior occurs constitute the critical baseline datum? Or it may be necessary to change the consequences applied to the client's behavior; was that reinforcer really a reinforcer? Experimentation has taught us how to find out. Or might the environmental context be more important than the consequences of the client's behavior? If so, such measures as changing the location in which the therapy is carried out, or changing the therapist, or perhaps presenting test material on a computer rather than on the tabletop (or vice versa) might help. Experimentation on stimulus control has provided lessons that are unknown to most new behavioral practitioners.

This two-way interaction between scientist and experimental subject, and between practitioner and client, has given rise to the concept of the scientist-practitioner. Practitioners who carefully measure features of a client's behavior, particularly its frequency but other aspects, too, and who change their treatment procedures in response to what the client does or fails to do, are themselves doing just what behavioral scientists do. For example, if an experimental subject fails to learn, the experimenter will make such changes as increasing the size of the reinforcers, decreasing the delay between behavior and reinforcement, checking to determine whether subject and experimenter are attending to the same stimuli, and making sure that the subject already has learned all the prerequisites for the behavior being measured.

Effective practitioners will do the same. They will ask, for example, Was that pat on the head and the words, “good boy” really a reinforcer? If you are trying to teach a nonspeaking child to point to pictures to indicate what he or she wants, have you first made sure that things and their pictures are equivalent? If they are not equivalent, how do you make them so? (see, e.g., Sidman, 2009). Instead of concluding that a client is incapable of learning, or that reinforcement does not work, the behavior therapist will check to make sure that the client's failure to learn was not caused by his or her own (the therapist's) failure to teach effectively. Like laboratory experimenters, effective practitioners always will start with something a client already knows how to do and only gradually will introduce additional requirements, programming the material or the behavior they are trying to teach in such a way that the client can progress steadily without encountering consistent failures.

In laboratory experimentation, such changes in the experimenter's behavior are routine. When I and my colleagues started a program devoted to increasing the behavioral repertoires of a group of boys with severe intellectual disabilities who resided in a state residential program (e.g., Mackay & Sidman, 1968), we did what we had learned to do in the laboratory. Using laboratory-derived techniques, we succeeded in generating new adaptive behavior in boys who had been judged incapable of learning and whom neglect had left seemingly “behaviorless.” Reinforcement generated and maintained new behavior. Candies and food were quite effective as starters, but truly adaptive behavior would, of course, require other kinds of reinforcers. We found that we had to establish those new reinforcers, a task that had not been necessary in the laboratory but that experimentation had shown us how to accomplish. Then, by using standard stimulus discrimination techniques, we placed the boys' new behavior under appropriate environmental control. We established conditioned and generalized reinforcers that previously were unknown to the boys, but we found that we first had to teach them to recognize such everyday items as colors, shapes, and sounds. We taught them to dress, feed, and toilet themselves, but we had to adjust our techniques continually because of tremendous variations in the boys' preexisting behavioral repertoires. We taught many of the boys to speak, to ask for what they wanted, to play together, to read signs, to eat in restaurants, and to use public transportation. We did all of this and more by applying and modifying methods that had proven to be successful with both nonhuman and human subjects in our laboratory work. This experience taught me that science and practice were not separate enterprises but were interconnected closely (see also Baer et al., 1968).

Although becoming a practitioner taught me much that I had not known before, my laboratory experiences greatly facilitated my new learning. For example, by directly adapting principles we already had learned in the laboratory, we were able to institute an effective token economy. We used classic response shaping and backward chaining to establish tokens as secondary reinforcers and to teach the children how to use tokens to make purchases at a “store” that we set up. (I shall have more to say later about backward chaining.)

Eventually, we discovered problems that had never arisen in the laboratory. For example, we had to teach the boys that the store was not always open. They had to learn (that is to say, we had to teach them) to save tokens and use them later after they actually had earned them. To teach this, we had to work individually with each boy. We started by giving him a candy for each token immediately after he earned it. We gradually increased the time he had to hold onto the token before he could trade it in. With some, we had to proceed extremely slowly, increasing this delay period by only seconds at a time. Other boys were able to advance more rapidly. Then, we had to teach them to place their tokens in their pockets before spending them, and eventually to use a purse. We never had to teach these things to our laboratory subjects, but the methods we used were based on laboratory-derived principles.

We also had to teach the boys not to steal tokens from each other. Most of them never had anything of their own and had had little or no opportunity to learn the concept of private property. We had never had to teach such things to our laboratory subjects. The problem became acute when one day, we found that most of the tokens in our system had disappeared. We discovered what had happened when a technician from another research project, in which some of our boys participated as subjects, brought us several bags full of tokens. She said that she had mentioned to one of the most advanced of our boys that she had to buy a new car, and he had asked her how many tokens a car would cost. “Oh,” she replied, “lots and lots.” Soon afterward, he appeared with several bags of tokens for her.

Although we never had encountered such problems in our laboratory experimentation, we had learned there that whenever subjects displayed unusual behavior, there were almost always ways to deal with it by applying known principles. Rather than punish the boys for stealing (a concept most of them did not understand) we easily solved the problem by instituting a system of colored tokens. Most of the boys used blue tokens. Those who were observed to steal tokens, or even simply to pick up tokens that the less advanced boys left lying around, were required to use yellow tokens; if they tried to cash in other colors, they received nothing for them. The more advanced boys who earned tokens by helping out in the building were given red tokens; other colors were valueless to them. We then were able to use our familiar discrimination learning techniques to teach the boys that only tokens of a particular color were of value to them.
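A minimal sketch, with invented names, colors, and prices, can make the colored-token contingency explicit: each boy's earned color is the only color the store will accept from him, so stolen or scavenged tokens buy nothing.

```python
# Hypothetical model of the colored-token arrangement described above.
# Names, colors, and prices are illustrative only.
earned_color = {"Alan": "blue", "Bert": "yellow", "Carl": "red"}

def cash_in(boy, tokens, price):
    """Exchange tokens for an item costing `price` tokens.

    Only tokens of the boy's own earned color count; tokens of any other
    color (for example, tokens picked up or taken from someone else) are
    worth nothing to him at the store.
    """
    usable = [t for t in tokens if t == earned_color[boy]]
    if len(usable) >= price:
        return f"{boy} buys the item with {price} {earned_color[boy]} tokens."
    return f"{boy} cannot buy the item: only {len(usable)} usable tokens."

print(cash_in("Bert", ["yellow", "yellow", "blue", "blue"], 3))  # blue ones don't count
print(cash_in("Alan", ["blue", "blue", "blue"], 3))
```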

I had similar experiences when I came to work in the Neurology Department at Massachusetts General Hospital in Boston. There, I was faced with the problem of working with patients with whom we could not communicate by means of speech. I had had no experience working with people who displayed severe language deficiencies. Soon after I arrived, the chief of the service introduced me both to a population of children with severe behavioral deficits and to a number of adult patients who had suffered strokes and were incapable of speech. He asked me a simple question: “How do I evaluate these people? Because I cannot communicate with them I am unable to carry out my usual tests to assess the state of their nervous systems. Can you help me?”

Well, I had never investigated the behavior of people with little or no speech, but I had more than 10 years of experience working with nonhuman subjects who were, of course, incapable of speech. My laboratory work had been directed at the identification and analysis of environmental variables that controlled the behavior of laboratory rats, mice, pigeons, monkeys, and baboons. I had become convinced that those same variables must be operating to determine our own behavior. That conviction was strong enough to change the whole course of my life—moving to a new job in a different city and starting in a new research direction in which I had had little previous experience. Still, in my new laboratories I not only set up facilities for working with humans but maintained research with nonhumans as well. The neurologists were desperate enough to indulge me in my peculiar research needs.

I have already mentioned my work with people who displayed severe intellectual disability, and have noted that basic research spilled over into application outside the laboratory. How did we approach the behavior of stroke patients who were incapable of speech? Again, we adapted methods that were common in nonhuman laboratory research. In this instance, we did more than try to shape responses. We were concerned, first, to find out more about the stroke patients. Could we communicate with them in some way other than by speech? How much did they understand about words? Did they understand spoken and written language even though they could not speak? Could they communicate by writing? Could they comprehend written words even though they could not read aloud?

To find out about and to measure their language comprehension, we adapted a technique we had learned about from previous research, the familiar matching-to-sample (conditional discrimination) procedure (e.g., Cumming & Berryman, 1965). By making use of that technique, we were able to get patients with little or no speech to tell us how much they understood about words. For example, could they match pictures, colors, numbers, and shapes they could see or feel to the spoken and written names of those stimuli? When they looked at pictures they could not name aloud, could they write their names? If they had difficulties in any of these tasks, as many did, did they improve as time progressed after their stroke? We thus succeeded in obtaining quantitative information about the linguistic capabilities of people who could not speak to us (e.g., Sidman, Stoddard, Mohr, & Leicester, 1971). We were able to provide the neurologists with data they could then attempt to correlate with brain structures and processes (e.g., Mohr, Leicester, Stoddard, & Sidman, 1971).
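To make the trial structure of that assessment concrete, here is a hypothetical sketch of a single matching-to-sample trial: a sample in one modality, several comparisons in another, and a record of whether the chosen comparison corresponds to the sample. The stimuli and the stand-in for the patient's choice are invented for the example.

```python
# Hypothetical matching-to-sample (conditional discrimination) trial.
# Stimuli and the simulated choice are invented for illustration.
import random

def run_trial(sample, comparisons, correct, choose):
    """Present a sample, offer comparison stimuli, and score the choice.

    `choose` stands in for whatever response the patient actually makes;
    here it is just a function that returns one of the comparisons.
    """
    random.shuffle(comparisons)        # vary comparison positions across trials
    choice = choose(sample, comparisons)
    return {"sample": sample, "choice": choice, "correct": choice == correct}

# Example: a spoken word as the sample, printed words as the comparisons.
print(run_trial(
    sample="spoken: 'circle'",
    comparisons=["CIRCLE", "SQUARE", "TRIANGLE"],
    correct="CIRCLE",
    choose=lambda sample, comps: comps[0],   # placeholder for the patient's response
))
```

Running many such trials across modality pairs (spoken to printed, picture to printed, picture to written response, and so on) is what yields the kind of quantitative comprehension profile described above.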

Applied Research, Translational Research, and Research Translation

Translational or applied research, too, will teach practitioners much that is relevant to therapeutic procedures, and I encourage clinicians to engage in those kinds of research. Basic research, however, whether in the laboratory or outside, rarely is concerned with any specific behavior. Skinner originally selected lever pressing as an arbitrary response to use in his research because he considered the particular response topography irrelevant to the general principles he was developing. Nor does basic research usually concern itself with the social significance of the contingencies under investigation. The research aim is generality. True generality means that many different specifics are covered, not just those involved in a particular study. Such generality is a distinctive feature of basic research in comparison with the specific aims of most translational or applied research.

It is relevant here to point out a difference between translational research and research translation. In translational research, we attempt to use scientific procedures to evaluate the applicability of basic research findings, procedures, or principles in situations that we cannot control as rigorously as we do in basic research. It is through translational research that we confirm, for example, that we can teach children equivalence relations between colors and color names in the classroom as well as in the laboratory. Even though the classroom environment is not nearly as constant as basic experimentation demands, our testing procedures and data evaluation are as rigorous as those we used originally in the laboratory. In research translation, however, in contrast to translational research, we do not attempt to use scientific procedures to test or to evaluate the results when we try to apply knowledge we have gained in the laboratory. We just use the basic teaching and testing procedures with many children without controlling for their ages, intellectual abilities, types of intellectual and physical handicaps, living and testing environments, and so on. We simply observe whether our procedures really work. By teaching children with varying kinds of intellectual disability, for example, to match spoken color names with visual colors and with printed color names, we can find out whether equivalence relations between the visual colors and printed names emerge without having been directly taught (see, e.g., Sidman, 2009). When you repeatedly see children reading and understanding color names without having been directly taught to do so, do you need scientific proof that the complex procedure works?
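The emergent relations mentioned here can also be stated in a few lines of code. Assuming, purely for illustration, that spoken-name-to-color (A-B) and spoken-name-to-printed-name (A-C) matches have been taught directly, the untaught color-to-printed-name (B-C) matches are what an equivalence test probes.

```python
# Hypothetical illustration of taught versus derived relations in a
# stimulus equivalence arrangement; the stimulus labels are invented.
taught = {
    ("spoken 'red'", "red patch"),     # A-B relations, taught directly
    ("spoken 'green'", "green patch"),
    ("spoken 'red'", "RED"),           # A-C relations, taught directly
    ("spoken 'green'", "GREEN"),
}

def derived_pairs(taught_pairs):
    """List the color -> printed-name pairs that equivalence predicts will
    emerge even though they were never directly taught."""
    pairs = []
    for a1, color in taught_pairs:
        for a2, printed in taught_pairs:
            if a1 == a2 and "patch" in color and printed.isupper():
                pairs.append((color, printed))
    return sorted(pairs)

# Prints the two untaught color -> printed-name matches the test asks about.
print(derived_pairs(taught))
```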

To carry out such research translation, even without meeting the criteria for translational research, practitioners do need to understand what the basic research was all about. I believe that the most reliable way for them to gain such understanding is to be involved first in performing basic and translational research, as my colleagues and I did (e.g., Sidman, 2009). If they simply follow a formula they have been taught for establishing color and color-name equivalences, they well may be unable to appreciate that they could accomplish the same results with numbers, number names, and quantities, or with the different combinations of coins that make up a given quantity of money (e.g., McDonagh, McIlvane, & Stoddard, 1984), or with pictures and their printed names, or with words in different languages, and more. The personal satisfactions we gained from such research translation more than repaid us for engaging in the rigorous research we did first.

Backward chaining

The teaching of behavioral chains, particularly by means of the technique of backward chaining, is another area that has brought me satisfying, even thrilling, feelings of accomplishment. In the laboratory, backward chaining is a standard procedure for teaching nonhuman subjects such complex procedures as chained schedules, by means of which we have, for example, learned much about conditioned reinforcement (see Catania, 2007). In teaching subjects to perform accurately in chained schedules, experimenters have come to take the effectiveness of backward chaining for granted. They teach the later segments of the chain first, gradually adding earlier responses and stimuli.

Little translational research on backward chaining has been reported, but research translation by those originally acquainted with the basic laboratory procedure has demonstrated its utility with more complicated and more socially relevant behavior. To extend the backward chaining procedure to more complex forms of behavior requires one first to recognize behavioral chains, which are the only kind of behavioral sequence to which backward chaining applies. Behavioral chains are sequences of actions and environmental events in which earlier units must be completed before later units even become available. Shoe tying is an example; each successive step produces a new configuration of the laces, and each configuration calls for a different response. In backward chaining, the teacher would start by tying the shoe almost all the way and then asking the child to supply only the final response (pulling the loops tight). Reinforcement for “tying the shoe” would be immediate. Then, the teacher would retie the shoes but not quite as far this time, and ask the child to supply one more new step. Completing that step would place the child in position simply to complete the sequence as he or she previously had learned to do, with reinforcement coming at the end, as before. The teacher gradually would work backward in the sequence, with each new step placing the child in a position to complete the task by doing what he or she already had learned.
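A minimal sketch of that backward-chaining sequence, with the shoe-tying steps written out as placeholders, may help. The step names and the notion of a numbered session are assumptions made only for the example.

```python
# Hypothetical backward-chaining teaching loop for a fixed behavioral chain.
# The step names and session structure are illustrative only.
chain = [
    "cross the laces",
    "pull the laces snug",
    "form the first loop",
    "wrap the second lace around",
    "push the second loop through",
    "pull the loops tight",        # terminal step, closest to reinforcement
]

def backward_chaining_sessions(steps):
    """Yield, session by session, the part of the chain the learner performs.

    The teacher completes everything up to the starting point; the learner
    finishes the chain, and reinforcement follows the final step immediately.
    """
    for start in range(len(steps) - 1, -1, -1):
        yield steps[:start], steps[start:]

for session, (teacher_part, learner_part) in enumerate(
        backward_chaining_sessions(chain), start=1):
    print(f"Session {session}: teacher completes {len(teacher_part)} step(s); "
          f"learner performs {learner_part}")
```

Note the design choice the sketch preserves: every new step asks the learner only to begin a sequence he or she already knows how to finish, so the new response is followed quickly by the familiar, already reinforced remainder of the chain.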

I taught my 5-year-old daughter to tie her shoes this way. It took only about 10 min. At that point, I did not need a research project (basic, translational, or applied) to tell me that the procedure worked. Similarly, simply by applying the technique to many different kinds of behavioral chains, I became convinced that the original basic procedures involved in teaching animals chained schedules were of practical use. In our project with intellectually deficient boys, we used backward chaining to teach them to do such things as feed themselves (using spoons, forks, knives, cups, etc.) and to dress themselves (to put on a shirt, trousers, socks, and shoes). We taught them to help with meal tasks (cleaning and setting tables and carrying their meals on trays from the kitchen to their tables). We also used backward chaining to teach them how to maintain personal and environmental cleanliness (washing hands, brushing teeth, combing hair, sweeping the floor, etc.). Later, we also used the same method to teach them to write their names, to spell words, and to pronounce words. With more advanced pupils, we even were able to teach them via backward chaining to memorize a lengthy poem; each time they read the poem aloud, we left out more letters, syllables, words, and phrases until they finally had to read only the title before reciting the whole poem without the help of text. (I actually have used this procedure in a classroom demonstration for college students.)

We did not carry out any of these successful applications of backward chaining under controlled, replicable conditions. Nor did we gather data that we could report in a journal. Our experiences, however, taught us about the applicability of backward chaining. I did not need translational or applied research to convince me; successful research translations were sufficient. I am convinced, however, that practitioners are less likely to recognize situations in which backward chaining would help them if they never have been involved in basic research that deals with the teaching of behavioral chains. Practitioners who are familiar with research examples (even better, who themselves have been involved in relevant research) then can use the research examples to guide them and can generalize the basic findings to new, clinically relevant teaching situations.

Errorless learning

Backward chaining is more than just another effective teaching technique. It illustrates the principle that learning does not require trial and error; learning can be errorless. The discovery that pigeons can be taught discriminations errorlessly (Terrace, 1963a, 1963b) now has been generalized to many different kinds of human behavior (e.g., Sidman, 2010), but it still is not recognized as the conceptual revolution it really is with respect to our conception of where our behavior comes from. The principle of trial-and-error learning places the responsibility for learning or failure to learn on the learner. That there exist techniques for producing errorless learning shifts this responsibility from the learner to the teacher. With learning shown to be the responsibility of the teacher, we have another example of the scientific sterility of the concept of self-determination of behavior. Also, the principle that learning can be errorless is much more general than any particular teaching technique. If it is appreciated not just by basic scientists but also by practitioners, such understanding can greatly increase the effectiveness of any behavioral therapy that involves the teaching of new behavior.

Skinner (1938) actually gave us the first demonstration of the deliberate production of errorless learning. Since then, innumerable experimenters have found that we can teach responses like pressing a lever or pecking a key errorlessly if we make sure that we first teach our subjects everything they have to know in order to perform the desired response. We first teach the experimental subjects that the pellets they had never experienced before are actually food. We do this by mixing the pellets with their usual food supply. Then, while the animals are in the experimental space, we teach them where to find the food pellets (in the food tray). Also, we teach the animals when to find food in the tray (after the food dispenser sounds). Finally, we introduce the lever. As soon as the animal first presses it, the food dispenser sounds and the animal, as it had already learned to do, goes directly to the food tray and eats the pellet it finds there. Reinforcement is immediate, and the animal continues to press the lever regularly.
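That prerequisite-by-prerequisite sequence can be restated as an ordered checklist. The sketch below is only a paraphrase of the paragraph in code form; the phase names and the idea of a mastery check are illustrative assumptions, not a published protocol.

```python
# Hypothetical restatement of the errorless lever-press training sequence
# as an ordered list of prerequisite phases.
training_phases = [
    ("pellets are food",    "mix the novel pellets with the usual food supply"),
    ("where food appears",  "the subject finds pellets in the food tray"),
    ("when food appears",   "the subject approaches the tray when the dispenser sounds"),
    ("the target response", "the first lever press operates the dispenser at once"),
]

def next_step(phases, mastered):
    """Advance through phases in order; never introduce a step whose
    prerequisites have not been met (which is what keeps learning errorless)."""
    for name, description in phases:
        if not mastered(name):
            return f"Stay on phase '{name}': {description}"
    return "All prerequisites taught; lever pressing is now maintained by its consequences."

# `mastered` stands in for the experimenter's observation of the subject.
print(next_step(training_phases, mastered=lambda phase: phase != "the target response"))
```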

Practitioners who are aware of the basic research that has led to greater understanding of errorless learning will find themselves armed with more than just a few specific teaching techniques like backward chaining, stimulus fading, or stimulus shaping (e.g., Sidman, 2010). The fact is that errorless learning leads to more general procedures that can be summed up as programmed instruction. An instructional program specifies not only what the pupil, subject, or client is to learn but also describes in detail the steps the teacher must take to ensure that the pupil learns all the prerequisites for that final desired performance. Research, however, particularly in the area of stimulus control, has taught us that in any learning situation, what the teacher sees as relevant is not necessarily what the learner sees. The problem was stated succinctly by Prokasy and Hall (1963) as follows:

What represents an important dimension of the physical event for the experimenter may not even exist as part of the effective stimulus for the subject. Similarly, the subject may perceive aspects of an experimental event which have been ignored by, or are unknown to, the experimenter. (p. 312)

An important relation between basic research and practice may be illustrated succinctly by substituting in the Prokasy and Hall citation the words practitioner and client for experimenter and subject, respectively.

It was basic laboratory research that led Prokasy and Hall (1963) to recognize this fundamental problem that arises in attempts to establish stimulus control, that is to say, in placing a pupil's or client's behavior under the control of some specific aspect of the environment. In keeping with Prokasy and Hall's research-derived alert, failures to teach a client may often result from an incorrect assumption that the practitioner and the client are attending to the same stimuli. For example, in our work with children with severe intellectual challenges, we discovered that before we could teach them to recognize anything as complicated as printed words, we first had to teach them to discriminate seemingly simple stimuli like differently slanted lines, curved lines, and other basic shapes. One of our first mistakes was to assume that the mere presence of a stimulus at the time of reinforcement is enough to establish control of the response (see Saunders, 2011, for a more thorough discussion of this misapprehension). We started by teaching a boy first to trace vertical lines, and then to copy those lines. When he was copying the line perfectly on every trial, we presented him with a horizontal line to copy. What he did was draw a vertical line, instead.

For this boy, the vertical line that we thought we were giving him to copy was actually irrelevant to what he was doing. He was not copying the stimulus we were looking at but was just drawing vertical lines, for which he received reinforcement. He did not even have to look at the sample line. If we had attended properly to our own experimental procedures and data, we never would have made the mistake of presenting the same stimulus to the boy on every trial, but we had enough of a background to change our own behavior quickly. We then varied the orientation of the lines the boy was to trace and then to copy, starting with very small variations from trial to trial and then increasing them gradually. Changing our own interpretation of what was going on here served to eliminate the student's errors in learning to produce the lines.

I have seen a similar mistake being made in teaching clients to read words and to name stimuli like colors, numbers, and shapes. For example, the teacher presents a color, tells the pupil its name, and then asks the pupil to say that name in trial after trial, always presenting the same color. When the pupil names that color correctly on several successive trials, say 10 in a row, a new color is presented trial after trial until the pupil meets a predetermined criterion for saying that color name correctly. After going through this process with several colors separately, the teacher then presents these colors individually on consecutive trials and finds that the pupil is unable to name them. The incorrect assumption here was that the colors that were controlling the teacher's naming behavior also were controlling what the pupil said. All the pupil had to do to produce reinforcers, however, was to keep saying the same word on trial after trial. There was no need even to look at the colors; simply presenting colors was not sufficient to teach their names. Again, this is an example of a research-derived principle that applies to many forms of stimulus control. Teachers must make sure that pupils see what they see. Practitioners who have learned such general principles will be able to solve many more learning problems than will those who have been taught a teaching technique to solve a particular problem without understanding the technique's general applicability.

I also had an experience that, if I had failed to recognize that my subject and I were not looking at the same stimuli, would have prevented me from getting started in the field of equivalence relations (Sidman, 1994). The critical part of that first experiment involved a conditional discrimination procedure in which I attempted to teach a boy to match each of 20 dictated (auditory) picture names to its corresponding printed (visual) picture name, giving him a display of eight printed names from which to choose one on each trial. The subject in that first experiment was a boy so severely handicapped intellectually that I automatically assumed I could present the same sequence of 20 conditional discrimination trials (each consecutive trial offering him eight different visual stimuli from which to select the correct one) without his learning the position of the correct word in consecutive displays. That automatic assumption proved to be wrong, as I discovered when the boy eventually achieved a criterion of 20 consecutive correct trials and I then presented a different sequence of names. Changing the sequence completely disrupted the boy's seemingly perfect performance. It turned out that I had been looking at the correct word on each trial while the boy had been looking at the position of the correct word in consecutive displays.

How many reports have I seen since then in which trial sequences either were repeated over and over again or were not even specified, indicating that the researcher or practitioner did not consider sequence learning to be a potentially important, and confounding, variable? Regardless of the intellectual status of my experimental subjects or clients, I now make it a practice not simply to vary trial sequences but to control such characteristics as the number of trials that must intervene between repetitions of a correct stimulus location, between repetitions of a correct stimulus, between successive placements of any particular incorrect stimulus, and other sequential features. Both practitioners and researchers sometimes are puzzled by having generated a criterion performance only to observe their clients or subjects then reverting to precriterion levels. Prior experience with methods for evaluating experimental or clinical data will lead them to suspect that the seeming criterion performance was illusory, based on differences between the variables that determined the behavior of the subjects or clients and those that determined the behavior of the experimenter or practitioner.
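
For those who generate trial sequences by computer, such constraints can be enforced mechanically. The sketch below is one possible way of doing so, not the program I actually used; the stimuli, gap sizes, and other settings are illustrative only.

    import random

    def constrained_sequence(stimuli, n_positions, n_trials,
                             min_gap_stimulus=2, min_gap_position=1,
                             max_restarts=1000):
        """Build a trial list in which the same correct stimulus cannot recur
        within min_gap_stimulus trials and the same correct position cannot
        recur within min_gap_position trials. All settings are illustrative."""
        for _ in range(max_restarts):
            trials = []
            for _ in range(n_trials):
                recent_stimuli = {s for s, _ in trials[max(len(trials) - min_gap_stimulus, 0):]}
                recent_positions = {p for _, p in trials[max(len(trials) - min_gap_position, 0):]}
                candidates = [(s, p)
                              for s in stimuli if s not in recent_stimuli
                              for p in range(n_positions) if p not in recent_positions]
                if not candidates:
                    break  # painted into a corner; restart the whole sequence
                trials.append(random.choice(candidates))
            if len(trials) == n_trials:
                return trials
        raise RuntimeError("Constraints could not be satisfied; relax them.")

    # Example: four correct stimuli, eight possible positions, 20 trials.
    for trial in constrained_sequence(["cat", "dog", "cow", "pig"], 8, 20)[:5]:
        print(trial)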

We have a situation now in which many behavior-analytic practitioners may be unaware of the source of their methods. To take an elementary example, basic research has defined and continues to refine the most basic applied technique: positive reinforcement. Applied workers who are acquainted with this research might well be more capable of responding effectively to many seeming failures of their standard reinforcement procedures. In the laboratory, for example, we can control precisely the delay between a particular behavior and the reinforcement that follows it. When we do that, we get a graphical picture of how delayed reinforcement affects the likelihood of a response, a quantitative representation of a powerful variable. A client's seeming inability to learn some new behavior may be caused by even a small delay in the delivery of reinforcement. The applied worker who has seen such experimental data may be more likely to avoid delays in reinforcing a client's desirable behavior than is the worker who never has seen proof of the importance of even a few seconds of delay.
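
One common way of summarizing such laboratory delay gradients is a hyperbolic decay function. The brief sketch below is offered only as an illustration of the shape of the relation, not as a fit to any particular data set, and the parameter value is arbitrary.

    def reinforcer_effectiveness(delay_s, k=0.5):
        """Illustrative hyperbolic decay of reinforcer effectiveness with delay
        (in seconds). The form V = 1 / (1 + k * D) is one commonly used summary
        of delay gradients; the value of k here is arbitrary."""
        return 1.0 / (1.0 + k * delay_s)

    for delay in (0, 1, 2, 5, 10, 30):
        print(f"{delay:>2} s delay -> relative effectiveness {reinforcer_effectiveness(delay):.2f}")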

Similarly, although many applied problems have no exact counterpart in the laboratory, practitioners can constructively apply general principles that have emerged from the laboratory. If one is asked, for example, to do something about a teenager who repeatedly runs away, an acquaintance with research on punishment, escape, and avoidance behavior may suggest that it is not the runaway who needs treatment, but the parents or other caregivers (Sidman, 2000). If one is faced with a destructive, seemingly unmanageable child, the knowledgeable practitioner will ask, “What are the consequences of the destructive behavior for the child? What does the child get from destructive actions?” He or she then will try to arrange for the child to obtain those same consequences by acceptable rather than destructive behavior. For example, one of the boys with whom we were working went through a phase in which he broke windows by smashing them with his fist. Eventually, we noticed that this violence never produced cuts or any other injuries to his hand. This clue told us that his window smashing was not simply an emotional response or an example of his “destructive nature” but rather was an out-and-out example of operant behavior, reinforced by its consequences. An immediate consequence, of course, was the tremendous amount of attention he generated each time he broke a window. When we then made sure to provide such attention after more desirable behavior, his window smashing stopped.

Another example might be when a practitioner sees that the constructive new behavior he or she has taught a client does not generalize beyond the actual teaching situation. An understanding of stimulus control will lead to an examination of the particular aspects of the teaching situation that control the new behavior but are not present at other times and places. The therapist then will try to eliminate those irrelevant sources of behavioral control. For example, a student who has learned to match words and pictures that are presented on a computer screen may then fail that task when the stimuli are presented on the tabletop. For the student, the vertical orientation of the computer display may have become a critical feature of the controlling stimuli, even though the stimuli of interest to the teacher were only the words and pictures. The problem might be solved by orienting the stimulus display vertically on the tabletop and then gradually tilting the display until it is oriented horizontally on the table.
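
Such a fading sequence can be written out explicitly. The sketch below merely generates the series of tilt angles; the step size and the number of trials at each angle are arbitrary, and in practice one would advance only when the student meets an accuracy criterion at each step.

    def tilt_fading_schedule(start_deg=90, end_deg=0, step_deg=10,
                             trials_per_angle=5):
        """Tilt the display from vertical (90 degrees) toward horizontal
        (0 degrees) in small steps, holding each angle for a fixed number of
        trials. All values here are illustrative."""
        schedule = []
        angle = start_deg
        while angle >= end_deg:
            schedule.extend([angle] * trials_per_angle)
            angle -= step_deg
        return schedule

    print(tilt_fading_schedule())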

All of these problem situations vary, but the principles are consistent. Basic research, although not designed to solve any particular applied problem, provides principles and techniques that are applicable even to problems that a practitioner may never have seen before.

Concluding Comments

These have been some highlights of one person's experiences in moving back and forth between basic behavior-analytic research and practice. Many others have similar stories to tell. The main point is that basic research is not irrelevant to practice; it can provide effective tools for treating unwanted behavior, identifying missing behavior, and teaching new behavior. Engaging even in a narrowly defined research project can be valuable to a practitioner; engaging in a prolonged research project that takes the practitioner in different directions will add even more to his or her practical skills. I believe that the practice of behavior analysis would become more generally effective if the required training programs for those intending to go into applied behavior analysis were made more rigorous with respect to training in basic research. A basic research background would provide practitioners with a firmer understanding of why they are doing what they do.

For practitioners whose training did not include basic research experience, in particular, daily professional life may not provide opportunities to gain the kinds of experiences that I have outlined. For them, I can only suggest that a program of readings in the research literature, along with regularly scheduled discussions with friends, coworkers, and supervisors, might provide valuable insights. A major task before us, therefore, is to develop programs that will turn out researchers who understand and even engage in practice, and that will turn out practitioners who understand and even engage in research. I refer here, of course, to the scientist-practitioner model.

Currently, however, many academic programs in behavior analysis might not emphasize or even discuss the concept of the scientist-practitioner. The relation between research and practice is a two-way relation, with research experience providing a general background that permits practitioners to deal with problems that go well beyond the particular ones they actually have been trained to handle, and with practical experience exposing problems that would repay scientific investigation (e.g., Sidman, 2008). Unfortunately, I am not aware of any publicly available data that permit us to evaluate particular training programs. What I am saying, then, is that the professional training of behavior analysts requires analysis itself. Effective training principles then might be adopted from those programs that have been successful in turning out scientist-practitioners. Even more valuable, and probably more challenging, would be the development of applied research programs that are designed to evaluate the effectiveness of particular features of training programs.

Self-examination also would help to clarify the effectiveness of the qualifying tests for the certification of behavior-analytic practitioners. I understand the need for certification as a way for the profession to protect itself from those who would make false claims of competence. I commend the development of the Behavior Analyst Certification Board (BACB) and the gradual refinement of its requirements for the certification of practitioners. I worry, however, that some aspects of the current certification program may make it difficult for it to maintain high standards and, at the same time, protect itself from attack by those who are unfriendly to the development of behavior-analytic practice. For example, the BACB seems not to include measures of its own efficacy. That is to say, I know of no evidence that certification is helping to turn out more effective practitioners. How to secure that evidence constitutes a problem whose solution will require the attention of creative investigators and practitioners.

Again, I speak without the support of data when I express a concern that certification exams may raise some serious problems with respect to the training of behavior analysts. That concern should not be interpreted as an attack on the BACB but rather as the result of an “armchair” analysis of some of the behavioral contingencies that its operations are likely to generate. For example, I fear that even academic curricula may become limited by the content of the qualifying examinations. Students are likely to seek curricula that prepare them for certification. Indeed, training programs with just that limited goal are being offered now. As a consequence, training programs that originally were intended to cover general topics may be forced to teach to the exams, even though those exams cannot possibly evaluate all of the necessary applied skills or the general knowledge that basic research has generated. What will happen, then, when basic research develops new knowledge that would be relevant to practice? Because that new knowledge would take time to be absorbed into certification tests, many training programs would not include it and many practitioners would remain unaware of it.

It seems reasonable to me that practitioners would be more likely to seek training that included new developments if the qualifying tests were made more inclusive than they are at present. Might it even be useful for the test designers to provide explicit justification not only for including areas of basic research but also for leaving out other areas? A big job, certainly, but might the effort not be justified by increases in the effectiveness of practitioners? I do believe it would be useful for questions and suggestions like these to be discussed openly.

Finally, I fear that a neglect of basic behavior-analytic science eventually will reduce and perhaps even eliminate the public approval of behavior-analytic practice. The general public is coming to recognize and appreciate the concept of evidence-based practice in many areas (e.g., Green, 2008). Sooner or later, the public will come to reject any practice that it sees as lacking scientific backing. Is not this avoidance of rejection the goal of those who advocate evidence-based practice in any field? Furthermore, if practitioners themselves are unaware of how basic science supports what they do, then a public that also is uninformed is likely to assume that no such relation exists. When that happens, practitioners will lose their public acceptance.

I believe, therefore, that it is critical to maintain a close relation between basic research and practice. A major goal of our profession should be the creation of scientist-practitioners. The realization of that goal will require changes in the curricula offered by many academic programs, even many that already view themselves as following a scientist-practitioner model. They will have to add not only significant basic research training for potential practitioners but also significant practical experiences for potential basic researchers. Researchers should be required not only to take part in translational and applied research but also to translate and apply research findings and principles to particular behavioral problems. In addition, practitioners should be required to participate in basic as well as translational and applied research. A fundamental principle of learning is that students must participate, not just be handed information to absorb.

Although to some I may represent simply an example of old-fashioned accomplishments that are irrelevant to modern behavior-analytic practice, I hope that many practitioners will take a second look, or perhaps even a first look, at the characteristics of experimental individual-subject methodology. You will find those characteristics quite relevant to what you are trying to accomplish, as I did when I moved into applied research and practice.

Acknowledgments

This paper was adapted from a presentation at the Florida Association for Behavior Analysis meeting on October 8, 2010.

REFERENCES

  1. Ayllon, T., & Azrin, N. H. (1968). The token economy: A motivational system for therapy and rehabilitation. New York: Appleton-Century-Crofts.
  2. Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97. doi:10.1901/jaba.1968.1-91
  3. Catania, A. C. (2007). Learning. Cornwall-on-Hudson, NY: Sloan.
  4. Cumming, W. W., & Berryman, R. (1965). The complex discriminated operant: Studies of matching-to-sample and related problems. In D. I. Mostofsky (Ed.), Stimulus generalization (pp. 284–330). Stanford, CA: Stanford University Press.
  5. Girardeau, F. L., & Spradlin, J. E. (1961). Token rewards in a cottage program. Mental Retardation, 2, 345–351.
  6. Green, G. (2008). "Evidence-based practice": Improvement or illusion. ABAI Newsletter, 31(3).
  7. Hanley, G. P., Iwata, B. A., Roscoe, E. M., Thompson, R. H., & Lindberg, J. S. (2003). Response-restriction analysis: II. Alteration of activity preferences. Journal of Applied Behavior Analysis, 36, 59–76. doi:10.1901/jaba.2003.36-59
  8. Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27, 197–209. doi:10.1901/jaba.1994.27-197 (Reprinted from Analysis and Intervention in Developmental Disabilities, 2, 3–20, 1982)
  9. Latham, G. I. (1994). The power of positive parenting. North Logan, UT: P&T Ink.
  10. Mace, F. C. (1994). Basic research needed for stimulating the development of behavioral technologies. Journal of the Experimental Analysis of Behavior, 61, 529–550. doi:10.1901/jeab.1994.61-529
  11. Mace, F. C., & Critchfield, T. S. (2010). Translational research in behavior analysis: Historical traditions and imperative for the future. Journal of the Experimental Analysis of Behavior, 93, 293–312. doi:10.1901/jeab.2010.93-293
  12. Mackay, H. A., & Sidman, M. (1968). Instructing the mentally retarded in an institutional environment. In G. A. Jervis (Ed.), Expanding concepts in mental retardation (pp. 164–169). Springfield, IL: Charles C Thomas.
  13. McDonagh, E. C., McIlvane, W. J., & Stoddard, L. T. (1984). Teaching coin equivalences via matching to sample. Applied Research in Mental Retardation, 5, 177–197. doi:10.1016/s0270-3092(84)80001-6
  14. Mohr, J. P., Leicester, J., Stoddard, L. T., & Sidman, M. (1971). Right hemianopia with memory and color deficits in circumscribed left posterior cerebral artery territory infarction. Neurology, 21, 1104–1113. doi:10.1212/wnl.21.11.1104
  15. Pavlov, I. P. (1927). Conditioned reflexes: An investigation of the physiological activity of the cerebral cortex. London: Oxford University Press.
  16. Prokasy, W. F., & Hall, J. F. (1963). Primary stimulus generalization. Psychological Review, 70, 310–322. doi:10.1037/h0049354
  17. Saunders, K. (2011). Stimulus control is an inference: Avoiding the rookie stimulus-control error. European Journal of Behavior Analysis, 12, 333–334.
  18. Sidman, M. (1953). Two temporal parameters of the maintenance of avoidance behavior by the white rat. Journal of Comparative and Physiological Psychology, 46, 253–261. doi:10.1037/h0060730
  19. Sidman, M. (1964). Anxiety. Proceedings of the American Philosophical Society, 108, 478–481.
  20. Sidman, M. (1966). Avoidance behavior. In W. Honig (Ed.), Operant behavior: Areas of research and application (pp. 448–498). New York: Appleton-Century-Crofts.
  21. Sidman, M. (1994). Equivalence relations and behavior: A research story. Boston: Authors Cooperative.
  22. Sidman, M. (1998). The scientist/practitioner in behavior analysis: A case study [Videotape]. Available from the Society for the Quantitative Analyses of Behavior.
  23. Sidman, M. (2000). Coercion and its fallout (rev. ed.). Boston: Authors Cooperative.
  24. Sidman, M. (2005). Meeting the world halfway. The Current Repertoire: Newsletter of the Cambridge Center for Behavioral Studies, 21, 3–4.
  25. Sidman, M. (2007). The analysis of behavior: What's in it for us. Journal of the Experimental Analysis of Behavior, 87, 309–316. doi:10.1901/jeab.2007.82-06
  26. Sidman, M. (2008). O impacto da ciência na aplicação: Rua de mão única? [The impact of science on application: A one-way street?]. Revista Brasileira de Análise do Comportamento, 4, 9–11.
  27. Sidman, M. (2009). Equivalence relations and behavior: An introductory tutorial. The Analysis of Verbal Behavior, 25, 5–17. doi:10.1007/BF03393066
  28. Sidman, M. (2010). Errorless learning and programmed instruction: The myth of the learning curve. European Journal of Behavior Analysis, 11, 167–180.
  29. Sidman, M., Stoddard, L. T., Mohr, J. P., & Leicester, J. (1971). Behavioral studies of aphasia: Methods of investigation and analysis. Neuropsychologia, 9, 119–140. doi:10.1016/0028-3932(71)90038-8
  30. Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. New York: Appleton-Century-Crofts.
  31. Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.
  32. Skinner, B. F. (1971). Beyond freedom and dignity. New York: Knopf.
  33. Terrace, H. S. (1963a). Discrimination learning with and without "errors." Journal of the Experimental Analysis of Behavior, 6, 1–27. doi:10.1901/jeab.1963.6-1
  34. Terrace, H. S. (1963b). Errorless transfer of a discrimination across two continua. Journal of the Experimental Analysis of Behavior, 6, 223–232. doi:10.1901/jeab.1963.6-223
