The Behavior Analyst. 2011 Fall;34(2):137–148. doi: 10.1007/BF03392245

To a Young Basic Scientist, About to Embark on a Program of Translational Research

Thomas S. Critchfield
PMCID: PMC3211374  PMID: 22532736

Abstract

From recent commentaries about the role of basic behavior scientists in translational research, I distill some advice to young investigators who seek to apply their basic science training to translational studies. Among the challenges are (a) devising use-inspired research programs that complement, and are not redundant with, existing efforts in basic and applied behavior analysis; and (b) making tactical decisions, such as the selection of methods and collaborators, based on the research topic rather than, necessarily, the existing traditions in behavioral research. Finally, it must be recognized that although use-inspired basic research has the potential to attract support to basic laboratories and contribute to “saving the world,” neither of these outcomes is guaranteed. I discuss the relative risks for basic scientists who proceed with use-inspired basic research rather than ignore such translational questions.

Keywords: basic research, applied research, translational research


So you have decided to apply your training as a basic scientist to a new program of translational research. Perhaps you are convinced that use-inspired basic research is more likely than pure basic research to attract tangible support. Given a general shortage of science funding and contemporary erosion in public enthusiasm for science (particularly basic science; e.g., Neuringer, 2011; Poling & Edwards, 2011), you are wise to consider whether the experimental analysis of behavior (EAB), or at least your personal contributions to it, can prosper without increased attention to socially important problems.

If that is not motivation enough, Neuringer (2011) offers the complementary observation that society itself might not survive without significant new contributions from behavior science:

Another reason to work toward solving real-world problems is that we may be running out of time. Threats abound, including atomic warfare, global warming, overpopulation, pandemic diseases, natural resource depletion, and corporate abuse. Each of these has the potential to destroy our way of life, if not life itself. (p. 28)

It is important to note that each of the preceding problems is, in some fashion or another, a problem of human behavior. Where better to begin to understand the behavioral roots of such problems than in the basic behavioral laboratory?

Whatever your motivation, you will soon discover that in translational science (as in all science) there are no strict rules for how to conduct successful experiments or nurture research programs. As Sidman (1960) has indicated, the best that any investigator can do is to get busy in the laboratory, be shaped by data, and reflect thoughtfully on the practical and theoretical issues that bear on his efforts. In the last regard, you can benefit from the recently published observations of Branch (2011), DeLeon (2011), Neuringer (2011), Pilgrim (2011), Poling and Edwards (2011), and Vollmer (2011). Their comments prompt the following observations.

WHERE USE-INSPIRED BASIC RESEARCH FITS IN

Relation to Pure Basic Research

A critical early step in translational research is to specify clearly, for yourself and others, how you will complement the field's more prominent research traditions. One important issue concerns the relation between use-inspired basic research and the rest of the EAB. On the one hand, some have suggested that translational work provides a valuable bridge between the laboratory and the practical world that makes applied innovations more likely (e.g., DeLeon, 2011; Vollmer, 2011; Wacker, 1996, 2000). On the other hand, “There can be little in the experimental analysis of behavior that is actually irrelevant to everyday human problems, given the nature of our science” (Pilgrim, 2011, pp. 39–40).

Is “bridging,” therefore, even necessary? EAB's rigorous methods and theoretically conservative approach (e.g., Skinner, 1956) have an impressive track record of scientific success. Very rarely, across seven or eight decades of systematic investigation, have EAB findings been dramatically overturned, even as laboratory procedures have evolved and as research topics have been exported from laboratory to laboratory. With each laboratory replication and extension, confidence in the generality of fundamental behavior principles grows. As Skinner (1953) suggested, “When we have discovered the laws which govern a part of the world about us, we are then ready to deal effectively with that part of the world” (p. 13).

The comments of Skinner (1953) and Pilgrim (2011) might be taken to imply that every successful basic researcher stands poised to inspire a better mousetrap, but that is not how things tend to operate in science. Most basic researchers do not extend their laboratory work to the field, and most do not discuss their work in ways that might prompt others to attempt the extension (e.g., Poling, 2010). Perhaps as a result, most basic-science breakthroughs do not quickly make the leap to the practical world (Critchfield, 2011; Rogers, 2004; Stokes, 1997). The comments of Skinner and Pilgrim, therefore, can be seen as expressions of the widely held, but dubious, assumption about pure basic research that “someone else, someday, can be expected to harness the resulting principles for practical benefit” (Critchfield, 2011, p. 5). A translational perspective assumes instead that the “someone else is you and someday is as soon as you can figure out how [basic science] benefits the [practical] problems you address” (DeLeon, 2011, p. 44).

Perhaps it should be said that, with laboratory-validated principles in hand, we are better prepared to change the world, not necessarily ready to do so. One reason is that basic scientists, whatever their successes, may not have asked the precise basic questions that best address phenomena in the field (e.g., Mace, 1994; Vollmer, 2011). For example, there are many applications of stimulus equivalence technology (e.g., Rehfeldt, 2011), all of which have strong roots in the basic laboratory. Yet interventions that employ stimulus equivalence are likely to bump into issues of contextual control, in which equivalence class membership is modulated by a contextual stimulus (e.g., Fienup & Critchfield, 2010), and only a handful of laboratory studies have examined contextual control of equivalence class membership. Collectively, they only scratch the surface of this complex and important problem. Much is known about stimulus equivalence, but what we know well is not always what is needed to change socially important behavior. Good laboratory research on contextual control over equivalence class formation would have both basic theoretical and applied implications.

Relation to Applied Behavior Analysis

What of the relation between use-inspired basic research and applied behavior analysis (ABA)? ABA has a long track record of using laboratory-derived behavior principles to fuel the development of effective interventions (DeLeon, 2011; Neuringer, 2011; Pilgrim, 2011; Vollmer, 2011). The accomplishments are sufficiently impressive for Branch (2011) to suggest that ABA is all that is needed to verify the wide applicability of our basic principles. From this perspective, the agenda of basic research can be restricted to pure basic studies, and no special need exists for translational efforts.

Branch's (2011) proposal is a variation on the “someone, someday” assumption above, and consequently it may overestimate the degree to which applied behavior analysts have sampled the fruits of basic research. For the sake of discussion, however, let us assume that Branch has it just right, that ABA can do all of the heavy lifting in extending behavior principles from the lab to the field. A remaining uncertainty (one that should preoccupy you, the laboratory researcher) is whether the successes of ABA have any bearing on tangible support for basic research. To illustrate, imagine a future in which the successes of applied research have become widely appreciated and richly rewarded. Imagine that applied behavior-analytic interventions are liberally funded and that persons with expertise in devising them are in hot demand. Can we expect EAB to reap any benefits from this bonanza?

Behavior analysis is currently experiencing a version of this “utopian” future. During recent years, the popularity of applied behavior-analytic interventions for problems related to autism has led to the creation of many new university training opportunities. As of this writing, for instance, the Web site of the Behavior Analyst Certification Board® listed more than 200 approved university course sequences, most of which did not exist a decade ago. Yet I am aware of only a small handful of cases in which this bonanza has created employment for a basic scientist. Overall grant funding for pure basic research appears to have decreased during this same period. Thus, many EAB investigators currently struggle to find employment and to sustain their laboratory research even as demand grows for the services that this research helps to inspire, which suggests that the successes of ABA alone cannot assure a healthy future for EAB. In launching a translational research program, you have resolved not to fully entrust to others the task of establishing the social relevance of your laboratory research.

SOME IMPLICATIONS OF “FINDING A DISEASE”

Travis Thompson has addressed the social relevance goals of research by advising young scientists to “find a disease” (quoted in Poling, 2010, p. 10), that is, to select a specific societal problem and structure a research program around it (see also DeLeon, 2011). Once you have chosen a problem of interest, you cannot be doctrinaire about deciding what is important to know. Most problems that are serious enough to demand your attention are complex and probably impinge on the expertise of people who are not behavior analysts. To devise credible research, you will need to master details that have little to do with behavior per se, or at least that have not been discussed in the language of behavior analysis.

Imagine that it is the 1970s and you are Pennypacker and colleagues, about to launch the program of research that eventually will spawn the MammaCare system for promoting effective manual breast examination (e.g., Pennypacker, 1986; Pennypacker et al., 1982). Your first task will be to understand what is known about breast cancer and about the detection efficacy of both manual breast examination and its alternatives (e.g., mammograms). You will need to understand the scope of the problem, including the extent to which existing approaches erroneously identify healthy tissue as problematic and fail to detect breast tumors. This information is not a behavioral principle, but it helps to frame the need for effective, behavior-based screening procedures. Similarly, the fact that certain kinds of tumors are more likely to be life threatening than others is not a behavioral principle, but knowing this helps to define what effective behavior-based diagnostics must accomplish.

Now certain that a problem exists, you may wish to devise better ways to teach tactile discrimination of breast tumors. Behavior analysts know a thing or two about discrimination learning (e.g., Dinsmoor, 1995), but promoting this learning requires tight control over stimulus features, something that is difficult to achieve when the stimuli are naturally occurring human breasts. Perhaps instead you can embed tumor-simulating nodules, with carefully chosen properties, in simulated breasts. To do this well, however, you must be certain that your models adequately mimic the tactile properties of breast tissue. The relative tensile strength of silicone gel versus breast tissue is not a behavioral principle, but you may need to conduct engineering studies to determine just this (e.g., Madden et al., 1978).

In short, in conducting translational research, you inevitably become something of a generalist (Poling & Edwards, 2011), no matter how much you value your primary identity as a behavior analyst. At the same time, in “finding a disease,” you must make it your own by confidently applying what you have been taught to do as a basic behavioral scientist. A person with good basic science training is likely to notice, and capitalize on, the features shared by practical problems and certain types of basic research. In the research and development that led to MammaCare, Pennypacker and colleagues assumed that the task of detecting the smallest possible tumors (those that could be removed easily, with the least risk of having metastasized) had features in common with psychophysical detection methods. They also understood that basic scientists had not asked the specific questions that they needed to have answered, such as how tactile detection thresholds varied as a function of the hardness, depth, and motility of an object embedded in a viscous field. Consequently, they conducted psychophysical studies to determine just this (Adams et al., 1976; Bloom, Criswell, Pennypacker, Catania, & Adams, 1982).

A critic might suggest that the aforementioned research reveals no new principles; it had been known for about a century that psychophysical detection becomes less likely as stimulus magnitude decreases (Kling & Riggs, 1971). But Pennypacker and colleagues made the basic science contribution of extending this principle to a new problem of perception. As many observers have noted, replication is the hallmark of good science, and confidence in basic principles grows with each new systematic replication (e.g., Sidman, 1960). Although Pennypacker and colleagues might not have seen basic science as their primary mission, they nevertheless contributed to it.

Historically, translation has been thought of most often as identifying the existing fruits of basic research and extending them to practical settings (the “basic to applied” view). The “find a disease” axiom, and the use-inspired basic research that it defines (Critchfield, 2011), may require a different approach in which apparent connections between laboratory-derived behavior and an applied problem suggest profitable avenues of laboratory investigation (Mace, 1994). A. C. Catania reports that this is part of how the groundwork was laid for MammaCare:

I had … spent some time at the Smithsonian Institution … before I visited the University of Florida to give a colloquium, and I'd seen an exhibit that involved the visitor pressing buttons for feedback in learning some discrimination (something botanical, I think). I found it interesting that they'd designed an exhibit that actually had the visitor doing something for which feedback could be arranged (that was unusual in a museum in those days). At a reception after my colloquium Hank [Pennypacker] and I and others got to talking about whether we as behavior analysts could come up with more significant discrimination tasks, and we soon arrived at breast examination. (Catania, personal communication, May 29, 2011)

Thus, the first steps taken were not field studies, but rather the laboratory investigations cited above.

Form a Team and Determine How to Contribute

Credible translational research has many influences, so be as broadly educated as time allows but do not attempt to master everything. You can't. Poling and Edwards (2011, Table 1) illustrate this point by listing the individuals who published in both the Journal of Applied Behavior Analysis (JABA) and the Journal of the Experimental Analysis of Behavior (JEAB) during a recent 5-year interval. That 38 people did so is impressive, but many of these individuals were secondary authors on a single paper in one of the journals. They likely would not have appeared on the list without participating in collaborative teams that involved both basic and applied expertise.

Collaborative teams harness more expertise than any individual can possibly embody, and this puts as many tools as possible to work on your research agenda (e.g., DeLeon, 2011). Note that, from its earliest stages, the research and development that led to MammaCare brought together individuals with varied forms of expertise, including a materials engineer and a practitioner of EAB (e.g., Adams et al., 1976; Madden et al., 1978). Seek out such collaborations, and then think carefully about your role on the team. You are likely to make your best translational contributions through studies that tap your expertise as a basic researcher.

Knowing one's limitations, of course, is just as important as showcasing one's strengths. Vollmer (2011) reminds us that there are many forms of translational research, and only some of these are directly compatible with the practice of EAB. For the most part, EAB is not involved in figuring out how best to develop, deliver, and disseminate interventions (more on this later). But under the right circumstances, circumstances that you can engineer, others may be eager to apply the insights that your laboratory research can provide:

I want you to know this: You don't have to do it yourself, but do tell me what you think should be done. … I want to adopt it to help those I serve and can perhaps do so more readily than you can. (DeLeon, 2011, pp. 44–45)

My own background is in the laboratory, and I have dabbled just enough in field research to discover that I possess an impressive capacity to muck things up (e.g., see Critchfield & Fienup, in press). To phrase the problem more positively, I have come to appreciate the nuanced expertise that applied researchers have for conducting applied research and practice. If field-work is needed, and most people would say it is (Pilgrim, 2011; Vollmer, 2011), let that effort be led by someone on your team who knows how.

A nice example of this approach is provided by a recently funded $1.7 million (U.S.) grant from the National Institute of Child Health and Human Development to a team of basic and applied investigators to study implications of behavioral momentum theory for interventions aimed at problem behaviors that produce reinforcers that cannot be eliminated by therapists. Tony Nevin describes the project as follows:

The first year involves basic pigeon work on an analog to problem behavior that has intrinsic consequences as well as identifiable external reinforcers (Tim Slocum at Utah State University), plus translational work with kids with intellectual/developmental disabilities to verify the basic findings (Bill Dube and Bill Ahearn at Shriver Center and New England Center for Children). Years 2 through 5 will involve more basic and translational work plus applications to clinical populations (Iser DeLeon at Kennedy Krieger Institute and Bud Mace at Nova University). My job is to coordinate the various projects, and perhaps model the data. (J. A. Nevin, personal communication, May 2, 2011)

In this project, as in all good collaborations, each investigator contributes according to his talents.

Let Your Questions Guide Your Methods

Beginning a translational research program with an interesting topic means selecting methods based on what best suits the topic (Neuringer, 2011; Poling & Edwards, 2011; Vollmer, 2011). This simple prescription, however, can run afoul of some cherished traditions in EAB. Animal research, for example, has been the mainstay of experimental behavior analysis, and it is hard to imagine a future for EAB that excludes animal studies. Yet human problems may demand an analysis of human behavior, for two reasons.

Reason 1 concerns the “marketing” of science. People who are not behavior analysts often are skeptical that animal research can provide a complete account of human behavior, and some, of course, doubt that animal studies can teach us anything meaningful about humans (e.g., see Critchfield, 2011). People who provide tangible support for basic research may have less in common with behavior analysts than with such skeptics. Moreover, the skeptics have a point that is hard to dismiss: Why shouldn't consumers ask that generality to humans be demonstrated rather than assumed? If consumers of behavioral research want to see effects in humans, then the investigators who seek their support probably should figure out how to conduct the research with humans (see Branch, 2011).

This leads us to Reason 2, which is that not all types of human behavior can be adequately modeled in nonhumans. Verbal behavior comes to mind, in part because of its obvious centrality to everyday life. Branch (2011) reminds us that basic behavioral research has only begun to scratch the surface of this important research domain.

How and why does … rule governance develop? Why are some people easily duped by verbal stimuli and others not? Why do so many reason illogically? Why don't people always do what they say they are going to do? What are the roles of so-called self-rules in governing behavior? (Branch, 2011, p. 21)

At the conjunction of Reasons 1 and 2 lies another reason to reject the assertion that generality of pure basic research can be demonstrated entirely through application (Branch, 2011). Each attempted intervention carries both a risk of doing harm (some well-intentioned interventions make things worse) and an opportunity cost (implementation consumes finite time, money, and the consumer's opportunity to experience alternative interventions; see Lilienfeld, 2002). Consequently, racing from animal lab to clinic can be reckless; there are good reasons, for instance, why new drugs are not brought to market strictly on the basis of safety and efficacy screening in animals (Ng, 2009). Translational studies, including those with human subjects, provide an intermediate step that bolsters confidence in the efficacy and safety of new types of interventions before they are employed in the everyday world.

The limitations of animal experiments illustrate that use-inspired basic research can require more diverse methods than traditional pure basic research (Neuringer, 2011; Poling & Edwards, 2011). Among the potential benefits of collaboration is the opportunity to profit from the methodological expertise of others. This expertise serves, in part, as a hedge against being constrained by methodological dogma. Choosing only those problems that can be easily studied using preferred methods yields an unambitious science of behavior (e.g., Vollmer, 2011).

Dogma, of course, is in the eye of the beholder. Scientists agree that there are better and worse methods but not always about which is which. When you employ methods that are not standard in your research community, you may open the door to investigation of new topics (e.g., Neuringer, 2011; Vollmer, 2011). At the same time, you may drive a wedge between yourself and members of your most familiar research community, who expect research to be constructed in familiar ways (e.g., Hake, 1982).

Match Your Efforts to the Intended Audience

Part of the solution is to remember that translational research should have multiple audiences. Behavior analysts should know of your efforts to address socially relevant problems, but if you have chosen your topic astutely, there is also a mainstream audience (including scientists who are not behavior analysts) that cares deeply about your topic. For your research program to have maximum impact, you must reach this audience through the journals that serve it. A wise scientist designs studies with publication outlets in mind. Don't design every translational study for behavior-analytic journals like JABA or JEAB, and understand that different audiences have different methodological expectations. Many mainstream journals, for example, require group-comparison experimental designs that are evaluated through inferential statistics. To be sure, behavior analysts may object to such methods (e.g., Branch, 2011), but at a practical level you may have to choose between methodological purity and showcasing your research where it will have the most impact.

The necessary translational task, then, is … to identify for ourselves and others the role played by the “facts in the bag” of our science when dealing with socially important human behavior. As a start, we must tell the story, set the context, and make the argument for connections. (Pilgrim, 2011, p. 40)

The introductory and concluding comments that frame our written and spoken descriptions of translational experiments are of critical importance. To make an obvious point, it may be helpful to remember what we, as translational investigators, had to learn in order to appreciate the broader implications of laboratory work. Sharing the same with an audience is both a common courtesy and a potential benefit to the investigator.

FACING DOWN FAILURE

Do Translational Data Really Persuade?

Skinner (1956) offered as an “unformalized principle of scientific practice” (p. 224) that investigators sometimes are lucky, that is, their efforts lead to unexpected insights. A corollary to this principle is that some research doesn't work out very well. There are generic reasons why some research ends up on the scrap heap, including that the investigator built a flawed experiment, and that a line of investigation that initially seemed profound turns out to be closer to trivial. In the former case, your training as a basic scientist should serve you well in designing good experiments. In the latter case, however, you face a substantial challenge in guessing which events occurring in the basic laboratory are important in the everyday world, and Pilgrim (2011) cautions against blind faith in the assumption that “we would know a practical benefit if we saw one” (pp. 37–38).

To develop a program of translational research you will need to rely, in part, on your layperson's instincts to decide what problems to try to solve. You will have to apply your behavior analysis skills to decide how to make sense of those problems. But you could guess wrong, and there exist no foolproof guidelines for how to make good guesses. Nobody knows “the proportion of translational research programs that might be expected to yield practical benefits” (Pilgrim, 2011, p. 38), but that proportion surely is much less than one, in part because not all laboratory models speak as clearly to everyday problems as their designer first imagines.

Even if you succeed brilliantly, Pilgrim (2011) expresses skepticism that society will regard your data as persuasive. To paraphrase her argument, translation is a data-driven exercise, and everyday people may not care much about data. To support this assertion, Pilgrim mentions two interventions (both built with close attention to behavioral principles) that society largely ignores despite the existence of considerable evidence of effectiveness. Direct instruction (DI) of reading was found in a large-scale field study to outperform many other types of instruction, but today it is used in very few schools in the United States (see Watkins, 1997). Instruction based on stimulus equivalence research (equivalence-based instruction, or EBI) is supported by many translational and applied studies dating back to the 1970s (Rehfeldt, 2011), and yet

Few scientists outside behavior analysis have adopted equivalence or other relational training methods for research or for practice, and we would be hard pressed to document society's appreciation for them, despite clear demonstration of potential practical benefits. … If [applied] evidence of clear benefits… is insufficient to secure public attention and support, one might question the premise that … laboratory models of interesting human phenomena are likely to do so. (Pilgrim, 2011, p. 39)

There is little point in conducting use-inspired basic research unless you can see your way clear of Pilgrim's (2011) conclusion. The issues involved are more complex than can be properly addressed here, but a brief digression can at least suggest an alternative way of interpreting the DI and EBI examples.

In the case of DI, Watkins (1997) presented a detailed analysis of the contingencies that govern traditional educational practice that might be thought to show that society is biased against behavioral interventions. But the available evidence is equally compatible with the conclusion that DI's designers and promoters failed to properly evaluate what it takes to achieve wide adoption of educational interventions. As Watkins noted, for instance, DI procedures do not mesh particularly well with traditional conceptions of how classrooms should be organized, staffed, and run. The fact that traditional classroom practices are not especially effective is, in a sense, beside the point. Curricular decisions are made by policy makers, administrators, and educators who are accustomed to traditional practices. Moreover, the values held by many teachers focus on the importance of a teacher's creativity and charisma, issues that seem not to be addressed by DI's highly structured curriculum. In short, when evaluating reading curricula, people in the educational establishment necessarily are driven by behavioral histories that tell them to find what fits easily into existing school routines and appeals to existing teachers.

In the case of EBI, Rehfeldt (2011) reviewed much of the published literature and concluded that it is long on potential practical benefits and short on actually saving the world. Largely absent from the EBI literature is research evidence of a sort that is likely to be persuasive to those who consider evidence in selecting educational interventions. For instance, Rehfeldt noted that very few studies have been conducted in true field settings or with typically developing learners. Instead, most studies have focused on special populations in highly controlled, laboratory-like settings. Also missing are persuasive research designs. Rehfeldt noted that most existing studies use A-B designs that are seen as weak by most observers and that demonstrate, at best, only that one type of intervention works better than doing nothing. Most existing studies use small numbers of subjects, which may not be persuasive in the current culture of evidence-based practice, which tends to favor randomized controlled trials that involve sizable groups (e.g., Stolberg, Norman, & Trop, 2004). Just as important, most of the relevant research has been published in behavioral journals, where only the converted will see it, and the reports have used terminology that even friendly readers tend to find difficult to understand. So a broad audience is unlikely to encounter accessible information about equivalence-based instruction. One cannot adopt something of which one is unaware.

To summarize, DI and EBI may have suffered in the public arena because of inadequate attention, on the part of researchers and intervention designers, to the social dynamics that govern the dissemination of interventions. In the case of DI, an intervention was devised without adequate attention to the customs and values that predominate in implementation settings. In the case of EBI, research has been conducted, but not in a form that is persuasive to traditional audiences, and published, but not in journals that traditional audiences read.

Failures to disseminate effective interventions are troubling, challenging, and impossible to ignore if we want a better world. They are, however, of little immediate bearing on the job of the laboratory investigator. Your proximal job in translation includes illuminating possibilities for improving the world, which you can do most readily by persuading the powers that be that laboratory research is potentially worthy of attention and investment. To do this, you will have to master the art of communicating with granting agencies, university departments that might hire you, and possibly private sector entities that are interested in research and development (and this may call for a very different repertoire than, say, persuading the nation's thousands of school superintendents to install DI in the public schools). If you succeed, you improve the standing of your laboratory research in a limited-resource environment, but developing and disseminating interventions falls outside your job description.

More distally, of course, Pilgrim (2011) is correct in suggesting that translational research that remains strictly promissory (does not spawn effective practical innovations) is only superficially translational. Once again, the importance of working in collaborative teams comes to the fore. These teams can be built to include not only expertise in developing and validating interventions but also the distinct skill set of dissemination (e.g., Pennypacker, 1986). Rogers (2004) (in my view, required reading for anyone with translational interests) provides a detailed accounting of the social dynamics that govern the adoption of innovations. You may not be personally responsible for dissemination, but if you are serious about saving the world, you'll acknowledge the importance of these dynamics and seek out applied colleagues who know how to negotiate them.

On the Consequences of Failure

So you have decided to undertake translational research, and you understand that you could fail to save the world. Branch (2011) thinks you probably will fail, and worries about what this will mean to our field. Because behavior analysis is a young science,

It is not very likely that doing research that is claimed to lay the groundwork for solutions to particular societal issues will actually result in such solutions. … And in the long run, promising what cannot be delivered might well be more damaging than being honest about the current state of knowledge and what is likely to be achieved by any particular research project or program. (p. 20)

Branch (2011) is correct about the need for circumspection in translational research. Naively constructed promises and experiments are bound to disappoint the investigator and his or her audiences, both lay and scientific. This underscores the importance of basic scientists becoming well informed about applied problems or functioning on teams in which such expertise is well represented (e.g., Critchfield, 2011; Mace & Critchfield, 2010; Poling, 2010). The more you know about the real-world referent of your research, the less likely you are to overstate the implications of your findings.

Branch (2011) does not believe that this is safeguard enough. He suggests that it is better to defer translational research and trust in the cumulative progress of well-conducted basic science. Someday we will know more, and then we will be able to more confidently tackle the everyday problems about which people care deeply.

Yet mistakes, dead ends, and uncertain progress are an integral part of science (every variety):

Other sciences began with attempts to solve societal problems, and successes and failures in those attempts helped to shape the science. … In place of application, researchers in EAB have emphasized orderliness of data as a criterion, but some orderly data are more likely than others to lead to … fruitful development of the science. (Neuringer, 2011, p. 28)

In the end, all science involves rolling the dice and being shaped by the outcomes. What you must decide is which gambles you wish to take.

To undertake translational research in the laboratory is to bet that your education and instincts will help you select good research questions; that methods can be devised with the capacity to answer potentially atypical questions; and that through collaborations with carefully chosen colleagues, you can help to extend your best findings beyond the laboratory. On a more personal level, of course, you're also gambling that the translational thrust of your research will help you to garner tangible support, and that by pursuing translational questions you can scratch the itch for theoretically important questions that presumably drew you into basic science in the first place.

To forgo translational research is a different kind of gamble. Making no translational promises plays into the widely held belief (see Branch, 2011; Neuringer, 2011; Poling, 2010) that behavioral science is irrelevant to the human condition. People with this preconception sometimes control the contingencies of academic hiring and research funding (Branch, 2011), so a nontranslational laboratory program is a gamble (some would suggest a bad one; Neuringer, 2011) that these people will instead value pure basic science for its own sake.

To forgo translational research is also, as Branch (2011) indicates, to place great trust in the cumulative progress of science. No reasonable student of science would portray this as a bad bet per se (time plus the scientific method yields progress!), but questions remain. How will we decide when enough is known to allow us to step cautiously away from a pure basic research agenda? Will waiting really protect us from future failed investigations and unfulfilled translational promises? Without lots of collective practice in posing (and hopefully answering) translational questions, how can we be sure that future basic scientists will be inclined to think beyond their theoretically driven laboratory programs, or to communicate their findings in ways that nonspecialists can understand?

In any event, young scientists like you may not have the luxury of granting unlimited time for behavioral science to mature. With society unenthusiastic about supporting EAB, one wonders how much longer a critical mass of EAB investigators will be able to remain at their benches. And then there is the matter of how much longer society will remain able to support anything. In staring down the barrel of numerous threats with “the potential to destroy our way of life, if not life itself” (Neuringer, 2011, p. 28), we may be forced to acknowledge that without a future for humankind, our science will have no opportunity to become more perfect. Today's translational science is an attempt, some might say a necessary attempt, to do the best we can with the fruits of the first seven or eight decades of EAB.

Acknowledgments

One of the nicest honors in science is to have capable people react thoughtfully to your work. I am grateful for comments on an article about translational basic science (Critchfield, 2011) by the following distinguished scholars: Marc Branch, Iser DeLeon, Alan Neuringer, Carol Pilgrim, Alan Poling and Timothy L. Edwards, and Timothy Vollmer. The purpose of the present article is to amplify, and in a few instances to contextualize, their insightful comments.

REFERENCES

1. Adams C.K, Hall D.C, Pennypacker H.S, Goldstein M.K, Hench L.L, Madden M.C, et al. Lump detection in simulated human breasts. Perception & Psychophysics. 1976;20:163–167.
2. Bloom H.S, Criswell E.L, Pennypacker H.S, Catania A.C, Adams C.K. Major stimulus dimensions determining detection of simulated breast lesions. Perception & Psychophysics. 1982;32:251–260. doi: 10.3758/bf03206229.
3. Branch M.N. Is translation the problem? Some reactions to Critchfield (2011). The Behavior Analyst. 2011;34:19–22. doi: 10.1007/BF03392228.
4. Critchfield T.S. Translational contributions of the experimental analysis of behavior. The Behavior Analyst. 2011;34:3–17. doi: 10.1007/BF03392227.
5. Critchfield T.S, Fienup D.F. A “happy hour” effect in translational stimulus relations research. Experimental Analysis of Human Behavior Bulletin. in press.
6. DeLeon I.G. The aesthetics of intervention in defense of the esoteric. The Behavior Analyst. 2011;34:41–45. doi: 10.1007/BF03392233.
7. Dinsmoor J.A. Stimulus control: Part I. The Behavior Analyst. 1995;18:51–68. doi: 10.1007/BF03392691.
8. Fienup D.F, Critchfield T.S. Efficiently establishing concepts of inferential statistics and hypothesis decision making using contextually controlled equivalence classes. Journal of Applied Behavior Analysis. 2010;43:437–462. doi: 10.1901/jaba.2010.43-437.
9. Hake D.F. The basic-applied continuum and the possible evolution of human operant social and verbal research. The Behavior Analyst. 1982;5:21–28. doi: 10.1007/BF03393137.
10. Kling J.W, Riggs L.A. Experimental psychology. New York: Holt, Rinehart, and Winston; 1971.
11. Lilienfeld S.O. The scientific review of mental health practice: Our raison d'etre. The Scientific Review of Mental Health Practice. 2002;1:1–9.
12. Madden M.C, Hench L.L, Hall D.C, Adams C.K, Goldstein M.K, Pennypacker H.S, et al. Development of a model human breast with tumors for use in teaching breast examination. Journal of Bioengineering. 1978;2:427–435.
13. Mace F.C. Basic research needed for stimulating the development of behavioral technologies. Journal of the Experimental Analysis of Behavior. 1994;61:529–550. doi: 10.1901/jeab.1994.61-529.
14. Mace F.C, Critchfield T.S. Translational research in behavior analysis: Historical traditions and imperative for the future. Journal of the Experimental Analysis of Behavior. 2010;93:293–312. doi: 10.1901/jeab.2010.93-293.
15. Neuringer A. Reach out. The Behavior Analyst. 2011;34:27–29. doi: 10.1007/BF03392230.
16. Ng R. Drugs: From discovery to approval. Hoboken, NJ: Wiley-Blackwell; 2009.
17. Pennypacker H.S. Technology transfer: The challenge of buying in without selling out. The Behavior Analyst. 1986;9:147–156. doi: 10.1007/BF03391940.
18. Pennypacker H.S, Bloom H.S, Criswell E.L, Neelakantan P, Goldstein M.K, Stein G.H. Toward an effective technology of instruction in breast self-examination. International Journal of Mental Health. 1982;11:98–116.
19. Pilgrim C. Translational behavior analysis and practical benefits. The Behavior Analyst. 2011;34:37–40. doi: 10.1007/BF03392232.
20. Poling A. Looking to the future: Will behavior analysis survive and prosper? The Behavior Analyst. 2010;33:6–17. doi: 10.1007/BF03392200.
21. Poling A, Edwards T.L. Translational research: It's not 1960s behavior analysis. The Behavior Analyst. 2011;34:23–26. doi: 10.1007/BF03392229.
22. Rehfeldt R.A. Toward a technology of derived stimulus relations: An analysis of articles published in JABA, 1992–2009. Journal of Applied Behavior Analysis. 2011;44:109–119. doi: 10.1901/jaba.2011.44-109.
23. Rogers E.M. Diffusion of innovations (5th ed.). New York: Free Press; 2004.
24. Sidman M. Tactics of scientific research. Oxford, UK: Basic Books; 1960.
25. Skinner B.F. Science and human behavior. New York: Free Press; 1953.
26. Skinner B.F. A case history in scientific method. American Psychologist. 1956;11:221–233.
27. Stokes D.E. Pasteur's quadrant: Basic science and technological innovation. Washington, DC: Brookings Institution Press; 1997.
28. Stolberg H, Norman G, Trop I. Randomized controlled trials. American Journal of Roentgenology. 2004;183:1539–1544. doi: 10.2214/ajr.183.6.01831539.
29. Vollmer T.R. Three variations of translational research: Comments on Critchfield (2011). The Behavior Analyst. 2011;34:31–35. doi: 10.1007/BF03392231.
30. Wacker D.P. Behavior analysis research in JABA: A need for studies that bridge basic and applied research. Experimental Analysis of Human Behavior Bulletin. 1996;14:11–14.
31. Wacker D.P. Building a bridge between research in experimental and applied behavior analysis. In: Leslie J.C, Blackman D, editors. Experimental and applied analysis of human behavior. Reno, NV: Context Press; 2000. pp. 205–212.
32. Watkins C.L. Project Follow Through: A case study of contingencies influencing instructional practices and the educational establishment. Cambridge, MA: Cambridge Center for Behavioral Studies; 1997.
