Behavior Analysis in Practice. 2015 Apr 23;8(1):3–6. doi: 10.1007/s40617-015-0049-0

What Counts as High-Quality Practitioner Training in Applied Behavior Analysis?

Thomas S. Critchfield
PMCID: PMC5048252  PMID: 27703875

Abstract

Dixon and colleagues (this issue), who support faculty research productivity as one measure of quality for graduate training programs in applied behavior analysis, show that the faculty members of many programs have limited research track records. I provide some context for their findings by discussing some of the many unanswered questions about the role of research training for ABA practitioners.

Keywords: Scientist-practitioner, Translation, Graduate program evaluation


Professionals have long sought to prescribe the experiences that novices must complete in order to participate in the profession. This tradition traces back at least to ancient Egypt, where some of the first recorded craft guilds arose based on an apprenticeship model (Brentano 1969). Using specialized training as a mechanism of professional quality control thus is a matter of long-standing custom that few readers will view as controversial.

It is another thing entirely, however, to try to specify the type of training that is (or should be) foundational to a given profession, and such efforts can illuminate lines of disciplinary fracture. In particular, there is a long history of friction between sub-communities of scientists and practitioners within various disciplines (e.g., Critchfield 2011a), and these groups may have very different ideas about what counts as foundational knowledge. In nineteenth-century Britain, for example, the precursors of today’s fundamental chemists and chemical engineers hotly debated the proper academic curriculum for those whose work focused on chemical processes (Bud and Roberts 1984). A similar debate, while hardly new in applied behavior analysis (ABA), has been brought into sharp focus by the rapid growth of certification in ABA (e.g., Shook 1993) and public demand for professionals with this credential.

The theme of the debate is as follows: Creating knowledge through scientific investigation requires a distinct skill set. So too does applying knowledge through practical interventions. Intelligent people simply disagree about the relative importance of these skills in professional training. For example, some behavior analysts believe that basic researchers need learn only about basic science (leaving to others the process of translation, in which scientific knowledge is applied to advance clinical practice). Others assert that basic science often is better served by considering which fundamental processes operate most profoundly in everyday circumstances (e.g., Critchfield 2011b). More pertinent to the present discussion, some behavior analysts believe that practitioners need to master only a limited set of empirically vetted techniques in order to create meaningful changes in the everyday world. Others assert that practitioners should know about science because it is a major driver of clinical innovation (e.g., see Dixon et al. 2015).

Onto this proverbial hornet’s nest step Dixon and colleagues (Dixon et al. 2015, this issue) with an assessment of the research climate in ABA graduate programs. Dixon and colleagues advance two arguments. First, and more generally, they assert that existing mechanisms of external graduate program review, in particular the approval of course sequences by the Behavior Analyst Certification Board® (BACB), may provide insufficient guidance to consumers who wish to distinguish among exemplary, adequate, and possibly inadequate programs. Second, and more specifically, they maintain that as a scientist-practitioner enterprise ABA depends on research to provide new ideas and on research-savvy practitioners to translate these ideas into practical applications. According to this logic, future practitioners require exposure to research and, thus, assessments of research output by program faculty may serve as one means by which consumers can distinguish among the many existing ABA training programs.

The findings of Dixon and colleagues bear close inspection because they paint a rather uninspiring picture of the research culture in many ABA training programs. In evaluating this outcome, it is tempting to quibble about levels of analysis. For example, Dixon and colleagues focused on career-total counts of publications, which reflect scholarly productivity, rather than, say, citation impact, which is thought to reflect scholarly influence. Even if productivity is the correct emphasis, however, no single measure of it tells all. Consider that cumulative article counts conflate productivity with time spent in the field (for those who do research, publication counts tend to increase across years). Because many ABA graduate programs were established fairly recently, some of their faculty may be relative newcomers whose low publication counts speak more to youth than to a lack of engagement with research. For such individuals, a more informative measure might be publication rate (e.g., articles published per year).
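To make the distinction concrete, the following minimal sketch (in Python, using entirely hypothetical faculty records rather than any data from Dixon and colleagues) shows how a career-total publication count and a per-year publication rate can rank the same individuals differently:

```python
from dataclasses import dataclass

@dataclass
class FacultyRecord:
    """Hypothetical publication record for one faculty member."""
    name: str
    total_publications: int   # career-total article count
    years_in_field: int       # years since first appointment (illustrative)

    @property
    def publication_rate(self) -> float:
        """Articles per year: a measure less confounded with career length."""
        return self.total_publications / max(self.years_in_field, 1)

# Entirely hypothetical examples: a senior faculty member and a recent hire.
faculty = [
    FacultyRecord("Senior member", total_publications=40, years_in_field=25),
    FacultyRecord("Recent hire", total_publications=12, years_in_field=5),
]

# Ranked by cumulative count, the senior member appears more productive ...
by_count = sorted(faculty, key=lambda f: f.total_publications, reverse=True)

# ... but ranked by rate, the recent hire publishes more per year (2.4 vs. 1.6).
by_rate = sorted(faculty, key=lambda f: f.publication_rate, reverse=True)

for record in by_rate:
    print(f"{record.name}: {record.total_publications} articles total, "
          f"{record.publication_rate:.1f} per year")
```

Neither metric settles the question on its own; the point is only that the choice of measure can change how young programs and their faculty appear.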

Debating the best way to measure research productivity has become a popular sport in academia (e.g., Nazaroff 2005; Radicchi and Castellano 2011; Wootton 2013), but readers of the Dixon and colleagues article who focus exclusively on this issue will risk missing a crucial and obvious point: No measure of research productivity can transform a person who has not published into one who has. Regardless of how one slices the data, therefore, it appears that students who enter an ABA graduate program may be mentored by individuals with little to no track record of publishing scholarly research. This conclusion is straightforward and unambiguous.

Genuine Ambiguities

Far less definitive are the potential implications of this finding, entangled as they are with a host of mostly unanswered questions that our field ought to be discussing more systematically than seems to be the case. For starters, on an objective basis, just how important is research training to practitioners? Dixon and colleagues appear to take as a given that the answer is “very,” but if they are wrong their findings have little bearing on the process of identifying high-quality practitioner training programs. Many opinions have been expressed about the role of research experience in practitioner training, but presumably our field can do more than simply opine. Studies can be conducted on how research training influences ABA practitioner performance, but unfortunately such investigations currently are in short supply.

Also typically missing from discussions about ABA training is reference to the experiences of other fields with concerns similar to ours. For example, since approximately the 1960s, many medical care responsibilities have been transferred from physicians, who possess advanced academic credentials, to nurse practitioners and physician assistants, who receive considerably less training (Jones 2007). How much do these individuals learn about the process of conducting research? How often are they taught by individuals with active research programs? Do these factors affect their performance in practice settings? It seems only reasonable that our field should learn from the successes and failures of other disciplines.

When we suggest that research training is valuable to practitioners, exactly what benefits do we expect it to promote? As Dixon and colleagues indicate, possibilities include conducting independent research, critically consuming research literature with an eye toward deriving clinical insights, and applying critical thinking and data-based decision making to practical problems. These are all valid goals, but the experiences of other fields suggest that at least some of them may be unrealistic, in part because contingencies of survival in practice settings tend not to maintain research repertoires developed during graduate training (e.g., Critchfield 2011a; Parker and Detterman 1988). Here, the field of clinical psychology provides a cautionary tale that behavior analysts should not ignore. In 1949, at a conference in Boulder, CO, leaders of that field laid out a training model in which practitioners would be steeped in the process of science, which in turn would allow them to critically evaluate others’ research for translational insights and to conduct their own clinical research. The resulting “Boulder Model” of scientist-practitioner graduate training greatly influenced the American Psychological Association’s standards for accreditation of clinical doctoral programs (Barlow et al. 1984; Cautin and Baker 2014). Yet after decades of this form of accountability, many contemporary observers regard the Boulder Model as an unrealized ideal because most practicing clinicians do not conduct independent research (e.g., Parker and Detterman 1988; Zachar and Leong 2000) and large numbers of clinicians endorse practices that research does not support (e.g., Patihis et al. 2014). If other fields have struggled to establish durable science-related repertoires through doctoral training, what are the odds that this can be accomplished in ABA at the master’s level, where most graduate training is concentrated? The answer may depend in part on the specific professional repertoires under discussion, but we currently know little about which repertoires are affected by which graduate training experiences.

Assuming that practitioners really need to know about research, to what extent does faculty productivity influence the quality of student research training? For example, how much faculty research output is “enough” to assure that students receive a valid training experience? I am aware of no objective standards for this, although common sense applies to some degree. For example, a faculty member who conducts no research may find it difficult to provide students with credible research experiences. But is there any reason to distinguish between faculty members who produce research at modest versus exceptional paces? A guiding assumption of faculty productivity rankings would appear to be that more publishing always is better, although Dixon and colleagues responsibly acknowledge that other relationships are possible—for instance, perhaps the most productive researchers are too busy with research to invest much in student development. It remains unclear, therefore, precisely how faculty research productivity ought to be applied as a metric of graduate program quality.

Even if practitioner research training matters, it is unclear whether this training is by itself sufficient to promote the clinical innovation that we may hope practitioners will exhibit. As generations of basic researchers have demonstrated, simply conducting research does not assure that research findings will be connected with relevant practical problems (e.g., Critchfield 2011b; Mace and Critchfield 2010; Poling 2010). It is possible, therefore, that deriving research-based clinical insights may require three separately acquired repertoires rather than two: one concerning science, one concerning practice, and one concerning the process of deriving science-practice connections (e.g., Dixon et al. 2015; Critchfield and Reed 2005). How should graduate training be structured to develop this third, translational repertoire?

Perhaps most importantly, if faculty research productivity matters in practitioner training, and if many faculty members conduct little research, what is to be done about this? As the old saying goes, the first step toward change is admitting that one has a problem, and the data of Dixon and colleagues indeed suggest that ABA may have a problem. Yet the data only define this problem. They do not indicate why it occurs or how it should be solved, and simple solutions are unlikely to be forthcoming.

One possibility is that external agencies like the BACB and the Association for Behavior Analysis International (ABAI) can hold training programs to a higher standard by linking their endorsements to research productivity. The plausibility of this making any difference depends on how much institutions value external endorsement in the first place, and what resources are required to obtain it (e.g., research requires time, money, and other forms of support that not all institutions offering ABA graduate training may be willing or able to provide). It remains unclear, therefore, just how much leverage agencies like the BACB and ABAI actually have in promoting research productivity, although I suspect Dixon and colleagues would argue that we will never know for sure until research productivity becomes a more central part of the external endorsement process.

A Continuing Conversation

As should now be evident, the report by Dixon and colleagues raises more questions than it answers, and for this reason, peer reviewers disagreed about its suitability for publication. As action editor, I chose to accept the article because of its potential to stimulate a thoughtful discussion about what we, as a discipline, expect from external program-review mechanisms, particularly with respect to the role of science training for practitioners. If the published article seems light on discussion of implications, that is because during the revision process, I counseled Dixon and colleagues to present their data with minimal commentary in order to allow readers to draw independent conclusions.

Some of those reader reactions, in the form of invited expert commentaries, will be published in a future issue of Behavior Analysis in Practice; Dixon and colleagues will also have the opportunity to respond. In the meantime, I encourage interested stakeholders to use organizational publications, professional meetings, and discussions with each other and with representatives of evaluating agencies to take up the important conversation about what constitutes high-quality training in ABA. Most particularly, I encourage readers to follow the lead of Dixon and colleagues by submitting to Behavior Analysis in Practice their own manuscripts that provide a scholarly appraisal of issues relevant to the training of ABA practitioners.

References

  1. Barlow DH, Hayes SC, Nelson RO. The scientist practitioner: research and accountability in clinical and educational settings. Oxford: Pergamon; 1984.
  2. Brentano L. On the history and development of guilds and the origin of trade-unions. New York: Burt Franklin; 1969.
  3. Bud R, Roberts GK. Science versus practice: chemistry in Victorian Britain. Manchester: Manchester University Press; 1984.
  4. Cautin RL, Baker DB. A history of education and training in professional psychology. In: Johnson WB, Kaslow NJ, editors. The Oxford handbook of education and training in professional psychology. New York: Oxford University Press; 2014. pp. 17–32.
  5. Critchfield TS. Interesting times: practice, science, and professional associations in behavior analysis. The Behavior Analyst. 2011a;34:297–310. doi: 10.1007/BF03392259.
  6. Critchfield TS. Translational contributions of the experimental analysis of behavior. The Behavior Analyst. 2011b;34.
  7. Critchfield TS, Reed DD. Conduits of translation in behavior-science bridge research. In: Burgos JE, Ribes E, editors. Theory, basic and applied research, and technological applications in behavior science: conceptual and methodological issues. Guadalajara: University of Guadalajara Press; 2005. pp. 45–84.
  8. Dixon MR, Reed DD, Smith T, Belisle J, Jackson RE. Research rankings of behavior analytic graduate training programs and their faculty. Behavior Analysis in Practice. 2015 (in press).
  9. Jones EP. Physician assistant education in the United States. Academic Medicine. 2007;82:882–887. doi: 10.1097/ACM.0b013e31812f7c0c.
  10. Mace FC, Critchfield TS. Translational research in behavior analysis: historical traditions and imperative for the future. Journal of the Experimental Analysis of Behavior. 2010;93:293–312. doi: 10.1901/jeab.2010.93-293.
  11. Nazaroff WW. Measuring research productivity. Indoor Air. 2005;15:382. doi: 10.1111/j.1600-0668.2005.00403.x.
  12. Parker LE, Detterman DK. The balance between clinical and research interests among Boulder Model graduate students. Professional Psychology: Research and Practice. 1988;19:342–344. doi: 10.1037/0735-7028.19.3.342.
  13. Patihis L, Ho LY, Tingen IW, Lilienfeld SO, Loftus EF. Are the “memory wars” over? A scientist-practitioner gap in beliefs about repressed memory. Psychological Science. 2014;25:519–530. doi: 10.1177/0956797613510718.
  14. Poling A. Looking to the future: will behavior analysis survive and prosper? The Behavior Analyst. 2010;33:7–17. doi: 10.1007/BF03392200.
  15. Radicchi F, Castellano C. Rescaling citations of publications in physics. Physical Review E. 2011;83(4):046116. doi: 10.1103/PhysRevE.83.046116.
  16. Shook GL. The professional credential in behavior analysis. The Behavior Analyst. 1993;16:87–101. doi: 10.1007/BF03392614.
  17. Wootton R. A simple, generalizable method for measuring individual research productivity and its use in the long-term analysis of departmental performance, including between-country comparisons. Health Research Policy and Systems. 2013;11(2). doi: 10.1186/1478-4505-11-2.
  18. Zachar P, Leong FTL. A 10-year longitudinal study of scientist and practitioner interests in psychology: assessing the Boulder model. Professional Psychology: Research and Practice. 2000;31:575–580. doi: 10.1037/0735-7028.31.5.575.
