Abstract
Dixon et al. (Behavior Analysis in Practice 8:7–15, 2015) argued that the research productivity of behavior analytic graduate programs may be a reasonable criterion to evaluate training program quality. They reviewed the cumulative publications of graduate programs. From this analysis, they generated a top ten list of graduate programs with the greatest number of faculty publications and, because of the number of these publications, inferred that they may be better training programs than those not on the list. We countered that the quality of graduate training programs is evident in the behavior of those who are trained, and thus, our field’s interest should focus on determining the degree to which individual program graduates—and not their faculty—have mastered the research process. Thus, we proposed including student authors’ work as an alternative to Dixon et al.’s analysis.
Dixon and colleagues argued that the research productivity of behavior analytic graduate programs, and of the faculty who work in them, may be a reasonable criterion of training program quality (Dixon et al. 2015). They reviewed the cumulative publications of graduate programs that were approved by the Behavior Analyst Certification Board (BACB) and accredited by the Association for Behavior Analysis International (ABAI). From this analysis, they generated a top ten list of graduate programs with the greatest number of faculty publications and inferred that these programs may be better training programs than those not on the list.
This endeavor is not without merit. It is reasonable to assume that faculty with experience in conducting and publishing behavior analytic research have the capacity to instruct students in these skills. But this is only a starting point. If Dixon and colleagues wish to make the case that faculty research productivity indexes training quality, then they need to show that a program’s faculty research productivity is positively correlated with other measures of training program quality. Unfortunately, the authors did not address this possible correlation, instead citing only metrics that they regard as insufficiently informative about training program quality, including BACB approval of course sequences and ABAI program accreditation.
Dixon and colleagues also mentioned the BACB’s list of examination pass rates by program graduates but did not regard this as a measure of program quality. While it is possible to debate precisely what the BACB certification examination measures, it is, by default, the only existing standardized measure of competency in applied behavior analysis and thus deserves at least passing attention in the present discussion. Presumably, graduate programs that succeed at training applied behavior analysts will produce high pass rates. That is, program-faculty research productivity should correlate with pass rates on the BACB exam. Yet this prediction is difficult to test because only five of the ten institutions on the “top 10” productivity list produced by Dixon and colleagues appeared on the BACB’s list of pass rates for 2013 (“2013-BCBA examination pass rates,” 2014). There are at least two reasons why a program would not appear on the BACB’s pass rate list (“2013-BCBA examination pass rates,” 2014): The program may not have been in existence for four or more years, or fewer than six of its graduates may have been exam candidates in the year reported (2013). In either case, the program would have produced too few graduates to test the predicted correlation. The remaining five programs on the “top 10” research productivity list all had certification exam pass rates higher than the national average. This result appears to provide anecdotal support for the position taken by Dixon and colleagues until one considers that in 2013 the BACB reported that 31 programs had pass rates above the national average. Thus, 26 training programs with research productivity lower than that of the programs on the “top 10” list also had pass rates above the national average. To the extent that BACB pass rates are informative, the available data would appear to argue against the authors’ position.
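To make the predicted relation concrete, the minimal sketch below (ours, not Dixon et al.’s or the BACB’s) shows how the correlation between faculty publication counts and BCBA exam pass rates could be examined if complete program-level data were available. The publication counts, pass rates, and the choice of a Spearman rank correlation are illustrative assumptions only; nothing in our argument depends on this particular test.

```python
# Minimal sketch (hypothetical data, not drawn from Dixon et al. or the BACB):
# how the predicted relation between faculty research productivity and
# program pass rates could be tested if complete program-level data existed.
from scipy.stats import spearmanr

# Hypothetical faculty publication counts for ten programs
faculty_publications = [42, 35, 31, 28, 25, 22, 19, 15, 12, 9]

# Hypothetical first-time BCBA exam pass rates (%) for the same ten programs
pass_rates = [78, 91, 65, 88, 72, 95, 70, 84, 90, 68]

# A rank-based statistic is a reasonable default because publication counts
# are highly skewed; a reliably positive rho would support the claim that
# more research-productive faculties produce graduates with higher pass rates.
rho, p_value = spearmanr(faculty_publications, pass_rates)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```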
Among many possible explanations for this lack of correlation is a levels-of-analysis problem: The success of training programs ultimately is reflected in the behavior of program graduates, whereas Dixon and colleagues focused on the behavior of program faculty. In evaluating the impact of research training on practitioner competency, we suggest that a better metric is student research productivity. Measuring student research productivity is, however, trickier than measuring faculty productivity. At large, research-focused institutions, faculty and student research productivity are largely conflated: Faculty who operate under research productivity contingencies, and who enjoy appropriate research-focused resources and time allocation, may publish frequently with student co-authors. In such cases, counting faculty publications may well estimate student research engagement. By contrast, we are well aware that at smaller institutions like ours, the contingencies of faculty survival tend to reward teaching productivity, and time and resources may be in short supply to support the laborious process of writing formal research reports and negotiating peer review. Nevertheless, as part of their teaching mission, many smaller institutions require students to conduct research and to present it at local, state, and national conferences; indeed, most programs require students to conduct research (e.g., a master’s thesis) as a component of training. Because of constraints on small-institution faculty, however, much of this research is not published and thus is not represented in the counts conducted by Dixon and colleagues. Yet if these efforts conform to accepted standards for the research process, shouldn’t they be considered in an analysis of program research climate? Research is a process, not a product, as Sidman (2011) eloquently noted:
Every experiment, whether carried out within or outside the laboratory, has the potential to generate the thrill of discovery, the personal satisfaction of knowing that one has produced knowledge that nobody has ever seen before, knowledge that may lead others to modify the way they approach problems that they are trying to solve. For me, that is the bottom line of successful research. When experimental data bring about changes in the behavior of others—researchers, practitioners, and sometimes, even the nonprofessional public—then the research has been successful. I wish that all new students of behavior analysis experience that kind of personal fulfillment while they are in the process of learning how to practice their profession. Whenever and wherever you do it, conducting your own research will give you a whole new slant on behavior analysis (p. 976).
In summary, we support the search for objective measures of quality for graduate training programs in applied behavior analysis, and we agree with Dixon and colleagues (as well as Sidman 2011) that program graduates who are skilled in research will probably be better clinicians. The quality of graduate training programs, however, is evident in the behavior of those who are trained, and thus, our field’s interest should focus on determining the degree to which individual program graduates—and not their faculty—have mastered the research process.
References
- 2013-BCBA examination pass rates for approved course sequences. (2014). Retrieved from http://www.bacb.com/Downloadfiles/PassRates/BCBA_ACS_pass_rates_percent.pdf
- Dixon, M. R., Reed, D. D., Smith, T., Belisle, J., & Jackson, R. E. (2015). Research rankings of behavior analytic graduate training programs and their faculty. Behavior Analysis in Practice, 8(1), 7–15. doi:10.1007/s40617-015-0057-0
- Sidman, M. (2011). Can an understanding of basic research facilitate the effectiveness of practitioners? Reflections and personal perspectives. Journal of Applied Behavior Analysis, 44(4), 973–991. doi:10.1901/jaba.2011.44-973