Behavior Analysis in Practice. 2015 Jul 30;8(2):136–137. doi: 10.1007/s40617-015-0076-x

Metrics of Quality in Graduate Training

Brian A. Iwata
PMCID: PMC5048276  PMID: 27703902

Abstract

This commentary, in response to Dixon et al. (Behavior Analysis in Practice, 8, 7–15, 2015), describes difficulties in defining metrics of quality in graduate training for different audiences (types of applicants). Outcome measures are preferred whenever possible, supplemented by subjective but frequently used opinion surveys.

Keywords: Graduate training, Outcome measures

Dixon et al. (2015) raised interesting questions about the quality of graduate training programs in behavior analysis. They suggested that one metric of quality is faculty research productivity and provided extensive data on publication output for a sample of programs, faculty members, and journals as “a starting point for a disciplinary conversation about relative program quality on this dimension” (p. 9). Although one might take exception to the authors’ sampling or search methods, to the data themselves (number of publications rather than frequency or citations), or even to the premise that publications are a determinant of program quality, their procedures were well defined, and faculty research productivity has long been recognized as a key component of program rankings and a predictor of student research activity following graduation (Buchmueller et al. 1999; Clemente and Sturgis 1974; Hogan 1981; Maher 1999). Thus, rather than commenting on the specific metric selected by Dixon et al., I will focus on the general topic of training quality, which seemed to be the authors’ overarching theme.

One problem in attempting to define quality is that the target audience is heterogeneous—applicants to graduate school have different training goals. Those interested solely in practice may seek a master’s degree toward certification as a behavior analyst. An adequate knowledge base and technical competence in application are essential to these students. Although familiarity with our research literature forms the core of both, master’s level practitioners are unlikely to actually conduct research for a variety of reasons. As a result, it is doubtful that these applicants’ choices are greatly affected by the research reputation of faculty members. Other applicants, unsure of their long-term goals but contemplating the possibility of a research career, seek master’s programs that include both practical and research training. These students need opportunities not only to observe research in progress but also to conduct it. Finally, applicants interested in academic careers must acquire demonstrated research competence. Thus, no single characteristic of any graduate program is very informative to these varied groups.

If the goal of disseminating comparative data about graduate programs is to assist applicants in making decisions, a second problem is that many applicants may be most concerned with matters such as acceptance and retention ratios, time to graduation, and financial support. Although all of these characteristics are informative, none is highly correlated with program quality.

To the extent that quality metrics actually influence decisions about graduate school selection, it would be helpful to know what they are. Perhaps one could begin with a survey of current or recent graduates of our behavior analysis programs aimed specifically at program quality, the results of which could be categorized based on applicant goals or types of programs. In the absence of any such data, I suggest that quality metrics be tied closely to outcome indicators such as the following. Students seeking practitioner training might find helpful data on Behavior Analyst Certification Board (BACB) examination pass rates for graduate programs and on initial job placements. Those seeking pre-PhD training at the master’s level would be interested in additional quality indicators, such as the proportion of graduates who gain entry into PhD programs, the proportion of master’s theses that are published, and faculty research productivity. Finally, applicants to PhD programs should find relevant not only faculty but also student research productivity, the proportion of students entering academic positions, actual job placements, and early-career achievements.

These types of data are not readily available and often are dependent on accurate self-reporting that may be unflattering. Perhaps as a result, many common methods for ranking academic programs, including that used by the National Research Council (Ostriker et al. 2011), rely heavily on public sources of data, such as faculty publications, and on “expert faculty” surveys. Although highly subjective and potentially biased, these surveys nevertheless provide a “composite” view of program quality. Ironically, subjective views of this type offered by academic advisors may heavily influence an applicant’s selection of graduate programs, so why not organize them in some sort of uniform manner?

In summary, I admire Dixon et al. (2015) for examining an important topic. Although their analysis of program quality was restricted to one dimension, faculty research productivity, it is a universally recognized quality metric that can be measured in a variety of ways. Attempts to address the larger issue might benefit from consideration of varied training goals, the characteristics of graduate training programs that most influence applicant decisions (some of which are unrelated to quality metrics), objective outcome measures indicative of student success, and—yes—surveys of those whose advice is sought about graduate training. Information about all of these aspects of graduate training may be helpful but still will leave a gap: the extent to which a given program represents a “best fit” between an applicant’s training goals and the type of experience offered. Presumably, that decision can be made from varied sources of data that currently are difficult to find, but the field could help a great deal by extending the analysis begun by Dixon et al. and, more important, by taking action.

Footnotes

Brian Iwata is a distinguished professor at the University of Florida, where he serves as coordinator of the behavior analysis training program.

References

  1. Buchmueller TC, Dominitz J, Hansen WL. Graduate training and the early career productivity of Ph.D. economists. Economics of Education Review. 1999;18:65–77. doi: 10.1016/S0272-7757(98)00019-3.
  2. Clemente F, Sturgis R. Quality of departments of doctoral training and research productivity. Sociology of Education. 1974;47:287–299. doi: 10.2307/2112109.
  3. Dixon MR, Reed DD, Smith T, Belisle J, Jackson RE. Research rankings of behavior analysis graduate programs and their faculty. Behavior Analysis in Practice. 2015;8:7–15. doi: 10.1007/s40617-015-0057-0.
  4. Hogan TD. Faculty research activity and the quality of graduate training. Journal of Human Resources. 1981;16(3):400–415. doi: 10.2307/145628.
  5. Maher B. Changing trends in doctoral training programs in psychology: a comparative analysis of research-oriented versus professional-applied programs. Psychological Science. 1999;10:475–481. doi: 10.1111/1467-9280.00192.
  6. Ostriker JP, Holland PW, Kuh CV, Voytuk JA, editors. A data-based assessment of research-doctorate programs in the United States. Washington, DC: The National Academies Press; 2011.
