NIHPA Author Manuscript; available in PMC: 2023 Nov 1.
Published in final edited form as: Train Educ Prof Psychol. 2021 May 6;16(4):394–402. doi: 10.1037/tep0000371

Predictors of Dissertation Publication in Clinical and Counseling Psychology

Robyn S Herbert 1, Spencer C Evans 2,*, Jessy Guler 3, Michael C Roberts 3
PMCID: PMC9635593  NIHMSID: NIHMS1738362  PMID: 36337764

Abstract

A doctoral dissertation constitutes a student’s original research and a novel contribution to scientific knowledge. Yet, few psychology dissertations, particularly in professional subfields, are published in the peer-reviewed literature, and the reasons for this are unclear. The present study investigated student, advisor, and doctoral program variables that might predict dissertation publication in professional psychology. Using a stratified random cohort sample of 169 Ph.D. dissertations in clinical and counseling psychology, we conducted exhaustive searches to determine whether dissertation studies were published in peer-reviewed journals within 0–7 years following their completion. Logistic regression models were estimated to test whether dissertation publication was predicted by student and advisor prior research productivity, dissertation length, and doctoral program’s training emphasis, accreditation status, and subfield. Results indicated that dissertations that were supervised by more research-productive advisors and that were relatively brief (<180 pages) were significantly more likely to be published in peer-reviewed journals. No other predictors were significant. Results are discussed with regard to implications for training and mentorship. Faculty advisors who publish frequently might be more likely to attract research-oriented students, to mentor students in preparing a publishable dissertation, and/or to encourage students to publish their dissertation research. By systematically promoting research dissemination as part of doctoral research training, graduate programs and faculty mentors in clinical and counseling psychology could help facilitate students’ sharing their dissertation findings with the scientific community.

Keywords: dissertation, publication, research, productivity, education and training


A dissertation is a significant piece of scholarship, completed toward the end of one’s doctoral training, in which a graduate student conducts original research and then presents and defends their work to a faculty review committee (Burkard et al., 2014). The committee’s approval of the dissertation indicates that the document meets scientific standards, a form of scholarly peer review (Bell et al., 2019). Some dissertations are later revised and submitted to scholarly journals, subject to further peer review before publication. However, most dissertations are not published (Evans et al., 2018), and the reasons for this are unclear. The present study investigates predictors of dissertation publication in clinical and counseling psychology.

By definition, a dissertation should be research that is submittable for peer-reviewed publication. Indeed, many psychology doctoral programs consider dissertations as serving dual purposes: (a) to demonstrate the student’s competency at research and (b) to make an original contribution to the literature (Bell et al., 2019). Each of these objectives is important. Much of the immediate value and function of the dissertation is as a mechanism for training in research and for evaluating one’s research, leading to conferral of the Ph.D. In reality, it is not necessarily the case that all psychology dissertations are publishable or should be published. Some have argued that the growing pressure on graduate students to publish may weaken the quality of doctoral research and research training (Yeung, 2019). Nevertheless, the latter purpose of the dissertation, and the goal of research (doctoral or otherwise), is to produce new knowledge that advances the field. When new knowledge is produced but not shared, the field does not advance.

Although research on this topic is limited, the available evidence suggests that few dissertations are published in related fields such as clinical social work, nursing, and pediatric medicine (29%, 24%, and 25%, respectively; Fabre, 2015; Kearney, 2017; Maynard et al., 2014). To shed some light on this question in psychology, we recently investigated a stratified random cohort sample of 910 dissertations in ProQuest Dissertations and Theses (PQDT) and conducted comprehensive prospective literature searches to ascertain whether they were eventually published (Evans et al., 2018). Only 25.6% (95% CI: 23.0, 28.4) of psychology dissertations were later published in peer-reviewed journals. Further, publication rates varied by subfield. Clinical (20.8%) and counseling (19.4%) fell significantly below the overall average and far below subfields that are more uniformly research-focused (e.g., experimental, cognitive, neuroscience), where about 1 in 2 dissertations was published (41.0–59.4%). Still, when published, dissertation research appeared in influential journals (impact factor M = 2.84) and was highly cited (M = 25.95 citations total, or 3.65 per year; Evans et al., 2018).

The relatively lower dissertation publication rate in clinical and counseling might be explained by the “dual identity” of training (i.e., research and clinical) in professional psychology (Ready & Santorelli, 2014), as compared to other fields that more exclusively emphasize research. Although the ability to evaluate and apply research findings is applicable to all psychologists (De Los Reyes, 2020), doctoral programs in professional psychology have come to be classified into two types: those emphasizing research training vs. those emphasizing clinical training (Belar, 2014; Health Service Psychology Collaborative, 2013). This training variability is especially evident when considering different training modalities in professional psychology. For example, one survey of doctoral training programs found that 44% of students in scientist-practitioner programs had authored a publication upon graduation as compared to 16% of those in practitioner-scholar programs (Ready & Santorelli, 2014). Research-oriented doctoral programs may offer more opportunities, time, and resources to publish a dissertation, in comparison to more clinically oriented programs (e.g., Norcross et al., 2010).

The non-publication of dissertations may have several consequences. As noted, if the findings of a study are not shared with the field, then the field does not progress. Dissertation non-publication contributes to the “file drawer problem,” where studies yielding nonsignificant results are less likely to be published (Franco et al., 2014). This, in turn, can lead to systematic bias toward Type I errors, distorting systematic and meta-analytic reviews (Rosenthal, 1979). Notably, such reviews often exclude dissertations from their synthesis despite evidence that unpublished dissertations are often methodologically stronger than published articles (McLeod & Weisz, 2004). When dissertations go unpublished, other researchers may later unwittingly attempt to carry out redundant or similar studies without the chance to consider prior work. This propagates unnecessary work and does not move science forward. It can also be inefficient on the doctoral student’s side, especially given that they commit a great deal of time, effort, and expense to conduct original research that likely does not get shared beyond their dissertation committee. Finally, dissertation non-publication is arguably a matter of ethical importance, as doctoral research often requires time and effort of human participants and is intended to generate findings of some potential importance to society (Roberts et al., 2015).1

The Present Study

Although the problematic effects of dissertation non-publication have been documented (e.g., McLeod & Weisz, 2004; O’Boyle et al., 2017), less is known about why some dissertations are published and why most are not. Such information could be useful toward improving doctoral research training and for promoting the dissemination of strong psychological science.

With these considerations in mind, the present study extends prior work (Evans et al., 2018) to better understand dissertation publication in clinical and counseling psychology.2 Using a cohort sample of 169 dissertations, we longitudinally investigated predictors of subsequent publication in a peer-reviewed journal. Several student, advisor, document, and program characteristics were selected based on rational considerations of what might account for dissertation publication patterns in clinical and counseling psychology. First, we expected that students who published more during graduate school would be more likely to publish their dissertation (past behavior predicts future behavior). Similarly, we expected that students who had highly productive advisors would be more likely to publish their dissertation after graduation (e.g., due to greater motivation or pressure). The length of the dissertation document was also considered. Traditional dissertations can be hundreds of pages long, but a typical article manuscript in psychology is around 25–35 pages. It was expected that longer dissertations might be less likely to get published (e.g., due to the time and effort necessary to edit a lengthy dissertation into a viable article format). At the program level, we expected that greater emphasis on research training would predict dissertation publication. That is, a research-intensive program might equip students with research skills and create a culture wherein publishing is more normative, expected, and supported. Finally, we considered doctoral program subfield (counseling vs. clinical) and accreditation status (accredited vs. unaccredited) due to the relevance of these considerations for training, although they were expected to show no association with dissertation publication outcomes.

Method

Upon request, PQDT staff provided a database of all dissertations indexed with the term “psychology” in the subject field during a single year. The particular year was selected based on a balancing of two key considerations: (a) how long it might take for graduating students to publish their dissertations (e.g., given the time it takes for editing, submissions, revisions, and publication) if ever they were to do so, counterbalanced with (b) a desire for the dissertations involved to be relevant and recent. With regard to the latter, we selected a window of 0–7 years post-dissertation year based on empirical estimates that the half-life of knowledge in psychology is around 7–9 years (Arbesman, 2013; Davis & Cochran, 2015; Niemeyer et al., 2012; Price, 1986; Tang et al., 2008). Thus, the dissertation cohort year of 2007 was selected at the outset, to permit comprehensive follow-up searches for articles published during 2007–2014. The research team has since screened, coded, analyzed, and reported these data from 2015 to present, including one prior report (Evans et al., 2018). Curve-fitting and survival analyses demonstrated the adequacy of this sampling frame: a shorter window (e.g., 0–5 years post-dissertation) would have allowed a slightly more recent cohort year, but many published dissertations would have been missed; in contrast, a longer window (e.g., 0–10 years), though more comprehensive, would have required older data and yielded very few additional published dissertations.

From the full PQDT database (n = 6,580), the dissertation sample was narrowed down using the following inclusion criteria: (a) doctoral degree was a Ph.D.,3 (b) institution was in the U.S., and (c) dissertation included ProQuest subject terms related to psychology (Evans et al., 2018). For the present study, dissertations were specifically required to include either “clinical psychology” (n = 1,434) or “counseling psychology” (n = 131) as a subject term. A stratified random sample of 267 dissertations was selected from those that met these criteria, with stratification designed to draw a sample comprised of about 75% clinical and 25% counseling dissertations, similar to the ratio of clinical and counseling Ph.D. programs in the U.S. (APA Commission on Accreditation, 2020; APA Work Force Studies, 2016). Ninety-eight of these dissertations were excluded because a different year was listed on the full-text PDF and/or because other key information was unavailable (e.g., unable to verify the Ph.D. program). Thus, 169 cases were used in the present analysis, including 132 clinical and 37 counseling dissertations.

PsycINFO was used to search for published versions of the dissertations as well as the number of peer-reviewed articles published by graduate students and their advisors to calculate student and advisor research productivity. The largest database of psychological research in the world, PsycINFO has been shown to have significantly fewer incorrect citations compared to other scientific databases (García-Pérez, 2010).

Search and Coding Procedures and Variables Derived

The main outcome in this study was a binary variable indicating whether or not a dissertation had been published in the peer-reviewed literature. The procedures used to generate this outcome variable were detailed previously (Evans et al., 2018) and are summarized here. Program subfield (clinical or counseling) was based on the PQDT subject term. Other variables and procedures for deriving them are described next. All searches and coding procedures were carried out by multiple undergraduate research assistants (RAs) working independently, with discrepancies resolved by consensus among three or more coders.

Dissertation Publication.

Trained RAs and graduate researchers performed exhaustive searches to locate any published version of the dissertations in PsycINFO (Evans et al., 2018). Searches were conducted using search terms readily available for every dissertation: (a) student name, (b) advisor name, and (c) dissertation title. RAs reviewed their search results holistically and focused on coding the article(s) most clearly derived from the dissertation. When multiple articles were relevant—perhaps all derived from the same dissertation—a consensus judgment was made as to which single article best captured the dissertation study based on relevant factors (e.g., identical sample N; or selecting the empirical results article over its review article). As a final step, internet searches were carried out for relevant leads (e.g., CVs posted online, faculty web pages). In carrying out these procedures, the main objective was to decide which of two codes to apply: 1 = dissertation published as a peer-reviewed article (and if so, which one), or 0 = not published—no peer-reviewed article version of the dissertation was found. These judgments were made based on information in the article’s abstract, methods, and indexing data, in comparison with corresponding information from the PQDT dissertation PDF. As noted, only articles published 0–7 years post-dissertation were included. Published dissertation articles, when identified, were coded for relevant characteristics (e.g., title, journal, co-authors).

Student Productivity.

Doctoral student research productivity was operationalized as the number of peer-reviewed journal articles authored or co-authored prior to the dissertation (i.e., before 2007). A lower bound of 1980 was used to help screen out false positives from older literature (i.e., articles published 27+ years before the student earned their doctorate).4 Multiple searches were conducted in PsycINFO for each graduate student author, using first and last name (with and without middle initial/name) to ensure that all article results were the work of the author in question. The student’s dissertation and any articles published during the dissertation year were omitted from this count.

Advisor Productivity.

Advisor research productivity was operationalized as the number of peer-reviewed journal articles authored or co-authored by the advisor during the years prior to the student’s dissertation (i.e., prior to 2007), with no lower bound. Similar to the procedures used for student productivity, multiple searches were conducted in PsycINFO for articles published by the advisor, using first and last name (with and without middle initial/name).

Dissertation Length.

Each PQDT dissertation record included a field showing the total number of pages in the dissertation document. These data were continuously distributed with a positive skew (M = 144.9 pages, SD = 94.7, Mdn = 120, range = 23–890). The variable was dichotomized at 180 pages, a cutoff selected given the hypothesis that dissertations of a typical length or shorter, relative to their field, are more likely to be published. This cutoff roughly corresponds with the 75th percentile for psychology dissertations, both in the present sample and in previous datasets (e.g., Beck, 2014). For analysis, document length was coded as a binary variable: 1 = <180 pages, 0 = ≥180 pages.

Program Training Emphasis.

Drawing from data in each dissertation’s PDF and PQDT records (e.g., institution, subject [counseling vs. clinical], student and advisor’s names), we were generally able to ascertain the specific doctoral program from which each dissertation originated. From there it was possible to code program variables. Program training emphasis was recorded directly from the 1–7 rating values published in the 2006–2007 edition of The Insider’s Guide to Graduate Programs in Clinical and Counseling Psychology (Mayne et al., 2006). The Insider’s Guide has long been a key reference book for prospective graduate students seeking an overview of every APA-accredited doctoral program. To obtain the research-clinical emphasis ratings and other data, Mayne et al. (2006) sent surveys to every APA-accredited program in the U.S., asking program directors to rate their program on a 1 to 7 scale, with 1 being entirely clinically oriented and 7 being entirely research-oriented. Thus, a rating of “4” would indicate the doctoral program placed equal emphasis on research and clinical training (Mayne et al., 2006).

Program Subfield of Psychology.

The PQDT subject terms “clinical” and “counseling” were used to identify types of doctoral training programs. These particular terms were typically one of many subject terms included in each dissertation’s metadata, as assigned by PQDT and derived from their own controlled vocabulary system. As such, the terms did not bear a perfect 1:1 correspondence to the topic of the dissertation or to the type of degree program it came from. Upon closer examination, however, we found that “clinical” and “counseling” in this context did serve as reliable indicators for the student’s type of degree program (clinical PhD vs. counseling PhD). This conclusion was supported by systematic internet searches in which we identified an online presence for a clinical or counseling training program associated with all included PQDT dissertations, cross-validated against directories of clinical and counseling psychology programs (e.g., APA accreditation database). Thus, we interpret this variable as a proxy for the type of degree program from which a dissertation was produced—clinical or counseling. Data were coded as 1 = clinical, 0 = counseling for analysis.

Program APA Accreditation Status.

Each program’s APA accreditation status in 2007 was recorded as 1 = yes (APA-accredited), 0 = no (not APA accredited). These data were obtained from the Graduate Study in Psychology book (APA, 2007), and were also cross validated with records from the APA Commission on Accreditation (2020) website.

Analytic Plan

First, univariate and bivariate characteristics of study variables were examined. Cohen’s d and Cramer’s V effect sizes were calculated for differences between published and non-published dissertations. Next, hierarchical logistic regression models were estimated with publication status as the outcome. Program-level predictors (program emphasis, subfield, and accreditation status) were entered into the model first, to have greater sensitivity to detect effects if present. Individual-level characteristics (student productivity, advisor productivity, document length) were then added to the model while retaining training program variables as covariates. All analyses were conducted in SPSS. There were no missing data due to study inclusion and exclusion criteria and the comprehensive coding procedures used to validate the data.

Results

Descriptive statistics and correlations for all study variables are reported in Table 1. Overall, 17.2% of the dissertations in the sample were published in a peer-reviewed journal within 0–7 years following their completion. This estimate was in line with earlier results (Evans et al., 2018) showing a dissertation publication rate of 20.8% [95% CI: 16.2–26.3] for clinical and 19.4% [11.4–31.5] for counseling (for all subfields of psychology, the estimate was 25.6% [23.0–28.4]). Notably, results reported here (Table 1) and previously showed no differences between clinical and counseling psychology programs in terms of publication outcomes, supporting the decision to combine these subfields in this analysis. The large majority (93.5%) of dissertations were from APA-accredited programs. Dissertations were more often from clinical programs (78.1%) than counseling (21.9%). Most (78.6%) were shorter than 180 pages. On average, programs considered themselves to be slightly more research-oriented than clinical in their training emphasis (M = 4.64, SD = 1.06, on the scale from 1 [entirely clinical] to 7 [entirely research]). About half (49.1%) leaned toward research (rating > 4) while only 6.5% leaned toward clinical (rating < 4), and 44.4% were balanced (rating = 4). Students published about one peer-reviewed article (M = 1.12, SD = 2.12, range = 0–15) prior to their dissertation year, while advisors averaged about 13 articles (M = 13.21, SD = 17.99, range = 0–164) prior to the student’s dissertation year. Correlations showed that higher advisor productivity and briefer dissertation lengths were both positively associated with dissertation publication. Advisor research productivity was associated with clinical and research-oriented programs.

Table 1.

Descriptive Statistics and Correlations Among Study Variables

1 2 3 4 5 6 7

1. Dissertation published (1 = yes, 0 = no) --
2. Clinical (1 = yes, 0 = counseling) .01 --
3. Accredited (1 = yes, 0 = no) .06 −.02 --
4. Research emphasis .05 −.11 .12 --
5. Advisor publications .36** .18* .02 .33** --
6. Student publications .04 .11 .05 −.05 .06 --
7. Briefer dissertation (1 = yes, 0 = no) .20** .14 .04 −.06 .06 −.00 --

M -- -- -- 4.64 13.21 1.12 --
SD -- -- -- 1.06 17.99 2.12 --
Range 0–1 0–1 0–1 2–7 0–164 0–15 0–1
% Yes 17.2 78.1 93.5 -- -- -- 78.6

Note.

*

p < .05

**

p < .01

As shown in Table 2, published and unpublished dissertations showed clear differences in advisor productivity and document length. Specifically, published dissertations tended to come from advisors with about 27 publications in years preceding the dissertation as compared to 10 publications for the advisors of unpublished dissertations. This difference had a medium to large effect size of d = 0.72. Published dissertations were also significantly shorter on average (114 vs. 151 pages, d = −0.48) and more often fell into the “brief” (<180 pages) category compared to unpublished dissertations (96.6% vs. 74.3%, V = 0.20). Notably, program emphasis in research did not differ for published vs. unpublished dissertations; this remained true whether analyzed as a 1–7 continuous rating variable or dichotomized to compare research-focused (rating > 4) vs. neutral/clinical-focused programs (rating ≤ 4).

Table 2.

Characteristics Associated with Published vs. Not Published Dissertations

Not Published (n = 140) Published (n = 29) t or χ2 p d or V

Continuous, M (SD), t, d
 Program emphasis 4.62 (1.08) 4.76 (0.99) 0.63 .527 0.14
 Advisor publications 10.26 (11.89) 27.45 (31.40) 5.01 <.001 0.72
 Student publications 1.08 (2.22) 1.31 (1.54) 0.54 .594 0.12
 Page length 151.26 (101.35) 114.00 (39.62) −3.30 .001 −0.48
Categorical, n (%), χ2, V
 Accredited program 130 (92.9) 28 (96.6) 0.54 .463 0.06
 Clinical program 109 (77.9) 23 (79.3) 0.03 .863 0.01
 Briefer dissertation 104 (74.3) 28 (96.6) 6.97 .008 0.20
 Research > clinical 69 (49.3) 14 (48.3) 0.01 .921 0.01

These variables were next analyzed as predictors of dissertation publication in the logistic regression models (Table 3). Program characteristics (training emphasis, subfield, and accreditation status) accounted for only about 1% of the variance in the overall model, with none of these predictors being statistically significant.5 However, when individual predictors (student productivity, advisor productivity, and document length) were added to the model in Step 2, this accounted for approximately 30% of the variance and revealed several interesting results. First, advisor productivity significantly predicted dissertation publication (B = .08, p < .01), such that for each additional publication the advisor produced prior to the dissertation year, the odds of their student publishing their dissertation increased by about 8% (adjusted odds ratio [AOR] = 1.08). Unexpectedly, student research productivity did not emerge as a significant predictor of dissertation publication outcomes. Lastly, dissertations that were briefer (<180 pages) were significantly more likely to be published than longer ones (B = 2.60, p < .05, AOR = 13.41), suggesting that the odds of publication for a briefer dissertation were roughly 13 times those of a longer one. Notably, these effects for advisor productivity and dissertation brevity were significant, linear, and robust, evident after accounting for various characteristics of the program (training emphasis, subfield, accreditation status) as well as the student’s own productivity.
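The coefficient-to-odds-ratio conversions above follow from the standard logistic regression identity AOR = e^B, which can be checked directly (small discrepancies reflect rounding of B in Table 3):

```python
# Verifying the logistic coefficient-to-odds-ratio conversions:
# for a coefficient B, the adjusted odds ratio is exp(B).
import math

aor_advisor = math.exp(0.08)         # per additional advisor publication
aor_brief = math.exp(2.60)           # brief (<180 pages) vs. longer dissertation
odds_ten_pubs = math.exp(0.08 * 10)  # odds multiplier for 10 extra advisor publications

print(round(aor_advisor, 2))    # 1.08 -> ~8% higher odds per publication
print(round(aor_brief, 2))      # 13.46, close to the reported 13.41 (B is rounded)
print(round(odds_ten_pubs, 2))  # 2.23 -> odds roughly double
```

Because the per-publication effect is multiplicative on the odds scale, an advisor with ten more prior publications corresponds to more than double the odds of the student's dissertation being published, other predictors held constant.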

Table 3.

Hierarchical Logistic Regression Models Predicting Dissertation Publication

Step 1 Step 2


B (SE) OR [95% CI] B (SE) OR [95% CI]

Step 1
 Accredited 0.71 (1.07) 2.05 [0.25, 16.79] 1.25 (1.39) 3.49 [0.23, 53.56]
 Research emphasis 0.11 (0.19) 1.12 [0.76, 1.64] −0.34 (0.25) 0.71 [0.43, 1.17]
 Program subfield 0.12 (0.51) 1.13 [0.42, 3.05] −0.83 (0.59) 0.44 [0.14, 1.38]

Step 2 -- --
 Advisor publications -- -- 0.08 (0.02)** 1.08 [1.04, 1.12]
 Student publications -- -- 0.04 (0.11) 1.05 [0.84, 1.31]
 Briefer dissertation -- -- 2.60 (1.14)* 13.41 [1.43, 125.60]

Nagelkerke R-square 0.010 0.296
*

p < .05

**

p < .01

Discussion

Dissertations mark the end of a long and complex scholarly training process, culminating in an original research study, which in principle is a publishable scientific contribution (Bell et al., 2019; Vidair et al., 2019). But publishing one’s dissertation research is the exception, not the rule, and this non-publication pattern is especially clear in clinical and counseling psychology. To better understand this phenomenon, this study investigated several possible predictors of publishing dissertation research in a peer-reviewed journal. We hypothesized that higher student and advisor research productivity, briefer dissertation document lengths, and greater doctoral program emphasis on research training (as compared to clinical emphasis) might all contribute to a greater likelihood of publication. Two of these four hypotheses were supported: advisor productivity and briefer documents significantly predicted dissertation publication.

These results suggest that advisors may influence dissertation publication outcomes, whereas student research productivity, program accreditation status, and training program characteristics may have little to no influence on the likelihood of dissertation research moving to a peer-reviewed journal. There are several possible explanations for the significance of advisor productivity. First, the growing popularity of “team science” and multiple co-authors involved in large-scale psychology projects might account for higher advisor publication rates (Wuchty et al., 2007). Indeed, most of the dissertations that were published included both the student and advisor as co-authors. Second, productive advisors may provide more support or pressure for their students to publish their dissertation research after defense. This may be the case regardless of whether the student intends to pursue a research-oriented career or a more applied clinical career. If the advisor is highly motivated to publish, the advisor may pressure or encourage their student to do the same, exemplifying the “publish or perish” aphorism in academia. Additionally, there may be a selection effect that occurs years prior to doctoral research, such that students who want to be research-productive are drawn to research-productive advisors, and vice versa.

The finding that shorter dissertations are published more frequently than longer ones could be a result of expectations for somewhat more “publication-ready” dissertations. That is, programs vary in their requirements, and some make a concerted effort not to require students to compose long and wordy (often redundant) dissertations, which were more common in previous decades and in other disciplines. From this perspective, students whose dissertation starts in a more publication-ready format may be one step closer to transitioning their dissertation for a peer-reviewed journal (for guidance on adapting dissertations for publication, see APA 2020 and Bell et al., 2019). Dissertation research may also benefit from rigorous critique by the faculty committee, improving its eventual journal publishability. Program policies encouraging more publication-ready dissertations could increase the likelihood that graduates will share their “magnum opus” with the scientific community after commencement. These training expectations are often set by core faculty members as a group and could reflect a program’s commitment to the knowledge generation and scientific mindedness of the health service psychology blueprint (Health Service Psychology Collaborative, 2013; Melchert et al., 2019). Here it is important to keep in mind that the present findings say less about very brief article-style dissertations than they do about extremely long dissertations. Given that we defined “short” with a cutoff of 180 pages, it seems that both brief (e.g., <100 pages) and moderately long (e.g., 100–179 pages) dissertations are more likely to lead to publication than those that are hundreds of pages long.6

In the present study, student research productivity prior to dissertation did not have a statistically significant association with dissertation publication. One possible explanation for the discrepant research productivity results (i.e., wherein advisor, but not student, research productivity was a significant predictor) may be that graduate training is associated with publication patterns in various ways. A student may be more likely to publish peer-reviewed articles during training due to their advisor’s or program’s expectations, rather than their own career aspirations. Conversely, a student with few or no publications in graduate school may go on to have a prolific research career, including the publication of their dissertation. Future research should examine dissertation publication as a predictor of research productivity during post-doctoral training and early career.

It was expected that a doctoral program’s research emphasis, but not its subfield or APA accreditation status, would have an effect on dissertation publication outcomes. We found that none of these effects were significant, with only the absence of an effect for training program research emphasis coming as a surprise. On this note, the distributional characteristics of the 1–7 emphasis ratings seem relevant. More than half (56%) of the programs in this sample rated themselves at or near the midpoint of “4,” suggesting a desire to report a balanced research and clinical training emphasis (whether real or apparent) espoused by the popular scientist-practitioner and clinical science training models (Ready & Santorelli, 2014). But even as this balanced or integrated approach has been promoted, there has been a growing trend in doctoral training toward prioritizing the science side of clinical science rather than its application. For example, doctoral student publication rates have emerged as an important factor for being selected for interviews for predoctoral internship positions (Callahan et al., 2014; Lund et al., 2016), despite the reality that the number of publications may have little to do with a trainee’s clinical skills or aptitude. Considering other factors, it does seem that program subfield and accreditation status are not key determinants of dissertation publication, in line with hypotheses.

Limitations and Directions for Future Research

This study has some limitations that highlight possible directions for future research. First, the sample was restricted to a single cohort year of dissertations (2007), which may limit generalizability to other years—perhaps most importantly, to current and future doctoral students. This limitation was largely a function of the longitudinal cohort design, which precluded investigation of the most recent cohorts of Ph.D. graduates. At a general level, data produced during a particular window of time (in this case, 2007–2014) can generate useful scientific knowledge, but whether these findings hold up for today’s Ph.D. graduates will be a question for future research (perhaps a decade from now if using a similar design, or sooner using a different design). Second, the sample size may have offered limited power for detecting significant associations, particularly for program-level and binary variables. Third, the present study relied on key sources (e.g., PQDT, PsycINFO) for sampling and coding, including using PQDT subject terms to select dissertations and identify clinical vs. counseling programs. It is possible that dissertations or published articles not available through these large databases were missed, but this concern is mitigated by the comprehensiveness of these archives. Further, our interpretive decisions seem well-justified considering supplemental search steps and database characteristics. Fourth, these results are longitudinal but not experimental. Thus, many of the implications discussed here, while consistent with the data, are not directly supported from the perspective of causal inference.

Finally, we did not delve into other variables worth considering in future research, such as the character of the dissertation findings (null vs. positive vs. mixed), study characteristics (e.g., methodology, design, quality, topic), university prestige, program ranking, research funding status, and membership in various professional training councils. Future research should examine these variables as well as different training models (e.g., scientist-practitioner, clinical science) and emphases (Melchert et al., 2019; Health Service Psychology Collaborative, 2013). In addition to considering final publication status as the outcome, it would be useful to investigate the extent to which dissertation-based manuscripts are even submitted to scholarly journals in the first place (as well as rejected, revised/resubmitted, submitted again elsewhere, etc.). Recognizing that many journals have high rejection rates, dissertation publication is perhaps best conceptualized as a multistage process.

Implications for Training, Education, and Mentorship

These findings offer important implications for training programs, advisors, and trainees alike. Graduate school applicants should apply to mentors who are well situated to help them pursue their goals. Our findings suggest that applicants pursuing scientific careers might benefit from choosing advisors who are research productive. Doctoral programs and training directors may also want to adopt practices that encourage faculty and student research productivity and a positive research training environment (Kahn & Schlosser, 2010). One such policy to consider would be allowing dissertations to be prepared in a relatively brief, publication-ready format, an approach that is becoming more popular in various fields (Graves et al., 2018; Grossman, 2020). Another approach, borrowed from other disciplines and countries but becoming increasingly popular in U.S. psychology, is the dissertation-by-publication model (Gould, 2016). This model allows students to prepare and submit for publication one or more manuscripts during their doctoral training, and then compile them into a program of research submitted to fulfill the dissertation requirement. It will be important for research to evaluate publication-ready dissertations, the dissertation-by-publication model, and more traditional dissertation approaches in the context of professional psychology doctoral training. In particular, well-designed research is needed to compare these models for possible differences in peer-reviewed publication outcomes, methodological quality of the dissertations, characteristics of findings, and educational and career outcomes for graduates.

Discussions of programmatic expectations for doctoral dissertations can raise questions that are challenging to resolve. When considering potential changes to doctoral training, it may not be ideal to place requirements or even strong expectations on doctoral students to publish (Yeung, 2019). At the same time, it has become important in psychology that doctoral students do publish during graduate school—especially, though not exclusively, for those pursuing a research career (Callahan et al., 2014; Lund et al., 2016; Ready & Santorelli, 2014). It is perhaps appropriate for faculty to expect “something more” from a dissertation vs. a standard journal article, but few would argue for “something less” than an original contribution to the literature (Bell et al., 2019). Within this general understanding of a dissertation, it is reasonable that there should be room for variation in dissertation expectations across training programs. With regard to the question of adopting newer dissertation models, some faculty might object to major changes on the grounds that they would require drastic shortening, sacrificing the value of comprehensively reviewing the literature and situating the dissertation within the context of past research. Here it is worth considering what value lengthy introduction and literature review chapters actually offer to the student, especially if this length is at odds with the goal of advancing new knowledge for the field. Organizing a dissertation to include briefer introductions or reviews (or systematic or meta-analytic reviews that could stand as their own articles) may offer additional value, both to the student and to the field. To the extent that programs are implementing such changes, the present data (from cohort year 2007) may serve as an important reference point for future research to evaluate whether changes in dissertation models in psychology result in meaningful improvements in the dissemination of doctoral research.

Of course, it may always be the case that a dissertation—even after being completed and approved—might not be appropriate for publication. Perhaps it is not sufficiently high in quality, is irrelevant to the field, or cannot be edited into a form appropriate for journals. To the extent that these circumstances arise, they should lead faculty to ask: Why is this so, and what can be done about it? Such realities underscore the need for training programs and faculty mentors to attend to research quality, relevance, and dissemination throughout graduate training. The basic principle is well encapsulated by APA (2020): “Research is complete only when scholars share their results or findings with the scientific community” (p. 3). Thus, if a Ph.D. in clinical or counseling psychology is a research degree, one would expect it to cover the entire research process, including dissemination.

Considering the present findings, faculty mentors might strive to engage their trainees in the peer-reviewed publication process throughout their graduate training. Doing so could help familiarize students with the steps to pursue the publication of their dissertation research after graduation, perhaps with a higher degree of independence. This could be particularly important given the correlation between publication rates and other important career variables, such as internship placement and tenure review (Callahan et al., 2014; Lund et al., 2016). Adopting a proactive approach to enriching the advisor-student relationship (Cobb et al., 2018) prior to and during the dissertation process may increase the likelihood of dissertation publication, among many other potential benefits for the mentor as well as the mentee. Without greater attention to ways of increasing the publication of these many dissertations, years of hard-earned science will remain missing from the field and from society at large.

Public Significance Statement.

One part of becoming a clinical or counseling psychologist involves completing a doctoral dissertation, or an original research study aimed to fill a gap in scientific knowledge. Unfortunately, most of the knowledge gained by dissertation research is left to sit on the shelf rather than being shared with other scientists, professionals, and the public where it can more readily benefit society. Investigating several possible reasons why this is the case, the present study found that dissertation research is more likely to be published in peer-reviewed journals when the students writing those dissertations have research-productive mentors and when the documents themselves are not extremely long.

Acknowledgments

The present study was adapted from RSH’s senior honors thesis in the Department of Psychology at the University of Kansas, supervised by SCE and MCR with additional oversight from Michael Vitevitch. We thank Austin McLean and the staff of ProQuest Dissertations and Theses for providing the population dataset of dissertations. We also gratefully acknowledge the following members of our research team for their assistance with coding and study implementation: Christina Amaro, Maggie Biberstein, Jen Blossom, Jamie Eschrich, Andrea Garcia, Mackenzie Klaver, Alexa Mallow, Alexandra Monzon, and Emma Rogers.

Author Biographies

Robyn S. Herbert, M.S. is a doctoral candidate in the Clinical Psychology Program at Washington State University, where she received her master’s degree. Her current research focuses on the intersection of executive functioning, attention-deficit/hyperactivity disorder, and academic achievement. She is specifically interested in protective and risk factors for academic underachievement in children with deficits in executive functioning.

Spencer C. Evans, Ph.D. is an Assistant Professor in the Department of Psychology at the University of Miami. After earning his doctoral degree in Clinical Child Psychology from the University of Kansas, he completed his predoctoral internship at the Medical University of South Carolina and a postdoctoral fellowship at Harvard University. His research focuses on advancing the understanding, assessment, and treatment of severe irritability and aggression in youth. He also has interests in methodological and professional development issues in clinical psychology.

Jessy Guler, M.S. is a doctoral candidate in the Clinical Child Psychology Program at the University of Kansas. She received her master’s degree from Duke University in Global Health prior to beginning her predoctoral studies where she focused her work on the mental health of young children and their caregivers living in low-resource settings. Her current research focuses on examining risk and protective factors which influence health and cognitive processes among refugee and immigrant youth and their families exposed to trauma and adversity. She is specifically interested in improving the understanding of the ways in which exposure to war, terrorism, and violence are related to neurocognitive processes, psychopathology, resilience and long-term physical health outcomes post-migration.

Michael C. Roberts, Ph.D., ABPP is Professor Emeritus in the Clinical Child Psychology Program and former Dean of Graduate Studies at the University of Kansas. Licensed in the states of Kansas and Arizona, he is board certified (ABPP) in Clinical Psychology and in Clinical Child Psychology. Dr. Roberts has published over 200 journal articles and book chapters, authored or co-edited over 20 books, and served as editor for six journals. Additionally, he has been active in the governance of psychological organizations, including as Chair of the APA Board of Educational Affairs, Chair of the APA Council of Editors, and Chair of the Council of University Directors of Clinical Psychology. His work revolves around the application of psychology to understanding and influencing children’s physical and mental health.

Footnotes

1

To illustrate, consider the language used in informed consent. Human participants generally do not provide consent to participate in research in order for a student to learn something, complete a dissertation, and earn their Ph.D. Rather, all the usual ethical principles of human subjects research apply.

2

Note that school psychology dissertations were considered but could not be included for this analysis because the PQDT classification system does not differentiate school psychology from educational psychology.

3

Dissertations from Psy.D. degrees were not included in this study. Though many Psy.D. programs require a dissertation or a final project for graduation, not all of them do, and those that do have variable requirements. Additionally, publication is not emphasized in the mission of most Psy.D. programs (Stewart et al., 2007). Accordingly, the present study focuses on Ph.D. dissertations and programs to help ensure consistency in the interpretability and generalizability of results.

4

In the case of the hypothetical average psychology Ph.D.-holder who earned their doctorate at the median age of 32.3 years (National Science Foundation, 2019), this lower bound would have led us to miss any publications they produced just prior to the kindergarten phase of their scholarly career.

5

Our study focuses on APA accreditation given APA’s status as the dominant accreditation system for clinical and counseling psychology training in the U.S. for many decades. Recently, the Psychological Clinical Science Accreditation System (PCSAS) has been introduced, which applies to clinical programs (not counseling programs) and specifically to those adhering to and accredited in a clinical science model (at the time of this writing, 43 programs; pcsas.org). Given recent interest in PCSAS, we re-estimated the models including PCSAS status as the accreditation predictor variable instead of APA accreditation. PCSAS membership did not predict publication status: Step 1: B = 0.37 (0.61), OR = 1.05 [0.67, 1.65], p = .546; Step 2: B = 0.04 (0.70), OR = 1.04 [0.27, 4.05], p = .959. Results for other variables shown in Table 3 did not change appreciably.

6

Indeed, the present data support this view. This discussion led us to perform post hoc analyses comparing “brief” (<100 pages; n = 49), “moderate” (100–179 pages; n = 83), and “long” (≥180 pages; n = 37) dissertations. We found no differences in publication outcomes between brief (<100 pages) vs. longer (≥100 pages) dissertations (p = .11). The original finding for long dissertations remained robust (p = .025) even after including brief dissertations (p = .48). Additionally, linear and quadratic terms for page count predicting publication status were all nonsignificant or better accounted for by the original variable contrasting longer (≥180 pages) vs. briefer (<180 pages) dissertations.

References

  1. American Psychological Association Commission on Accreditation. (2020). APA-accredited programs. https://www.accreditation.apa.org/accredited-programs
  2. American Psychological Association Center for Workforce Studies. (2016). 2015 county-level analysis of U.S. licensed psychologists and health indicators. https://www.apa.org/workforce/publications/15-county-analysis
  3. American Psychological Association. (2007). Graduate study in psychology, 2007. Author.
  4. American Psychological Association. (2020). Publication manual of the American Psychological Association (7th ed.).
  5. Arbesman S. (2013). The half-life of facts: Why everything we know has an expiration date. Penguin.
  6. Beck MW (2014, July 15). Average dissertation and thesis length take two. R-bloggers. https://www.r-bloggers.com/average-dissertation-and-thesis-length-take-two/
  7. Belar CD (2014). Reflections on the health service psychology education collaborative blueprint. Training and Education in Professional Psychology, 8(1), 3–11. 10.1037/tep0000027
  8. Bell DJ, Foster SL, & Cone JD (2019). Dissertations and theses from start to finish: Psychology and related fields (3rd ed.). American Psychological Association.
  9. Burkard AW, Knox S, DeWalt T, Fuller S, Hill C, & Schlosser LZ (2014). Dissertation experiences of doctoral graduates from professional psychology programs. Counselling Psychology Quarterly, 27(1), 19–54. 10.1080/09515070.2013.821596
  10. Callahan JL, Hogan LR, Klonoff EA, & Collins FL Jr (2014). Predicting match outcomes: Science, practice, and personality. Training and Education in Professional Psychology, 8(1), 68–82. 10.1037/tep0000030
  11. Cobb CL, Zamboanga BL, Xie D, Schwartz SJ, Meca A, & Sanders GL (2018). From advising to mentoring: Toward proactive mentoring in health service psychology doctoral training programs. Training and Education in Professional Psychology, 12(1), 38–45. 10.1037/tep0000187
  12. Davis PM, & Cochran A. (2015). Cited half-life of the journal literature. arXiv preprint arXiv:1504.07479.
  13. De Los Reyes A. (2020). The early career researcher’s toolbox: Insights into mentors, peer review, and landing a faculty job. The Center for Reinforcing Early Academic Training and Enhancement (CREATE).
  14. Evans SC, Amaro CM, Herbert R, Blossom JB, & Roberts MC (2018). “Are you gonna publish that?” Peer-reviewed publication outcomes of doctoral dissertations in psychology. PLoS ONE. 10.1371/journal.pone.0192219
  15. Fabre A. (2015). Publication of pediatric medical dissertations in France. Archives de pediatrie: Organe Officiel de la Societe Francaise de Pediatrie, 22(8), 802–806.
  16. Franco A, Malhotra N, & Simonovits G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. 10.1126/science.1255484
  17. García-Pérez MA (2010). Accuracy and completeness of publication and citation records in the Web of Science, PsycINFO, and Google Scholar: A case study for the computation of h indices in Psychology. Journal of the American Society for Information Science and Technology, 61(10), 2070–2085. 10.1002/asi.21372
  18. Gould J. (2016). Future of the thesis: PhD courses are slowly being modernized. Now the thesis and viva need to catch up. Nature, 535(7610), 26–28. 10.1038/535026a
  19. Graves JM, Postma J, Katz JR, Kehoe L, Swalling E, & Barbosa-Leiker C. (2018). A national survey examining manuscript dissertation formats among nursing PhD programs in the United States. Journal of Nursing Scholarship, 50(3), 314–323.
  20. Grossman ES (2020). Content analysis of the South African MMed mini-dissertation. African Journal of Health Professions Education, 12(2), 56–61. 10.7196/AJHPE.2020.v12i2.1227
  21. Health Service Psychology Collaborative. (2013). Professional psychology in health care services: A blueprint for education and training. American Psychologist, 68(6), 411–426.
  22. Kahn JH, & Schlosser LZ (2010). The graduate research training environment in professional psychology: A multilevel investigation. Training and Education in Professional Psychology, 4(3), 183–193. 10.1037/a0018968
  23. Kearney MH (2017). Making dissertations publishable. Research in Nursing and Health, 40(1), 3–5. 10.1002/nur.21780
  24. Lund EM, Bouchard LM, & Thomas KB (2016). Publication productivity of professional psychology internship applicants: An in-depth analysis of APPIC survey data. Training and Education in Professional Psychology, 10(1), 54–60.
  25. Maynard BR, Vaughn MG, Sarteschi CM, & Berglund AH (2014). Social work dissertation research: Contributing to scholarly discourse or the file drawer? British Journal of Social Work, 44(4), 1045–1062. 10.1093/bjsw/bcs172
  26. Mayne TJ, Norcross JC, & Sayette MA (2006). Insider’s guide to graduate programs in clinical and counseling psychology, 2006–2007. Guilford Press.
  27. McLeod BD, & Weisz JR (2004). Using dissertations to examine potential bias in child and adolescent clinical trials. Journal of Consulting and Clinical Psychology, 72(2), 235–251.
  28. Melchert TP, Berry S, Grus C, Arora P, De Los Reyes A, Hughes TL, Moye J, Oswald FL, & Rozensky RH (2019). Applying task force recommendations on integrating science and practice in health service psychology education. Training and Education in Professional Psychology, 13(4), 270–278.
  29. National Science Foundation. (2019). Doctorate recipients from U.S. universities: 2018. Special Report NSF 20-301. https://ncses.nsf.gov/pubs/nsf20301/
  30. Neimeyer GJ, Taylor JM, & Rozensky RH (2012). The diminishing durability of knowledge in professional psychology: A Delphi poll of specialties and proficiencies. Professional Psychology: Research and Practice, 43(4), 364–371. 10.1037/a0028698
  31. Norcross JC, Evans KL, & Ellis JL (2010). The model does matter II: Admissions and training in APA-accredited counseling psychology programs. The Counseling Psychologist, 38(2), 257–268. 10.1177/0011000009339342
  32. O’Boyle EH Jr, Banks GC, & Gonzalez-Mulé E. (2017). The chrysalis effect: How ugly initial results metamorphosize into beautiful articles. Journal of Management, 43(2), 376–399. 10.1177/0149206314527133
  33. Price DJ (1986). Little science, big science… and beyond. Columbia University Press.
  34. Ready RE, & Santorelli GD (2014). Values and goals in clinical psychology training programs: Are practice and science at odds? Professional Psychology: Research and Practice, 45(2), 99–103. 10.1037/a0036081
  35. Roberts MC, Beals-Erickson SE, Evans SC, Odar C, & Canter KS (2015). Resolving ethical lapses in the non-publication of dissertations. In Sternberg RJ & Fiske ST (Eds.), Ethical challenges in the behavioral and brain sciences: Case studies and commentaries (pp. 59–62). Cambridge University Press.
  36. Rosenthal R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638–641.
  37. Stewart PK, Wu YP, & Roberts MC (2007). Top producers of scholarly publications in clinical psychology PhD programs. Journal of Clinical Psychology, 63(12), 1209–1215. 10.1002/jclp.20422
  38. Tang R. (2008). Citation characteristics and intellectual acceptance of scholarly monographs. College & Research Libraries, 69(4), 356–369.
  39. Vidair HB, Kobernick CL, Rosenfield ND, Gustafson PL, & Feindler EL (2019). A systematic review of research on dissertations in health service psychology programs. Training and Education in Professional Psychology, 13(4), 287–299.
  40. Wuchty S, Jones BF, & Uzzi B. (2007). The increasing dominance of teams in production of knowledge. Science, 316(5827), 1036–1039. 10.1126/science.1136099
  41. Yeung N. (2019). Forcing PhD students to publish is bad for science. Nature Human Behaviour, 3(10), 1036. 10.1038/s41562-019-0685-4
