Abstract
When scholars express concern about trust in science, they often focus on whether the public trusts research findings. This study explores a different dimension of trust and examines whether and how frequently researchers misrepresent their research accomplishments when applying for a faculty position. We collected all of the vitae submitted for faculty positions at a large research university for 1 year and reviewed a 10% sample for accuracy. Of the 180 applicants whose vitae we analyzed, 141 (78%) claimed to have at least one publication, and 79 of these 141 (56%) listed at least one publication that was unverifiable or inaccurate in a self-promoting way. We discuss the nature and implications of our findings, and suggest best practices for both applicants and search committees in presenting and reviewing vitae.
Keywords: research ethics, misconduct, questionable research practices, detrimental research practices, faculty recruitment, research integrity, trust, curriculum vitae (CV)
Introduction
When scholars express concern about trust in science, they often focus on whether the public trusts research findings. This article explores a different dimension of trust in science, specifically, whether researchers can trust each other. In the increasingly social world of science, researchers need to trust their collaborators and other scholars at nearly every point of the research process, including literature reviews, data collection, data analysis, manuscript preparation, and peer review. Evidence from non-academic settings, academic administration, and academic medical centers suggests that this trust might not be well placed.
According to a recent study, 55% of resumes contain erroneous information and 31% of resumes include “misrepresentations that are purposely designed to mislead recruiters” (Henle, Dineen, & Duffy, 2017, p. 2). While we would like to believe that academics are more trustworthy, a number of sensational cases within higher education cast doubt on their integrity as well. In 2004, Henry Zimon, then president of Albright College, resigned after he was “accused of lying about his academic and publishing record” (Basinger, 2004, p. 1). Among his many embellishments were listing a forthcoming book for which he had neither a manuscript nor a publishing contract, and claiming a postdoctoral position at Harvard when in fact he had only given a guest lecture (Basinger, 2004). In 2007, Marilee Jones, then dean of admissions at the Massachusetts Institute of Technology (MIT), resigned after admitting that she had falsified her resume (Lewin, 2007). Ms. Jones misrepresented her academic degrees when she first applied for a job at MIT, and later confessed that over the 28 years of her employment with the university she “did not have the courage to correct my resume” (Lewin, 2007, p. 1).
Such sensational cases of misrepresentation are not only limited to academic administration but have also featured researchers in academic departments. In 2012, Anoop Shankar, then professor of epidemiology at West Virginia University, was accused of falsifying his credentials and research accomplishments (Aronowitz & Dokoupil, 2014). The resulting investigation found that he had falsified not only his credentials and research accomplishments, but the data for several of his research publications as well (Aronowitz & Dokoupil, 2014).
These high-profile cases of professional misrepresentation in academia generate news, but they do not reveal the frequency or severity of the problem. Instead, they raise concerns about the truthfulness of academic vitae and questions about how often academics falsify their credentials and accomplishments when applying for jobs. Older studies in academic medical centers found that 5% of applicants for clinical faculty positions submitted vitae that contained falsified clinical credentials (Shaffer, Rollo, & Holt, 1988) and 15.6% of applicants submitted vitae that contained falsified research citations (Goe, Herrera, & Mower, 1998). More recent studies of applicants to residency and fellowship programs have shown that an average of 22% have falsified research citations, with internal medicine reporting the lowest (2%) and pediatric pulmonology reporting the highest (50%).1
While much work has been done on vitae falsification in academic health science, the prevalence and scope of vitae falsification in academic environments outside of clinical medicine is largely unexamined. To establish an initial measure of the incidence and types of vitae falsification among faculty applicants to non-health science programs, we conducted a pilot study of curricula vitae (CVs) submitted to faculty searches at a large, land grant, doctoral university with very high research productivity.
Method
After obtaining approval from our institutional review board (IRB), and the provost and general counsel at our field site, we monitored all searches during the 2015–2016 academic year. Our inclusion criteria for searches were (a) a faculty position (as opposed to staff), (b) with research expectations (as opposed to solely instructional or clinical faculty), and (c) in a non-health science program. To avoid a potential conflict between obligations to protect the confidentiality of human subjects and obligations to report suspicions of employee misconduct, our exclusion criteria for applicants within a search were (a) successful applicants and (b) any applicant who was an employee of our host institution when applying for the position.
Given the objective of the study, and the potential for self-selection bias, we did not seek consent from applicants and instead obtained a waiver of informed consent from the IRB. In addition, the online portal through which applicants submitted their materials included a statement that materials could be used for, among other things, “statistical purposes” and “academic research.” Because the study had the potential to affect the reputations of the programs conducting the searches, we sought permission from the supervising chair or dean for every search that met our inclusion criteria.
For the searches that met our inclusion criteria and for which we received administrative permission, we collected electronic copies of the CVs for all of the applicants (except those who met our exclusion criteria). After collecting the CVs, we waited 18 to 30 months to conduct our analysis and verification process to give forthcoming publications time to appear in print.2 We used systematic sampling with a random start to select approximately 10% of the CVs for analysis, and two coders independently analyzed all of the vitae in the 10% sample.
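Systematic sampling with a random start simply picks a random offset within the first sampling interval and then takes every k-th case thereafter. The following Python sketch is purely illustrative (it is not the study's actual code; the function name and parameters are ours):

```python
import random

def systematic_sample(items, fraction=0.10, seed=None):
    """Systematic sampling with a random start: choose a random offset
    within the first interval, then take every k-th item after it."""
    k = round(1 / fraction)                 # sampling interval (10 for a 10% sample)
    start = random.Random(seed).randrange(k)  # random start in the first interval
    return items[start::k]

# A pool the size of the study's aggregated applicant pool (1,837 CVs)
# yields roughly a 10% sample; the exact count depends on the random start.
pool = list(range(1837))
sample = systematic_sample(pool, fraction=0.10, seed=1)
```

Unlike a simple random sample, this guarantees that the selected CVs are spread evenly across the ordered applicant pool.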
For each CV, the coders extracted basic demographic information, including the type of position sought, the location (domestic vs. international) and Carnegie classification of each applicant’s degree granting institution, and time since degree. In addition, the coders tallied and categorized reported publications according to type (article, book, or book chapter3) and status (published or forthcoming4). All other types of publications (e.g., conference proceedings, blog posts, letters to the editor) were not recorded.
The coders then completed a verification process for all published and forthcoming articles, published books, and published book chapters. For published and forthcoming articles, the coders proceeded through a five-step search process: searching for the article title and author name(s) in (a) Google Scholar, (b) Academic Search Premier, (c) the aggregated database available through the university library,5 and (d) Google; if those searches failed, (e) searching Google for the title of the journal in an attempt to locate and search the journal’s website.6 For books and book chapters, coders used the same process, searching for book titles and author name(s) in (a) Google Scholar, (b) Academic Search Premier, (c) the aggregated database available through the university library, and (d) Google, and also searched (e) Google Books. If those searches failed, the coders searched for the publisher’s name in Google in an attempt to locate and search the publisher’s website.
If the coders were unable to find a publication through this process, they coded the publication as “unverified.” The coders then categorized the unverified publications as either “unverified journal” or “unverified publication.” Unverified journals were cases in which the coders could not find the journal. Unverified publications were cases in which the coders (a) found the journal but could not find the article, (b) could not find the book, or (c) found the book but could not find the book chapter.
If the coders found the publication, they compared the publication’s official citation data to the citation information presented on the CV to determine whether there were discrepancies. The coders then categorized the discrepancies by type: authorship insertion, authorship promotion, and authorship omission. Authorship insertions were cases in which applicants listed themselves as an author on their CV but did not appear as an author in the published version of the work. Authorship promotions were cases in which the applicant claimed a better authorship position than what appeared on the actual publication.7 Authorship omissions were cases in which the published version of the work included more authors than the applicant listed on their vitae.8 The coders also noted other errors that were not self-promoting, such as incorrect titles, journals, and publication dates.9
The two coders met on a weekly basis to compare coding results and resolve discrepancies. Each time the coders disagreed (e.g., one verified a publication, and one did not), they went through the verification process again together. After all discrepancies were resolved, the coders created a new code sheet for each CV with their agreed-upon codes, attached the two original code sheets, and gave the code sheets to a research assistant who entered them into a database maintained in Excel (without identifiers).
Results
Sample Characteristics
In the 2015–2016 academic year, there were 45 searches across 26 programs that met our inclusion criteria. We asked 25 chairs and one dean for permission to include their searches in our study: 23 chairs and one dean granted permission; two chairs did not. Our sample included rejected applicants, who were not otherwise employed by the institution, for 43 searches across these 24 programs. The applicant pools for each search ranged in size from 7 to 204 individuals, and the aggregated applicant pool was 1,837. A 10% random sample yielded 180 CVs to code.10
Most of the applicants in our sample were applying for entry-level faculty positions (see Table 1). Of the 180 applicants included in the analysis, four had applied for a postdoctoral position, 21 had applied for a visiting assistant professorship, 150 had applied for an assistant professorship, two had applied for an associate professorship, two had applied for a senior faculty position, and one had applied for a department chair (see Table 1).
Table 1.
Applicants by Rank of Open Position.
| Rank of open position | Applicants |
|---|---|
| Post-doctoral position | 4 |
| Visiting assistant professor | 21 |
| Assistant professor | 150 |
| Associate professor | 2 |
| Senior faculty | 2 |
| Chair | 1 |
Findings
Of the 180 applicants whose CVs we reviewed, 141 (78%) reported at least one publication on their CV; 39 (22%) applicants reported no publications on their CV (see Table 2).11 The 141 applicants who claimed to be an author reported a range of 1 to 77 publications, with an average of eight and a median of four. Grouped by the career stage of the open position, the average number of publications for applicants to entry-level positions (post-doctoral positions, visiting assistant professorships, and assistant professorships) was 6.7; the average number of publications for applicants to mid-level and senior positions (associate professorships, senior faculty positions, and chair) was 34.
Table 2.
Unverified and Inaccurate Research Citations.
| | Claimed | Unverified or inaccurate | % |
|---|---|---|---|
| Authors | 141 | 79 | 56% |
| Publications | | | |
| Journal articles | 967 | 139 | 14% |
| Books | 27 | 10 | 37% |
| Book chapters | 76 | 20 | 26% |
| Forthcoming articles | 57 | 24 | 42% |
| Total | 1,127 | 193 | 17% |
Of the 141 applicants who claimed to be authors, 79 (56%) had at least one unverified or inaccurate research citation on their CV (see Table 2). The number of unverified or inaccurate citations per author ranged from 1 to 17, with an average of 2.4. Of these 79 authors, 35 had one unverified or inaccurate research citation and 44 had two or more. The percentage of unverified or inaccurate research citations per author ranged from 3.9 to 100 with an average of 40.3.12
The 141 applicants who claimed to be authors reported a total of 1,127 publications as published or forthcoming: 967 journal articles, 27 books, 76 book chapters, and 57 forthcoming journal articles (see Table 2).13 Of the 1,127 publications, 193 (17%) were unverified or inaccurately represented: 139 (14%) of the journal articles, 10 (37%) of the books, 20 (26%) of the book chapters, and 24 (42%) of the forthcoming articles (see Table 2).
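The percentages in Table 2 follow directly from these counts; the rounded rates can be reproduced with a quick check (illustrative only):

```python
# Counts of claimed publications and of unverified or inaccurate ones,
# as reported in the text and Table 2.
claimed = {"journal articles": 967, "books": 27,
           "book chapters": 76, "forthcoming articles": 57}
flagged = {"journal articles": 139, "books": 10,
           "book chapters": 20, "forthcoming articles": 24}

total_claimed = sum(claimed.values())   # 1,127 publications
total_flagged = sum(flagged.values())   # 193 unverified or inaccurate

# Rounded percentage of unverified or inaccurate citations per category.
rates = {k: round(100 * flagged[k] / claimed[k]) for k in claimed}
overall = round(100 * total_flagged / total_claimed)   # overall rate, 17%
```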
These 193 instances of unverified or inaccurate research citations included the following: six articles in journals that we could not locate (unverified journal), 72 articles we could not find in journals that we could locate (unverified publication, article), 31 books and book chapters that we could not find (unverified publication, book/book chapter), 24 forthcoming articles we could not find in journals that we could locate (unverified publication, forthcoming article), four instances of authorship insertion, 27 instances of authorship promotion,14 27 instances of authorship omission,15 and five generally categorized as “other” (see Table 3).16
Table 3.
Types of Unverified and Inaccurate Citations.
| Unverified | n | Inaccurate | n |
|---|---|---|---|
| Unverified journal | 6 | Author insertion | 4 |
| Unverified article | 72 | Author promotion | 27 |
| Unverified book/book chapter | 31 | Author omission | 27 |
| Unverified forthcoming article | 24 | Other | 5 |
Demographics
We analyzed the rates of unverified and inaccurate research citations according to the demographic information we collected for each applicant (see Table 4).17 When we compared authors with a doctoral degree from an international institution with authors with a doctoral degree from a domestic institution, we found a higher rate among applicants with a graduate degree from an international institution (67% and 52%, respectively). However, caution should be exercised with this finding, as international doctoral graduates are more likely to publish in foreign language journals, which were more difficult for the coders to verify. When we compared the Carnegie classification of doctoral granting institutions among applicants with a doctoral degree from a domestic institution, we found that 76 (78%) of domestically educated authors had a PhD from an institution with very high research productivity (institutions formerly known as an R1), and 22 (22%) of domestically educated authors had a PhD from a non-R1 institution. We found a slightly higher rate of unverified and inaccurate research citations among R1 doctoral graduates (54% vs. 45%, respectively). When we compared early-career authors (103, 73%) with authors who were more advanced in their careers (36, 26%), we found virtually no difference in the rates (56% and 53% respectively). We were unable to determine career stage of two (1%) authors, and unable to determine the location of the PhD granting institution for 10 (7%) authors.
Table 4.
Demographics.
| | Applicants | Authors | Authors with unverifiable or inaccurate citations on CV |
|---|---|---|---|
| PhD institution: Location | | | |
| International | 35 | 33 | 22 (67%) |
| Domestic | 128 | 98 | 51 (52%) |
| Unidentified | 17 | 10 | 6 (60%) |
| PhD institution: Carnegie classification | | | |
| Domestic R1 | 101 | 76 | 41 (54%) |
| Domestic Non-R1 | 27 | 22 | 10 (45%) |
| Career stage | | | |
| Early career | 132 | 103 | 58 (56%) |
| Advanced career | 41 | 36 | 19 (53%) |
| Unidentified | 7 | 2 | 2 (100%) |
Discussion
Frequency of Unverified and Inaccurate Research Citations
Our findings for the frequency of unverified and inaccurate research citations in applications to academic faculty positions were similar to the incidence of erroneous information in resumes for non-academic jobs. According to Henle et al. (2017), 55% of resumes submitted for non-academic jobs contained erroneous information; our study found that 44% of CVs submitted for academic faculty positions—56% of CVs submitted by applicants claiming to be authors—contained one or more unverified or inaccurate research citations. Notably, in comparison with studies of applicants for faculty, fellowship, and residency programs in academic health science centers, our findings are substantially higher for the percentage of applicants submitting CVs with unverified or inaccurate research citations. In 21 studies of applicants to residency and fellowship programs, between 2% and 50% of applicants claiming to be authors submitted a CV with at least one unverified or inaccurate research citation, with an average and a median of 22%; in our study, that figure was 56%. Although the percentage of authors whose CVs contained unverified or inaccurate research citations was higher in our study (56%) than the average previously reported in academic health science programs (22%), the percentage of unverified or inaccurate publications was approximately the same. Across nine of the studies of CVs submitted to residency and fellowship programs,18 3% to 36% of the reported publications were unverified or inaccurate, with an average of 17.3%; our study found that 17.1% of the publications reported by applicants to non-health science programs were unverified or inaccurate.
Limitations
The validity, generalizability, and significance of our findings are limited by a number of factors, which may lead us both to underreport and to overreport the incidence of unverified and inaccurate information on CVs submitted by applicants for faculty positions. Regarding underreporting, our findings might underestimate the incidence because we looked only at reported publications. We did not verify academic credentials, grants, awards, or past employment for two reasons: (a) we did not have the resources to do such extensive work, and (b) in some cases, doing so would have presented a greater risk of violating the confidentiality of the individuals whose CVs were included in our study. If we had called advisors, supervisors, and program officers to verify applicants’ claims, then we might have alerted people in the research community that the applicant had presented an inaccurate CV, which could have resulted in harm to the applicant’s reputation and employability. However, if we had expanded the scope of our analysis to include these items, we might have found more instances of unverified or inaccurate claims.
Regarding overreporting, our findings might overestimate the incidence of CV falsification for several reasons. First, despite access to a wide array of publication databases through the university’s library services, our coders reported difficulty in finding journals and books published in languages other than English. For seven unverified publications, the coders noted that the publication was in a language other than English, so it is possible that these works exist and our coders were simply unable to verify them. However, even if we removed these cases from our sample, our findings would not change substantially because these account for merely seven (4%) of 193 unverified publications. Second, we may have underestimated the processing time for publications listed as “forthcoming” or “in press.” We assumed that 18 to 30 months would be sufficient for an accepted manuscript to appear in a published issue, or at the very least, on the accepting journal’s website. However, it is possible that some journals have a longer queue and no online platform on which to display articles pending publication. Of the 57 articles reported as “forthcoming” by applicants, we were unable to verify 24; it is possible that some of these 24 articles were still forthcoming when we conducted our verification process. However, given the increasing popularity of digital publication practices like “online first view” and “e-pub ahead of print,” we estimate this error margin to be low, accounting for a small number of the unverified forthcoming articles. Furthermore, even if we dropped forthcoming articles from our analysis, our overall rate of falsification would change only slightly, from 17% (193/1,127) to 16% (169/1,070). Third, our coders used only academic databases and Internet searches when verifying publications. Some journals and book publishers may not be indexed by the more common academic databases or may not have an Internet presence.
So, again, it is possible that these publications exist and our coders were simply unable to find them. However, in our own professional careers (across three disciplines), we have rarely encountered journals that are not indexed in at least one of these major databases, so we estimate this error margin to be low as well.
A fourth limitation concerns the generalizability of this sample. First, and foremost, we studied applicants to only one institution of higher education. While we suspect this university is relatively representative of Carnegie doctoral universities with highest research activity, we cannot determine whether applicants to this university are representative of applicants for positions at other highest research activity universities or other types of universities and colleges. Also, we did not receive permission from two supervisors who were conducting searches, so applicants to those programs were not included in our study. However, we do not think that the loss of applicants from those two searches is significant because we looked at disciplinary differences in results, and while the differences between individual disciplines were substantial, our findings did not show any systematic differences between disciplinary groupings (e.g., humanities, social sciences, natural sciences, and engineering).19 Ultimately, these results reflect only a 10% sample of one institution’s applicants for non-health science positions. Yet, as the first known attempt to quantify the extent of CV falsification in non-health science fields, we believe our findings are a reasonable approximation of the phenomenon.
A final limitation concerns the significance and meaning of our findings. Although we wanted to ascertain how many people are lying on their CV, ultimately, we were only able to determine how many applicants have unverified or inaccurate research citations on their CV. Without knowing their intent and motivations, we do not know whether these unverified and inaccurate citations are deliberate misrepresentations or honest mistakes. This is why we have been careful to use the terms “unverified” and “inaccurate” rather than “misrepresentations” or “falsifications” or “lies” to describe these research citations.20 We tried to exclude honest mistakes in our accounting, and the 193 instances of unverified or inaccurate citations include only errors that benefited the applicant and appeared to be self-promoting. We found an additional 27 research citations with the wrong title, wrong year, wrong journal, or authorship demotion, but we did not include these in our figures because they were not self-promoting errors. Even so, it is also possible that some of the 193 unverified and inaccurate claims that were self-promoting were simply honest mistakes.
Despite these limitations, we contend that our findings indicate a real problem.
Theory
We have a number of hypotheses to explain our findings, some pointing to honest error and some pointing to deliberate misrepresentation, falsification, and outright lying.
There are at least three reasons why some instances of unverified and inaccurate citations might be honest error rather than deliberate attempts to mislead search committee members. First, some of the inaccuracies in authorship order could be explained by poor communication among a team of collaborators, leading to confusion about authorship order that does not get corrected (Kuo et al., 2008; Nosnik, Friedmann, Nagler, & Dinlenc, 2010). Second, some of the unverified journal articles and unverified forthcoming journal articles could be explained by an applicant misunderstanding (or failing to appreciate) the various stages of publication. Some applicants, particularly graduate students or new PhDs who have not been well mentored, may not be aware of the distinctions between a paper that is published and one that is forthcoming, accepted, conditionally accepted, invited to revise and resubmit, under review, or a working draft that one hopes to submit soon (Goe et al., 1998; Hsi, Hotaling, Moore, & Joyner, 2013). Finally, some of the unverified journal articles and unverified forthcoming journal articles could be explained by a combination of failure to understand the distinctions between the various stages of publication and poor coordination among a team of authors (Boyd, Hook, & King, 1996). For example, a lead author on a co-authored paper might receive a conditional acceptance, list the paper as forthcoming on his vitae, but then experience delays in coordinating the team of authors to complete the revisions such that the paper still does not appear in print or online—even 30 months later.
Regarding deliberate misrepresentation, there are at least three reasons to suspect that at least some of our findings, particularly our findings of unverified articles and unverified forthcoming articles, are falsifications and lies. The first hypothesis is that some applicants are telling “little lies” by deliberately blurring the distinctions between various stages of publication. Unlike the honest error hypothesis, these applicants know the difference between the various stages of publication and exploit these differences—along with the low likelihood of getting caught—to misrepresent their accomplishments. For example, somebody who revised and resubmitted a manuscript might list it as published on their vitae, or somebody who has a manuscript under review (or hopes to soon) might list it as “forthcoming” on their vitae. These might be “little lies” in the sense that the claim is not true at the time it is written, or even when the application is submitted, but the applicant hopes it will be true when the search committee reviews the vitae.
The second hypothesis is that applicants are telling “big lies” because they need a job, which is supported by the general research on research misconduct. Situational factors that create acute periods of stress in a researcher’s professional or personal life, such as a looming deadline, the loss of a family member, or relationship difficulties, have been correlated with questionable research practices and research misconduct (Davis, Riske-Morris, & Diaz, 2007; Mumford & Helton, 2002). Acute stress might lead people to more risky behaviors, or it might simply overwhelm the meta-cognitive resources needed for moral reasoning and analytical problem solving (Mumford & Helton, 2002). Indeed, acute stress is a possible explanatory factor in a recent case of misconduct at Colorado State University, where financial and marital problems influenced an assistant professor’s decision to fabricate a job offer in an effort to secure a raise (Stripling & Zahneis, 2018). In the context of our study, a graduate student who is entering the job market and concerned about how her research record will compare against other candidates might make up a book chapter or a forthcoming article based on a prior seminar paper.
Finally, the third hypothesis is simply that “some scientists are psychopaths” (James, 1995, p. 474).21 That is, some academics are liars, who lie well, frequently, without hesitation, without regret, and seemingly without getting caught. Although we have no evidence of this phenomenon in our sample, the hypothesis is supported by research on research misconduct that shows personality traits such as narcissism and grandiosity are commonly recognized influencing factors for questionable research practices and research misconduct (Antes et al., 2007; Davis et al., 2007; Kornfeld, 2012; McCook, 2016). Narcissism is associated with a disproportionately high level of confidence and sense of entitlement, and correlated with a number of undesirable behaviors in research, including deception (Antes et al., 2007). Grandiosity is sometimes associated with a “messianic complex,” which can manifest as certainty in the superiority of one’s own judgment. This can lead a researcher to believe that a particular hypothesis or theory is true, in fact so true that the researcher need not bother testing or proving it (Davis et al., 2007; Kornfeld, 2012). In the context of our study, an applicant who firmly believes he is qualified for a job might not think he needs to prove his research potential beforehand. Similarly, a researcher who thinks he is destined to become a superstar might present himself as a superstar before he has actually earned superstar status. Indeed, such sociopathic behaviors have been attributed to Anoop Shankar (Aronowitz & Dokoupil, 2014) and, more recently, Michael LaCour (Munger, 2015; Singal, 2015b). In these cases, the researchers presented themselves as superstars by falsifying their CV: Shankar falsified several degrees and multiple publications (Aronowitz & Dokoupil, 2014), and LaCour falsified nearly US$1 million in external funding as a graduate student (Singal, 2015a).
However, even if this hypothesis is true—that is, even if some researchers have pathological personality traits that make CV falsification more likely—even that cannot fully explain why 44% of all applicants (56% of authors) had at least one unverified or inaccurate, self-promoting research citation on their vitae. The prevalence of pathological narcissism in the general population is estimated at 0.5% to 1% (American Psychiatric Association, 2000). When applied to our sample of 180 applicants, this would account for approximately two of the 79 applicants who submitted a vitae with an unverified or inaccurate research citation.
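The back-of-envelope arithmetic behind this estimate is straightforward (illustrative only):

```python
# Applying the estimated 0.5%-1% population prevalence of pathological
# narcissism to the 180 applicants in the sample.
applicants = 180
low_rate, high_rate = 0.005, 0.01       # 0.5% and 1% prevalence bounds
expected_low = applicants * low_rate    # about 0.9 applicants
expected_high = applicants * high_rate  # about 1.8, i.e., roughly two applicants
```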
While we do not know why applicants have unverified and inaccurate research citations on their CVs, we do know that many applicants do, and we have good reason to believe that some (perhaps many) are deliberate misrepresentations of their research accomplishments.
Questionable Research Practices (QRP), Detrimental Research Practice (DRP), and Misconduct
The CV is an essential form of self-representation in academia that details a scholar’s educational credentials, professional experience, and professional accomplishments. The CV documents—in great detail—academic achievements that attest to the individual’s knowledge, skill, and expertise. Although the order and content of information may take different forms from one academic discipline to another, a CV is presumed to be an accurate summary of a scholar’s preparation and related accomplishments. More importantly, the CV attests to the individual’s specialized qualifications for sometimes rarified work. Particularly when presented as part of an application for an academic position, a CV is the primary source of information about a scholar’s contributions to, and relative standing in, a given field.
The inclusion of inaccurate information in an academic CV may be difficult to detect without detailed research because information about an individual’s graduate education, specialty training, and prior employment is seldom publicly available, and—while generally available—an individual’s academic publications may span a number of journals or publishers that search committee members do not read on a regular basis. While universities and research institutes typically consult a candidate’s references as part of the hiring process, the complexity of an academic CV is such that only the individual can truly know whether it is true and complete.
Because colleagues trust that an academic’s CV presents a full and honest portrait, the falsification of information in a CV is problematic for three reasons: it (a) misleads administrators and search committees about an applicant’s ability to do the job for which the vitae was submitted, (b) is unfair to other applicants who submit truthful CVs and are passed over as less qualified, and (c) undermines the trust among colleagues that is essential to all academic endeavors, including research.
While the importance of the CV is widely recognized in academic research, scholars have historically paid little attention to the accuracy of academic CVs as an aspect of research integrity. Although not specifically named as such, CV falsification falls directly into the category of questionable research practices (QRP), defined in 1992 by a panel from the National Academy of Sciences (NAS; 1992) as behaviors that “violate traditional values of the research enterprise” (p. 5). In today’s terms, falsifying a CV would be easily recognized as a detrimental research practice (DRP), defined in 2017 by the National Academies of Sciences, Engineering, and Medicine as an action or behavior that harms the integrity of the research enterprise (National Academies of Sciences, Engineering, and Medicine [NASEM], 2017). Moreover, the intentional inclusion of incorrect or inaccurate information in a CV provided in a research context, such as a biosketch submitted as part of a grant application, might even constitute formal research misconduct, defined in 2000 by the Office of Science and Technology Policy (OSTP; 2000) as
…fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. (p. 1)
In response to public comments on the 2000 federal misconduct policy, OSTP further specified:
…(M)isrepresentation of a researcher’s qualifications or ability to perform the research in grant applications or similar submissions may constitute falsification or fabrication in proposing research. (p. 5)
This characterization has parallels in the national policies of several other top research-funding countries, where research misconduct includes misrepresentation of resumes and unjustified claims of authorship, particularly in grant proposals (Resnik, Rasmussen, & Kissling, 2015). Reporting a publication that does not exist on a CV seems to satisfy OSTP’s (2000) definition of fabrication, which is “making up data or results and recording or reporting them” (p. 1). In such instances, the applicant is reporting a made-up research accomplishment.22 Similarly, authorship insertion, authorship promotion, and authorship omission seem to satisfy OSTP’s (2000) definition of falsification, which is “manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record” (p. 1). In these instances, the applicant is reporting manipulated information about research accomplishments that does not accurately represent the research record. Indeed, both the Office of Research Integrity (ORI) at the Department of Health and Human Services (DHHS) and the Office of the Inspector General (OIG) at the National Science Foundation (NSF) consider researchers’ falsification of credentials and accomplishments to be research misconduct, and both have prosecuted individuals who submitted a falsified CV or biosketch as part of a grant application to their agency (Galbraith, 2017; Parrish, 1996).
Best Practices
There are a number of things that individuals and institutions in the academic community can and should do to minimize both the incidence and the effect of CV falsification (see Table 5).
Table 5.
Best Practices.
| Applicants should |
| Graduate programs and advisors should |
| Search committees should |
| Journal Editors should |
Note. CV = curriculum vitae.
Just as academics should report their research findings clearly and completely, they should report their education, academic experience, and research accomplishments on their CV in a clear and complete fashion. This includes (a) choosing a single citation style (ideally one endorsed by the applicant’s discipline) and using it consistently throughout the CV; (b) bolding one’s name in the list of authors in the order in which it appears on the publication; (c) including the digital object identifier (DOI), when available, at the end of the citation; (d) separating published, forthcoming, under review, and in-progress (not submitted) works in different sections with subheadings; and (e) providing complete information about credentials and degrees (e.g., including the city in which schools are located). In presenting their publications, academics should not use “with” or “co-authored with” language in lieu of providing the order of authors in a citation, because this language can misrepresent all parties’ contributions to the project. Finally, when the publication appears in print, academics should cross-reference the final version with the citation reported on their CV to check for errors and ensure consistency. Titles, journals, and authorship order may change as projects evolve, and errors can be perpetuated when academics “cut and paste” entries as they transition from “works in progress” to “under review” to “forthcoming” to “published.”
Graduate programs should include formal instruction and mentoring for students on constructing a truthful CV. In addition, academics serving as graduate student advisors should do four things to promote the accurate reporting of their students’ research accomplishments. First, advisors should help students understand why it is important to present their research accomplishments accurately. Second, advisors should help students understand how to report their research accomplishments accurately and completely, including helping advisees understand the different stages of publication and how to accurately present research projects at each stage. Third, advisors should review the CV of advisees for whom they write letters of recommendation and address any misrepresentation promptly. Fourth, advisors should provide advisees with a website or other place in which to publicly share their profile and CV, and make sure the public version of the CV matches the accomplishments they report to the advisor.
Academics serving on search committees or in administrative positions overseeing hiring processes also have a responsibility to ensure the integrity of CVs. Job announcements and calls for applications should provide specific instructions on what applicants should include in their CV, and warn applicants that the search committee might fact-check submitted vitae.23 While it may seem unduly suspicious for a department to issue such a warning, early adopters of programs such as Turnitin likely experienced a similar reluctance to check manuscripts for plagiarism; now such verification is not only common, but it is considered a best practice in teaching (see Note 22). Warning applicants that CVs will be fact-checked might be sufficient to deter some applicants who would otherwise have reported falsified research accomplishments. We also recommend that search committee members actually fact-check the CVs of applicants shortlisted for interviews. Ideally, this would happen as soon as a short list of candidates is identified; at the very latest, a careful review should be conducted before candidates are invited for on-campus interviews. The process is not onerous: it took our coders an average of 29 min to fact-check each CV in our sample, even though five applicants were applying for senior-level positions with multiple publications, and the coders were also collecting demographic data. We estimate that it would take an average of 20 min per vitae to verify publications for entry-level positions (without collecting demographics).
Finally, journal editors should help authors understand how to report the status of their manuscripts by making it clear when an article is “under review,” “conditionally accepted,” “accepted,” “forthcoming,” or “published.” For example, when corresponding with an author to confirm receipt of a manuscript, the message could include a statement, “At this point in the process you may say that your manuscript is under review.” While some journals already include such statements in their communications with authors, and others provide this information more generally in their online instructions to authors, this is by no means a common practice. Making the manuscript’s status clear might help to reduce genuine confusion among graduate students and early career scholars about the various stages of the publishing process, and serve as a reminder that presenting the project otherwise is a lie.
Research Agenda
This research is the first investigation—to our knowledge—of unverified and inaccurate content in CVs for positions in non-health science disciplines. As with many initial investigations, these results leave us with more questions than answers, and thus many directions for future work.
Methodologically, we have three recommendations for future research: (a) a complete analysis (rather than a 10% sample) of all applicants to more than one institution, to fully ascertain disciplinary differences in falsification and to ensure there is no bias in the estimate from this sample; (b) a comparative analysis of applicants at a variety of institutions, to determine whether CV inaccuracies are a problem only for research institutions or also for teaching institutions, small liberal arts colleges, and regional institutions; and (c) more robust demographic analyses, to determine whether falsifications are more likely to come from particular groups of applicants.
Substantively, we have three recommendations for future research. First, researchers should examine whether the recommendations from previously published studies have been adopted and whether these practices have been effective. Many of the studies of applicants to residency and fellowship programs recommended that programs modify their application instructions to require applicants to include the PubMed identifier for each article, or a reprint of any article not indexed in PubMed. We could find no follow-up studies assessing whether programs implemented these changes and, if so, whether they found a reduction in the incidence of unverified and inaccurate citations on applicant vitae.
Second, researchers should study the behavior of administrators and search committee members as they review applicants’ CVs to identify best practices for assessing candidates. We know that applicants who include unverified or inaccurate information on their CV bias the job market in their favor, but we do not know whether or how this bias would be corrected if and when search committee members discovered misrepresentations that they believe are deliberate. A recent study in a non-academic setting suggests that the discovery of misrepresentation does not always disqualify an applicant (Kuhn, Johnson, & Miller, 2013).
Finally, researchers should study the relationship between an applicant’s willingness to include unverified or inaccurate information on their vitae and their likelihood to engage in other types of unethical behavior. We contend that at least some of the unverified and inaccurate citations we identified were willful misrepresentations, and we are concerned that academics who are willing to misrepresent themselves on their CV might also be willing to misrepresent their research findings through other types of QRP, DRP, or FFP.24
Educational Implications
As discussed above, graduate programs should provide instruction and mentoring to their senior students on how to construct a truthful CV based on disciplinary standards. Formal and informal instruction in the responsible conduct of research can address both why it is important to the integrity of the research enterprise for scholars to present their education, employment, and publications accurately to others, and how to craft a CV that conveys such information honestly and completely. In particular, as students and young investigators learn to cite others’ work, they should take note of the ways in which forthcoming work is referred to in the literature, and how they can describe and discuss their own pending contributions.
Acknowledgments
The authors thank Ian Rockett for his inspiration, and Franchesca Nestor, Maxwell Nimako, Ashley Brash, and Madison Canales for their contributions to this project. The authors would also like to thank the WVU ADVANCE Center and the Eberly College of Arts and Sciences at West Virginia University for their support.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: In kind support from the Eberly College of Arts and Sciences at West Virginia University; partial funding from a subaward from the WVU Program for Retaining Institutional Diversity and Equity (NSF Award 1007978).
Author Biographies
Trisha Phillips is associate professor of political science at West Virginia University. She is a philosopher who engages in empirical and conceptual research on research integrity. She conceived the study with Heitman, designed the study with Cossman, drafted several sections of the manuscript, and managed the manuscript preparation process.
R. Kyle Saunders is a PhD student in sociology at Florida State University, whose main research interests employ mixed methods to study mental health disparities and sexuality. His role in this project was data collection and coding, drafting the methods section of the manuscript, and reviewing and approving the final version.
Jeralynn Cossman is professor of sociology and chair of the Department of Sociology and Anthropology at West Virginia University. She is a medical sociologist whose work examines spatial disparities in contemporary American mortality. On this project, she worked with Phillips to design the project; she also supervised data collection, coding, and analysis; drafted several sections of the manuscript; and reviewed and approved the final version.
Elizabeth Heitman is professor in the Program in Ethics in Science and Medicine at the University of Texas Southwestern Medical Center whose work spans research in research integrity and education in research ethics and responsible conduct of research (RCR). With Phillips, she conceived the project and explored early studies on the falsification of CVs in academics; critically revised, edited, and expanded drafts of the manuscript; and reviewed and approved the final version.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Notes
Frumovitz et al. (2012); Hsi et al. (2013); Kaley, Bornhorst, Wiggins, and Yared (2013); Kuo et al. (2008); Nosnik et al. (2010); Sater, Coupland, Zhang, and Nguyen (2012); Simmons, Kim, Zins, Chiang, and Oelschlager (2012); and Wiggins (2010). In his meta-analysis of 18 studies, Wiggins (2010) notes that the variations in methodologies, particularly what constitutes misrepresentation, make it difficult to average across studies.
Eighteen months represents the length of time from the end of our vitae collection window (June 2016) to the beginning of our analysis (January 2018). However, many vitae were submitted earlier in our window (summer or fall of 2015), so for some vitae, the wait was approximately 30 months.
We also tallied book reviews and manuscripts under review, but did not attempt to verify them, and do not report them here.
While many disciplines may not use the phrase “forthcoming,” articles are sometimes accepted for publication long before they appear in print or online. Humanities and social science disciplines frequently use this or similar terms to indicate that an article has been accepted but is not yet available to a readership. For our purposes, “forthcoming” represents articles that were listed as “accepted,” “in press,” “forthcoming,” or similar variants.
This database includes the ABI/INFORM Collection, JSTOR Arts & Sciences Collections I-XII, JSTOR Business IV Collection, JSTOR Current Scholarship Journals, JSTOR Life Sciences Collection, ScienceDirect, and WorldCat.org.
For the 21 studies of applicants to residency and fellowship programs, many conducted similar electronic searches primarily relying on Medline, PubMed, and Google. One study also searched the library stacks (Goe et al., 1998); one study attempted to contact journals for verification (Boyd et al., 1996).
While we understand that authorship practices differ among disciplines, for our purposes, we assumed the lead author to be in a most desirable position. Additionally, research citations in which applicants did not list co-authors as they appeared on the official publication, but instead described the publication as “co-authored with,” were counted as authorship promotions when the applicant was not the lead author.
Research citations in which applicants did not list co-authors as they appeared on the official publication, but instead used “et al.,” were coded as authorship omission.
Each of these categories (unverified journal, unverified publication, unverified forthcoming publication, author insertion, author promotion, author omission) was previously used in at least one of the studies of residency and fellowship applications, but none of the previous studies used all six.
We initially analyzed 181 vitae, but dropped one case as anomalous and likely to skew our findings. The applicant listed 80 publications, each citation was listed as “co-authored with,” and in none of them was the applicant the lead author. According to our criteria, this would constitute 80 incidences of authorship promotion.
The majority of the 39 applicants with no publications were very early career scholars who were “all but dissertation” (ABD) at the time of application, or whose PhDs were less than 1 year old. If we had included book reviews as publications, and counted applicants claiming book reviews as authors, then the number of authors would have been 145 of 180 (79%).
For one applicant, 100% of the publications listed were inaccurate citations. In this case, the applicant listed two publications, and in both citations, the applicant used “co-authored with” language when the applicant was not the lead author. In accordance with our method, both publications were coded as “authorship promotion.”
There were an additional seven citations that did not contain sufficient information for our coders to categorize and verify the publication, so we did not include these in our figures. We also tallied, but did not attempt to verify, 71 book reviews.
Four of these authorship promotions were cases in which the applicant used “with” or “co-authored with” and the applicant was not lead author.
There were three cases in which the applicant used “et al.” instead of listing all authors. These three cases were included in the 27 instances of authorship omissions.
Three publications were double coded, so these 196 instances of misrepresentation characterize 193 publications.
Our demographic information is based on the graduation date and institution from which the applicant claimed to earn a PhD. We collected but did not confirm this information.
Wiggins (2010) did not report the percentage of publications that were unverified or inaccurate; he reported only the number and percentage of applicants with at least one unverified or inaccurate publication citation.
The individual disciplinary numbers are small, which is why we do not report them here. Because we coded only 10% of our sample, we analyzed fewer than 10 applicants for most of the disciplines represented in our sample.
While many of the previously reported studies use “misrepresented” or “falsification,” Frumovitz et al. (2012) also “took care to describe these publications as ‘unverifiable’” because they did not have evidence that the unverifiable citations were overt attempts to misrepresent accomplishments (p. 4).
Davis et al. (2007) also present this quote from James.
Sekas and Hutson (1995) draw a similar conclusion.
Thanks to members of the audience at the Annual Meeting for the American Society for Bioethics and Humanities for this suggestion (Anaheim, CA, 2018).
At least three articles reporting falsifications in applications to residency and fellowship programs raise similar concerns about the relationship between misrepresentation and willingness to engage in other unprofessional behavior (Learman, 2012; Simmons et al., 2012; Yang, Schoenwetter, Wagner, Donohue, & Kuettel, 2006).
References
- American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed., text rev.). Washington, DC: Author.
- Antes AL, Brown RP, Murphy ST, Waples EP, Mumford MD, Connelly S, & Devenport LD (2007). Personality and ethical decision-making in research: The role of perceptions of self and others. Journal of Empirical Research on Human Research Ethics, 2(4), 15–34.
- Aronowitz N, & Dokoupil T (2014, September 9). Ivory tower phony? Sex, lies and fraud alleged in W. Va. university case. Nbcnews.com. Retrieved from https://www.nbcnews.com/news/us-news/ivory-tower-phony-sex-lies-fraud-alleged-wva-university-n199491
- Basinger J (2004, March 5). 4 years after a scandal, a president steps down. The Chronicle of Higher Education. Retrieved from https://www.chronicle.com/article/4-Years-After-a-Scandal-a/30708
- Boyd A, Hook M, & King L (1996). An evaluation of the accuracy of residency applicants’ curricula vitae: Are the claims of publications erroneous? Journal of the American Academy of Dermatology, 35, 606–608.
- Davis MS, Riske-Morris M, & Diaz SR (2007). Causal factors implicated in research misconduct: Evidence from ORI case files. Science and Engineering Ethics, 13, 395–414.
- Frumovitz M, Kriseman M, Sun C, Blumenthal-Barby J, Sood A, Bodurka D, & Soliman P (2012). Unverifiable accomplishments and publications on applications for gynecologic oncology fellowships. Obstetrics & Gynecology, 119, 504–508.
- Galbraith K (2017). Life after research misconduct: Punishments and the pursuit of second chances. Journal of Empirical Research on Human Research Ethics, 12, 26–32.
- Goe L, Herrera A, & Mower W (1998). Misrepresentation of research citations among medical school faculty applicants. Academic Medicine, 73, 183–186.
- Henle C, Dineen B, & Duffy M (2017). Assessing intentional resume deception: Development and nomological network of a resume fraud measure. Journal of Business and Psychology, 34, 87–106. doi:10.1007/s10869-017-9527-4
- Hsi R, Hotaling J, Moore T, & Joyner B (2013). Publication misrepresentation among urology residency applicants. World Journal of Urology, 31, 697–702.
- James W (1995). Fraud and hoaxes in science. Nature, 377(6549), Article 474.
- Kaley J, Bornhorst J, Wiggins M, & Yared M (2013). Prevalence and types of misrepresentation of publication record by pathology residency applicants. Archives of Pathology & Laboratory Medicine, 137, 979–982.
- Kornfeld DS (2012). Research misconduct: The search for a remedy. Academic Medicine, 87, 877–882.
- Kuhn K, Johnson T, & Miller D (2013). Applicant desirability influences reactions to discovered resume embellishments. International Journal of Selection and Assessment, 21, 111–120.
- Kuo P, Schroeder R, Shah A, Shah J, Jacobs D, & Pietrobon R (2008). “Ghost” publications among applicants to a general surgery residency program. Journal of the American College of Surgeons, 207, 485–489.
- Learman LA (2012). Bibliografake: More common than we thought? Obstetrics & Gynecology, 119, 493–494.
- Lewin T (2007, April 27). Dean at MIT resigns, ending a 28-year lie. The New York Times. Retrieved from https://www.nytimes.com/2007/04/27/us/27mit.html
- McCook A (2016, August 29). Why do scientists commit misconduct? Retraction Watch. Retrieved from https://retractionwatch.com/2016/08/29/why-do-scientists-commit-misconduct/
- Mumford MD, & Helton WB (2002). Organizational influences on scientific integrity. In Steneck NH & Scheetz MD (Eds.), Investigating research integrity: Proceedings of the first ORI research conference on research integrity (pp. 73–90). Rockville, MD: Office of Research Integrity.
- Munger M (2015, June 15). L’affaire LaCour. The Chronicle of Higher Education. Retrieved from https://www.chronicle.com/article/LAffaire-LaCour/230905
- National Academies of Sciences, Engineering, and Medicine. (2017). Fostering integrity in research. Washington, DC: National Academies Press. Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK475954/#sec_37
- National Academy of Sciences, Committee on Science, Engineering, and Public Policy. (1992). Responsible science: Ensuring the integrity of the research process (Vol. 1). Washington, DC: National Academies Press. Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK234525/
- Nosnik I, Friedmann H, Nagler H, & Dinlenc C (2010). Resume fraud: Unverifiable publication of urology training program applicants. The Journal of Urology, 183, 1520–1523.
- Office of Science and Technology Policy. (2000). Federal research misconduct policy. Federal Register, 65, 76260–76264. Retrieved from https://ori.hhs.gov/definition-misconduct
- Parrish D (1996). Falsification of credentials in the research setting: Scientific misconduct? The Journal of Law, Medicine & Ethics, 24, 260–266.
- Resnik DB, Rasmussen LM, & Kissling GE (2015). An international study of research misconduct policies. Accountability in Research, 22, 249–266.
- Sater L, Coupland S, Zhang X, & Nguyen L (2012, October). Publications misrepresentation: Evaluating honesty among otolaryngology residency applicants in Canada. Presented at the International Conference on Residency Education, Ottawa, Canada.
- Sekas G, & Hutson W (1995). Misrepresentation of academic accomplishments by applicants for gastroenterology fellowships. Annals of Internal Medicine, 123, 38–41.
- Shaffer W, Rollo F, & Holt C (1988). Falsification of clinical credentials by physicians applying for ambulatory-staff privileges. The New England Journal of Medicine, 318, 356–358.
- Simmons H, Kim S, Zins A, Chiang S, & Oelschlager A (2012). Unverifiable and erroneous publications reported by obstetrics and gynecology residency applicants. Obstetrics & Gynecology, 119, 498–503.
- Singal J (2015a, May 26). The largest funding source listed on Michael LaCour’s CV is made-up. Nymag.com. Retrieved from https://www.thecut.com/2015/05/lacour-made-up-his-biggest-funding-source.html
- Singal J (2015b, May 29). The case of the amazing gay-marriage data: How a graduate student reluctantly uncovered a huge scientific fraud. Nymag.com. Retrieved from https://www.thecut.com/2015/05/how-a-grad-student-uncovered-a-huge-fraud.html
- Stripling J, & Zahneis M (2018, September 4). The big lie. The Chronicle of Higher Education. Retrieved from https://www.chronicle.com/interactives/big-lie
- Wiggins M (2010). A meta-analysis of studies of publication misrepresentation by applicants to residency and fellowship programs. Academic Medicine, 85, 1470–1474.
- Yang GY, Schoenwetter MF, Wagner TD, Donohue KA, & Kuettel MR (2006). Misrepresentation of publications among radiation oncology residency applicants. Journal of the American College of Radiology, 3, 259–264.
