Abstract
In this study we gathered data on the misconduct policies of social science journals and combined them with data from our previous study of journal misconduct policies, which included too few social science journals for separate analysis. Consistent with our earlier finding, journal impact factor was the only variable significantly associated with having a formal (written) misconduct policy (odds ratio = 1.72, p < 0.01). Neither field of science (physical, biomedical, or social) nor publisher had a significant effect on whether a journal had a policy. Another important finding is that fewer than half of the journals that responded to the survey had a formal misconduct policy.
Keywords: ethics, impact factor, misconduct policies
INTRODUCTION
Journal editors sometimes have to deal with allegations of research misconduct in published articles (LaFollette, 1996). This task can be complex and difficult, because editors must protect the integrity of the journal and the research record while ensuring that people accused of misconduct are treated fairly. Editors must also be mindful of potential legal liability if they damage an innocent scientist’s reputation (LaFollette, 1996). In response to a series of embarrassing scandals, journals have begun to think more carefully about how to deal with misconduct (Shamoo and Resnik, 2009). In 1999, the Committee on Publication Ethics (COPE, 1999) issued a consensus statement calling for journals to develop misconduct policies. Other authorities, such as the National Academy of Sciences (2002), have made similar recommendations. In 2006, COPE issued guidelines for dealing with misconduct in scientific journals, and the Council of Science Editors (2006) published a white paper on ethics policy development.
In a previous publication (Resnik et al., 2009), we surveyed editors from 399 scientific journals drawn from the Institute for Scientific Information (ISI) Science Citation Index to determine the proportion of journals that had developed misconduct policies; 47.7% of the respondents said that their journal had a formal (written) policy. Journal impact factor was the only variable significantly associated with having a policy, and the association was positive but modest. One limitation of that study is that it did not include enough social science journals for separate analysis: only 5.6% of the journals that responded were from the social sciences. The goal of our present study was to gather additional data on social science journals and combine those data with the data from our previous research, in order to determine whether impact factor, field of science (physical sciences and engineering, biomedical sciences, or social sciences), or publisher is significantly associated with having a formal misconduct policy.
MATERIALS AND METHODS
The procedures used for gathering the additional data were similar to those described in Resnik et al. (2009). We selected a random sample of social science journals from the ISI Journal Citation Reports, Social Sciences Edition, which contained 1,866 journals. We excluded mathematics, statistics, and philosophy journals. We attempted to contact 400 journals by e-mail, asking them to provide us with information about their misconduct policies. If we did not receive a response within 10 days, we sent a reminder e-mail. We collected data on each journal’s impact factor, field of science, and publisher, and on whether the journal had a formal misconduct policy.
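As an illustration of the sampling step only, the following SAS sketch draws a simple random sample of 400 journals. The paper does not state how the sample was drawn, and the dataset name, variable layout, and seed below are hypothetical.

```sas
/* Hypothetical sketch: draw a simple random sample of 400 journals
   from the full post-exclusion list of social science journals.
   Dataset names and the seed are illustrative, not from the study. */
proc surveyselect data=ssci_journals out=sample
                  method=srs       /* simple random sampling       */
                  sampsize=400     /* target sample size           */
                  seed=20090601;   /* fixed seed for repeatability */
run;
```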
We defined a misconduct policy as rules or statements about the definition of misconduct or procedures for responding to misconduct, such as how to report allegations of misconduct or how to correct the scientific literature in response to confirmed cases of misconduct (Resnik et al., 2009). While the U.S. federal government and many other organizations define research misconduct as fabrication, falsification, or plagiarism (FFP), we did not limit definitions of misconduct to FFP alone, because many research organizations recognize types of misconduct other than FFP, such as misuse of human or animal subjects in research (Shamoo and Resnik, 2009). We did not consider journal policies pertaining to duplicate publication, simultaneous submission, or copyright permissions to be misconduct policies (Resnik et al., 2009). The National Institutes of Health’s Office of Human Research Protections classified our study as exempt research.
For our data analysis, we attempted to determine whether having a formal policy was associated with impact factor, field of science, or publisher. We limited our analysis to six major commercial publishers: Elsevier, Wiley, Springer, Taylor and Francis, Sage, and Lippincott, Williams, and Wilkins. Because we expected journals within a publishing house to behave as a cluster with similar policies, we treated publisher as a random effect and analyzed the data using a mixed effects logistic regression model. All analyses were performed in SAS.
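The paper does not state which SAS procedure was used. As a minimal sketch under that caveat, a mixed effects logistic regression with a random publisher intercept could be specified with PROC GLIMMIX along the following lines (the dataset and variable names are hypothetical):

```sas
/* Hypothetical sketch: mixed effects logistic regression for a
   binary outcome (has_policy = 1 if the journal has a formal
   misconduct policy), with a random intercept for publisher.
   Dataset and variable names are illustrative, not from the study. */
proc glimmix data=journals method=laplace;
  class field publisher;
  model has_policy(event='1') = impact_factor field
        / dist=binary link=logit solution oddsratio;
  random intercept / subject=publisher;  /* publisher as cluster */
run;
```

Treating publisher as a random intercept acknowledges that journals from the same house may share editorial infrastructure, so their policy outcomes are not independent observations.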
RESULTS
One hundred fifty-three social science journals responded to our query, for a response rate of 38.3%. Fifty-one journals (33.3%) had a formal misconduct policy, while 102 journals (66.7%) did not. The journals were published by Elsevier (15%), Wiley (13.7%), Taylor and Francis (9.8%), Sage (8.5%), the American Psychological Association (3.9%), and other publishers (49.1%). Journal impact factor ranged from 0.08 to 16, with a mean of 1.51.
We combined the new data with the data from our previous study, yielding 350 journals in the combined dataset, for a total response rate of 43.8%. One hundred forty-four journals (41.1%) had a formal misconduct policy, and 206 (58.9%) did not. Sixty-five journals (18.6%) were from the physical sciences and engineering, 125 (35.7%) were from the biomedical sciences, and 160 (45.7%) were from the social sciences; seven of the social science journals came from the previous study. Sixty-two biomedical journals (49.6%) had a formal misconduct policy, as did 31 physical sciences/engineering journals (47.7%) and 51 social science journals (31.9%). Sixty-nine journals (19.7%) were published by Elsevier, 42 (12%) by Wiley, 29 (8.3%) by Taylor and Francis, 24 (6.9%) by Springer, 14 (4%) by Sage, and 10 (2.9%) by Lippincott, Williams, and Wilkins. One hundred sixty-two journals (46.3%) were listed as “other” because their publishers had fewer than 10 journals in the dataset. The mean impact factor of all the journals was 1.91, with a range from 0 to 33.5. The biomedical sciences had the highest mean impact factor (2.39), followed by the physical sciences and engineering (2.09) and the social sciences (1.51). Consistent with our earlier study (Resnik et al., 2009), we found that only the impact factor of the journal was significantly associated with whether a journal had a formal misconduct policy (odds ratio = 1.72; 95% confidence interval: 1.21–2.44; p < 0.01). We did not find field of science to have a significant effect on whether a journal had a policy (p = 0.7283), and the interaction between field of science and impact factor was also not significant (p = 0.9175). Thus, the three fields of science had similar odds ratios.
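To make the reported odds ratio concrete (the worked numbers below are illustrative, not from the paper): in a logistic regression with estimated coefficient $\hat\beta$ for impact factor, the odds ratio per one-point increase in impact factor is

$$\mathrm{OR} = e^{\hat\beta} = 1.72,$$

so each additional point of impact factor multiplies the odds that a journal has a formal policy by 1.72. For example, a hypothetical journal with an impact factor of 5 would have odds of having a policy roughly $1.72^{3} \approx 5.1$ times those of an otherwise comparable journal with an impact factor of 2.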
Although a lower percentage of social science journals had a formal misconduct policy (31.9%) than biomedical journals (49.6%) or physical sciences and engineering journals (47.7%), the differences among the three fields were not statistically significant (p = 0.6286), adjusting for publisher as a random effect. The three fields did, however, differ significantly with respect to impact factor. Using a standard linear mixed effects model, we found that the mean impact factors of the three fields differed significantly (p < 0.0001): the mean for social science journals was significantly smaller than the means for both the biomedical sciences (p < 0.0001) and the physical sciences (p = 0.0145), but the biomedical and physical sciences did not differ (p = 0.3139). Impact factor was highly significantly associated with having a policy whether or not we adjusted for field of science (see Table 1).
Table 1. Percentage of journals with a formal misconduct policy and mean impact factor, by field of science.

| Field of science | Journals with a formal misconduct policy | Mean impact factor |
| --- | --- | --- |
| Biomedical sciences | 49.6% | 2.39 |
| Physical sciences/engineering | 47.7% | 2.09 |
| Social sciences | 31.9% | 1.51 |
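The impact-factor comparison summarized above relies on a linear mixed effects model. As a minimal sketch (the procedure choice and all names are assumptions, not taken from the paper), it could be specified as:

```sas
/* Hypothetical sketch: linear mixed effects model comparing mean
   impact factors across the three fields of science, again with a
   random intercept for publisher. Names are illustrative. */
proc mixed data=journals;
  class field publisher;
  model impact_factor = field / solution;
  random intercept / subject=publisher;
  lsmeans field / diff;  /* pairwise comparisons of field means */
run;
```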
DISCUSSION
Our most important finding is that the impact factor of a scientific journal is significantly associated with whether the journal has a formal misconduct policy. The effect of impact factor was highly significant (p < 0.01), which is consistent with our earlier study (Resnik et al., 2009). Although a lower percentage of social science journals had a misconduct policy than journals from the other fields of science, this difference was not statistically significant. The observed differences among fields may instead be explained by differences in the mean impact factors of the journals in those fields: the mean impact factor of social science journals was significantly smaller than the mean impact factors of journals from the physical sciences/engineering or the biomedical sciences.
Journal impact factor is a measure of how frequently the average article in a journal is cited in a particular period (Garfield, 1972). Whether impact factor is a good measure of the quality of a scientific journal is debatable. General journals tend to have higher impact factors than specialized journals because they have a broader readership. Because review articles are cited more frequently than original research articles, journals that publish many review articles tend to have higher impact factors than those that focus more on original research (Grzybowski, 2009). Even though the connection between a journal’s impact factor and its quality is questionable, it is reasonable to assume that journals with high impact factors tend to receive more attention and scrutiny than journals with low impact factors. Thus, it is possible that higher impact journals have developed misconduct policies in response to the extra attention they receive from scientists, reporters, and others. It may also be the case that high impact factor journals have had more incidents of misconduct than low impact factor journals and, therefore, have a greater need to develop policies (LaFollette, 1996). We would like to stress, however, that these possible explanations for the role of impact factor in policy development are speculative at this point, and more research is needed. For example, it would be useful to interview journal editors and ask them what factors have influenced their decisions to develop (or not develop) misconduct policies.
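For reference, the standard two-year impact factor reported in the ISI Journal Citation Reports for a journal in year $y$ is computed as

$$\mathrm{IF}_y \;=\; \frac{\text{citations received in year } y \text{ by items the journal published in years } y-1 \text{ and } y-2}{\text{number of citable items the journal published in years } y-1 \text{ and } y-2}.$$

This construction is what makes the measure sensitive to review content and readership breadth: anything that raises short-term citations to recent items raises the numerator.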
Another important finding is that fewer than half (41.1%) of the journals in our combined dataset had a formal misconduct policy. As noted earlier, COPE, the National Academy of Sciences, and others have recommended that journals develop misconduct policies. Since journal editors have been aware of the problem of misconduct in scientific research for many years, one would expect most journals to have developed misconduct policies by now. Perhaps many journals have not developed formal misconduct policies because their editors do not view misconduct as a common problem. A recent article by Wager et al. (2009) reported that most scientific journal editors believe that misconduct occurs only rarely in their journals. If editors view misconduct as a rare occurrence, they may not be motivated to develop policies to deal with it. Indeed, many of the editors we contacted for this study and the previous one indicated that they had never dealt with a misconduct case in their journal.
One limitation of our research is the low response rate (38.3%) for our second survey, which focused on social science journals; the response rate for our previous survey was 49.4%. A possible explanation for the lower rate in the second survey is that we conducted this research in June, July, and August 2009, when many editors who work at academic institutions in the Northern Hemisphere may have been on summer break. Another explanation is that some of our e-mails may have been rejected by programs that screen out spam and junk e-mail. Additionally, some journals may not have responded to our inquiry because they expected that we could find the information on the journal’s or publisher’s Web site. Although we have no evidence that the low response rate for the second survey skewed our data, we acknowledge that this is a possibility.
Another limitation of our study is that the sample size of the combined dataset (350 journals) was too small to detect the effects of publishers other than the six major commercial publishers to which we limited our data analysis. With a much larger sample, we might have been able to determine whether noncommercial publishers (e.g., Cambridge, Oxford, Public Library of Science) or smaller commercial publishers (e.g., the British Medical Journal group) have a significant influence on policy development.
ACKNOWLEDGMENTS
This research was supported by the Intramural Program of the National Institutes of Health (NIH), National Institute of Environmental Health Sciences (NIEHS). It does not represent the views of the NIH, NIEHS, or the U.S. government. We are grateful to Greg Dinse and Bill Schrader for comments on this manuscript.
REFERENCES
- Committee on Publication Ethics (COPE). (1999). Joint Consensus Conference on Misconduct in Biomedical Research, 28–29 October 1999: Consensus Statement. Available at: http://publicationethics.org/static/2000/2000pdf5.pdf (last accessed September 26, 2009).
- Committee on Publication Ethics (COPE). (2006). A Code of Conduct for Editors of Biomedical Journals. Available at: http://www.publicationethics.org.uk/guidelines/code (last accessed September 26, 2009).
- Council of Science Editors. (2006). White Paper on Promoting Integrity in Scientific Journal Publications. Available at: http://www.councilscienceeditors.org/editorial_policies/white_paper.cfm (last accessed September 26, 2009).
- Garfield, E. (1972). Citation analysis as a tool in journal evaluation. Science, 178:471–479. doi:10.1126/science.178.4060.471.
- Grzybowski, A. (2009). The journal impact factor: How to interpret its true value and importance. Medical Science Monitor, 15(2):SR1–SR4.
- LaFollette, M. (1996). Stealing into Print: Fraud, Plagiarism, and Misconduct in Scientific Publishing. Berkeley, CA: University of California Press.
- National Academy of Sciences. (2002). Integrity in Scientific Research: Creating an Environment that Promotes Responsible Conduct. Washington, DC: National Academy of Sciences.
- Resnik, D., Peddada, S., and Brunson, W., Jr. (2009). Misconduct policies of scientific journals. Accountability in Research, 16:254–267. doi:10.1080/08989620903190299.
- Shamoo, A., and Resnik, D. (2009). Responsible Conduct of Research, 2nd ed. New York: Oxford University Press.
- Wager, E., Fiack, S., Graf, C., Robinson, A., and Rowlands, I. (2009). Science journal editors’ views on publication ethics: Results of an international survey. Journal of Medical Ethics, 35:348–353. doi:10.1136/jme.2008.028324.