ABSTRACT
Aims and Objectives:
This study aimed to assess the nature and prevalence of misconduct in self-reported and nonself-reported biomedical research.
Materials and Methods:
A detailed review of previously conducted studies was performed through PubMed Central, PubMed, and Google Scholar using the MeSH terms “scientific misconduct,” “publications,” “plagiarism,” and “authorship,” and the keywords scientific misconduct, gift authorship, ghost authorship, and duplicate publication. MeSH terms and keywords were searched in combination using the Boolean operators “AND” and “OR.” Of the 7771 articles retrieved, 107 were selected for inspection. The articles were screened for quality and against the inclusion criteria, and 16 were finally selected for meta-analysis. Data analysis was conducted using OpenMeta[Analyst], open-source statistical software based on the R package “metafor.”
Results:
Plagiarism, data fabrication, and falsification were prevalent in most articles reviewed. The prevalence of plagiarism was 4.2% in self-reported and 27.9% in nonself-reported studies. Data fabrication was 4.0% in self-reported and 21.7% in nonself-reported studies. Data falsification was 9.7% in self-reported and 33.6% in nonself-reported studies, with significant heterogeneity.
Conclusion:
This meta-analysis gives a pooled estimate of misconduct in biomedical research fields such as medicine, dentistry, and pharmacy across the world. We found an alarming rate of misconduct in recent nonself-reported studies, which was higher than that in self-reported studies.
Keywords: Data falsification and fabrication, plagiarism, research misconduct
INTRODUCTION
Research misconduct is “Fabrication, falsification, or plagiarism in proposing, performing or reviewing research and reporting research results,” as described by the “US Department of Health and Human Services Office of Research Integrity.”[1] This definition covers only falsification and fabrication of data or misrepresentation of results, but other practices also constitute misconduct, such as safety violations, misuse of funds, or undisclosed conflicts of interest.[2,3,4] If healthcare practitioners rely on information based on fabricated study data, patients may be endangered or harmed.[5,6] Individual research careers might be damaged or destroyed, and the credibility of the research group might suffer greatly.[7] Misconduct erodes the public’s confidence in science and those who practice it. In reaction to the historical mistreatment of human subjects in biomedical research, international conventions were developed, along with federal laws and regulations for the protection of human subjects.[8]
Many individual studies describe the nature and prevalence of misconduct, but there are no recent meta-analyses of these studies. This study offers a systematic review and meta-analysis of survey data on scientific misconduct in proposing, performing, or reporting biomedical research.
MATERIALS AND METHODS
For each survey item, the proportion of respondents who acknowledged committing, or who had witnessed, wrongdoing at least once was calculated. The analysis was restricted to qualitatively comparable types of misconduct that might distort scientific data and results, namely falsification, fabrication, and plagiarism.
SEARCH STRATEGIES
Literature was reviewed for all observational studies concerned with publication misconduct. An electronic search was performed on February 1, 2018, in PubMed Central, PubMed, and Google Scholar using the MeSH terms “scientific misconduct,” “publications,” “plagiarism,” and “authorship,” and the keywords scientific misconduct, gift authorship, ghost authorship, and duplicate publication. MeSH terms and keywords were searched in combination using the Boolean operators “AND” and “OR.” Of the 7771 articles published in English in the past 10 years (2008–2018), 107 were selected for inspection [Figure 1]. All titles found through the online search were checked for relevance, and the abstracts of relevant studies were then assessed; the full text was checked in cases of uncertainty. One hundred seven articles were selected for abstract review. Two independent reviewers (RP and MB) assessed 31 articles individually and studied them in detail. Based on the inclusion–exclusion criteria, 16 articles were finally selected for quantitative analysis. The studies included in this review are observational cross-sectional studies and observational questionnaire-based studies (self-reported and nonself-reported). Publications with biomedical research as their study unit were considered. Outcome measures were scientific misconduct, type of scientific misconduct, and quantitative subjective and objective measures of scientific misconduct.
Figure 1.
Study selection flow chart
Studies were categorized based on the “Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies” provided by the National Heart, Lung, and Blood Institute (NHLBI) of the National Institutes of Health (NIH).[9] Each study was rated on its quality and on the completeness of its outcome data, along with other aspects.
STATISTICAL ANALYSIS
Data analysis was conducted using OpenMeta[Analyst], open-source statistical software based on the R package “metafor.” We estimated the pooled weighted prevalence of plagiarism, fabrication, and falsification of data in self-reported and nonself-reported studies. We used a random-effects model and tested for heterogeneity using Cochran’s Q statistic, with its degrees of freedom (df) and P value.
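As a rough illustration of the model described above, the random-effects pooling and Cochran’s Q test can be sketched in Python. This is a simplified DerSimonian-Laird calculation, not the actual OpenMeta[Analyst]/metafor code, and the event counts below are hypothetical, not the study data.

```python
import math

def pooled_prevalence(events, totals):
    """DerSimonian-Laird random-effects pooling of raw proportions.
    Returns (pooled estimate, 95% CI, Cochran's Q, df)."""
    props = [e / n for e, n in zip(events, totals)]
    # Within-study variance of a proportion: p(1-p)/n
    variances = [p * (1 - p) / n for p, n in zip(props, totals)]
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * pi for wi, pi in zip(w, props)) / sum(w)
    q = sum(wi * (pi - fixed) ** 2 for wi, pi in zip(w, props))  # Cochran's Q
    df = len(props) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * pi for wi, pi in zip(w_re, props)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), q, df

# Hypothetical admissions per study (illustration only)
events = [4, 12, 9, 30, 7]
totals = [262, 315, 120, 2321, 189]
est, (lo, hi), q, df = pooled_prevalence(events, totals)
```

Under this model, studies with smaller within-study variance (large samples, low nonresponse) receive larger weights, which is why individual studies dominate some of the forest plots reported below.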
RESULTS
The original literature search retrieved 7771 abstracts and 31 full-text reports that were potentially relevant to the review. Sixteen reports were included for the meta-analysis.
These 16 studies included 11,039 researchers from various countries. Four studies were from the USA and UK; two were from India; one each was from Pakistan, Iran, Norway, Sweden, Belgium, Malaysia, Nigeria, Australia, and Croatia; and one was an international survey [Table 1].
Table 1.
Characteristics of included studies in review
| Sr. no. | Author | Study population | Sample size | Methodology | Results | Country |
|---|---|---|---|---|---|---|
| 1 | Dhaliwal (2007)[23] | Faculty in a teaching hospital | 95 | Questionnaire self-administered | Research misconduct: 39% | India |
| 2 | Pryor et al. (2007)[24] | Research coordinator | 1645 | SMQ-revised | Plagiarism: 5.2%–66.9% | USA |
| 3 | Wislar (2008)[25] | Journal authors | 630 | Web-based questionnaire | Research misconduct: 21%–11.9% | USA |
| 4 | Titus et al. (2008)[15] | Medical researchers | 2212 | Questionnaire study | Plagiarism: 36.3% | USA |
| 5 | Wager et al.[26] | Editor-in-chief | 524 | Survey questionnaire-based | Plagiarism: 11% | UK |
| 6 | Nilstun et al. (2010)[10] | Medical faculties | 262 | Questionnaire survey | Research misconduct: 10% | Sweden |
| 7 | Ghajarzadeh et al.[27] | Medical faculty members of Tehran University | 120 | Email questionnaires study | Plagiarism | Iran |
| 8 | Broome et al. (2010)[16] | Nursing reviewers | 1675 | Web-based questionnaire survey monkey | Approximately 20% of the reviewers had experienced various ethical dilemmas. | Online International survey |
| 9 | Tijdink et al. (2012)[12] | Flemish biomedical scientists | 315 | Nationwide survey | 15% fabricated, falsified, plagiarized, or manipulated data in the past 3 years | Belgium |
| 10 | Jawaid and Jawaid[28] | Faculty members of various private and public sector medical institutions of Pakistan | 218 | Self-administered questionnaire | Misconduct: 42.7%–19.35% | Pakistan |
| 11 | Hadji et al. (2013)[13] | Iranian authors | 2321 | Email survey | Plagiarism: 4.90% | Iran |
| 12 | Hofmann et al.[22] | PhD medical students | 189 | Questionnaire survey | Publication misconduct 13% | Norway |
| 13 | Nylenna et al.[30] | PhD students | 654 | Questionnaire based | Research misconduct: 68% | Norway |
| 14 | Dhingra and Mishra[17] | Medical faculty | 155 | Questionnaire study | Plagiarism: 53% | India |
| 15 | Rathore et al.[31] | 7 medical colleges of Lahore and Rawalpindi faculty and students | 680 | Attitudes toward plagiarism questionnaire was modified | 25.2% were trained in research ethics | Pakistan |
| 16 | Looi et al.[32] | APAME editors | 151 | Web-based questionnaire through Survey Monkey | Plagiarism (75%) and duplicate publication (58%) | Malaysia |
APAME, Asia Pacific Association of Medical Editors; SMQ, Scientific Misconduct Questionnaire
Five studies (Nilstun et al.,[10] Okonta (2013),[11] Tijdink et al.,[12] Hadji et al.,[13] and Pupovac (2016)[14]) reported self-reported plagiarism, and five studies (Titus et al.,[15] Broome (2010),[16] Tijdink et al.,[12] Dhingra and Mishra,[17] and Pupovac (2016)[14]) gave quantitative data on nonself-reported plagiarism. Eight studies (Nilstun et al.,[10] Okonta (2013),[11] Tijdink et al.,[12] Camilla (2015),[18] Hadji et al.,[13] Pupovac (2016),[14] Habermann (2010),[19] and Patel (2017)[20]) reported fabrication or falsification as the main outcome. Broome et al. (2013)[21] reported various ethical dilemmas of reviewers in an international online survey, and Hofmann et al.[22] reported publication misconduct.
Plagiarism in self-reported studies
The meta-analysis yielded a pooled weighted estimate of 4.2% (95% CI: 1.5–6.8) with significant heterogeneity (Cochran’s Q = 89.01; df = 4; P = 0.001) [Figure 2]. Studies in which researchers were asked about plagiarism they themselves had committed were included in this forest plot. The study by Nilstun et al.[10] was given the highest weight in the meta-analysis because it had the fewest nonresponses.
Figure 2.
Forest plot of admission of plagiarism in self-reports. Study weights: Nilstun et al. 2010, 23.43%; Okonta 2013, 13.19%; Tijdink et al. 2014, 20.48%; Hadji 2016, 22.65%; Pupovac 2016, 20.26%
Plagiarism in nonself-reported studies
The meta-analysis yielded a pooled weighted estimate of 27.9% (95% CI: 12.2–43.6), with significant heterogeneity (Cochran’s Q = 99.56; df = 4; P < 0.001) [Figure 3]. Studies in which respondents were asked about plagiarism committed by a colleague or friend were included. Titus et al.[15] received the highest weight because it recruited the largest number of participants, giving it the largest effect size among the studies in this forest plot.
Figure 3.
Forest plot of admission of plagiarism in nonself-reports. Study weights: Titus et al. 2008, 20.41%; Broome 2010, 20.37%; Tijdink et al. 2014, 19.94%; Dhingra 2014, 19.43%; Pupovac 2016, 19.85%
Fabrication of data and results in self-reported studies
The meta-analysis yielded a pooled weighted estimate of 4.0% (95% CI: 1.6–6.4), with significant heterogeneity (Cochran’s Q = 87.43; df = 4; P < 0.001) [Figure 4]. The study by Tijdink et al.[12] was given the highest weight in the random-effects model owing to its effect size, which reflects the number of negative responses given by participants in that study.
Figure 4.
Forest plot of admission of fabrication of data and results in self-reports. Study weights: Nilstun 2010, 20.33%; Tijdink et al. 2014, 22.78%; Hadji 2016, 22.46%; Pupovac 2016, 19.45%; Habermann 2010, 14.98%
Fabrication of data and results in nonself-reported studies
The meta-analysis yielded a pooled weighted estimate of 21.7% (95% CI: 14.8–28.7), with significant heterogeneity (Cochran’s Q = 98.61; df = 5; P < 0.001) [Figure 5]. Most of the heterogeneity in this forest plot was contributed by the study of Dhingra and Mishra,[17] which had the largest number of participants reporting data fabrication by a colleague among the included studies.
Figure 5.
Forest plot of admission of data fabrication and results in nonself-reports. Study weights: Titus et al. 2008, 17.69%; Broome 2010, 17.50%; Tijdink et al. 2014, 16.41%; Dhingra 2014, 14.52%; Pupovac 2016, 16.24%; Tiwari 2012, 17.64%
Falsification of data and results in self-reported studies
The meta-analysis yielded a pooled weighted estimate of 9.7% (95% CI: 5.6–13.9) with significant heterogeneity (Cochran’s Q = 90.9; df = 6; P < 0.001) [Figure 6]. Most studies in this forest plot were given roughly the same weight because their ratios of negative responses to sample size were similar.
Figure 6.
Forest plot of admission of falsification of data and results in self-reports. Study weights: Nilstun 2010, 15.64%; Okonta 2013, 10.52%; Tijdink et al. 2014, 15.20%; Camilla 2015, 13.76%; Pupovac 2016, 14.51%; Habermann 2010, 15.41%; Patel 2017, 14.96%
Falsification of data and results in nonself-reports
The meta-analysis yielded a pooled weighted estimate of 33.6% (95% CI: 8.1–59.1) with significant heterogeneity (Cochran’s Q = 99.24; df = 3; P < 0.001) [Figure 7]. In this forest plot, the study by Titus et al.[15] was given the most weight because of its effect size.
Figure 7.
Forest plot of admission of falsification of data and results in nonself-reports. Study weights: Titus et al. 2008, 25.29%; Tijdink et al. 2014, 25.01%; Dhingra 2014, 24.71%; Pupovac 2016, 24.98%
DISCUSSION
In the present review, nearly 4% of scientists admitted to involvement in at least one form of scientific misconduct. There is a marked difference between misconduct reported about oneself (self) and about others (nonself). Plagiarism prevalence ranged from 0.2% to 49.4% in the studies reviewed.[23,24,25,26,27,28,29,30,31,32] A national survey of members of the Association of Clinical Research found plagiarism rates as high as 27.7%, whereas in nursing sciences, Broome et al. found a plagiarism prevalence of 8.8%–26.4%. The higher admission rates for plagiarism in nonself-reports may be due to the perception of plagiarism as a less severe type of scientific misconduct; this perception might discourage scientists from reporting their own plagiarism, or make them more willing to engage in it. Other studies have shown that less severe forms of misconduct are more frequently reported than more serious ones.[33]
Alternatively, because of the extensive availability of internet tools, admission rates for plagiarism may be greater than for data manipulation; as a result, many respondents who serve as peer reviewers or journal editors may encounter plagiarism issues.[34] The findings are not entirely consistent with this latter hypothesis, which predicts that nonself-reports should have increased over time. Plagiarism is a multifactorial issue, but it can be prevented by raising awareness of its existence and frequency.[35] Academic institutions should incorporate writing ethics into the general curriculum and establish centers to promote and develop high-quality research.
Up to one-fourth of scientists acknowledged engaging in a range of other dubious research practices, and around 4% admitted to fabricating or altering data or findings. On average, over 9.7% of respondents in studies asking about others’ behavior reported seeing fabrication and falsification, and nearly 33.6% reported seeing other questionable practices. The admission rate over the last few years has significantly increased in self-reports but not in nonself-reports. Biostatistical misconduct can include falsification and fabrication of data, suppression of data, deceptive analysis or design, and deceptive reporting of results. In our review, Okonta et al.,[11] Pryor et al.,[24] Geggie,[29] and Titus et al.[15] reported the prevalence of data fabrication and falsification. Biostatisticians are methodologically capable of spotting fraud and are likely to have a vested professional interest in reliable findings; despite this, the prevalence of this form of misconduct is high. Biostatisticians often have access to private information and are skilled enough to comprehend its implications. They could thus be in a particularly good position to observe scientific transactions as they occur, before publication.
These findings show that data fabrication and falsification trends are increasing in self-reported studies and constant in nonself-reported studies. These trends can be explained by the fact that scientists have become more aware of fabrication and falsification owing to increased awareness and stricter implementation of policies at institutional levels in recent years. The most direct and simplest explanation is that self-reports provide data about the respondent alone, whereas nonself-reports include information about the coworkers of each respondent. Self-reports on delicate subjects such as scientific misconduct are likely to be underestimates owing to social desirability effects, whereas nonself-reporting is prone to overestimate the incidence of misconduct. According to a survey of more than 2000 psychologists, those rewarded for being honest report greater rates of misbehavior, with the magnitude of the rise being related to the seriousness of the behavior.[36] The average proportion of scientists who at least once purposefully plagiarized ideas or text from a colleague without giving due credit is thus likely to lie between the published percentages of nonself- and self-admissions. One probable conclusion is that although scientists are more aware of and more eager to report the misbehavior of others, the probability that they themselves engage in misconduct has dropped. In this case, scientific misbehavior would be less common but more likely to be reported, so the two effects would balance each other out and keep the nonself-report rates unchanged. Alternatively, scientific misconduct may not have decreased, with scientists now less likely than in the past to confess to engaging in it while still reporting the misbehavior of their peers.
A recent study of journal editors backs this theory: 44% believe that plagiarism and repetitive publishing have somewhat risen, while the prevalence of data falsification and fabrication has not diminished.[37,38] Both theories would be consistent with data showing that scientists and editors are becoming more aware of the norms and regulations against scientific misconduct, a development that best accounts for the rise in the number of retractions in the literature.[39]
The association between stated rates of data falsification and data fabrication raises the possibility that a certain cohort of scientists may be somewhat representative of the total degree of scientific misconduct. This variation can reflect different levels of “average honesty,” that is, different levels of average scientific integrity, among the respondents considered in each study.[40]
The most urgent and reasonable course of action would be to examine the degree to which respondents are aware of and informed about scientific misconduct and are therefore likely to identify and report it.[41] The two theories do not conflict with one another. Empirical studies show that researchers are less likely to engage in misconduct when they work in contexts that prioritize research integrity standards, such as by adopting explicit rules, educational prevention programs, and consequences for misconduct.
The first limitation of our research is the inherent nature of survey data, which elements of the study design may significantly impact. In the current meta-analysis, surveys distributed in person, rather than mailed or e-mailed, had much higher admission rates for scientific misconduct.
Surveys that avoid the technically incriminating word “plagiarism” and surveys that include more generic and less direct questions (e.g., “Have you ever observed or heard about,” “Have you suspected”) have reported less scientific misconduct. A prior meta-analysis of survey data on falsification and fabrication found comparable results, which may be attributed to social desirability effects.[42] It may be concluded that surveys on scientific misconduct often provide logically coherent findings insofar as admission rates correspond predictably with survey variables.
The second limitation, technically related to the first, stems from the enormous variation across studies, which has largely gone unnoticed. Ideally, the findings of studies included in a meta-analysis should vary only because of sampling error. The detected variability was mostly above this level, especially among nonself-reports. This finding may be typical of meta-analyses in the behavioral and social sciences, where the research subjects are frequently complicated and there is little consensus about theories and methodology.[43,44] The most plausible explanation for some of the observed variance is methodological problems, whereas other variance could be explained by true differences in the rate of scientific misconduct encountered by respondents in the various studies. The methodological aspects were shown not to influence nonself-reports or the overall results. It is still unclear, however, how much of the difference in survey results was caused by respondent characteristics that we could not identify and test with sufficient power, and how much heterogeneity was brought on by differences in study design and quality that our inclusion criteria failed to filter out. Low statistical power was available to identify explanatory variables. We found several methodological aspects and research features that greatly impacted the results in the self-reports.[45] However, these results, acquired from several tests carried out on a relatively small sample, may well be false positives. We examine the potential ramifications of these findings further below.
Publication bias, a possible issue in every meta-analysis, is the third potential limiting factor. In this analysis, self-report findings were shown to be highly connected with response rates, whereas nonself-report results seemed to be correlated with both sample size and year of publication, indicating that nonself-admission rates may have grown over time only among smaller studies.[46,47] All these effects can point to a large and increasing publication bias: surveys on misbehavior may be more likely to be published if they have a big sample size, a high admission rate, or stronger findings (higher plagiarism rates). However, other explanations remain viable. For example, we cannot rule out the possibility that recent and small studies used designs distinct from those used in larger or older studies.
While most nonself-reports did not consider the possibility of reporting the same event twice, one of the larger studies in the sample did so by choosing only one survey respondent per academic department, and it reported much lower admission rates (3.1%). This study, and one from a prior meta-analysis, were included in the present meta-analysis to maintain a cautious approach. This inclusion decision may be questionable and may have unnecessarily increased the sample’s heterogeneity. Even so, excluding this study from the pooling of estimates results in only a minor reduction in heterogeneity (I2 = 98%) and an increase in the total pooled estimates for nonself-reports that is not statistically significant. In general, excluding this study had no appreciable impact on the findings of our analysis. Survey technique also seems to have improved over the years. In particular, later surveys tended to use tactics that lessen the impact of social desirability, such as asking straightforward questions and avoiding the term “scientific misconduct.” Whether these recent methodological decisions constitute an advance in survey technique and produce more accurate estimates of the incidence of misconduct remains open to debate.
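The I2 figure quoted above is derived directly from Cochran's Q and its degrees of freedom via Higgins' formula; a minimal sketch (the example values are taken from the forest plots reported earlier, and the helper name is our own):

```python
def i_squared(q, df):
    """Higgins' I^2: the percentage of variability across studies that
    exceeds what sampling error alone would produce."""
    return max(0.0, (q - df) / q) * 100

# e.g., Q = 99.24 on df = 3 (the nonself-report falsification plot)
# implies that ~97% of the observed variability is true heterogeneity.
print(round(i_squared(99.24, 3), 1))  # → 97.0
```

Values of I2 this close to 100% explain why a random-effects model, rather than a fixed-effect model, was required throughout.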
CONCLUSION
This meta-analysis gives a pooled estimate of misconduct in biomedical research fields such as medicine, dentistry, and pharmacy across the world. Many surveys on misconduct in biomedical research conducted in recent years have covered researchers from different biomedical fields, countries, and demographics, but their results are inconsistent owing to nonstandardized methodologies. The results of this meta-analysis suggest an alarming rate of misconduct in recent nonself-reported studies, with a much higher prevalence in nonself-reported than in self-reported studies.
Recommendations for future research
The surveys conducted to date adopt different methodologies for assessing plagiarism, data fabrication, and falsification; there is a need for a standardized survey methodology and protocol for these assessments. In addition, many sociological factors should be addressed in future studies. Beyond that, institutions and their leaders should be held more accountable than they currently are for maintaining morally upright research settings and preventing small infractions of accepted scientific practice. Even for its outliers, the scientific community must assume joint accountability.
FINANCIAL SUPPORT AND SPONSORSHIP
Nil.
CONFLICTS OF INTEREST
There are no conflicts of interest.
AUTHORS' CONTRIBUTIONS
None.
ETHICAL POLICY AND INSTITUTIONAL REVIEW BOARD STATEMENT
None.
PATIENT DECLARATION OF CONSENT
None.
DATA AVAILABILITY STATEMENT
Full list of references reviewed is available upon request from the corresponding author.
ACKNOWLEDGEMENT
None.
REFERENCES
- 1.OECD. Frascati Manual 2015 Guidelines for Collecting and Reporting Data on Research and Experimental Development, The Measurement of Scientific, Technological and Innovation Activities. Paris: OECD Publishing; 2015. [Google Scholar]
- 2.Kothari CR. Research Methodology Methods and Techniques. 4th ed. New Delhi: New Age International; 2015. [Google Scholar]
- 3.Armond ACV, Gordijn B, Lewis J, Hosseini M, Bodnár JK, et al. A scoping review of the literature featuring research ethics and research integrity cases. BMC Med Ethics. 2021;22:50. doi: 10.1186/s12910-021-00620-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Viđak M, Barać L, Tokalić R, Buljan I, Marušić A. Interventions for organizational climate and culture in academia: A scoping review. Sci Eng Ethics. 2021;27:24. doi: 10.1007/s11948-021-00298-6. [DOI] [PubMed] [Google Scholar]
- 5.ALLEA. The European Code of Conduct for Research Integrity. 2017. Available from: https://www.allea.org/publications/joint-publications/european-code-conduct-research-integrity .
- 6.Aubert Bonn N, Godecharle S, Dierickx K. European universities’ guidance on research integrity and misconduct. J Empir Res Human Res Ethics. 2017;12:33–44. doi: 10.1177/1556264616688980. [DOI] [PubMed] [Google Scholar]
- 7.Benessia A, Funtowicz S, Giampietro M, Pereira ÂG, Ravetz JR, Saltelli A, et al. The Rightful Place of Science: Science on the Verge. Consortium for Science, Policy & Outcomes; 2016. [Google Scholar]
- 8.Kakuk P. The legacy of the Hwang case: Research misconduct in biosciences. Resources. 2009;15:545–62. doi: 10.1007/s11948-009-9121-x. [DOI] [PubMed] [Google Scholar]
- 9.NIHR Journals Library; 2020. Health Services and Delivery Research, No. 8.5. Appendix 11, Quality appraisal of included studies. [Last accessed on 12 Jun 2020]. Available from: https://www.ncbi.nlm.nih.gov/books/NBK553267/
- 10.Nilstun T, Löfmark R, Lundqvist A. Scientific dishonesty—Questionnaire to doctoral students in Sweden. J Med Ethics. 2010;36:315–8. doi: 10.1136/jme.2009.033654. [DOI] [PubMed] [Google Scholar]
- 11.Okonta P, Rossouw T. Prevalence of scientific misconduct among a group of researchers in Nigeria. Dev World Bioeth. 2013;13:149–57. doi: 10.1111/j.1471-8847.2012.00339.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Tijdink JK, Verbeke R, Smulders YM. Publication pressure and scientific misconduct in medical scientists. J Empir Res Hum Res Ethics. 2014;9:64–71. doi: 10.1177/1556264614552421. [DOI] [PubMed] [Google Scholar]
- 13.Hadji M, Asghari F, Yunesian M, Kabiri P, Fotouhi A. Assessing the prevalence of publication misconduct among Iranian authors using a double list experiment. Iran J Public Health. 2016;45:897–904. [PMC free article] [PubMed] [Google Scholar]
- 14.Pupovac V, Prijić-Samaržija S, Petrovečki M. Research misconduct in the Croatian scientific community: a survey assessing the forms and characteristics of research misconduct. Sci Eng Ethics. 2017;23:165–81. doi: 10.1007/s11948-016-9767-0. [DOI] [PubMed] [Google Scholar]
- 15.Titus S, Wells J, Rhoades L. Repairing research integrity. Nature. 2008;453:980–2. doi: 10.1038/453980a. [DOI] [PubMed] [Google Scholar]
- 16.Broome M, Dougherty MC, Freda MC, Kearney MH, Baggs JG. Ethical concerns of nursing reviewers: An international survey. Nurs Ethics. 2010;17:741–8. doi: 10.1177/0969733010379177. [DOI] [PubMed] [Google Scholar]
- 17.Dhingra D, Mishra D. Publication misconduct among medical professionals in India. Indian J Med Ethics. 2014;11:104–7. doi: 10.20529/IJME.2014.026. [DOI] [PubMed] [Google Scholar]
- 18.Rajah-Kanagasabai CJ, Roberts LD. Predicting self-reported research misconduct and questionable research practices in university students using an augmented Theory of Planned Behavior. Front Psychol. 2015;6:535. doi: 10.3389/fpsyg.2015.00535. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Habermann B, Broome M, Pryor ER, Ziner KW. Publication misconduct among medical professionals in India. Indian J Med Ethics. 2014;11:104–7. doi: 10.20529/IJME.2014.026. [DOI] [PubMed] [Google Scholar]
- 20.Patel M. Misconduct in Clinical Research in India: Perception of Clinical Research Professional in India. J Clin Res Bioeth. 2017;8:303. [Google Scholar]
- 21.Broome ME, Riner ME, Allam ES. Scholarly publication practices of Doctor of Nursing Practice-prepared nurses. J Nurs Educ. 2013;52:429–34. doi: 10.3928/01484834-20130718-02. [DOI] [PubMed] [Google Scholar]
- 22.Hofmann B, Myhr AI, Holm S. Scientific dishonesty—A nationwide survey of doctoral students in Norway. BMC Med Ethics. 2013;14:3. doi: 10.1186/1472-6939-14-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Dhaliwal U, Singh N, Bhatia A. Awareness of authorship criteria and conflict: Survey in a Medical Institution in India. MedGenMed. 2006;8:52. [PMC free article] [PubMed] [Google Scholar]
- 24.Pryor ER, Habermann B, Broome ME. Scientific misconduct from the perspective of research coordinators: A national survey. J Med Ethics. 2007;33:365–9. doi: 10.1136/jme.2006.016394. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Wislar JS, Flanagin A, Fontanarosa PB, Deangelis CD. Honorary and ghost authorship in high impact biomedical journals: A cross sectional survey. BMJ. 2011;343:d6128. doi: 10.1136/bmj.d6128. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Wager E, Fiack S, Graf C, Robinson A, Rowlands I. Science journal editors’ views on publication ethics: Results of an international survey. J Med Ethics. 2009;35:348–53. doi: 10.1136/jme.2008.028324. [DOI] [PubMed] [Google Scholar]
- 27.Ghajarzadeh M, Norouzi-Javidan A, Hassanpour K, Aramesh K, Emami-Razavi SH. Attitude toward plagiarism among Iranian medical faculty members. Acta Med Iran. 2012;50:778–81. [PubMed] [Google Scholar]
- 28.Jawaid M, Jawaid SA. Faculty member’s views, attitude and current practice as regards International Committee of Medical Journal Editors criteria for authorship. Iran J Public Health. 2013;42:1092–8. [PMC free article] [PubMed] [Google Scholar]
- 29.Geggie D. A survey of newly appointed consultants’ attitudes towards research fraud. J Med Ethics. 2001;27:344–6.l. doi: 10.1136/jme.27.5.344. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Nylenna M, Fagerbakk F, Kierulf P. Authorship: Attitudes and practice among Norwegian researchers. BMC Med Ethics. 2014;15:53. doi: 10.1186/1472-6939-15-53. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Rathore FA, Waqas A, Zia AM, Mavrinac M, Farooq F. Exploring the attitudes of medical faculty members and students in Pakistan towards plagiarism: A cross sectional survey. Peer J. 2015;18:e1031. doi: 10.7717/peerj.1031. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Looi LM, Wong LX, Koh CC. Scientific misconduct encountered by APAME journals: An online survey. Malays J Pathol. 2015;37:213–8. [PubMed] [Google Scholar]
- 33.Fadlalmola HA, Elhusein AM, Swamy DSV, Hussein MK, Mamanao DM, Mohamedsalih WE. Plagiarism among nursing students: A systematic review and meta-analysis. Int Nurs Rev. 2022;69:492–502. doi: 10.1111/inr.12755. [DOI] [PubMed] [Google Scholar]
- 34.Yi N, Nemery B, Dierickx K. Do biomedical researchers differ in their perceptions of plagiarism across Europe? Findings from an online survey among leading universities. BMC Med Ethics. 2022;23:78. doi: 10.1186/s12910-022-00818-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Clarke O, Chan WYD, Bukuru S, Logan J, Wong R. Assessing knowledge of and attitudes towards plagiarism and ability to recognize plagiaristic writing among university students in Rwanda. High Educ (Dordr) 2022;13:1–17. doi: 10.1007/s10734-022-00830-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Kaushik M, Singh V, Chakravarty S. Rewards, detection and dishonesty: Experimental evidence from India. 2021 doi: 10.1038/s41598-022-06072-3. Available from: http://dx.doi.org/10.2139/ssrn.3939746 . [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Núñez-Núñez M, Andrews JC, Fawzy M, Bueno-Cavanillas A, Khan KS. Research integrity in clinical trials: Innocent errors and spin versus scientific misconduct. Curr Opin Obstet Gynecol. 2022;34:332–9. doi: 10.1097/GCO.0000000000000807. [DOI] [PubMed] [Google Scholar]
- 38.Yi N, Nemery B, Dierickx K. How do Chinese universities address research integrity and misconduct? A review of university documents. Dev World Bioeth. 2019;19:64–75. doi: 10.1111/dewb.12231. [DOI] [PubMed] [Google Scholar]
- 39.Hesselmann F, Graf V, Schmidt M, Reinhart M. The visibility of scientific misconduct: A review of the literature on retracted journal articles. Curr Sociol. 2017;65:814–45. doi: 10.1177/0011392116663807. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40.De Fiore L, Addis A. Presunti colpevoli. Falsificazione e fabbricazione di dati nelle pubblicazioni scientifiche [Presumed guilty. Falsification and fabrication of data in scientific publications.] Recenti Prog Med. 2022;113:353–54. doi: 10.1701/3827.38106. [DOI] [PubMed] [Google Scholar]
- 41.Horbach JM, Breit E, Halffman W, Mamelund E. On the willingness to report and the consequences of reporting research misconduct: The role of power relations. Sci Eng Ethics. 2020;26:1595–623. doi: 10.1007/s11948-020-00202-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One. 2009;4:e5738. doi: 10.1371/journal.pone.0005738. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Adesanya AA. A proposed research misconduct policy for universities and postgraduate colleges in developing countries. Niger Postgrad Med J. 2020;27:250–8. doi: 10.4103/npmj.npmj_51_20. [DOI] [PubMed] [Google Scholar]
- 44.Wong CA, Song WB, Jiao M, O’Brien E, Ubel P, Wang G, et al. Strategies for research participant engagement: A synthetic review and conceptual framework. Clin Trials. 2021;18:457–65. doi: 10.1177/17407745211011068. [DOI] [PubMed] [Google Scholar]
- 45.Armond ACV, Gordijn B, Lewis J, Hosseini M, Bodnár JK, Holm S, et al. A scoping review of the literature featuring research ethics and research integrity cases. BMC Med Ethics. 2021;22:50. doi: 10.1186/s12910-021-00620-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46.Spineli LM, Pandis N. Publication bias: Graphical and statistical methods. Am J Orthod Dentofacial Orthop. 2021;159:248–51. doi: 10.1016/j.ajodo.2020.11.005. [DOI] [PubMed] [Google Scholar]
- 47.Ventura M, Oliveira SC. Integrity and ethics in research and science publication. Cad Saude Publica. 2022;38:e00283521. doi: 10.1590/0102-311X00283521. [DOI] [PubMed] [Google Scholar]
Data Availability Statement
Full list of references reviewed is available upon request from the corresponding author.