Abstract
Objective
To evaluate the submission guidelines of physical medicine and rehabilitation (PM&R) journals regarding their policies on the use of artificial intelligence (AI) in manuscript preparation.
Design
Cross-sectional study, including 54 MEDLINE-indexed PM&R journals, selected by searching “Physical and Rehabilitation Medicine” as a broad subject term for indexed journals. Non-English journals, conference-related journals, and those not primarily focused on PM&R were excluded.
Setting
PM&R journals.
Participants
Not applicable.
Interventions
Not applicable.
Main Outcome Measures
Journal policies regarding the use of AI; CiteScore, Source Normalized Impact per Paper (SNIP), SCImago Journal Rank (SJR), and Impact Factor (IF) were compared between journals with and without an AI policy.
Results
Of the 54 PM&R journals, only 46.3% had an AI policy. Among these, none completely banned AI use or allowed unlimited use without a declaration. Most journals (52%) permitted AI only for manuscript editing with a required declaration, 44% allowed unrestricted AI use with a declaration, and only 4% allowed AI-assisted editing without any declaration. No significant difference was found in scientometric scores between journals with and without AI policies (P>.05).
Conclusions
Fewer than half of MEDLINE-indexed PM&R journals had guidelines regarding the use of AI. None of the journals with AI policies entirely prohibited its use, nor did any allow unrestricted use without a declaration. Journals with defined AI policies did not demonstrate higher citation rates or impact scores.
KEYWORDS: Artificial intelligence, Guideline, Journals, Machine Learning, Rehabilitation
The term artificial intelligence (AI) was first introduced in 1950 and broadly refers to the use of computers to simulate intelligent behavior with minimal human involvement. Its growth has been explosive, and AI is now used in many fields, including medicine, to train itself, gain knowledge from experience, and perform tasks that previously required human intelligence.1
The emergence of large language models (LLMs) such as ChatGPT has accelerated this trend in scientific publication. These models can produce text closely resembling human writing and serve as robust tools that aid researchers in text generation, information retrieval, and data analysis.2 They can assist at various stages of article preparation, such as suggesting topics, drafting well-structured articles, and creating content for each section of a paper.3 AI tools can also enhance the quality of a text by correcting errors and improving the writing style, especially for non-native English speakers, thereby helping ensure that manuscripts are accurate.4
However, despite these advantages, these tools are accompanied by concerns that extend not only to authors but also to peer reviewers and editors. They can pose challenges regarding copyright violations and authorship.5,6 According to publishers contacted by Nature's news team, AI tools do not meet the criteria for authorship because they cannot bear responsibility for the authenticity, accuracy, and integrity of the content, which are fundamental requirements for authorship.7 Among the ethical concerns surrounding AI tools are their occasional production of biased or harmful output and, more importantly, their inability to differentiate between reliable and unreliable sources.8 Another potential misuse is plagiarism: if AI-generated content is not used with caution, the resulting text may bear a strong resemblance to the existing medical literature, so authors must edit the text and manually include the relevant citations.9 Despite these issues, a survey of corresponding authors from the top 15 medical journals by impact revealed that the majority had "limited" to "moderate" familiarity with AI.10 The sensible solution, then, is for authors to treat AI as an ancillary tool rather than a primary source of information.11
Despite these issues and obstacles, the appropriate use of this technology can greatly enhance the communication of scientific findings. Therefore, instead of banning AI outright, the sensible approach is to establish suitable, standardized guidelines.12
Different journals have different author guidelines to ensure that authors provide all the necessary information and adhere to each journal's standards. The International Committee of Medical Journal Editors (ICMJE) now requires that authors disclose any use of AI-assisted technologies in the generation of any part of their manuscripts.13
In this article, we discuss the author guidelines of MEDLINE-indexed journals in the field of physical medicine and rehabilitation (PM&R) regarding the usage of AI and compare their scientometric data.
Materials and methods
This is a cross-sectional study conducted in March 2024. A PGY-3 PM&R resident compiled the list of MEDLINE-indexed journals in the field by first evaluating the broad subject terms for indexed journals. After this evaluation, only "Physical and Rehabilitation Medicine" was retained, and a list of MEDLINE-indexed journals under this term was extracted. This study did not involve any human participants or patient data; therefore, no informed consent or ethical approval was required. After obtaining the initial list of journals, all non-English journals, conference-related journals, and those not primarily focused on PM&R were excluded. The selection process is depicted in figure 1.
Fig 1.
Flowchart illustrating the procedure for selecting and excluding journals in the research.
Next, we evaluated whether each journal on the finalized list had a specific policy regarding AI usage. For this purpose, the authors’ guidelines, any external links within the guidelines, and editorial policies were searched specifically for references to artificial intelligence, AI, Chat (for ChatGPT and other chatbots), Language (for LLMs), and machine learning, using the same keywords as a prior study in the field of radiology by Simsek et al.14 For journals in which any of these terms were found, an additional section of the evaluation form was completed. This section recorded whether the journal allowed the use of AI and, if so, in what way (unlimited use or limited to editing), and whether researchers were required to declare the use of AI in their manuscript.
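The keyword screen described above can be sketched as a simple text search. This is an illustrative Python sketch only, not the tool used in the study: the guideline text is invented, and the bare terms "chat" and "language" from the Methods are narrowed here with word boundaries and compound forms to avoid matching unrelated words such as "maintain."

```python
import re

# AI-related terms adapted from the Methods (hypothetical narrowing of
# "Chat" and "Language" to reduce false matches).
AI_PATTERN = re.compile(
    r"\b(artificial intelligence|AI|chat\w*|language model\w*|machine learning)\b",
    re.IGNORECASE,
)

def mentions_ai(guideline_text: str) -> bool:
    """Return True if the guideline text references any AI-related keyword."""
    return AI_PATTERN.search(guideline_text) is not None

print(mentions_ai("Authors must disclose any use of ChatGPT."))  # True
print(mentions_ai("Submit figures as TIFF files."))              # False
```

In practice, any journal flagged by such a screen would still need manual review to classify the policy (unlimited use vs. editing only, declaration required or not).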
Subsequently, each journal was evaluated using a standardized form designed based on the structure of the Simsek et al14 study. This evaluation form comprises various sections, including the journal's name, society affiliation, indexing in SCOPUS and Web of Science, and metric characteristics such as CiteScore, Source Normalized Impact per Paper (SNIP), SCImago Journal Rank (SJR), and Impact Factor (IF).
Statistical analysis
Qualitative data were expressed as frequencies and percentages, whereas quantitative data were expressed as medians, ranges, and quartiles. Normality of the quantitative data was assessed with the Shapiro–Wilk test. Quantitative data were compared using t tests and Mann–Whitney U tests, and qualitative data using chi-square tests. Analyses were performed with SPSS Statistics version 27.0, and a P value less than .05 was considered significant.
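The decision rule above (normality check, then parametric or nonparametric comparison) can be sketched in a few lines. This is a minimal illustration using synthetic metric values generated with NumPy, not the study's actual data, and scipy in place of SPSS:

```python
import numpy as np
from scipy import stats

# Synthetic impact-factor-like values for 25 journals with an AI policy
# and 29 without (group sizes taken from the study; values are made up).
rng = np.random.default_rng(seed=1)
with_policy = rng.lognormal(mean=0.9, sigma=0.4, size=25)
without_policy = rng.lognormal(mean=0.8, sigma=0.4, size=29)

def compare_groups(a, b, alpha=0.05):
    """Shapiro-Wilk normality check on both groups, then the appropriate test."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        return "t test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

test_name, p = compare_groups(with_policy, without_policy)
print(f"{test_name}: P = {p:.3f}")
```

The same routine applied per metric (CiteScore, SNIP, SJR, IF) reproduces the structure of table 1.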
Results
After searching the NLM catalog, we found 62 journals. After assessment, 8 were excluded according to the exclusion criteria, and 54 were finally included in this study (fig 1).
Only 25 (46.3%) journals had policies regarding the use of AI in manuscripts. Of these, 13 (52%) limited the use of AI to editing the text and required a declaration in the manuscript, 11 (44%) required authors to declare the use of AI but did not mention any restrictions, and 1 (4%) limited the use of AI to editing the text without requiring a declaration. None of these journals prohibited the use of AI, and none permitted unlimited use of AI without a declaration (fig 2).
Fig 2.
Pie chart illustrating a summary of the policies.
Thirty-three (68.5%) of these journals were affiliated with a society or an institution. All were indexed in SCOPUS, and all but one were also indexed in Web of Science. The median SJR was 0.629 (range, 0.4-1.95); CiteScore, 3.95 (range, 0.6-10.4); IF, 2.2 (range, 0.9-10.7); and SNIP, 1.22 (range, 0.22-2.6).
According to the Shapiro–Wilk tests, the distributions of IF and SJR differed significantly from normal (P<.001), whereas SNIP and CiteScore were normally distributed. Thus, the Mann–Whitney U test was used for SJR and IF, and the t test for SNIP and CiteScore, to compare these metrics between journals with and without an AI policy. As shown in table 1, no significant difference was found for any of them; additionally, no significant relationship was found between affiliation with a society or university and the presence of an AI policy.
Table 1.
Comparison of scientometric scores between journals with and without AI policy
| | With AI Policy | Without AI Policy | P Value |
|---|---|---|---|
| Having affiliation: Yes | 18 (33.3%) | 17 (31.5%) | .305 |
| Having affiliation: No | 7 (13%) | 12 (22.2%) | |
| CiteScore | 4.3 (2.65-5.9) | 3.9 (3-5.75) | .716 |
| SJR | 0.739 (0.452-1.063) | 0.567 (0.449-0.885) | .286 |
| SNIP | 1.255 (0.994-1.602) | 1.05 (0.957-1.51) | .306 |
| Impact factor | 2.7 (1.7-3.65) | 2.15 (1.8-3.25) | .463 |

NOTE. Values are n (%) or median (quartile 1-quartile 3).
Discussion
The results revealed that 46.3% of the 54 MEDLINE-indexed PM&R journals have an AI policy. None of these journals prohibited the use of AI, nor did any permit researchers to use AI without limit and without a declaration in their published manuscripts. Additionally, no significant difference in scientometric scores was found between journals with and without an AI policy.
AI tools can assist writers in enhancing the quality of their academic manuscripts in various ways. They can expedite the writing process, analyze data, check grammar and sentence structure, suggest topics and abstracts, and detect plagiarism. They can also be used for literature searches, article selection, and even citation and referencing. Despite these benefits, these tools come with many limitations and ethical concerns.4 For instance, they may violate copyright laws, present inaccurate results, or provide irrelevant references.8
Typically, academic journals have specific guidelines for acknowledging tools or software used in research. With the emergence and evolution of AI tools, many journals have clarified their policies regarding their use. Thus, before submitting an article, authors should ensure that their target journal and its publisher permit the use of AI tools and that their manuscript complies with the journal's policies and ethical guidelines regarding AI.15 By doing so, they can harness the benefits of AI while upholding the standards of scientific research.
In this study, we conducted a search of journals indexed in MEDLINE within the field of PM&R. We then looked for whether these journals had established policies regarding the use of AI and evaluated the presence of such policies alongside journal metrics. To our knowledge, this is the first study with this purpose in the field of PM&R.
Inam et al16 reviewed the guidelines of 25 cardiology and cardiovascular medicine journals according to the 2023 SCImago rankings. Their analysis found that all of these journals permit authors to use AI in scientific writing, with certain limitations in line with ICMJE recommendations; however, authors remain fully responsible for their published articles. In our study, just one journal restricted the use of AI to editing without requiring a declaration; notably, this journal introduced the Writefull tool to authors.
A study conducted in October 2023 had the same purpose of reviewing AI policies and comparing scientometric data between journals with and without AI policies in the field of radiology.14
In that study, of 112 MEDLINE-indexed imaging journals, 61.6% had a policy regarding the use of AI, a higher percentage than we found in PM&R (46.3%). One possible reason for this difference is that radiology has been an early adopter of AI compared with other fields and has the highest number of yearly AI-related publications among medical specialties.17,18 This trend toward AI adoption is likely driven by the demand for imaging services and the evolving capabilities of AI technologies. Another reason is that some publishers, such as Springer Nature, Elsevier, Sage, and Taylor & Francis, have their own policies regarding the use of AI. For instance, Elsevier allows authors to use AI tools solely for enhancing the readability and language of their work under human oversight and requires writers to declare the use of these technologies at the end of their manuscript. Journals not explicitly mentioning AI in their guidelines often reference their publisher's guidelines, which typically address AI policies. Therefore, the difference in the relative distribution of journals published by these publishers in the 2 fields may explain the difference.
Similar to our findings, that study found no significant relationship between affiliation with a society or university and the presence of an AI policy. However, contrary to our results, which indicated no significant difference in any of the metrics between journals with and without AI guidance, that study revealed that journals with an AI policy exhibited significantly higher SNIP, SJR, and Journal Citation Indicator scores.
One possible reason for this differing result could be the timing gap between the 2 studies. Over time, the significance and utilization of AI tools have increased, leading more journals to incorporate AI policies into their guidelines. Given the expanding use of and advances in AI, in the near future all journals, including those of lower quality, may incorporate a section on the use of AI in their author guidelines.
Study limitations
This study had several limitations. First, given the rapid advancements in the field of AI and its increasing recognition among researchers, it is likely that all publishers and journals will ultimately integrate AI policies into their guidelines. Consequently, by the time this article is accepted, additional journals may have added AI policies to their guidelines. Second, although we attempted to include all keywords relevant to AI policy, we may have missed some. Finally, we searched for AI policies in author guidelines; it is possible that we missed policies clarified in other sections of the journals' websites.
Conclusions
In conclusion, although LLMs such as ChatGPT can be used for text generation and other stages of manuscript preparation, this study found that fewer than half of MEDLINE-indexed journals in the field of PM&R have established policies regarding their use. Among the journals with AI policies, none entirely prohibited its use, nor did any allow its unrestricted use without a declaration. This suggests a balanced approach, encouraging responsible AI use while maintaining scientific integrity.
Additionally, the analysis revealed no significant differences in scientometric characteristics between journals with AI policies and those without. This indicates that the implementation of AI policies is not limited to higher-quality journals. Because AI technology continues to evolve rapidly, all journals will likely establish guidelines for its use in the near future.
Disclosures
The authors have no conflicts of interest.
Acknowledgments
The authors would like to express their gratitude to Dr Amirreza Manteghinejad for his valuable feedback during the writing and revision stages of this manuscript.
Declaration of generative AI and AI-assisted technology
During the preparation of this work, the authors used ChatGPT to check grammar, spelling, and the American style of writing for some sentences. After using this tool, the authors reviewed and edited the content as needed, taking full responsibility for the publication's content.
References
- 1. Kaul V, Enslin S, Gross SA. History of artificial intelligence in medicine. Gastrointest Endosc. 2020;92:807–812. doi:10.1016/j.gie.2020.06.040
- 2. Park SH. Use of generative artificial intelligence, including large language models such as ChatGPT, in scientific publications: policies of KJR and prominent authorities. Korean J Radiol. 2023;24:715–718. doi:10.3348/kjr.2023.0643
- 3. Ruksakulpiwat S, Kumar A, Ajibade A. Using ChatGPT in medical research: current status and future directions. J Multidiscip Healthc. 2023;16:1513–1520. doi:10.2147/JMDH.S413470
- 4. Huang J, Tan M. The role of ChatGPT in scientific communication: writing better scientific review articles. Am J Cancer Res. 2023;13:1148–1154.
- 5. Donker T. The dangers of using large language models for peer review. Lancet Infect Dis. 2023;23:781. doi:10.1016/S1473-3099(23)00290-6
- 6. Garcia MB. Using AI tools in writing peer review reports: should academic journals embrace the use of ChatGPT? Ann Biomed Eng. 2023;52:139–140. doi:10.1007/s10439-023-03299-7
- 7. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023;613:620–621. doi:10.1038/d41586-023-00107-z
- 8. Dave T, Athaluri SA, Singh S. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell. 2023;6. doi:10.3389/frai.2023.1169595
- 9. Chandra A, Dasgupta S. Impact of ChatGPT on medical research article writing and publication. Sultan Qaboos Univ Med J. 2023;23:429–432. doi:10.18295/squmj.11.2023.068
- 10. Salvagno M, Cassai AD, Zorzi S, et al. The state of artificial intelligence in medical research: a survey of corresponding authors from top medical journals. PLoS One. 2024;19. doi:10.1371/journal.pone.0309208
- 11. Gordijn B, Have HT. ChatGPT: evolution or revolution? Med Health Care Philos. 2023;26:1–2. doi:10.1007/s11019-023-10136-0
- 12. Li H, Moon JT, Purkayastha S, Celi LA, Trivedi H, Gichoya JW. Ethics of large language models in medicine and medical research. Lancet Digit Health. 2023;5:e333–e335. doi:10.1016/S2589-7500(23)00083-3
- 13. International Committee of Medical Journal Editors. Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. 2023. Available from: http://www.ICMJE.org. Accessed January 11, 2024.
- 14. Simsek O, Manteghinejad A, Vossough A. A comparative review of imaging journal policies for use of AI in manuscript generation. Acad Radiol. 2024;31:5232–5236. doi:10.1016/j.acra.2024.05.006
- 15. Lingard L. Writing with ChatGPT: an illustration of its capacity, limitations and implications for academic writers. Perspect Med Educ. 2023;12:261. doi:10.5334/pme.1072
- 16. Inam M, Sheikh S, Minhas AMK, et al. A review of top cardiology and cardiovascular medicine journal guidelines regarding the use of generative artificial intelligence tools in scientific writing. Curr Probl Cardiol. 2024;49. doi:10.1016/j.cpcardiol.2024.102387
- 17. Mavrogenis AF, Scarlat MM. Thoughts on artificial intelligence use in medical practice and in scientific writing. Int Orthop. 2023;47:2139–2141. doi:10.1007/s00264-023-05936-1
- 18. Senthil R, Anand T, Somala CS, Saravanan KM. Bibliometric analysis of artificial intelligence in healthcare research: trends and future directions. Future Healthc J. 2024;11. doi:10.1016/j.fhj.2024.100182


