American Heart Journal Plus: Cardiology Research and Practice
2025 Aug 8;58:100586. doi: 10.1016/j.ahjo.2025.100586

Exploring AI use policies in manuscript writing in cardiology and vascular journals

Mustafa Alkhawam a, Amr Almobayed b, Akash Pandey a, Navin C Nanda a, Ali J Ebrahimi a, Mustafa I Ahmed a
PMCID: PMC12392760  PMID: 40894457

Abstract

Background

Artificial intelligence (AI) technologies are rapidly evolving and offer efficiencies in manuscript generation; however, they have also raised concerns about the potential for bias, errors, and plagiarism. In response, some journals have updated their author guidelines to address AI use.

Methods

We assessed author guidelines for 213 MEDLINE-indexed cardiovascular journals to evaluate policies on AI use in manuscript writing. Journal metrics such as CiteScore, Journal Impact Factor (JIF), Journal Citation Indicator (JCI), Source Normalized Impact per Paper (SNIP), and SCImago Journal Rank (SJR) were compared between journals with and without AI policies. We further analyzed the association between AI policy adoption and society affiliation. We reviewed the criteria for listing AI as an author and allowances for AI-generated content.

Results

Of 213 journals, 170 (79.8 %) had AI-use policies. Policies were present in 115 of 147 (78 %) cardiology journals and 113 of 127 (89 %) vascular journals. Among society-affiliated journals, 111 of 143 (77.6 %) had AI-use policies, compared with 59 of 70 (84.2 %) unaffiliated journals. Journal metrics did not differ significantly between journals with and without AI policies (P > 0.05). Among journals with policies, 156 of 158 (98.7 %) excluded AI as an author, while all allowed AI-assisted content.

Conclusion

Many cardiovascular journals address AI-generated content, but gaps remain in policies and disclosure requirements for AI-created manuscripts. The presence of AI-use policies was independent of journal metrics or society affiliation.

Keywords: Artificial intelligence, AI, Large language models, LLM, Cardiology journals, Vascular journals, AI policies, Ethical standards, Journal policy

Graphical abstract

[Graphical abstract image]

1. Introduction

Artificial intelligence (AI) is a transformative tool that has revolutionized numerous aspects of life, including education and research [1]. Large language models (LLMs) are among the most notable AI-assisted technologies; they have advanced significantly in recent years and are becoming increasingly prevalent in academic research and writing practices [2]. Generative AI can serve as an assistive tool that produces diverse output, from creating to enhancing text, images, audio, and video [3,4]. Notably, a survey on AI usage found that over 25 % of academics had used AI to draft manuscripts, and more than 30 % employed it to assist with project creation, highlighting the growing presence of AI in academic settings [5]. While AI-assisted tools offer benefits such as faster manuscript drafting, translation, and data analysis, they also present risks. In particular, AI can generate fabricated or hallucinated content, which threatens the accuracy and integrity of scientific writing [6,7].

Additionally, potential plagiarism, copyright infringement, and algorithmic biases further complicate the integration of these tools into research workflows [8]. Importantly, these challenges underscore the need for robust ethical guidelines governing the responsible use of AI in academic publishing. In response to these concerns, the International Committee of Medical Journal Editors (ICMJE) has provided guidance on the ethical use of AI in scientific writing. Key principles derived from this guidance include disclosure of AI usage, prohibition of AI tools as authors, human accountability for AI-generated content, and adherence to ethical and plagiarism standards. While these criteria are not explicitly enumerated in the ICMJE text, we synthesized them from its guidance; further details are provided in [9]. However, the consistency and specificity of AI policies across scientific journals remain limited, creating variability in ethical oversight and transparency standards. The rapid evolution of generative AI tools, especially ChatGPT (OpenAI. ChatGPT [Large Language Model]. San Francisco, CA: OpenAI; released November 2022), underscores the urgency for journals to establish comprehensive guidelines addressing the ethical use of AI. These guidelines should emphasize transparency in AI usage, measures to mitigate biases and inaccuracies, and accountability for AI-generated content [6,10].

Given the growing integration of technology into these fields, the role of AI tools in academic writing is particularly relevant to cardiology and vascular research. However, little data is available on the extent to which journals in these fields have implemented policies addressing the use of generative AI in scientific writing. The aim of this study was to assess and compile the AI-use policies of MEDLINE-indexed cardiology and vascular journals. By examining author guidelines, adherence to ethical standards, and associations with journal metrics, this study offers insight into the current landscape of AI policy adoption in cardiovascular publications.

2. Materials and methods

Since our study did not involve human or patient data, institutional review board (IRB) approval was unnecessary. We conducted this cross-sectional study to evaluate cardiology and vascular journals indexed in MEDLINE, beginning with a list of journals compiled from the NLM Catalog. To identify cardiology journals, we searched the catalog using broad subject terms for indexed journals and selected the category “cardiology[st]”; we applied the same method for vascular journals using the “vascular[st]” category. The study included journals with publicly accessible submission guidelines published in English, excluding non-English journals and those focused on nursing, to align with the scope of the study.
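The catalog search described above can be reproduced programmatically. The sketch below is a hypothetical illustration, not the authors' actual procedure: it builds an NCBI E-utilities esearch query against the NLM Catalog using the same broad subject term; the function name and the `retmax` cap are our own assumptions.

```python
from urllib.parse import urlencode

# NCBI E-utilities search endpoint (public, key-free for light use).
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_catalog_query(subject_term: str, retmax: int = 500) -> str:
    """Build an esearch URL listing NLM Catalog records for a broad subject term.

    Illustrative only: the study's journal list was compiled and revised
    manually; this shows how the same "cardiology[st]" query could be issued.
    """
    params = {
        "db": "nlmcatalog",               # the NLM Catalog database
        "term": f"{subject_term}[st]",    # broad subject term, e.g. cardiology[st]
        "retmax": retmax,                 # cap on returned record IDs (assumed)
        "retmode": "json",
    }
    return f"{EUTILS}?{urlencode(params)}"

url = build_catalog_query("cardiology")
# Fetching the ID list would require network access, e.g.:
# import urllib.request, json
# ids = json.load(urllib.request.urlopen(url))["esearchresult"]["idlist"]
```

The same call with `build_catalog_query("vascular")` would cover the second journal category.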

The initial list of cardiology and vascular journals was compiled by one author (M.A.) and revised twice by a co-author (A.A.) between November 17 and December 15, 2024. Fig. 1 presents the flowchart used to finalize the journal list. Data collection covered both general and AI-related information. General information included each journal's name, publisher, society affiliation, and metrics such as Journal Impact Factor (JIF), Journal Citation Indicator (JCI), Source Normalized Impact per Paper (SNIP), SCImago Journal Rank (SJR), quartile ranking (Q1-Q4), and CiteScore.

Fig. 1.

Fig. 1

A flowchart demonstrating the journal collection method for analysis. A total of 157 PubMed-NLM cardiology journals and 134 PubMed-NLM vascular journals were identified. Of these 291 journals, 78 were removed: duplicates (61), nursing journals (3), and non-English journals or journals lacking English-language guidelines (14). The final selection of 213 journals was used in the analysis.

To identify AI policies, we searched each journal's author guidelines and editorial policies using combinations of keywords such as “Artificial,” “Intelligence,” “AI,” “Large language models,” “LLM,” “Chat,” “GPT,” “Writing,” “Machine learning,” “Assist,” and “Generative.” We additionally reviewed relevant guideline sections and any external links provided, and journals with no relevant results were classified as not having a policy regarding AI use.
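A minimal sketch of this screening step, assuming simple whole-word keyword matching over a guideline's text. The keyword list is adapted from the Methods (with a combined "chatgpt" term added for compound mentions); the function names and matching logic are illustrative, not the authors' actual procedure.

```python
import re

# Subset of the Methods' keyword list; lowercased for case-insensitive matching.
# "chatgpt" is added because whole-word matching on "chat" or "gpt" alone
# would miss the compound form (our simplification, not the authors').
AI_KEYWORDS = [
    "artificial", "intelligence", "ai", "large language model", "llm",
    "chatgpt", "machine learning", "generative",
]

def find_ai_mentions(guideline_text: str) -> list[str]:
    """Return the keywords that appear as whole words in the guideline text."""
    hits = []
    for kw in AI_KEYWORDS:
        # \b word boundaries avoid false positives such as "ai" inside "maintain"
        if re.search(rf"\b{re.escape(kw)}\b", guideline_text, re.IGNORECASE):
            hits.append(kw)
    return hits

def has_ai_policy(guideline_text: str) -> bool:
    """A journal with no keyword hits would be classified as having no policy."""
    return bool(find_ai_mentions(guideline_text))
```

In the study, journals with no relevant hits in their guidelines or linked pages were classified as lacking an AI-use policy; a script like this would only be a first pass before the manual review the authors describe.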

Next, we compiled metrics data (CiteScore, SNIP, SJR, JIF, Quartile ranking, and JCI) from Scopus and Web of Science to investigate potential associations with AI policy adoption. Quartile rank was coded as an ordinal variable (1 = highest quartile, 4 = lowest quartile). For each metric, journals without available data for that specific metric were excluded from the respective analysis but included in other applicable analyses. To evaluate whether journal access type influenced AI policy adoption, we performed a stratified chi-square analysis comparing journals categorized as open-access, subscription-based, or hybrid. Access type was determined by manually reviewing each journal's official website under sections such as “About the Journal,” “Instructions for Authors,” or “Open Access Options.” Journals were coded into one of the three mutually exclusive categories. In addition, we assessed publication rate by searching PubMed for the total number of articles published by each journal in the year 2024. Furthermore, we analyzed the exclusion criteria for using AI as an author and the range of allowances for AI-generated content according to the ICMJE (Table 1).

Table 1.

Summary of ICMJE-Based Criteria Used to Evaluate AI Policy in Scientific Journals.

This table outlines the key domains assessed when reviewing AI-related policies in scientific journals. Criteria were adapted from the International Committee of Medical Journal Editors (ICMJE) recommendations and include requirements for disclosure of AI use, description of AI contributions in manuscripts, exclusion of AI tools as authors, human responsibility for content accuracy, and mandatory review/editing of AI-generated materials. These criteria formed the basis for inclusion/exclusion and policy classification across journals in this study.

Disclosure of AI usage at submission: Authors must disclose whether AI-assisted technologies (e.g., LLMs, chatbots, image creators) were used.
Description of AI usage in cover letter, acknowledgments, or methods: In the cover letter, authors should describe how AI was used to create the work; writing assistance should be noted in the acknowledgments, and uses for data, analysis, or figure generation in the methods.
AI-assisted technologies and authorship: Chatbots, such as ChatGPT, should not be listed or cited as authors, since they cannot guarantee accuracy, integrity, or originality.
Human responsibility for accuracy and originality: Humans are responsible for submitted material and for ensuring its correctness, originality, and completeness.
Review and editing of AI-generated content: Authors must thoroughly review and edit AI-generated content to avoid errors, biases, and incomplete information.
Assertion of no plagiarism, including AI content: Authors must ensure there is no plagiarism, including in text and images created by AI, and provide proper attribution.

2.1. Statistical analysis

We performed statistical analyses using IBM SPSS Statistics, version 29.0.2.0. Qualitative data were reported as frequencies and percentages, while quantitative data were presented as medians and interquartile ranges (IQR). Normality of quantitative data was assessed using the Kolmogorov–Smirnov test. We used the Mann–Whitney U test to compare non-parametric quantitative data and the Chi-square test to compare qualitative data. A P-value of less than 0.05 was considered statistically significant for all tests.
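As a worked example, the Chi-square comparison can be reproduced from the 2x2 counts reported in the Results (society affiliation vs. AI-policy presence). This pure-Python sketch computes the Pearson statistic without continuity correction; the analysis itself was run in SPSS, so this is an illustration, not the authors' code.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic (no continuity correction) for a 2x2 table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under independence: row total * column total / n
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

#                 with policy  without policy
table = [[111, 32],   # society-affiliated (143 journals)
         [59, 11]]    # unaffiliated (70 journals)

stat = chi_square_2x2(table)
# stat ≈ 1.29, below the df = 1 critical value of 3.841 (alpha = 0.05),
# consistent with the nonsignificant association reported (P = 0.255).
```

The statistic falls well short of significance, matching the paper's conclusion that policy adoption is independent of society affiliation.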

3. Results

Our initial search identified 291 journals. After applying the exclusion criteria, 213 journals remained for analysis. Comparisons of AI policy adoption between cardiology and vascular journals revealed several notable differences. Among the 147 cardiology journals, 115 (78 %) had adopted AI policies, whereas 32 (22 %) had not. Among the 127 vascular journals, 113 (89 %) had adopted AI policies, compared with 14 (11 %) that had not (Fig. 2).

Fig. 2.

Fig. 2

A comparison of AI policy adoption in cardiology and vascular journals. Among 147 cardiology journals, 115 (78 %) have implemented an AI policy, while 32 (22 %) have not. In comparison, 113 (89 %) of 127 vascular journals followed AI guidelines, whereas 14 (11 %) did not, showing a higher prevalence of AI policy adoption among vascular journals.

Among these 213 journals, we determined that 143 (67.1 %) were affiliated with a cardiovascular society, and 70 (32.9 %) were not affiliated. Of the 213 journals, 170 (79.8 %) had a policy discussing AI or LLM use for manuscript preparation.

A review of AI policies in these 170 journals revealed universal adherence to the core principles outlined by ICMJE. This included provisions related to AI chatbots, the limitations of AI language models, disclosure of AI involvement, the role of AI as authors, acknowledgment of AI contributions, and the use of AI in content generation. However, we observed significant variations in the detail of these guidelines among journals.

Most notably, none of the journals in our analysis explicitly prohibited the use of AI-assisted technologies. We also noted that 111 of 143 (77.6 %) society-affiliated journals had AI-use policies, as did 59 of 70 (84.2 %) unaffiliated journals. A Chi-square test comparing policy adoption by society affiliation found no statistically significant association (P = 0.255). Additionally, 6 of the 213 journals were not indexed in Web of Science, and 1 of the 213 was not indexed in Scopus.

When analyzing metrics data, we found that the median values for CiteScore, SNIP, and SJR of the included journals were 5.35 (range: 1.1–53.1), 0.87 (range: 0.35–7.71), and 0.77 (range: 0.24–8.76), respectively. The median JIF was 2.7 (range: 0.58–41.7), and the median JCI was 0.7 (range: 0.08–8.42). Journals with AI-use policies did not exhibit significantly higher scores on any of these metrics; no statistically significant difference was observed between journals with and without AI policies for these variables (P > 0.05). However, a Chi-square test revealed a statistically significant association between journal quartile ranking and AI policy adoption (P = 0.035). Specifically, AI-use policies were more prevalent among journals in higher quartiles (Q1 and Q2) than in lower quartiles (Q3 and Q4) (Fig. 3).

Fig. 3.

Fig. 3

Comparison of journal metrics between journals with and without Artificial Intelligence (AI) usage policies.

Metrics include Journal Impact Factor (JIF), CiteScore, Source Normalized Impact per Paper (SNIP), SCImago Journal Rank (SJR), and Journal Citation Indicator (JCI). Data are presented as median values with interquartile ranges (IQR), and p-values indicate statistical significance between the groups.

Analysis by journal access type revealed a statistically significant association between access type and AI policy adoption. Open-access journals were significantly more likely to include AI-use policies than non-open-access journals (P = 0.007), and hybrid journals also showed a significant association (P = 0.012). No significant difference was observed among subscription-based journals (P = 0.568), although the low number of purely subscription-based journals may have limited statistical power. Additionally, journals with an AI policy (n = 159) did not differ significantly in publication rate from those without one (n = 39) (P = 0.715), suggesting no meaningful relationship between policy presence and publication volume (Fig. 3).

Finally, we analyzed the inclusion of AI in manuscript authorship and content generation. Among journals with an AI policy, 156 out of 158 (98.7 %) explicitly excluded AI from authorship, and 2 out of 158 (1.3 %) did not specify any exclusion. With regard to AI content generation, we observed that all 159 journals with an AI policy allowed its use, and none of these journals fully prohibited AI-assisted technologies.

4. Discussion

The integration of AI and LLMs into academic research has become increasingly common. As AI tools are rapidly developed and adopted for manuscript preparation, they help authors and editorial teams present academic findings in innovative ways. However, using these tools also carries potential risks, including plagiarism, bias, hallucination, and the inclusion of incorrect information in medical academic content [11]. Consequently, further investigation of these issues is necessary across multiple disciplines to safeguard the future of scientific literature.

An earlier study that investigated the top 25 SCImago-ranked cardiology journals found that all of them had AI-use policies [12]. Whereas that study examined only top-ranked publications, our research offers a broader perspective by encompassing all cardiology and vascular journals, regardless of rank.

Our results indicate a high prevalence of AI policy adoption among journals, with 79.8 % incorporating AI-related guidelines in various forms. Specifically, cardiology and vascular journals demonstrated adoption rates of 78 % and 89 %, respectively, suggesting slightly higher adoption among vascular journals. This widespread acceptance underscores growing awareness of the ethical and practical implications of AI tools in cardiovascular research. These findings align with studies in other medical fields, such as ophthalmology and radiology, which have similarly reported increasing adoption of AI policies [13,14]. Nevertheless, roughly one fifth of the journals in our analysis lacked clear AI regulations, showing that a significant proportion of cardiology and vascular journals have yet to implement AI-specific guidelines. While the proportions of journals lacking policies in cardiology and vascular medicine (22 % and 11 %, respectively) are lower than those reported in fields such as ophthalmology (36.9 %) and radiology (38.3 %), a significant gap persists. This emphasizes the continuing need for more widespread adoption of AI regulations to ensure the ethical and successful integration of AI across all fields.

Interestingly, our analysis revealed that journal access type may influence the likelihood of adopting AI policies. Journals that are open access or hybrid were significantly more likely to have implemented AI-related author guidelines compared to fully subscription-based journals. This may reflect differences in editorial priorities, transparency standards, or responsiveness to emerging technologies, particularly in open-access platforms where ethical scrutiny and global visibility may be higher.

Among journals with AI policies, adherence to the principles of the ICMJE is a positive sign [9]. These principles, including the prohibition of AI authorship (noted as 98.7 % in our study), emphasize the importance of human accountability for AI-generated content. However, they also require that any AI tools used be disclosed in both the cover letter and the submitted work. While some journals provide detailed guidelines on how to disclose AI use, others offer more general instructions, which may lead to inconsistent application or even unintentional misuse of AI in research.

One of the most noteworthy findings is the lack of significant differences in journal metrics (CiteScore, SNIP, SJR, JIF, and JCI) between journals with and without AI policies. This contradicts previous research in ophthalmology and radiology, which found that journals with AI guidelines had higher impact metrics [13,14]. In our analysis, the presence of AI policies in cardiology and vascular journals was not associated with higher journal metrics. It is important to note that this study is observational in nature and does not support causal inference; while our findings suggest that the presence of AI policies is not associated with journal citation impact, these results should not be interpreted as evidence of a causal relationship. That said, we observed a trend toward slightly lower median impact factors in journals without explicit AI policies. This may suggest that higher-impact journals are more proactive in implementing ethical and transparency-focused editorial policies, including those related to emerging technologies such as AI, and it aligns with patterns observed in other fields, such as ophthalmology and radiology, where AI regulations were more directly linked to impact metrics. This insight illustrates the importance of considering field-specific factors when assessing the significance of AI policy, as well as the need to construct tailored regulations that align with the priorities and practices of each scientific domain.

However, we found that quartile ranking was significantly associated with the presence of AI-use policies. Journals in Q1 and Q2 were significantly more likely to implement such policies than those in Q3 or Q4, supporting the notion that higher-ranked journals may prioritize early adoption of ethical publishing practices. These journals may have more resources, stronger editorial infrastructures, and greater incentives to align with evolving global standards related to AI usage in academic publishing.

We additionally examined whether society affiliation influenced the likelihood of AI policy adoption. Surprisingly, there was no significant association, despite the fact that professional societies often lead efforts to establish ethical standards for their journals. Still, the prevalence data show that AI policies are common among both society-affiliated cardiovascular journals (77.6 %) and unaffiliated journals (84.2 %) (P = 0.255). This finding indicates consistent adoption of AI policies across cardiovascular journals regardless of society affiliation, and it highlights the important role cardiovascular and vascular societies can play in shaping ethical practices and practical AI-usage guidelines within their domains.

Our findings showed no significant difference in publication rate between journals with and without AI policies, implying that policy adoption is not solely driven by journal size or output. This may reflect a broader cultural or ethical shift rather than operational factors.

This study has several limitations. Due to its cross-sectional approach, our study only reflects AI policy adoption at one specific point in time, and the fast growth of AI technology implies that AI policies may change quickly. Future research should track these changes over more extended periods. Additionally, focusing on English-language journals with publicly accessible submission guidelines may limit the generalizability of our findings to non-English or less accessible journals. Although we used an extensive list of keywords to locate AI policies, our method may not have detected some policies.

Furthermore, although our focus was on cardiology and vascular medicine, these issues are relevant to other scientific disciplines, highlighting the need for broader research across additional fields. Finally, the absence of a statistically significant difference in journal metrics between journals with and without AI policies may be influenced by other variables, such as publisher practices or editorial board decisions, which this study did not fully explore.

Our results highlight how crucial it is to track and update AI policies as they evolve over time. To maintain dynamic ethical standards, journals should establish flexible rules that are simple to update as new information becomes available. Creating and implementing such guidelines should follow best practices for the appropriate use of AI in academic research, which will require cooperation among journal editors, publishers, and professional associations.

5. Conclusion

In conclusion, this analysis sheds light on how cardiology and vascular journals are implementing AI policies. Even though the majority of journals possess varying degrees of AI policy, the lack of noticeable differences in journal metrics and the different levels of policy specificity suggest that more uniform and comprehensive rules are required. Journals must create dynamic policies that uphold ethical norms for the use of AI in academic research. By doing this, journals can preserve the integrity of their research, increase transparency, and remain updated on the rapidly evolving field of AI-assisted publishing in academia.

CRediT authorship contribution statement

Mustafa Alkhawam: Writing – review & editing, Writing – original draft, Visualization, Project administration. Amr Almobayed: Writing – review & editing, Formal analysis, Data curation, Conceptualization. Akash Pandey: Writing – review & editing, Writing – original draft. Navin C. Nanda: Writing – review & editing, Supervision. Ali J. Ebrahimi: Writing – review & editing, Visualization, Supervision. Mustafa I. Ahmed: Writing – review & editing, Validation, Supervision.

Ethical statement

This study did not involve human participants, animal subjects, or patient data. Therefore, approval from an Institutional Review Board (IRB) or Ethics Committee was not required. The research was conducted using publicly available information from MEDLINE-indexed cardiology and vascular journals, specifically their submission guidelines and policies on artificial intelligence. The study adheres to ethical principles for research integrity, transparency, and responsible reporting.

Funding

There was no funding for this work.

Declaration of competing interest

The authors declare no conflicts of interest relevant to this work.

Data availability

Not applicable.

References

1. Zawacki-Richter O., Marín V.I., Bond M., et al. Systematic review of research on artificial intelligence applications in higher education – where are the educators? Int. J. Educ. Technol. High. Educ. 2019;16(1):1–27. doi: 10.1186/s41239-019-0171-0.
2. Aydın Ö., Karaarslan E. OpenAI ChatGPT generated literature review: digital twin in healthcare. SSRN; Rochester, NY: 2022. Available from: https://papers.ssrn.com/abstract=4308687 [cited 2023 Sep 2].
3. Yu P., Xu H., Hu X., et al. Leveraging generative AI and large language models: a comprehensive roadmap for healthcare integration. Healthcare (Basel). 2023;11(20):2776. doi: 10.3390/healthcare11202776.
4. Shoja M.M., Van de Ridder J.M.M., Rajput V. The emerging role of generative artificial intelligence in medical education, research, and practice. Cureus. 2023;15(6). doi: 10.7759/cureus.40883.
5. Van Noorden R., Perkel J.M. AI and science: what 1,600 researchers think. Nature. 2023;621(7980):672–675. doi: 10.1038/d41586-023-02980-0.
6. Cascella M., Montomoli J., Bellini V., et al. Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. J. Med. Syst. 2023;47(1):33. doi: 10.1007/s10916-023-01925-4.
7. Kacena M.A., Plotkin L.I., Fehrenbacher J.C. The use of artificial intelligence in writing scientific review articles. Curr. Osteoporos. Rep. 2024;22(1):115–121. doi: 10.1007/s11914-023-00852-0.
8. Hutson M. Could AI help you to write your next paper? Nature. 2022;611(7934):192–193. doi: 10.1038/d41586-022-03479-w.
9. ICMJE. Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. ICMJE; 2023. https://www.icmje.org/icmje-recommendations.pdf.
10. Amann J., Blasimme A., Vayena E., et al. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med. Inform. Decis. Mak. 2020;20(1):310. doi: 10.1186/s12911-020-01332-6.
11. Conroy G. How ChatGPT and other AI tools could disrupt scientific publishing. Nature. 2023;622(7982):234–236. doi: 10.1038/d41586-023-03144-w.
12. Inam M., Sheikh S., Minhas A.M.K., et al. A review of top cardiology and cardiovascular medicine journal guidelines regarding the use of generative artificial intelligence tools in scientific writing. Curr. Probl. Cardiol. 2024;49(3). doi: 10.1016/j.cpcardiol.2024.102387. PMID: 38185435.
13. Almobayed A., Eleiwa T.K., Badla O., et al. Do ophthalmology journals have AI policies for manuscript writing? Am. J. Ophthalmol. 2024;271:38–42. doi: 10.1016/j.ajo.2024.11.003. PMID: 39515455.
14. Simsek O., Manteghinejad A., Vossough A. A comparative review of imaging journal policies for use of AI in manuscript generation. Acad. Radiol. 2024;31(12):5232–5236. doi: 10.1016/j.acra.2024.05.006. PMID: 38772797.



Articles from American Heart Journal Plus: Cardiology Research and Practice are provided here courtesy of Elsevier
