Artificial intelligence (AI) has become a transformative force in science and is set to become an indispensable tool: it can perform complex methodological tasks, enhance research accessibility, and assist in scientific communication. While AI technology has been around for some time, interest exploded with OpenAI's release of ChatGPT (based on GPT-3.5) in November 2022, which brought generative AI into the mainstream and made it possible for anyone, regardless of technical background, to benefit from its capabilities. With major companies such as OpenAI, Microsoft, Google, and Meta entering the generative AI arena, we are witnessing a proliferation of tools, including GPT-4, Copilot, Gemini, Llama 3, Claude 3.5 Sonnet, and many others.
AI offers vast opportunities to transform the scientific publication system in many ways, for authors and journals alike. For example, its ability to rapidly process long texts and seamlessly refine language and grammar can reduce proofreading time and level the playing field for nonnative English speakers, who may face additional challenges when publishing their work. Noy and Zhang [1] demonstrated this value under experimental conditions: they assigned 453 individuals midlevel writing tasks, randomized them to use ChatGPT or not, and found that ChatGPT significantly helped those who initially performed poorly. AI could also assist journals in the initial screening of manuscripts and in the scientific dissemination of their publications. By analyzing full manuscripts, AI can generate short summaries for digital communication and social media, which have been shown to be effective in enhancing scientific dissemination [2].
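As an illustration of this kind of workflow, the minimal Python sketch below shows how a journal might prompt a large language model to condense an abstract into a short plain-language summary for social media. It is a hypothetical example rather than a tool used by this journal: the OpenAI Python SDK, the "gpt-4o" model name, the prompt wording, and the summarize_for_social_media helper are all assumptions.

```python
# Hypothetical sketch only: condense a manuscript abstract into a short
# plain-language summary for social media. Assumes the OpenAI Python SDK
# (v1.x) is installed and an API key is set in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()


def summarize_for_social_media(abstract: str, max_chars: int = 280) -> str:
    """Ask the model for a lay summary no longer than max_chars characters."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute whichever model is available
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarize scientific abstracts for a general audience. "
                    f"Reply with one plain-language summary under {max_chars} characters."
                ),
            },
            {"role": "user", "content": abstract},
        ],
    )
    return response.choices[0].message.content.strip()
```

Any summary generated this way would still need review by the authors and editors before posting, for the accuracy reasons discussed next.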
As with any powerful human invention, AI also presents significant downsides and potential for abuse. The “publish or perish” mentality in academia is globally prevalent and may incentivize authors to cut corners. We have already witnessed troubling examples, such as the publication of an entire AI-written book without a real author’s knowledge and the publication of scientific articles containing AI-generated text that the authors had not modified or corrected. Another significant issue with generative AI is the trustworthiness and factual accuracy of its output. When used correctly, AI-generated content may outperform human-generated content: Van de Wyngaert and colleagues [3] found that ChatGPT-generated information on hemophilia was superior to that provided by hemophilia organizations on their websites. However, AI can also produce deceptively convincing content that seems plausible but is entirely inaccurate. For example, it can fabricate false information or nonexistent citations, often referred to as “hallucinations.” Chen and Chen [4] found that 98% of citations generated by ChatGPT 3.5 were fake; the rate improved significantly with version 4.0 but remained high at 20% [4], highlighting the need for careful evaluation and verification of AI-generated content.
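One practical safeguard against hallucinated references is to check every DOI an AI tool supplies against a bibliographic registry before it enters a manuscript. The short sketch below is a hypothetical example, not an established screening tool: it queries the public CrossRef REST API with the requests library, and the verify_citation helper, the crude word-overlap comparison, and the 0.6 threshold are illustrative assumptions.

```python
# Hypothetical sketch only: check whether a DOI supplied by an AI tool is
# registered in CrossRef and whether the registered title roughly matches
# the claimed one.
import requests


def verify_citation(doi: str, claimed_title: str) -> bool:
    """Return True if the DOI resolves in CrossRef and its title resembles claimed_title."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return False  # DOI not registered: possibly fabricated or mistyped
    titles = resp.json().get("message", {}).get("title") or [""]
    registered_words = set(titles[0].lower().split())
    claimed_words = set(claimed_title.lower().split())
    # Crude word-overlap check; real screening would also compare authors,
    # journal, year, and pages.
    overlap = len(claimed_words & registered_words) / max(len(claimed_words), 1)
    return overlap > 0.6


# Example, using reference [1] of this editorial:
# verify_citation("10.1126/science.adh2586",
#     "Experimental evidence on the productivity effects of generative artificial intelligence")
```

A registered DOI with a matching title shows only that the reference exists; it does not guarantee that the citation supports the claim it is attached to.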
Many journals have instituted policies on AI use in scientific writing and publishing. As a journal, we recognize that large language models will inevitably be used in manuscript writing, and we are alert to both their benefits and their potential abuses. It is important to consider the different levels at which AI could be used in editing. AI should not be used to write manuscripts in their entirety, and it cannot be listed as a coauthor because it cannot fulfill essential authorship requirements, such as approving the final version of the manuscript before submission. We hope to be able to screen submitted manuscripts for AI-generated text, much as we screen for plagiarism, but that is not currently possible. On the other hand, using AI merely for copyediting, to correct spelling and grammar, unify style, and improve readability, is reasonable; indeed, some journals recommend AI-assisted polishing of text before submission. Substantive editing with AI, however, may require disclosure in the declaration sections, similar to other contributions.
Editors and peer reviewers should not use AI when commenting on submissions. However, AI will likely soon be used to screen all manuscripts for aspects such as appropriate length, structure, language, and citation accuracy, as well as to verify author authenticity and check for past retractions. Some journals have already begun doing this to a limited degree, and such screening will likely become more detailed and universal soon.
AI is bound to become universal and integral to the scientific writing and editorial processes, and its influence grows more significant every day. By dramatically reducing time spent on routine tasks, AI could allow the scientific community to focus on creative thinking and help propel innovation to unprecedented heights. While some readers may doubt that it will affect them, we predict that AI will become as indispensable as the internet and deeply embedded in our daily lives. Embracing this change is crucial: harnessing the potential of AI can revolutionize the future of science by transforming the way we conduct and communicate research. Avoiding or discouraging AI use will only hinder progress. Instead, we must ensure that AI, like any other powerful tool, is used responsibly and ethically, guided by oversight, a commitment to integrity, and an understanding of its limitations and pitfalls.
Acknowledgments
Funding
None.
Author contributions
Both authors contributed equally to this manuscript.
Relationship Disclosure
None.
Footnotes
Handling Editor: Dr Michael Makris
References
1. Noy S, Zhang W. Experimental evidence on the productivity effects of generative artificial intelligence. Science. 2023;381:187–192. doi:10.1126/science.adh2586.
2. Abou-Ismail MY, van der Wal DE, Cheong MA, Masten A, Blount L, Brown MC. Can scientific journals benefit from a social media presence? An analysis of online traffic data and author perspectives. Res Pract Thromb Haemost. 2024;8. doi:10.1016/j.rpth.2024.102387.
3. Van de Wyngaert C, Iarossi M, Hermans C. How good does ChatGPT answer frequently asked questions about haemophilia? Haemophilia. 2023;29:1646–1648. doi:10.1111/hae.14858.
4. Chen A, Chen DO. Accuracy of chatbots in citing journal articles. JAMA Netw Open. 2023;6. doi:10.1001/jamanetworkopen.2023.27647.