The British Journal of General Practice. 2024 Mar 1;74(740):126–127. doi: 10.3399/bjgp24X736605

Generative AI in medical writing: co-author or tool?

Richard Armitage 1
PMCID: PMC10904112  PMID: 39222432

ChatGPT is now 1 year old. This large language model (LLM), which was created by OpenAI and made freely available to the public on 30 November 2022, has made such a broad and disruptive impact before its first birthday that many believe the dawn of generative AI constitutes a technological era of similar import to that of electrical power.1 In late 2023, while GPT-4 (the latest model underlying ChatGPT) still leads the user-friendly LLM landscape, it faces growing competition from the likes of Meta’s Llama, Microsoft’s Bing AI, Quora’s Poe, Anthropic’s Claude-2, and Google’s Bard.

The generative power of these AI tools is rapidly disrupting almost every industry, including clinical medicine, healthcare and health systems, and medical writing.2 Indeed, hopeful authors who submit their manuscripts to The Lancet and its sub-journals are now required to make a declaration regarding their use of generative AI and AI-assisted technologies in their work, attesting to their responsibility for the article contents. The Lancet declares that generative AI is not an author, and dictates that ‘these technologies should only be used to improve readability and language’.3 But are these statements — the first a factual claim, the second a normative assertion — entirely true? Let’s deal with each in turn.

First, is generative AI an author? To answer this, we must first ascertain what constitutes an author in the context of medical writing. For authors to succeed in this domain they must competently demonstrate a variety of capabilities including idea generation, literature searches, evidence reviews, statistical analysis, information synthesis, findings summarisation, conclusion formulation, manuscript writing, and abstract generation. These aptitudes are in addition to the basic requirement to produce written work using academic language that is concise, readable, and with flawless spelling and grammar. In December 2023, it is obvious that the leading LLMs harbour all of these capabilities to degrees approaching or sometimes exceeding those of human authors (and are clearly super-human in terms of speed),2,4–7 such that leading medical journals have taken public positions on the use of LLMs in the works that they publish.3,8–11 It seems clear, therefore, that generative AI could be considered an author with regard to its proficiencies in medical writing (although it does not — at least for now — have the capacity to autonomously decide to act as such an author, but must be prompted to do so by the human who controls it).

Second, should generative AI only be used to improve language and readability in medical writing, or should the capabilities of LLMs be harnessed to conceive of, formulate, and improve such works? Before responding to this, it must first be acknowledged that these technologies simply will be used for this purpose, regardless of whether they ought to be. A complete absence of their influence in medical writing would require not a single instance of their use in the 1.3 million articles (most of which have multiple human collaborators) added to the MEDLINE database alone each year.12 Given the rapid and widespread uptake of LLMs within the last 12 months, such perfect abstinence is deeply improbable. Against the backdrop of this reality, should generative AI be used for this purpose? In response, a straightforward yet powerful consequentialist argument can be mounted, which supports their use if they bring about the best outcome (which is, in our domain, the improved health of our patients through the influence of high-quality medical writing). This argument supports the immediate deployment of generative AI in medical writing, since its utility in augmenting the output of human authors has already been established.

As such, it seems that generative AI can be perceived as an author, and a strong ethical case can be made for the full utilisation of its capabilities to bring about better patient outcomes. The following question is therefore raised: should LLMs be recognised as independent co-authors in medical writings in the same manner that all other (human) collaborators are, or not?

I think not, for three main reasons. First, the human author’s prowess in wielding generative AI will soon be considered a necessary component of the author’s skillset, in a manner akin to their proficiency with word processors and internet browsers. Since no author recognises Microsoft Word or Google Chrome — which are widely used tools in the production of medical writings — as collaborators in their work, LLMs should similarly not be recognised as co-authors but merely regarded as tools that authors master and deploy in the production of their writings.

Second, the landscape of LLMs is rapidly expanding (indeed, custom versions of ChatGPT can already be created by individual users).13 This means that recognition of individual LLMs as co-authors would soon become a meaningless exercise, since they might not be available to or understandable by those who do not use them (in addition to LLMs themselves being both uncontactable by readers of the works and unaware of any authorship recognition bestowed upon them).

Finally, assigning authorship to generative AI might serve to transfer accountability for the work at least partially away from the human co-author. Since the LLM does not harbour legally recognised personhood,14 this raises the question of to whom that accountability is transferred: the owner of the LLM, the engineer who wrote it, or some other entity entirely? These ambiguities, combined with the fact that the human author autonomously chooses to utilise the generative AI in their work, mean that authorship should be assigned exclusively to humans.


Featured image/author statement: Generative AI (DALL·E 3) was used to produce the article’s image, but no other use of generative AI was deployed in the production of this article.

Accordingly, while generative AI already harbours impressive capabilities that often meet and even exceed those of human authors, and while a strong case can be made for its power to be deployed in medical writing, these technologies should not be recognised as the independent co-authors of human collaborators. Instead, they should be regarded as indispensable tools that augment human authors, enhance their capabilities, and constitute a newly required proficiency in the skillset of real authors.

Footnotes

This article was first posted on BJGP Life on 8 January 2024; https://bjgplife.com/generativeai

References


