‘Science is a reflection of human dignity. It represents the pursuit of truth, the exploration of the nature of the world, and the comprehension of life itself. Science is the beacon that leads us towards a brighter future,’ stated Albert Einstein in the 20th century.
Despite this, the question of how science can enhance human life while preserving human dignity remains a topic of discussion. There is no denying that technological advances have greatly improved life on Earth, with space exploration, personalized medicine, new drugs, and computing having a significant impact on people’s lives. For example, messenger RNA (mRNA) vaccines have played a crucial role in fighting the current coronavirus disease 2019 (COVID-19) pandemic.1,2
Similarly, artificial intelligence (AI) has the potential to revolutionize scientific knowledge and improve the quality of human life, much like the major scientific breakthroughs of the 20th century, including the discovery of DNA, the moon landing, the theory of relativity, and quantum mechanics. AI is likely to have a major impact on health care, sustainability, climate change, and environmental issues. Ideally, through the use of sophisticated AI-based devices, cities may become less congested, less polluted, and generally more livable, and health care systems may also improve. Thus, it is the responsibility of the scientific community to invest significant human and economic resources in the development of AI science.
Nevertheless, the advancement of AI in science poses significant ethical dilemmas.3,4,5 These include questions about accountability for AI’s errors, the impact of machines on human interactions, and how to address unintended consequences of AI. There is an urgent need for a ‘behavioral code’ to ensure that AI is used in a humane manner. Therefore, scientists, doctors, and philosophers are developing principles of ‘algo-ethics’, which guide the ethical use of AI based on the principles of transparency, inclusion, responsibility, impartiality, reliability, security, and privacy, so that AI remains in the service of people.
Technology often advances faster than the scientific community’s ability to evaluate it. Before determining which applications of AI bring benefits and which pose risks, it is essential to understand the potential and challenges of new technologies. This creates a ‘gray zone’, in which researchers face ethical challenges without adequate training or support.
Recently, there has been noteworthy discussion about the use of ChatGPT and other AI tools as coauthors of scientific papers.6,7,8 ChatGPT (Chat Generative Pre-trained Transformer) is a large language model developed by OpenAI (San Francisco, CA). It is built on the GPT-3 family of language models and has been fine-tuned using both supervised and reinforcement learning methods.9 In scientific research, it can be a useful tool for producing well-written, sometimes comically absurd, mini essays in response to prompts. It can also be used to compose short computer programs, conduct literature searches, analyze data for statistical purposes, and detect plagiarism and mistakes in scientific texts.
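By way of illustration, the following minimal Python sketch shows how such a tool might be invoked programmatically to flag mistakes in a manuscript passage, one of the uses listed above. It assumes the openai Python package (v1.x interface) and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative assumptions rather than a recommended workflow.

```python
# Minimal sketch (illustrative only): asking a ChatGPT-style model to
# flag language mistakes in a manuscript passage.
# Assumes the `openai` package (v1.x interface) is installed and an
# API key is available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

passage = (
    "mRNA vaccines have played a crucial role in fighting the "
    "COVID-19 pandemic."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a careful scientific copy editor. List any "
                "grammatical errors, typos, and unclear phrasing in "
                "the user's text."
            ),
        },
        {"role": "user", "content": passage},
    ],
)

# The model's suggestions are advisory text that a human author must
# verify: the tool remains an instrument, not a coauthor.
print(response.choices[0].message.content)
```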
Some authors have expressed their opposition to the use of ChatGPT or similar AI tools as coauthors of scientific publications due to their inability to meet editorial standards.10,11 Nonsentient devices such as ChatGPT cannot take responsibility for the content and integrity of scientific papers or give consent to terms of use and distribution rights.
Here, we want to emphasize the potential negative consequences of using ChatGPT as an author on both the scientific community and humanity as a whole.
Concerning the scientific community, the use of ChatGPT as a coauthor of a scientific publication raises serious concerns. Equating the dignity of researchers with that of a machine is unacceptable. According to the United Nations (UN), human dignity is the intrinsic value of every person, recognized as a universal and inalienable right that must be respected and protected by all societies and institutions. It is clear, however, that machines such as ChatGPT cannot possess dignity. By acknowledging ChatGPT as a coauthor, we would be denying authors their dignity as humans and scientists.
If we lose our dignity, will we have the strength to defend the good? Will it make sense to promulgate what is right and stigmatize what is wrong? Will we have the authority to defend science against conspiracy theorists and fake news?
These three questions prompt us to examine the potential impact of using ChatGPT as a coauthor on human beings. It is important to consider how the public might react on learning that an AI device was one of the authors of a scientific discovery that affects their lives. In biomedical research, patients might ask the AI device, rather than a doctor, for a diagnosis or treatment, since the physician and the AI device could have coauthored the same study. If this were the case, could we reject our nonsentient coauthor? This would be difficult, because we would have to explain an undeniable reality that we ourselves had denied: the superiority and complexity of human thought compared with that of machines.
In our opinion, these and other considerations should alarm the scientific community. Researchers and publishers should promptly preclude the use of ChatGPT as an author of scientific publications, thus providing the time needed to discuss this issue.
We have the responsibility to drive technological progress while preserving the dignity of science and pursuing the good of humanity.
PS: We used ChatGPT to detect possible plagiarism and mistakes. It is an exceptional tool, certainly not an author.
Acknowledgments
Funding
None declared.
Disclosure
The authors have declared no conflicts of interest.
References
1. Tregoning J.S., Flight K.E., Higham S.L., Wang Z., Pierce B.F. Progress of the COVID-19 vaccine effort: viruses, vaccines and variants versus efficacy, effectiveness and escape. Nat Rev Immunol. 2021;21(10):626–636. doi: 10.1038/s41577-021-00592-1.
2. Fernandes Q., Inchakalody V.P., Merhi M., et al. Emerging COVID-19 variants and their impact on SARS-CoV-2 diagnosis, therapeutics and vaccines. Ann Med. 2022;54(1):524–540. doi: 10.1080/07853890.2022.2031274.
3. Kargl M., Plass M., Müller H. A literature review on ethics for AI in biomedical research and biobanking. Yearb Med Inform. 2022;31(1):152–160. doi: 10.1055/s-0042-1742516.
4. Schuklenk U. On the ethics of AI ethics. Bioethics. 2020;34(2):146–147. doi: 10.1111/bioe.12716.
5. Kazim E., Koshiyama A.S. A high-level overview of AI ethics. Patterns (N Y). 2021;2(9). doi: 10.1016/j.patter.2021.100314.
6. ChatGPT Generative Pre-trained Transformer, Zhavoronkov A. Rapamycin in the context of Pascal’s Wager: generative pre-trained transformer perspective. Oncoscience. 2022;9:82–84. doi: 10.18632/oncoscience.571.
7. Curtis N., ChatGPT. To ChatGPT or not to ChatGPT? The impact of artificial intelligence on academic publishing. Pediatr Infect Dis J. 2023;42(4):275. doi: 10.1097/INF.0000000000003852.
8. King M.R., chatGPT. A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cell Mol Bioeng. 2023;16(1):1–2. doi: 10.1007/s12195-022-00754-8.
9. The Lancet Digital Health. ChatGPT: friend or foe? Lancet Digit Health. 2023:S2589-7500(23)00023-7.
10. Thorp H.H. ChatGPT is fun, but not an author. Science. 2023;379(6630):313. doi: 10.1126/science.adg7879.
11. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023;613(7945):620–621. doi: 10.1038/d41586-023-00107-z.
