Abstract
Artificial intelligence (AI) technologies were developed to help authors improve the organization and quality of their published papers, which are growing in both quantity and sophistication. Although AI tools, in particular natural language processing systems such as ChatGPT, have proven beneficial in research, their use raises concerns about accuracy, accountability, and transparency with respect to the norms governing authorship credit and contributions. In genomics, algorithms rapidly examine large amounts of genetic data to identify potential disease-causing mutations. By screening millions of drugs for potential therapeutic benefits, they can quickly and relatively economically identify novel approaches to treatment. Nonhuman authors also help researchers from several fields collaborate on difficult tasks, promoting interdisciplinary research. However, employing nonhuman authors carries a number of significant disadvantages, including the potential for algorithmic bias. Because machine learning algorithms can only be as objective as the data they are trained on, biased data may be reinforced by the algorithm. It is overdue for scholars to raise the fundamental moral concerns at stake in the fight against algorithmic bias. Overall, even though the use of nonhuman authors has the potential to significantly improve scientific research, it is crucial for scientists to be aware of these drawbacks and take precautions against bias and other limitations. Algorithms must be carefully designed and implemented to provide accurate and objective results, and researchers need to be mindful of the larger ethical ramifications of their use.
Keywords: artificial intelligence, limitations, medical role, nonhuman, research integrity
Introduction
HIGHLIGHTS
Artificial intelligence (AI) and machine learning algorithms have revolutionized study techniques in the field of surgery, especially in the areas of surgical planning, image analysis, and robotic surgery.
There are several potential advantages to the rapid advancements in AI and machine learning.
To avoid unanticipated, risky outcomes and hazards associated with the implementation of AI in society, it is vital to take into account all of the ethical, social, and legal elements of AI systems.
Overall, even though using nonhuman authors has the potential to considerably enhance scientific research, it is essential for scientists to be aware of the shortcomings and take steps to prevent bias and limitations.
Algorithms must be properly planned and executed in order to produce accurate and impartial findings, and researchers must be cognizant of potential negative consequences for workers and of the larger ethical ramifications of their use.
The term ‘artificial intelligence’ (AI) refers to systems that exhibit intelligent behavior by assessing their surroundings and acting with some autonomy in order to accomplish certain goals, according to the European Commission’s Communication on AI1. AI, machine learning (ML), deep learning, and data/computer science are related but distinct topics. AI refers to the wider idea of computers displaying intelligence akin to that of humans. ML, a branch of AI, focuses on techniques that enable computers to learn from data and make predictions or judgments. Deep learning, a kind of ML, uses layers of neural networks to extract complicated patterns. These fields are built on the foundation of data and computer science, which includes data gathering, storage, processing, and analysis. Together, they advance science and technology, revolutionizing a variety of sectors and raising moral and practical questions2,3.
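The layered structure that distinguishes deep learning can be sketched in a few lines of plain Python: each layer applies a weighted sum followed by a nonlinearity, and stacking layers lets the model extract increasingly complicated patterns. This is only an illustrative sketch; all weights and inputs below are invented values, whereas real networks learn their weights from data through optimization.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums passed through a sigmoid."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid nonlinearity
    return outputs

x = [0.5, -1.2]                                       # input features (invented)
h = layer(x, [[1.0, -0.5], [0.3, 0.8]], [0.0, 0.1])   # hidden layer, 2 units
y = layer(h, [[1.2, -0.7]], [-0.2])                   # output layer, 1 unit
```

Stacking more `layer` calls yields a deeper network; the composition of simple nonlinear steps is what allows such models to represent complex patterns that a single layer cannot.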
AI software, in particular voice assistants, image analysis software, search engines, speech recognition systems, and face recognition systems, can also be embedded in hardware devices, for instance, advanced robots, autonomous vehicles, and drones. AI-based systems can also be entirely software-based, functioning only in the virtual world2. AI-powered chatbot or digital assistant software may replicate human-like conversations and offer support or information in a virtual setting; these programs communicate with users through voice or text interfaces. Simulation software mimics and forecasts the behavior of real-world processes, systems, or events; examples include engineering simulation software, medical simulators, and aviation simulators that let users evaluate and test virtual prototypes. There are several potential advantages to the rapid advancements in AI and ML. However, to avoid unanticipated, risky outcomes and hazards associated with the implementation of AI in society, it is vital to take into account all of the ethical, social, and legal elements of AI systems4. The methodology employed in the present article comprises an extensive evaluation of scholarly articles, research papers, and other publications that examine the role of nonhuman authors in scientific research. The selected sources were methodically analyzed and synthesized to identify key themes and ideas related to the topic at hand. Thus, the objective of the current study is to define the contribution of nonhuman authors to the development of scientific knowledge in the field of AI.
Nonhuman factors
Over time, AI has become integrated into every step of the publishing process. AI technologies were developed to help authors enhance both the structure and the quality of their research articles, which are growing rapidly in number and sophistication4. AI tools now assist with every facet of scientific research publication: writing articles (e.g. Paperpal and Writefull); submitting articles (e.g. Wiley’s Rex, which automatically extracts data from research articles); screening manuscripts before submission (e.g. Penelope and Ripeta Review); and assisting peer review (e.g. SciScore for evaluating the strength of research methodology, and Proofig and ImageTwin for scientific image checking), although no comparable tool yet checks statistics5.
In November 2022, OpenAI released ChatGPT, a new natural language processing (NLP) tool6. ChatGPT is an example of generative AI: it can use previously unorganized data to generate something entirely new. There are several possible applications for ChatGPT, such as summarizing long articles or producing a first draft of a presentation that can then be refined. It could inspire thought in researchers, students, and teachers, and even help them produce essays of passable quality on a given subject7. This puts academic integrity and the principle of individual student knowledge in jeopardy. The evaluation process should reflect students’ true comprehension, capacity for critical thought, and writing skills. Using nonhuman authors to compose essays undermines the validity of student work and may create an unfair playing field.
Impact on research
Although AI technologies such as NLP systems (e.g. ChatGPT) have proven helpful in research, their use raises issues of accuracy, accountability, and transparency in relation to the requirements for authorship credit and contributions. They have also facilitated the fabrication and falsification of text, because they can generate narratives quickly from a few simple prompts8. Researchers should take care not to create false empirical data or alter existing data using NLP algorithms.
To regulate the use of large-scale language models in scientific publications, the journal Nature has developed a policy that forbids naming such tools as ‘credited authors on a research paper’ because ‘attribution of authorship carries with it, accountability for the work, and AI tools cannot take such responsibility’. According to the guideline, researchers’ use of these tools should be documented in the manuscript’s Methods or Acknowledgments section9. The academic publishing community has raised concerns over the abuse of these language models in scientific publications3,10. JAMA and the JAMA Network journals have modified the relevant guidelines in the JAMA Network Instructions for Authors to address these issues11.
Impact on surgery
AI and ML algorithms have revolutionized study techniques in the field of surgery, especially in surgical planning, image analysis, and robotic surgery12. While ML algorithms help with image analysis for diagnosis and treatment planning, AI algorithms assist in preoperative evaluation, surgical approach selection, and outcome prediction13. Human-controlled robotic devices improve surgical precision and reduce invasiveness. Transparency, interpretability, and ethical issues, however, present difficulties. To preserve research integrity and patient confidence in the use of AI in surgery, it is crucial to eliminate biases, ensure transparency in decision-making algorithms, and follow ethical standards.
Benefits
The use of nonhuman writers in scientific research has several advantages, including increased productivity, precision, and consistency of results. They promote transparent and complete investigations by providing comprehensive documentation of algorithms and data analysis methods14. Automating research procedures lowers the likelihood of human errors and increases the dependability and transparency of results. They are faster and more precise than humans in processing and analyzing enormous amounts of data15. For instance, in neuroscience, functional magnetic resonance imaging data has been analyzed using ML algorithms to spot patterns that may not be visible to the naked eye16.
Large quantities of genetic data are swiftly analyzed by computer algorithms in genomics to find probable disease-causing mutations. They can efficiently and relatively inexpensively discover novel therapies by screening millions of drugs for possible therapeutic effects. Nonhuman authors make it possible for researchers from several disciplines to work collaboratively on challenging tasks, facilitating interdisciplinary study. For instance, nonhuman authors analyze massive amounts of social data, including online interactions or survey results, to contribute to multidisciplinary social science research. This makes it possible for scholars in psychology, sociology, and other relevant sciences to work together to study how people behave, interact with one another, and follow societal trends.
Similarly, multidisciplinary cooperation between computer scientists and climate experts has led to a better understanding of the Earth’s climate system.
Drawbacks
There are several significant disadvantages to nonhuman authors, including the potential for algorithmic bias. Because ML algorithms can only be as objective as the data they are trained on, the algorithm may reinforce biases present in that data. If those biases are not acknowledged and corrected by human researchers, the data and findings may be prejudiced17. Another drawback is that nonhuman authors may not always be able to accurately express the complexity of certain research questions. For example, ML algorithms may be useful for image analysis but may struggle to capture the intricate and specific characteristics of other types of data, such as social or cultural phenomena18.
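The bias-reinforcement mechanism described above can be sketched in a few lines: a minimal, hypothetical frequency model (all records and group labels below are invented for illustration) learns hiring rates from biased historical data and then simply reproduces the disparity for otherwise identical candidates.

```python
from collections import defaultdict

# Invented historical records: (group, qualified, hired).
# The data is deliberately biased: qualified applicants from
# group "B" were rarely hired in the past.
records = [
    ("A", True, True), ("A", True, True), ("A", True, True),
    ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True),
    ("B", False, False), ("B", False, False),
]

def train(records):
    """Learn P(hired | group, qualified) from raw frequencies."""
    counts = defaultdict(lambda: [0, 0])  # (group, qualified) -> [hired, total]
    for group, qualified, hired in records:
        counts[(group, qualified)][0] += int(hired)
        counts[(group, qualified)][1] += 1
    return {k: hired / total for k, (hired, total) in counts.items()}

def predict(model, group, qualified, threshold=0.5):
    """Recommend hiring when the learned historical rate exceeds the threshold."""
    return model.get((group, qualified), 0.0) >= threshold

model = train(records)

# Two equally qualified candidates, differing only in group:
# the model reproduces the historical disparity rather than correcting it.
print(predict(model, "A", True))  # True  — qualified candidate from group A accepted
print(predict(model, "B", True))  # False — equally qualified candidate from group B rejected
```

Nothing in the algorithm is malicious; the skewed outcome comes entirely from the training data, which is precisely why human oversight of both data and model is needed.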
Nonhuman authors may lack the ability to ask follow-up questions or probe deeper into certain areas of research, which can limit the scope and depth of the conclusions drawn. Additional ethical concerns revolve around data security and privacy, as well as the possible dehumanization of the research process. In several academic domains, AI and algorithms are replacing human workers, raising worries about job displacement. Moreover, nonhuman authors are constrained by their implementation: algorithms that are poorly developed or applied may yield erroneous results, with major repercussions for study findings.
Ethical implications
Ethics has been defined as the moral principles that govern a person’s behavior or the conduct of an activity; a basic example of an ethical principle is to treat everyone with respect19,20. AI ethics concerns how human creators, producers, and operators ought to act in order to reduce the ethical harms that AI may cause in human society, whether through unethical design, improper application, or abuse. The scope of AI ethics includes immediate, here-and-now concerns, such as data privacy and bias in current AI systems; near- and medium-term concerns, such as the impact of AI and robotics on jobs and the workplace; and longer-term concerns, such as the possibility of AI systems attaining superintelligence21.
Limitations
There are few thorough empirical studies specifically assessing the effects of nonhuman authors on research integrity, because this topic is still a relatively new and developing aspect of scientific research. This limitation makes it challenging to locate reliable information and conclusions to support the present article. The viewpoints expressed in this article are those of the individual authors and may not fully reflect the spectrum of viewpoints held by researchers, policymakers, or other interested parties discussing nonhuman authors. Such a limitation can leave out crucial points of view and viable refutations.
Conclusion and recommendations
Although AI technologies have the potential to benefit all nations and be of significant use to mankind, they present fundamental ethical issues: for instance, prejudices might be ingrained and exacerbated, leading to the possibility of exclusion and injustice, threats to biological, social, and cultural diversity, and social and economic disparities18,22. The workings of algorithms, and the data on which they have been trained, must therefore be transparent and comprehensible, and their potential effects must be assessed on a variety of factors, including but not limited to human dignity, fundamental freedoms, gender equality, democracy, social, economic, political, and cultural processes, engineering and scientific procedures, animal welfare, the environment, and ecosystems. Overall, even though using nonhuman authors has the potential to considerably enhance scientific research, it is essential for scientists to be aware of the shortcomings and take steps to prevent bias and other limitations. Algorithms must be properly planned and executed in order to produce accurate and impartial findings, and researchers must be cognizant of potential negative consequences for workers and of the larger ethical ramifications of their use.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Sources of funding
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
Author contribution
B.J.: conceptualization; M.O.O.: manuscript preparation; C.M.V.S.: funding acquisition; B.J. and M.O.O.: investigation; M.O.O.: project administration; C.M.V.S.: resources; M.O.O.: software; M.O.O. and N.G.: supervision; B.J.: validation; M.O.O.: visualization; M.O.O., B.J., and C.M.V.S.: writing – original draft; M.O.O. and B.J.: writing – review and editing. All authors gave final approval of the manuscript for publication.
Conflicts of interest disclosure
The authors have no competing interests to declare that are relevant to the content of this article.
Research registration unique identifying number (UIN)
Name of the registry: not applicable.
Unique identifying number or registration ID: not applicable.
Hyperlink to your specific registration (must be publicly accessible and will be checked): not applicable.
Guarantor
Malik Olatunde Oduoye. College of Medical Sciences, Ahmadu Bello University Teaching Hospital, Shika, Kaduna State, Nigeria. E-mail: malikolatunde36@gmail.com, https://orcid.org/0000-0001-9635-9891.
Availability of data and material
Not applicable.
Provenance and peer-review
Not commissioned, externally peer-reviewed.
Acknowledgements
The authors acknowledge the guidance and mentorship of Mr. Malik Olatunde Oduoye, the West-African Regional Director at Oli Health Magazine Organization, Kigali, Rwanda (E-mail: malikolatunde36@gmail.com), and Professor Nikhil Gupta, MBBS, MS, MRCS, at Atal Bihari Vajpayee Institute of Medical Sciences and Dr Ram Manohar Lohia Hospital, New Delhi, India (E-mail: nikhilbinita@gmail.com).
Footnotes
Sponsorships or competing interests that may be relevant to content are disclosed at the end of this article.
Published online 15 June 2023
Contributor Information
Malik Olatunde Oduoye, Email: malikolatunde36@gmail.com.
Binish Javed, Email: binishjaved09@gmail.com.
Nikhil Gupta, Email: nikhilbinita@gmail.com.
Che Mbali Valentina Sih, Email: vche678@gmail.com.
References
- 1. Pettit RW, Fullem R, Cheng C, et al. Artificial intelligence, machine learning, and deep learning for clinical outcome prediction. Emerg Top Life Sci 2021;5:729–45.
- 2. Hulsen T. Literature analysis of artificial intelligence in biomedicine. Ann Transl Med 2022;10:1284.
- 3. Anaeto FC, Asiabaka CC, Ani AO, et al. The roles of science and technology in national development. ISJN 2016;3:38–43.
- 4. European Commission. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe. 2018. Accessed April 5, 2023. https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe
- 5. European Parliament. The ethics of artificial intelligence: issues and initiatives. European Parliamentary Research Service, Scientific Foresight Unit (STOA), PE 634.452. 2020;1:12.
- 6. Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA Network. Accessed March 27, 2023. https://jamanetwork.com/journals/jama/fullarticle/2801170
- 7. Guest post – AI and scholarly publishing: a view from three experts. The Scholarly Kitchen. Accessed March 28, 2023. https://scholarlykitchen.sspnet.org/2023/01/18/guest-post-ai-and-scholarly-publishing-a-view-from-three-experts/
- 8. Introducing ChatGPT. OpenAI. Accessed March 29, 2023. https://openai.com/blog/chatgpt
- 9. Hosseini M, Rasmussen LM, Resnik DB. Using AI to write scholarly publications. Acc Res 2023:1–9.
- 10. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature 2023;613:612.
- 11. Davis P. Did ChatGPT just lie to me? The Scholarly Kitchen. 2023. Accessed March 29, 2023. https://scholarlykitchen.sspnet.org/2023/01/13/did-chatgpt-just-lie-to-me/
- 12. Gupta A, Singla T, Chennatt J, et al. Artificial intelligence: a new tool in surgeon’s hand. J Educ Health Promot 2022;11:93.
- 13. Elfanagely O, Toyoda Y, Othman S, et al. Machine learning and surgical outcomes prediction: a systematic review. J Surg Res 2021;264:346–61.
- 14. Instructions for authors. JAMA. Accessed March 28, 2023. https://jamanetwork.com/journals/jama/pages/instructions-for-authors
- 15. Zielinski C, Winker M, Aggarwal R, et al.; WAME Board. Chatbots, ChatGPT, and scholarly manuscripts: WAME recommendations on ChatGPT and chatbots in relation to scholarly publications. January 20, 2023. Accessed January 28, 2023. https://wame.org/page3.php?id=106
- 16. Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Crit Care 2023;27:1–5.
- 17. Atkinson JG, Atkinson EG. Machine learning and health care: potential benefits and issues. J Ambul Care Manage 2023;46:114–20.
- 18. Sato JR, Moll J, Green S, et al. Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression. Psychiatry Res 2015;233:289.
- 19. Coan TG, Boussalis C, Cook J, et al. Computer-assisted classification of contrarian claims about climate change. Sci Rep 2021;11:22320.
- 20. Sun W, Nasraoui O, Shafto P. Evolution and impact of bias in human and machine learning algorithm interaction. PLoS One 2020;15:e0235502.
- 21. Gender shades: intersectional accuracy disparities in commercial gender classification. MIT Media Lab. Accessed March 29, 2023. https://www.media.mit.edu/publications/gender-shades-intersectional-accuracydisparities-in-commercial-gender-classification/
- 22. UNESCO. Recommendation on the ethics of artificial intelligence. The General Conference of the United Nations Educational, Scientific and Cultural Organization (UNESCO), meeting in Paris from 9 to 24 November 2021, at its 41st session; 1–21.
Associated Data
This section collects any data citations, data availability statements, or supplementary materials included in this article.
Data Availability Statement
Not applicable.