PLOS One. 2025 Aug 28;20(8):e0330416. doi: 10.1371/journal.pone.0330416

Generative AI and academic scientists in US universities: Perception, experience, and adoption intentions

Wenceslao Arroyo-Machado 1,*, Jinghuan Ma 1, Tipeng Chen 1, Timothy P Johnson 1,2, Shaika Islam 1, Lesley Michalegko 1, Eric Welch 1
Editor: Kingsley Okoye
PMCID: PMC12393709  PMID: 40875655

Abstract

The integration of generative Artificial Intelligence (AI) into academia has sparked interest and debate among academic scientists. This paper explores the early adoption and perceptions of US academic scientists regarding the use of generative AI in teaching and research activities. To do so, this analysis focuses exclusively on STEM fields due to their high exposure to rapid technological advancements. Drawing from a nationally representative survey of 232 respondents, we examine academic scientists’ attitudes, experiences, and intentions regarding AI adoption. Results indicate that 65% of respondents have utilized generative AI in teaching or research activities, with 20% applying it in both areas. Among those currently using AI, 84% intend to continue its application, indicating a high level of confidence in its perceived benefits. AI is most frequently used in teaching to develop pedagogical materials (51%) and in research for writing, reviewing, and editing tasks (40%). Despite concerns about misinformation, with 78% of respondents indicating it as their top concern regarding AI, there is broad recognition of AI’s potential impact on society. Most academic scientists have already integrated AI into their academic activities, demonstrating adoption that is optimistic yet cautious in light of perceived risks. Furthermore, there is strong support for academic-led regulation of AI, highlighting the need for responsible governance to maximize benefits while minimizing risks in educational and research settings.

Introduction

Research in Artificial Intelligence (AI) has a storied past, beginning when the term was first coined at the 1956 Dartmouth Summer Research Project on Artificial Intelligence [1]. Fast forward to late 2022, the release of ChatGPT (https://openai.com/blog/chatgpt) marked a significant milestone in AI, democratizing access to advanced generative AI technologies. Generative AI refers to a subset of artificial intelligence that is designed to create new content, such as text, images, audio, video, and even code, based on patterns learned from large datasets [2]. Unlike traditional AI systems, which function based on pre-defined rules, generative AI systems demonstrate greater functional autonomy and ability to perform intelligent behaviors in a way that mimics human creativity by learning patterns and behaviors from data [3]. The rise of generative AI, particularly through chatbots powered by Large Language Models (LLMs), further exemplifies the impact of these technologies on various domains. The rapid development of chatbots beyond ChatGPT has sparked considerable interest [4]. The widespread adoption of these AI tools has the potential to alter numerous elements of society, including impacting higher education and scientific research [5,6]. The ability of generative AI to produce text that is virtually indistinguishable from human-written work promises to transform educational methodologies and research practices, offering new opportunities for institutions, educators, and students [7,8]. This rapid expansion has also raised important questions regarding risks such as bias, transparency, and social impact [9].

The significance of generative AI’s impact is particularly evident in Science, Technology, Engineering, and Mathematics (STEM) fields. The importance of these fields is underscored by the growing number of students pursuing STEM disciplines, a trend influenced by various factors and reflective of a country’s economic development [10]. Simultaneously, since 2015, US companies have been increasing their demand for AI-related expertise and are recruiting early-career AI researchers from academia [11,12]. This growth and demand highlight the importance of integrating AI into educational curricula and research methodologies to keep pace with technological advancements and opportunities. Understanding the perspectives and experiences of academic scientists in these fields is essential, as they can provide valuable technical input for AI’s ongoing development across various disciplines [13].

Given the rapid global adoption of generative AI, this article explores the early adoption behaviors and perceptions of US academic scientists related to teaching and research activities. According to innovation diffusion theory [14], early adopters differ from later adopters, often facing higher uncertainty and acting on different motivations. In this context, early adoption refers to the initial phase of AI integration before it becomes widespread in academia. While the study is centered on the United States, the insights can be considered reflective of broader international trends due to the influential role of US academia on a global scale [15]. To achieve these insights, we draw from a nationally representative survey of academic scientists, including assistant, associate, and full professors as well as non-tenure track researchers, across six disciplines—biology, chemistry, computer and information science engineering, civil and environmental engineering, geography, and public health—at randomly selected Carnegie-designated Research Extensive and Intensive (R1) universities. The study is designed as a descriptive and exploratory analysis of academic scientists’ reported attitudes, experiences, and uses of generative AI. The following aims were established to effectively address this main objective:

  1. To assess the general attitudes of academic scientists towards generative AI.

  2. To explore the perceptions and experiences of academic scientists regarding the use of AI in academic teaching and research.

  3. To examine how academic scientists’ adoption intention for teaching and research varies.

  4. To investigate academic scientists’ concerns and perceptions regarding the regulation of generative AI.

The next section presents a literature review on the use of AI in higher education. It explores how academic scientists from different disciplines perceive its application and reviews other survey-based research on this topic. The subsequent methodology section describes the methods and data collection process and is followed by a results section that presents the survey findings, detailing academic scientists’ perceptions, experiences, and intentions regarding generative AI in teaching and research. The discussion section synthesizes key insights and points out limitations of the study.

Literature review

The following review is structured around the benefits and risks of generative AI in academia, setting the stage for an exploratory analysis of scientists’ behaviors and perceptions.

Generative AI in academic education.

Artificial intelligence (AI) has long been an integral part of higher education at different levels, significantly predating the recent explosion in AI technologies such as ChatGPT, Microsoft Copilot or Elicit [16]. In its early applications in this context, AI was employed to streamline complex administrative tasks, enhancing efficiency in areas like admissions processes and predicting student retention rates, which are crucial for institutional planning [17,18]. Additionally, academic libraries began to leverage AI for automated cataloging, intelligent search tools, AI-driven reference services, and adaptive learning systems, improving accessibility and operational efficiency [19].

More recently, generative AI tools have introduced new possibilities and challenges across various domains in higher education [20]. AI is used to personalize learning through adaptive tutoring systems and to identify and address student learning obstacles [21,22]. Advocates suggest that generative AI-enhanced personalized education could improve learning outcomes, increase accessibility, reduce inequities [23–26], and enhance student interaction and engagement [27–29]. Moreover, AI is increasingly used to assist in assessment processes, making them more accurate and less time-consuming, and enabling automated grading that involves text generation [22,24,25]. Although these applications have relied on specialized and custom-designed applications that are not universally accessible, they are beginning to transform the higher education system by offering institutions and individuals substantial administrative and educational benefits. Educators recognize that the impact on education will vary by discipline and require changes in teaching methods [30].

Despite the perceived usefulness and improved ease of communication [31], there is insufficient understanding of generative AI’s long-term impacts on educational practices, particularly regarding sustained changes in teaching methods and learning outcomes [32], and of how academic scientists perceive and respond to these developments. These uncertainties are further complicated by significant, pre-existing concerns regarding the reliability and safety of AI systems in education, issues which have broadened and intensified following generative AI’s widespread adoption [33–35]. Notable issues include the risk of overreliance on AI and its effects on critical thinking [36,37], as well as AI’s potential to undermine creativity, hinder academic integrity, and reinforce existing inequalities due to biases in datasets, algorithmic opacity, and disparities in access to AI-driven tools [38]. Generative AI can also compromise academic standards through plagiarism [26] and generate misleading or fabricated information, thereby undermining trust [39]. Concerns also remain about fairness, responsibility, and the necessity for clear guidelines and training to ensure generative AI’s ethical and effective implementation [40,41]. While AI integration offers benefits like improved accessibility and efficiency, it also raises fears about job obsolescence due to the evolving demand for new skills and competencies [42]. These concerns point to a growing need for human oversight to safeguard academic integrity and mitigate bias [36,43], as the adoption of generative AI in education raises significant challenges related to critical thinking, fairness, and equitable access to learning opportunities.

Generative AI in academic research.

Generative AI is also having a substantial impact on academic research. Messeri and Crockett [44] have outlined a typology of four different roles generative AI can play in research: oracle for study design, surrogate for data collection, quant for data analysis, and arbiter for peer review. Generative AI tools have sparked interest for their potential in various tasks along the research pipeline by facilitating the synthesis and understanding of scientific literature, generating and organizing ideas, providing targeted feedback, and analyzing vast amounts of data to uncover connections, thus streamlining research processes and mitigating some types of cognitive biases [45]. Additionally, AI models can improve the prediction of future discoveries, especially in areas with limited literature, by anticipating human predictions and identifying key researchers [46] or by acting as “an autonomous researcher” to produce new knowledge [47].

While many of the applications of generative AI to research activities appear promising, challenges and risks still loom large [5,48,49]. For example, the expansion of Large Language Models for broader use presents significant threats to research integrity, impacting its transparency, reliability, and authenticity [49–51]. This can be seen in how AI systems can introduce biases from their training data, potentially perpetuating hidden prejudices and undermining the integrity and authenticity of scientific knowledge [52]. As universities start to craft generative AI guidelines [53], academic scientists’ excitement about potential efficiency gains is balanced by concerns over risks such as misinformation, plagiarism, and falsified research [48,54]. For example, while generative AI can produce high-quality text suitable for publication, its limited capacity for research design and data analysis increases the risk of misinformation, fabricated findings, and academic integrity violations [55]. Moreover, concerns have been raised about who owns the AI-generated content [56], whether it maintains its integrity [57], and what the risks are to data privacy when sharing protected information [58]. These issues pose serious challenges that could affect the perceptions and behaviors of academic scientists, leading to caution in embracing these technologies and potentially slowing their integration into research practices.

Academic perceptions about and adoption of generative AI.

Understanding academic scientists’ perceptions of generative AI is crucial because of its potential to radically transform research and teaching landscapes in higher education, affecting vast segments of the US and international academic communities. Conceptual frameworks such as the Theory of Reasoned Action (TRA) [59], the Theory of Planned Behavior (TPB) [60], the Technology Acceptance Model (TAM) [61], the Innovation Diffusion Theory [62], and the Unified Theory of Acceptance and Use of Technology (UTAUT) [63] provide valuable lenses for examining technology acceptance. Cumulatively, they highlight the importance of personal beliefs and attitudes, including concerns [64], confidence, and readiness [61,65], as well as past experiences [65], in shaping technology acceptance. However, the implications of AI extend beyond adoption frameworks; AI’s integration into academic environments presents unprecedented opportunities and significant uncertainties, making it imperative to closely monitor how these changes unfold over time.

While educators and researchers recognize AI as a valuable tool for developing new ideas, streamlining workflows, and offering remote and personalized learning experiences [6668], they have also expressed ethical concerns regarding plagiarism, misinformation, and misidentification of AI-produced content as human work. The challenge of accurately detecting AI-generated text is significant, as current classifiers often misclassify human-written text as generated by AI and vice versa [69,70]. These risks affect various fields differently. For instance, in medical education, while there is optimism about AI’s potential to reduce administrative burdens and enhance diagnostic accuracy, concerns persist about data privacy, the potential for AI-generated medical content to be overly relied upon without human oversight, and the challenges of ensuring the accuracy and reliability of AI-assisted educational tools [71]. Researchers in population health and academic publishing have also voiced concerns over sensationalist media portrayals of generative AI, stressing the importance of better education and awareness raising to combat these misconceptions [72,73]. Additionally, AI and machine learning specialists have identified a need for more safety research and exhibit low levels of trust in certain organizations managing AI technologies [74]. Given the complex and uncertain consequences of AI integration into academia, systematic tracking and continuous evaluation of this transformation are essential to responsibly navigate the potential benefits and risks.

With its significant promise for advancing education and research, generative AI presents both compelling opportunities and serious concerns. Understanding how academic scientists weigh these perceived benefits against potential risks is essential for assessing whether adoption is likely to accelerate, slow down, or proceed cautiously. However, few studies have jointly addressed STEM academic scientists’ perceptions of, experiences with, and intentions to adopt generative AI in both education and research. To bridge this gap, this study develops a comprehensive approach to better delineate the current state and more accurately understand the adoption rationales of academic scientists. We believe such research is a prerequisite for tailoring educational methodologies, enhancing research practices, and addressing concerns regarding the regulation of generative AI.

Materials and methods

We make use of a unique database consisting of two merged sets of survey data from SciOPS (the Scientist Opinion Panel Survey): an intake survey administered in the spring of 2022 and 2023 to collect general socio-demographic data regarding SciOPS panelists, and a survey of panelists’ attitudes towards AI administered in late 2023 (hereafter, the AI survey). SciOPS is a platform to improve science communication between university academic scientists and the public by understanding scientist opinions on current topics relevant to the science community and to society. Detailed information regarding SciOPS and past surveys administered by this platform can be found on the SciOPS website [75].

This study employed a two-stage sampling design. At the first stage, SciOPS panel members were recruited from a full sample frame that includes approximately 18,500 PhD-level academic scientists employed by Carnegie-designated Research Extensive and Intensive (R1) universities across the US, representing six STEM fields: biology, chemistry, civil and environmental engineering, computer and information science engineering, geography, and public health. The SciOPS research team used probability sampling to randomly select R1 universities and collected the names and contact information of tenured and tenure-track faculty (i.e., assistant, associate, and full professors) and non-tenure track researchers with PhDs from each sampled academic department website. Fields and ranks were taken from scientists’ personal profiles on their institutional websites. Table S3 in S2 Appendix shows the number of randomly selected institutions by field.

A total of 1,365 eligible academic scientists consented to become SciOPS panel members, a recruitment rate of 7.5% calculated according to the American Association for Public Opinion Research (AAPOR) Recruitment Rate formula [76]. The intake survey was administered to all 1,365 members to capture their general socio-demographic information, including birth year, self-reported gender, citizenship, academic rank, and academic discipline.

In the second stage, 777 SciOPS panel members were randomly selected and invited to participate in this survey. The SciOPS research team designed the survey questions based on the literature on technology adoption among academic scientists; the questions underwent multiple rounds of review, revision, and internal pretesting by 10 SciOPS research team members. The full questionnaire is available in S3 Appendix.

The survey was administered online in English through the Nubis® system, an online software platform for administering questionnaires designed to protect the confidentiality of survey respondents. A pre-notification email was delivered on September 27, 2023 to notify the sample of a forthcoming survey invitation. We sent email messages with a formal survey invitation and a questionnaire hyperlink to each sampled individual on September 29. Three formal reminder messages were sent on October 4, October 11, and October 18. A final short appeal message was sent on November 2. We closed the survey on November 4. The survey obtained a total of 232 usable responses, representing an individual survey completion rate (COMR) of 29.9% (AAPOR RR4) and an AAPOR Cumulative Response Rate (CUMRR) of 2.2%. The CUMRR was calculated by multiplying the Recruitment Rate for the SciOPS panel (7.5%) by the Completion Rate for this survey (29.9%) and dividing by 100.
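As a point of reference, the response-rate arithmetic above can be reproduced in a few lines. This is a minimal illustrative check of the calculation as described, not code used in the study:

```python
# Illustrative check of the AAPOR-style cumulative response rate (CUMRR):
# the panel recruitment rate multiplied by the survey completion rate,
# divided by 100 to keep the result expressed in percent.
recr = 7.5   # SciOPS panel recruitment rate, in percent
comr = 29.9  # AI survey completion rate (COMR / RR4), in percent

cumrr = recr * comr / 100
print(f"CUMRR = {cumrr:.1f}%")  # -> CUMRR = 2.2%
```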

Both surveys were approved by the Institutional Review Board at Arizona State University (IRB Approval No. STUDY00012476). Data were weighted to account for probabilities of selection and post-stratified by gender, academic field, and rank to represent the population of academic scientists from which the full sample was initially recruited. The margin of sampling error for the AI survey estimates is ± 6.4 percentage points, based on a population of 18,504 and a 95% confidence level. Table 1 reports the weighted demographic characteristics of survey respondents.
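The reported margin of error is consistent with the conventional formula for a proportion at maximum variance (p = 0.5) combined with a finite population correction. The sketch below reconstructs the ±6.4 figure under those assumptions; the exact formula the authors used is not stated in the paper:

```python
import math

def margin_of_error(n: int, N: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion, with finite population correction."""
    se = math.sqrt(p * (1 - p) / n)       # standard error at maximum variance
    fpc = math.sqrt((N - n) / (N - 1))    # finite population correction
    return z * se * fpc

# Full AI survey: n = 232 respondents drawn from N = 18,504 scientists.
print(f"+/-{100 * margin_of_error(232, 18504):.1f} percentage points")
# -> +/-6.4 percentage points
```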

Table 1. Weighted descriptive statistics of the final sample.

Construct                  Variable                                              N     %
Self-reported Gender       Female                                                71    31
                           Male                                                  155   66
                           Prefer not to report                                  6     3
Field                      Biology                                               16    7
                           Chemistry                                             31    13
                           Computer and Information Science Engineering          28    12
                           Civil and Environmental Engineering                   63    27
                           Geography                                             90    39
                           Public Health                                         4     2
Rank                       Full Professor                                        76    33
                           Associate Professor                                   49    21
                           Assistant Professor                                   49    21
                           Non-tenure Track Researcher                           58    25
Self-reported Citizenship  Born a U.S. Citizen                                   135   58
                           Naturalized U.S. Citizen                              60    26
                           Non-U.S. citizen with a permanent U.S. resident visa (e.g., green card)  26  11
                           Non-U.S. citizen with a temporary U.S. resident visa  4     2
                           Prefer not to answer                                  7     3

Additionally, to assess how variations in question framing might influence academic scientists’ attitudes toward government regulation of generative AI, we embedded an experimental design within the survey. Respondents were randomly assigned to one of four versions of a key question, which varied in terms of the inclusion of a moderate response option and the presence or absence of ethical framing regarding potential risks. This design allowed us to examine how different presentations of the same regulatory dilemma influenced participants’ preferences.

To assess the representativeness of our sample, we also conducted a non-response bias analysis (see S1 Appendix for details). The analysis shows minimal differences between the completed sample and the initial sample frame of academic scientists, and no significant differences between our sample and the panel members invited to participate in the survey. These findings confirm that the dataset provides a reasonable representation of US-based academic scientists.

Results

In the following, we detail survey results on academic scientists’ perceptions of generative AI use and regulation, underscoring their experiences and intentions regarding its implementation in teaching and research.

General perceptions and practical use of generative AI in academia

Survey responses indicate that a majority of academic scientists (93%) believe that generative AI will have a significant impact on daily life in the United States (Fig 1). The potential effects of climate change also register as a major point of academic concern, albeit slightly less so (88%) than generative AI (though this difference falls within the margin of error), indicating a sense of urgency regarding environmental shifts. While global conflicts (78%), the proliferation of infectious diseases (77%), and the challenges of resource scarcity (72%) are acknowledged, they are perceived as less immediately impactful compared to technological and political shifts on the horizon. Notably, fusion energy, while revolutionary in concept, is seen by academic scientists as trailing behind in terms of imminent impact.

Fig 1. Academic scientists’ views on the extent to which different issues can cause changes to the way we live in the U.S.


Fig 2 presents academic scientists’ specific concerns regarding this technology. Foremost among these is concern over misinformation, which echoes the national discourse on political polarization, with 78% of respondents marking it as extremely or very concerning. Concerns extend to over-dependence on AI technologies and potential cybersecurity risks, noted as major issues by 60% and 51% of academic scientists, respectively. These are followed by the fear that generative AI may stifle human creativity, with half of the surveyed academic scientists expressing apprehension. In contrast, issues such as privacy loss and job displacement, though acknowledged, elicit less concern, with 44% and 24% of respondents, respectively, expressing extreme or very strong concern.

Fig 2. Academic scientists’ level of concern over potential threats posed by generative AI.


Despite these concerns, there is a noteworthy trend among academic scientists integrating generative AI into their core activities: teaching and research. A sizable 65% of academic scientists have personally used generative AI to some extent. Specifically, 40% of surveyed academic scientists report using generative AI in teaching, research, or both, demonstrating early adoption, defined here as the integration of generative AI into academic activities within roughly one year of the release of ChatGPT (based on GPT-3.5) in late 2022. In contrast, 25% utilize generative AI for alternative functions not directly related to their primary roles, and 35% have not engaged with AI personally. Diving deeper, 20% of academic scientists report using generative AI in both teaching and research. This is juxtaposed with 11% who use it exclusively for research and a smaller 9% who apply it solely to teaching.

Occasional use of generative AI for teaching, with persistent continuation among adopters

In terms of its adoption for education, generative AI is experiencing early-stage integration by academic scientists in teaching roles, with usage patterns reflecting cautious exploration rather than full integration (Fig 3). The data show that a majority engage with generative AI tools occasionally, reserving them for specific tasks rather than regular use. This tentative approach suggests an awareness of the potential and limitations of current generative AI technologies in educational settings. Among those integrating generative AI into their pedagogy, its most common application lies in the development of teaching materials, with 51% relying on these tools for this purpose. Generative AI is also employed in crafting student assignments (38%) and conducting interactive classroom activities (37%), though less frequently for direct student interactions such as mentoring (17%) or critical tasks like grading (11%) or examinations (8%). The restrained use in these areas may point to a preference for human oversight where subjective judgment and personalized feedback are valued.

Fig 3. Frequency and types of academic scientists’ use of generative AI for teaching activities.


Regarding future intentions to use generative AI in teaching, the distinction between current users and non-users is stark. Almost all academic scientists who have integrated generative AI into their teaching methods (97%) anticipate continuing its use, underscoring a high level of satisfaction and perceived value. In contrast, among those who have not yet employed generative AI in their teaching practice, skepticism is prevalent: 80% express reservations about adopting it, with 45% of non-users expressing no intention to use it and 35% remaining undecided. Only 20% of current non-users express openness to future use; within this group, 85% (i.e., 17% of all non-users) state they are likely to adopt it, while 15% (i.e., 3%) are certain they will incorporate it into their teaching.

Similar perspectives and intentions for generative AI use in research

In the realm of research, the adoption of generative AI exhibits trends similar to its use in educational activities (Fig 4). Nearly half (46%) of the researchers surveyed employ generative AI less than once a month, indicating a tentative exploration, and about a quarter (24%) use it on a weekly basis. Among academic scientists who use generative AI in research, 40% report finding value in its applications for writing, reviewing, and editing. Significantly, generative AI’s role in data analysis (32%) and especially in research conceptualization (28%) marks a shift, indicating its potential not just in routine tasks but in critical thinking and idea generation phases. Moreover, generative AI’s application extends to administrative responsibilities like funding acquisition, demonstrating its utility in streamlining research workflows and relieving administrative burden. Its application in visualization and other specialized tasks further highlights generative AI’s versatility and usefulness in the research process.

Fig 4. Frequency and types of academic scientists’ use of generative AI for research activities.


Building on the observed patterns of generative AI use in research, future intentions among academic scientists also further highlight the divide between current users and non-users. Those with experience using generative AI in their research overwhelmingly endorse its continued application, with 84% affirming they will likely or definitely continue integrating generative AI into their work. This strong positive sentiment contrasts sharply with the skepticism seen among those yet to adopt generative AI for research, where 78% express reservations. Within this group, a notable 41% are decided against using generative AI, and 37% remain uncertain about its potential benefits. Conversely, only a minority are open to the possibility, with 18% leaning towards probable use and a mere 5% convinced of its definite future use. This dichotomy suggests that firsthand experience with generative AI may influence perceptions of its utility, with a divide emerging between adopters and skeptics regarding its role in research activities.

The university as the main regulatory agent, and concerns about generative AI regulation

Following the exploration of academic scientists’ perspectives and their usage of generative AI, attention shifts to the question of regulatory oversight. Within this domain, there emerges a clear consensus favoring academic institutions as the primary regulators of generative AI. A majority, 88% of academic scientists, underscore the pivotal role that universities should assume, distinguishing them from other key players. National professional associations, journal editors, and publishers also receive notable endorsement, with 74%, 65%, and 61% support respectively, highlighting the broader academic community’s involvement in regulation. In contrast, when it comes to entities outside the academic sphere, over half of all academic scientists believe that both the Federal Government (56%) and supranational organizations (54%) should play substantial roles. This distribution of responsibilities suggests a preference for an academic-led approach to generative AI regulation, complemented by governmental and international oversight, to effectively address the intricacies of generative AI’s integration into academic settings. Building on this preference for an academia-centered regulatory approach, there is also strong support among academic scientists for federal involvement in the oversight of generative AI’s use, with 71% expressing strong or somewhat strong approval.

To assess how variations in question wording might influence academic scientists’ attitudes toward government regulation of generative AI, respondents were randomly assigned to one of four experimental versions of a key survey question:

  1. “Which of the following is closest to your current point of view about the role of the government in regulating generative AI?”, with the responses “Federal government should not regulate generative AI technology, leaving its regulation to private industry” or “Federal government should ban generative AI deployment until comprehensive research on its potential consequences is conducted”.

  2. Identical wording to version 1, but with an additional moderate response option: “Federal government should regulate generative AI technology, however, it should momentarily step back from doing so until comprehensive research on its potential consequences is conducted”.

  3. Explicit ethical framing: “Considering the possible ethical and unintended consequences generative AI may have, which of the following is closest to your current point of view about the role of the government in regulating generative AI?”, with the same two responses as version 1.

  4. The same ethical framing as version 3, plus the moderate response option included in version 2.
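Read together, the four versions form a 2 × 2 factorial design crossing ethical framing with the availability of a moderate response option. The sketch below is a hypothetical encoding of that design with a simple reproducible assignment rule; the survey platform’s actual randomization mechanism is not described in the paper, and the names used here (VERSIONS, assign_version) are illustrative:

```python
import random

# The four question versions as a 2x2 design: ethical framing (yes/no)
# crossed with the presence of a moderate response option (yes/no).
VERSIONS = {
    1: {"ethical_framing": False, "moderate_option": False},
    2: {"ethical_framing": False, "moderate_option": True},
    3: {"ethical_framing": True,  "moderate_option": False},
    4: {"ethical_framing": True,  "moderate_option": True},
}

def assign_version(respondent_id: str, seed: str = "ai-survey-2023") -> int:
    """Reproducibly assign a respondent to one of the four question versions."""
    rng = random.Random(f"{seed}:{respondent_id}")  # deterministic per respondent
    return rng.choice(sorted(VERSIONS))

print(assign_version("R-0001"))  # prints one of 1-4, stable across runs
```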

The survey went a step further to dissect academic scientists’ attitudes towards government regulation of generative AI through this experimental approach (Fig 5). The aim was to evaluate how the phrasing of the question, particularly the mention of ethical and unintended consequences of generative AI, and the availability of a moderate response option would affect attitudes towards federal regulation. The experiment indicates that the format in which response options were structured significantly influenced participants’ responses. Academic scientists’ opinions on generative AI governance are complex: a considerable number support some level of federal oversight, yet their preferences are nuanced and often swayed by the contextual presentation of potential generative AI risks and regulatory options.

Fig 5. Methodological experiment results: (a) Current opinion about the role of the government in regulating generative AI; (b) Current opinion about the role of the government in regulating generative AI, considering the possible ethical and unintended consequences it may have.


Confidence intervals and sample sizes for each subsample are as follows: (a1) n = 77, CI = ±11.1%; (a2) n = 50, CI = ±13.8%; and (b1 and b2) n = 52, CI = ±13.6%.
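These subsample intervals are reproducible with the same maximum-variance formula and finite population correction assumed earlier; the snippet below is an illustrative check rather than the authors’ stated method:

```python
import math

def margin_of_error(n: int, N: int = 18504, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion, with finite population correction."""
    return z * math.sqrt(p * (1 - p) / n) * math.sqrt((N - n) / (N - 1))

for label, n in [("a1", 77), ("a2", 50), ("b1 and b2", 52)]:
    print(f"({label}) n = {n}, CI = +/-{100 * margin_of_error(n):.1f}%")
# -> (a1) n = 77, CI = +/-11.1%
# -> (a2) n = 50, CI = +/-13.8%
# -> (b1 and b2) n = 52, CI = +/-13.6%
```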

Our analysis reveals academic scientists’ nuanced preferences regarding government regulation of generative AI technologies (Fig 5). When presented with a middle-ground response option about the extent of governmental oversight, a clear majority favored this moderate stance (76% and 59% for the two question versions), demonstrating a shift away from more polarized positions. Specifically, the inclination to outright ban generative AI dropped dramatically (from 50% to 8% and from 55% to 28% across the two versions) when this middle option was available. Similarly, the preference for no regulation also decreased (from 44% to 14% and from 38% to 9%), indicating a general consensus towards a balanced regulatory approach. Moreover, when questions explicitly highlighted the potential threats posed by generative AI, support for temporary bans by the Federal Government until thorough research is completed increased, rising from 50% to 55% without a middle response option and from 8% to 28% with one. Additionally, the absence of a moderate option slightly increased the likelihood of respondents choosing not to answer (from 2% to 6%, and from 4% to 7% in the respective question versions). These findings underscore the complexity of academic scientists’ attitudes towards generative AI regulation, leaning towards a preference for moderate, well-considered policy approaches that balance potential risks and benefits while awaiting further research.

Discussion

The emergence of ChatGPT has stimulated new discussion on the adoption of AI-assisted technology in different sectors. This paper contributes to the understanding of university academic scientists’ perceptions of the use and regulation of generative AI, as well as their experience using it in teaching and research activities. Drawing from a nationally representative survey of 232 respondents from STEM disciplines, the study reveals that 65% of academic scientists have incorporated generative AI into teaching or research, with 20% actively using it in both areas. Generative AI is predominantly used in educational settings for developing pedagogical materials (51%) and in research primarily for writing, reviewing, and editing tasks (40%). Despite widespread adoption, concerns persist, particularly about misinformation, identified by 78% of respondents as their primary apprehension. The study further indicates a cautious yet optimistic approach among scientists, with 97% of teaching users and 84% of research users intending to continue its use, demonstrating substantial confidence in its potential benefits. Moreover, regulatory attitudes reveal a pronounced preference (88%) for academic-led oversight of AI integration, reflecting skepticism about exclusively governmental regulation.

This survey’s findings align closely with previous research on artificial intelligence in educational and research contexts. Earlier studies already recognized AI’s potential in streamlining administrative tasks, facilitating adaptive learning methods, and enhancing personalized education—factors consistently noted even prior to the widespread availability of generative AI [77]. Similar to results presented here, earlier research has highlighted AI’s beneficial role in automating text-based academic tasks such as content generation and manuscript preparation, underlining substantial perceived utility among academics [78,79]. Additionally, respondents’ concerns regarding misinformation and the fabrication of credible yet inaccurate bibliographic citations confirm issues previously documented in empirical assessments of generative AI tools, where substantive citation errors and misinformation have presented persistent challenges [51,52]. Moreover, consistent with previous survey-based studies, academic scientists demonstrate cautious optimism toward generative AI, balancing enthusiasm for potential efficiency improvements against the risks identified related to academic integrity and ethical standards in scholarly publishing [52]. This continuity with established scholarship indicates that the perceptions documented among STEM academic scientists echo broader, ongoing dialogues around the ethical and practical risks inherent to AI integration.

Notably, this survey also reveals important divergences from prior studies, particularly those involving student populations. Previous literature indicated predominantly positive student attitudes towards generative AI, emphasizing its capacity to improve academic engagement and personalized learning [29]. In contrast, academic scientists surveyed here report comparatively lower rates of generative AI use in activities involving direct interpersonal interaction or subjective evaluation, such as mentoring (17%), grading (11%), or conducting examinations (8%), indicating a selective rather than comprehensive adoption pattern. This divergence suggests a critical distinction between student enthusiasm—driven by immediate educational benefits—and the more cautious, responsibility-driven attitudes of academic scientists, who must balance innovation with preserving academic rigor and ethical standards. Furthermore, unlike earlier optimistic predictions regarding widespread chatbot adoption, which emphasized educational benefits such as personalized learning experiences, skill development, and substantial pedagogical support for educators [80,81], this survey indicates more selective adoption among academics, concentrated predominantly on content creation and scholarly communication tasks rather than comprehensive integration across all academic functions.

While the study emphasizes misinformation concerns, other ethical implications warrant deeper exploration. The inherent bias in generative AI models, stemming from biased training data, significantly influences academic use by potentially perpetuating discriminatory outcomes, thus affecting fairness and equality within educational contexts [38]. Data privacy concerns also merit critical attention, especially regarding generative AI’s role in processing and sharing sensitive academic and research data. For instance, AI-driven qualitative analysis methods raise important questions about ethical standards for data management [58]. Additionally, generative AI poses significant challenges for plagiarism detection, as it can produce highly sophisticated academic content that is nearly indistinguishable from human writing [82]. This limits the effectiveness of traditional detection tools and highlights the need for more advanced identification methods or a reevaluation of current academic integrity standards.

Regarding generative AI regulation, the study identifies a complex regulatory landscape perceived by academic scientists, who strongly favor (88%) an academia-centered regulatory approach, primarily led by academic institutions, with complementary oversight by national professional associations (74%), journal editors (65%), and publishers (61%). Although a majority (71%) of respondents support some federal involvement, their preferences clearly emphasize cautious, evidence-based policies rather than outright governmental bans. Experimental survey data explicitly indicate that academic scientists’ regulatory positions vary depending on how AI-related risks are presented, reflecting context-dependent rather than absolute regulatory stances. Current regulatory practices already align with these academic preferences. Universities are actively developing internal AI governance frameworks [53], academic publishers have begun establishing clear editorial standards for AI-generated content [83], and international organizations such as UNESCO have advanced global governance principles to shape national AI policies [84].

While this study provides valuable insights into the adoption intentions of academic scientists regarding generative AI, several limitations must be acknowledged. First, although based on probability sampling, the findings reflect responses from academic scientists across the six STEM disciplines included in the study, which may not represent the broader academic community. Additionally, as with any cross-sectional study, the findings reflect the period of data collection (late 2023), capturing attitudes and experiences during the early adoption phase; perspectives may have since evolved. Nevertheless, the findings of the paper on early-stage adoption behavior and perceptions provide a substantive and valuable basis for informing next stage research design and data collection. Smaller percentage differences should be understood within the survey’s overall margin of error (±6.4%), though this does not affect the general interpretation of results, which point to trends in academic scientists’ attitudes and experiences. Although no significant differences were observed in responses by gender or academic rank, the final sample does show variation in disciplinary representation. Civil and environmental engineering was more represented, while biology and public health were less so. These differences may reflect varying levels of exposure or familiarity with generative AI across disciplines.

Furthermore, although the survey asked about gender, allowing us to examine data by gender, it did not inquire about other demographic characteristics. This is a notable limitation given that generative AI adoption and its impacts might vary across different demographic groups. The lack of race and ethnicity data is significant, especially considering how certain groups have faced disproportionate challenges within academia [85–88]. Additionally, while the survey examined generative AI’s impact on the work of academic scientists, it did not address experiences with generative AI hallucinations or the generation of inaccurate information, which is an emerging concern. Indeed, clear evidence of misuse and malpractice already exists. Generative AI has generated high-quality fraudulent papers that are difficult to detect [82], and it can produce factually incorrect responses, including fabricated bibliographic citations [51,89]. These limitations suggest that future research should explore generative AI’s effects across a wider range of academic disciplines and demographic groups, and more thoroughly investigate issues of accuracy and the reliability of AI-generated content. Such research is crucial for developing a comprehensive understanding of generative AI’s role in academia and ensuring its responsible integration.

Supporting information

S1 Appendix. Non-response bias analysis.

(DOCX)

S2 Appendix. Number of randomly selected institutions for sampling scientists.

(DOCX)

S3 Appendix. Survey instrument: Generative AI uses and impacts in academic research and education.

(DOCX)


Data Availability

The dataset from the study, which includes the complete set of survey questions and corresponding answer options, is available at the Roper Center for Public Opinion Research at Cornell University (https://doi.org/10.25940/ROPER-31122347).

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1.Solomonoff RJ. The time scale of artificial intelligence: reflections on social effects. Lindsay RK, editor. HSM. 1985;5(2):149–53. doi: 10.3233/hsm-1985-5207 [DOI] [Google Scholar]
  • 2.Generative AI. Cambridge; 2025. Available from: https://dictionary.cambridge.org/dictionary/english/generative-ai [Google Scholar]
  • 3.Medaglia R, Gil-Garcia JR, Pardo TA. Artificial intelligence in government: taking stock and moving forward. Soc Sci Comput Rev. 2021;41(1):123–40. doi: 10.1177/08944393211034087 [DOI] [Google Scholar]
  • 4.Naveed H, Khan AU, Qiu S, Saqib M, Anwar S, Usman M, et al. A comprehensive overview of large language models. 2023. [cited 2024 Apr 24]. doi: 10.48550/ARXIV.2307.06435 [DOI] [Google Scholar]
  • 5.Von Krogh G, Roberson Q, Gruber M. Recognizing and utilizing novel research opportunities with artificial intelligence. Acad Manag J. 2023;66: 367–73. doi: 10.5465/amj.2023.4002 [DOI] [Google Scholar]
  • 6.Rawas S. ChatGPT: empowering lifelong learning in the digital age of higher education. Educ Inf Technol. 2023;29(6):6895–908. doi: 10.1007/s10639-023-12114-8 [DOI] [Google Scholar]
  • 7.Rudolph J, Tan S, Tan S. ChatGPT: bullshit spewer or the end of traditional assessments in higher education? JALT. 2023;6(1). doi: 10.37074/jalt.2023.6.1.9 [DOI] [Google Scholar]
  • 8.Kasneci E, Sessler K, Küchemann S, Bannert M, Dementieva D, Fischer F, et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn Individ Differ. 2023;103:102274. doi: 10.1016/j.lindif.2023.102274 [DOI] [Google Scholar]
  • 9.Bender EM, Gebru T, McMillan-Major A, Shmitchell S. On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Virtual Event Canada: ACM; 2021. p. 610–23. doi: 10.1145/3442188.3445922 [DOI] [Google Scholar]
  • 10.Bacovic M, Andrijasevic Z, Pejovic B. STEM education and growth in Europe. J Knowl Econ. 2021;13(3):2348–71. doi: 10.1007/s13132-021-00817-7 [DOI] [Google Scholar]
  • 11.Kwok R. Junior AI researchers are in demand by universities and industry. Nature. 2019;568(7753):581–3. doi: 10.1038/d41586-019-01248-w [DOI] [PubMed] [Google Scholar]
  • 12.Acemoglu D, Autor D, Hazell J, Restrepo P. Artificial intelligence and jobs: evidence from online vacancies. J Labor Econ. 2022;40(S1):S293–340. doi: 10.1086/718327 [DOI] [Google Scholar]
  • 13.Jordan MI. Artificial intelligence—The revolution hasn’t happened yet. Harv Data Sci Rev. 2019. [cited 2024 Apr 24]. doi: 10.1162/99608f92.f06c6e61 [DOI] [Google Scholar]
  • 14.Rogers EM. Diffusion of innovations. 5th ed. New York, London, Toronto, Sydney: Free Press; 2003. [Google Scholar]
  • 15.Teichler U. Global perspectives on higher education: conceptual frameworks, comparative perspectives, empirical findings. Higher Education and the World of Work. BRILL; 2009. p. 331–3. doi: 10.1163/9789087907563 [DOI] [Google Scholar]
  • 16.Chen L, Chen P, Lin Z. Artificial intelligence in education: a review. IEEE Access. 2020;8:75264–78. doi: 10.1109/access.2020.2988510 [DOI] [Google Scholar]
  • 17.Vohra R, Narayan Das N. Intelligent decision support systems for admission management in higher education institutes. IJAIA. 2011;2(4):63–70. doi: 10.5121/ijaia.2011.2406 [DOI] [Google Scholar]
  • 18.Dennis MJ. Artificial intelligence and recruitment, admission, progression, and retention. Enroll Mgmt Rep. 2018;22(9):1–3. doi: 10.1002/emt.30479 [DOI] [Google Scholar]
  • 19.Talley NB. Imagining the use of intelligent agents and artificial intelligence in academic law libraries. Law Libr J. 2016;108:383–401. [Google Scholar]
  • 20.Sætra HS. Generative AI: Here to stay, but for good? Technol Soc. 2023;75:102372. doi: 10.1016/j.techsoc.2023.102372 [DOI] [Google Scholar]
  • 21.Steenbergen-Hu S, Cooper H. A meta-analysis of the effectiveness of intelligent tutoring systems on college students’ academic learning. J Educ Psychol. 2014;106(2):331–47. doi: 10.1037/a0034752 [DOI] [Google Scholar]
  • 22.Hussain M, Zhu W, Zhang W, Abidi SMR, Ali S. Using machine learning to predict student difficulties from learning session data. Artif Intell Rev. 2018;52(1):381–407. doi: 10.1007/s10462-018-9620-8 [DOI] [Google Scholar]
  • 23.Luckin R, Holmes W. Intelligence unleashed: an argument for AI in education. UCL Knowl Lab Lond UK. London, UK: UCL Knowledge Lab; 2016. Available from: https://www.pearson.com/content/dam/corporate/global/pearson-dot-com/files/innovation/Intelligence-Unleashed-Publication.pdf [Google Scholar]
  • 24.Hinojo-Lucena F-J, Aznar-Díaz I, Cáceres-Reche M-P, Romero-Rodríguez J-M. Artificial intelligence in higher education: a bibliometric study on its impact in the scientific literature. Educ Sci. 2019;9(1):51. doi: 10.3390/educsci9010051 [DOI] [Google Scholar]
  • 25.Chen X, Xie H, Zou D, Hwang G-J. Application and theory gaps during the rise of artificial intelligence in education. Comput Educ Artif Intell. 2020;1:100002. doi: 10.1016/j.caeai.2020.100002 [DOI] [Google Scholar]
  • 26.Chan CKY, Hu W. Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. Int J Educ Technol High Educ. 2023;20(1). doi: 10.1186/s41239-023-00411-8 [DOI] [Google Scholar]
  • 27.Alemdag E. The effect of chatbots on learning: a meta-analysis of empirical research. J Res Technol Educ. 2023;57(2):459–81. doi: 10.1080/15391523.2023.2255698 [DOI] [Google Scholar]
  • 28.Hwang G-J, Chang C-Y. A review of opportunities and challenges of chatbots in education. Interact Learn Environ. 2021;31(7):4099–112. doi: 10.1080/10494820.2021.1952615 [DOI] [Google Scholar]
  • 29.Wu R, Yu Z. Do AI chatbots improve students learning outcomes? Evidence from a meta‐analysis. Brit J Educational Tech. 2023;55(1):10–33. doi: 10.1111/bjet.13334 [DOI] [Google Scholar]
  • 30.Bower M, Torrington J, Lai JWM, Petocz P, Alfano M. How should we change teaching and assessment in response to increasingly powerful generative Artificial Intelligence? Outcomes of the ChatGPT teacher survey. Educ Inf Technol. 2024. [cited 2024 Jul 23]. doi: 10.1007/s10639-023-12405-0 [DOI] [Google Scholar]
  • 31.Kim J, Merrill K, Xu K, Sellnow DD. My teacher is a machine: understanding students’ perceptions of AI teaching assistants in online education. Int J Hum–Comput Interact. 2020;36(20):1902–11. doi: 10.1080/10447318.2020.1801227 [DOI] [Google Scholar]
  • 32.Chiu TKF. The impact of Generative AI (GenAI) on practices, policies and research direction in education: a case of ChatGPT and Midjourney. Interact Learn Environ. 2023;32(10):6187–203. doi: 10.1080/10494820.2023.2253861 [DOI] [Google Scholar]
  • 33.Dietterich TG, Horvitz EJ. Rise of concerns about AI. Commun ACM. 2015;58(10):38–40. doi: 10.1145/2770869 [DOI] [Google Scholar]
  • 34.Sharma RC, Kawachi P, Bozkurt A. The landscape of artificial intelligence in open, online and distance education: promises and concerns. Asian J Distance Educ. 2019;14:1–2. [Google Scholar]
  • 35.Gruenhagen JH, Sinclair PM, Carroll J-A, Baker PRA, Wilson A, Demant D. The rapid rise of generative AI and its implications for academic integrity: students’ perceptions and use of chatbots for assistance with assessments. Comput Educ Artif Intell. 2024;7:100273. doi: 10.1016/j.caeai.2024.100273 [DOI] [Google Scholar]

Decision Letter 0

Kingsley Okoye

10 Feb 2025

Dear Dr. Arroyo-Machado,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Mar 27 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript.

Kind regards,

Kingsley Okoye

Academic Editor

PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. When completing the data availability statement of the submission form, you indicated that you will make your data available on acceptance. We strongly recommend all authors decide on a data sharing plan before acceptance, as the process can be lengthy and hold up publication timelines. Please note that, though access restrictions are acceptable now, your entire data will need to be made freely accessible if your manuscript is accepted for publication. This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If you are unable to adhere to our open data policy, please kindly revise your statement to explain your reasoning and we will seek the editor's input on an exemption. Please be assured that, once you have provided your new statement, the assessment of your exemption will not hold up the peer review process.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: I Don't Know

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes

Reviewer #2: Yes

**********

Reviewer #1:  Review PLOS1

Overall

The authors present an interesting manuscript exploring the attitudes and experiences of academic scientific researchers toward generative AI. The article provides insightful and valuable contributions to this emerging area of study. However, many aspects require further clarification and refinement to enhance the manuscript's overall readability and impact.

Abstract

  • How many respondents did the study include?

• Considering the results you wrote in your abstract, what do you mean by the term cautious?

• STEM participants only: how did this influence your results?

Introduction

• What is particularly evident in the STEM fields? Does the field of AI really cause the growing number of students in this field? (Line 60)

• You did not define generative AI. Only that it can produce human-like texts. But generative AI is more than that, right?

• This article explores the early adoption (Line 74). Why do you specifically mention the term early adoption, and not just adoption?

• In section 4… (line 95): does this section contain the results of the scoping review and the survey?

Literature review

• Could you clarify how the literature review was conducted? Specifically, what search strings did you use, and what were the criteria for determining an article's eligibility for inclusion?

• References 14 and 17 should be placed at the end of their respective sentences for clarity and alignment with academic citation standards.

• In the statement, "Academic libraries have utilised AI to optimise their services to enhance accessibility and efficiency" (Line 106), could you elaborate on what specific services you are referring to?

• Why is automated grading categorized as generative AI rather than simply AI? (Line 119)

• You wrote: "AI's potential to undermine creativity, hinder academic integrity, and exacerbate existing inequalities." How does AI contribute to exacerbating inequalities? Providing examples or further explanation would clarify this point.

• A reference is missing after the statement: "The unique capabilities and widespread adoption of generative AI have broadened and intensified these concerns."

• In the third paragraph of the literature review, there appears to be inconsistent use of the terms "AI" and "generative AI." Could you clarify whether these terms are being used interchangeably or if there is a specific reason for the distinction?

• You wrote: "... is balanced by concerns over risks such as misinformation, plagiarism, and falsified research. For example, while generative AI can produce high-quality text suitable for publication, it has limited capacity for research design and data analysis" (Line 159). How does this example relate to the concerns mentioned, such as misinformation, plagiarism, and falsified research?

• What does "the previous review” mean in Line 168?

• In "... benefits and challenges of generative AI" (Line 168), if you are referring to the previous paragraph, it seems the text pertains to AI in general rather than generative AI specifically.

• You wrote: "These include plagiarism, misinformation, and misidentification of AI-produced content as human work." and "For instance, in medical education, while there is optimism about AI's potential to reduce administrative burdens and enhance diagnostic accuracy, concerns persist about data privacy, the loss of personal contact with patients, and the adequacy of current AI technologies remain" (Lines 174-181). While your example in medical education provides meaningful insights, it does not directly align with the specific issues highlighted in the first sentence, such as plagiarism, misinformation, and the misidentification of AI-produced content. To ensure coherence, consider revising the example to better reflect these concerns or modifying the first sentence to encompass broader challenges associated with AI applications.

  • Why do you not focus on generative AI rather than AI in general in paragraph 8 (Lines 198–208), as your research is also focused only on generative AI?

  • I think the sentence "To bridge this gap" (line 214) should be a subordinate clause.

Material and methods

  • Most importantly: the study cannot be repeated. The reader has no idea what the survey entailed; information is missing. For reproducibility, provide the exact questions and corresponding answer options that were explicitly asked during the survey.

• When mentioning SciOPS for the first time, write out its full name and provide a brief explanation of what it is, instead of just using the abbreviation. This helps readers unfamiliar with the term understand its relevance.

• The reference style for https://www.sci-ops.org/ is incorrect. Ensure it follows the required citation format for the journal or conference submission guidelines.

• The sentence on line 232 repeats the list of STEM fields: "six fields of STEM academic scientists biology, chemistry, civil and environmental engineering, computer and information science engineering, geography, and public health." Simplify it or rephrase it to avoid redundancy.

• The term "AAPOR" should be fully spelt out and briefly explained when first mentioned. For example, "AAPOR (American Association for Public Opinion Research)" if that is the intended meaning.

• The use of "etc." on line 239 is vague. Specify what additional items or categories it includes to provide clarity.

• These lines appear to be part of the results section. Move them to the appropriate section to maintain logical flow and structure.

• For clarity and compliance, include the Institutional Review Board (IRB) approval number or ID after the sentence: "Both surveys were approved by the Institutional Review Board at Arizona State University."

• The sentence starting with "The results indicate..." (lines 275–278) belongs in the results section rather than its current location. Relocate it to ensure consistency and proper organisation.

Results

  • All in all, several sentences seem to present opinions/conclusions rather than findings.

• Ensure consistent use of "generative AI" and "AI" throughout the results section. Often, "AI" is used where "generative AI" would be more appropriate.

• The results would benefit from an analysis of differences in attitudes and concerns regarding generative AI among academic scientists across various STEM disciplines. Could you explore and include this aspect in your findings?

• The sentence, "Building on the understanding that generative AI will significantly impact daily life in the United States" (line 300), is an assumption and should not appear in the results section. Replace it with a neutral, evidence-based statement or move it to the discussion section if it is necessary for interpretation.

• Define what qualifies someone as an "early adopter" of AI. Specify the criteria or timeframe, as the current mention in line 318 leaves this unclear.

• In the sentence, "20% of academic scientists report using AI in both teaching and research, illustrating a robust integration into the academic environment" (line 321), consider whether "robust" is the right term.

• The statement, "Yet, there is a small segment, 20%, considering its future use, with 17% likely and 3% certainly planning to incorporate AI into their teaching," is unclear. Specify 17% and 3% of what. This follows a prior sentence mentioning 80% expressing reservations, which makes the proportions ambiguous. Rephrase for clarity and logical flow.

• You stated: "where 40% of users find value" (Line 358) and earlier wrote: "a substantial 40% are employing AI specifically within the domains of teaching and research" (Line 317). Is this referring to the same 40% of participants, or does the latter imply that 40% of the earlier 40% find value in AI? Please clarify this distinction for consistency.

• You wrote: "This distribution of responsibilities suggests a preference for an academic-led approach to AI regulation, complemented by governmental and international oversight, to effectively address the intricacies of AI's integration into academic settings." (Line 393-396). This is an insightful observation, but it remains abstract. Could you elaborate on what specific actions or frameworks academics, governments, or international bodies could implement to address these intricacies? Adding 1-2 sentences in the discussion section would strengthen your argument.

• You wrote: "revealing that the way questions are framed significantly influences their responses." (Line 414) It seems the responses were not influenced by how questions were framed but rather by the way answers were provided. This distinction is important. Please clarify the statement accordingly.

• In your methodology, you wrote: "SciOPS panel members were recruited from a full sample frame that includes around 18,500 randomly selected PhD-level academic scientists employed by Carnegie-designated Research Extensive and Intensive (R1) universities across the US with appointments in six fields of STEM academic scientists – biology, chemistry, civil and environmental engineering, computer and information science engineering, geography, and public health." However, in the results section (Line 440), you stated that findings are limited to biologists, biochemists, and civil & environmental engineers. Why were computer engineers and geography scientists excluded from the analysis? Please explain and ensure this is clarified in the text.

• You mentioned: "while the survey presents data by gender, it did not inquire about race or ethnicity, preventing an assessment of differences across these dimensions." (Line 442). This is a valid limitation. However, did you also consider assessing variations across scientific professions within STEM disciplines? Since this is a study about generative AI in academia, understanding differences across various scientific fields could provide valuable insights. Please address this or justify its omission.

• You used the term "AI-generated content" (Line 450). Is this terminology distinct from "Generative AI"? If so, please clarify the distinction in the text to ensure consistency and precision in the use of terminology.

Conclusion

Line 485: by whom should this be done?

A real in-depth discussion is missing.

NB.

The data are ‘available’. Have they been anonymized? How was this done?

Reviewer #2:  The manuscript presents an insightful study on the integration of Generative AI in academia, focusing on US academic scientists’ adoption, perceptions, and regulatory concerns. The topic is highly relevant, and the study provides valuable data from a nationally representative survey of STEM faculty. The findings contribute to ongoing discussions on AI’s role in education and research. However, certain areas require further clarification and expansion, particularly regarding methodology, theoretical grounding, and the discussion of ethical concerns. Below are specific strengths and areas for improvement.

Areas for Improvement:

• The manuscript would benefit from engagement with established technology adoption models, such as the Technology Acceptance Model (TAM) or Unified Theory of Acceptance and Use of Technology (UTAUT).

• Integrating these frameworks would help contextualize the faculty’s perceptions and concerns regarding AI adoption.

• While the study is based on a nationally representative survey, key methodological details are missing, including:

Sampling technique and representativeness of different institution types.

Survey response rate and potential response bias.

Validation of survey questions to ensure reliability and relevance.

Providing more transparency in these areas would enhance the study’s rigor.

• While the study mentions misinformation concerns (78%), it could also discuss other ethical implications, such as:

Bias in AI models and how it affects academic use.

Data privacy issues in AI-generated content.

Plagiarism detection challenges in AI-assisted research.

• A deeper engagement with these ethical dimensions would strengthen the manuscript’s contribution.

• The study focuses on STEM faculty, which may limit generalizability to the humanities, social sciences, and international academic institutions.

• Expanding the discussion on how AI adoption may differ across disciplines would enhance the broader relevance of the findings.

• While the study states that faculty support academic-led AI regulation, it does not provide concrete policy recommendations or engagement with existing AI governance frameworks.

• Expanding this discussion with references to current AI policies in education would improve the impact of the study’s findings.

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy

Reviewer #1: Yes:  Dr. Sil Aarts & Dirk Steijger, MSc

Reviewer #2: Yes:  Kamran Aziz earned his M.S. degree in Computer Science and Technology from Nanjing University of Information Technology, Nanjing, China. He is presently pursuing his Ph.D. in Cyberspace Security at the School of Cyber Science and Engineering, Wuhan University, China. Kamran is an expert in Natural Language Processing (NLP) and focuses on cutting-edge applications such as Fake News Detection, Named Entity Recognition, Sentiment Analysis, and Data Summarization & Augmentation. His research aims to enhance the reliability and efficiency of information processing in digital media.

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/ . PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org . Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: review_plosone.docx

pone.0330416.s004.docx (14KB, docx)
PLoS One. 2025 Aug 28;20(8):e0330416. doi: 10.1371/journal.pone.0330416.r002

Author response to Decision Letter 1


28 Mar 2025

"The responses to the reviewers' comments are included in the attached PDF file.

Attachment

Submitted filename: Response_to_reviewers.pdf

pone.0330416.s006.pdf (331.4KB, pdf)

Decision Letter 1

Kingsley Okoye

5 May 2025

Dear Dr. Arroyo-Machado,

Please submit your revised manuscript by Jun 19 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript.

Kind regards,

Kingsley Okoye

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments :

We have now completed the review of your manuscript. The reviewers have suggested some areas for improvement in the manuscript. Please upload a revised manuscript and a point-by-point response to the reviewers' comments.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #3: All comments have been addressed

Reviewer #4: All comments have been addressed

Reviewer #5: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #3: Yes

Reviewer #4: Partly

Reviewer #5: No

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #3: Yes

Reviewer #4: No

Reviewer #5: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #3: Yes

Reviewer #4: Yes

Reviewer #5: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #3: Yes

Reviewer #4: Yes

Reviewer #5: No

**********

Reviewer #3: Grammatical Mistakes

"First, despite being based on probability sampling, the findings are restricted to biologists, biochemists, and civil & environmental engineers," states Line 440 on Page 19. names "biochemists" inaccurately because the study is about biology, not biochemistry. "Biologists, chemists, and civil & environmental engineers" is the correct phrase.

Inconsistencies in Style:

Use of "AI" vs. "generative AI": The term "AI" and "generative AI" are used interchangeably throughout the manuscript, however the differences are not always made clear. To fit with the study's aim, the sentence "A sizable 65% of academic scientists have personally used AI" on page 14, line 317, for instance, may be changed to "generative AI." Early in the article, think about standardising vocabulary or defining "AI" as "generative AI."

Overall Evaluation

The manuscript satisfies PLOS ONE's standards for clarity, accuracy, and lack of ambiguity: it is written in standard English and presented in an understandable manner. The material is accessible to the intended audience of academic scholars, the language is formal and precise, and the structure is logical. The majority of the text is free of significant typographical or grammatical problems, and the few that remain are small and simple to fix in revision.

Reviewer #4: The survey conducted here is based on a very small sample, which limits the manuscript's relevance, but this is compensated by a deep and instructive literature review that led me to recommend acceptance. Furthermore, the authors took great care in answering the concerns of the first round of review, giving more value to the manuscript.

In their responses the authors stated that the data were available, but access is still subject to approval and the data are not yet accessible through the article.

I would nevertheless make the following suggestions:

- Line 33: "Results indicate that 65% of scientists" — I would rather say "Results indicate that 65% of respondents".

- Line 210: I am not sure the "Chubb et al." reference is relevant.

- Line 253 & 559 & 562 :Science told me there is only one human race, thus differencing race and ethnicity seems abusive to me.

- Line 280: "The margin of sampling error for the AI survey estimates is +/-6.4 percentage points". The authors should state that this is for a population of 18,500 and a 95% CI. [A worked check follows this list.]

- Line 317: "The potential effects of climate change also register as a major point of academic concern, albeit slightly less so than generative AI, indicating a sense of urgency regarding environmental shifts." But these differences are within the margin of error.

- Line 448: "Participants were randomly assigned one of four variations of the question". It would seem interesting to provide readers with the margin of error for these sub-groups.

- Line 483: "Generative AI is predominantly used in educational settings for developing pedagogical materials (51%) and in research primarily for writing, reviewing, and editing tasks (40%)", but this may not hold once margins of error are taken into account.
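[Editor's note: a minimal worked check of the +/-6.4-point figure discussed in the line 280 suggestion above, assuming the standard margin of error for a proportion at the most conservative p = 0.5 with a 95% confidence level and a finite population correction; n = 232 respondents and the N = 18,500 sampling frame are taken from the article, and everything else is textbook arithmetic.]

    import math

    n, N = 232, 18_500   # respondents and sampling-frame size (from the article)
    p, z = 0.5, 1.96     # most conservative proportion; z-score for a 95% CI
    fpc = math.sqrt((N - n) / (N - 1))            # finite population correction
    moe = z * math.sqrt(p * (1 - p) / n) * fpc    # margin of error for a proportion
    print(f"+/-{moe * 100:.1f} percentage points")  # prints +/-6.4

The reported figure is reproduced only under these assumptions, which is the reviewer's point: the frame size and confidence level should be stated alongside it.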

I would also discuss the following limits of this article:

- Sample size: the small sample size leads to a large margin of error but, more importantly, reduces the relevance or significance of sub-group analyses.

- Not all STEM fields are widely represented (geographers and civil engineers represent 66% of the cohort). This can influence both the results and the value of the data presented here and should be discussed more in depth, since geographers appear in the literature to be earlier adopters. The significance for other fields might also be discussed (for example, biology, with 16 answers, might show very different behaviours that would not be highlighted by the present study; the literature might also help on this point).

- It would also be interesting to explore the results in more depth depending on the status of the respondents, whether full professors, associate professors, or doctoral researchers.

Reviewer #5: As a first-time reviewer for a manuscript that has gone through a previous round of review, I have approached my task by looking only at the latest revised version of the manuscript, to see how it reads. And, in doing so, I find the manuscript to be generally tackling an interesting question, but with various weaknesses in the way it does so that make it unclear to me what, precisely, the authors are trying to achieve with their paper. Briefly, my concerns consist of the following:

1. The manuscript is - in parts - in need of copy-editing, as there are several grammatical errors or unnatural linguistic phrases/constructions. Having a native speaker take a (possibly closer) look at it to catch these issues would likely help. Personally, I also find the use of various "power-adjectives" to be not only (occasionally) over the top and less suited for the academic context (e.g. that AI has had a "transformative impact" or represents "a significant leap forward" - both highly contestable - and contested - claims), but also occasionally a bit too reminiscent of the sort of language that chatbots tend to produce. As much as I can appreciate the subtle meta-commentary of using occasional chatbot-like phrases in a paper on the use of chatbots in academia, if this is intentional I think it should be flagged a bit more transparently, and if it's accidental then its effect is a bit of an unintentional (partial) undermining of the expected academic tone. Either way, I think it requires some reconsideration.

2. I find the literature review lacking. Both in the sense that there are what seem to me to be serious gaps in the referenced literature (e.g. not including Gebru's work on "stochastic parrots" - although it's both crucial to the area covered and has had a massive impact on the subsequent discussion - seems strange to me). But also in the sense that there seem to be two rather large structural issues in the section.

First, instead of discussing the benefits and risks of (gen)AI in academia, the authors go back and forth between the two, which makes for an awkward reading experience of "this sounds great, this sounds problematic, this sounds great, this sounds problematic". To be more concrete, the section has, after a brief introduction to AI in academia, a paragraph that starts with "However, despite perceived usefulness and improved ease of communication [33], there is insufficient understanding of generative AI’s long-term impacts on educational practices...". This is then followed by a paragraph starting with "As generative AI continues to transform education, its impact is equally profound in the field of research, revolutionizing research by potentially acting as "an autonomous researcher," transforming the way knowledge is produced across the research process", which is problematic both in its repetition of terms such as "transformative" and in its overreliance on "power"-adjectives as per point 1 above. But this is then, AGAIN, followed by a paragraph introducing various problems, beginning with the sentence "While many of the applications of generative AI to research activities appear promising, challenges and risks still loom large". This really needs some reconsideration of the structure of the themes and elements the authors wish to cover.

Second, and even more concerning, I don't find that the literature review section sufficiently sets up and qualifies the researchers' main focus, in the sense of clearly and unambiguously pointing to any important knowledge gaps. In the Introduction, the authors mention "Innovation Diffusion Theory", giving the impression this will constitute the theoretical framework for the remainder of the article, while the Lit Rev section presents this theory merely as one among others, with no indication as to why it constitutes a more relevant approach to their particular empirical undertaking. Further, the reliance on a rather broad range of references engenders some tension with the notion that the authors' own work has a sufficiently relevant value to warrant the research in the first place. I will say that I think this is primarily a writing issue: I believe that the research the authors have done is sufficiently important to warrant some attention and, ultimately, publication in a peer-reviewed journal. I just don't think that it is currently being set up in a sufficiently effective manner in the current version of their manuscript.

3. While I appreciate that the authors have based their work on the results of responses collected in '22 and '23, I think they need to more explicitly address the limitations this introduces, not least in terms of coming so soon on the heels of the public launch of ChatGPT, meaning that attitudes today - in 2025, with all the far more wide-ranging experiences most of us have with genAI by now - might be significantly different. This is problematic in e.g. how the results are presented - phrasings such as "The data shows a majority engage with generative AI tools occasionally, reserving them for specific tasks rather than regular use" can be seen as potentially misleading - better replaced by introducing a qualifier such as "a majority during the studied time period" or similar - granted more clunky but at least more clearly indicating the actual knowledge that the responses allow one to surmise.

4. There are various issues with the statistics in the manuscript. For instance, the authors seek to assess "disciplinary variation" by performing a separate one-way ANOVA "across each response to each survey item (n=81 total comparisons)". They note that no potentially significant differences survived Bonferroni correction (.05 / 81). But given that some of their disciplines contain very few responders (e.g. 4 only in Public Health, 16 in Biology, 31 in Chemistry, etc.), this rather harsh correction for a rather absurdly large number of multiple comparisons cannot really be considered to allow the derivation of much of any relevant conclusion. I appreciate that this is one minor step - to remove concern that there could be significant hidden disciplinary variations among their respondents - but its application suggests to me a rather rudimentary appreciation of how to apply inferential statistics to real-world data. Essentially, with only 4 and 16 participants representing Public Health and Biology, respectively, a one-way ANOVA is ALREADY problematic, but to then further do an 81-comparison Bonferroni correction in order to suggest that there is no meaningful disciplinary variation in responses is such a stretch as to be generally irrelevant. Note also that this is but one example. To take another example: including four variations in phrasing of a key question introduces similar concerns, not least if cross-cut with such categorizations as the respondents' field, rank, origin, etc. Basically, the small numbers of respondents found in each such categorization generally aren't large enough to permit much in the way of meaningful comparisons. [An illustrative sketch follows this review.]

5. The fact that there are essentially only descriptive (mostly frequency) statistics here is surprising to me. As much as I am of the opinion that researchers ought to do more exploratory analysis in general (as per Tukey), this ought typically to be in the service of either grounding subsequent inferential analysis or - perhaps - generating hypotheses for further (future) inferential testing. If the authors see themselves as falling within the exploratory-data-analysis camp, this is laudable and valuable, but then it needs to be more explicitly flagged as such throughout. In its current state, the manuscript reads more like a rather straightforward confirmatory-statistics paper, introducing aims, testing them, and drawing conclusions on the basis of the results found. But if the authors do see themselves as performing more confirmatory-style hypothesis-testing, then they would need to explore both descriptive and inferential statistics to a much greater degree (e.g. through basic measures of central tendency and variability, and inferential tools such as linear mixed models or whatever else the authors consider most appropriate), and further include statistical power calculations and the like, given their limited number of respondents. In general, I'd even go so far as to say that the repeated use of the term "analysis" isn't quite warranted by the actual research, which performs little actual analysis beyond presenting frequencies (percentages) of forced-choice responses to survey items. In this last respect, the manuscript's findings read more like an interesting news story than a full-fledged research analysis of collected data.

In all, while the topic is interesting and highly relevant, I see a need for quite major revisions before I think this would be ready to be published in PLOS One.
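[Editor's note: a minimal sketch of the thresholds described in point 4 above, using hypothetical Likert-style responses with the per-field group sizes quoted in the review; scipy's f_oneway is used only to illustrate how small groups interact with a 0.05/81 Bonferroni cutoff, not to reproduce the authors' analysis.]

    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(0)
    # Hypothetical 1-5 survey responses for three of the quoted group sizes
    # (4 in Public Health, 16 in Biology, 31 in Chemistry)
    groups = [rng.integers(1, 6, size=k) for k in (4, 16, 31)]
    f_stat, p_value = f_oneway(*groups)

    alpha = 0.05 / 81  # Bonferroni-corrected threshold, roughly 0.000617
    print(f"p = {p_value:.3f}, corrected alpha = {alpha:.6f}")
    # With groups this small the test has very little power, so failing to clear
    # such a strict cutoff says little about real disciplinary variation.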

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy

Reviewer #3: Yes:  Prakash Chandra Kasera

Reviewer #4: No

Reviewer #5: Yes:  Oskar MacGregor

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/ . PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org

PLoS One. 2025 Aug 28;20(8):e0330416. doi: 10.1371/journal.pone.0330416.r004

Author response to Decision Letter 2


20 Jun 2025

Detailed responses to all reviewers' comments are attached in a PDF.

Attachment

Submitted filename: Response_to_reviewers_auresp_2.pdf

pone.0330416.s007.pdf (260.5KB, pdf)

Decision Letter 2

Kingsley Okoye

1 Aug 2025

Generative AI and Academic Scientists in US universities: Perception, Experience, and Adoption Intentions

PONE-D-24-54545R2

Dear Dr. Arroyo-Machado,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging in to Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Kingsley Okoye

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #3: Yes

**********

Reviewer #3: The revised version of the manuscript addresses all issues raised by the reviewers, in particular with a better focus on facts, literature coverage, statistical robustness, and acknowledgement of limitations. I suggest accepting the paper for publication in PLOS ONE based on its technical soundness and the clarity of its reasoning and data presentation, as it is already suitable for the journal's readership and adds valuable knowledge about the adoption of generative AI in academia.

**********

Acceptance letter

Kingsley Okoye

PONE-D-24-54545R2

PLOS ONE

Dear Dr. Arroyo-Machado,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Kingsley Okoye

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. Non-response bias analysis.

    (DOCX)

    pone.0330416.s001.docx (22.9KB, docx)
    S2 Appendix. Number of randomly selected institutions for sampling scientists.

    (DOCX)

    pone.0330416.s002.docx (17.7KB, docx)
    S3 Appendix. Survey instrument: Generative AI uses and impacts in academic research and education.

    (DOCX)

    pone.0330416.s003.docx (27.1KB, docx)
    Attachment

    Submitted filename: review_plosone.docx

    pone.0330416.s004.docx (14KB, docx)
    Attachment

    Submitted filename: Response_to_reviewers.pdf

    pone.0330416.s006.pdf (331.4KB, pdf)
    Attachment

    Submitted filename: Response_to_reviewers_auresp_2.pdf

    pone.0330416.s007.pdf (260.5KB, pdf)

    Data Availability Statement

    The dataset from the study, which includes the complete set of survey questions and corresponding answer options, is available at the Roper Center for Public Opinion Research at Cornell University (https://doi.org/10.25940/ROPER-31122347).

