PLOS ONE. 2023 Oct 5;18(10):e0292216. doi: 10.1371/journal.pone.0292216

An exploratory survey about using ChatGPT in education, healthcare, and research

Mohammad Hosseini 1,*,#, Catherine A Gao 2,#, David M Liebovitz 3,4, Alexandre M Carvalho 5,6, Faraz S Ahmad 1,7,8, Yuan Luo 1, Ngan MacDonald 9, Kristi L Holmes 1,9,10, Abel Kho 3,9
Editor: Mary Diane Clark
PMCID: PMC10553335  PMID: 37796786

Abstract

Objective

ChatGPT is the first large language model (LLM) to reach a large, mainstream audience. Its rapid adoption and exploration by the population at large has sparked a wide range of discussions regarding its acceptable and optimal integration in different areas. In a hybrid (virtual and in-person) panel discussion event, we examined various perspectives regarding the use of ChatGPT in education, research, and healthcare.

Materials and methods

We surveyed in-person and online attendees using an audience interaction platform (Slido). We quantitatively analyzed the responses to questions about the use of ChatGPT in various contexts, comparing categorical groups pairwise with Fisher’s exact tests. Furthermore, we used qualitative methods to analyze and code the discussions.

Results

We received 420 responses from an estimated 844 participants (response rate 49.7%). Only 40% of the audience had tried ChatGPT. More trainees had tried ChatGPT compared with faculty. Those who had used ChatGPT were more interested in using it in a wider range of contexts going forwards. Of the three contexts discussed, respondents expressed the greatest uncertainty about using ChatGPT in education. Pros and cons of using this technology in education, research, and healthcare were raised during the discussion.

Discussion

There was a range of perspectives around the uses of ChatGPT in education, research, and healthcare, with still much uncertainty around its acceptability and optimal uses. There were different perspectives from respondents of different roles (trainee vs faculty vs staff). More discussion is needed to explore perceptions around the use of LLMs such as ChatGPT in vital sectors such as education, healthcare, and research. Given the risks involved and unforeseen challenges, taking a thoughtful and measured approach to adoption would reduce the likelihood of harm.

Introduction

The introduction of OpenAI’s ChatGPT has delivered large language model (LLM) systems to a mainstream audience. Other applications such as Elicit, SciNote, Writefull, and Galactica have previously existed, but the exponential growth of ChatGPT’s audience has sparked vigorous discussions in academic circles. LLMs have demonstrated remarkable ability (and sometimes inability) in generating text in response to prompts. Some LLMs, like Elicit and Med-PaLM, can scan the available literature and suggest specific questions or insights about a particular topic by leveraging available knowledge. The new GPT-4 can also learn from images, thereby multiplying the possible use cases of LLMs, especially in education, healthcare, and research settings where visual representations are fundamental to creating or enhancing understanding. To explore the implications of using LLMs in research, education, and healthcare, Northwestern University’s Institute for Artificial Intelligence in Medicine (I.AIM) and Institute for Public Health & Medicine (IPHAM) organized a hybrid (virtual and in-person) event on Feb 16th 2023 entitled “Let’s ChatGPT”. This event consisted of lively discussions and an exploratory survey of participants. In this article, we present survey results and provide a qualitative analysis of the issues raised.

Using ChatGPT and other LLMs in education

Responses to the use of ChatGPT in education are varied. For instance, some New York schools banned students from using ChatGPT [1], while others adopted policies in their syllabi that encourage students to engage with these models as long as they disclose it [2]. Some educators fed ChatGPT questions from a freely available United States Medical Licensing Examination (USMLE) question set and reported performance near or at the passing range [3]. Others have suggested that using ChatGPT facilitates personalized and interactive learning, can improve assessment, and creates an ongoing feedback loop to inform teaching and learning [4–6]. It can also create opportunities in specific contexts. For example, in law, where original references might be complicated for students to comprehend, ChatGPT could help by restating complicated laws in more understandable terms [7]. In teaching computer science in Harvard’s flagship CS50 course, systems similar to ChatGPT will help students explain and debug their code, improve design, and answer questions, making it more likely “to approximate a 1:1 teacher:student ratio for every student” [8]. Delegating these tasks to AI is believed to free up teaching fellows’ time to engage in more meaningful and interpersonal interactions with students and focus on providing qualitative feedback [8].

As the technology improves, the debate is still open about ethical and educational uses, with many issues remaining unresolved and concerns being explored. Among such concerns, the issue of “disguising biases” is noteworthy. It is believed that by weaving information found in various sources (some of which could be biased), ChatGPT creates a “tapestry of biases”, thereby making it more difficult to pinpoint the origins of any specific bias in the educational resources it produces [9]. Using ChatGPT increases the likelihood of plagiarism, can lead to the inclusion of irrelevant or inaccurate information in students’ essays, and presents challenges in assessing students’ work [10]. The use of ChatGPT in education has been reviewed in more detail by others [11, 12].

Using ChatGPT and other LLMs in healthcare

There has long been excitement around the use of Artificial Intelligence (AI) in healthcare applications [13]. Applications of interest for language-specific tools include improving the efficiency of clinical documentation, decreasing administrative task burdens, clarifying complicated test result reports for patients, and responding to in-basket Electronic Medical Record (EMR) messages. For example, Doximity has released a beta version of DocsGPT, a tool that integrates ChatGPT to assist clinicians with tasks such as writing insurance denial appeals [14]. There have also been reports about using ChatGPT to answer medical questions [15], write clinical case vignettes [16], and simplify radiology reports to enhance patient-provider communication [17]. The electronic health record system, Epic, has announced they are examining pilot programs to use this technology for drafting notes and replying to in-basket messages [18].

In deliberations about using LLMs in healthcare, a major caveat lies in the models’ tendency to ‘hallucinate’ or ‘confabulate’ factual information, which, given the sensitivity of this context, could be extremely dangerous. Accordingly, the importance of having the output reviewed by domain experts (e.g., for accuracy, relevance, and reliability) cannot be overemphasized. Furthermore, before using LLMs in healthcare it is crucial to understand their biases. Depending on the quality of the training data and the employed reinforcement feedback processes, different LLMs might have dissimilar biases that users should be aware of. The use of ChatGPT and other LLMs in healthcare has been reviewed by others in more detail [19–21].

Using ChatGPT and other LLMs in research

Even before the introduction of OpenAI’s ChatGPT, computer generated text was used in academic publications. As of 2021, the estimated prevalence was 4.29 papers for every one million papers [22], raising concerns about the negative impact of using LLMs on the integrity of academic publications [23]. One way the community was able to detect these papers was through spotting so-called tortured phrases (i.e., the AI-generated version of an established phrase used in specific disciplines for certain concepts and phenomena).

ChatGPT, on the other hand, generates fluent and convincing abstracts that are difficult for human reviewers or traditional plagiarism detectors to identify [24]. As ChatGPT and other recently developed applications based on LLMs mainstream the use of AI-generated content, detection will likely become much more difficult. This is partly because (1) with an increase in the number of users, LLMs learn more quickly and produce more human-like content; (2) more recent LLMs benefit from better algorithms; and (3) researchers are more aware of LLMs’ shortcomings (e.g., the use of tortured phrases) and will likely mix generated content with their own writing to disguise their use of LLMs. Detection applications seem unreliable and will likely remain so for the foreseeable future. Given the challenges of detecting AI-generated text, it makes sense to err on the side of transparency and encourage disclosure. Various journal editors and professional societies have developed disclosure guidelines, stressing that LLMs cannot be authors [25, 26] and, when used, should be disclosed in the introduction or methods section (describing who used the system, when, and with which prompts) as well as among cited references [27, 28].

Besides assisting researchers in improving their writing [29], LLMs can also be used in scholarly reviews to support editorial practices. For example, they could support the search for suitable reviewers, the initial screening of manuscripts, and the drafting of final decision letters from individual review reports. However, various risks such as inaccuracies and biases, as well as confidentiality concerns, require researchers and editors to engage with LLMs cautiously [30]. The use of ChatGPT and other LLMs in research has been reviewed in detail by others [21, 31, 32].

Methods

The research protocol and the first draft of survey questions were developed (M.H. and C.A.G) based on available and ongoing work about LLMs and ChatGPT, with suggestions from other panel members (K.H. and N.K.N.M) and a team member (E.W.). ChatGPT was used to brainstorm survey questions. D.L. used OpenAI ChatGPT on the 27th of January 2023 at 6:06pm CST using the following prompt: “please create survey questions for medical students, medical residents, and medical faculty members to answer regarding ideas for use and attitudes surrounding use of ChatGPT in education and research” [33]. The Northwestern IRB granted an exemption (STU00218786). We received permission from the Vice Dean of Education to gather responses from medical trainees attending the session. Attendees were informed about the survey details, such as anonymized data collection and voluntary participation, and were offered a chance to view the information sheet and consent form before the start of the survey. We included a slide with the summary details as well as a QR code linking to additional details about the proposed study. Our IRB protocol requested a waiver of participants’ verbal or written consent because this was a hybrid event with online and in-person attendees, which made both forms of consent impractical. Upon IRB’s approval of this waiver, attendees were informed that by logging in to the Slido polling platform, they were consenting to participate in our study. We collected anonymized and unidentifiable data using a paid version of Slido (Bratislava, Slovakia; https://www.sli.do/). The full survey is available in the S1 File.

The quantitative survey data were analyzed and visualized (C.A.G.) in Python v3.8 with scipy v1.7.3, matplotlib v3.5.1, seaborn v0.11.2, tableone v0.7.10 [34], and plot_likert v0.4.0 [35]. ChatGPT was used for minor code troubleshooting. For the small subset of 18 respondents who selected multiple roles, we took their most senior and most clinical role for analysis. Responses were binarized by grouping any answer containing ‘yes’ into one category, with the other category being ‘No + unsure’. Categories were compared pairwise using Fisher’s exact tests.
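As an illustrative sketch (not the authors’ actual analysis code, which is not published here), one of these pairwise comparisons can be reproduced from the counts in Table 1. The standard-library implementation below mirrors the two-sided rule that `scipy.stats.fisher_exact` applies; the counts (9/14 medical trainees vs 14/45 clinical faculty who had used an LLM) are taken from Table 1.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed one (the standard two-sided rule).
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def pmf(x):
        # Hypergeometric probability of x successes in row 1, margins fixed.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = pmf(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    total = 0.0
    for x in range(lo, hi + 1):
        px = pmf(x)
        if px <= p_obs * (1 + 1e-9):  # tolerance for floating-point ties
            total += px
    return total

# Table 1 counts: medical trainees 9 yes / 5 no; clinical faculty 14 yes / 31 no.
p = fisher_exact_two_sided(9, 5, 14, 31)
print(f"p = {p:.3f}")
```

The result is consistent with the p = 0.03 reported in the Results for the medical trainees vs clinical faculty comparison.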

The discussion was analyzed after transcribing the session (M.H.). For this purpose, we used the three topic areas highlighted in the event description (education, healthcare and research) to qualitatively code the transcripts using an inductive approach [36]. Using these codes we analyzed the transcript. Subsequently, we identified three subcodes within each code (possible positive impacts, possible negative impacts and remaining questions), bringing the total number of codes to nine. Using these nine codes, we analyzed the transcript for a second time and generated a report. Upon the completion of the first draft of the report, feedback was sought from all members of the panel and the text was revised accordingly.

Results

Survey results

We had 1,174 people register for the event. The peak number of webinar participants during the event was 718, and 126 people indicated they would attend in-person. We received survey responses from 420 people; a conservative estimate of the response rate is 49.7% (420/844). The smallest group was medical trainees (medical students, residents, and fellows) at 14 respondents (3.3% of all respondents), and the second smallest was clinical faculty with 45 (10.7%) respondents (Table 1). There were more research trainees (graduate students and postdoctoral researchers) with 53 (12.6%) respondents, and research faculty with 65 (15.5%) respondents. Administrative staff made up 70 (16.7%) of respondents. The largest group of respondents identified as ‘Other’, with 173 respondents (41.2% of all respondents).

Table 1. Number of respondents, by role, and whether they had used LLMs before.

| Respondent role | Number | Had not used an LLM, n (%) | Had used an LLM, n (%) |
|---|---|---|---|
| Medical Student, Resident, Fellow | 14 | 5 (35.7) | 9 (64.3) |
| Graduate Student, Postdoc Researcher | 53 | 23 (43.4) | 30 (56.6) |
| Clinical Faculty | 45 | 31 (68.9) | 14 (31.1) |
| Research Faculty | 65 | 33 (50.8) | 32 (49.2) |
| Administrative Staff | 70 | 48 (68.6) | 22 (31.4) |
| Other | 173 | 112 (64.7) | 61 (35.3) |
| Total | 420 | 252 (60.0) | 168 (40.0) |

Overall, only 40% of the audience had tried ChatGPT. Medical and research trainees were more likely to have used ChatGPT compared with faculty and staff. Significantly more medical trainees (medical students, residents, fellows) had tried ChatGPT (64.3%) compared with clinical faculty (31.1%), p = 0.03. The percentage of graduate students and postdoctoral researchers who had tried ChatGPT (56.6%) was close to that of research faculty (49.2%), p = 0.46.

Across all roles except medical trainees, the most common response regarding interest in using ChatGPT going forwards was ‘Somewhat’. Those who had used ChatGPT already had higher interest in using it compared with those who had not: 39.9% of prior users were interested in using it ‘to a great extent’, compared with 15.9% of those without prior use (p<0.001; Fig 1).

Fig 1. Interest in using ChatGPT as broken down by previous usage.

Fig 1

There was greater interest going forwards among those who had already tried ChatGPT compared to those who had not.

In response to questions about whether ChatGPT can be used in specific contexts, there was greater uncertainty around its use in Healthcare and Education compared to its use in Research (Table 2). For Research, only 75 (17.9%) of respondents selected ‘I don’t know, it is too early to make a statement’, compared with 226 (53.8%) when asked about using it in Education (p<0.001) and 177 (42.2%) when asked about using it in Healthcare (p<0.001). Medical and research trainees were more interested in using it for education purposes compared with clinical and research faculty, though this was not statistically significant. Of note, when responding to the question about using ChatGPT in Healthcare, a significant portion of respondents (42%) approved of using it for administrative purposes (for example, writing letters to insurance companies), and a smaller group (12.2%) thought it could be used for any purpose. More medical trainees than clinical faculty felt it was acceptable to use this technology for healthcare purposes (including administrative purposes): 92.9% vs 48.9% ‘yes’, p = 0.004 (Fig 2).

Table 2. Survey response breakdown, by topic of education, research, and healthcare.

| Topic | Statement | Response, n (%) |
|---|---|---|
| Education | I don’t know, it is too early to make a statement | 226 (53.8) |
| Education | No, it should be banned | 11 (2.6) |
| Education | Yes, it should be actively incorporated | 183 (43.6) |
| Research | I don’t know, it is too early to make a statement | 75 (17.9) |
| Research | No, it should not be used at all | 6 (1.4) |
| Research | Yes, as long as its use is transparently disclosed | 259 (62.0) |
| Research | Yes, but it should only be used to help brainstorm | 68 (16.3) |
| Research | Yes, disclosure is NOT needed | 10 (2.4) |
| Healthcare | I don’t know, it is too early to make a statement | 177 (42.2) |
| Healthcare | No, it should not be used at all | 15 (3.6) |
| Healthcare | Yes, it can be used for administrative purposes | 176 (42.0) |
| Healthcare | Yes, it can be used for any purpose | 51 (12.2) |

Fig 2. Use of ChatGPT in healthcare, by respondent role.

Fig 2

Breakdown of proportions of answers when asking about ChatGPT use in Healthcare, as split by respondent’s role. Students had higher acceptability of ChatGPT’s use than faculty and staff.

Those who had already used ChatGPT were more likely to deem it acceptable for research purposes (89.3% ‘yes’) than those who had not used it before (75.0% ‘yes’), a difference of 14.3 percentage points, p<0.001 (Fig 3). Similarly, those with prior experience more often thought it was acceptable to use in healthcare (62.5% vs 48.8%, 13.7 percentage points higher, p = 0.008) and in education (63.9% vs 30.2%, 33.7 percentage points higher, p<0.001).

Fig 3. Acceptability of ChatGPT use.

Fig 3

Comparison of binarized responses in use of ChatGPT for education, healthcare, and research, broken down by previous use of ChatGPT.

Analysis of the Q&A session

Education

Possible positive impacts. “Leveling the playing field” for students with different language skills was identified as an advantage of using LLMs. Since students’ scientific abilities should not be overshadowed by insufficient language skills, ChatGPT was seen as a solution that could help fix errors in writing and, accordingly, an instrument that can support students who might be challenged by writing proficiency, specifically those not writing in their native language. Another useful application was “adding the fluff” to writing (i.e., details that could potentially improve comprehension), especially for those with communication challenges. Structuring and summarizing existing text or creating the first draft of letters of application with specific requirements were also mentioned among possible areas where ChatGPT could help students. Another mentioned possibility was to use ChatGPT as a studying tool that (upon further improvements and proven accuracy) could describe medical concepts at a specific comprehension level (e.g., “explain tetralogy of Fallot at the level of a tenth grader”).

Possible negative impacts. Given existing inaccuracies in content generated by systems such as ChatGPT, a panel member warned medical students against using them to explain medical concepts and encouraged them to have everything “double and triple checked”. To the extent that ChatGPT could be used to find fast solutions, and as a substitute for hard work and understanding the material (e.g., only to get through the assignments or take shortcuts), it was believed to be harmful for education. Clinical-reasoning skills were believed to be at risk if ChatGPT-like systems are used more widely. For instance, it was believed that writing clinical notes helps students “internalize the clinical reasoning that goes into decision making”, and so until such knowledge is cemented, using these systems would be harmful for junior medical students. One member of the audience warned that since effective and responsible use of ChatGPT requires adjusted curricula and assessment methods, employing them before these changes are enacted would be harmful. A panel member highlighted the lack of empirical evidence in relation to the usefulness and effectiveness of these systems when teaching different cohorts of students with various abilities and interests. As such, early adoption of these systems in all educational contexts was believed to have unforeseen consequences.

Remaining questions. Challenges of ensuring academic integrity and students’ willingness to disclose the use of ChatGPT were raised by some attendees. However, as a clinical faculty member suggested, these are neither new challenges nor unique problems associated with ChatGPT because even in the absence of such tools, one could hire somebody to write essays. Plagiarism detection applications and stricter regulations have not deterred outsourcing essay writing. Therefore, it remains an open question as to how ChatGPT changes this milieu.

A panelist suggested that similar to when ChatGPT is used to write code (e.g., in Python) and the natural tendency to test generated code to see if it actually works (e.g., as part of the larger code), students should employ methods to test and verify the accuracy and veracity of generated text. However, since systems like ChatGPT are constantly evolving, developing suggestions and guidelines for verification is challenging.

Information literacy was another issue raised by a panelist. New technologies such as ChatGPT extend and complicate existing discussions in terms of how information is accessed, processed, evaluated and ultimately consumed by users. From a university library perspective, training and supporting various community members to responsibly incorporate new technology in decision making and problem solving requires mobilizing existing and new resources.

Healthcare

Possible positive impacts. Improving communication between clinicians and patients was among anticipated possible gains. For example, it was highlighted that “doctors might not be in their best self” during an extremely busy week when they are responding to patient’s EMR messages, and so ChatGPT could ensure that all niceties are there, include additional content based on patients’ history and maintain emotional consistency in communication. Upon further development, these systems could help centralize and organize patient records by flagging areas of concern to improve diagnosis and effective decision making. Currently, our medical records lack sufficient usability and when assessing patients, one is concerned that some vital information might be “buried in a chart” that is not readily accessible. However, with LLMs acting as “assistants” or “co-pilots”, able to find these hidden and sometimes critical pieces of information, it could be possible for the provider to save time and improve care delivery.

Efficiency of documentation was highlighted as a potential gain for clinicians, patients, and the healthcare system. For example, increased efficiency in note-taking through prepopulation of forms, voice recording morphed into clinician notes, and synthesis of existing patient notes to save clinicians’ time were noted as possibilities. This increased efficiency was believed to benefit patients through improved care and increased patient-clinician interaction time, which could improve shared decision-making conversations. One panelist highlighted that patient notes are logged in the EHR system mostly late at night or outside regular working hours, stressing the burden of note-taking on clinicians as a driver of burnout.

Possible negative impacts. Given available evidence about ChatGPT’s inaccuracies and so-called hallucinated content [37], as well as the lack of transparency about the sources used to train it, using these systems in triage and admission of new patients or for clinical diagnosis was deemed risky. One panelist highlighted previous failures of AI models in clinical settings [38, 39] as a lesson for the community to adopt these technologies with caution and only after regulatory approvals. Furthermore, the COVID-19 pandemic and clinicians’ experience of having to fight “malicious misinformation” was used as an example to highlight risks associated with irresponsible use. Malevolently using wrong or inaccurate data to train an LLM was described as “poisoning the dataset” to produce a predictive model that generates erroneous information.

Although many viewed the speculated positive impact on efficiency favorably, some shared reservations about it, highlighting that the freed-up time could be seen as an opportunity to ask clinicians to visit more patients instead of spending more time with them. The explanation was that the healthcare system could redirect an opportunity like this to generate additional revenue. Furthermore, using technology to consolidate existing notes or pre-populate forms was believed to increase the likelihood that falsehoods would be copy-pasted, carrying errors forward. The concern is that since these systems have the propensity to pass on information as well as misinformation, wrong diagnoses could be carried forward without being questioned. Unless the veracity of carried-forward historical information is questioned, clinicians might be trained out of the habit of critical thinking and assume all information is reliable.

Remaining questions. When discussing incorporation of ChatGPT in healthcare, specific techno-ethical challenges were highlighted. For example, it was stressed that while excitement about technology is positive, specific aspects need profound deliberation and intentional design. These include defining and enforcing different access levels (e.g., to clinical notes), regulating data reuse, protecting patients’ privacy, accountability of user groups, and credit attribution for data contributions. Furthermore, securing the required financial investment to responsibly incorporate LLMs into existing information technology infrastructure and workflows was believed to be challenging.

Upon debating as to whether ChatGPT is a friend or foe, one panelist mentioned challenges such as distribution disparities, and said “unfortunately, the track record of our use of technologies is not strong. New technologies have always worsened disparities and I have a significant concern that the computer power that is needed to generate and power these systems will be inadequately distributed”.

When discussing the risk of malevolently poisoning LLMs’ training data, one panelist highlighted that it remains unclear how healthcare data should be curated for LLMs and how erroneous information could be identified and removed. Furthermore, who should be responsible to monitor the sanctity of training data or prioritize available information (e.g., based on the reliability of used sources)? It was noted that when using tools such as the Google search engine, users have already developed specific skills to question unique sources but because ChatGPT “assimilates” enormous amounts of information, attributions are ambiguous and so verification remains challenging.

Research

Possible positive impacts. Refining scholarly text or making suggestions to improve existing texts were highlighted among possible positive impacts. The support provided by a writing center was used as an analogy to describe some of these gains. One unique feature of ChatGPT was believed to be bidirectional communication, which allows (expert) users to “interrogate the system and help refine the output”, ultimately benefiting all users in the long run.

Possible negative impacts. Lack of transparency about the data used to train LLMs was believed to hide biases and disempower researchers in terms of “grasping the oppression that has gone into the answers”. This issue was also stressed by a member of the audience who questioned the language of the sources used. One panelist speculated that the training data likely contained more sources in languages overrepresented within the scholarly corpus (e.g., English, French). Furthermore, since ChatGPT is currently made unavailable (by OpenAI) in countries such as China, Russia, Ukraine, Iran, and Venezuela, it cannot be trained by or receive feedback from researchers who are based in these countries, and thus might be biased towards the views of researchers based in specific locations.

Remaining questions. One member of the audience believed that disclosure guidelines (e.g., have researchers disclose what part of the text is influenced by ChatGPT) are unenforceable and so, their promotion is moot. They added that the existing norms on plagiarism cover potential misconduct using ChatGPT. One panelist agreed with the unenforceability of guidelines (because researchers may alter AI-generated text to disguise their use), but highlighted that given the novelty of ChatGPT and its unique challenges, good practices in relation to this technology should be specified and promoted nonetheless.

We asked attendees to describe the most important risks and benefits of using ChatGPT with only one keyword. After correcting typographical errors and replacing all plurals with singular words (with help from ChatGPT), we used a free online word cloud generator [40] to produce the following two figures (Fig 4A and 4B).
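The cleanup step described above (trimming, lowercasing, and merging plurals before counting frequencies for a word cloud) can be sketched as follows. The keywords below are hypothetical stand-ins; the actual survey responses are not reproduced here. The plural rule merges a word with its “s”/“es” form only when the singular also appears among the responses, which avoids mangling words that merely end in “s”, such as “bias”.

```python
from collections import Counter

# Hypothetical keyword responses for illustration only.
raw_keywords = ["Bias", "biases", "Plagiarism", "efficiency", "bias ", "Efficiency"]

def singularize(word, vocabulary):
    """Merge a plural into its singular only if the singular was also given."""
    for suffix in ("es", "s"):
        stem = word[: -len(suffix)]
        if word.endswith(suffix) and stem in vocabulary:
            return stem
    return word

cleaned = [w.strip().lower() for w in raw_keywords]
vocabulary = set(cleaned)
counts = Counter(singularize(w, vocabulary) for w in cleaned)
print(counts.most_common())  # frequencies suitable for a word cloud generator
```

The resulting frequency table is what a word cloud generator consumes to size each keyword.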

Fig 4. Keyword responses.

Fig 4

(A) Respondent keywords to describe the most important risk of using ChatGPT (n = 225). (B) Respondent keywords to describe the most important benefit of using ChatGPT (n = 263).

Discussion

We hosted a large forum to gauge interest and explore perspectives about using ChatGPT in education, research, and healthcare. Overall, there was significant uncertainty around the acceptability of its use, with a large portion of respondents saying it was too early to make a statement and that they remained somewhat interested in using ChatGPT.

As demonstrated by their favorable opinions, trainees were more interested in using ChatGPT than faculty. Moreover, more trainees than faculty had already tried ChatGPT. This points to a potential generational divide between early adopters (trainees) and late adopters (faculty), with the latter in positions of power to dictate policy to trainees and the academic community at large. As trainees are likely to be more actively engaged with this technology over the years, it is important for senior faculty to also experiment with the technology. Critically, shared decision-making about appropriate use must incorporate the voices of all user groups, with emphasis on the input of trainees, who are the ones most likely to be impacted by the continued development and deployment of this nascent technology.

Our results showed that in terms of regulating the use of LLMs, a one-size-fits-all strategy might not work. For example, more respondents indicated that it is acceptable to use LLMs for research and healthcare (including for administrative tasks) than for educational purposes. Context specific policies may be helpful in clarifying what is deemed acceptable use, so as to avoid miscommunication or ambiguity. Future studies could examine each use case in greater detail, specifically among the population of potential users.

Given LLMs’ potential to be integrated into different contexts, it is reasonable to encourage different cohorts to explore and test this technology. Only 40% of our respondents had tried ChatGPT. It is important to note that participants who had used this technology had a more optimistic outlook about LLMs in general, whereas never-users seemed to have more concerns about its widespread adoption. Thus, it is important to continue to educate and inform different cohorts about LLMs and their responsible use through practical applications (including live demonstrations), so never-users can grasp the technology and help dispel the fear of the unknown. Using and engaging with LLMs is essential to learning about their abilities and limitations.

Respondents and audience members had a wide range of interesting points with regards to the use of ChatGPT for research, education, and healthcare, with a mixture of positive and negative responses. Ongoing discussion is essential, especially given the current “black-box” nature of ChatGPT and other LLMs, with users left in the dark on how outputs are generated. Unresolved questions remain about how LLMs curate content, the corpus of data they are trained on, the weights used to sort and prioritize evidence, and the risks of spreading fake news, misinformation or bias. One potential solution from legislators would be to require increased transparency from OpenAI and other LLM companies.

Limitations

One limitation of this study is our inability to break down and better delineate the large “Other” category of respondents. In addition, since respondents likely had a pre-existing interest in ChatGPT in order to register for and attend the event and complete the survey, our results might not be representative of the various cohorts within the academic community.

Although medical trainees had positive views towards ChatGPT and its use, they were our smallest group of respondents (3.3% of our cohort). We took a neutral tone towards the technology in our recruitment material for the event, as reflected in the lukewarm or uncertain views of respondents from other groups. Hence, we suspect that the positive views among medical trainees reflect genuinely high interest rather than recruitment framing. Future studies could focus more closely on this group and their attitudes towards LLMs.

Conclusion

There is still much to discuss about the optimal and ethical uses of LLMs such as ChatGPT. Responsible use should be promoted, and future discussion should continue to explore the limitations of this technology. LLMs and AI in general have the potential to change the fabric of society and impact labor relations at large, deeply transforming how we relate to one another and how we work. They are a double-edged sword, bringing the promise of more efficiency, creativity, and free time for all, but risking the spread of bias, hate, and misinformation, and widening the digital divide between those who have access to the technology and are fluent in its use and those left behind. The broad interest and engagement sparked by ChatGPT strongly suggests that, while still a work in progress, LLMs have significant potential for disruption. To navigate this uncharted territory, we recommend that future explorations of responsible use be grounded in principles of transparency, equity, reliability, and, above all, primum non nocere.

Supporting information

S1 File. Survey delivered via Slido.

Full survey question and response options as administered to the audience.

(PDF)

Acknowledgments

The authors wish to thank and acknowledge Eva Winckler for her contributions to event organization and coordination. We also thank the journal editor and two reviewers for their feedback.

Data Availability

Data are available at https://zenodo.org/record/7789186#.ZCb0eezML0o. Code is available at https://github.com/cloverbunny/gptsurvey/blob/main/gptsurvey.ipynb.

Funding Statement

This work was supported in part by the Northwestern University Institute for Artificial Intelligence in Medicine. CAG is supported by NIH/NHLBI F32HL162377. KH and MH are supported by the National Center for Advancing Translational Sciences (NCATS, UL1TR001422), National Institutes of Health (NIH). FSA is supported by grants from the National Institutes of Health/National Heart, Lung, and Blood Institute (K23HL155970) and the American Heart Association (AHA number 856917). The funders have not played a role in the design, analysis, decision to publish, or preparation of the manuscript.

References

Decision Letter 0

Mary Diane Clark

14 Jun 2023

PONE-D-23-13566: An exploratory survey about using ChatGPT in education, healthcare, and research. PLOS ONE

Dear Dr. Hosseini,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 29 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. The reviewers and I find that the manuscript can contribute to the field but feel that it needs more work on the literature review as well as in the discussion.  Please see comments that are included. 

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Mary Diane Clark, PhD

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

4. We note that Figure 7 in your submission contain copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

a. You may seek permission from the original copyright holder of Figure 7 to publish the content specifically under the CC BY 4.0 license.

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission.

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

b.If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

Additional Editor Comments:

Thank you for submitting the article to PLOS ONE, the topic is interesting and important. The reviewers provide guidance for how they feel that the manuscript needs to be edited. Please pay attention to their comments and make all of their suggested changes. I have concerns about your discussion as it feels more like a summary of your results than an interpretation of your results. As noted by San Jose State University Writing Center (http://www.sjsu.edu/writingcenter) the discussion should include three necessary steps: interpretation, analysis, and explanation. Why are your results important and where do they fit in with what we already know. The questions a discussion should address include how or why the use of Chat GPT is helpful or harmful. What is the meaning of your findings.

We look forward to your revised manuscript


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for the opportunity to review this manuscript on ChatGPT and similar AI tools. As an academic, the topic holds great relevance and has been widely discussed. However, I believe that a comprehensive review of existing literature is essential for advancing knowledge in any field. Literature reviews serve as the foundation for new research, identifying gaps, building upon previous studies, and contributing to the overall body of knowledge. Unfortunately, this manuscript suggests that the current literature is insufficient in providing a comprehensive understanding of the subject matter.

In some cases, the insufficiency of literature can be attributed to the field being relatively new or rapidly evolving. New developments and paradigms take time to be reflected in the literature, resulting in limited studies, replication, and consensus on key concepts or theories. This dynamic nature poses challenges for researchers attempting to establish a comprehensive literature base.

Recognizing the insufficiency of existing literature is crucial for addressing gaps and advancing knowledge in the field. However, this manuscript falls short in providing a full and comprehensive picture of how ChatGPT can be utilized in broader contexts, such as education, healthcare, and research. The literature on these specific applications was limited to one paragraph each, which is insufficient.

Additionally, the manuscript heavily relies on unnecessary graphics and images, which could have been condensed to reduce fluff. The demographic information could have been presented more concisely. Furthermore, the discussion section lacks a clear data interpretation process and a deeper understanding of the results. It appears that the researchers randomly selected comments without providing a comprehensive analysis.

Moreover, there were frequent APA errors throughout the manuscript, which need to be addressed.

Overall, I appreciate the opportunity to review this manuscript. However, improvements are needed to provide a more comprehensive literature review, condense unnecessary elements, and address the APA errors.

Reviewer #2: The paper is not as per journal format.

The font sizes are different in several places

Instead of bibliography it should be references

If possible Need to cite more papers related to chatGPT and related to this research

Figures titles are not as per journal standards and make sure to use any templetes related to plus one available

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: ASHLEY GREENE

Reviewer #2: Yes: Dinesh Kalla

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Oct 5;18(10):e0292216. doi: 10.1371/journal.pone.0292216.r002

Author response to Decision Letter 0


4 Aug 2023

Response to editor comments,

Thank you for your comments. We have revised our manuscript and included a point-by-point response to the reviewer comments below. In response to other comments raised:

Formatting: We have reformatted our document per PLOS ONE’s style requirements as requested. Our supplemental file includes a table and a list of questions both with relevant captions. Please let us know if there are additional formatting changes to be made.

Data sharing: We have published our dataset on Zenodo at doi 10.5281/zenodo.7789186 and all code needed to completely reproduce our results at Github repository https://github.com/cloverbunny/gptsurvey/blob/main/gptsurvey.ipynb. This is articulated in our Data and Code Availability statements.

Regarding Word Cloud Figure licensing: This is not a copyrighted image, as we generated this with our specific data. We contacted the author of the website to clarify and request his signature on the provided document; he said it was not necessary for us to obtain formal documentation of use from him, see screenshot of email communication and disclaimer below. The word cloud generator website explicitly contains the permission: “The generated word clouds may be used for any purpose.”

Response to reviewer comments:

Reviewer #1 Comment #1: Thank you for the opportunity to review this manuscript on ChatGPT and similar AI tools. As an academic, the topic holds great relevance and has been widely discussed. However, I believe that a comprehensive review of existing literature is essential for advancing knowledge in any field. Literature reviews serve as the foundation for new research, identifying gaps, building upon previous studies, and contributing to the overall body of knowledge. Unfortunately, this manuscript suggests that the current literature is insufficient in providing a comprehensive understanding of the subject matter.

In some cases, the insufficiency of literature can be attributed to the field being relatively new or rapidly evolving. New developments and paradigms take time to be reflected in the literature, resulting in limited studies, replication, and consensus on key concepts or theories. This dynamic nature poses challenges for researchers attempting to establish a comprehensive literature base. Recognizing the insufficiency of existing literature is crucial for addressing gaps and advancing knowledge in the field. However, this manuscript falls short in providing a full and comprehensive picture of how ChatGPT can be utilized in broader contexts, such as education, healthcare, and research. The literature on these specific applications was limited to one paragraph each, which is insufficient.

Reviewer #1 Response #1: We thank the reviewer for the comments and agree that this is a rapidly developing field. We have expanded upon our introduction and background in all three realms of education, healthcare, and research. We have furthermore linked to more comprehensive reviews for interested readers for detailed discussion and literature review beyond the scope of our paper.

Reviewer #1 Comment #2: Additionally, the manuscript heavily relies on unnecessary graphics and images, which could have been condensed to reduce fluff. The demographic information could have been presented more concisely. Furthermore, the discussion section lacks a clear data interpretation process and a deeper understanding of the results. It appears that the researchers randomly selected comments without providing a comprehensive analysis.

Reviewer #1 Response #2: We have removed and condensed some of our figures and presented our demographic information in Table 1 in a concise format. We included all comments raised during the Question and Answer session without any random selection. We have expanded upon our Discussion.

Reviewer #1 Comment #3: Moreover, there were frequent APA errors throughout the manuscript, which need to be addressed.

Reviewer #1 Response #3: PLOS One waives all formatting requirements for initial submissions, which is why we did not initially adhere strictly to these guidelines. We have made stylistic revisions in this new document and adhered to the style as outlined by PLOS. We are happy to work on any additional formatting requests from a copy editor.

Reviewer #2 Comments: The paper is not as per journal format.

The font sizes are different in several places

Instead of bibliography it should be references

If possible Need to cite more papers related to chatGPT and related to this research

Figures titles are not as per journal standards and make sure to use any templates related to plus one available

Reviewer #2 Responses: We have corrected these stylistic errors, cited more papers related to the research, and corrected figure titles and references. PLOS One waives all formatting requirements for initial submissions, which is why we did not initially adhere strictly to these guidelines. We are happy to consider additional formatting changes as suggested by a copy editor.

Attachment

Submitted filename: Response to reviewer comments.docx

Decision Letter 1

Mary Diane Clark

4 Sep 2023

PONE-D-23-13566R1: An exploratory survey about using ChatGPT in education, healthcare, and research. PLOS ONE

Dear Dr. Hosseini,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Oct 19 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. As I noted in my comments please get a copy editor to polish the English and then work on the tables as the one is too long and readers will get lost.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Mary Diane Clark, PhD

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments:

Thank you for all of the fixes in this revised manuscript. Plos One does not have copy editing so I am going to support the reviewers suggestion that you hire a copy editor to check all of the English and then work with someone who can help you edit down your table.

Her comments were

I would recommend a copy editor to do the final polishing of the manuscript, and to help create APA style tables. Currently Table 1 takes 3.5 pages and could be broken down into several smaller table to help readability.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for your revisions. At this time, I feel my original comments were adequately addressed.

I would recommend a copy editor to do the final polishing of the manuscript, and to help create APA style tables. Currently Table 1 takes 3.5 pages and could be broken down into several smaller table to help readability.

Reviewer #2: All comments have being addressed and this paper can be accepted with no further changes.

All the images and tables are as per journal format.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Dinesh Kalla

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Oct 5;18(10):e0292216. doi: 10.1371/journal.pone.0292216.r004

Author response to Decision Letter 1


7 Sep 2023

Reviewer #1: Thank you for your revisions. At this time, I feel my original comments were adequately addressed. I would recommend a copy editor to do the final polishing of the manuscript, and to help create APA style tables. Currently Table 1 takes 3.5 pages and could be broken down into several smaller table to help readability.

Thanks for your suggestion. We have broken down Table 1 into two smaller tables and applied the APA style. We also had the paper reviewed and improved for readability.

Reviewer #2: All comments have being addressed and this paper can be accepted with no further changes.

Thanks

Attachment

Submitted filename: Response to reviewer comments.docx

Decision Letter 2

Mary Diane Clark

11 Sep 2023

PONE-D-23-13566R2: An exploratory survey about using ChatGPT in education, healthcare, and research. PLOS ONE

Dear Dr. Hosseini,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Extremely minor grammar issues. Should take about 15 minutes to get it corrected.

Please submit your revised manuscript by Oct 26 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Mary Diane Clark, PhD

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments:

Pone-d-23-13566R2

Fun paper and thanks for all of the revisions. I have a few more--must have given feedback to tooooo many doc students today. I look forward to these very minor corrections and the publication of your paper.

Abstract---Materials and Methods

Please add the word ‘a’. before Fisher’s Exact

Page 4---5th line. : embedded in used educational resources

Again page 4 first paragraph

Increased likelihood of plagiarism, propagation of irrelevant or inaccurate

information in student essays, and challenges of assessing students in the presence of

technologies such as ChatGPT are highlighted [10].

Please rephrase:

Using ChatGPT increases the likelihood of plagiarism, the propagation of irrelevant or inaccurate information in students’ essays, and challenges in assessing students’ work when technologies, such as ChatGPT, are available.

Page 5 Methods

4th line --- add ‘the’ before 27th of January 2023

Then last two lines The Northwestern IRB granted ‘an’ exemption

Notice that you are missing some articles. I have become extremely sensitive to this issue because articles are not used in ASL and my grant office gave me what for. So if this continues I will have to ask you to get a professional to check.

Page 6 8th line----platform, they are were consenting

Then under Survey Results

The smallest group was were medical trainees

Next line

Respondents), and ‘the’ second smallest by was

Page 7

Compared to 15.9% who had no interest in using it (p<0.001; Fig.1)

Page 11Possible Negative impacts

Second line

About used sources ‘used’

Page 12 last line

Such as ‘the’ Google search engine

PLoS One. 2023 Oct 5;18(10):e0292216. doi: 10.1371/journal.pone.0292216.r006

Author response to Decision Letter 2


12 Sep 2023

Dear Dr. Clark,

Thank you so much for your comments and excellent suggestions. Please accept our apologies for minor formatting and grammar mistakes.

We have reread and revised our manuscript with minor changes. We accepted all your suggestions and have included a point-by-point response below.

Fun paper and thanks for all of the revisions. I have a few more--must have given feedback to tooooo many doc students today. I look forward to these very minor corrections and the publication of your paper.

Abstract---Materials and Methods

Please add the word ‘a’. before Fisher’s Exact

Done

Page 4---5th line. : embedded in used educational resources

Improved

Again page 4 first paragraph

Increased likelihood of plagiarism, propagation of irrelevant or inaccurate information in student essays, and challenges of assessing students in the presence of technologies such as ChatGPT are highlighted [10].

Improved

Please rephrase:

Using ChatGPT increases the likelihood of plagiarism, the propagation of irrelevant or inaccurate information in students’ essays, and challenges in assessing students’ work when technologies, such as ChatGPT, are available.

Improved

Page 5 Methods

4th line --- add ‘the’ before 27th of January 2023

Done

Then last two lines The Northwestern IRB granted ‘an’ exemption

Done

Notice that you are missing some articles. I have become extremely sensitive to this issue because articles are not used in ASL and my grant office gave me what for. So if this continues I will have to ask you to get a professional to check.

We have reread and improved the whole paper and paid specific attention to articles.

Page 6 8th line----platform, they are were consenting

Done

Then under Survey Results

The smallest group was were medical trainees

Done

Next line

Respondents), and ‘the’ second smallest by was

Done

Page 7

Compared to 15.9% who had no interest in using it (p<0.001; Fig.1)

Done

Page 11 Possible Negative impacts

Second line

About used sources ‘used’

Done

Page 12 last line

Such as ‘the’ Google search engine

Done

Attachment

Submitted filename: Response to reviewer comments.docx

Decision Letter 3

Mary Diane Clark

14 Sep 2023

An exploratory survey about using ChatGPT in education, healthcare, and research

PONE-D-23-13566R3

Dear Dr. Hosseini,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Mary Diane Clark, PhD

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Thank you for fixing the small issues that I noted. Congrats on a fun paper on a new and important topic.

Acceptance letter

Mary Diane Clark

25 Sep 2023

PONE-D-23-13566R3

An exploratory survey about using ChatGPT in education, healthcare, and research

Dear Dr. Hosseini:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Mary Diane Clark

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Survey delivered via Slido.

    Full survey question and response options as administered to the audience.

    (PDF)

    Attachment

    Submitted filename: Response to reviewer comments.docx

    Attachment

    Submitted filename: Response to reviewer comments.docx

    Attachment

    Submitted filename: Response to reviewer comments.docx

    Data Availability Statement

    Data are available at https://zenodo.org/record/7789186#.ZCb0eezML0o. Code is available at https://github.com/cloverbunny/gptsurvey/blob/main/gptsurvey.ipynb.


    Articles from PLOS ONE are provided here courtesy of PLOS
