JMIR Formative Research. 2024 Sep 12;8:e56797. doi: 10.2196/56797

ChatGPT Use Among Pediatric Health Care Providers: Cross-Sectional Survey Study

Susannah Kisvarday 1,2, Adam Yan 3,4, Julia Yarahuan 5,6, Daniel J Kats 1,2, Mondira Ray 1,2, Eugene Kim 1,2, Peter Hong 1,7, Jacob Spector 1,7, Jonathan Bickel 8, Chase Parsons 1,7, Naveed Rabbani 7,9,10, Jonathan D Hron 1,7
Editor: Amaryllis Mavragani
Reviewed by: Eileen Koski, Dag Øivind Madsen, Ruoyu Zhang, Gaurav Kumar Gupta, Satyam Rajput, Lingxuan Zhu
PMCID: PMC11427860  PMID: 39265163

Abstract

Background

The public launch of OpenAI’s ChatGPT platform generated immediate interest in the use of large language models (LLMs). Health care institutions are now grappling with establishing policies and guidelines for the use of these technologies, yet little is known about how health care providers view LLMs in medical settings. Moreover, there are no studies assessing how pediatric providers are adopting these readily accessible tools.

Objective

The aim of this study was to determine how pediatric providers are currently using LLMs in their work as well as their interest in using a Health Insurance Portability and Accountability Act (HIPAA)–compliant version of ChatGPT in the future.

Methods

A survey instrument consisting of structured and unstructured questions was iteratively developed by a team of informaticians from various pediatric specialties. The survey was sent via Research Electronic Data Capture (REDCap) to all Boston Children’s Hospital pediatric providers. Participation was voluntary and uncompensated, and all survey responses were anonymous. 

Results

Surveys were completed by 390 pediatric providers. Approximately 50% (197/390) of respondents had used an LLM; of these, almost 75% (142/197) were already using an LLM for nonclinical work and 27% (52/195) for clinical work. Providers detailed the various ways they are currently using an LLM in their clinical and nonclinical work. Only 29% (n=105) of 362 respondents indicated that ChatGPT should be used for patient care in its present state; however, 73.8% (273/368) reported they would use a HIPAA-compliant version of ChatGPT if one were available. Providers’ proposed future uses of LLMs in health care are described.

Conclusions

Despite significant concerns about and barriers to LLM use in health care, pediatric providers are already using LLMs at work. This study provides policy makers with needed information about how providers are using LLMs clinically.

Keywords: ChatGPT, machine learning, surveys and questionnaires, medical informatics applications, OpenAI, large language model, LLM, pediatric, chatbot, artificial intelligence, AI, digital tools

Introduction

The public launch of ChatGPT by OpenAI in November 2022 generated immediate interest from a wide range of professionals as well as the public. The chatbot is based on a generative pretrained transformer large language model (GPT LLM), an artificial intelligence (AI) system trained on large amounts of text that can be repurposed across a variety of domains and diverse tasks [1]. Researchers and health care organizations have begun investigating how LLMs could be used and adapted within the medical field.

Some of the emerging potential applications of LLMs in medicine include knowledge retrieval, which OpenAI has highlighted in its own promotional material, as well as clinical decision support, medical note-taking, and composing patient communications [2-5]. However, the potential problems of applying LLMs to medicine are still not well understood. One early concern is the phenomenon of AI "hallucinations," in which an LLM unpredictably returns plausible-sounding but incorrect or nonsensical answers to prompts [6,7]. Additionally, the content and phrasing of questions and prompts can significantly influence an LLM's output, potentially affecting reliability [8-10]. Concerningly, the publicly available version of ChatGPT has been shown to have a diagnostic error rate of 83% in pediatric cases [11]. Other concerns regarding the clinical use of LLMs include privacy risks, lack of transparency, and the perpetuation of bias. In response, many institutions have begun drafting guidelines for using ChatGPT and other generative AI tools in health care settings [12].

The extent to which LLMs are already being used in health care is unclear. There is a need for better assessment of health care providers' current use practices, intended future uses, and general knowledge of generative AI tools. Understanding the concerns and perspectives of clinical providers will be valuable in guiding the future development of both the AI tools themselves and the guidelines and principles for their use in health care.

Early surveys of ChatGPT/LLM use have focused broadly on uses beyond clinical care, have assessed perceptions rather than use, or have been limited in scope [13-17]. There is a dearth of information about health care providers' perceptions of the benefits of and barriers to LLMs in health care, and no study has assessed how pediatric providers are already using publicly available tools in their clinical and nonclinical work. This study sought to describe the knowledge and use of LLMs such as ChatGPT by physicians and advanced practice providers at a large, academic, free-standing children's hospital.

Methods

Study Design

This study used a cross-sectional survey design to explore clinicians’ knowledge and use of LLMs such as ChatGPT. The survey instrument was developed using a modified Delphi method by an expert panel of 10 physician informaticians from various pediatric specialties; survey questions were collaboratively codesigned with multiple rounds of feedback until consensus was achieved. The survey consisted of a series of structured and unstructured questions and used adaptive questioning; the full survey, including the adaptive questioning logic, is available in Multimedia Appendix 1. The survey was pilot-tested on a group of 5 pediatricians prior to being sent to the entire target population.
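
The adaptive questioning logic itself is given in Multimedia Appendix 1; as a rough sketch of how such branching works (the gate questions and response values below are illustrative assumptions, not the instrument's actual wording or logic), the flow can be expressed as:

```python
# Hypothetical sketch of an adaptive survey flow; the real branching logic
# is documented in Multimedia Appendix 1.

def run_survey(ask):
    """`ask` is a callable that poses one question and returns the response."""
    familiarity = ask("Have you heard of ChatGPT or another LLM?")  # gate question
    if familiarity == "never heard of it":
        return  # unfamiliar respondents skip all use questions
    if ask("Have you used ChatGPT or a similar model?") == "yes":
        if ask("Have you used it in your clinical work?") == "yes":
            ask("How have you used ChatGPT/LLM in your clinical work (select all that apply)?")
        if ask("Have you used it in your nonclinical work?") == "yes":
            ask("How have you used ChatGPT to help you with your non-clinical work?")
    ask("What concerns do you have about using ChatGPT clinically?")
    ask("Would you use a HIPAA-compliant version of ChatGPT if one were available?")
```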

Ethical Considerations

The study protocol, survey, and recruitment tool were granted full approval by the Boston Children’s Hospital Institutional Review Board (#P00045317). Participation in the survey was voluntary and uncompensated; respondents were informed of the purpose of the study and assured of their anonymity. The informed consent and privacy and confidentiality protection language is provided in Multimedia Appendices 1 and 2.

Recruitment

This closed survey study was conducted at Boston Children’s Hospital, a large academic urban pediatric health care system. The target sample was all Boston Children’s Hospital physicians and advanced practice providers in both hospital and outpatient clinic settings. Recruitment emails (see Multimedia Appendix 2) were sent via Research Electronic Data Capture (REDCap), a secure, web-based data capture application, which was also used for survey administration [18,19]. Survey recruitment and data collection started on October 12, 2023, and extended through November 14, 2023. Reminder emails were sent via REDCap to nonrespondents a maximum of 2 times.

Survey responses were analyzed in aggregate with minimum subgroup sizes of 10 responses to minimize the risk of reidentification of participants. Four survey questions included an “other” free-text option to capture concepts not covered by the provided answer choices. These free-text responses were analyzed qualitatively using the following methods. For each question, the predetermined discrete survey responses were used as a provisional codebook, which was expanded through inductive content analysis of the free-text responses [20]. To create the expanded codebook, 2 researchers (DJK and NR), one of whom was not involved with the data acquisition process, reviewed free-text responses and generated additional codes through an iterative process involving consensus meetings until no new codes were identified [21,22]. Coding conflicts were resolved by a third researcher (JDH). This expanded codebook was then applied to categorize free-text survey responses. Following coding, the 3 researchers (DJK, NR, and JDH) organized the coded responses into broader themes through an iterative process of consensus meetings.
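
As an illustration of the reporting rule described above, here is a minimal sketch of small-cell suppression; the 10-response threshold comes from the text, while the function name and example counts are hypothetical:

```python
MIN_SUBGROUP_SIZE = 10  # reporting threshold described in the Methods

def suppress_small_cells(counts):
    """Mask any subgroup count below the reporting threshold before
    publishing aggregates, reducing reidentification risk."""
    return {group: (n if n >= MIN_SUBGROUP_SIZE else "<10")
            for group, n in counts.items()}

# Hypothetical breakdown (not study data): clinical LLM use by role
print(suppress_small_cells({"Attending": 26, "APP": 14, "Trainee": 8}))
# -> {'Attending': 26, 'APP': 14, 'Trainee': '<10'}
```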

Subgroup Analysis

The χ2 test of independence, followed by a post hoc analysis of adjusted residuals, was used to assess differences in survey question responses across demographic variables, including age, gender, race/ethnicity, and clinical role.
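
For readers unfamiliar with this procedure, a minimal sketch using scipy follows; the contingency table is invented for illustration and is not study data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x3 table: ChatGPT use (rows: used / not used) by role (columns)
observed = np.array([[40, 25, 15],
                     [60, 45, 15]])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")

# Post hoc step: adjusted (standardized) residuals for each cell
n = observed.sum()
row = observed.sum(axis=1, keepdims=True) / n
col = observed.sum(axis=0, keepdims=True) / n
adj_resid = (observed - expected) / np.sqrt(expected * (1 - row) * (1 - col))
print(adj_resid)  # |value| > 1.96 flags cells driving the association
```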

Results

Demographic Characteristics

Surveys were sent to a total of 3127 physicians and advanced practice providers via email; we received 390 (12.5%) completed survey responses. As shown in Table 1, most respondents self-identified as female (n=293, 76.3%), White or European (n=324, 83.7%), and either an attending physician (n=165, 42.4%) or advanced practice provider (n=110, 28.3%).

Table 1.

Characteristics of respondents who voluntarily participated in the ChatGPT/large language model survey during the survey data collection period (N=390).

Characteristic Respondents, n (%)

Gender
  Female 293 (76.3)
  Male 85 (22.1)
  Nonbinary 2 (0.5)
  Prefer not to answer 4 (1.0)

Age (years)
  ≤29 24 (6.2)
  30-39 134 (36.2)
  40-49 110 (28.3)
  50-59 69 (17.7)
  60-69 34 (8.7)
  ≥70 11 (2.8)

Race/ethnicity
  American Indian or Alaska Native 1 (0.3)
  Asian or Asian American 36 (9.3)
  Black or African American 13 (3.4)
  Hispanic or Latino/a/e 14 (3.6)
  Native Hawaiian or Pacific Islander 0 (0)
  White or European 324 (83.7)
  Something else 4 (1.0)
  Prefer not to say 12 (3.1)

Role
  Attending physician 165 (42.4)
  Advanced practice provider 110 (28.3)
  Resident/fellow 32 (8.2)
  Other 82 (21.1)

Specialty
  Anesthesia 14 (3.6)
  Emergency medicine 20 (5.1)
  General pediatrics 59 (15.2)
  Pediatric subspecialty 149 (38.3)
  Radiology 7 (1.8)
  Pathology 2 (0.5)
  Surgical specialty 50 (12.9)
  Other 88 (22.6)

Familiarity With and Current Use of ChatGPT

Among the 390 respondents, 288 (73.8%) indicated that they were familiar with ChatGPT or another LLM, and an additional 83 (21.3%) indicated that they had heard of ChatGPT but did not really know what it is. Only 19 (4.9%) respondents reported that they had not heard of ChatGPT. Of those who had heard of ChatGPT (n=371), 197 (53.1%) had used ChatGPT or a similar model. Only 52 (26.7%) of 195 respondents who were using an LLM had used it clinically. Reported clinical uses of LLMs are shown in Table 2; the most common were drafting school and work letters and drafting prior authorizations.

Table 2.

Clinical use cases endorsed by survey respondents who indicated that they had used ChatGPT or a large language model (LLM) in their clinical work (n=52).

Responses to “How have you used ChatGPT/LLM in your clinical work (select all that apply)?” Respondents, n (%)
Draft school or work letter 26 (50)
Draft prior authorization 18 (35)
Generate patient education materials 15 (29)
Generate differential diagnosis 12 (23)
Other 12 (23)
Ask a specific clinical question (not mentioned above) 11 (21)
Draft all or part of a clinical note 8 (15)
Suggest a treatment plan 8 (15)
Respond to patient inbox messages 5 (10)
Draft all or part of a discharge summary 3 (6)
Draft handoff documentation 3 (6)

Overall, 72.4% (142/196) of question respondents reported using LLMs for nonclinical work. Nonclinical use cases included drafting emails (55/139, 39.6%); creating outlines for grants, papers, or teaching materials (52/139, 37.4%); drafting letters of recommendation (51/139, 36.7%); and writing code (eg, for statistical analysis or data visualization; 18/139, 12.9%). Free-text responses entered under "other" are shown in Table 3.

Table 3.

Free-text responses collected for 4 questions regarding participant use of and beliefs about large language models in clinical and nonclinical work.

Coded survey response^a (respondents, n): example response

How have you used ChatGPT to help you with your non-clinical work?
  Information search (n=12): "It is a helpful adjunct to online searching. Helps you quickly narrow what you are looking for (assuming you don't want a broad search)"
  Plan recreational activity (n=8): "Used for coming up with ideas for nonwork-related group activities"
  Summarize text (n=5): "Summarize review papers into usable notes."
  Revise communication (n=5): "Editing text for grammar"
  Literature review (n=3): "Triage/screen PubMed abstracts to identify references of interest"
  Generate title (n=3): "Generate catchy titles for manuscripts and presentations."
  Ideation (n=3): "Generate research ideas"
  Draft mass communication (n=3): "social media for my business"
  Creative writing (n=2): "Write poems (in English and other languages), generate ideas"
  Well-being programming (n=2): "Generate relaxation scripts."
  Workflow (n=1): "Write workflow proposals"
  Draft cover letter (n=1): "Write … cover letters"
  Translation (n=1): "translation of materials from English to another language"
  Task management (n=1): "organizing to-do lists"

What concerns do you have about using ChatGPT clinically?
  Perceived lack of utility (n=5): "Still not clear on how it would be used in healthcare"
  Potential bias in the data model (n=3): "At times, ChatGPT is confounded by the presence of wrong data and, therefore, presents clearly inaccurate statements."
  Legal (n=3): "legal concerns- I am so careful about my documentation, and I just don't think chat GPT will ever word things the way I need it to help me in medicolegal situations."
  Automation bias (n=2): "Worried about clinician interpretation of ChatGPT output … and cannot replace clinical reasoning"
  Plagiarism (n=2): "It is plagiarism on steroids."
  Learning curve (n=1): "Would not like to have to master a new technology in addition to the onslaught of requests for computer interface as it is"
  Depersonalization (n=1): "That it could take away from collaborative development of an illness explanation that provider and patient/family engage in together."
  Skill atrophy (n=1): "Humans writing reports allows clinicians to integrate data in a way that supports clinical decision making and patient counseling. I am already finding a lack of critical thinking skills in graduate students. Push button documentation would be efficient (and report writing is arduous) but we all still need to think."

How would you use the Boston Children's Hospital HIPAA^b-compliant version of ChatGPT if it were available?
  Research (n=6): "My AI robot will ... learn to sort data in redcap"
  Translation (n=3): "I use it probably in the most simple of ways to translate patient handouts into their language."
  Summarize clinical narrative (n=3): "Summarization of complex patient medical history and relevant clinical information and other data aggregation tasks (e.g., ascertain primary/longitudinal care team members involved in patient's care)"
  Extract data from narratives (n=1): "Review patient charts and imaging reports to generate tabular data for research."
  Workflow (n=1): "Lots of potential nonclinical purposes, describing workflow, responsibility mapping"

If a HIPAA-compliant version of ChatGPT were available, what types of information would you feel comfortable entering?
  Demographic data (n=1): "Name of patient's school"
  Patient medications (n=1): "Info related creating a prior auth insurance, medication, dx, etc."

^a The responses were analyzed and codified by a team of 3 physicians using formal qualitative methodology.

^b HIPAA: Health Insurance Portability and Accountability Act.

Most respondents did not think ChatGPT should be used for patient care in its present state (256/361, 70.9%). The concerns selected were accuracy or reliability (319/366, 87.2%), patient privacy or security (237/366, 64.8%), lack of clarity about how ChatGPT makes decisions (232/366, 63.4%), lack of regulation (225/366, 61.5%), and potential bias in the data model (219/366, 59.8%). Free-text responses for this question are shown in Table 3.

Future Use of ChatGPT

Among the 367 respondents to this question, 272 (74.1%) indicated they would use a version of ChatGPT if one were available that was compliant with the Health Insurance Portability and Accountability Act (HIPAA) [23]. Table 4 shows examples of how they envisioned using a HIPAA-compliant version of ChatGPT. If such a model were available, most participants indicated that they would feel comfortable entering patient diagnoses (223/266, 83.8%), age (188/266, 70.7%), and clinical questions without patient information (186/266, 69.9%); fewer would feel comfortable entering patient names (101/266, 38.0%), medical record numbers (87/266, 32.7%), dates of birth (98/266, 36.8%), or whole notes from a patient chart (99/266, 37.2%). Table 3 shows the results of our analysis of free-text responses.

Table 4.

Clinical use cases envisioned by pediatric providers who responded to the survey question about how they would use a Health Insurance Portability and Accountability Act (HIPAA)–compliant version of ChatGPT if one were available (n=270).

Responses to “How would you use a HIPAA compliant version of ChatGPT (select all that apply)?” Respondents, n (%)
Draft school or work letter 209 (77)
Generate patient education materials 191 (71)
Draft all or part of a clinical note 169 (63)
Draft prior authorization 159 (59)
Ask ChatGPT a specific clinical question 114 (42)
Draft all or part of a discharge summary 104 (39)
Generate differential diagnosis 99 (37)
Respond to patient inbox messages 87 (32)
Suggest a treatment plan 76 (28)
Draft handoff documentation 72 (27)
Other 23 (9)

Subgroup Analysis

Trainees were more likely than other respondents to have used ChatGPT (P=.01); a higher percentage of trainees (40%) than nontrainees (24%) endorsed using ChatGPT clinically, but this difference did not reach statistical significance (P=.06). Male respondents were also more likely to have used ChatGPT (P=.005). Respondents in the ≤29 years age group were more likely, and those in the ≥70 years age group less likely, to be familiar with ChatGPT (P=.002). Responses to the remaining survey questions did not differ significantly across demographic groups. Specifically, there were no statistically significant differences in whether an LLM was being used for clinical and/or nonclinical work, the endorsed current use cases, expressed concerns regarding LLM use in health care, desire for a HIPAA-compliant LLM, or endorsed planned uses for a HIPAA-compliant LLM.

Discussion

Principal Findings

This study demonstrates that pediatric providers are already using LLMs in both their clinical and nonclinical work. Because of the known limitations of LLMs in a clinical setting, including demonstrated low diagnostic accuracy, it is important to know how pediatricians are using these tools. This study adds to the current literature by providing granular information about how people are currently using LLMs in their work as well as detailing ways that providers envision using a HIPAA-compliant future version of ChatGPT.

Nearly all survey respondents had heard of ChatGPT, and most had used this tool. While nearly 75% of LLM users indicated that they are already using ChatGPT in their nonclinical work, only slightly over 25% indicated that they are currently using an LLM for clinical work. Moreover, the most common clinical uses reported were for administrative tasks such as drafting letters, prior authorizations, and patient education materials.

Consistent with other early studies, we found that clinicians are enthusiastic about using LLMs, particularly a HIPAA-compliant LLM, in clinical and nonclinical work. Notably, almost one-third of pediatric providers in this study indicated that they would feel comfortable using LLMs for patient care in their current form, and almost three-quarters of respondents indicated that they would use a HIPAA-compliant version of ChatGPT if one were available. Most of the use cases respondents envisioned for a HIPAA-compliant LLM were still administrative or operational; however, other common responses included drafting all or part of a clinical note as well as a variety of uses related to clinical decision support and clinical documentation.

Since LLMs are already being used clinically and there is strong interest in expanding their use, it is imperative that health care systems and government agencies create thoughtful policies and regulations for LLM use in health care. The Biden administration recently announced an executive order that directs actions to maximize the promise and manage the risks of AI [24]. Clinical informaticians are needed to help guide the thoughtful implementation of AI tools in clinical care.

Limitations and Future Work

This study has limitations. Most notably, generalizability is limited by the low response rate and the single-institution pediatric provider population. Selection bias is another limitation, as providers who use LLMs may have been more interested in completing a survey on this topic. Similarly, although recent American Board of Pediatrics statistics show that 67% of pediatricians are female and 57% are White [25], our survey respondents were even more skewed toward these groups. Self-reported data may also carry social desirability bias, as respondents may attempt to demonstrate that they are using these technologies in acceptable ways. Finally, we did not ask whether respondents worked in an inpatient, outpatient, or other setting, which limits interpretation of some responses; for example, the number of providers interested in using LLMs to write discharge summaries is less interpretable because outpatient providers do not generally write these documents.

The enthusiasm we found for the future use of LLMs invites further investigation of LLM use in health care. We propose evaluating differences in use cases by clinical work setting, such as the emergency department, inpatient units, outpatient clinics, and procedural areas. There would also be value in determining whether survey results differ across practice contexts (eg, nonacademic or nonurban settings, adult patient populations, or settings with different patient resources) or by geographic location; we therefore propose a larger, multi-institutional study in the future. In-depth interviews and other qualitative methods could provide deeper insights into providers' LLM use and beliefs. Finally, exploring patients' perceptions and current use of LLMs would be of great value.

Conclusions

This survey study adds to the corpus of knowledge on how providers are thinking about and using LLMs in a clinical context. LLMs will join the growing set of digital tools in the clinical ecosystem intended to advance clinical care. Despite significant concerns about and barriers to LLM use in health care, this survey demonstrates that these tools are already commonly used and that there is enthusiasm for their future use. Knowing how providers are using LLMs in their clinical and nonclinical work will help guide policy and regulations regarding the health care use of AI. As informaticians, it is incumbent upon us to support the appropriate use of these technologies to improve patient care while monitoring for their unintended consequences.

Abbreviations

AI: artificial intelligence
GPT LLM: generative pretrained transformer large language model
HIPAA: Health Insurance Portability and Accountability Act
LLM: large language model
REDCap: Research Electronic Data Capture

Multimedia Appendix 1

Survey questions.

Multimedia Appendix 2

ChatGPT survey recruitment email.

Data Availability

The datasets generated and analyzed during this study are available from the corresponding author on reasonable request.

Footnotes

Conflicts of Interest: None declared.

References

1. Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, Agarwal S, Herbert-Voss A, Krueger G, Henighan T, Child R, Ramesh A, Ziegler D, Wu J, Winter C, Hesse C, Chen M, Sigler E, Litwin M, Gray S, Chess B, Clark J, Berner C, McCandlish S, Radford A, Sutskever I, Amodei D. Language models are few-shot learners. 34th Conference on Neural Information Processing Systems (NeurIPS 2020); December 6-12, 2020; Vancouver, Canada. 2020. https://papers.nips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html
2. Singhal K, Azizi S, Tu T, Mahdavi SS, Wei J, Chung HW, Scales N, Tanwani A, Cole-Lewis H, Pfohl S, Payne P, Seneviratne M, Gamble P, Kelly C, Babiker A, Schärli N, Chowdhery A, Mansfield P, Demner-Fushman D, Agüera Y Arcas B, Webster D, Corrado GS, Matias Y, Chou K, Gottweis J, Tomasev N, Liu Y, Rajkomar A, Barral J, Semturs C, Karthikesalingam A, Natarajan V. Large language models encode clinical knowledge. Nature. 2023 Aug 12;620(7972):172-180. doi: 10.1038/s41586-023-06291-2
3. Ali SR, Dobbs TD, Hutchings HA, Whitaker IS. Using ChatGPT to write patient clinic letters. Lancet Digit Health. 2023 Apr;5(4):e179-e181. doi: 10.1016/S2589-7500(23)00048-1
4. Lee P, Bubeck S, Petro J. Benefits, limits, and risks of GPT-4 as an AI chatbot for medicine. N Engl J Med. 2023 Mar 30;388(13):1233-1239. doi: 10.1056/NEJMsr2214184
5. Goodman RS, Patrinely JR, Stone CA, Zimmerman E, Donald RR, Chang SS, Berkowitz ST, Finn AP, Jahangir E, Scoville EA, Reese TS, Friedman DL, Bastarache JA, van der Heijden YF, Wright JJ, Ye F, Carter N, Alexander MR, Choe JH, Chastain CA, Zic JA, Horst SN, Turker I, Agarwal R, Osmundson E, Idrees K, Kiernan CM, Padmanabhan C, Bailey CE, Schlegel CE, Chambless LB, Gibson MK, Osterman TJ, Wheless LE, Johnson DB. Accuracy and reliability of chatbot responses to physician questions. JAMA Netw Open. 2023 Oct 02;6(10):e2336483. doi: 10.1001/jamanetworkopen.2023.36483
6. Zhang P, Kamel Boulos MN. Generative AI in medicine and healthcare: promises, opportunities and challenges. Future Internet. 2023 Aug 24;15(9):286. doi: 10.3390/fi15090286
7. Rabbani N, Brown C, Bedgood M, Goldstein RL, Carlson JL, Pageler NM, Morse KE. Evaluation of a large language model to identify confidential content in adolescent encounter notes. JAMA Pediatr. 2024 Mar 01;178(3):308-310. doi: 10.1001/jamapediatrics.2023.6032
8. Snow J. ChatGPT can give great answers. But only if you know how to ask the right question. Wall Street Journal. 2023 Apr 12. https://www.wsj.com/articles/chatgpt-ask-the-right-question-12d0f035
9. Strong E, DiGiammarino A, Weng Y, Kumar A, Hosamani P, Hom J, Chen JH. Chatbot vs medical student performance on free-response clinical reasoning examinations. JAMA Intern Med. 2023 Sep 01;183(9):1028-1030. doi: 10.1001/jamainternmed.2023.2909
10. Koopman B, Zuccon G. ChatGPT tell me what I want to hear: How different prompts impact health answer correctness. In: Bouamor H, Pino J, Bali K, editors. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Singapore: Association for Computational Linguistics; 2023. pp. 15012-15022.
11. Barile J, Margolis A, Cason G, Kim R, Kalash S, Tchaconas A, Milanaik R. Diagnostic accuracy of a large language model in pediatric case studies. JAMA Pediatr. 2024 Mar 01;178(3):313-315. doi: 10.1001/jamapediatrics.2023.5750
12. Initial guidelines for using ChatGPT and other generative AI tools at Harvard. Harvard University Information Technology. https://huit.harvard.edu/news/ai-guidelines
13. Ayoub NF, Lee Y, Grimm D, Divi V. Head-to-head comparison of ChatGPT versus Google Search for medical knowledge acquisition. Otolaryngol Head Neck Surg. 2024 Jun;170(6):1484-1491. doi: 10.1002/ohn.465
14. Temsah M, Aljamaan F, Malki KH, Alhasan K, Altamimi I, Aljarbou R, Bazuhair F, Alsubaihin A, Abdulmajeed N, Alshahrani FS, Temsah R, Alshahrani T, Al-Eyadhy L, Alkhateeb SM, Saddik B, Halwani R, Jamal A, Al-Tawfiq JA, Al-Eyadhy A. ChatGPT and the future of digital health: a study on healthcare workers' perceptions and expectations. Healthcare. 2023 Jun 21;11(13):1812. doi: 10.3390/healthcare11131812
15. Parikh PM, Talwar V, Goyal M. ChatGPT: An online cross-sectional descriptive survey comparing perceptions of healthcare workers to those of other professionals. Cancer Res Stat Treat. 2023;6(1):32-36. doi: 10.4103/crst.crst_40_23
16. Iyengar KP, Yousef MMA, Nune A, Sharma GK, Botchu R. Perception of Chat Generative Pre-trained Transformer (Chat-GPT) AI tool amongst MSK clinicians. J Clin Orthop Trauma. 2023 Sep;44:102253. doi: 10.1016/j.jcot.2023.102253
17. Hu J, Liu F, Chu CM, Chang YT. Health care trainees' and professionals' perceptions of ChatGPT in improving medical knowledge training: rapid survey study. J Med Internet Res. 2023 Oct 18;25:e49385. doi: 10.2196/49385
18. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009 Apr;42(2):377-381. doi: 10.1016/j.jbi.2008.08.010
19. Harris PA, Taylor R, Minor BL, Elliott V, Fernandez M, O'Neal L, McLeod L, Delacqua G, Delacqua F, Kirby J, Duda SN, REDCap Consortium. The REDCap consortium: building an international community of software platform partners. J Biomed Inform. 2019 Jul;95:103208. doi: 10.1016/j.jbi.2019.103208
20. Miles M, Huberman M, Saldana J. Qualitative Data Analysis: A Methods Sourcebook. Thousand Oaks, CA: Sage Publications; 2018.
21. Ancker J, Benda N, Reddy M, Unertl KM, Veinot T. Guidance for publishing qualitative research in informatics. J Am Med Inform Assoc. 2021 Nov 25;28(12):2743-2748. doi: 10.1093/jamia/ocab195
22. Cofie N, Braund H, Dalgarno N. Eight ways to get a grip on intercoder reliability using qualitative-based measures. Can Med Educ J. 2022 May 29;13(2):73-76. doi: 10.36834/cmej.72504
23. Health information privacy. US Department of Health and Human Services Office for Civil Rights. https://www.hhs.gov/hipaa/index.html
24. Executive order on the safe, secure, and trustworthy development and use of artificial intelligence. The White House. 2023 Oct 30. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
25. Turner A, Gregg C, Leslie Laurel K. Race and ethnicity of pediatric trainees and the board-certified pediatric workforce. Pediatrics. 2022 Sep 01;150(3):e2021056084. doi: 10.1542/peds.2021-056084
