JMIR Medical Education. 2023 Sep 5;9:e48254. doi: 10.2196/48254

Assessing Health Students' Attitudes and Usage of ChatGPT in Jordan: Validation Study

Malik Sallam 1,2, Nesreen A Salim 3,4, Muna Barakat 5,6, Kholoud Al-Mahzoum 1, Ala'a B Al-Tammemi 7, Diana Malaeb 8, Rabih Hallit 9,10,11, Souheil Hallit 9,12
Editors: Kaushik Venkatesh, Maged N Kamel Boulos
Reviewed by: Javier Flores Cohaila, Aidan Gilson, Christine Jacob
PMCID: PMC10509747  PMID: 37578934

Abstract

Background

ChatGPT is a conversational large language model with the potential to revolutionize knowledge acquisition. However, the impact of this technology on the quality of education remains unknown, considering the risks and concerns surrounding ChatGPT use. Therefore, it is necessary to assess the usability and acceptability of this promising tool. As an innovative technology, the intention to use ChatGPT can be studied in the context of the technology acceptance model (TAM).

Objective

This study aimed to develop and validate a TAM-based survey instrument called TAME-ChatGPT (Technology Acceptance Model Edited to Assess ChatGPT Adoption) that could be employed to examine the successful integration and use of ChatGPT in health care education.

Methods

The survey tool was created based on the TAM framework. It comprised 13 items for participants who had heard of ChatGPT but did not use it and 23 items for participants who used ChatGPT. Using a convenience sampling approach, the survey link was circulated electronically among university students between February and March 2023. Exploratory factor analysis (EFA) was used to assess the construct validity of the survey instrument.

Results

The final sample comprised 458 respondents, the majority of whom were undergraduate students (n=442, 96.5%). Only 109 (23.8%) respondents had heard of ChatGPT prior to participation, and only 55 (11.3%) self-reported ChatGPT use before the study. EFA of the attitude and usage scales showed significant Bartlett tests of sphericity (P<.001) and adequate Kaiser-Meyer-Olkin measures (0.823 for the attitude scale and 0.702 for the usage scale), confirming the factorability of the correlation matrices. The EFA showed that 3 constructs explained a cumulative total of 69.3% of the variance in the attitude scale; these subscales represented perceived risks, attitude toward technology/social influence, and anxiety. For the ChatGPT usage scale, EFA showed that 4 constructs explained a cumulative total of 72% of the variance in the data and comprised perceived usefulness, perceived risks, perceived ease of use, and behavior/cognitive factors. All the ChatGPT attitude and usage subscales showed good reliability, with Cronbach α values >.70 for all the deduced subscales.

Conclusions

The TAME-ChatGPT demonstrated good reliability, validity, and usefulness in assessing health care students’ attitudes toward ChatGPT. The findings highlighted the importance of considering risk perceptions, usefulness, ease of use, attitudes toward technology, and behavioral factors when adopting ChatGPT as a tool in health care education. This information can aid the stakeholders in creating strategies to support the optimal and ethical use of ChatGPT and to identify the potential challenges hindering its successful implementation. Future research is recommended to guide the effective adoption of ChatGPT in health care education.

Keywords: artificial intelligence, machine learning, education, technology, healthcare, survey, opinion, knowledge, practices, KAP

Introduction

Health care education has a rich history marked by notable revolutionary milestones [1-8]. The latest potential milestone could be the incorporation of artificial intelligence (AI) and machine learning (ML) into this educational domain with the capacity to bring about promising transformative changes [9-12]. The past decade has witnessed significant advancements in the application of AI and ML to health care education and practice [13-16].

Advanced AI-based tools, such as the Generative Pretrained Transformer (GPT)–based tools developed by OpenAI, have the potential to significantly impact health care education [17]. These tools implement deep neural networks to generate human-like text in various languages [17]. The high accuracy and promising potential of these tools can advance health care education [9,18]. OpenAI's ChatGPT, publicly available and user-friendly, exemplifies these tools and has received widespread attention and scrutiny in academia and among health professionals [9,17,19-21].

The successful implementation of novel technologies is influenced by a range of factors, including technical, social, cultural, and psychological aspects that shape attitudes and behaviors toward the technology [22-24]. To study these factors, various frameworks have been developed, such as the technology acceptance model (TAM) [25,26] and the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) [27-29], among others [30,31]. These models help elucidate the interplay of complex factors that shape the acceptance and usage of novel technologies [32]. The popularity of TAM stems from its valid and straightforward framework, enabling the study of factors that motivate the adoption of technological innovations [32,33].

In examining the acceptance and usage of novel technology, the TAM framework utilizes constructs that assess the perceived usefulness, ease of use, risks, anxiety, attitude toward the technology, social influence, and cognitive and behavioral factors [25,26].

Since its public release in November 2022, ChatGPT has evoked both enthusiasm and concern [34-37]. The same controversy extends to health care research, education, and practice settings [9]. The utility of ChatGPT in health care education has been reviewed recently [9]. Its cited benefits include enhancing personalized learning experiences, potentially improving communication skills, and increasing students' engagement in the learning process [9,18,38,39].

However, several valid concerns were raised, including the possibility of generating inaccurate content, along with ethical issues, including the risk of bias, plagiarism, and copyright issues [9,18,40,41]. Understanding the acceptance and use factors among health care students is essential, and the TAM framework offers a comprehensive yet simple approach for this purpose.

The rationale for this study rests on several factors. First, ChatGPT's novelty and potential in health care education necessitate an understanding of its acceptance and the factors influencing it. Second, ChatGPT's transformative potential in self-learning, feedback, and problem-solving warrants investigation for effective integration. Third, exploring health care students' attitudes sheds light on technology readiness and benefits. Finally, understanding student attitudes aids in addressing ethical concerns for the responsible utilization of ChatGPT in health care settings.

Therefore, this study aimed to establish and test a TAM-based construct for understanding the acceptance and use of ChatGPT, a novel technology, among university students in health care disciplines. This study sought to analyze the possible factors that would drive the successful adoption and implementation of ChatGPT as an example of large language models (LLMs) in health care education. Consequently, the survey instrument developed in this study can provide valuable insights into the factors influencing the adoption of this transformative tool.

Methods

Inclusion and Exclusion Criteria

Potential study participants were recruited by convenience sampling through the authors' contacts in Jordan. The survey link was circulated through WhatsApp and Facebook groups targeting students in health schools in this Arabic-speaking country. The survey was open from February 28, 2023, to March 31, 2023. Participation was voluntary and did not involve incentives. The inclusion criteria, outlined explicitly in the introductory section of the questionnaire before the informed consent item, were (1) being 18 years of age or older, (2) being currently enrolled in a Jordanian university, and (3) having a very good comprehension of the Arabic language. The exclusion criteria were (1) being younger than 18 years of age, (2) studying in non–health care–related disciplines, and (3) having a poor comprehension of the Arabic language.

The minimum sample size was estimated to be 360 participants following the established guidelines for survey validation studies, considering 36 items with 10 participants per item [42-44].
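For transparency, the estimate follows directly from the 10-respondents-per-item heuristic cited above:

$$n_{\min} = 36 \ \text{items} \times 10 \ \text{participants per item} = 360 \ \text{participants}$$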

Ethics Approval

This study was approved by the institutional review board of the School of Pharmacy at the Applied Science Private University (2023-PHA-3), and approval was granted on January 24, 2023. Participation was voluntary and anonymous.

Construction of the Survey Instrument to Assess the Acceptance and Usage of ChatGPT

The survey instrument development process involved an extensive literature review and expert validation, followed by item development and pilot testing to ensure clarity [25,26,45-49]. Following an internal discussion among the authors with previous experience in survey construction and validation (MS, MB, DM, and SH), the survey tool was created based on the TAM framework. This internal discussion led to the identification of potential domains for inclusion in the final questionnaire: perceived usefulness, ease of use, risks, anxiety, attitude toward the technology, social influence, and cognitive and behavioral factors [25,26].

Herein, we refer to this edited TAM model in the context of ChatGPT adoption as the TAME-ChatGPT (Technology Acceptance Model Edited to Assess ChatGPT Adoption) survey instrument. Face and content validity were assessed by subjective evaluation, with an assessment of the clarity, comprehensiveness, and relevance of the initial items that were adopted. Additionally, any potential biases or issues with the wording of the items (eg, vague wording or complex items) were assessed [50].

Then, forward and backward translations were conducted by 3 authors (MS, NAS, and MB). Afterward, the survey was distributed among 6 participants as a pilot test, followed by minor language modifications to improve clarity. Construct validity was checked after survey distribution using 13 TAM-based items evaluated among the respondents who had heard of ChatGPT before the study. An additional 23 TAM-based items were evaluated among the respondents who had used ChatGPT before the study.

The survey was introduced with a full explanation of the aims and a mandatory electronic consent item for the successful completion of the survey. The introductory section explicitly stated that participant anonymity and privacy were guaranteed, as no personal details such as names or emails were requested. This was followed by items assessing age, sex, university (public vs private), nationality (Jordanian vs non-Jordanian), school (health vs scientific vs humanities), and current educational level (undergraduate vs postgraduate). Then, a single item followed ("Have you heard of ChatGPT before the study?"), with a "yes" response required to move to the next item, while an answer of "no" resulted in survey submission. The next item was "Have you used ChatGPT before the study?" with "yes" resulting in the presentation of the full 36 items. An answer of "no" resulted in the presentation of the first 13 TAM items. The complete phrasing of the included items is presented in Table S1 of Multimedia Appendix 1.
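The branching logic can be summarized as follows (a minimal sketch with hypothetical function and variable names, not the survey platform's actual code):

```python
# Sketch of the TAME-ChatGPT survey skip logic (hypothetical names).
def items_to_present(heard_of_chatgpt: bool, used_chatgpt: bool) -> int:
    """Return how many TAM items a respondent is shown."""
    if not heard_of_chatgpt:
        return 0   # survey is submitted after the demographic items
    if not used_chatgpt:
        return 13  # attitude items only
    return 36      # full set: 13 attitude items + 23 usage items
```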

Each item was evaluated on a 5-point Likert scale with the following responses: strongly agree scored as 5, agree scored as 4, neutral/no opinion scored as 3, disagree scored as 2, and strongly disagree scored as 1. The scoring was reversed for the items implying a negative attitude toward ChatGPT.
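A minimal sketch of this scoring scheme, assuming responses sit in a pandas DataFrame with hypothetical column names q1-q13; the reverse-coded items (items 1-8) and the perceived risk subscale (items 1, 2, 4, 7, and 8; see Table 3) are used as the example:

```python
import pandas as pd

LIKERT_MIN, LIKERT_MAX = 1, 5
# Items 1-8 are negatively worded and reverse coded (Table 3); the column
# names q1..q13 are assumptions for this sketch.
REVERSE_ITEMS = [f"q{i}" for i in range(1, 9)]
RISK_ITEMS = ["q1", "q2", "q4", "q7", "q8"]  # perceived risk subscale

def reverse_code(scores: pd.Series) -> pd.Series:
    """Map 1<->5, 2<->4, 3->3 so higher always means a more positive attitude."""
    return (LIKERT_MIN + LIKERT_MAX) - scores

df = pd.DataFrame({f"q{i}": [5, 4, 3, 2] for i in range(1, 14)})  # toy responses
df[REVERSE_ITEMS] = df[REVERSE_ITEMS].apply(reverse_code)
risk_score = df[RISK_ITEMS].sum(axis=1)  # possible range 5-25; 15 is neutral
```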

Statistical Analysis of Evaluation of Factorability for the Correlation Matrix of the Attitude and Usage Scales

The statistical analysis was performed using SPSS software (V22.0; IBM Corp). To explore the factor structure of the TAME-ChatGPT construct comprising a total of 36 items, we conducted an exploratory factor analysis (EFA) using principal component analysis (PCA) as the extraction method and oblimin rotation to allow for correlations between factors. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and the Bartlett test of sphericity were used to assess the suitability of the data for EFA. The internal consistency of the subscales and the TAME-ChatGPT was checked using Cronbach α. The level of statistical significance was set at P<.05.
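The analysis was run in SPSS; as an illustrative sketch only, the same sequence (Bartlett test, KMO, PCA extraction with oblimin rotation) can be reproduced with the open-source factor_analyzer package, where `items` is assumed to be a DataFrame of the Likert responses:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

def run_efa(items: pd.DataFrame, n_factors: int) -> pd.DataFrame:
    """Check factorability, then extract factors by PCA with oblimin rotation."""
    chi2, p_value = calculate_bartlett_sphericity(items)  # Bartlett test
    _, kmo_total = calculate_kmo(items)                   # sampling adequacy
    print(f"Bartlett chi2={chi2:.1f}, P={p_value:.4f}; KMO={kmo_total:.3f}")
    fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="oblimin")
    fa.fit(items)
    return pd.DataFrame(fa.loadings_, index=items.columns)  # pattern matrix
```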

Descriptive Analysis of Attitudes Toward ChatGPT and Its Usage Based on TAME-ChatGPT

Descriptive statistics included measures of central tendency (mean and median) and dispersion (SD and IQR). Considering the relatively small sample size, the Shapiro-Wilk test was used to assess the normality of the scale variables.

The associations between categorical variables were assessed using the chi-square (χ²) test, while the associations between categorical and scale variables were assessed by the Mann-Whitney (M-W) U test for nonnormally distributed scale variables. The level of statistical significance was set at P<.05.
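For illustration, the named tests map onto scipy.stats as follows (toy values, not study data):

```python
import numpy as np
from scipy import stats

scores = np.array([12, 15, 9, 20, 14, 11, 17, 13])
w_stat, p_normal = stats.shapiro(scores)  # Shapiro-Wilk normality test

# 2x2 contingency table of counts (e.g., age group x heard of ChatGPT, Table 2)
table = np.array([[44, 207], [65, 142]])
chi2, p_assoc, dof, _ = stats.chi2_contingency(table)  # chi-square test

group_a, group_b = scores[:4], scores[4:]
u_stat, p_mw = stats.mannwhitneyu(group_a, group_b)  # Mann-Whitney U test
```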

Results

Study Participants

A total of 480 responses were received over the 1-month period. Of these, 9 individuals declined to participate, and 5 respondents attending humanities schools and 8 science students were excluded. Thus, the final study sample comprised 458 participants.

The study sample had a mean age of 21 (SD 3.3) years and a median age of 20 (IQR 19-22) years. Characteristics of the study sample are shown in Table 1. Out of the 458 participants, only 109 (23.8%) had heard of ChatGPT prior to the study, and only 55 (11.3%) self-reported ChatGPT use before the study.

Table 1.

Characteristics of the study respondents (N=458).

Values are n (%).

Age (years): 18-20 years, 251 (54.8); >20 years, 207 (45.2)
Sex: male, 143 (31.2); female, 315 (68.8)
Nationality: Jordanian, 207 (45.2); non-Jordanian, 251 (54.8)
University: public, 392 (85.6); private, 66 (14.4)
Educational level: undergraduate, 442 (96.5); postgraduate, 16 (3.5)
Have you heard of ChatGPT before this study? Yes, 109 (23.8); no, 349 (76.2)
Have you used ChatGPT before this study?^a Yes, 55 (50.5); no, 54 (49.5)

^a The item was assessed only for the participants who had heard of ChatGPT before the study (109/458, 23.8%).

Prior Knowledge and Usage of ChatGPT Among the Study Participants

In the whole study sample, older age, male sex, and postgraduate education were associated with a higher probability of hearing about ChatGPT before the study (Table 2). On the other hand, the differences lacked statistical significance upon comparing the different categories in the tested variables with the probability of ChatGPT usage before the study (Table 2).

Table 2.

Association between the study variables and previous knowledge or usage of ChatGPT.

Values are n (%); "heard" = Have you heard of ChatGPT before this study?; "used" = Have you used ChatGPT before this study?; P values from chi-square tests.

Age (years): heard P=.001, χ²=12 (df=1); used P=.64, χ²=0.2 (df=1)
  18-20: heard yes 44 (17.5), no 207 (82.5); used yes 21 (47.7), no 23 (52.3)
  >20: heard yes 65 (31.4), no 142 (68.6); used yes 34 (52.3), no 31 (47.7)
Sex: heard P<.001, χ²=47 (df=1); used P=.39, χ²=0.7 (df=1)
  Male: heard yes 63 (44.1), no 80 (55.9); used yes 34 (54), no 29 (46)
  Female: heard yes 46 (14.6), no 269 (85.4); used yes 21 (45.7), no 25 (54.3)
Nationality: heard P=.30, χ²=1.1 (df=1); used P=.29, χ²=1.1 (df=1)
  Jordanian: heard yes 54 (26.1), no 153 (73.9); used yes 30 (55.6), no 24 (44.4)
  Non-Jordanian: heard yes 55 (21.9), no 196 (78.1); used yes 25 (45.5), no 30 (54.5)
University: heard P=.40, χ²=0.7 (df=1); used P=.74, χ²=0.1 (df=1)
  Public: heard yes 96 (24.5), no 296 (75.5); used yes 49 (51), no 47 (49)
  Private: heard yes 13 (19.7), no 53 (80.3); used yes 6 (46.2), no 7 (53.8)
Educational level: heard P=.002, χ²=9.6 (df=1); used P=.31, χ²=1 (df=1)
  Undergraduate: heard yes 100 (22.6), no 342 (77.4); used yes 49 (49), no 51 (51)
  Postgraduate: heard yes 9 (56.3), no 7 (43.8); used yes 6 (66.7), no 3 (33.3)

Factorability of the Correlation Matrix of the Attitude Scale

The EFA was conducted on a set of 13 items to identify underlying factors that accounted for the variance in the responses. The sample comprised the participants who had heard of ChatGPT before the study (n=109, 23.8%). The Bartlett test of sphericity was significant (χ²=779.2, df=78; P<.001), indicating the factorability of the correlation matrix. The KMO measure of sampling adequacy was 0.823, indicating that the data were suitable for factor analysis.

The EFA was performed using PCA and oblimin rotation to account for potential correlations between factors. The scree plot showed that the optimal number of factors was 3, which explained a cumulative total of 69.3% of the variance in the data (Figure S1 of Multimedia Appendix 1). The eigenvalues for the 3 factors were 4.695, 3.148, and 1.168, respectively. All 13 items loaded significantly on 1 of the 3 factors, with factor loadings ranging from 0.65 to 0.87 (Table 3).
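Since each of the 13 standardized items contributes 1 unit of variance under PCA, the reported cumulative percentage follows directly from the eigenvalues:

$$\frac{4.695 + 3.148 + 1.168}{13} = \frac{9.011}{13} \approx 0.693 = 69.3\%$$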

Table 3.

Pattern matrix of the principal component analysis showing the 3 inferred factors for the attitude scale.

Component | Factor 1 (perceived risk) | Factor 2 (technology/social influence) | Factor 3 (anxiety)
1. I am concerned about the reliability of the information provided by ChatGPT.^a | 0.743 | <0.400 | <0.400
2. I am concerned that using ChatGPT would get me accused of plagiarism.^a | 0.873 | <0.400 | <0.400
3. I am afraid of relying too much on ChatGPT and not developing my critical thinking skills.^a | <0.400 | <0.400 | 0.839
4. I am concerned about the potential security risks of using ChatGPT.^a | 0.652 | <0.400 | <0.400
5. I am afraid of becoming too dependent on technology like ChatGPT.^a | <0.400 | <0.400 | 0.869
6. I am afraid that using ChatGPT would result in a lack of originality in my university assignments and duties.^a | <0.400 | <0.400 | 0.732
7. I am afraid that the use of ChatGPT would be a violation of academic and university policies.^a | 0.807 | <0.400 | <0.400
8. I am concerned about the potential privacy risks that might be associated with using ChatGPT.^a | 0.695 | <0.400 | <0.400
9. I am enthusiastic about using technology such as ChatGPT for learning and research. | <0.400 | 0.828 | <0.400
10. I believe technology such as ChatGPT is an important tool for academic success. | <0.400 | 0.837 | <0.400
11. I think that technology like ChatGPT is attractive and fun to use. | <0.400 | 0.868 | <0.400
12. I am always keen to learn about new technologies like ChatGPT. | <0.400 | 0.775 | <0.400
13. I trust the opinions of my friends or colleagues about using ChatGPT. | <0.400 | 0.717 | <0.400

^a Items were reverse coded.

Based on the original TAM constructs, factor 1 was labeled "perceived risk" and included 5 items. Factor 2 was labeled "technology/social influence" and included 5 items related to attitude toward technology and social influence. Factor 3 was labeled "anxiety" and included 3 items related to anxiety and fear of ChatGPT.

The 3 factors demonstrated good internal consistency, with Cronbach α values of .876, .858, and .827, respectively, indicating that they could be used to measure these constructs in future research.
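For reference, Cronbach α is computed from the item variances and the variance of the total score; a minimal sketch (not the authors' SPSS procedure), where `subscale` is assumed to be a DataFrame holding the items of one factor:

```python
import pandas as pd

def cronbach_alpha(subscale: pd.DataFrame) -> float:
    """Cronbach alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = subscale.shape[1]
    item_variances = subscale.var(axis=0, ddof=1).sum()
    total_variance = subscale.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```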

Factorability of the Correlation Matrix of the Usage Scale

The EFA was conducted on a set of 14 items to identify underlying factors that accounted for the variance in the responses. The sample comprised the participants who had used ChatGPT before the study (n=55, 11.3%). The Bartlett test of sphericity was significant (χ²=427.1, df=91; P<.001), indicating the factorability of the correlation matrix. The KMO measure of sampling adequacy was 0.702, indicating that the data were suitable for factor analysis.

Similar to the approach used for the attitude scale, the EFA was performed using PCA and oblimin rotation. The scree plot indicated that the optimal number of factors was 4, which explained a cumulative total of 72% of the variance in the data (Figure S2 of Multimedia Appendix 1). The eigenvalues for the 4 factors were 5.296, 1.979, 1.577, and 1.269, respectively. All 14 items loaded significantly on 1 of the 4 factors, with factor loadings ranging from 0.59 to 0.94 (Table 4).
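The same eigenvalue check applies here, with 14 standardized items:

$$\frac{5.296 + 1.979 + 1.577 + 1.269}{14} = \frac{10.121}{14} \approx 0.723 \approx 72\%$$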

Table 4.

Pattern matrix of the principal component analysis showing the 4 inferred factors for the usage scale.

Component | Factor 1 (perceived usefulness) | Factor 2 (perceived risk) | Factor 3 (perceived ease of use) | Factor 4 (behavior)
2. I am concerned that using ChatGPT would get me accused of plagiarism. | <0.400 | 0.790 | <0.400 | <0.400
4. I am concerned about the potential security risks of using ChatGPT. | <0.400 | 0.840 | <0.400 | <0.400
14. ChatGPT helps me to save time when searching for information. | 0.840 | <0.400 | <0.400 | <0.400
16. For me, ChatGPT is a reliable source of accurate information. | 0.664 | <0.400 | <0.400 | <0.400
19. I recommend ChatGPT to my colleagues to facilitate their academic duties. | 0.840 | <0.400 | <0.400 | <0.400
20. ChatGPT is more useful than other sources of information that I have used previously. | 0.585 | <0.400 | <0.400 | <0.400
22. I have used tools or techniques similar to ChatGPT in the past. | <0.400 | <0.400 | <0.400 | 0.703
23. I spontaneously find myself using ChatGPT when I need information for my university assignments and duties. | <0.400 | <0.400 | <0.400 | 0.852
24. I often use ChatGPT as a source of information in my university assignments and duties. | <0.400 | <0.400 | <0.400 | 0.745
26. I think that relying on technology like ChatGPT can disrupt my critical thinking skills. | <0.400 | 0.756 | <0.400 | <0.400
27. I appreciate the accuracy and reliability of the information provided by ChatGPT. | 0.614 | <0.400 | <0.400 | <0.400
28. I believe that using ChatGPT can save time and effort in my university assignments and duties. | 0.937 | <0.400 | <0.400 | <0.400
30. It does not take a long time to learn how to use ChatGPT. | <0.400 | <0.400 | 0.880 | <0.400
32. ChatGPT does not require extensive technical knowledge. | <0.400 | <0.400 | 0.869 | <0.400

Factor 1 was labeled “perceived usefulness” and included 6 items related to perceived usefulness. Factor 2 was labeled “perceived risk” and included 3 items related to perceived risk. Factor 3 was labeled “perceived ease of use” and included 2 items related to ease of use. Factor 4 was labeled “behavior” and included 3 items related to cognitive and behavioral aspects of ChatGPT use.

The 4 factors demonstrated good internal consistency (Cronbach α values of .885, .718, .824, and .781, respectively) and could be used to measure these constructs in future research.

Descriptive Analysis of the Attitudes Toward ChatGPT Based on TAME-ChatGPT

The 3 TAME-ChatGPT attitude subscales were evaluated first. The possible range of the perceived risks subscale was 5 to 25, with higher values indicating lower perceived ChatGPT risks due to the reverse coding of these items and a score of 15 indicating a neutral attitude toward ChatGPT.
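In general, for a subscale of $k$ items each scored 1 to 5, the possible range and neutral midpoint are:

$$\text{range} = [k,\ 5k], \qquad \text{neutral} = 3k \quad (k = 5:\ [5,\ 25],\ \text{neutral} = 15)$$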

Among the participants who had heard of ChatGPT before the study (n=109, 23.8%), the mean perceived risks score was 12.5 (SD 4.8), indicating a general agreement with the items assessing the perceived ChatGPT risks. Higher perceived risks were seen among females (P=.036, M-W; Table 5). No statistically significant differences were seen based on age, nationality, university, or self-reported ChatGPT use (Table 5).

Table 5.

Comparison of the 3 TAME-ChatGPT^a attitude constructs stratified by participants' variables.

Values are mean (SD) and median (IQR); P values from the Mann-Whitney U test.

Age (years): risk P=.61; technology/social influence P=.26; anxiety P=.26
  18-20: risk 12.8 (5.5), 12.5 (9-16); technology 19.9 (4.1), 20 (16.5-24.5); anxiety 7.1 (3.4), 6 (4-10)
  >20: risk 12.3 (4.3), 12 (10-15); technology 19 (4.2), 19 (16-22); anxiety 6.2 (2.5), 6 (5-7)
Sex: risk P=.04; technology/social influence P=.02; anxiety P=.24
  Male: risk 13.3 (5.3), 13 (10-17); technology 20.1 (4), 20 (17-24); anxiety 6.9 (3.1), 6 (5-9)
  Female: risk 11.3 (3.8), 11 (9-14); technology 18.3 (4.2), 17.5 (15-21); anxiety 6.1 (2.7), 6 (4-7)
Nationality: risk P=.62; technology/social influence P=.11; anxiety P=.43
  Jordanian: risk 12.3 (4.4), 12 (10-15); technology 18.7 (4), 19 (15-22); anxiety 6.2 (2.4), 6 (5-7)
  Non-Jordanian: risk 12.7 (5.3), 12 (9-16); technology 20 (4.2), 20 (16-24); anxiety 6.9 (3.3), 6 (4-10)
University: risk P=.57; technology/social influence P=.86; anxiety P=.82
  Public: risk 12.3 (4.7), 12 (9-15); technology 19.3 (4.2), 19 (16-24); anxiety 6.6 (3), 6 (4-9)
  Private: risk 13.8 (5.7), 12 (11-14); technology 19.5 (3.6), 20 (19-21); anxiety 6.6 (2.6), 6 (6-7)
Educational level: risk P=.96; technology/social influence P=.40; anxiety P=.52
  Undergraduate: risk 12.5 (4.9), 12 (10-15); technology 19.4 (4.1), 20 (16-23.5); anxiety 6.7 (3), 6 (4-9)
  Postgraduate: risk 12.1 (3.9), 13 (10-15); technology 18.2 (4.5), 17 (15-21); anxiety 5.8 (2), 6 (5-7)
Have you used ChatGPT before this study? risk P=.84; technology/social influence P<.001; anxiety P=.35
  Yes: risk 12.5 (5.2), 12 (9-17); technology 21 (3.6), 21 (18-25); anxiety 6.4 (2.9), 6 (4-7)
  No: risk 12.4 (4.4), 12 (10-15); technology 17.6 (3.9), 17.5 (15-20); anxiety 6.8 (2.9), 6 (5-9)

^a TAME-ChatGPT: Technology Acceptance Model Edited to Assess ChatGPT Adoption.

For the technology/social influence subscale, the possible range was 5 to 25, with higher values indicating a positive attitude toward technology exemplified by ChatGPT and a score of 15 indicating a neutral attitude. The mean attitude toward technology score was 19.3 (SD 4.1), indicating a positive attitude toward ChatGPT technology. Higher technology subscale scores were seen among the participants who used ChatGPT before the study (mean 21, SD 3.6 vs mean 17.6, SD 3.9 among those who have not used it before the study; P<.001, M-W), and among males (mean 20.1, SD 4 vs mean 18.3, SD 4.2 among females; P=.023, M-W). No statistically significant differences were seen based on age, nationality, university, and educational level (Table 5).

For the anxiety subscale, the possible range was 3 to 15, with higher values indicating lower anxiety toward ChatGPT due to the reverse coding of these items and a score of 9 indicating a neutral attitude. The mean anxiety score was 6.6 (SD 2.9), indicating an anxious attitude regarding ChatGPT in the study sample. No statistically significant differences were seen based on age, sex, nationality, university, educational level, and self-reported ChatGPT use (Table 5).

Descriptive Analysis of ChatGPT Usage Determinants Based on TAME-ChatGPT

The 4 TAME-ChatGPT usage subscales were evaluated. The possible range of the perceived usefulness subscale was 6 to 30, with higher values indicating a higher perceived usefulness of ChatGPT and a score of 18 indicating a neutral attitude.

The mean perceived usefulness score was 24.2 (SD 4.9), indicating high perceived usefulness of ChatGPT among the participants who used it before the study. No statistically significant differences were seen based on age, sex, nationality, university, and educational level (Table 6).

Table 6.

Comparison of the 4 TAME-ChatGPT^a usage constructs stratified by participants' variables.

Values are mean (SD) and median (IQR); P values from the Mann-Whitney U test.

Age (years): usefulness P=.06; risk P=.78; ease of use P=.39; behavior P=.58
  18-20: usefulness 25.6 (5.1), 27 (22-30); risk 7.4 (3.5), 7 (5-11); ease of use 9.2 (1.1), 10 (8-10); behavior 10 (3.7), 10 (8-14)
  >20: usefulness 23.3 (4.6), 23 (21-27); risk 7 (2.3), 6.5 (6-8); ease of use 8.7 (1.8), 10 (8-10); behavior 9.6 (3.2), 9 (8-12)
Sex: usefulness P=.81; risk P=.14; ease of use P=.69; behavior P=.51
  Male: usefulness 24.1 (4.6), 24 (22-28); risk 7.5 (3), 7.5 (6-9); ease of use 9 (1.3), 10 (8-10); behavior 10 (3.3), 10 (8-12)
  Female: usefulness 24.3 (5.5), 27 (21-29); risk 6.5 (2.5), 6 (5-7); ease of use 8.9 (2), 10 (8-10); behavior 9.4 (3.5), 9 (7-12)
Nationality: usefulness P=.16; risk P=.95; ease of use P=.45; behavior P=.45
  Jordanian: usefulness 23.4 (4.9), 23 (21-27); risk 7.2 (2.7), 6.5 (5-9); ease of use 9 (1.7), 10 (8-10); behavior 9.4 (3.4), 9.5 (6-12)
  Non-Jordanian: usefulness 25.1 (4.9), 26 (22-29); risk 7.1 (3), 7 (5-8); ease of use 8.8 (1.4), 10 (8-10); behavior 10.2 (3.3), 10 (8-12)
University: usefulness P=.91; risk P=.80; ease of use P=.52; behavior P=.14
  Public: usefulness 24.1 (4.9), 24 (21-28); risk 7.1 (3), 7 (5-9); ease of use 9 (1.6), 10 (8-10); behavior 10 (3.2), 10 (8-12)
  Private: usefulness 24.7 (5.4), 27.5 (20-28); risk 7.2 (1.7), 7 (6-8); ease of use 8.5 (1.8), 9 (7-10); behavior 7.8 (3.8), 6 (5-11)
Educational level: usefulness P=.19; risk P=.65; ease of use P=.66; behavior P=.91
  Undergraduate: usefulness 24.4 (5), 24 (22-29); risk 7.2 (2.9), 7 (5-9); ease of use 9 (1.6), 10 (8-10); behavior 9.7 (3.5), 10 (8-13)
  Postgraduate: usefulness 22.3 (3.1), 22 (21-24); risk 6.7 (2.3), 6 (6-7); ease of use 8.7 (1.6), 9 (8-10); behavior 9.8 (2.6), 10.5 (8-12)

^a TAME-ChatGPT: Technology Acceptance Model Edited to Assess ChatGPT Adoption.

For the perceived risk subscale, the possible range was 3 to 15, with higher values indicating lower perceived risks from ChatGPT use due to reverse coding of these items and a score of 9 indicating a neutral attitude. The mean perceived risk score was 7.2 (SD 2.8), indicating a slightly high perceived risk from ChatGPT use. No statistically significant differences were seen based on age, sex, nationality, university, and educational level (Table 6).

For the perceived ease of use subscale, the possible range was 2 to 10, with higher values indicating higher perceived ease of ChatGPT use and a score of 6 indicating a neutral attitude. The mean perceived ease of use score was 8.9 (SD 1.6), indicating a high perceived ease of ChatGPT use in the study sample. No statistically significant differences were seen based on age, sex, nationality, university, and educational level (Table 6).

For the behavior subscale, the possible range was 3 to 15, with higher values indicating a positive behavior toward ChatGPT use due to the reverse coding of these items and a score of 9 indicating a neutral attitude. The mean behavior score was 9.8 (SD 3.3), indicating a slightly positive behavior toward ChatGPT, leaning toward a neutral attitude. No statistically significant differences were seen based on age, sex, nationality, university, and educational level (Table 6).

Discussion

Principal Results

The main finding of this study was the demonstrated reliability and validity of TAME-ChatGPT as a valuable tool for assessing health care students' attitudes toward ChatGPT. The findings emphasized the need to account for risk perceptions, usefulness, ease of use, attitudes toward technology, and behavioral factors to successfully implement ChatGPT in health care education. These insights can guide AI developers, academics, and policy makers to formulate suitable strategies to ensure the ethical and optimal deployment of ChatGPT while addressing potential implementation challenges.

The availability of ChatGPT as an example of LLMs carries transformative societal implications, especially in health care settings, making its adoption in health care education seemingly inevitable [9,11,51-54]. Students will increasingly explore this innovative AI-based technology, with an already growing literature highlighting its significance in health care education through personalized learning with immediate feedback and impressive performance in medical exams [9,18,40,55-60]. Additionally, a recent study indicated a growing tendency among the general public to employ ChatGPT for self-diagnosis [61]. Therefore, the initial step toward the effective integration of ChatGPT in health care education involves evaluating attitudes toward this novel technology as well as the factors influencing its acceptance and usage.

However, before achieving this relevant aim, it is imperative to use a survey instrument that is validated to reach reliable conclusions based on the tested variables. Thus, this study represents one of the initial efforts to construct and validate a survey instrument assessing the attitudes toward ChatGPT among health care students in Jordan.

In this study, the major domains inferred through EFA for the participants who had heard of ChatGPT included the perceived risks associated with ChatGPT, the attitude toward technology/social influence, and the anxiety that ChatGPT provokes. For the participants who had used ChatGPT, EFA showed that 4 TAM-based domains were crucial factors driving its use: perceived usefulness, perceived risks, perceived ease of use, and the behavior driving the use of technology.

The emergence of perceived risks as a major construct driving the attitude toward ChatGPT and its use is understandable. This is related to the potential for LLMs exemplified by ChatGPT to generate biased, inaccurate, or harmful content [9]. ChatGPT, among other LLMs, depends on huge training data sets; nevertheless, there is a general lack of transparency regarding the origin of these data [9,37]. Subsequently, there is a possibility that LLMs could learn and reproduce biased and incorrect content, which can have severe consequences in health care settings [9,36,37,62-64].

Risk perception plays a crucial role in decision-making, including the adoption of novel technologies like ChatGPT [65-68]. Recent studies highlighted the potential risks associated with ChatGPT, including performance and privacy concerns [9,41]. Consequently, the participating students' knowledge, beliefs, and prior experience with similar technologies significantly influenced their risk perception of ChatGPT. Unintended negative consequences, such as inappropriate or inaccurate content, pose significant risks in health care settings, necessitating careful consideration before its adoption in health care education [9,69-71].

This study demonstrated that risk perception significantly influenced health care students’ attitudes and usage of ChatGPT. This emphasizes the need for developers to address potential biases in ChatGPT, in addition to the need to address possible technological flaws to prevent cybersecurity threats and data breaches. Policy makers and AI-chatbot developers should prioritize transparent risk management strategies to promote responsible ChatGPT adoption in health care education [9,18,72]. Suggested measures to address ChatGPT’s perceived risks include student education on ChatGPT’s limitations and risks, establishing ethical guidelines for its responsible use, considering ethical and legal aspects, and promoting the development of high-quality training data [9,41].

The second construct driving the attitude toward ChatGPT found in this study was the attitude toward technology, alongside social influence. This construct refers to the perception of and readiness to embrace technological innovations. Consistent with previous evidence, positive attitudes facilitate the adoption of new technologies [73,74]. Thus, to promote a wider adoption of educational chatbots, providing training and education on the technology, highlighting its benefits, and ensuring accurate outputs are crucial [75,76].

Social influence can significantly impact attitudes toward ChatGPT adoption, including the opinions of the social circle and peers [77,78]. Additionally, media, public figures, and technology leaders play a role in shaping positive attitudes toward such applications. For example, the public opinions of prominent figures in the technology and business sectors can influence the widespread adoption and use of ChatGPT [79,80].

The third construct found in this validation study was the anxiety ChatGPT might provoke. The global availability of ChatGPT can be a transformative paradigm shift akin to the introduction of the internet and mobile phones, inducing fear, uncertainty, or discomfort [79,81,82]. Therefore, the elicited anxiety from such novel technology should be regarded as a significant factor driving its adoption [83,84].

In the second part of the TAM-based survey assessing ChatGPT usage determinants, the results showed that perceived usefulness and ease of use were important factors influencing ChatGPT use among health care students. These psychological factors have previously been identified as playing a critical role in shaping attitudes toward the adoption of new technologies [74,85-87]. Additionally, the perceived usefulness and effectiveness of technologies in achieving their intended goals can significantly influence the overall attitude of users, since an efficient and user-friendly technology encourages a more positive attitude toward its adoption [87-89]. Consequently, the impact of perceived usefulness and ease of use on students' attitudes toward ChatGPT appears crucial for predicting and encouraging its successful adoption. In this exploratory study, we observed a high level of ease of use among the small group of participants who reported using ChatGPT, likely due to its user-friendly nature and free accessibility [17,71,90].

In this study, following the TAM model, behavioral and cognitive factors emerged as key drivers of ChatGPT usage among health care students. ChatGPT can provide quick and easy access to information and services, reducing the need for human interaction, which is advantageous for busy health care students dealing with massive amounts of information and packed learning schedules [18,91]. Therefore, the ease of access provided by ChatGPT compared to traditional methods of education is a significant advantage [9,18,91,92]. Additionally, educational chatbots offer the potential to enhance self-confidence and communication skills, particularly for students facing challenges in social communication, highlighting their value as conversational interfaces that simulate human interaction and foster a sense of companionship among students [93,94].

On the other hand, one of the negative driving factors for ChatGPT use is the potential for dependence or even addiction [95]. This problem is of particular concern for individuals who may be susceptible to compulsive behavior [96]. Such addiction can lead to decreased productivity, social withdrawal, and other negative consequences severely affecting the students' later interactions with patients. The use of ChatGPT may also be associated with a deterioration in empathy and social skills [9]. Reliance on ChatGPT may hinder the development of the skills needed to interpret and respond to social cues, which should be considered in health care education [9,91].

Limitations

The limited sample size is a major limitation of this study; however, the length and complexity of the scale required considerable time and effort from participants, which can reduce the number willing to complete the survey due to respondent fatigue [97]. Selection bias should also be considered given the convenience sampling approach, and this issue should be addressed in future studies aiming to confirm these findings and evaluate health care students' attitudes toward ChatGPT and its use. The female predominance might partly reflect selection bias, but it also aligns with the female majority among dentistry, pharmacy, and nursing students in Jordan. Importantly, despite the utilization of the TAM framework, a significant limitation of this study is the potential bias in the tested constructs, which should be considered in future validation studies.

Future Perspectives

Following the initial validation of TAME-ChatGPT as a tool to assess the attitude toward ChatGPT and its usage among health care students, a follow-up multinational project will be conducted to perform a confirmatory factor analysis and determine the major determinants of the attitude toward ChatGPT. This can help guide the efforts needed for the successful adoption of ChatGPT in health care education.

Conclusions

In this study, we showed that the validated TAME-ChatGPT scales have good reliability and validity. Three domains, covered by 13 items, determine the attitude toward ChatGPT: the perceived risks from ChatGPT, the attitude toward technology/social influence, and the anxiety that ChatGPT creates. Additionally, 4 constructs comprising 14 items can help determine the factors driving ChatGPT use: perceived usefulness, perceived risks, perceived ease of use, and the behavior driving the use of ChatGPT. Future studies are recommended to guide the successful adoption of ChatGPT in health care education.

Overall, the results of this study highlighted the importance of considering perceptions of risk, usefulness, ease of use, and attitudes toward technology, as well as behavioral factors, when adopting new technologies for health care education, exemplified by ChatGPT. This can help AI developers, academics, and policy makers devise strategies to promote the effective and ethical use of ChatGPT and identify barriers to the adoption of this revolutionary technology. By analyzing the acceptance and use of ChatGPT through a reliable and valid construct, evidence-based insights can inform decisions on incorporating this technology into health care education.

Acknowledgments

We are deeply grateful to the students who participated in this study.

Abbreviations

AI: artificial intelligence
EFA: exploratory factor analysis
GPT: Generative Pretrained Transformer
KMO: Kaiser-Meyer-Olkin
LLM: large language model
M-W: Mann-Whitney
PCA: principal component analysis
TAM: technology acceptance model
TAME-ChatGPT: Technology Acceptance Model Edited to Assess ChatGPT Adoption
UTAUT2: Unified Theory of Acceptance and Use of Technology 2

Multimedia Appendix 1

Supplementary tables and figures.


Footnotes

Conflicts of Interest: None declared.

References

  • 1.de Divitiis E, Cappabianca P, de Divitiis O. The "schola medica salernitana": the forerunner of the modern university medical schools. Neurosurgery. 2004 Oct;55(4):722–44; discussion 744. doi: 10.1227/01.neu.0000139458.36781.31. [DOI] [PubMed] [Google Scholar]
  • 2.Dornan T. Osler, Flexner, apprenticeship and 'the new medical education'. J R Soc Med. 2005 Mar;98(3):91–5. doi: 10.1177/014107680509800302. https://europepmc.org/abstract/MED/15738549 .98/3/91 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Arnone JM, Fitzsimons V. Plato, nightingale, and nursing: can you hear me now? Int J Nurs Knowl. 2015 Oct;26(4):156–62. doi: 10.1111/2047-3095.12059. [DOI] [PubMed] [Google Scholar]
  • 4.Hildebrandt S. Lessons to be learned from the history of anatomical teaching in the United States: the example of the University of Michigan. Anat Sci Educ. 2010;3(4):202–12. doi: 10.1002/ase.166. http://hdl.handle.net/2027.42/77525 . [DOI] [PubMed] [Google Scholar]
  • 5.Custers E, Cate O. The History of Medical Education in Europe and the United States, with respect to time and proficiency. Acad Med. 2018 Mar;93(3S Competency-Based, Time-Variable Education in the Health Professions):S49–S54. doi: 10.1097/ACM.0000000000002079.00001888-201803001-00010 [DOI] [PubMed] [Google Scholar]
  • 6.Kamel Boulos MN, Wheeler S. The emerging Web 2.0 social software: an enabling suite of sociable technologies in health and health care education. Health Info Libr J. 2007 Mar;24(1):2–23. doi: 10.1111/j.1471-1842.2007.00701.x. https://onlinelibrary.wiley.com/doi/10.1111/j.1471-1842.2007.00701.x .HIR701 [DOI] [PubMed] [Google Scholar]
  • 7.Bernhardt J, Hubley J. Health education and the Internet: the beginning of a revolution. Health Educ Res. 2001 Dec 1;16(6):643–5. doi: 10.1093/her/16.6.643. [DOI] [Google Scholar]
  • 8.Braddock CH, Eckstrom E, Haidet P. The "new revolution" in medical education: fostering professionalism and patient-centered communication in the contemporary environment. J Gen Intern Med. 2004 May;19(5 Pt 2):610–1. doi: 10.1111/j.1525-1497.2004.45003.x. https://europepmc.org/abstract/MED/15109334 .JGI45003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Sallam M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare (Basel) 2023 Mar 19;11(6) doi: 10.3390/healthcare11060887. https://www.mdpi.com/resolver?pii=healthcare11060887 .healthcare11060887 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Sapci AH, Sapci HA. Artificial intelligence education and tools for medical and health informatics students: systematic review. JMIR Med Educ. 2020 Jun 30;6(1):e19285. doi: 10.2196/19285. https://mededu.jmir.org/2020/1/e19285/ v6i1e19285 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Eysenbach G. The role of ChatGPT, generative language models, and artificial intelligence in medical education: a conversation with ChatGPT and a call for papers. JMIR Med Educ. 2023 Mar 06;9:e46885. doi: 10.2196/46885. https://mededu.jmir.org/2023//e46885/ v9i1e46885 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Akour I, Alshurideh M, Al Kurdi B, Al Ali A, Salloum S. Using machine learning algorithms to predict people's intention to use mobile learning platforms during the COVID-19 pandemic: machine learning approach. JMIR Med Educ. 2021 Mar 04;7(1):e24032. doi: 10.2196/24032. https://mededu.jmir.org/2021/1/e24032/ v7i1e24032 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Zhang A, Xing L, Zou J, Wu JC. Shifting machine learning for healthcare from development to deployment and from models to data. Nat Biomed Eng. 2022 Dec 04;6(12):1330–1345. doi: 10.1038/s41551-022-00898-y.10.1038/s41551-022-00898-y [DOI] [PubMed] [Google Scholar]
  • 14.Weidener L, Fischer M. Artificial intelligence teaching as part of medical education: qualitative analysis of expert interviews. JMIR Med Educ. 2023 Apr 24;9:e46428. doi: 10.2196/46428. https://mededu.jmir.org/2023//e46428/ v9i1e46428 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Lee J, Wu AS, Li D, Kulasegaram KM. Artificial intelligence in undergraduate medical education: a scoping review. Acad Med. 2021 Nov 01;96(11S):S62–S70. doi: 10.1097/ACM.0000000000004291.00001888-202111001-00014 [DOI] [PubMed] [Google Scholar]
  • 16.Hogg HDJ, Al-Zubaidy M, Technology Enhanced Macular Services Study Reference Group. Talks J, Denniston AK, Kelly CJ, Malawana J, Papoutsi C, Teare MD, Keane PA, Beyer FR, Maniatopoulos G. Stakeholder perspectives of clinical artificial intelligence implementation: systematic review of qualitative evidence. J Med Internet Res. 2023 Jan 10;25:e39742. doi: 10.2196/39742. https://www.jmir.org/2023//e39742/ v25i1e39742 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.OpenAI: models GPT-3. OpenAI. [2023-04-02]. https://beta.openai.com/docs/models .
  • 18.Sallam M, Salim N, Barakat M, Al-Tammemi A. ChatGPT applications in medical, dental, pharmacy, and public health education: a descriptive study highlighting the advantages and limitations. Narra J. 2023 Mar 29;3(1):e103. doi: 10.52225/narra.v3i1.103. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Li J, Dada A, Kleesiek J, Egger J. ChatGPT in healthcare: a taxonomy and systematic review. medRxiv. Preprint posted online on March 30, 2023. doi: 10.1101/2023.03.30.23287899. [DOI] [PubMed] [Google Scholar]
  • 20.Nov O, Singh N, Mann D. Putting ChatGPT's medical advice to the (Turing) test: survey study. JMIR Med Educ. 2023 Jul 10;9:e46939. doi: 10.2196/46939. https://mededu.jmir.org/2023//e46939/ v9i1e46939 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Shahsavar Y, Choudhury A. User intentions to use ChatGPT for self-diagnosis and health-related purposes: cross-sectional survey study. JMIR Hum Factors. 2023 May 17;10:e47564. doi: 10.2196/47564. https://humanfactors.jmir.org/2023//e47564/ v10i1e47564 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Jacob C, Sanchez-Vazquez A, Ivory C. Social, organizational, and technological factors impacting clinicians' adoption of mobile health tools: systematic literature review. JMIR Mhealth Uhealth. 2020 Feb 20;8(2):e15935. doi: 10.2196/15935. https://mhealth.jmir.org/2020/2/e15935/ v8i2e15935 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Roberts R, Flin R, Millar D, Corradi L. Psychological factors influencing technology adoption: a case study from the oil and gas industry. Technovation. 2021 Apr;102:102219. doi: 10.1016/j.technovation.2020.102219. [DOI] [Google Scholar]
  • 24.Tverskoi D, Babu S, Gavrilets S. The spread of technological innovations: effects of psychology, culture and policy interventions. R Soc Open Sci. 2022 Jun;9(6):211833. doi: 10.1098/rsos.211833. https://royalsocietypublishing.org/doi/abs/10.1098/rsos.211833?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub0pubmed .rsos211833 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly. 1989 Sep;13(3):319. doi: 10.2307/249008. [DOI] [Google Scholar]
  • 26.Marangunić N, Granić A. Technology acceptance model: a literature review from 1986 to 2013. Univ Access Inf Soc. 2014 Feb 16;14(1):81–95. doi: 10.1007/s10209-014-0348-1. [DOI] [Google Scholar]
  • 27.Venkatesh. Thong. Xu Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Quarterly. 2012;36(1):157. doi: 10.2307/41410412. [DOI] [Google Scholar]
  • 28.Ammenwerth E. Technology Acceptance Models in Health Informatics: TAM and UTAUT. Stud Health Technol Inform. 2019 Jul 30;263:64–71. doi: 10.3233/SHTI190111.SHTI190111 [DOI] [PubMed] [Google Scholar]
  • 29.Lange A, Koch J, Beck A, Neugebauer T, Watzema F, Wrona KJ, Dockweiler C. Learning with virtual reality in nursing education: qualitative interview study among nursing students using the unified theory of acceptance and use of technology model. JMIR Nursing. 2020 Sep 1;3(1):e20249. doi: 10.2196/20249. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Lai P. The literature review of technology adoption models and theories for the novelty technology. J Sys Inf Technol Manag. 2017 Jun 08;14(1):21–38. doi: 10.4301/S1807-17752017000100002. [DOI] [Google Scholar]
  • 31.Rogers E. Diffusion of Innovations. Berlin, Germany: Springer; 1995. [Google Scholar]
  • 32.Liu Z, Min Q, Ji S. A comprehensive review of research in IT adoption. 4th International Conference on Wireless Communications, Networking and Mobile Computing; October 12-17; Dalian, China. 2008. [DOI] [Google Scholar]
  • 33.Rahimi B, Nadri H, Lotfnezhad Afshar H, Timpka T. A systematic review of the technology acceptance model in health informatics. Appl Clin Inform. 2018 Dec;9(3):604–634. doi: 10.1055/s-0038-1668091. http://europepmc.org/abstract/MED/30112741 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.ChatGPT banned in Italy over privacy concerns. BBC News. 2023. [2023-04-02]. https://www.bbc.com/news/technology-65139406 .
  • 35.Stokel-Walker C. AI bot ChatGPT writes smart essays - should professors worry? Nature. 2022 Dec 09; doi: 10.1038/d41586-022-04397-7.10.1038/d41586-022-04397-7 [DOI] [PubMed] [Google Scholar]
  • 36.Stokel-Walker C, Van Noorden R. What ChatGPT and generative AI mean for science. Nature. 2023 Feb 06;614(7947):214–216. doi: 10.1038/d41586-023-00340-6.10.1038/d41586-023-00340-6 [DOI] [PubMed] [Google Scholar]
  • 37.Nature editorial Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023 Jan 24;613(7945):612–612. doi: 10.1038/d41586-023-00191-1. [DOI] [PubMed] [Google Scholar]
  • 38.van Dis EAM, Bollen J, Zuidema W, van Rooij R, Bockting CL. ChatGPT: five priorities for research. Nature. 2023 Feb 03;614(7947):224–226. doi: 10.1038/d41586-023-00288-7. http://paperpile.com/b/KWcOMb/9UIV .10.1038/d41586-023-00288-7 [DOI] [PubMed] [Google Scholar]
  • 39.Khan RA, Jawaid M, Khan AR, Sajjad M. ChatGPT - Reshaping medical education and clinical management. Pak J Med Sci. 2023 Feb 07;39(2):605–607. doi: 10.12669/pjms.39.2.7653. https://europepmc.org/abstract/MED/36950398 .PJMS-39-605 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, Chartash D. How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment. JMIR Med Educ. 2023 Feb 08;9:e45312. doi: 10.2196/45312. doi: 10.2196/45312.v9i1e45312 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Borji A. A Categorical Archive of ChatGPT Failures. arXiv. Preprint posted online on May 9, 2023. doi: 10.21203/rs.3.rs-2895792/v1. [DOI] [Google Scholar]
  • 42.Boateng GO, Neilands TB, Frongillo EA, Melgar-Quiñonez HR, Young SL. Best practices for developing and validating scales for health, social, and behavioral research: a primer. Front Public Health. 2018;6:149. doi: 10.3389/fpubh.2018.00149. doi: 10.3389/fpubh.2018.00149. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.MacCallum RC, Widaman KF, Zhang S, Hong S. Sample size in factor analysis. Psychol Methods. 1999 Mar;4(1):84–99. doi: 10.1037/1082-989X.4.1.84. [DOI] [Google Scholar]
  • 44.Streiner DL, Kottner J. Recommendations for reporting the results of studies of instrument and scale development and testing. J Adv Nurs. 2014 Sep 30;70(9):1970–1979. doi: 10.1111/jan.12402. [DOI] [PubMed] [Google Scholar]
  • 45.Artino AR, La Rochelle JS, Dezee KJ, Gehlbach H. Developing questionnaires for educational research: AMEE Guide No 87. Med Teach. 2014 Jun;36(6):463–74. doi: 10.3109/0142159X.2014.889814. http://europepmc.org/abstract/MED/24661014 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Holtz B, Mitchell K, Hirko K, Ford S. Using the technology acceptance model to characterize barriers and opportunities of telemedicine in rural populations: survey and interview study. JMIR Form Res. 2022 Apr 15;6(4):e35130. doi: 10.2196/35130. https://formative.jmir.org/2022/4/e35130/ v6i4e35130 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Nadal C, Sas C, Doherty G. Technology acceptance in mobile health: scoping review of definitions, models, and measurement. J Med Internet Res. 2020 Jul 06;22(7):e17256. doi: 10.2196/17256. https://www.jmir.org/2020/7/e17256/ v22i7e17256 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.An MH, You SC, Park RW, Lee S. Using an extended technology acceptance model to understand the factors influencing telehealth utilization after flattening the COVID-19 curve in South Korea: cross-sectional survey study. JMIR Med Inform. 2021 Jan 08;9(1):e25435. doi: 10.2196/25435. https://medinform.jmir.org/2021/1/e25435/ v9i1e25435 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Eysenbach G. Improving the quality of web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) J Med Internet Res. 2004 Sep 29;6(3):e34. doi: 10.2196/jmir.6.3.e34. http://www.jmir.org/2004/3/e34/ v6e34 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Choi BCK, Pak AWP. A catalog of biases in questionnaires. Prev Chronic Dis. 2005 Jan;2(1):A13. https://europepmc.org/abstract/MED/15670466 .A13 [PMC free article] [PubMed] [Google Scholar]
  • 51.Rao A, Pang M, Kim J, Kamineni M, Lie W, Prasad AK, Landman A, Dreyer KJ, Succi MD. Assessing the Utility of ChatGPT throughout the entire clinical workflow. medRxiv. Preprint posted online on Feb 26, 2023. doi: 10.1101/2023.02.21.23285886. doi: 10.1101/2023.02.21.23285886.2023.02.21.23285886 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Thirunavukarasu AJ, Hassan R, Mahmood S, Sanghera R, Barzangi K, El Mukashfi M, Shah S. Trialling a large language model (ChatGPT) in general practice with the applied knowledge test: observational study demonstrating opportunities and limitations in primary care. JMIR Med Educ. 2023 Apr 21;9:e46599. doi: 10.2196/46599. https://mededu.jmir.org/2023//e46599/ v9i1e46599 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Karabacak M, Ozkara BB, Margetis K, Wintermark M, Bisdas S. The advent of generative language models in medical education. JMIR Med Educ. 2023 Jun 06;9:e48163. doi: 10.2196/48163. https://mededu.jmir.org/2023//e48163/ v9i1e48163 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Sabry Abdel-Messih M, Kamel Boulos MN. ChatGPT in clinical toxicology. JMIR Med Educ. 2023 Mar 08;9:e46876. doi: 10.2196/46876. https://mededu.jmir.org/2023//e46876/ v9i1e46876 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Benoit J. ChatGPT for Clinical Vignette Generation, Revision, and Evaluation. medRxiv. Preprint posted online on Feb 8, 2023. doi: 10.1101/2023.02.04.23285478. [DOI] [Google Scholar]
  • 56.Antaki F, Touma S, Milad D, El-Khoury J, Duval R. Evaluating the performance of ChatGPT in ophthalmology: an analysis of its successes and shortcomings. medRxiv. Preprint posted online on Jan 26, 2023. doi: 10.1101/2023.01.22.23284882. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, Elepaño Camille, Madriaga M, Aggabao R, Diaz-Candido G, Maningo J, Tseng V. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023 Feb 9;2(2):e0000198. doi: 10.1371/journal.pdig.0000198. https://europepmc.org/abstract/MED/36812645 .PDIG-D-22-00371 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Mbakwe AB, Lourentzou I, Celi LA, Mechanic OJ, Dagan A. ChatGPT passing USMLE shines a spotlight on the flaws of medical education. PLOS Digit Health. 2023 Feb 9;2(2):e0000205. doi: 10.1371/journal.pdig.0000205. https://europepmc.org/abstract/MED/36812618
  • 59.Takagi S, Watari T, Erabi A, Sakaguchi K. Performance of GPT-3.5 and GPT-4 on the Japanese medical licensing examination: comparison study. JMIR Med Educ. 2023 Jun 29;9:e48002. doi: 10.2196/48002. https://mededu.jmir.org/2023//e48002/
  • 60.Giannos P, Delardas O. Performance of ChatGPT on UK standardized admission tests: insights from the BMAT, TMUA, LNAT, and TSA examinations. JMIR Med Educ. 2023 Apr 26;9:e47737. doi: 10.2196/47737. https://mededu.jmir.org/2023//e47737/
  • 61.Shahsavar Y, Choudhury A. The role of AI chatbots in healthcare: a study on user intentions to utilize ChatGPT for self-diagnosis. JMIR Preprints. Preprint posted online on May 9, 2023. doi: 10.2196/preprints.47564.
  • 62.Lund BD, Wang T. Chatting about ChatGPT: how may AI and GPT impact academia and libraries? Library Hi Tech News. 2023 Feb 14;40(3):26–29. doi: 10.1108/lhtn-01-2023-0009.
  • 63.Aczel B, Wagenmakers E. Transparency guidance for ChatGPT usage in scientific writing. PsyArXiv. Preprint posted online on Feb 6, 2023. doi: 10.31234/osf.io/b58ex.
  • 64.Sanmarchi F, Bucci A, Golinelli D. A step-by-step researcher's guide to the use of an AI-based transformer in epidemiology: an exploratory analysis of ChatGPT using the STROBE checklist for observational studies. medRxiv. Preprint posted online on Feb 8, 2023. doi: 10.1101/2023.02.06.23285514.
  • 65.Williams DJ, Noyes JM. How does our perception of risk influence decision-making? Implications for the design of risk information. Theor Issues Ergon Sci. 2007 Jan;8(1):1–35. doi: 10.1080/14639220500484419.
  • 66.Featherman M, Fuller M. Applying TAM to e-services adoption: the moderating role of perceived risk. Proceedings of the 36th Annual Hawaii International Conference on System Sciences; Jan 6-9, 2003; Big Island, HI.
  • 67.Savas-Hall S, Koku PS, Mangleburg T. Really new services: perceived risk and adoption intentions. Serv Mark Q. 2021 Oct 25;43(4):485–503. doi: 10.1080/15332969.2021.1994193.
  • 68.Sebastian G, George A, Jackson G. Persuading patients using rhetoric to improve artificial intelligence adoption: experimental study. J Med Internet Res. 2023 Mar 13;25:e41430. doi: 10.2196/41430. https://www.jmir.org/2023//e41430/
  • 69.Rao A, Kim J, Kamineni M, Pang M, Lie W, Succi MD. Evaluating ChatGPT as an adjunct for radiologic decision-making. medRxiv. Preprint posted online on Feb 7, 2023. doi: 10.1101/2023.02.02.23285399.
  • 70.Duong D, Solomon BD. Analysis of large-language model versus human performance for genetics questions. medRxiv. Preprint posted online on Jan 28, 2023. doi: 10.1101/2023.01.27.23285115.
  • 71.Sallam M. The utility of ChatGPT as an example of large language models in healthcare education, research and practice: systematic review on the future perspectives and potential limitations. medRxiv. Preprint posted online on Feb 21, 2023. doi: 10.1101/2023.02.19.23286155.
  • 72.Chew HSJ, Achananuparp P. Perceptions and needs of artificial intelligence in health care to increase adoption: scoping review. J Med Internet Res. 2022 Jan 14;24(1):e32939. doi: 10.2196/32939. https://www.jmir.org/2022/1/e32939/
  • 73.Lee DY, Lehto MR. User acceptance of YouTube for procedural learning: an extension of the technology acceptance model. Comput Educ. 2013 Feb;61:193–208. doi: 10.1016/j.compedu.2012.10.001.
  • 74.Alfadda HA, Mahdi HS. Measuring students' use of Zoom application in language course based on the technology acceptance model (TAM). J Psycholinguist Res. 2021 Aug;50(4):883–900. doi: 10.1007/s10936-020-09752-1. https://europepmc.org/abstract/MED/33398606
  • 75.Okonkwo CW, Ade-Ibijola A. Chatbots applications in education: a systematic review. Comput Educ Artif Intell. 2021;2:100033. doi: 10.1016/j.caeai.2021.100033.
  • 76.Lo CK. What is the impact of ChatGPT on education? A rapid review of the literature. Educ Sci. 2023 Apr 18;13(4):410. doi: 10.3390/educsci13040410.
  • 77.Bezrukova K, Griffith TL, Spell C, Rice V, Yang HE. Artificial intelligence and groups: effects of attitudes and discretion on collaboration. Group Organ Manag. 2023 Mar 03;48(2):629–670. doi: 10.1177/10596011231160574.
  • 78.Paul J, Ueno A, Dennis C. ChatGPT and consumers: benefits, pitfalls and future research agenda. Int J Consum Stud. 2023 Mar 25;47(4):1213–1225. doi: 10.1111/ijcs.12928.
  • 79.Gates B. The Age of AI has begun: artificial intelligence is as revolutionary as mobile phones and the internet. GatesNotes. [2023-04-17]. https://www.gatesnotes.com/The-Age-of-AI-Has-Begun
  • 80.Taecharungroj V. “What can ChatGPT do?” Analyzing early reactions to the innovative AI chatbot on Twitter. Big Data Cogn Comput. 2023 Feb 16;7(1):35. doi: 10.3390/bdcc7010035.
  • 81.Stewart KA, Segars AH. An empirical examination of the concern for information privacy instrument. Inf Syst Res. 2002 Mar;13(1):36–49. doi: 10.1287/isre.13.1.36.97.
  • 82.Sallam M, Salim NA, Al-Tammemi AB, Barakat M, Fayyad D, Hallit S, Harapan H, Hallit R, Mahafzah A. ChatGPT output regarding compulsory vaccination and COVID-19 vaccine conspiracy: a descriptive study at the outset of a paradigm shift in online search for information. Cureus. 2023 Feb;15(2):e35029. doi: 10.7759/cureus.35029. https://europepmc.org/abstract/MED/36819954
  • 83.Beaudry A, Pinsonneault A. The other side of acceptance: studying the direct and indirect effects of emotions on information technology use. MIS Q. 2010;34(4):689–710. doi: 10.2307/25750701.
  • 84.Şahin F, Doğan E, Okur MR, Şahin YL. Emotional outcomes of e-learning adoption during compulsory online education. Educ Inf Technol (Dordr). 2022 Feb 24;27(6):7827–7849. doi: 10.1007/s10639-022-10930-y. https://europepmc.org/abstract/MED/35228828
  • 85.Scherer R, Siddiq F, Tondeur J. The technology acceptance model (TAM): a meta-analytic structural equation modeling approach to explaining teachers’ adoption of digital technology in education. Comput Educ. 2019 Jan;128:13–35. doi: 10.1016/j.compedu.2018.09.009.
  • 86.Abdullah F, Ward R, Ahmed E. Investigating the influence of the most commonly used external variables of TAM on students’ Perceived Ease of Use (PEOU) and Perceived Usefulness (PU) of e-portfolios. Comput Hum Behav. 2016 Oct;63:75–90. doi: 10.1016/j.chb.2016.05.014.
  • 87.Songkram N, Chootongchai S, Osuwan H, Chuppunnarat Y, Songkram N. Students' adoption towards behavioral intention of digital learning platform. Educ Inf Technol (Dordr). 2023 Feb 22:1–23. doi: 10.1007/s10639-023-11637-4. https://europepmc.org/abstract/MED/36846495
  • 88.Balaskas S, Panagiotarou A, Rigou M. The influence of trustworthiness and technology acceptance factors on the usage of e-government services during COVID-19: a case study of post COVID-19 Greece. Adm Sci. 2022 Sep 29;12(4):129. doi: 10.3390/admsci12040129.
  • 89.AlHogail A. Improving IoT technology adoption through improving consumer trust. Technologies. 2018 Jul 07;6(3):64. doi: 10.3390/technologies6030064.
  • 90.Ray PP. ChatGPT: a comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet Things Cyber-Phys Syst. 2023;3:121–154. doi: 10.1016/j.iotcps.2023.04.003.
  • 91.Baumgartner C. The potential impact of ChatGPT in clinical and translational medicine. Clin Transl Med. 2023 Mar;13(3):e1206. doi: 10.1002/ctm2.1206. https://europepmc.org/abstract/MED/36854881
  • 92.Chang I, Shih Y, Kuo K. Why would you use medical chatbots? Interview and survey. Int J Med Inform. 2022 Sep;165:104827. doi: 10.1016/j.ijmedinf.2022.104827.
  • 93.Kasneci E, Sessler K, Küchemann S, Bannert M, Dementieva D, Fischer F, Gasser U, Groh G, Günnemann S, Hüllermeier E, Krusche S, Kutyniok G, Michaeli T, Nerdel C, Pfeffer J, Poquet O, Sailer M, Schmidt A, Seidel T, Stadler M, Weller J, Kuhn J, Kasneci G. ChatGPT for good? On opportunities and challenges of large language models for education. Learn Individ Differ. 2023 Apr;103:102274. doi: 10.1016/j.lindif.2023.102274.
  • 94.Shorey S, Ang E, Yap J, Ng ED, Lau ST, Chui CK. A virtual counseling application using artificial intelligence for communication skills training in nursing education: development study. J Med Internet Res. 2019 Oct 29;21(10):e14658. doi: 10.2196/14658. https://www.jmir.org/2019/10/e14658/
  • 95.Zhuo T, Huang Y, Chen C, Xing Z. Exploring AI ethics of ChatGPT: a diagnostic analysis. arXiv. Preprint posted online on Feb 22, 2023. doi: 10.48550/arXiv.2301.12867.
  • 96.Hu B, Mao Y, Kim KJ. How social anxiety leads to problematic use of conversational AI: the roles of loneliness, rumination, and mind perception. Comput Hum Behav. 2023 Aug;145:107760. doi: 10.1016/j.chb.2023.107760.
  • 97.Jeong D, Aggarwal S, Robinson J, Kumar N, Spearot A, Park DS. Exhaustive or exhausting? Evidence on respondent fatigue in long surveys. J Dev Econ. 2023 Mar;161:102992. doi: 10.1016/j.jdeveco.2022.102992.

Associated Data

Supplementary Materials

Multimedia Appendix 1

Supplementary tables and figures.

mededu_v9i1e48254_app1.docx (128.7KB, docx)
