BMC Medical Education
2025 Dec 17;26:133. doi: 10.1186/s12909-025-08453-4

The association between artificial intelligence and health specialty students’ critical thinking, clinical competency, and willingness to practice human interaction skills

Raneem A Hamdan-Mansour 1, Farah A Abu Hardan 1, Mohammad A Alayasrah 2, Huda M Al Nabulsi 2, Mirna Fawaz 3, Ayman M Hamdan-Mansour 4, Rama Basim AboHanana 5
PMCID: PMC12838490  PMID: 41408557

Abstract

Background

The use of AI in higher education is accelerating, underscoring the need to further investigate the impact of AI on health specialty students’ academic performance in clinical care settings.

Purpose

The purpose of this study is to assess the relationship between artificial intelligence applications and health specialty students’ critical thinking, clinical competency, and perception of their willingness to practice human interaction skills with their patients.

Methods

A total of 542 health specialty students from Jordanian universities participated in this cross-sectional correlational study. Four instruments were used for data collection: the AI Attitude Scale (AIAS-4), the Critical Thinking Scale, the Clinical Competency Scale, and the Communication Assessment Tool (CAT). The instruments were translated into Arabic using the forward-backward procedure. Descriptive and inferential analyses were performed using SPSS version 28.

Result

Most health specialty students (60.0%) held positive attitudes toward using AI in clinical education and reported high levels of agreement that AI assisted their critical thinking, general skills, and clinical competency, and a moderate level of agreement that AI enabled them to use effective therapeutic communication skills. AI use and utility correlated significantly (p < .05) and positively with clinical competency, critical thinking, and effective use of communication skills, with correlation magnitudes ranging from r = .15 (effective use of communication skills) to r = .42 (critical thinking-general skills). All standardized regression models of AI use on critical thinking, clinical competency, and therapeutic use of effective communication skills were significant (p < .05); however, the R² values were small in magnitude, ranging from 0.024 (2.4%) for communication skills to 0.173 (17.3%) for critical thinking-general skills.

Conclusion

Health specialty students’ perceptions of AI use were positively associated with critical thinking and perceived clinical competency, underscoring the need to integrate AI applications into health curricula and pedagogy.

Keywords: Artificial intelligence; Attitudes; Clinical competency; Communication skills; Critical thinking; Health specialty students

Introduction

Artificial intelligence (AI) is rapidly revolutionizing the healthcare scene, providing solutions to improve diagnosis, treatment, and patient outcomes. Several recent studies have underlined the importance of establishing confidence in AI applications [1, 2] to ensure ethical use and seamless integration of AI into clinical practice. AI plays an important role in modern healthcare by improving diagnostic accuracy, optimizing treatment plans, and increasing operational efficiency; however, physicians have reported that AI had a negative effect on trust with their patients [3]. The use of AI in higher education is increasing, with considerable data indicating students’ reliance and dependence on AI [4]. Monitoring attendance, easing difficulties in learning, and supporting instructor performance are among the benefits that should be recognized. While the application of AI has the potential to improve student engagement, ethical concerns and potential negative effects must be addressed [5]. This calls into question students’ adequate and efficient use of AI from both academic and ethical perspectives.

According to the literature, several AI applications are used to generate solutions to inquiries and to help students generate ideas [6, 7]. Students also use AI to tutor them through engagement and to receive feedback on their essays and queries [8]. Such reports on the value of AI in higher education call for additional research from a broader perspective. The literature has also examined the predicted negative impact of relying on AI in classrooms, although Kasneci et al. [8] indicated that deploying AI among students promoted critical thinking and English language ability for non-English-speaking students. Critical thinking and clinical competency are critical abilities for health specialty students, since they directly affect their ability to provide safe patient care as independent health professionals [9, 10]. These skills improve problem-solving, decision-making, and adaptability in clinical practice, particularly in high-pressure settings. According to research, students with strong critical thinking skills are more confident and better prepared for real-world healthcare settings [9].

Several factors influence clinical competency, including the quality of the learning environment and practical training [10]. An organized clinical setting and constructive feedback from experienced mentors considerably improve students’ confidence and performance. Furthermore, there is a strong relationship between critical thinking and self-esteem, with more confident students engaging in analytical thinking and making better clinical decisions [9]. To promote the development of these skills, educators should integrate active learning methodologies, standardized assessment tools, and organized clinical training programs [11]. As previously stated, the adoption of specific AI models, such as ChatGPT, helped improve critical thinking skills among students in general [11].

Artificial intelligence has a huge impact on human interactions, where technology may both strengthen and disrupt emotional ties. When AI is seen as conscious, it inspires trust; however, it may interfere with human connections [12]. AI tools such as chatbots boost efficiency in a variety of fields, but their lack of emotional intelligence may decrease the quality of interactions, particularly those between patients and doctors [13]. Akpan et al. [13] argue that increased reliance on AI may also limit decision-making abilities and increase privacy concerns. It is recommended that healthcare students combine AI integration with empathetic care in order to foster human relationships and avoid overreliance on technology in clinical decisions.

The extent to which health specialty students use artificial intelligence (AI), and its impact on their competency as future health professionals, demonstrates the need for this study. Understanding the influence of AI on health specialty students is important for improving clinical competency. Akpan et al. [13] showed that while AI has the potential to improve critical thinking by offering data analysis and evidence-based suggestions, over-reliance on AI may impair students’ capacity to think independently and make clinical decisions under pressure. This highlights the need for a balanced approach in which AI is integrated into healthcare education without compromising the human connection. Balancing technology with interpersonal skills is crucial to preparing students to provide effective and compassionate patient care. Therefore, the purpose of this study is to assess the relationship between artificial intelligence applications and health specialty students’ critical thinking, clinical competency, and perception of their willingness to practice human interaction skills with their patients. The main research questions are:

  1. What is the association between artificial intelligence applications and health specialty students’ critical thinking, clinical competency, and perception of their willingness to practice human interaction skills with their patients?

  2. How do health specialty students differ in their perception of the level of artificial intelligence use in medical education in relation to selected sociodemographic and academic factors?

Methods

Design

This study used a cross-sectional correlational design to identify the association between artificial intelligence applications and health specialty students’ competencies. Data were collected from health specialty students at Jordanian universities using a structured online self-report questionnaire administered via Google Forms.

Setting

Data were collected from Jordanian universities. Jordan’s higher education system includes 10 public universities, 18 private universities, and one regional university. As of 2024–2025, the estimated number of students enrolled in health specialties at Jordanian universities is 20,589, with almost equal representation of males and females across the three main regions of Jordan: North, Center, and South.

Sample and sampling

A convenience sampling technique was used to recruit students from Jordanian universities. Inclusion criteria were: (1) being regularly enrolled in a health specialty program at one of the Jordanian universities, (2) the ability to read, write, and understand Arabic, as it is the language used in the survey, and (3) access to an electronic device such as a smartphone or computer, as the survey was designed using Google Forms. The exclusion criterion was (1) any physical or cognitive impairment that restricts the participant’s ability to answer the questions. The required sample size was calculated using G*Power 3.1.10 software [14] for linear multiple regression analysis, with a small effect size of 0.02, alpha = 0.05, power = 0.80, and 4 predictors, yielding a required sample of 485 students. In this study, we collected data from 542 students, which enhanced the power of the study.
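The a priori sample-size computation described above can be sketched in Python. This is an illustrative reimplementation of the kind of calculation G*Power performs for the overall F test in multiple regression, not the software itself; the exact N returned depends on the noncentrality convention used, so it may differ from the figure reported in the text.

```python
# Sketch of an a-priori sample-size calculation for multiple linear
# regression (f^2 = 0.02, alpha = .05, power = .80, 4 predictors),
# mirroring the overall F-test computation. Illustrative only.
from scipy.stats import f, ncf

def regression_power(n, f2=0.02, alpha=0.05, n_predictors=4):
    """Power of the overall F test in multiple regression with n subjects."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    crit = f.ppf(1 - alpha, df1, df2)        # critical F under H0
    lam = f2 * n                             # noncentrality parameter
    return 1 - ncf.cdf(crit, df1, df2, lam)  # power under H1

def required_n(f2=0.02, alpha=0.05, n_predictors=4, power=0.80):
    """Smallest n whose overall F test reaches the requested power."""
    n = n_predictors + 2
    while regression_power(n, f2, alpha, n_predictors) < power:
        n += 1
    return n
```

By construction, `required_n()` returns the first sample size whose computed power reaches 0.80.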

Data collection

This study used a web-based format of data collection. Data collection started after ethical approval was obtained from the IRB of the University of Jordan. The research conforms to the provisions of the Declaration of Helsinki of 1964 (as revised in Brazil, 2013). Data were collected from health specialty students at Jordanian universities using an online questionnaire (Google Forms). The survey was distributed as a link and QR code that required a smartphone or other electronic device. The front page included an invitation to the study and information regarding its purpose and significance, confidentiality, anonymity, and the voluntary nature of participation. Those who showed interest had to indicate their approval to participate, which transferred them directly to the survey. The survey took about 10 min to fill out. All data were saved to a password-protected computer file.


Instruments

Data were collected using the Arabic version of an online self-administered questionnaire. Translation from English to Arabic was performed following the WHO guidelines for forward-backward translation. Bilingual experts performed the translation and back-translation using a blind approach. The tools were:

Use of artificial intelligence was measured using the AI Attitude Scale (AIAS-4) [12]. The scale is designed to measure public attitudes toward artificial intelligence (AI) and originally comprises four items. The scale was adapted to health specialty students, and six additional items were added separately to address use of, willingness to use, and the ethical-moral relationship of using AI among health specialty students. Students responded on a 10-point Likert scale (1 = Not at all, 10 = Completely agree). The scale focuses on assessing perceived utility and its potential relationship to students’ academic life. A sample item is: “Artificial intelligence will benefit humankind.”. The scale also contains reverse-scored items, such as “AI makes me feel uneasy.”. Higher scores indicate more positive attitudes toward AI. The scale demonstrated adequate internal consistency, with a Cronbach’s alpha of 0.902 and a McDonald’s omega of 0.904. In this study, after translation, the scale demonstrated adequate internal consistency, with a Cronbach’s alpha of 0.85. The original tool is validated, but the Arabic version translated for this study is not.
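The internal-consistency figures quoted for this scale can be reproduced with a short routine. This is a minimal sketch, assuming item responses arrive as a respondents-by-items matrix and that negatively worded items (e.g., “AI makes me feel uneasy.”) are reverse-scored before alpha is computed; the data layout and function names are illustrative.

```python
# Minimal sketch of Cronbach's alpha with reverse-scoring of negatively
# worded items on a 1-10 Likert scale. Hypothetical data layout.
import numpy as np

def reverse_score(item, low=1, high=10):
    """Reverse-score a negatively worded Likert item."""
    return (low + high) - item

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of already-oriented scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

When every item carries the same information, alpha approaches 1; uncorrelated items drive it toward 0.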

Critical thinking was measured using the three domains of the critical thinking scale adapted from Stupple et al. [15]: critical thinking-general (3 items), creative thinking (4 items), and authentic problem-solving (5 items). The critical thinking-general scale investigates students’ awareness of the assessment of their learning process, such as making decisions, analyzing tasks, and evaluating arguments; for example, “In this class, I think about other possible ways of understanding what I am learning.”. Higher scores in this domain indicate stronger critical evaluation skills. The creative thinking (CreT) scale assesses students’ perceptions of the extent to which they generate ideas or develop new ways of doing things; for example, “In this class, I generate many new ideas.”. The higher the score in this domain, the more creativity is involved in tasks. The authentic problem-solving (APS) scale explores students’ agreement that they deal with real-life problems in their classes; for example, “In this class, I investigate the reasons that give rise to real-world problems.”. Higher scores in this domain indicate better practical problem-solving capabilities. The survey items were presented on a five-point Likert scale (1 = strongly disagree to 5 = strongly agree). The scale was adapted linguistically to reflect AI use rather than a general learning approach. In this study, the scale demonstrated adequate internal consistency, with a Cronbach’s alpha of 0.95. Although the original tool is validated, the Arabic version translated for this study is not.

Clinical competency was measured using the Clinical Competency Scale [16]. The original scale measures the core nursing competencies developed by Joo and Sohng [17] and was later adapted by Kim et al. [16]; this study utilized the adapted version. The tool consists of 30 questions assessing competencies such as professional intuition, critical thinking in the field of nursing, communication skills, nursing leadership, and respect for life. Two domains of the whole scale were used in this study, general skills and caring aspects, each comprising five items, with higher scores indicating higher clinical competency. The scale uses a 5-point Likert scale ranging from not at all (1) to very well (5). The Cronbach’s α value for the tool at the time of development was 0.91, while in this study it was 0.95. The Arabic version of this tool, translated specifically for this study, is not validated; however, the original tool is validated.

Willingness to practice human interaction skills was measured as therapeutic communication skills using the Communication Assessment Tool (CAT) [18]; the Arabic version was used in this study [19]. The CAT measures patient perceptions of communication with a health team and is a validated instrument developed to assess communication across different specialties and environments. The CAT includes 15 items on a 5-point response scale (1 = poor, 2 = fair, 3 = good, 4 = very good, 5 = excellent). Higher scores indicate better communication skills. It was originally designed to assess a patient’s perception of an individual physician’s communication effectiveness, and the scale was later developed to reflect the medical team, including nurses and non-nurses. For the purpose of this study, however, responses were limited to health specialty students’ communication skills. This was only a linguistic modification, reversing the questions from patients being asked about a health professional to students being asked about their own therapeutic communication skills, as used in our previous Arabic versions [19]. Although the Arabic version of the questionnaire is widely recognized as reliable, it did not undergo formal validity testing.

In addition, sociodemographic and academic information was collected about the students, including age, sex, academic year, academic status, number of clinical courses, etc.

Data analysis

Data were analyzed using IBM SPSS Statistics version 28 for Windows. Descriptive statistics were calculated using measures of central tendency, measures of dispersion, frequency, and percentage. A multiple linear standardized regression analysis was performed, with critical thinking, effectiveness of practicing communication skills, and clinical competency set as dependent variables, and use of and attitudes toward AI applications set as the independent variable. Normality, linearity, multicollinearity, and homoscedasticity assumptions were checked. Alpha was set at .05. The acceptable skewness range to confirm a normal distribution was between −1 and 1; the normality assumption was not violated for any dependent variable, with skewness scores within this range. The dependent variables were continuous and normally distributed, and the assumption of homogeneity of variance was met. Cohen’s guideline [20] was used to judge the magnitude of the effect size based on R²: small effect (0.00 ≤ R² < 0.09), medium effect (0.09 ≤ R² < 0.25), and large effect (R² ≥ 0.25).
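The assumption check and effect-size labelling described above can be sketched directly: sample skewness within (−1, 1) as the normality criterion, and Cohen's R² bands exactly as stated. A minimal sketch; function names are illustrative.

```python
# Sketch of the normality screen (skewness in (-1, 1)) and Cohen's
# effect-size bands for R^2, as used in this study's analysis plan.
import numpy as np
from scipy.stats import skew

def normality_ok(x, lo=-1.0, hi=1.0):
    """Accept the normality assumption if skewness falls within (lo, hi)."""
    return lo < skew(np.asarray(x, dtype=float)) < hi

def cohen_r2_label(r2):
    """Cohen's (1988) guideline for R^2 as applied in this study."""
    if r2 >= 0.25:
        return "large"
    if r2 >= 0.09:
        return "medium"
    return "small"
```

With these bands, the study's reported R² of 0.024 is "small" and 0.173 is "medium", matching the labels in the abstract.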

Results

Demographic characteristics

A total of 542 students accepted and filled out the survey that was sent online. The analysis (see Table 1) showed that the mean age was 21.7 years (SD = 2.9), ranging from 18 to 51, with a median of 21.0 years. In total, 67.2% (n = 364) were females and 32.8% (n = 178) were males. The largest group was fourth-year students (42.1%, n = 228). The vast majority were Jordanians (90.6%, n = 491), and the mean GPA was 3.2 (SD = 0.54), ranging from 2.0 to 4.0, with a median of 3.2. Of the students, 86.4% (n = 380) chose their specialty, 79.8% (n = 351) want to pursue higher education, and only 5.9% (n = 26) had received an academic punishment during their study. The number of clinical courses (on average per term) ranged from 1 (37.5%, n = 203) to 4 (9.8%, n = 53), with a mean of 1.9, and 50.0% (n = 271) of the students had two clinical courses. The number of clinical days (on average per term) ranged from 1 day per week (34.3%, n = 186) to 4 days per week (28.2%, n = 153), and 50.0% (n = 271) had 2 clinical days per week. Regarding smoking status, 22.5% (n = 99) reported themselves as current smokers; of them, 30.3% (n = 30) smoke cigarettes, 24.2% (n = 24) use electronic cigarettes, 16.2% (n = 16) smoke water-pipe, and 29.3% (n = 29) smoke more than one type (mixed) (Table 1).

Table 1.

Descriptive statistics of demographic, academic and health factors of students (N = 542)

Variable n %
Sex Male 178 32.8
Female 364 67.2
Academic year First 64 11.8
Second 88 16.2
Third 125 23.1
Fourth 228 42.1
Fifth 20 3.7
Sixth 17 3.1
Nationality Jordanian 491 90.6
Non-Jordanian 51 9.4
Major Nursing 305 56.3
Medicine 68 12.5
Pharmacy 49 9.0
Rehabilitation 45 8.3
Dentistry 20 3.7
Medical Laboratory 6 1.1
Other 49 9.0
Working status Yes 87 16.1
No 455 83.9
Smoking status Yes 82 15.1
No 460 84.9
Type of smoking Cigarettes 29 5.4
Water-pipe 21 3.9
Electronic cigarette 11 2.0
Mix 21 3.9
Have you ever dropped subject Yes 48 8.9
No 494 91.1
Number of clinical training days per week One 186 34.3
Two 134 24.7
Three 69 12.7
Four 153 28.2
Number of clinical courses per term One 203 37.5
Two 241 44.5
Three 45 8.3
Four 53 9.8

Variables of the study

Attitudes toward use of artificial intelligence in clinical education

The mean score of AI use was 50.6 (SD = 21.1), ranging from 10 to 100, with a median of 43.0; 25.0% (n = 135) had a score of 34.8 or less, indicating negative attitudes toward the use of AI in education, while 25.0% (n = 135) had a score of 69 or above, indicating positive attitudes toward using AI in education. The analysis also showed that 57.9% (n = 314) had a score of 50 or more, indicating the percentage of students with positive attitudes toward the use of AI in clinical education, compared to 42.1% (n = 228) who had a score of less than 50, indicating more likely negative attitudes toward using AI in clinical education.

Critical thinking

The analysis showed that the mean total score of critical thinking was 43.1 (SD = 9.4), ranging from 12.0 to 60.0, and that the middle 50.0% (Q1-Q3) of students scored between 37.0 and 48.0, indicating a moderate to high level of general critical thinking skills. Regarding the subscales of critical thinking (see Table 2), the critical thinking-general subscale score was high, with 50.0% of the students scoring higher than 11.0 out of a maximum of 15.0. Problem-solving skills were also high, with a median score of 18.0 out of a maximum of 20.0. The authentic skills subscale score was moderate to high, with a median of 18.0 out of a maximum of 25.0.

Table 2.

Descriptive statistics of attitudes toward using artificial intelligence in academia, critical thinking, clinical competency, and communication skills among students (N = 542)

Variable M SD Min Max Q1 Q2 Q3
Use of artificial intelligence 50.6 21.1 10.0 100.0 34.8 43.0 69.0
Critical Thinking Total 43.1 9.4 12.0 60.0 37.0 45.0 48.0
Critical thinking-general 10.9 2.7 3.0 15.0 9.0 11.0 12.0
Critical thinking-problem solving 14.5 3.4 4.0 20.0 12.0 18.0 20.0
Critical thinking- authentic skills 17.7 4.1 5.0 25.0 15.0 18.0 20.0
Clinical competency-total 33.2 9.2 10.0 50.0 29.0 34.0 40.0
Clinical competency-general skills 16.4 4.7 5.0 25.0 14.0 17.0 20.0
Clinical competency-caring skills 16.8 4.9 5.0 25.0 14.0 17.0 20.0
Communication skills 47.3 16.9 15.0 75.0 37.0 49.0 60.0
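The rows of Table 2 (M, SD, Min, Max, Q1, Q2, Q3) follow from standard descriptive statistics. A minimal sketch of how such a row can be computed, assuming the scores arrive as a numeric array; the function name is illustrative.

```python
# Sketch of one row of Table 2: mean, SD, range, and quartiles
# for a vector of scale scores.
import numpy as np

def describe(scores):
    """Return the Table 2-style summary for a score distribution."""
    scores = np.asarray(scores, dtype=float)
    q1, q2, q3 = np.percentile(scores, [25, 50, 75])
    return {"M": scores.mean(), "SD": scores.std(ddof=1),
            "Min": scores.min(), "Max": scores.max(),
            "Q1": q1, "Q2": q2, "Q3": q3}
```

The interquartile interval (Q1, Q3) is what the text uses when it reports where "the middle 50.0% of students" scored.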

Clinical competency

The mean total score of clinical competency was 33.2 (SD = 9.2), ranging from 10.0 to 50.0. The analysis also showed that 50.0% of the students had a score of 34.0 or above; given that the expected scores range from 10 to 50, this indicates that the students had a moderate to high general perception of their clinical competency. Both subscales (caring and general skills) had a median score (50.0% of the students) of 17.0; as the expected range of scores is 5.0 to 25.0, the results indicate that students had a moderate to high level of caring and general skills.

Communication skills

The mean score of students’ perception of their competency to practice therapeutic communication skills was 47.3 (SD = 16.9), with scores ranging from 15.0 to 75.0. The analysis showed that 50.0% of the students had a score of 49 or above out of a maximum of 75.0, indicating a moderate perception that AI enabled the students to practice effective therapeutic communication skills (Table 2).

Bivariate analysis

To assess the correlations among the variables of the study, Pearson’s r was used. The analysis (see Table 3) showed that AI use and utility correlated significantly (p < .05) and positively with all other variables (competency, critical thinking, and communication skills), with magnitudes ranging from r = .15 with perception of competency in using effective communication skills to r = .42 with the critical thinking-general subscale. Furthermore, perception of competency in using effective communication skills also correlated positively and significantly (p < .05) with all other variables, ranging from r = .36 with the critical thinking-general subscale to r = .58 with the general skills domain of clinical competency. Similarly, the clinical competency subscales were positively and significantly (p < .05) correlated with critical thinking, with magnitudes ranging from r = .53 between the problem-solving domain of critical thinking and the caring skills domain of clinical competency to r = .81 between the problem-solving domain of critical thinking and the critical thinking-general subscale (Table 3).

Table 3.

Correlation matrix of the variables of the study (N = 542)

Variable AI CS CriT-T CriT-G CriT-PS CriT-AS CC-GS CC-CS CC-T
Attitude toward AI -
Communication skills 0.154** -
CriT-total 0.392** 0.429** -
CriT-General 0.415** 0.364** 0.896** -
CriT-Problem solving 0.338** 0.394** 0.941** 0.806** -
CriT-Authentic skills 0.341** 0.448** 0.945** 0.750** 0.850** -
CC- General skills 0.297** 0.575** 0.594** 0.544** 0.533** 0.580** -
CC-Caring skills 0.340** 0.480** 0.591** 0.535** 0.531** 0.574** 0.850** -
CC-Total 0.337** 0.549** 0.617** 0.561** 0.555** 0.602** 0.961** 0.963** -
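Each cell of Table 3 is a Pearson correlation with its significance test. A minimal sketch of one such cell, using simulated stand-in scores (the real item-level data are not public, so the values below are illustrative only).

```python
# Sketch of one Table 3 cell: Pearson r between AI-attitude scores and a
# critical-thinking score, with its p-value. Simulated stand-in data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
ai_attitude = rng.normal(50.6, 21.1, size=542)                    # AIAS-style scores
critical_thinking = 0.4 * ai_attitude + rng.normal(0, 20, size=542)

r, p = pearsonr(ai_attitude, critical_thinking)
print(f"r = {r:.2f}, p = {p:.4g}")
```

With n = 542, even modest correlations (e.g., r ≈ .15) reach significance, which is why the text stresses magnitude as well as p-values.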

Regression model

To regress attitudes toward AI on the variables of the study, a standardized regression analysis model was developed for each dependent variable on the independent variable. The analysis (see Table 4) showed that attitude toward AI was a statistically significant (p < .001) predictor of critical thinking and its domains, clinical competency and its domains, and perception of practicing effective communication skills. All models were significant (p < .001); however, the R² values ranged from small (R² = 0.024, F = 13.07, Beta = 0.129) for communication skills to medium (R² = 0.173, F = 112.66, Beta = 0.184) for critical thinking-general skills (Table 4).

Table 4.

Regressing attitudes toward using artificial intelligence in academia on critical thinking, clinical competency and communication skills (N = 542)

Variable R 2 F p-value Beta p-value
Attitude toward AI Communication skills 0.024 13.07 < 0.001 0.129 < 0.001
Critical thinking -total score 0.154 98.02 < 0.001 0.184 < 0.001
Critical thinking-general skills 0.173 112.66 < 0.001 0.055 < 0.001
Critical thinking-problem solving 0.114 69.56 < 0.001 0.057 < 0.001
Critical thinking- authentic skills 0.117 71.21 < 0.001 0.070 < 0.001
Clinical competency-general skills 0.088 51.75 < 0.001 0.070 < 0.001
Clinical competency-caring skills 0.115 70.36 < 0.001 0.082 < 0.001
Clinical competency-total 0.114 68.45 < 0.001 0.154 < 0.001
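Each Table 4 row is a single-predictor standardized regression, which can be sketched by z-scoring both variables and fitting ordinary least squares. Note that with one predictor, the standardized beta equals Pearson's r and R² = r². A minimal sketch with illustrative function names and simulated stand-in data.

```python
# Sketch of a single-predictor standardized regression (Table 4 style):
# z-score both variables, fit OLS, read off beta and R^2.
import numpy as np

def standardize(x):
    """Z-score a variable."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

def simple_standardized_ols(x, y):
    """Return (beta, r_squared) for y ~ x after z-scoring both variables."""
    zx, zy = standardize(x), standardize(y)
    slope, intercept = np.polyfit(zx, zy, 1)        # OLS on z-scores
    resid = zy - (slope * zx + intercept)
    r2 = 1.0 - (resid ** 2).sum() / (zy ** 2).sum()  # zy has mean 0
    return float(slope), float(r2)
```

The beta-equals-r identity is a useful sanity check when reading single-predictor regression tables.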

Differences in attitudes toward AI in relation to variables and sociodemographics

The levels of attitudes toward using artificial intelligence (more likely positive vs. more likely negative) were assessed against the variables of the study. The analysis (see Table 5) showed significant differences (p < .05) between students with more likely positive attitudes toward using AI in academia and those with more likely negative attitudes in critical thinking and its domains and in clinical competency and its domains, with the mean scores of those with more likely positive attitudes being higher in every case. However, although students with more likely positive attitudes also had a higher mean score in their perception of the effectiveness of using communication skills, this difference was not statistically significant (p > .05).

Table 5.

Differences in levels of attitudes toward using artificial intelligence in academia in relation to critical thinking, clinical competency and communication skills (N = 542)

Variable AI Attitudes n M SD t-test p-value
CriT-total more likely negative 314 41.11 9.55 −5.99 < 0.001
more likely positive 228 45.88 8.56
CriT more likely negative 314 10.23 2.64 −6.10 < 0.001
more likely positive 228 11.76 2.40
CriT-problem solving more likely negative 314 13.83 3.43 −5.29 < 0.001
more likely positive 228 15.36 3.19
CriT- authentic skills more likely negative 314 16.98 4.18 −4.66 < 0.001
more likely positive 228 18.62 3.85
CC-general skills more likely negative 309 15.86 4.68 −3.31 0.001
more likely positive 227 17.23 4.73
CC-caring skills more likely negative 314 15.98 4.85 −4.65 < 0.001
more likely positive 228 17.90 4.63
CC-total more likely negative 309 31.82 9.19 −4.23 < 0.001
more likely positive 227 35.18 8.92
Communication skills more likely negative 314 46.48 15.97 −1.37 0.173
more likely positive 228 48.48 17.99
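The comparisons in Table 5 are independent-samples t-tests between the two attitude groups. A minimal sketch with simulated stand-ins for the groups (group sizes, means, and SDs modeled on the CriT-total row); the real score vectors are not public, so the printed statistics are illustrative.

```python
# Sketch of one Table 5 comparison: independent-samples t-test of
# CriT-total between attitude groups. Simulated stand-in data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
negative = rng.normal(41.11, 9.55, size=314)   # "more likely negative" group
positive = rng.normal(45.88, 8.56, size=228)   # "more likely positive" group

t, p = ttest_ind(negative, positive)
print(f"t = {t:.2f}, p = {p:.4g}")
```

A negative t here simply reflects the first (negative-attitude) group scoring lower, matching the sign convention in Table 5.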

Regarding selected sociodemographic characteristics, the analysis showed no statistically significant (p > .05) differences in attitudes toward the use of artificial intelligence in clinical education (more likely positive vs. more likely negative) in relation to age, sex, number of clinical courses per term, number of dropped courses, or GPA. Furthermore, no statistically significant correlation (p > .05) was found between these selected demographics and the total score of attitudes toward the use of artificial intelligence in clinical education. The analysis also showed no statistically significant association (p > .05) between levels (using chi-square) or total scores (using ANOVA) of attitudes toward AI and specialty in the health field or academic level (Table 5).

Discussion

Using AI in health specialty education is becoming an essential component of current pedagogy. This has drawn attention to the ethical and practical use of AI applications in clinical training among health specialty students. As the use of AI presents an inevitable dilemma for instructors and educational policymakers, there is a pressing need to understand how health specialty students use AI in their clinical assignments and how they perceive its usefulness. This study addressed the connection between health specialty students’ use of artificial intelligence applications and their critical thinking, clinical competency, and perception of their willingness to practice communication with their patients. We found that most health specialty students (60%) have a positive attitude toward using AI applications in clinical education, which aligns with recent literature [21]. Another study, which surveyed 702 medical students, indicated that 83.3% had favorable attitudes toward AI [22]; these favorable attitudes were influenced by services such as providing up-to-date medical information and optimizing study time. Such a positive attitude could be explained and supported by the results of this study. In other words, the study found that students had a moderate to high level of general critical thinking skills, high problem-solving skills, and a moderate to high level of caring skills, general skills, and willingness to practice communication with their patients. In addition, all these factors were positively associated with attitudes toward AI use among students. This aligns with a recent study which argued that ChatGPT, an AI tool, enhances critical thinking [23]. It is emphasized in the literature that caution plays a key role in influencing critical thinking abilities [24].
Using AI, therefore, could have contributed to improving such skills among students, who used an application to enhance skills that are essential for their clinical training. In line with our results, the literature states that using AI may contribute to improving a wide range of skills among students, including their ability to solve problems and generate new ideas [6, 7]. This study adds that AI could also have assisted health specialty students in finding solutions to their clinical problems and enhancing their critical analysis. Furthermore, although students had a moderate level of perception that AI assisted them in practicing effective therapeutic communication skills with patients, this relationship was not clinically significant with respect to their attitudes toward using AI, as the magnitude of the correlation was very low.

The study did not merely aim to establish an association between attitudes toward AI use among health specialty students; it also tested whether AI use predicts essential skills such as critical thinking, problem solving, clinical competency, and willingness to practice effective communication skills. The results showed that only general skills and clinical competency were minimally predicted by AI use. The small effect sizes (small R²) for willingness to practice effective communication skills and for clinical competency-general skills warrant cautious interpretation and application to real practice despite their statistical significance. The other skills (critical thinking-problem solving, clinical competency-total, clinical competency-caring skills, critical thinking-authentic skills, critical thinking-total score, critical thinking-ss) showed medium effect sizes, implying greater opportunity to emphasize them when implementing improved educational strategies. One explanation could be the limited use of AI applications in clinical settings, as the rules and regulations of health specialty schools strictly monitor students’ use of AI during clinical time. A recent study of medical students reported that 53.2% of participants believed ChatGPT would improve their learning [25], yet only 9.4% used AI in a clinical setting. Students probably used AI during home study and off-campus time. This explanation was supported by our finding of no significant difference in AI use across particular sociodemographic and academic factors such as age, sex, number of clinical courses per term, number of dropped courses, and GPA. The results may indicate that AI applications serve personal learning purposes, as a complementary assistive tool for addressing clinical assignments and problems encountered during clinical training time.
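The small/medium distinction drawn here follows Cohen’s conventions [20], under which a regression R² maps to an effect size f² = R² / (1 − R²), with thresholds of roughly 0.02 (small), 0.15 (medium), and 0.35 (large). A minimal sketch (illustrative only, not the study’s analysis code) applying this to the R² values reported in the abstract:

```python
def cohens_f2(r_squared: float) -> float:
    """Cohen's f^2 effect size for a regression model's R^2."""
    return r_squared / (1.0 - r_squared)

def classify_f2(f2: float) -> str:
    """Label an f^2 value using Cohen's (1988) conventional thresholds."""
    if f2 < 0.02:
        return "negligible"
    elif f2 < 0.15:
        return "small"
    elif f2 < 0.35:
        return "medium"
    return "large"

# R^2 values reported in this study:
for outcome, r2 in [("communication skills", 0.024),
                    ("critical thinking-general skills", 0.173)]:
    f2 = cohens_f2(r2)
    print(f"{outcome}: f^2 = {f2:.3f} ({classify_f2(f2)})")
```

Running this reproduces the paper’s classification: R² = 0.024 yields a small effect, while R² = 0.173 crosses the medium threshold.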
The results support previous reports that university students use AI to obtain formative feedback on their essays and questions [8]. In our study, health specialty students likely used AI to verify the accuracy of their clinical interpretations and proposed solutions to patients’ problems. This also explains the low level of perception that AI assists them in establishing effective communication skills: communication with patients takes place in the clinical setting, where students cannot turn to AI for assistance. Another explanation is that communication, as a skill, is relational and contextual; skills in this domain are better honed through real human interaction and observation, an argument supported by Situated Learning Theory [26]. The positive relation of critical thinking to AI, noted above, can in turn be linked to Cognitive Load Theory [27]: Sweller argues that offloading lower-level cognitive tasks can free the learner’s time and energy for higher-level cognitive tasks. The results partially agree with previous reports that AI helps students and healthcare professionals engage more empathetically and foster trust, while diminishing human-to-human interaction [12]. Jackson et al. [28] argued that although AI brings advantages such as decision accuracy, it threatens the humanistic relationship between physician and patient. Students in this study only minimally described AI as a helpful tool for enhancing communication skills with their patients.

Akpan et al. [13] argue that AI lacks emotional intelligence, which hinders deep human interaction. Related concerns include diminished decision-making abilities and heightened privacy risks [13, 21]. How health specialty students use AI raises ethical questions, including how patients’ information is protected when students access AI through personal accounts. Aside from privacy, over-reliance is another ethical challenge to address. It is possible that skills, whether communication or critical thinking, fail to grow because of deskilling: excessive use of and over-reliance on AI may erode critical thinking skills over time [29]. As students depend heavily on AI to perform their tasks, they encounter fewer opportunities for skill growth. AI should complement, not replace, traditional clinical training, which underscores the importance of establishing guidelines and proper guidance. In addition, the lack of differences across most sociodemographic characteristics and academic-related factors should alert health education systems to the impact of incorporating AI into their pedagogy and curricula. Given the positive association of AI with students’ clinical competency and critical thinking skills, enhancing the legitimate use of AI in health specialty education is warranted.

Limitations

The study has several limitations. Self-administered questionnaires may introduce response, recall, and social desirability bias, as they depend on participants’ own reporting. The cross-sectional design precludes causal inference, and convenience sampling limits the generalizability of the findings. Another limitation concerns the lack of more objective measures complementing the self-reported AIA scale, such as the impact on clinical decision making and patient safety. Although the scales demonstrated strong internal consistency and were originally validated, they did not undergo formal validity testing after forward-backward translation. To address these limitations, future studies should consider longitudinal designs, more diverse samples, and validation of the translated instruments.

Conclusion

The association between AI use and health specialty students’ skills was found to be positive for critical thinking and clinical competency, while only a minimal association was detected between AI use and students’ willingness to practice effective communication skills. The study also highlighted the lack of differences among students in relation to their sociodemographic and academic characteristics. These findings have implications for academic bodies, instructors, and guidelines. There is a need to integrate AI as a legitimate learning tool for students in both theory and clinical courses; such integration would allow monitoring of the ethical use of data generated by AI applications. However, the integration must be balanced and careful, leaving room for the growth of human and relational skills. Requiring students to declare their use of AI applications will also facilitate learning and minimize the academic burden on instructors.

Acknowledgements

The authors would like to thank all the participants of the study.

Patient consent statement

Written consent was obtained from all participants.

Clinical trial number

Not applicable.

Authors’ contributions

- **Conception and design of the study**: Raneem Hamdan-Mansour, Ayman Hamdan-Mansour, Mirna Fawaz
- **Data curation**: Raneem Hamdan-Mansour, Farah Abu Hardan, Mohammad Alayasrah, Huda M. Al Nabulsi
- **Drafting the manuscript**: Raneem Hamdan-Mansour, Mirna Fawaz, Ayman Hamdan-Mansour
- **Editing and revising the manuscript**: Raneem Hamdan-Mansour, Mirna Fawaz, Ayman Hamdan-Mansour
- **Approval of the final version of the manuscript**: Raneem A. Hamdan-Mansour, Farah A. Abu Hardan, Mohammad A. Alayasrah, Huda M. Al Nabulsi, Mirna Fawaz, Ayman M. Hamdan-Mansour, Rama Basim AboHanana

Funding statement

The authors received no financial support for the research, authorship, and publication of this article.

Data availability

The datasets generated and/or analyzed during the current study are not publicly available because they are confidential, but they are available from the corresponding author on reasonable request.

Declarations

Ethics approval and consent to participate

The research conforms to the provisions of the Declaration of Helsinki in 1995 (as revised in Brazil 2013). Ethical approval was obtained from the Ethical Committee at the University of Jordan. Informed consent forms were obtained from all participants. All authors approved the final version and agreed to be accountable for all aspects of the work.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Al Kuwaiti A, Nazer K, Al-Reedy A, Al-Shehri S, Al-Muhanna A, Subbarayalu AV, Al-Muhanna D, Al-Muhanna FA. A review of the role of artificial intelligence in healthcare. J Pers Med. 2023;13(6):951. 10.3390/jpm13060951. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Bajwa J, Munir U, Nori A, Williams B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J. 2021;8(2):e188–94. 10.7861/fhj.2021-0095. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Civaner MM, Uncu Y, Bulut F, Chalil EG, Tatli A. Artificial intelligence in medical education: a cross-sectional needs assessment. BMC Med Educ. 2022;22(1):772. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Popenici SA, Kerr S. Exploring the impact of artificial intelligence on teaching and learning in higher education. Res Pract Technol Enhanced Learn. 2017;12(1):22. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Lee H. The rise of ChatGPT: exploring its potential in medical education. Anat Sci Educ. 2024;17(5):926–31. [DOI] [PubMed] [Google Scholar]
  • 6.AlAfnan MA, Dishari S, Jovic M, Lomidze K. ChatGPT as an educational tool: opportunities, challenges, and recommendations for communication, business writing, and composition courses. J Artif Intell Technol. 2023;3(2):60–8. [Google Scholar]
  • 7.Farrokhnia M, Banihashem SK, Noroozi O, Wals A. A SWOT analysis of ChatGPT: implications for educational practice and research. Innovations Educ Teach Int. 2024;61(3):460–74. [Google Scholar]
  • 8.Kasneci E, Seßler K, Küchemann S, Bannert M, Dementieva D, Fischer F, Kasneci G. ChatGPT for good? On opportunities and challenges of large language models for education. Learn Individual Differences. 2023;103:102274. [Google Scholar]
  • 9.Hng SH, Brenda BJ, Ngu ZL. Critical thinking dispositions and Self-esteem among nursing students. Med Health. 2020;15(2):9–16. 10.17576/MH.2020.1502.03. [Google Scholar]
  • 10.Zelesniack E, Oubaid V, Harendza S. Advanced undergraduate medical students’ perceptions of basic medical competences and specific competences for different medical specialties – a qualitative study. BMC Med Educ. 2022;22:590. 10.1186/s12909-022-03606-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.El-hessewi GMS, Almoayad F, Mahboub S, et al. Psychological distress and its risk factors during COVID-19 pandemic in Saudi Arabia: a cross-sectional study. Middle East Curr Psychiatry. 2021;28(1):7. 10.1186/s43045-021-00089-6. [Google Scholar]
  • 12.Grassini S. Development and validation of the AI attitude scale (AIAS-4): a brief measure of general attitude toward artificial intelligence. Front Psychol. 2023;14:1191628. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Akpan IJ, Kobara YM, Owolabi J, Akpan AA, Offodile OF. Conversational and generative artificial intelligence and human–chatbot interaction in education and research. Int Trans Oper Res. 2025;32(3):1251–81. 10.1111/itor.13522. [Google Scholar]
  • 14.Faul F, Erdfelder E, Buchner A, Lang AG. G*Power Version 3.1.7 [computer software]. Universität Kiel, Germany; 2013.
  • 15.Stupple EJ, Maratos FA, Elander J, Hunt TE, Cheung KY, Aubeeluck AV. Development of the critical thinking toolkit (CriTT): A measure of student attitudes and beliefs about critical thinking. Think Skills Creativity. 2017;23:91–100. [Google Scholar]
  • 16.Kim BY, Chae MJ, Choi YO. Reliability and validity of the clinical competency scale for nursing students. J Korean Acad Community Health Nurs. 2018;29(2):220–30. [Google Scholar]
  • 17.Joo GE, Sohng KY. Development of nursing competence scale for graduating nursing students. J Korean Public Health Nurs. 2014;28(3):590–604. 10.5932/JKPHN.2014.28.3.590. [Google Scholar]
  • 18.Makoul G, Krupat E, Chang CH. Measuring patient views of physician communication skills: development and testing of the communication assessment tool. Patient Educ Couns. 2007;67:333–42. [DOI] [PubMed] [Google Scholar]
  • 19.Marmash L, Hamdan-Mansour A, Elian R, Hiarat S. Differences in perception between nurses and patients in Jordanian nurses’ effectiveness in practicing communication skills. Jordan Med J. 2012;46(2).
  • 20.Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum; 1988. [Google Scholar]
  • 21.Duan S, Liu C, Rong T, Zhao Y, Liu B. Integrating AI in medical education: A comprehensive study of medical students’ attitudes, concerns, and behavioral intentions. BMC Med Educ. 2025;25:599. 10.1186/s12909-025-07177-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Sami A, Tanveer F, Sajwani K, et al. Medical students’ attitudes toward AI in education: Perception, effectiveness, and its credibility. BMC Med Educ. 2025;25:82. 10.1186/s12909-025-06704-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Suriano R, Plebe A, Acciai A, Fabio RA. Student interaction with ChatGPT can promote complex critical thinking skills. Learn Instruction. 2025;95:102011. 10.1016/j.learninstruc.2024.102011. [Google Scholar]
  • 24.Fabio RA, Plebe A, Suriano R. AI–based chatbot interactions and critical thinking skills: an exploratory study. Curr Psychol. 2025;44:8082–95. 10.1007/s12144-024-06795-8. [Google Scholar]
  • 25.Alkhaaldi SMI, Kassab CH, Dimassi Z, Oyoun Alsoud L, Al Fahim M, Al Hageh C, Ibrahim H. Medical student experiences and perceptions of ChatGPT and artificial intelligence: cross-sectional study. JMIR Med Educ. 2023;9:e51302. 10.2196/51302. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Lave J, Wenger E. Situated learning: legitimate peripheral participation. Cambridge University Press; 1991.
  • 27.Sweller J. Cognitive load during problem solving: effects on learning. Cogn Sci. 1988;12(2):257–85. 10.1207/s15516709cog1202_4. [Google Scholar]
  • 28.Jackson P, Ponath Sukumaran G, Babu C, et al. Artificial intelligence in medical education — perception among medical students. BMC Med Educ. 2024;24:804. 10.1186/s12909-024-05760-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Komasawa N, Yokohira M. Generative artificial intelligence (AI) in medical education: A narrative review of the challenges and possibilities for future professionalism. Cureus. 2025;17(6):e86316. 10.7759/cureus.86316. [DOI] [PMC free article] [PubMed] [Google Scholar]



Articles from BMC Medical Education are provided here courtesy of BMC
