Abstract
Background
Artificial Intelligence (AI) is increasingly integrated into healthcare, yet its application in physical therapy remains limited. Unlike other medical fields, physical therapy relies heavily on hands-on assessments and individualized clinical reasoning, which may shape unique adoption challenges. Understanding physical therapists’ perceptions of AI-powered diagnostic tools is essential for supporting their effective and ethical implementation.
Objectives
This study explored physical therapists’ perception of AI-powered diagnostic tools and identified key factors influencing their adoption attitudes.
Method
We conducted semi-structured interviews with 27 licensed physical therapists in Saudi Arabia, representing diverse clinical settings and specialties. Transcripts were analyzed thematically to capture perceptions, barriers, and enablers of AI integration.
Results
A total of 27 licensed physical therapists participated (63% female; mean age 29 years, range 24–38; clinical experience ranging from 1 to over 7 years). Participants demonstrated varied perspectives. Ten (37%) emphasized AI’s potential to improve diagnostic accuracy, treatment planning, and workflow efficiency, while seven (26%) expressed caution about overreliance and limited insight. Ethical concerns were common, with 12 (44%) citing patient data privacy and 5 (19%) highlighting cultural sensitivities in female patient care. Barriers to adoption were identified by 14 (52%), including cost, workload, time, and space limitations. Training needs were also emphasized, with 9 (33%) calling for structured workshops and 6 (22%) noting gaps in foundational AI literacy. Overall, most participants viewed AI as a complementary tool rather than a replacement for clinical judgment.
Conclusion
Physical therapists in Saudi Arabia recognize the potential benefits of AI-powered diagnostic tools but remain cautious due to ethical, educational, and systemic challenges.
Clinical implications
Addressing barriers through structured training, ethical guidelines, and supportive policies can foster responsible adoption of AI in rehabilitation practice.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12909-025-08361-7.
Keywords: Artificial intelligence, Physical therapy, Qualitative research, Rehabilitation, Saudi Arabia, Technology adoption
Introduction
Artificial intelligence (AI) is reshaping the landscape of modern healthcare by offering tools that enhance diagnostic precision, optimize clinical decision-making, and increase healthcare efficiency. Through machine learning (ML), deep learning (DL), and natural language processing (NLP), AI systems can analyze large volumes of structured and unstructured data, identify patterns, and generate clinical insights that support diagnostic and prognostic evaluations [14]. AI adoption has been most visible in radiology, oncology, and cardiology, where automation of imaging and diagnostic workflows has demonstrated clear clinical benefits. These precedents are important but not fully transferable to physical therapy, which is distinguished by hands-on assessments, patient–clinician interactions, and individualized treatment planning.
The adoption of AI in medical diagnostics is accelerating, with a recent survey indicating that 75% of healthcare providers anticipate incorporating AI-driven diagnostic tools into their practices within the next five years [16]. For example, AI-based imaging systems powered by convolutional neural networks (CNNs) can identify tumors, fractures, and vascular abnormalities with accuracy comparable to or greater than that of expert radiologists [9, 10, 18].
More recent economic evaluations also indicate that AI-enabled automation contributes to measurable cost savings and improved diagnostic efficiency across multiple health systems, though most evidence remains concentrated in high-resource medical fields rather than rehabilitation. In contrast, physical therapy (PT) presents unique challenges for AI adoption. PT focuses on the evaluation and treatment of movement dysfunctions, requiring manual techniques, clinical reasoning, and therapeutic interactions. This reliance on tacit knowledge and patient engagement means that AI cannot easily replicate or replace the clinician’s role. However, AI-powered tools such as motion capture systems, wearable sensors, and computer vision-based assessment platforms are emerging as promising resources to assist therapists in objectively evaluating posture, gait, and movement patterns. These systems enable more detailed biomechanical analysis, track patient progress over time, and can even suggest corrective strategies based on real-time data [11]. Pilot projects in musculoskeletal and neurological rehabilitation have demonstrated AI’s ability to detect joint degeneration, predict stroke recovery, and monitor home exercise adherence, yet most remain in early stages of validation [4].
However, the incorporation of AI into physical therapy is not without its challenges. Clinicians may be skeptical of AI-generated recommendations, especially when the algorithm’s decision-making process is not transparent. The so-called “black box” nature of some AI models, where the internal logic is inaccessible or poorly understood, can erode trust and limit acceptance among healthcare providers [8, 13]. Concerns also persist regarding data privacy, patient consent, and the potential for algorithmic bias, particularly when datasets used for training AI models lack diversity or clinical relevance [19]. Moreover, physical therapists often work in varied settings, including outpatient clinics, hospitals, schools, and rural health centers, many of which may lack the infrastructure or technical support to implement advanced AI systems [6, 17]. These structural limitations, combined with ethical and cultural considerations, pose barriers that differ significantly from those in other clinical fields.
In addition, professional trust in AI reliability remains uncertain. Many therapists fear overreliance on opaque systems, potential erosion of autonomy, and unclear regulatory frameworks. This aligns with the Technology Acceptance Model (TAM) and diffusion-of-innovation theories, which highlight perceived usefulness, perceived risk, and readiness as critical factors influencing adoption in clinical practice.
To date, only a handful of qualitative studies have assessed physical therapists’ views on AI, most addressing general attitudes or technology use in rehabilitation rather than diagnostic integration [6, 17]. These limited studies highlight the urgent need for empirical evidence grounded in user perspectives. This study addresses this critical gap by focusing specifically on the perceptions of physical therapists in Saudi Arabia regarding AI-powered diagnostic tools.
The rationale for this study lies in the growing integration of AI into clinical settings and the recognition that successful implementation depends on clinician acceptance. As AI becomes more prevalent in healthcare delivery, physical therapists will likely encounter AI-assisted tools in various forms, from decision-support systems to motion-tracking software embedded in rehabilitation equipment. Therefore, it is critical to assess their attitudes toward these technologies and to understand the enablers and barriers they experience.
Accordingly, this study aims to explore physical therapists’ perceptions of AI-powered diagnostic tools in Saudi Arabia, focusing on four domains: (1) perceived usefulness, (2) perceived risks and ethical concerns, (3) readiness for adoption, and (4) contextual factors such as experience and workplace setting. By addressing these dimensions, the study provides evidence to guide policy, education, and implementation strategies for responsible integration of AI in physical therapy.
Methodology
Study design
This study employed a qualitative design using semi-structured interviews to explore the attitudes of physical therapists toward AI-powered diagnostic tools. This design was chosen for its suitability in capturing in-depth insights into therapists’ perceptions, concerns, and expectations within a specific timeframe, without inferring causality.
The study was conducted over a three-month period. Ethical approval was secured from the Institutional Review Board (IRB) of Princess Nourah bint Abdulrahman University (IRB Log Number: 24–0942). Participants were provided with an informed consent form outlining the study’s objectives, voluntary nature, data confidentiality measures, and the right to withdraw at any stage without penalty. Data collection was conducted through semi-structured online interviews, with each participant completing a single session. All data were securely stored in encrypted files accessible only to the research team. No personally identifiable information was collected, and participant anonymity was ensured. No participants withdrew, and no interview data were missing.
Participants
The target population for this study consisted of licensed physical therapists actively engaged in clinical practice within Saudi Arabia. According to recent estimates by the Saudi Physical Therapy Association, there are approximately 12,544 licensed practitioners across the country [12]. The study sought to capture a diverse range of perspectives by recruiting participants from various geographical regions, levels of professional experience, and clinical specialties.
Participants were eligible for inclusion if they were licensed physical therapists currently practicing in Saudi Arabia, had a minimum of one year of clinical experience, and had at least basic awareness of AI-powered diagnostic tools (direct prior use was not required). This allowed the inclusion of participants with varying levels of familiarity, including limited exposure and skepticism. Physical therapists who were not currently practicing were excluded from the study.
To ensure the representation of diverse perspectives, a stratified purposive sampling method was employed. The study population was stratified according to three key characteristics: geographic region, professional experience level (early-career, mid-career, and experienced practitioners), and clinical specialization (orthopedic, pediatric, neurological, women’s health, and sports physical therapy). This approach ensured systematic inclusion of participants across subgroups, though volunteer and self-selection bias remain possible [3].
Thirty participants were initially recruited, with six from each of the five major regions of Saudi Arabia to ensure geographic diversity. The final dataset included 27 interviews, as three did not proceed due to scheduling conflicts. Thematic saturation was assessed as the point at which no new codes or subthemes emerged across three consecutive interviews, confirmed through team debriefings and codebook stability.
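The saturation rule described above (stop once no new codes or subthemes emerge across three consecutive interviews) can be expressed as a simple check. The sketch below is purely illustrative; the code labels and interview sequence are hypothetical examples, not study data.

```python
# Illustrative sketch of the saturation rule: saturation is reached when
# `window` consecutive interviews contribute no code not already in the
# codebook. Code labels below are hypothetical, not the study's codebook.

def saturation_point(interviews, window=3):
    """Return the 1-based index of the interview at which saturation is
    reached (the last of `window` consecutive interviews adding no new
    codes), or None if saturation is never reached."""
    codebook = set()
    run = 0  # consecutive interviews contributing no new codes
    for i, codes in enumerate(interviews, start=1):
        new = set(codes) - codebook
        codebook |= new
        run = 0 if new else run + 1
        if run >= window:
            return i
    return None

interviews = [
    {"privacy", "cost"},
    {"cost", "training"},
    {"privacy"},
    {"training", "cost"},
    {"privacy", "training"},
]
print(saturation_point(interviews))  # saturation at interview 5
```

In practice, as the Methods note, such a rule is applied alongside team debriefings and codebook stability checks rather than mechanically.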
Recruitment procedures
Participants were approached through professional networks, email invitations distributed via the Saudi Physical Therapy Association, and institutional collaborations with rehabilitation centers. Where appropriate, gatekeepers such as department heads facilitated access. Recruitment materials included email invitations with study details and consent forms.
Data collection method
Data for this study were collected using a semi-structured interview guide specifically designed to explore physical therapists’ attitudes toward AI-powered diagnostic tools in clinical practice. The guide was developed to elicit comprehensive, in-depth narratives about clinicians’ familiarity with AI, its perceived influence on decision-making, ethical considerations, barriers and facilitators to adoption, and training needs.
The interview guide was structured into six key sections:
Demographic Information (age, years of professional experience, workplace setting, self-reported familiarity with AI technologies).
Understanding of AI Tools, which assessed participants’ knowledge and perceptions of AI applications in physical therapy;
Clinical Decision-Making, which explored anticipated impacts of AI on diagnostic reasoning and professional autonomy;
Ethical Considerations, focused on trust, data privacy, and clinician responsibility;
Personal Experience with AI, which invited participants to share real-world encounters with AI tools;
Adoption and Integration, addressing perceived barriers, enablers, and required resources for effective AI implementation in practice.
Sample questions included: “How do you believe AI tools could influence your decision-making processes in clinical practice?” and “Can you provide examples where AI could complement or conflict with your clinical judgment?”
Each section contained open-ended questions supplemented by prompts. Interviews were conducted in Arabic or English (based on participant preference), lasted 40–60 min, and were carried out by two trained interviewers. Interviewers underwent two calibration sessions to standardize probing strategies, practiced mock interviews, and used a shared protocol to ensure consistency.
Development and content validation
The interview guide was developed following a literature review of AI adoption themes in healthcare [5, 7, 15, 20]. Themes identified were mapped to relevant questions targeting both general and PT-specific contexts.
The expert panel (n = 3) consisted of a professor of PT, a rehabilitation technology specialist, and a qualitative methodologist. They were selected for disciplinary expertise and experience with qualitative methods. Disagreements were resolved by consensus.
Pilot testing and refinement
Pilot testing with three licensed physical therapists refined wording and flow. While valuable, this limited testing did not substantially reshape thematic domains, which is acknowledged as a limitation. The full interview guide developed for this study is provided as Supplementary Material.
Assuring trustworthiness
To enhance credibility and trustworthiness, the interviews were conducted by trained qualitative researchers using semi-structured protocols. Neutrality was maintained, with standardized training for consistency.
Member checking (returning themes to participants for feedback) was not conducted due to scheduling constraints, which is acknowledged as a limitation.
Data analysis
Demographic variables (age, gender, experience, setting) were summarized using descriptive statistics. Qualitative data were analyzed separately using thematic analysis [2].
The six phases of thematic analysis were applied, supported by NVivo (v15), which facilitated coding and theme development. The process involved (1) familiarization with the data through repeated reading of transcripts; (2) initial coding of meaningful units of text; (3) collating codes into potential themes; (4) reviewing and refining themes against the dataset; (5) defining and naming themes to reflect their conceptual essence; and (6) producing the final narrative synthesis. To ensure reliability, two researchers independently coded the transcripts, achieving substantial intercoder agreement (κ = 0.78). Discrepancies were reconciled through team discussion.
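The intercoder agreement statistic reported above (Cohen’s κ) compares observed agreement between two coders against the agreement expected by chance. A minimal worked sketch follows; the code labels are hypothetical and not drawn from the study’s codebook.

```python
# Illustrative computation of Cohen's kappa for two coders assigning
# categorical codes to the same text units. Labels are hypothetical.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: (observed - expected) / (1 - expected)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    counts_a = Counter(coder_a)
    counts_b = Counter(coder_b)
    # Chance agreement: probability both coders pick the same code at random,
    # given each coder's marginal code frequencies.
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(coder_a) | set(coder_b)
    )
    return (observed - expected) / (1 - expected)

a = ["privacy", "cost", "training", "privacy", "cost", "training"]
b = ["privacy", "cost", "training", "cost", "cost", "training"]
print(round(cohens_kappa(a, b), 2))  # 0.75
```

Values above roughly 0.6 are conventionally read as substantial agreement, which is why the study’s κ = 0.78 supports coding reliability.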
The rationale for selecting thematic analysis is as follows: it provides flexibility in exploring a wide range of perceptions without requiring adherence to a specific theoretical framework. Unlike grounded theory, which aims to build new theory from data, our study sought to capture and interpret existing attitudes and contextual factors surrounding AI adoption in physical therapy. Framework analysis was also considered, but its structured, policy-driven orientation was less suited to the exploratory nature of our research. Thematic analysis allowed us to systematically identify patterns across diverse participants while remaining sensitive to both shared and divergent perspectives. To enhance transparency, we provide illustrative examples in Supplementary Tables 1–3, showing how raw codes were clustered into sub-themes and overarching themes.
Researchers also maintained reflexive journals to record positionality and interpretive decisions. Peer debriefings with external experts enhanced credibility. Triangulation with additional data sources (e.g., observations, documents) was not performed, which is acknowledged as a limitation.
Results
A total of 38 physical therapists were initially assessed for eligibility to participate in the study. However, 11 were excluded: eight lacked experience with AI diagnostic tools, two did not meet the required years of experience, and one had an incomplete interview. Consequently, 27 participants provided consent and were interviewed, with all 27 interviews included in the final analysis (Fig. 1).
Fig. 1.
Flowchart of participants’ recruitment
The sample consisted of 27 licensed physical therapists, including 17 females (63%) and 10 males (37%), with ages ranging from 24 to 38 years. The largest age group was 24–28 years, comprising 56% of the participants. Clinical experience ranged from 1 to over 7 years: 41% had 1–2 years of experience, followed by 22% with 6–7 years, and 18.5% each with 3–5 years and over 7 years. Participants worked in private clinics (33.3%), medical centers (33.3%), academic institutes (22.2%), and governmental hospitals (11.1%). For clarity, “medical centers” referred to private facilities, while “governmental hospitals” were categorized separately as public institutions. “Academic institutes” referred to licensed therapists working as faculty members or clinicians in university-affiliated rehabilitation settings. Regarding clinical specialties, 37% practiced general physical therapy, followed by 29.6% in musculoskeletal (orthopedic) therapy, 22.2% in neuromuscular therapy, and 11.1% in pediatric physical therapy (Table 1).
Table 1.
Demographic and professional characteristics of participating physical therapists (N = 27)
| Category | Subcategory | Frequency | Percentage |
|---|---|---|---|
| Gender | Female | 17 | 63% |
| Male | 10 | 37% | |
| Age | 24–28 years | 15 | 56% |
| 29–33 years | 7 | 26% | |
| 34–38 years | 5 | 18% | |
| Experience | 1–2 years | 11 | 41% |
| 3–5 years | 5 | 18.5% | |
| 6–7 years | 6 | 22% | |
| Over 7 years | 5 | 18.5% | |
| Workplace | Private clinics | 9 | 33.3% |
| Medical centers | 9 | 33.3% | |
| Academic institutes | 6 | 22.2% | |
| Governmental Hospitals | 3 | 11.1% | |
| Specialty | Musculoskeletal therapy | 8 | 29.6% |
| Pediatric physical therapy | 3 | 11.1% | |
| Neuromuscular therapy | 6 | 22.2% | |
| General physical therapy | 10 | 37.0% |
Thematic analysis, conducted following Braun and Clarke’s six-phase framework [2], revealed five overarching themes with accompanying subthemes that reflect participants’ familiarity with, perceptions of, and concerns about AI-powered diagnostic tools. These themes provide comprehensive insights into how physical therapists in Saudi Arabia perceive the role of AI in clinical settings and what factors shape their readiness for adoption (Table 2).
Table 2.
Summary of themes, subthemes, and representative quotes (N = 27)
| Themes | Subthemes | Representative Quotes |
|---|---|---|
| Familiarity and Exposure to AI Tools | Direct Hands-on Experience | “I currently use a device called Checkup from Technologym” (P008); “I use gait analysis devices and VR glasses as part of a team in hospitals” (P019). |
| Academic vs. Clinical Exposure | “We saw motion analysis tools in university, but never in clinics” (P024); “Mostly used in studies and research—it helped a lot. In clinical work, I haven’t explored it much yet” (P018). | |
| Attitudes and Clinical Perceptions | Positive Perceptions: Accuracy, Efficiency, Clinical Value | “They shorten the diagnostic process, provide more clarity, and help implement treatment plans quickly” (P015); “AI tools motivate patients through interactive progress tracking… they are highly accurate” (P011). |
| Skepticism and Caution: Overreliance, Lack of Insight | “AI might lack flexibility and critical analysis, which humans possess” (P001); “No way a robot can make a diagnosis better than a specialist… LBP can be due to psychological stress” (P006). | |
| Ethical and Cultural Concerns | Patient Data Privacy | “Patient consent is crucial. The primary ethical issue is ensuring we don’t misuse sensitive data” (P009); “Privacy issues may arise for patients in sensitive governmental sectors” (P019). |
| Cultural Sensitivities in Female Patient Care | “Camera use violates privacy norms for female patients in our culture” (P017); “If a patient refuses AI-based tools, their consent is the priority” (P001). | |
| Adoption and Integration Readiness | Facilitators: Standardization, Workflow Efficiency, Patient Engagement | “Time efficiency, increased accuracy, and patient engagement encourage adoption” (P001); “Having tools available in centers/labs with training guidelines would facilitate adoption” (P024). |
| Barriers: Cost, Time, Workload, Space | “Cost is the main barrier… space limitations and patient cognitive/functional ability also hinder adoption” (P012); “Time, I keep repeating this because it’s the biggest issue” (P003). | |
| Training Needs and Systemic Barriers | Hands-on Training and Workshops | “Training should accompany new devices… conferences and company-led workshops would promote integration” (P012); “Demonstration-based training… let therapists practice and refine technique” (P001). |
| Gap in Foundational AI Literacy | “Orientation programs are needed to explain AI basics… many therapists don’t fully understand it” (P021); “We need trainers/professors to teach practical application of devices” (P024). |
Theme 1: familiarity and exposure to AI tools
Participants varied in their direct exposure to AI-based diagnostic technologies, with some reporting hands-on experience while others described exposure primarily through academic or research contexts.
Subtheme 1.1: direct hands-on experience
Eight participants shared examples of personally using AI tools in clinical environments, highlighting their integration into technologically advanced rehabilitation settings. One participant stated, “I currently use a device called Checkup from Technologym” (P008), while another elaborated, “I use gait analysis devices and VR glasses as part of a team in hospitals” (P019). However, as one participant reflected, “Gait analysis shows improved results, yet overall recovery feels stationary” (P009), showing that positive outputs do not always align with clinical improvement.
Subtheme 1.2: academic vs. clinical exposure
Conversely, six therapists indicated that their familiarity with AI tools was confined to academic exposure, particularly during undergraduate or postgraduate studies. “We saw motion analysis tools in university, but never in clinics” (P024) captures the gap between theoretical knowledge and clinical practice. Others emphasized that exposure to AI has primarily occurred in research contexts: “Mostly used in studies and research—it helped a lot. In clinical work, I haven’t explored it much yet” (P018).
Theme 2: attitudes and clinical perceptions
Participants expressed a range of attitudes toward AI-powered diagnostic tools, from optimism about their clinical value to skepticism regarding their limitations.
Subtheme 2.1: positive perceptions—accuracy, efficiency, clinical value
Ten physical therapists recognized the potential of AI to enhance clinical workflow and diagnostic precision. One therapist noted, “They shorten the diagnostic process, provide more clarity, and help implement treatment plans quickly” (P015). Others highlighted AI’s motivational role for patients: “AI tools motivate patients through interactive progress tracking… they are highly accurate” (P011).
Subtheme 2.2: skepticism and caution—overreliance, lack of insight
Despite these advantages, seven participants raised concerns about overreliance on AI and its inability to replicate human intuition. “AI might lack flexibility and critical analysis, which humans possess” (P001) reflects caution against substituting clinical reasoning with algorithms. Another participant stated, “No way a robot can make a diagnosis better than a specialist… LBP can be due to psychological stress” (P006). One therapist added, “Most trainers from medical companies aren’t specialists!… They read a catalog” (P006), showing how lack of quality training further fuels skepticism. However, a few participants expressed optimism about AI utility despite minimal or no formal training, suggesting that enthusiasm for innovation could sometimes outweigh concerns about knowledge gaps. Skepticism was often linked to insufficient training or limited AI literacy, showing overlap with Theme 5.
Theme 3: ethical and cultural concerns
This theme encompassed issues related to patient privacy, data protection, and cultural appropriateness, particularly in female patient care.
Subtheme 3.1: patient data privacy
Data privacy was the most frequently mentioned ethical concern. Twelve participants expressed apprehension regarding data handling and confidentiality. “Patient consent is crucial. The primary ethical issue is ensuring we don’t misuse sensitive data” (P009). Another elaborated, “Privacy issues may arise for patients in sensitive governmental sectors” (P019). As one participant humorously remarked, “Data and privacy… unless the Minister of Interior is the one who will be treated, that’s a different matter” (P006), highlighting perceived inequities in enforcement of privacy norms.
Subtheme 3.2: cultural sensitivities in female patient care
Cultural appropriateness emerged as a specific concern, particularly regarding the use of cameras and sensors with female patients. “Camera use violates privacy norms for female patients in our culture” (P017) was a sentiment echoed by others. In such cases, informed consent and patient comfort were prioritized: “If a patient refuses AI-based tools, their consent is the priority” (P001). Some participants, however, reported fewer concerns about this issue, reflecting disconfirming views that cultural barriers might be context-dependent.
Theme 4: adoption and integration readiness
Therapists identified key factors that could either facilitate or hinder the integration of AI tools in clinical practice.
Subtheme 4.1: facilitators—standardization, workflow efficiency, patient engagement
Ten participants cited efficiency, enhanced diagnostic accuracy, and increased patient involvement as motivators for adoption. One therapist remarked, “Time efficiency, increased accuracy, and patient engagement encourage adoption” (P001). Another noted the importance of infrastructure: “Having tools available in centers/labs with training guidelines would facilitate adoption” (P024).
Subtheme 4.2: barriers—cost, time, workload, space
Barriers to adoption were more frequently discussed than facilitators. Fourteen participants mentioned one or more barriers. Cost and time were consistently emphasized across responses, while workload and space limitations were noted independently by fewer participants. “Cost is the main barrier… space limitations and patient cognitive/functional ability also hinder adoption” (P012). Another added, “Time, I keep repeating this because it’s the biggest issue” (P003). Several participants linked these barriers to systemic inequities, noting that smaller private clinics faced disproportionate challenges compared with larger hospitals. Ethical concerns (Theme 3), particularly around data privacy, were also frequently framed as barriers to adoption, demonstrating thematic overlap.
Theme 5: training needs and systemic barriers
Participants stressed the importance of targeted education and systemic readiness to support the adoption of AI in physical therapy.
Subtheme 5.1: hands-on training and workshops
There was a clear call for structured, practical training initiatives. “Training should accompany new devices… conferences and company-led workshops would promote integration” (P012). Another emphasized experiential learning: “Demonstration-based training… let therapists practice and refine technique” (P001).
Subtheme 5.2: gap in foundational AI literacy
Beyond technical training, six participants highlighted a broader educational gap related to AI principles. “Orientation programs are needed to explain AI basics… many therapists don’t fully understand it” (P021). Similarly, “We need trainers/professors to teach practical application of devices” (P024). One participant stressed, “Without proper orientation, many of us feel AI is a black box—we can’t trust what we don’t understand” (P008). This subtheme overlapped conceptually with skepticism (Theme 2), as lack of foundational knowledge often reinforced doubts about AI reliability.
Discussion
This qualitative study explored the perceptions and attitudes of physical therapists in Saudi Arabia toward the integration of AI-powered diagnostic tools. The findings revealed a dynamic mix of cautious optimism, ethical concerns, and systemic challenges. Many participants expressed limited familiarity with AI applications beyond academic settings. While some had been introduced to gait or balance analysis technologies during their training, they reported minimal real-world clinical exposure. This supports what Luz and Ray [11] highlighted regarding the underutilization of AI tools in rehabilitation disciplines compared to fields like oncology and radiology, despite their proven diagnostic performance [11].
Participants generally viewed AI tools as objective and time-saving. Several noted that AI could enhance the accuracy of clinical decision-making by reducing human error and offering standardized assessments, particularly in tasks such as gait evaluation. These perspectives align with findings by Danishta et al. [4], who emphasized the value of wearable AI-powered technologies in improving diagnostic precision and efficiency [4]. However, therapists in this study also voiced concerns that an overreliance on AI could hinder critical thinking and reduce professional autonomy. Kalasampath et al. [8] similarly warned that the opaque nature of AI systems, often described as “black boxes,” may erode clinicians’ trust and accountability, especially when the rationale behind the diagnosis is unclear [8].
Another important issue raised by participants was the ethical and cultural dimension of AI use. For instance, therapists discussed how open clinical environments could pose privacy challenges for female patients, especially in conservative settings. These concerns mirror what Williamson and Prybutok [19] described as the tension between technological advancement and sociocultural expectations, particularly when AI systems involve patient data collection or visual tracking [19]. Likewise, Nazir et al. [13] underscored the importance of transparency and data protection in AI tools used in biomedical imaging, warning that lack of clarity in data processing might hinder patient and clinician acceptance [13]. In our study, these concerns were particularly pronounced among participants in governmental hospitals, where institutional policies and cultural expectations are more conservative, compared with therapists in private or academic settings. This highlights how cultural norms around female privacy may heighten ethical concerns in Saudi Arabia. While such concerns echo global patterns of skepticism toward opaque AI systems, the cultural and gender-specific context gives these concerns unique salience in rehabilitation practice.
While therapists acknowledged the potential of AI-powered diagnostic tools, they identified significant barriers to adoption. Cost, space, and time limitations were among the most cited constraints, especially in smaller private clinics. These findings are consistent with Sayem et al., [16] who observed that infrastructure limitations and administrative reluctance are major obstacles to AI adoption, even in technologically capable environments [16]. Participants also noted that the absence of clear national guidelines or institutional strategies contributed to uncertainty and resistance, reinforcing the need for leadership and policy intervention. Roppelt et al. [15] emphasized that successful AI implementation depends not only on access to technology but also on organizational readiness and stakeholder alignment [15]. Notably, younger and early-career therapists often expressed greater openness toward AI integration, whereas more experienced practitioners highlighted risks to autonomy and clinical reasoning. This suggests that experience level influenced perceptions, with optimism and skepticism distributed unevenly across subgroups.
Training emerged as a critical theme throughout the interviews. Participants strongly believed that AI integration in physical therapy should be accompanied by structured training opportunities, particularly those that emphasize hands-on practice. Hafeez et al. [6] similarly reported that one of the main barriers in the physical therapy profession is the lack of AI-related education, which contributes to hesitation and underuse. Aluru [1] argued that training clinicians in both technical competencies and ethical considerations is essential for safe and confident use of AI in patient care. In this study, participants called for workshops, professional development sessions, and academic inclusion of AI content to bridge the current knowledge gap and reduce apprehension. Skepticism about AI reliability was frequently linked to training gaps, indicating that knowledge deficits limit confidence and directly shape clinical decision-making. Without adequate training, clinicians may discount AI outputs altogether, potentially reducing their willingness to incorporate such tools into patient care.
This study also reflects the influence of the sampling approach: stratified purposive sampling captured diverse perspectives across specialties and experience levels, but may have favored participants already interested in technology, introducing potential self-selection bias. This limitation underscores the importance of interpreting the themes in light of participant characteristics.
The study contributes new insights into the complex and evolving relationship between physical therapists and AI technologies in Saudi Arabia. Our findings highlight both alignments and divergences with existing research: while studies in radiology and oncology emphasize technical performance and clinician trust [8, 13], our participants highlighted cultural barriers and training deficits as more immediate obstacles. This divergence underscores the necessity of tailoring AI adoption strategies to the specific context of rehabilitation and Saudi cultural norms. In particular, culturally sensitive guidelines should address informed consent, gender-appropriate use of imaging and sensors, and patient privacy safeguards.
Finally, the policy dimension deserves emphasis. Beyond noting the absence of guidelines, our findings suggest concrete recommendations: (1) developing national frameworks for AI data privacy and ethical use; (2) integrating standardized AI training modules into professional development; and (3) establishing institutional governance structures to ensure transparency, accountability, and clinician involvement in AI deployment. These measures could mitigate skepticism, support ethical implementation, and align adoption with Saudi Arabia’s broader digital health initiatives.
As Ghafur et al. [5] argue, the integration of AI into healthcare is not merely a technical challenge, but a multidisciplinary effort that requires alignment across ethical, infrastructural, and educational dimensions. By addressing these factors, healthcare systems can move toward more confident, equitable, and effective use of AI in physical therapy practice.
Limitations
This study has several methodological and procedural limitations. First, although the study initially targeted 30 participants, data saturation was reached with 27. While this is consistent with qualitative research principles, the smaller sample size may limit the diversity of perspectives, especially within underrepresented subgroups. In addition, the overrepresentation of younger therapists may bias the findings toward more digitally literate or optimistic perspectives.
Second, the study relied on self-reported data from semi-structured interviews, which are subject to biases such as social desirability, recall bias, and personal interpretation. The online interview format itself may also have influenced openness or depth of responses. Potential interviewer bias cannot be excluded, as probing styles may have shaped how participants framed their responses.
Third, variability in participants’ familiarity with AI tools posed a limitation. While participants were required to have at least basic awareness, the depth of their knowledge varied significantly, which may have affected the consistency and specificity of the insights. This inclusion criterion may have excluded therapists with no exposure, which could bias results toward more informed or positive views.
Fourth, contextual factors such as Saudi Arabia’s evolving AI infrastructure and digital health policy landscape may have shaped participants’ responses, particularly their concerns about feasibility and readiness.
Fifth, while credibility was enhanced through peer debriefing and reflexive journaling, member checking (returning themes to participants for validation) was not performed due to scheduling constraints. Triangulation with additional data sources (e.g., observations or documents) was also not conducted, which may limit confirmability.
Finally, the limited availability of research on AI integration in physical therapy, particularly in Saudi Arabia, made it challenging to position the findings within an established empirical framework and hindered direct comparisons with existing literature. Although the interview guide was reviewed by experts for cultural appropriateness, systematic participant testing of cultural sensitivity was not undertaken, which should be considered when interpreting the findings.
Future work
Future research should prioritize addressing practical challenges of AI integration (e.g., training, infrastructure, and cost), as these represent the most critical factors for scaling AI in physical therapy practice. Expanding the sample size remains important to enhance representativeness, with particular focus on underrepresented subgroups such as pediatric and neuromuscular therapists, as well as rural clinicians whose contexts and resources differ from those in urban settings.
Methodological development is also needed. Mixed-methods approaches that combine qualitative interviews with quantitative surveys, as well as experimental or quasi-experimental designs, could move beyond self-reported attitudes to capture actual adoption behaviors following exposure to AI tools or interventions.
In addition, longitudinal tracking should extend beyond perceptions, following therapists after exposure to structured AI training or digital health policy changes to evaluate how their use of AI develops in practice over time. Future studies should specifically examine how levels of AI familiarity influence adoption trajectories, with attention to designing targeted support for less experienced users.
Cross-country comparative studies should also explore how cultural, linguistic, and educational factors shape adoption, while accounting for systemic differences in digital infrastructure and healthcare policy. By prioritizing these directions, future research can provide actionable evidence for the responsible, scalable integration of AI into physical therapy practice.
Conclusion
This study explored how physical therapists in Saudi Arabia perceive the integration of AI-powered diagnostic tools, offering one of the first localized, profession-specific accounts in the rehabilitation field. The findings reveal a cautiously optimistic outlook, tempered by concerns about cultural appropriateness, professional autonomy, and system readiness.
The unique contribution of this study lies in demonstrating how cultural sensitivities, gender norms, and training deficits specifically shape therapists’ attitudes toward AI adoption in Saudi Arabia. These findings extend global debates on AI integration by highlighting that successful implementation in rehabilitation cannot be decontextualized but must account for profession-specific practices and sociocultural environments.
The results point to three critical priorities: (1) the development of structured, practical training to bridge the knowledge gap; (2) the creation of culturally sensitive policies to safeguard privacy and align AI use with local norms; and (3) institutional investment to address resource and infrastructure constraints.
In conclusion, this study contributes to the international literature by providing evidence that therapists’ perceptions of AI adoption are shaped not only by technical performance but also by cultural, educational, and systemic factors. Recognizing and addressing these dimensions will be essential for ensuring that AI enhances rather than disrupts rehabilitation practice, both in Saudi Arabia and in comparable healthcare systems worldwide.
Supplementary Information
Supplementary Material 1. Interview guide.
Acknowledgements
The authors would like to thank all physical therapists who generously shared their insights during interviews. Special appreciation is extended to the Department of Information Systems at the College of Computer and Information Sciences for consultation on AI-related content.
Authors’ contributions
Reem M. Alwhaibi (RA): Conceptualization, Methodology, Supervision, Funding acquisition, Writing – review & editing; Raghad B. Alshammari (RB): Conducting interviews, Data analysis, Writing – original draft; Nouf I. Alrufayyiq (NR): Transcription, Coding, Literature review; Jawaher Q. Alenzi (JA): Participant recruitment, Validation, Thematic analysis; Amirah H. Alsumayli (AA): Data curation, Support with NVivo analysis; Btool I. Alrushud (BA): Review of discussion section, Visualization; Tahani J. Alahmadi (TJ): AI technology consultation, Editing for technical clarity.
Funding
This study was funded by the Princess Nourah bint Abdulrahman University Researchers Supporting Project (PNURSP2025R117), Riyadh, Saudi Arabia.
Data availability
The datasets generated and/or analyzed during the current study are not publicly available due to the confidential nature of interview transcripts. However, anonymized data may be made available from the corresponding author upon reasonable request.
Declarations
Ethics approval and consent to participate
This study received ethical approval from the Institutional Review Board (IRB) at Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia (IRB Log Number: 24–0942). All participants were provided with a study information sheet and gave informed consent electronically prior to their participation. The study complied with the ethical principles of the Declaration of Helsinki.
Consent for publication
All participants were informed that their anonymized responses may be used in publications. No identifying personal information (e.g., names, images, or videos) has been included in the manuscript. Therefore, specific consent for publication is not applicable.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Aluru KS. AI-powered diagnosis: enhancing accuracy and efficiency in healthcare. Int J Adv Eng Technol Innovations. 2023;1(2):466–89. [Google Scholar]
- 2.Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101. 10.1191/1478088706qp063oa. [Google Scholar]
- 3.César CC, Carvalho MS. Stratified sampling design and loss to follow-up in survival models: evaluation of efficiency and bias. BMC Med Res Methodol. 2011. 10.1186/1471-2288-11-99. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Danishta S, Kumar M, Gugnani A. Advancements in physiotherapy: A systematic review of AI, robotics and wearable sensor technologies. Int J Multidisciplinary Res. 2025;7(1). 10.36948/ijfmr.2025.v07i01.37309.
- 5.Ghafur S, Patel M, Popescu M, Jangam S, Conroy D, Fontana G. Evidence for AI in healthcare. Eur J Public Health. 2024;34(Supplement_3):ckae144.821. 10.1093/eurpub/ckae144.821. [Google Scholar]
- 6.Hafeez M, Haq MZU, Tahzeem Z, Rahim S, Rao N. AI associated challenges to physical therapy profession. Insights – Journal of Health and Rehabilitation. 2024. 10.71000/jj1x96329a. [Google Scholar]
- 7.Hou T, Li M, Tan Y, Zhao H. Physician adoption of AI assistant. Manufacturing & Service Operations Management. 2024. 10.1287/msom.2023.0093. [Google Scholar]
- 8.Kalasampath K, Kn S, Sajeev S, Kuppa SS, Ajay K, Angulakshmi M. A literature review on applications of explainable artificial intelligence (XAI). IEEE Access. 2025. 10.1109/ACCESS.2025.3546681. [Google Scholar]
- 9.Kutbi M. Artificial intelligence-based applications for bone fracture detection using medical images: a systematic review. Diagnostics. 2024;14(17):1879. 10.3390/diagnostics14171879. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Lindsey R, Daluiski A, Chopra S, Lachapelle A, Mozer M, Sicular S, et al. Deep neural network improves fracture detection by clinicians. Proc Natl Acad Sci U S A. 2018;115(45):11591–6. 10.1073/pnas.1806905115. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Luz A, Ray D. AI-powered disease diagnosis: Evaluating the effectiveness of machine learning algorithms (EasyChair Preprint 15084). 2024. EasyChair. https://easychair.org/publications/preprint/rmqt.open.
- 12.Mohammed O, Alzahrani H, Marouf E, Shaheen R. Physiotherapists’ perspectives on the implementation of direct access to physiotherapy services in Saudi Arabia: a cross-sectional study. BMJ Open. 2025. 10.1136/bmjopen-2024-089601. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Nazir S, Dickson DM, Akram MU. Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput Biol Med. 2023. 10.1016/j.compbiomed.2023.106668. [DOI] [PubMed] [Google Scholar]
- 14.None KA, Gopinathan R, Arunkumar J, Malar BSB. The role of artificial intelligence in modern healthcare: advances, challenges, and future prospects. Eur J Cardiovasc Med. 2025;15(4):615–24. 10.61336/ejcm/25-04-94. [Google Scholar]
- 15.Roppelt JS, Kanbach DK, Kraus S. Artificial intelligence in healthcare institutions: a systematic literature review on influencing factors. Technol Soc. 2024;76:102443. 10.1016/J.TECHSOC.2023.102443. [Google Scholar]
- 16.Sayem MA, Taslima N, Sidhu GS, Chowdhury F, Sumi SM, Anwar AS, Rowshon M. AI-driven diagnostic tools: a survey of adoption and outcomes in global healthcare practices. Int J Recent Innov Trends Comput Commun. 2023;11(10):1109–22. Retrieved from https://ijritcc.org/index.php/ijritcc/article/view/11014.
- 17.Shawli L, Alsobhi M, Faisal Chevidikunnan M, Rosewilliam S, Basuodan R, Khan F. Physical therapists’ perceptions and attitudes towards artificial intelligence in healthcare and rehabilitation: a qualitative study. Musculoskelet Sci Pract. 2024;73:103152. 10.1016/J.MSKSP.2024.103152. [DOI] [PubMed] [Google Scholar]
- 18.Spaanderman DJ, Marzetti M, Wan X, Scarsbrook AF, Robinson P, Oei EHG, et al. AI in radiological imaging of soft-tissue and bone tumours: a systematic review evaluating against CLAIM and FUTURE-AI guidelines. eBioMedicine. 2025;110:105642. 10.1016/j.ebiom.2025.105642. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Williamson SM, Prybutok V. Balancing privacy and progress: a review of privacy challenges, systemic oversight, and patient perceptions in AI-driven healthcare. Appl Sci. 2024. 10.3390/app14020675. [Google Scholar]
- 20.Zuhair V, Babar A, Ali R, Oduoye MO, Noor Z, Chris K, et al. Exploring the impact of artificial intelligence on global health and enhancing healthcare in developing nations. J Prim Care Community Health. 2024. 10.1177/21501319241245847. [DOI] [PMC free article] [PubMed] [Google Scholar]