Abstract
Background
Digital-intelligent medical technology, centered on big data and artificial intelligence (AI), is rapidly permeating the entire Chinese healthcare process. However, the dual impact mechanism of this technology on physicians’ job resources and job demands remains unclear. Based on the Job Demands–Resources (JD-R) model, this study examines how digital-intelligent medical technologies simultaneously influence physicians’ job characteristics as both job resources and job demands from the perspective of physicians’ psychological perceptions. It provides recommendations for leveraging these technologies to advance healthcare development.
Methods
This study employed a qualitative design. Between October 2024 and June 2025, we conducted semi-structured interviews with 32 clinicians recruited through purposive and snowball sampling in Guangdong Province, China. Participants included physicians with varying levels of clinical experience, including those currently working in medical-related positions at different-tier hospitals and academic institutions. The interview guide was designed based on the JD-R model, and participants were asked about their perceived current state of digital-intelligent medical technology applications, the job resources and job demands these technologies introduce for physicians, and their recommendations for future implementation. For physicians with prior experience in using such technologies, the interviews further explored the specific application scenarios and conditions of their usage. Data were analyzed using thematic analysis, proceeding through open, axial, and selective coding.
Results
Participants were from neurosurgery, orthopedics, pediatrics, and other departments, with 19 males and 13 females. Their ages ranged from 24 to 54 years, and duration in profession spanned 2 to 28 years. A total of 25 individuals [78.1%] had used digital and intelligent medical technologies, which covered nearly the entire clinical diagnostic and treatment process, including AI-assisted medical record generation, intelligent decision support, intelligent treatment assistance, and AI-enabled follow-up. The study revealed the following findings: (1) Digital and intelligent medical technologies enhance physicians’ job resources through technological integration (efficiency resources, decision support systems, and knowledge expansion), improving diagnostic and therapeutic efficiency and decision accuracy. (2) Challenges such as technical dependency, operational complexity, and ethical responsibility arising from digital-intelligent medical technologies impose new job demands on physicians, exacerbating burnout and skill degradation risks. (3) Individual experience differences, types of disease treated, application scenarios of digital-intelligent medical technologies, and resource allocation influence physicians’ job characteristics. System optimization, technical training, and institutional safeguards can mitigate negative impacts by dynamically allocating technical resources and strengthening human-machine collaboration.
Conclusion
This study proposes a triple mechanism of resources-demands-support, emphasizing that the impact of digital and intelligent medical technologies on physicians’ job characteristics depends on the interaction between technological attributes and organizational support. It is recommended that hospitals and policymakers advance scenario-specific system optimization, trust-based training, and liability insurance. Furthermore, to bridge the resource disparities, regional data sharing should be promoted to maximize technological empowerment while minimizing occupational risks.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12913-025-13876-2.
Keywords: Digital-intelligent medical technology, Job demand, Job resource, Job characteristic, Physician
Introduction
Digital-intelligent technology, centered on big data and artificial intelligence (AI), is rapidly permeating the entire medical process, from automated electronic medical record generation to AI-assisted diagnostic imaging, significantly altering the workflow of physicians [1]. Digital-intelligent technologies now not only provide clinicians with model-based routine diagnostic assessments but also extract and synthesize disease features, offering substantial value in decision support [2, 3]. However, studies also highlight that digital-intelligent technologies, characterized by their formidable computational power, automated decision-making, and algorithmic opacity, may trigger concerns over diminished innovative thinking, increased reliance on technology, professional displacement, and ambiguous accountability, as well as negative consequences such as exacerbated burnout among healthcare professionals [4, 5]. Thus, for medical professionals, digital-intelligent technology presents both opportunities and challenges.
Clearly, the digital and intelligent transformation of healthcare is advancing and has become a key driver of industry innovation. However, most existing studies focus on the algorithmic accuracy or application effectiveness of the technology itself, with insufficient attention to how medical professionals perceive and adapt to the dual nature of these technologies [6, 7]. On the one hand, digital-intelligent technologies provide new job resources for physicians through efficiency tools and decision support, offering potential to enhance diagnostic quality and professional development [8, 9]. On the other hand, technological dependence, operational complexity, and ambiguous accountability also introduce new job demands, which may trigger burnout and ethical conflicts [10]. Therefore, further research on the dual impacts of digital-intelligent technologies on medical personnel’s work holds significant practical importance.
A growing body of research has examined the impact of digital-intelligent technologies on employees [11, 12], which can provide reference for this study. Among these, the Job Demands–Resources (JD-R) model—introduced by Demerouti et al. in 2001—has become a dominant theoretical lens for explaining energy depletion and motivational processes in digitalized workplaces [13, 14]. Within healthcare, prior studies have investigated the impact of electronic medical records or clinical decision support systems on nurses or medical technicians [15, 16]. Yet systematic research from a psychological perception perspective on how digital and intelligent technologies simultaneously function as both job resources and demands for physicians—particularly as their primary users—remains relatively scarce.
Building on the above, the development of digital and intelligent medical technologies has significantly impacted physicians, who serve as the primary users and stakeholders of these technologies. However, current research from the perspective of physicians’ psychological perception regarding how digital and intelligent medical technologies shape their job characteristics remains insufficient. Therefore, this study applies the JD-R model to identify and understand the specific job demands and job resources that arise from the adoption of digital-intelligent medical technologies in clinical practice. This study aims to provide actionable insights for optimizing the integration of digital and intelligent technologies in healthcare, ultimately enhancing medical quality and technological advancement through a physician-centric lens.
Methods
Theoretical framework
This study adopts the JD-R model as its theoretical framework. The model states that job characteristics are underpinned by two core elements: job demands and job resources [17]. Job demands refer to tasks requiring sustained physical or psychological effort from employees, such as high workload, tight deadlines, and high emotional labor, which may lead to resource depletion, exhaustion, and burnout [18]. Conversely, job resources comprise physical, psychological, social, or organizational assets that help employees achieve work goals, offset the physiological and psychological costs of job demands, and stimulate personal growth and learning [19].
By tracing the dynamic interplay between job demands and job resources, the JD-R model explains how job characteristics influence employee well-being and effectiveness. When job demands chronically exceed individual coping capacity, a health-impairing process leading to burnout is set in motion; job resources, however, initiate a motivational process that raises engagement and performance while offsetting the negative effects of job demands [17].
Drawing on this perspective, we define the job characteristics of physicians in the digital and intelligent healthcare technology environment as a dynamic system composed of job demands and job resources. Changes in these characteristics directly influence physicians’ work performance, mental health, and career development processes [20]. Consequently, in this study, the JD-R model guided the construction of the interview guide, codebook and interpretation of findings.
Design, setting and recruitment
We chose a qualitative study design to explore physicians’ perceptions of how digital and intelligent technologies reshape their job characteristics. This study was approved by the Ethics Committee of Southern Medical University in Guangzhou. All participants provided oral informed consent before interviews, and anonymity was ensured by removing identity information from interview records. Each participant was offered reimbursement for their time. Reporting adheres to the Consolidated Criteria for Reporting Qualitative Research (COREQ) [21].
This study was conducted from October 2024 to June 2025 in Guangdong Province, China. The participants included physicians with at least one year of clinical experience, or those who had prior clinical work experience but were currently employed in healthcare-related positions or academic institutions. The first, second, and corresponding authors initiated recruitment through direct solicitations and, during the interview phase, continued to expand the sample using purposive and snowball sampling. This strategy aimed to construct a diverse and inclusive cohort of physicians in terms of gender, age, duration in profession, hospital level, specialty, and experience with digital and intelligent medical technologies. Data collection continued until data saturation was reached, meaning no new codes or themes emerged. At this point, an additional 2–3 interviews were conducted to ensure no new information would be obtained.
Data collection
Data were collected through semi-structured, one-to-one interviews. Physicians who could not meet in person were interviewed online via the Tencent Meeting platform, while others participated in face-to-face interviews. Prior to each interview, the interviewer stated their background (medical student) and the study’s objectives, emphasizing that all data would be anonymized. Themes and patterns developed from analysis of earlier interviews were probed in subsequent interviews.
The interview guide was developed based on the JD-R model and was iteratively refined as analysis progressed. It covered physicians’ perceptions of the current application of digital and intelligent medical technologies, the job resources and demands these technologies introduced to their work, and future implementation suggestions. For physicians with prior experience using digital and intelligent medical technologies, the interview explored specific domains and usage scenarios in greater detail (see Supplementary Material 1 for the full interview guide). Chinese quotes were translated into English by two authors, edited by a third author with overseas study experience, and cross-checked by the same two authors to ensure semantic accuracy. All interviews were audio-recorded and transcribed verbatim, with durations ranging from 13 to 47 minutes.
Analysis
We used the JD-R model as the conceptual framework for our directed thematic analysis of open-text responses from physicians. The JD-R model provides a framework for analyzing the content of the data inductively and deductively, leading to the identification of study themes. Initially, the first author conducted an inductive analysis of all interview transcripts to develop a preliminary codebook based on the interview guide. Subsequently, two authors independently analyzed half of the transcripts each, employing open coding (systematic labeling and categorization of raw data to identify initial concepts), axial coding (establishing relationships between categories to form subcategories and identify hierarchical relationships), and selective coding (integrating core categories with others to develop a theoretical framework). The additional codes generated were added to the codebook. Then, the three investigators jointly coded three transcripts using the revised codebook to ensure clarity and consistency in code definitions. At this stage, the codebook was finalized. Each transcript was coded independently by at least two investigators (H.Y., Z.L., or Z.S.), with discrepancies resolved through consensus. The entire analysis process was managed using Word documents and NVivo 20 for data organization and coding.
Results
Demographic of the participants
We interviewed 32 physicians, including 19 males [59.4%] and 13 females [40.6%], with a mean age of 35.7 years [range, 24–54 years] and a mean duration in profession of 11.3 years [range, 2–28 years] (Table 1). Participants were drawn from all tiers of the Chinese healthcare system, including tertiary hospitals, secondary hospitals, and primary medical care institutions. They represented a broad range of specialties, including neurosurgery, orthopedics, dentistry, pediatrics, and internal medicine. Several participants also held concurrent roles in hospital administration or academic departments, yet all had prior clinical experience, ensuring a comprehensive and multifaceted perspective for this study.
Table 1.
Demographics of participants
| ID | Gender | Age, years | Duration in profession, years | Affiliation | Specialty | Experience |
|---|---|---|---|---|---|---|
| 1 | Female | 29 | 3 | Tertiary hospital | Infection management | No |
| 2 | Male | 32 | 6 | Academic institution | Traditional Chinese medicine and artificial intelligence joint engineering | Yes b |
| 3 | Female | 41 | 18 | Tertiary hospital | Medical quality management | Yes b |
| 4 | Female | 53 | 28 | Tertiary hospital | Patient service | Yes b |
| 5 | Male | 33 | 8 | Tertiary hospital | Neurosurgery | Yes c |
| 6 | Female | 38 | 10 | Tertiary hospital | Rehabilitation | No |
| 7 | Female | 28 | 6 | Tertiary hospital | Dentistry | No |
| 8 | Male | 36 | 10 | Tertiary hospital | General Surgery | Yes c |
| 9 | Male | 24 | 2 | Tertiary hospital | Dentistry | Yes b |
| 10 | Female | 35 | 7 | Primary medical care institution | Gynecology | Yes d |
| 11 | Female | 39 | 25 | Primary medical care institution | General practice | No |
| 12 | Male | 42 | 18 | Tertiary hospital | Neurosurgery | Yes c |
| 13 | Male | 40 | 14 | Primary medical care institution | General practice | Yes d |
| 14 | Male | 32 | 8 | Tertiary hospital | Orthopedics | Yes c |
| 15 | Male | 31 | 2 | Tertiary hospital | Urology | Yes b |
| 16 | Male | 34 | 10 | Tertiary hospital | Urology | Yes c |
| 17 | Female | 34 | 10 | Primary medical care institution | General practice | Yes b |
| 18 | Male | 47 | 21 | Primary medical care institution | General practice | Yes c |
| 19 | Male | 44 | 19 | Secondary hospital | Diagnostic imaging | Yes c |
| 20 | Female | 32 | 8 | Tertiary hospital | Pharmacy | No |
| 21 | Female | 29 | 3 | Tertiary hospital | General practice | No |
| 22 | Male | 31 | 6 | Tertiary hospital | Anesthesiology | Yes b |
| 23 | Male | 31 | 7 | Tertiary hospital | Orthopedics | Yes c |
| 24 | Male | 54 | 28 | Tertiary hospital | Urology | Yes b |
| 25 | Male | 40 | 15 | Tertiary hospital | Pediatrics | Yes b |
| 26 | Female | 26 | 2 | Tertiary hospital | Internal medicine | Yes b |
| 27 | Male | 42 | 21 | Tertiary hospital | Orthopedics | Yes c |
| 28 | Male | 34 | 11 | Tertiary hospital | Otolaryngology | Yes b |
| 29 | Male | 35 | 12 | Tertiary hospital | Medical administration | Yes b |
| 30 | Female | 29 | 6 | Tertiary hospital | Oncology | Yes a |
| 31 | Female | 30 | 6 | Tertiary hospital | Oncology | Yes a |
| 32 | Male | 36 | 10 | Tertiary hospital | Cardiology | No |
Note: (1) In the Chinese healthcare system, tertiary hospitals serve as regional medical centers, secondary hospitals function as intermediate-level institutions, and primary medical care institutions directly provide basic healthcare services to communities at the grassroots level [22]. (2) The Experience column indicates interviewees’ experience with digital-intelligent medical technologies. a AI-assisted medical record generation, b Intelligent decision support, c Intelligent treatment assistance, d AI-enabled follow-up
Digital and intelligent medical technology application scenarios
Given the early stage of application of digital and intelligent medical technologies in China, and considering that whether doctors have used such technologies may influence their perception of how these technologies affect their job characteristics, we also documented each physician’s experience with digital-intelligent medical technologies. Overall, most physicians (25 participants [78.1%]) had practical experience with such technologies, concentrated in four core applications: (1) AI-assisted medical record generation—embedded within the hospital information system, automatically generating standardized records based on patient-reported symptoms and clinical inputs; (2) intelligent decision support—powered by DeepSeek-like AI plug-ins aiding in disease diagnosis, surgical risk assessment, and high-risk patient monitoring; (3) intelligent treatment assistance—combining AI algorithms with image navigation, postoperative simulation, and smart pressure control during surgeries; and (4) AI-enabled follow-up—driven by algorithms within the hospital system to power intelligent voice reminders that conduct automated conversations and collect patient feedback, or utilizing community health platforms to track patients after discharge.
Perceived job characteristics in the digital and intelligent medical technology environment
Job resources
Digital and intelligent medical technology enhances doctors’ job resources through technological integration, primarily by optimizing systematic resource allocation and knowledge collaboration to achieve efficiency gains and innovation-driven practices. This manifests in three key aspects: efficiency resources, decision support systems, and knowledge expansion.
In terms of efficiency resources, the technology leverages efficient resource allocation mechanisms, including hardware infrastructure (e.g., high-performance computing clusters) and software tools (e.g., data analysis tools), to support medical workflows. As one physician with experience in AI-assisted medical record generation noted: “AI automatically (assisted) generates medical records, reducing our documentation time, allowing us to focus more on patient care” (P30). Additionally, AI-assisted medical imaging analysis significantly improves diagnostic efficiency, as another physician stated: “AI-assisted lesion detection in imaging reduced our CT reading time for preliminary analysis” (P25).
Regarding decision support systems, artificial intelligence and big data technologies construct intelligent frameworks that integrate patient medical histories, symptoms, and examination results to provide personalized treatment recommendations. This not only enhances the accuracy of clinical decisions but also strengthens doctors’ ability to manage complex cases. One physician currently responsible for clinical pathway planning remarked: “The system automatically alerts us to venous thromboembolism risk assessments, enabling timely interventions for high-risk patients before surgery and reducing complication risks” (P4).
In the realm of knowledge expansion, digital and intelligent technologies enable real-time access to the latest research findings, bridging clinical practice and scientific advancement. Hospitals can also adopt cutting-edge technologies and research outcomes, offering continuous learning opportunities for medical professionals. A physician commented: “The system’s ability to accumulate and summarize common features of cases facilitates more efficient data collection for doctors engaged in scientific research” (P17).
Job demands
The adoption of digital-intelligent technology introduces new challenges for physicians, underscoring the necessity of critical thinking and continuous learning to effectively integrate technology into clinical practice. As a double-edged sword, while the technology provides doctors with enhanced job resources, it also imposes new demands, such as redefining professional risks and ethical issues.
On the one hand, the development of intelligent diagnostic tools may undermine physicians’ decision-making abilities, particularly among young physicians who may neglect traditional skills like physical examination proficiency and medical history collection, leading to risks of over-reliance on technology. As a neurosurgeon with 18 years of experience noted: “Newly graduated doctors may habitually adopt AI suggestions without independent judgment, even overlooking individual patient differences” (P12).
On the other hand, complex technological systems may create operational barriers for doctors. Issues such as convoluted user interfaces, redundant functionalities, or non-intuitive interaction logic require additional time and cognitive effort, potentially reducing efficiency and increasing the risk of errors or burnout. Moreover, multitasking demands (e.g., simultaneously managing electronic medical records and AI tools) may fragment attention, negatively impacting doctor-patient communication quality. A physician remarked: “Simpler systems are better; we don’t have time to learn complex functions—ideally, just one-click report generation” (P19).
Furthermore, the application of digital-intelligent medical technologies may provoke ethical and accountability conflicts among physicians. A respondent expressed concern that “if AI misses early-stage lung cancer but the responsibility remains with the doctor, this ambiguity causes immense pressure” (P5).
Factors influencing physicians’ perceptions of digital-intelligent technology’s impact on their job characteristics
Beyond the aforementioned effects of digital-intelligent medical technology on physicians’ job characteristics, this study also found that physicians’ personal experience, the type of disease treated, the application scenarios of such technologies, and resource allocation influence their perceived impact on work.
Physicians’ personal experience
There are notable differences in the adoption of digital-intelligent medical technologies between experienced and younger physicians. Experienced doctors tend to prioritize clinical expertise, while younger physicians are more inclined to embrace new technologies. This divergence is illustrated by a comparison between a senior practitioner (P24) and a relatively junior orthopedist (P14) in this study. The senior physician with 28 years of clinical experience stated: “I trust my clinical experience more. AI suggestions can only serve as references, and final decisions must be based on physical examination” (P24). Conversely, the junior physician from orthopedics offered a different perspective, suggesting that such technologies provide younger physicians with additional job resources, facilitating skill development: “Many complex surgeries require years of surgical experience. With AI assistance, the threshold for mastering these skills may be lower, allowing more patients to benefit from advanced care” (P14).
Type of disease treated
The study involved physicians from diverse specialties, who emphasized that the development and application of digital-intelligent medical technologies vary significantly depending on the disease type. A participant specializing in cerebral vascular embolism remarked: “This disease is highly specialized, and there are not enough doctors or data to train the AI” (P5).
Application scenarios
When algorithms derive from visual images or non-emotional text, and functionalities aim to simplify workflows (e.g., medical imaging diagnosis, automated medical record generation), physicians are more receptive. As one interviewee supported: “The most needed is functionality that reduces documentation. Personally, I prefer systems that transcribe verbal notes into structured records” (P25). Conversely, when dealing with emotional text or multi-modal data, and functionalities focus on decision-making (e.g., intelligent diagnosis, treatment planning), physicians express hesitation due to the “black box” effect and uncertainty in patient safety. Another interviewee echoed: “Intelligent decision-making requires massive textual and multi-modal data, which is too costly and currently lacks robust models” (P8).
Resource allocation
Disparities in resource allocation between hospitals at different levels can lead to uneven access to advanced technologies, affecting care quality and professional development opportunities. Additionally, the uneven quality of training data for AI models may widen gaps in physicians’ technical proficiency and efficiency. A general practitioner from a primary medical care institution observed: “Tertiary hospitals focus on developing research-oriented AI, while grassroots facilities lack even basic modules, widening the technological divide” (P13).
Facilitators of positive experiences with digital-intelligent medical technology
In addition to the outlined components of physicians’ job demands and job resources, as well as their influencing factors from individual, disease-related, and technology-related perspectives, this study also identifies that organizational support is crucial for the successful implementation of digital-intelligent medical technologies. Specifically, system optimization, technical training, and institutional guarantees can mitigate the negative impacts of technology and enhance its benefits.
System optimization
Digital-intelligent medical technologies must dynamically adapt to evolving operational scenarios. Hospital managers and developers of digital-intelligent medical technologies can improve the design and framework of these technologies based on physicians’ feedback, making them more intuitive and user-friendly. This, in turn, significantly enhances physicians’ work efficiency and satisfaction. As one physician noted: “The radiology department requires specialized imaging analysis modules, but frequent system switching affects work efficiency” (P27).
Technical training
Providing technical training can help physicians integrate digital-intelligent medical technologies into clinical practice. These trainings not only teach technical operations but also emphasize critical thinking skills, enabling doctors to determine when to trust AI outputs and when to question or verify them. Additionally, establishing technical support teams ensures timely resolution of issues encountered during technology use. A physician emphasized: “Training should not only focus on technical skills but also cultivate critical thinking, teaching doctors to evaluate AI’s reliability” (P30).
Institutional guarantees
Institutional guarantees are essential for doctors to confidently adopt digital-intelligent medical technologies. Technical malfunction insurance and data usage agreements reduce risks associated with technology application, fostering trust and enhancing doctors’ confidence in decision-making. As one physician called for: “It is essential to clarify the legal responsibility for AI errors; otherwise, doctors will always bear the brunt of any failures” (P16).
The “Resource-demand-support” triadic mechanism
Building on the previous analysis of how digital-intelligent medical technologies introduce both job demands and job resources for physicians, along with relevant influencing factors and the demonstrated importance of organizational support, we propose a broader theoretical framework, namely the “Resource–Demand–Support” triadic mechanism. This framework explains how digital intelligence technology shapes physicians’ job characteristics from two dimensions—job resources and job demands. It encompasses the provision of job resources such as enhanced work efficiency, while simultaneously introducing job demands like diminished autonomous decision-making ability. This shaping process is further influenced by factors including physicians’ personal experience and the types of diseases they treat. Effective support systems, such as system optimization and technical training, not only mitigate the negative effects of job demands on physicians but also amplify the benefits of job resources. Consequently, this influences the balance in physicians’ perceptions and practices when responding to the development and application of digital intelligence technology in the healthcare industry. The specific process is depicted in Fig. 1.
Fig. 1.
How digital-intelligent medical technology reshapes physicians’ job characteristics
System optimization enhances efficiency and alleviates human–machine friction
Intuitive and smooth system design serves as a key form of organizational support, and systems tailored to specific scenarios can significantly boost physicians’ work efficiency as a core resource. As one interviewee involved in developing AI‑assisted digestive‑system capsule endoscopy image‑reading technology at an academic institution remarked: “Instead of requiring doctors to manually examine tens of thousands of images captured by capsule endoscopes one by one, AI can automatically pre‑screen them, which greatly improves doctors’ work efficiency” (P2).
Furthermore, system optimization can transform technology from a job demand into an effective job resource. Conversely, when system design does not align with clinical workflows, it becomes a cognitive burden and an additional job demand. A physician with experience in AI‑assisted medical record generation noted: “The one‑click medical record generation function should ideally produce content according to standard formats. Yet, due to the function’s limited intelligence, the generated records often contain substantial repetitive content, requiring additional time for manual identification and modification” (P30).
Technical training fosters correct technology use and mitigates over-reliance
Technical training serves not only to instruct physicians in the operational use of digital intelligence technologies—enabling them to apply these tools more efficiently and obtain corresponding job resources—but also to develop critical evaluation skills to address potential output errors and the risk of passive dependence. This refers to the tendency for physicians to forgo independent judgment and habitually comply with AI recommendations, which may impede the refinement of their professional skills and even compromise medical safety. As one physician noted: “Training is needed on how to effectively communicate with AI—for instance, by using certain keywords to prompt it to generate the required output—and then applying our professional expertise to evaluate the results” (P1).
Institutional guarantees clarify liability and release ethical anxiety
The uncertainty surrounding ethics and responsibility attribution introduced by digital intelligence technology significantly undermines physicians’ psychological safety. Strengthening physicians’ confidence in decision-making when applying such technology relies on clear accountability mechanisms. Such mechanisms can mitigate the anxiety caused by ambiguous accountability in the use of digital-intelligent medical technology, while also enabling physicians to fully leverage AI-assisted decision-making. This allows the job resources provided by AI, such as decision support and knowledge expansion, to be genuinely and confidently utilized. A neurologist involved in developing AI for automatically identifying embolisms and diffusion ranges remarked: “Although there has been considerable research on AI-based diagnostic technologies, their practical implementation remains challenging, primarily due to the unresolved question of who bears responsibility” (P5).
Discussion
Principal findings
Drawing on the JD-R model, this qualitative study explored how digital-intelligent medical technologies reshape physicians’ job characteristics across different-tier hospitals in Guangdong Province, China. We found that while these technologies provide doctors with valuable resources, such as enhanced diagnostic efficiency and decision-making precision, they also introduce new challenges, including exacerbated risks of skill degradation and ethical responsibility. Factors such as individual characteristics, type of disease treated, technology application scenarios, and resource allocation further influence how physicians perceive the impact of digital-intelligent technologies on their work. Meanwhile, system optimization, technical training, and institutional safeguards can mitigate the negative effects of technology while amplifying its benefits. From these findings, we propose a “resource–demand–support” triadic mechanism to manage the dual role of digital-intelligent medical technologies as both job demands and job resources. This framework captures the multifaceted impact of such technologies on physicians and provides guidance for future implementation strategies.
The findings of this study align with numerous existing studies, demonstrating that digital-intelligent technologies play a pivotal role in optimizing medical processes and improving healthcare quality [23, 24]. Highly efficient resource management systems in hospitals ensure that physicians can promptly access necessary medical equipment, pharmaceuticals, and human resources during work. For instance, one hospital implemented an “AI + imaging” platform to integrate multi-modal data (e.g., CT and MRI scans), significantly enhancing the efficiency of lung nodule screening [1]. Furthermore, this study reveals that digital-intelligent technologies enhance physicians’ knowledge resources by integrating medical knowledge bases and data, thereby supporting professional development. The release of the LungDiag AI model exemplifies this shift, accelerating the translation of personalized, high-precision medicine and compelling clinicians to update their expertise with technological iteration [25]. Collectively, digital-intelligent technologies have become a critical driver of physicians’ career advancement.
Despite the benefits of digital-intelligent medical technologies, this study, consistent with other studies, also identified their negative impacts on physicians’ job characteristics [26, 27]. Koppel et al. indicated that over-reliance on AI may leave clinicians vulnerable during system failures, resulting in markedly reduced efficiency and an elevated risk of diagnostic error [28]. In addition, we found that the operational complexity of these technologies may conversely increase physicians’ workload—for example, system false alarms impose the burden of additional checks. Moreover, this study revealed that the most prominent ethical concern reported by physicians was the ambiguity of accountability. While leveraging technology to improve efficiency, they must maintain high vigilance to identify its potential errors and be prepared to take full responsibility for decisions that may not be entirely their own, which creates an additional psychological burden for physicians. Thus, although AI can assist physicians in their work, the ultimate responsibility for diagnosis, treatment, and humanistic care still rests with them—which is a key reason why physicians perceive their roles as irreplaceable by technology. This is consistent with the research of Char et al., who emphasized the ethical challenges that need to be addressed when applying digital-intelligent technology in the medical field [10].
The adoption of digital-intelligent technologies places doctors in a dynamic and complex work environment characterized by inherent risks and uncertainties. As highlighted in prior research, individuals may exhibit varying adaptive behaviors and psychological responses when interacting with new technologies, depending on their perceived outcomes [29]. This study demonstrates that experienced physicians tend to rely on their clinical expertise, while younger doctors are more inclined to utilize digital-intelligent tools. This divergence may lead to team decision-making friction and reduced work efficiency [30]. Furthermore, this study reveals that the disease-specific nature of medical specialties significantly constrains the applicability of digital-intelligent technologies. For instance, specialties such as neurosurgery and interventional radiology face limitations due to scarce case volumes and high heterogeneity in imaging data, resulting in insufficient training datasets and suboptimal model performance [31, 32].
This study further explores the application scenarios of digital-intelligent medical technologies, revealing that task characteristics significantly influence physicians’ job characteristics. Consistent with the findings of Capelli et al., high-transparency imaging AI systems are more readily accepted by doctors, whereas AI systems involved in complex clinical decision-making trigger “safety uncertainty” anxiety due to the “algorithmic black box” phenomenon [33]. In addition, this study underscores the institutional-level disparities that exacerbate the “digital divide” effect. Uneven allocation of digital-intelligent technology resources and inconsistent quality of training data between primary healthcare institutions and tertiary hospitals amplify differences in physicians’ technical proficiency and application capabilities [34]. This disparity not only widens variations in professional development opportunities for physicians but also intensifies inequities in healthcare access and outcomes [35].
In this study, physicians expressed a positive attitude toward the significant role of digital-intelligent technologies in healthcare, but they emphasized that the user experience of such technologies is a prerequisite for their adoption. This aligns with the findings of Davis, who highlighted that employees’ willingness to adopt technology is driven by both perceived usefulness and perceived ease of use [36]. Furthermore, technical training and legal safeguards can mitigate risks associated with AI adoption, thereby enhancing doctors’ confidence in decision-making. Institutional safeguards directly shape physicians’ professional security and adaptability to new technologies [37]. Clear accountability frameworks reduce anxiety related to adopting novel technologies [38]. When hospitals demonstrate a strong commitment to physician safety at an institutional level, it not only increases technology acceptance but also strengthens trust in the organization and professional security among doctors [39].
Strengths and limitations
Grounded in the JD-R model, this study not only examined the positive aspects of digital-intelligent medical technology on physicians’ job characteristics, but also delved into their negative effects. We propose a “resource–demand–support” triadic framework that offers a novel perspective for understanding the complex, context-dependent impacts of digital-intelligent technologies in healthcare. Furthermore, integrating the current state of digital-intelligent medical technology adoption in Chinese clinical practice, this study explores the specific application scenarios from the perspective of physicians, highlighting how these technologies reshape their job characteristics, providing actionable insights for the future development and equitable deployment of digital-intelligent medical technologies.
However, there are also limitations. One potential limitation is that the interview questions were based on the JD-R model, which may have inadvertently led participants to respond within our predefined theoretical framework and potentially reinforced a “demands-resources” mindset rather than eliciting their most genuine concerns. In addition, the themes were mapped onto the model’s domains, which may have imposed the framework onto the data rather than allowing themes to emerge from the data. Although we made efforts to identify themes beyond the JD-R model during analysis, such as factors influencing physicians’ perceptions of digital-intelligent technology’s impact on their job characteristics, this pre-established structure may still have limited our ability to capture broader or more subtle themes. Another limitation is the potential for cognitive biases in the interviews, although most participants were open to discussing their uncertainties and did not appear to be defensive. Additionally, the study’s sample was limited to physicians in Guangdong Province, which may not fully represent medical practices in other regions. Future research could expand the sample size and geographic scope, adopt open exploratory approaches (e.g., ethnography), incorporate quantitative and objective data sources, and explore the dynamic and long-term effects of digital-intelligent technologies on physicians’ job characteristics over time, which would help verify and complement the findings of this study.
Implications
Drawing on this study, we propose the following suggestions to promote the empowerment of digital-intelligent medical technologies in physicians’ clinical work. Firstly, if hospitals choose to deploy such systems, they should tailor their implementation to the specific needs of different departments. This includes analyzing core metrics such as usage frequency, misdiagnosis rates, and burnout levels among physicians. By collaborating with technology developers or outsourced tech companies, hospitals can optimize the operational processes of digital-intelligent medical products to enhance physicians’ user experience. Future advancements could leverage federated learning [40] or synthetic data techniques [41] to address data scarcity and quality issues in specialized fields, while also noting new risks such as the authenticity of the generated data and the potential to amplify existing biases, which require prudent evaluation.
Secondly, strengthening digital-intelligence training for physicians is essential to improve their technical literacy and critical thinking skills. A robust technical support system should also be established to assist physicians in effectively utilizing digital-intelligent technologies to enhance work efficiency and decision-making quality. Thirdly, hospitals, manufacturers, and insurance companies could jointly establish digital-intelligent accountability agreements and introduce technology liability insurance. This dual mechanism of legal and financial safeguards would reduce physicians’ concerns when using digital-intelligent medical technologies in clinical practice.
Finally, given this study’s finding that unequal resource distribution between different levels of hospitals has led to a digital divide in physicians’ technology access and application capabilities—thereby directly limiting the job resources for some physicians, especially those at grassroots institutions—governments could consider policy-driven initiatives such as regional data-sharing platforms to bridge resource disparities between institutions. At the same time, it must be recognized that data sharing brings ethical challenges such as privacy and security concerns. Therefore, any such platform must be built upon a firm commitment to patient data privacy, reinforced by robust security protocols and clear regulatory frameworks, thereby transforming these new ethical demands into manageable and well-supported institutional processes. Such policies are crucial for mitigating the negative effects of digital-intelligent technology adoption, enhancing physicians’ job satisfaction, and improving the overall quality of healthcare services.
Conclusions
In summary, this study systematically analyzes how digital-intelligent medical technologies reconfigure the job characteristics (demands and resources) of physicians and proposes a triple-mechanism framework comprising resources, demands, and support. This framework not only elucidates the complex interplay between digital-intelligent technologies and physicians’ job characteristics, but also provides practical guidance for healthcare institutions in balancing technological empowerment with risk management during implementation. Future research could further explore how to optimize organizational support mechanisms based on factors influencing physicians’ perceptions of digital-intelligent medical technologies, better address the challenges brought by these technologies, and promote their deep integration and sustainable development in the medical field.
Supplementary Information
Below is the link to the electronic supplementary material.
Acknowledgements
We are grateful to all participants in this study.
Abbreviations
- AI: Artificial intelligence
- JD-R: Job demands–resources
- COREQ: Consolidated Criteria for Reporting Qualitative Research
Author contributions
Hongli Yan and Dong Wang designed the study. Data collection and analysis was carried out by Hongli Yan, Zhen Liu and Zengping Shi. The manuscript was written by Hongli Yan and Zhen Liu, with all authors contributing to its revision. All authors approved the final manuscript.
Funding
This research was funded by the Guangdong Natural Science Foundation (project title: Research on the Impact Mechanism of Digital Intelligent Technologies on Physicians’ Job Crafting Based on the JD-R Model) and the China Postdoctoral Science Foundation under Grant Number 2025M780686. The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Data availability
The data underlying this article will be shared by the corresponding author upon reasonable request.
Declarations
Ethics approval and consent to participate
This study was approved by the Ethics Committee of Southern Medical University in Guangzhou and was conducted in accordance with the ethical principles outlined in the World Medical Association’s Declaration of Helsinki for medical research involving human subjects. Informed consent was obtained from all participants.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44–56. 10.1038/s41591-018-0300-7.
- 2. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–8. 10.1038/nature21056.
- 3. Secinaro S, Calandra D, Secinaro A, et al. The role of artificial intelligence in healthcare: a structured literature review. BMC Med Inform Decis Mak. 2021;21(1):125. 10.1186/s12911-021-01488-9.
- 4. Basubrin O. Current status and future of artificial intelligence in medicine. Cureus. 2025;17(1):e77561. 10.7759/cureus.77561.
- 5. Karaferis D, Balaska D, Pollalis Y. Digitalization and artificial intelligence as motivators for healthcare professionals. Japan J Res. 2025;6(3):103. 10.33425/2690-8077.1170.
- 6. Tanković N, Šajina R, Lorencin I. Transforming medical data access: the role and challenges of recent language models in SQL query automation. Algorithms. 2025;18(3):124. 10.3390/a18030124.
- 7. Price WN II, Gerke S, Cohen IG. Potential liability for physicians using artificial intelligence. JAMA. 2019;322(18):1765–6. 10.1001/jama.2019.15064.
- 8. Konttila J, Siira H, Kyngäs H, et al. Healthcare professionals’ competence in digitalisation: a systematic review. J Clin Nurs. 2019;28(5–6):745–61. 10.1111/jocn.14710.
- 9. Tehrani N. How digital health technology aids physicians. Int J Biomed. 2015;5(2):104–5. 10.21103/ARTICLE5(2)_PERS1.
- 10. Char DS, Shah NH, Magnus D. Implementing machine learning in health care - addressing ethical challenges. N Engl J Med. 2018;378(11):981–3. 10.1056/NEJMp1714229.
- 11. Brougham D, Haar J. Smart technology, artificial intelligence, robotics, and algorithms (STARA): employees’ perceptions of our future workplace. J Manage Organ. 2018;24(2):239–57. 10.1017/jmo.2016.55.
- 12. Makarius EE, Mukherjee D, Fox JD, Fox AK. Rising with the machines: a sociotechnical framework for bringing artificial intelligence into the organization. J Bus Res. 2020;120:262–73. 10.1016/j.jbusres.2020.07.045.
- 13. Demerouti E, Bakker AB, Nachreiner F, et al. The job demands-resources model of burnout. J Appl Psychol. 2001;86(3):499–512. 10.1037/0021-9010.86.3.499.
- 14. Scholze A, Hecker A. Digital job demands and resources: digitization in the context of the job demands-resources model. Int J Environ Res Public Health. 2023;20(16):6581. 10.3390/ijerph20166581.
- 15. Melnick ER, West CP, Nath B, et al. The association between perceived electronic health record usability and professional burnout among US nurses. J Am Med Inform Assoc. 2021;28(8):1632–41. 10.1093/jamia/ocab059.
- 16. Bennett P, Hardiker NR. The use of computerized clinical decision support systems in emergency care: a substantive review of the literature. J Am Med Inform Assoc. 2017;24(3):655–68. 10.1093/jamia/ocw151.
- 17. Bakker AB, Demerouti E. The job demands-resources model: state of the art. J Manag Psychol. 2007;22(3):309–28. 10.1108/02683940710733115.
- 18. Bakker AB, Demerouti E. Job demands-resources theory: taking stock and looking forward. J Occup Health Psychol. 2017;22(3):273–85. 10.1037/ocp0000056.
- 19. Schaufeli WB, Bakker AB. Job demands, job resources, and their relationship with burnout and engagement: a multi-sample study. J Organ Behav. 2004;25(3):293–315. 10.1002/job.248.
- 20. Van den Broeck A, Elst TV, Baillien E, et al. Job demands, job resources, burnout, work engagement, and their relationships: an analysis across sectors. J Occup Environ Med. 2017;59(4):369–76. 10.1097/JOM.0000000000000964.
- 21. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57. 10.1093/intqhc/mzm042.
- 22. Hu R, Liao Y, Du Z, et al. Types of health care facilities and the quality of primary care: a study of characteristics and experiences of Chinese patients in Guangdong Province, China. BMC Health Serv Res. 2016;16:335. 10.1186/s12913-016-1604-2.
- 23. Wubineh BZ, Deriba FG, Woldeyohannis MM. Exploring the opportunities and challenges of implementing artificial intelligence in healthcare: a systematic literature review. Urol Oncol. 2024;42(3):48–56. 10.1016/j.urolonc.2023.11.019.
- 24. Ali O, Abdelbaki W, Shrestha A, et al. A systematic literature review of artificial intelligence in the healthcare sector: benefits, challenges, methodologies, and functionalities. J Innov Knowl. 2023;8(1):100333. 10.1016/j.jik.2023.100333.
- 25. Liang H, Yang T, Liu Z, et al. LungDiag: empowering artificial intelligence for respiratory diseases diagnosis based on electronic health records, a multicenter study. MedComm. 2025;6(1):e70043. 10.1002/mco2.70043.
- 26. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94–8. 10.7861/futurehosp.6-2-94.
- 27. Mohsin Khan M, Shah N, Shaikh N, et al. Towards secure and trusted AI in healthcare: a systematic review of emerging innovations and ethical challenges. Int J Med Inform. 2025;195:105780. 10.1016/j.ijmedinf.2024.105780.
- 28. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA. 2005;293(10):1197–203. 10.1001/jama.293.10.1197.
- 29. Markus ML. New games, new rules, new scoreboards: the potential consequences of big data. J Inf Technol. 2015;30(1):58–9. 10.1057/jit.2014.28.
- 30. Parikh RB, Obermeyer Z, Navathe AS. Regulation of predictive analytics in medicine. Science. 2019;363(6429):810–2. 10.1126/science.aaw0029.
- 31. Patel SA, Covell MM, Patel S, et al. Advancing endovascular neurosurgery training with extended reality: opportunities and obstacles for the next decade. Front Surg. 2024;11:1440228. 10.3389/fsurg.2024.1440228.
- 32. Lastrucci A, Iosca N, Wandael Y, et al. AI and interventional radiology: a narrative review of reviews on opportunities, challenges, and future directions. Diagnostics (Basel). 2025;15(7):893. 10.3390/diagnostics15070893.
- 33. Capelli G, Verdi D, Frigerio I, et al. White paper: ethics and trustworthiness of artificial intelligence in clinical surgery. Art Int Surg. 2023;3:111–22. 10.20517/ais.2023.04.
- 34. Mishori R. Artificial intelligence technology in healthcare and the digital divide. In: Adirim T, editor. Digital health, AI and generative AI in healthcare. Cham: Springer; 2025. pp. 139–47. 10.1007/978-3-031-83526-1_11.
- 35. d’Elia A, Gabbay M, Rodgers S, et al. Artificial intelligence and health inequities in primary care: a systematic scoping review and framework. Fam Med Community Health. 2022;10(Suppl 1):e001670. 10.1136/fmch-2022-001670.
- 36. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;13(3):319–40. 10.2307/249008.
- 37. Crigger E, Reinbold K, Hanson C, et al. Trustworthy augmented intelligence in health care. J Med Syst. 2022;46(2):12. 10.1007/s10916-021-01790-z.
- 38. Chen C, Hu W, Wei X. From anxiety to action: exploring the impact of artificial intelligence anxiety and artificial intelligence self-efficacy on motivated learning of undergraduate students. Interact Learn Environ. 2025;33(4):3162–77. 10.1080/10494820.2024.2440877.
- 39. Peikari HR, Shah TR, Lo MH. Patients’ perception of the information security management in health centers: the role of organizational and human factors. BMC Med Inform Decis Mak. 2018;18(1):102. 10.1186/s12911-018-0681-z.
- 40. Chaddad A, Wu Y, Desrosiers C. Federated learning for healthcare applications. IEEE Internet Things J. 2024;11(5):7339–58. 10.1109/JIOT.2023.3325822.
- 41. Abufadda M, Mansour K. A survey of synthetic data generation for machine learning. In: 2021 22nd International Arab Conference on Information Technology (ACIT), Muscat, Oman. 2021. pp. 1–7. 10.1109/ACIT53391.2021.9677302.

