Journal of Multidisciplinary Healthcare
. 2025 Sep 1;18:5405–5419. doi: 10.2147/JMDH.S541271

Ethical and Legal Governance of Generative AI in Chinese Healthcare

Jinrun Jia, Shiqiao Zhao
PMCID: PMC12412760  PMID: 40917542

Abstract

The application of generative artificial intelligence (AI) technology in the healthcare sector can significantly enhance the efficiency of China’s healthcare services. However, risks persist in terms of accuracy, transparency, data privacy, ethics, and bias. These risks manifest in three key areas: the potential erosion of human agency; issues of fairness and justice; and questions of liability and responsibility. This study reviews and analyzes the legal and regulatory frameworks established in China for the application of generative AI in healthcare, together with the relevant academic literature. Our findings indicate that while China is actively constructing an ethical and legal governance framework in this field, the regulatory system remains inadequate and faces numerous challenges: lagging regulatory rules; an unclear legal status for AI in laws such as the Civil Code; immature standards and regulatory schemes for medical AI training data; and the lack of a coordinated regulatory mechanism among government departments. In response, and in light of the latest developments in generative AI in China, this study constructs a governance framework for generative AI in the Chinese medical field from both legal and ethical perspectives, including enhancing algorithm transparency, standardizing medical data management, and promoting AI legislation. As AI technology continues to evolve, more diverse technical models will emerge. To address the risks associated with medical AI, this study further proposes establishing a global AI ethics review committee to promote the formation of internationally unified ethical and legal review mechanisms.

Keywords: generative artificial intelligence, healthcare, policy, law, ethics

Introduction

Recent years have seen rapid advances in AI, driving disruptive changes in traditional production systems and the very fabric of human society. For instance, AI systems are increasingly being integrated into pathology and diagnostic testing.1 Notably, in November 2022, OpenAI officially launched ChatGPT, which swiftly gained 100 million users. Generative AI, a widely discussed branch of AI capable of creating original content in response to user requests, has attracted considerable attention.

During the COVID-19 pandemic, AI models capable of generative tasks, such as GPT-3, were widely utilized for various medical applications. Conversational chatbots took on some of the duties of medical professionals.2 These models generated considerable value in emergency medicine.3

By January 2025, Chinese generative AI models such as DeepSeek offered advanced capabilities at significantly lower cost and were released as open-source software. Together with large-scale language modeling, this type of AI is revolutionizing the medical industry.4

While traditional AI is typically limited to single tasks such as clinical documentation and decision support, generative AI offers a broader range of applications. It can scan medical images, analyze MRIs, X-rays, and tissue samples, provide tumor diagnosis, and offer sleep and fitness advice, among other capabilities.

Generative AI in healthcare has the potential to automate many tasks previously thought achievable only by humans.5 Furthermore, it can reduce healthcare costs by enhancing diagnostic accuracy6 and minimizing unnecessary tests and treatments.7 Additionally, generative AI can optimize the allocation of medical resources, thereby alleviating the resource mismatch characterized by “overcrowded tertiary hospitals and underutilized community hospitals”. Despite this promising potential, significant concerns remain, including the accuracy of generated content, transparency, data privacy, ethics, bias, and regulatory compliance. Technological development should enhance human well-being within reasonable limits, rather than restricting human living space and increasing the risks to human survival.

In recent years, academic research on the ethical and legal governance of medical AI risks has steadily grown. This body of work can be divided into three main areas:

First, research on improving the credibility and explainability of medical AI systems, which proposes creative technical solutions. For example, some scholars have introduced a trustworthy AI scoring system to assess privacy protection and credibility in medical AI applications;8 others have explored the synergistic development of blockchain and AI in healthcare;9 and semi-structured interviews with AI experts suggest that developing ethically robust medical AI requires a holistic approach rather than confinement to product guidelines.10

Second, research on ethical risks and countermeasures in medical AI applications from multiple perspectives. Specifically, ethical analyses have been conducted on the fundamental principles embedded in medical AI guidelines;11 through questionnaire surveys, scholars have proposed that medical personnel should receive training in both AI operational capabilities and ethical accountability;12,13 based on Catholic ethics, certain studies emphasize that medical AI must respect the irreplaceability of human judgment;14 six major ethical and legal issues in medical AI development—including privacy and bias—have been systematically discussed.15

Third, research on the ethical and legal governance of medical AI risks from national and global perspectives. This encompasses comparative studies of global regulatory frameworks16–18 and country-specific analyses (eg, Tanzania,19 South Africa,20 Spain21), highlighting contextualized governance approaches.

However, little of this literature specifically addresses China’s regulatory landscape for generative AI applications in healthcare.

This study primarily focuses on legal and ethical analysis, while also addressing specific clinical and technical impacts. Through an in-depth analysis of the actual deployment of generative artificial intelligence in China’s medical field, this study explores the potential risks associated with the technology. The objective is to examine the shortcomings of existing regulatory measures based on a review of relevant regulatory documents and academic literature on generative artificial intelligence in China’s healthcare sector. These shortcomings include the lag in legal rule-making, challenges in administrative regulation, and the lack of uniformity in relevant standards. Ultimately, this study proposes strategies to address these issues from both ethical and legal perspectives.

Materials and Methods

This paper reviews medical generative AI currently deployed in China. Numerous generative AI applications are now in use across the Chinese medical sector. For instance, over 100 AI products developed by United Imaging have been implemented in more than 4,000 medical institutions nationwide, supporting auxiliary diagnosis, surgical treatment, medical record documentation, and scientific research. Guangdong Maternal and Child Health Hospital has completed the localized deployment of the DeepSeek-R1 large model.22 Furthermore, the Fifth Affiliated Hospital of Southern Medical University has fully integrated DeepSeek-R1 with its Clinical Laboratory Intelligence Management System, enhancing intelligence across fields including blood, urine, and genetic testing; this integration is expected to improve diagnostic efficiency by 20–30% and raise the standardized coverage rate of quality control to over 90%.23 Given the large number of medical generative AI products on the market, this study selected three widely used product categories from both general-purpose and specialized products for listing and brief analysis. See Table 1 for specific product names and features.

Table 1.

Large-Scale Artificial Intelligence Models in the Chinese Healthcare Sector

| Type | Product Name | Product Features | Application Scenarios |
| --- | --- | --- | --- |
| Comprehensive medical large model | Baidu Lingyi Large Model | Integrates a wealth of medical literature, electronic health records, and other data to support multimodal medical diagnosis and treatment. | All scenarios in medical diagnosis and treatment activities, including diagnosis, auxiliary treatment, drug research and development, and health care. |
| | Tencent Miying | Incorporates large language models to strengthen multimodal analysis and report generation, enabling precise dietary plans for patients. | As above. |
| | Alibaba Health Medical Large Model | Supports text, image, audio, and video interaction, with powerful and sophisticated reasoning capabilities. | As above. |
| Medical sub-field large model | Huawei Pangu Drug Molecule Large Model | A pre-trained large model designed for drug research and development, built in collaboration with the Shanghai Institute of Materia Medica, Chinese Academy of Sciences. | Drug development scenarios |
| | iFlytek Starfire Medical Large Model | Demonstrates high precision in medical record quality control and passed the national medical licensing examination, building a solid technical foundation for general-practice auxiliary diagnosis. | Disease diagnosis scenarios |
| | uAI Yingzhi Large Model | Achieves remarkable accuracy in CT/MRI multi-cancer screening and supports a fully automated “scan-analyse-report” workflow. | Fully automated cancer screening scenarios |

This study examines the ethical and legal literature pertinent to the application of generative AI in healthcare, with content analysis as the primary analytical method. Based on this examination, we have summarized the key ethical and legal issues frequently discussed, reflecting recent advancements in China’s generative AI industry.

In addition, this study searched the “Official Website of the Chinese Government” and the “Wolters Kluwer Legal & Regulatory” database for legal and regulatory documents incorporating the keywords “medical artificial intelligence”, “generative artificial intelligence”, and “artificial intelligence” between 2020 and 2025. See Appendix 1 for details. Subsequently, the search results were individually reviewed, and documents directly relevant to this study were manually curated. The reviewed documents include, among others, the “Rules for the Regulation of Internet Diagnosis and Treatment (Trial)”;24 the “Guiding Principles for the Classification and Definition of AI Medical Software Products”;25 the “Code of Ethics for the New Generation of AI”;26 the “Interim Measures for the Administration of Generative AI Services”;27 the “Reference Guidelines for AI Scenarios in Healthcare”;28 and the “Law of the People’s Republic of China on Personal Information Protection”,29 as well as the “Civil Code of the People’s Republic of China”.30 A systematic review was then performed on these legal and regulatory documents. In the discussion section, a cross-regional comparative analysis was conducted to evaluate legislative frameworks for medical AI applications in the EU and the US, with specific reference to the EU Artificial Intelligence Act, the General Data Protection Regulation, and the Health Insurance Portability and Accountability Act. By studying the timing and process of legislation in these jurisdictions, the extent to which China’s AI legislation lags behind becomes apparent.

Lastly, integrating these findings, the article evaluates the regulatory gaps in the application of generative AI within China’s healthcare sector and proposes measures for enhancement.

Dilemmas of Generative Artificial Intelligence Applications in Healthcare

Human Subjectivity

Patient’s Autonomy

Firstly, patient autonomy is manifested in patients’ ability to reject AI-based medical interventions. Autonomy is one of the four cornerstone principles of medical ethics, affirming individuals’ right to act freely and make their own decisions. Consequently, patients possess the inherent right to decline medical advice that they do not endorse. However, the integration of AI into healthcare may undermine patient autonomy. As generative AI technology becomes increasingly prevalent in the healthcare industry, patients may find themselves unable to effectively scrutinize the legitimacy and rationality of AI-driven medical decisions.

Generative AI systems often operate within an “algorithmic black box”,31 where the decision-making process remains undisclosed, unknowable, unexplainable, and inherently uncertain.

(i) The algorithmic decision-making process is not transparent. Developers do not disclose the core algorithms or the sources of the training data for generative AI, and the processing and selection of that data are opaque. Complex AI systems operate like black boxes, and it is difficult to understand how they reach their conclusions.32

(ii) The computing process of generative AI is unpredictable. Generative AI distinguishes itself from traditional AI in that its computational process is not pre-designed but relies on “self-learning” mechanisms to produce results. The steps and underlying logic of these calculations are not readily apparent to the user. Furthermore, complex computations often require the concurrent execution of multiple algorithmic models, which collaborate to produce an answer. Users have no means to ascertain how operations are distributed among these models or what synergistic processing occurs.

(iii) Some behaviors of generative AI remain unexplained. For instance, the phenomenon of “emergence”33 is prevalent in generative AI: large models, grounded in vast training data and numerous parameters, frequently exhibit capabilities that surpass human expectations on tasks for which they have not been explicitly trained. These emergent capabilities span a diverse array of language models, task varieties, and experimental contexts. An exemplary case is Sora, OpenAI’s text-to-video model, which can create novel, unprogrammed knowledge or patterns by amalgamating a multitude of nonlinear mapping functions. It autonomously acquires an understanding of intricate concepts by learning the relationships and patterns embedded within the data. Nonetheless, to date, this behavior has been observed only at specific computational scales.34 Humans lack a comprehensive understanding of the mechanisms underlying this emergence. Besides the size of the data, are there other factors that can propel breakthroughs in AI performance? The answer remains elusive, and the outputs of generative AI are not fully interpretable.

A multitude of intricate factors affect the transparency and reliability of generative AI applications in the healthcare domain. Users who question the rationale behind machine decisions often face difficulties in obtaining technical-level information. This practical obstacle may perpetuate medical paternalism and undermine patient autonomy.35

Secondly, patient autonomy is evident in their control over personal health data.

In the legal framework, accessing personal information necessitates explicit and voluntary consent from the individual, embodying the right to informed consent and serving as a crucial indicator of human autonomy. Successive versions of generative AI require vast quantities of data for model training. Such data originate from diverse sources, including patients’ clinical treatments.36 Subsequent medical decisions concerning patients are also based on the processing of these data. Many medical generative AIs actively gather patients’ health data while providing essential services, potentially infringing upon their privacy.37 Consequently, patients’ personal information may become accessible to large technology companies.

The unrestricted collection of patients’ personal information by medical generative AI undermines patient autonomy. Although some smart applications incorporate consent forms within their systems to request the use of patients’ data, these electronic agreements are often lengthy, featuring complex and confusing terminology that is challenging for the average person to comprehend. Consequently, in practice, most individuals do not meticulously read the agreement’s content, leaving them unaware of the potential future uses of their data and the risks associated with data breaches. The validity of users’ “consent” in such scenarios is questionable.

Physician Autonomy

Generative AI, with its constant updates and advanced deep learning capabilities, has revolutionized both the labor market and medical practice by mimicking human analytics.38 When widely accepted by society and unchallenged within the realm of medical science, medical AI presents fewer risks, alleviates burdens, and potentially offers superior treatment outcomes. However, this acceptance will inevitably result in an objective constriction of healthcare professionals’ autonomy in decision-making.39 Some American commentators foresee that the utilization of medical AI will ultimately become the new benchmark for medical duty of care across the United States.40 Consequently, doctors in the future may increasingly rely on AI in their medical endeavors, without adequately scrutinizing the principles underlying its recommendations.41 Medical professionals are increasingly reliant on generative AI for making treatment and diagnostic decisions. Some medical personnel may unquestioningly adopt diagnostic results and treatment plans generated by AI, potentially doing so to avoid taking responsibility. In such cases, doctors who adhere to AI decisions become mere executors and may face the risk of being gradually replaced by AI. The control of medical activities, traditionally held by medical personnel, is being transferred to generative AI. This shift reverses the traditional roles of machines and humans in the medical field.

Alienation of the Doctor-Patient Relationship

Firstly, the doctor-patient relationship is gradually being instrumentalized. The application of generative artificial intelligence in the medical field potentially leads to the estrangement of the doctor-patient bond. In traditional medical practice, this relationship emphasizes human connections. With the introduction of generative AI, doctor and patient behavior is now governed by standardized systems. The process of diagnosing and treating diseases has become increasingly standardized, generating clear and comprehensive records. These records can serve as evidence for identifying and assigning responsibility in doctor-patient disputes. Given the increasing frequency of such disputes, the use of generative AI to mitigate risk has become a necessary choice for many hospitals and doctors. Consequently, the original human-centric doctor-patient relationship has evolved into a technological one dominated by tools. In the future, both doctors and patients may be reduced to digital symbols and subjects of digital management, reflecting a loss of autonomy in the doctor-patient relationship.

In 2025, the Putuo District Market Supervision Bureau in Shanghai announced a landmark case involving drug safety violations. The pharmacy in question sold 350 bottles of human serum albumin (a blood plasma protein used in clinical treatment) over four days. Electronic prescriptions issued by the internet hospital exhibited consistent abnormalities in prescribed medication quantities and diagnosed symptoms. Investigations revealed that the pharmacy had used the internet hospital’s AI technology to generate prescriptions, obtaining a large volume of electronic prescriptions while forging the associated records and receipts. Notably, this case is not an isolated incident; numerous instances exist of online consultations and remote prescriptions being operated entirely by AI systems without meaningful physician or pharmacist oversight. The doctor-patient relationship thus tends to be instrumentalized.

Secondly, the doctor-patient relationship is becoming de-emotionalized. The evolution of generative AI drives medical behavior towards greater precision and nuance, but this comes at a cost. Most medical personnel become “cogs” in the machine’s decision-making process, following artificial intelligence instructions in their medical activities rather than directly engaging with the patient. Healthcare is a field that necessitates human emotions, and medical activities transcend mere technical operations. As Hippocrates famously stated, “Cure sometimes, treat often, comfort always”. Doctors require not only technical proficiency but also the ability to provide patients with a “human touch” grounded in their own emotions. When appropriate, they should offer emotional support and medical advice infused with a “human face”. Otherwise, the absence of a “genuine” relationship may lead to decreased social interaction with the patient, fostering a sense of isolation.42 Artificial intelligence cannot replicate the expertise of a true professional and lacks the mental state and human understanding that humans possess.43 While generative AI brings immense convenience to the healthcare industry, it is also incrementally contributing to the de-emotionalization of the doctor-patient relationship. Studies indicate that patients harbor concerns about the use of AI in healthcare.44 The industry’s over-reliance on generative AI may hinder doctors’ ability to provide emotional support to their patients, and patients’ ability to emotionally resonate with their doctors. Consequently, doctors and patients become individual symbols on the technologized medical assembly line, and the doctor-patient relationship is progressively alienated by the absence of emotion, shifting from a people-centered to a tool-centered paradigm.

Fairness and Justice Issues

The application of generative AI in healthcare has the potential to alleviate, to some extent, the issue of uneven regional distribution of healthcare resources. However, it may also exacerbate group inequities in accessing these resources.

Firstly, incomplete generative AI algorithms and biased data may trigger group biases. Inevitably, the designers of these algorithms may inadvertently incorporate their own value-based biases. Furthermore, some data may inherently contain social biases, which AI models can inadvertently learn, perpetuate, or even amplify, thereby leading to unfair and discriminatory outcomes.45 For instance, in 2017, researchers used deep learning to identify skin cancer from images. The dataset they used, consisting of nearly 130,000 images, included less than 5% of dark-skinned individuals, making the algorithm unsuitable for dark-skinned populations.46 Similarly, the data currently used to train generative AI models, such as ChatGPT, is predominantly sourced from English-speaking regions. Consequently, the medical decisions made by these models may not apply to non-English-speaking regions, particularly violating the rights of patients in smaller language communities. Such prejudice not only affects patients’ equal access to healthcare resources but also violates their personal dignity and potentially leads to mental health issues.
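To make this representativeness problem concrete, the sketch below (in Python, with hypothetical record and field names such as `skin_type`) illustrates the kind of demographic audit that could flag the under-representation described above before a model is trained. It is an illustration under stated assumptions, not a description of the cited study’s actual pipeline.

```python
from collections import Counter

# Hypothetical dermatology training records; `skin_type` loosely follows
# the Fitzpatrick I-VI scale (field names are illustrative, not from the study).
records = [
    {"image_id": "img_001", "skin_type": "I"},
    {"image_id": "img_002", "skin_type": "II"},
    {"image_id": "img_003", "skin_type": "II"},
    {"image_id": "img_004", "skin_type": "V"},
    # ... in the dermatology case above, roughly 130,000 records
]

def audit_representation(records, attribute, min_share=0.10):
    """Print each group's share of the dataset and flag groups below
    `min_share` (10% here is an arbitrary illustrative threshold,
    not a regulatory standard)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    shares = {}
    for group, n in sorted(counts.items()):
        share = n / total
        shares[group] = share
        status = "UNDER-REPRESENTED" if share < min_share else "ok"
        print(f"{attribute}={group}: {n} records ({share:.1%}) {status}")
    return shares

audit_representation(records, "skin_type")
```

An audit of this kind only surfaces the imbalance; deciding whether to collect more data, reweight, or restrict the model’s claimed scope remains a human governance decision.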

Secondly, digitally disadvantaged groups, particularly older adults, face limitations in their ability to utilize generative AI, which prevents them from fully benefiting from advances in healthcare technology. Furthermore, in the statistical calculations underlying generative AI, the large elderly population is often underrepresented due to scarce data, leading machines to overlook them when making decisions. The healthcare industry’s responsibility is to provide life and health protection to all individuals in society. The purpose of technological development is to enhance human welfare, not to exclude the elderly; excluding them would constitute a gross violation of the interests of digitally vulnerable groups.

Furthermore, there is a risk of gender inequality in the application of AI in healthcare. Researchers frequently overlook gender differences, resulting in inadequate and incomplete data collection and application concerning women.47 For example, a study attempted to use smartphones to collect data from Parkinson’s disease patients: of the 43 participants, only 8 were female, accounting for just 18.6%.48 Yet gender differences affect the manifestation of Parkinson’s disease.49 A comparison of AI model performance found that models performed optimally when the training dataset had a 50% male-to-female ratio.50 Ideally, by examining gender differences in medical diagnosis and treatment, researchers can determine how to improve model performance and address these differences, thereby advancing medical care. Strengthening the identification and correction of gender bias in generative AI is therefore crucial not only for securing fair medical rights and interests for women but also for the sustainable development of society. For instance, integrating gender differences in the medical field into the algorithm design and data processing of generative AI can enhance the precision of personalized medical services.
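As an illustration of the 50% ratio finding cited above, the following sketch rebalances a training set by sex through downsampling. The record format and field names are assumptions for illustration, and in practice the gain in balance must be weighed against the loss of training data.

```python
import random

def balance_by_attribute(records, attribute, seed=42):
    """Downsample each group to the size of the smallest one so that
    every value of `attribute` is equally represented (eg a 50/50 sex ratio)."""
    rng = random.Random(seed)
    groups = {}
    for r in records:
        groups.setdefault(r[attribute], []).append(r)
    n = min(len(members) for members in groups.values())  # smallest group size
    balanced = []
    for members in groups.values():
        balanced.extend(rng.sample(members, n))
    rng.shuffle(balanced)
    return balanced

# Hypothetical sensor records mirroring the imbalance in the study above:
# 35 male participants, 8 female.
records = [{"patient": f"m{i}", "sex": "M"} for i in range(35)] + \
          [{"patient": f"f{i}", "sex": "F"} for i in range(8)]
balanced = balance_by_attribute(records, "sex")
print(len(balanced))  # 16: eight male and eight female records
```

Downsampling is only one option; oversampling the minority group or reweighting the loss are alternatives when discarding data is too costly.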

Currently, individuals encounter challenges in addressing these fairness and justice issues, which makes it difficult for them to seek remedies for their rights. Firstly, the undisclosed, complex, and unexplainable nature of the algorithms makes it difficult for individuals who have experienced discrimination by generative AI to recognize unfair treatment. Secondly, individuals’ limited power makes it difficult for them to provide compelling evidence of rights violations. This circles back to the first issue: human agency is compromised, and ordinary people find themselves trapped in the “medical algorithmic cage” of generative AI, struggling to protect their rights to health and safety.

Uncertainty Regarding Responsibility Assumption for the Technology

Patient “Xu Shuihe” was admitted to the defendant’s rehabilitation center due to dysphagia with postprandial choking sensations persisting for two months. The hospital employed the Da Vinci robotic surgical system to perform the procedure, which required conversion to thoracotomy during surgery. Postoperative complications arose, resuscitation failed, and the patient died. The court determined that the medical institution had failed to adequately communicate the surgical risks to the patient and their family, and that the relevant medical records were incomplete. Notably, while these deficiencies were not directly causative of the patient’s death, medical procedures inherently involve uncertainty, and the integration of artificial intelligence has amplified this unpredictability, which may result in unexplainable clinical decisions.

The advent of generative AI in the healthcare industry poses a pivotal question: who should shoulder the responsibility? The design, decision-making, and application of medical AI encompass various stakeholders. Medical outcomes are now influenced not only by doctors’ autonomous decisions but also by a multitude of entities, such as designers of medical algorithmic models, users, and their supervisors. Each of these stakeholders exerts a varying degree of influence on the design and utilization of the technology. Consequently, medical errors occurring in the context of generative AI often involve “multiple causes and one effect” or “multiple causes and multiple effects”, making it challenging to pinpoint the layers of causality. When medical harm arises due to a decision-making error, it becomes difficult to ascertain which of the complex algorithms or stakeholders is responsible for the decisive harmful act. For instance, at the data collection stage, it is nearly impossible to eliminate hidden bias and discriminatory information from the data. Therefore, if adverse consequences arise, it is unjustifiable to solely assign blame to the data collector or processor.

Moreover, even if it is established that generative AI directly caused the medical malpractice, can it be considered a responsible party? Currently, globally, it is challenging for AI to be considered a liable subject under civil law. For example, defining AI as a civil subject would essentially grant it the same legal status as humans through legislation, potentially undermining human subjectivity. Additionally, AI lacks independent property and the financial capacity to compensate for damages. Given that generative AI cannot be held liable for the medical malpractice it causes, the question arises: who should bear the responsibility for compensation? These ethical and legal questions remain unresolved and are critical issues that require attention in the current application of generative AI in the medical industry.

China’s Regulatory Status for Generative AI Applications in the Medical Field

In recent years, China has enacted and released several laws and regulatory documents to explore the regulation of generative AI in healthcare. As described in the Materials and Methods section, this study searched the “Official Website of the Chinese Government” and the “Wolters Kluwer Legal & Regulatory” database for legal and regulatory documents containing the keywords “medical artificial intelligence”, “generative artificial intelligence”, and “artificial intelligence” between 2020 and 2025; the search results were then individually reviewed and the directly relevant documents manually curated. The results are presented in Table 2.

Table 2.

China’s Regulatory Status for Generative AI Applications in the Medical Field

| Type | Name | Release Time | Issuing Organisation | Key Content | Functions |
| --- | --- | --- | --- | --- | --- |
| Laws | Law of the People’s Republic of China on the Protection of Personal Information | 2021.08 | Adopted at the 30th Meeting of the Standing Committee of the National People’s Congress | Article 14 | Safeguards the patient’s right to informed consent. |
| | Civil Code of the People’s Republic of China | 2020.05 | National People’s Congress | Article 2 | Generative AI does not possess legal civil subject status. |
| Other normative documents | Interim Measures for the Management of Generative Artificial Intelligence Services | 2023.07 | National Internet Information Office, etc. | Article 3 | Actively encourages the development of generative AI and establishes clear regulatory principles to govern its use. |
| | | | | Article 11 | Protects the security of patients’ personal information. |
| | | | | Article 12 | Ensures patients’ right to know about the involvement of generative AI in diagnostic and therapeutic activities. |
| | Rules for the Regulation of Internet Diagnosis and Treatment (for Trial Implementation) | 2022.02 | General Office of the National Health Commission | Article 9 | Guarantees the patient’s right to informed consent. |
| | | | | Article 13 | Establishes the subjective status of physicians in diagnostic and therapeutic activities. |
| | Guiding Principles for Defining the Classification of Artificially Intelligent Medical Software Products | 2021.07 | State Drug Administration | Management attributes and management categories of AI medical software products | Guarantees the safety and effectiveness of AI medical products. |
| | Reference Guidelines for Artificial Intelligence Application Scenarios in the Health and Wellness Industry | 2024.11 | General Office of the National Health Commission | Clarifies the application of AI in 84 specific scenarios under 4 categories | Emphasizes that AI mainly plays an auxiliary role in the medical field. |
| | Code of Ethics for the New Generation of Artificial Intelligence | 2021.09 | National Professional Committee on Governance of New-Generation Artificial Intelligence | Six basic principles and sub-segment norms | Ethical norms to be observed for the application of generative AI in China’s medical field. |

Regulatory Issues of Generative Artificial Intelligence Applications in Healthcare in China

In recent years, the application of generative AI in medical contexts has been subject to multifaceted legal and ethical frameworks, reflecting China’s commitment to regulating medical artificial intelligence. Nevertheless, China’s regulatory approach to generative AI in the medical field remains incomplete.

Firstly, regulatory frameworks remain fragmented, lacking both specificity and practical applicability. By contrast, many countries are actively promoting dedicated legislation to advance AI applications in sectors such as healthcare. Notably, in May 2024, the EU adopted the world’s first comprehensive AI regulatory framework, the EU Artificial Intelligence Act, which establishes a risk-based regulatory approach. While the United States has yet to enact federal AI legislation, legislative activity at the state level has surged; as of December 2024, 43 US states had passed AI-related laws covering healthcare. South Korea enacted the Basic Act on the Development and Establishment of Trust in Artificial Intelligence in December 2024, becoming the second jurisdiction after the EU to implement comprehensive AI regulation. Moreover, regulatory priorities for generative AI in healthcare vary internationally: EU legislation emphasizes technological safety, the US prioritizes innovation, and China has yet to establish dedicated AI statutes. Given the borderless nature of AI development, divergent regulatory philosophies and enforcement mechanisms across nations may hinder international cooperation, data sharing, and market competition as healthcare technologies advance.

Secondly, concerning liability allocation for generative AI applications in medicine, China’s Civil Code and other statutes neither confer legal personhood on AI systems nor explicitly define AI’s legal status. Although the National Science and Technology Ethics Committee released the Ethical Guidelines for the Development of Virtual Reality Technology (VR Ethics Guidelines) on 12 May 2025, existing ethical frameworks and legal systems still fail to specify liability attribution mechanisms for AI-driven medical diagnoses. In practice, China lacks clear rules for identifying responsible parties in medical incidents involving AI.

Moreover, the heterogeneous nature of medical data raises challenges such as incomplete or inaccurate records, which undermine the training efficacy and operational performance of AI models. Article 8 of the Interim Measures for the Administration of Generative Artificial Intelligence Services (Generative AI Measures) mandates explicit, actionable annotation protocols and quality assessments for training data. However, China has yet to establish unified data quality standards or robust governance mechanisms. Additionally, patient data security remains inadequately protected. By comparison, the EU implemented the General Data Protection Regulation on 25 May 2018, enforcing strict controls over health data and safeguarding patients’ informed consent rights, and the US Health Insurance Portability and Accountability Act similarly defines medical data privacy and security standards. China’s Personal Information Protection Law and Cybersecurity Law, by contrast, lack targeted provisions for medical data protection, and their safeguards are insufficiently enforced.

Finally, the regulation of generative AI in the medical field involves multiple agencies, including health, pharmacovigilance, cyber information, and medical insurance departments. Each department has distinct responsibilities, potentially leading to coordination challenges and regulatory inconsistencies. While China has established institutions such as the China AI Development and Safety Research Centre of the AI Safety Institute (AISI), and numerous government departments, organizations, and enterprises are paying attention to and investing in the safety governance of this technology, an effective collaborative regulatory mechanism has yet to materialize. Consequently, the existing regulatory system struggles to address the complex application scenarios of medical AI.

Improving the Regulation of Generative AI Applications in Healthcare

The application of generative artificial intelligence in medicine is profoundly transforming modern diagnostic and treatment paradigms. In the face of potential future risks, ethics and law constitute two interrelated governance frameworks crucial for mitigating technological risks in healthcare. First, ethical principles may evolve into legal norms; for instance, to clarify medical accountability, ethics mandate traceability of AI-generated decisions. Subsequently, the European Union designated medical AI as “high-risk” in its Artificial Intelligence Act, mandating algorithmic transparency51 and explainability. Second, legal frameworks serve to ensure ethical compliance. As a rigorous and authoritative regulatory tool, law enforces responsibility attribution while compelling stakeholders to fulfill ethical obligations. The relationship between ethical and legal governance of generative AI applications in the medical field is shown in Figure 1.

Figure 1. Relationship diagram between generative artificial intelligence ethics and legal governance in the medical field.

Ethical Governance

The trend toward the utilization of generative AI in healthcare is irreversible and necessitates regulation to address potential risks. However, these regulatory measures must not impede technological advancement. The formulation of laws is a time-consuming process. Ethical governance, by embedding values, can steer the technology toward positive development, making it a crucial governance option, particularly in the nascent stages of a technology. It should be clarified that the ethics of medical AI encompasses both bioethics and AI ethics, including ensuring technological safety and fairness and respecting patient privacy.52

Firstly, it is evident that doctors must retain clear and independent decision-making authority. Humans remain the sole entities qualified to make medical decisions. In the healthcare industry, with the continual updating and iteration of generative AI, medical intelligence systems may offer accurate advice on intricate medical issues. Nevertheless, the ultimate medical decision must be evaluated and confirmed by a medical professional.53 Generative AI that contravenes this principle cannot be employed in the medical field. Through explicit delineation of doctors’ independent decision-making authority, we can prevent erosion of clinical autonomy and detachment in physician-patient relations during the era of generative AI.

Secondly, the implementation of generative AI in healthcare necessitates a focus on embedding values. The integration of AI in healthcare demands not only technical proficiency but also “value flexibility”—the capability to navigate between algorithmic suggestions and human values in clinical decision-making.54 In the future, there is a need for active discourse on the values that ought to be embedded within generative AI in healthcare. These values should be prioritized and sequenced accordingly. Naturally, “human-centeredness” should rightfully take precedence as the highest value. All complex medical decisions must adhere to this principle, which principally functions to: (1) safeguard human agency by preventing autonomy erosion among patients and clinicians, and (2) protect minority group interests through equitable value advocacy that addresses algorithmic disparities affecting marginalized populations. To ensure ethical AI deployment, technical designs must undergo human-centric compliance testing, with generative AI systems exhibiting group discrimination or bias excluded from clinical implementation to reinforce human agency.

Thirdly, enhancing algorithmic transparency in medical AI is critical. While protecting core intellectual property, model architectures should be rendered maximally transparent, with the provenance of training and fine-tuning data disclosed. For instance, ChatGPT and DeepSeek have partially open-sourced their foundational model frameworks, yet current transparency measures remain inadequate to guarantee users’ right to informed decision-making. Future efforts must mandate disclosure of technological limitations, data biases, and knowledge gaps in medical AI systems. Additionally, safeguarding user autonomy requires explicit documentation of model boundaries, training data constraints, and potential knowledge blind spots. When collecting patient data, generative AI systems are ethically obligated to secure informed consent through transparent communication of data usage purposes and risks.
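One concrete form such disclosure could take is a machine-readable “model card” shipped alongside the system. The sketch below is our own illustration of such a record; the field names and example values are assumptions, not a schema mandated by any of the measures discussed here.

```python
from dataclasses import dataclass

@dataclass
class MedicalModelCard:
    """Illustrative disclosure record accompanying a deployed medical model."""
    model_name: str
    version: str
    training_data_sources: list   # provenance of training/fine-tuning data
    data_cutoff: str              # knowledge boundary users should know about
    known_limitations: list      # tasks the model must not be relied on for
    known_biases: list           # documented data or population biases
    intended_use: str

card = MedicalModelCard(
    model_name="example-clinical-assistant",  # hypothetical system
    version="0.9",
    training_data_sources=["public biomedical literature",
                           "de-identified EHR extracts"],
    data_cutoff="2024-06",
    known_limitations=["not validated for pediatric dosing",
                       "text input only"],
    known_biases=["dermatology images skew toward lighter skin types"],
    intended_use="decision support only; final judgment rests with the physician",
)
print(card.known_limitations)
```

Publishing such a card does not open the algorithmic black box itself, but it gives patients, physicians, and regulators a fixed, auditable statement of what the system was trained on and where it should not be trusted.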

Fourthly, it is crucial to establish ethical governance guidelines for the application of generative AI in healthcare in China. The World Health Organization (WHO) has published guidelines on the Ethical Governance of Artificial Intelligence in Healthcare,55 offering recommendations for ethical policies, principles, and practices in AI’s application in healthcare. The objective is to safeguard patients’ rights and interests while mitigating and eliminating ethical risks in the development of AI in healthcare. Emphasis is placed on protecting human autonomy, transparency, and accountability. These guidelines possess significant empirical value for the ethical review of China’s generative AI applications in healthcare. Moving forward, the Chinese government should endeavor to develop ethical governance guidelines tailored to China’s industrial development in generative AI for healthcare.

Lastly, a specific ethical review mechanism should be established. On one hand, medical institutions should form a dedicated ethical review committee to scrutinize the ethical implications of generative AI applications in the medical field. This review encompasses compliance with fundamental ethical principles, potential infringement of patients’ rights and interests, and bias. Doctors must maintain decision-making autonomy, and AI-generated decisions should be critically examined. On the other hand, public oversight should be valued: an ethical complaint and reporting mechanism should be put in place to encourage public scrutiny of generative AI applications in the medical field, enabling timely review of technical details that violate ethical principles.

Legal Regulation

As ethical governance frameworks for generative artificial intelligence in the medical field gain clarity, legal oversight necessitates progressive refinement. Whereas ethical governance establishes normative guidelines, the law can impose stricter, more explicit, and operationally effective regulatory measures.

Firstly, establishing a robust system of laws and regulations is crucial. Laws can set clear, mandatory requirements for technology development, ensuring a balance between rapid innovation and safe development. By drafting laws and regulations tailored to the medical application of generative AI, we can delineate the responsibilities of AI developers, medical institutions, data providers, and other stakeholders in the event of medical accidents. For example, it should be stipulated that the developer is responsible for the safety and reliability of the algorithm, while medical institutions are responsible for the process of using AI and reviewing the results. In cases where errors are caused by data quality issues, the data provider should bear the corresponding responsibility. At the same time, efforts should be directed towards gradually exploring the legal status of AI as “subjects” and considering the establishment of an AI product liability insurance system to address potential compensation issues that may arise from liability claims.

Secondly, the standardization of medical data management is necessary. Standardized data protocols should be established to govern the collection, storage, preprocessing, and annotation of medical data, and patients should provide explicit informed consent before data collection. Supervision and management of data quality should be strengthened, ensuring that medical institutions and data providers maintain the completeness, accuracy, and consistency of data. Strict enforcement of relevant laws and regulations, such as the Cybersecurity Law,56 the Data Security Law,57 and the Personal Information Protection Law, is crucial, as is strengthening the security protection of medical data at all stages. Medical institutions and AI R&D companies must implement the necessary technical and management measures to safeguard against data leakage, tampering, and misuse. A data security emergency response mechanism should be established to handle data security incidents promptly.
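As a minimal sketch of what consent-gated intake could look like in code, the following illustration admits only records carrying an explicit consent flag and strips direct identifiers before further processing. The field names and identifier list are assumptions for illustration; a real pipeline would follow the statutory definitions in the laws cited above.

```python
# Hypothetical intake step: only records carrying explicit consent enter
# the training corpus, and direct identifiers are stripped first.
DIRECT_IDENTIFIERS = {"name", "id_number", "phone", "address"}  # illustrative list

def prepare_for_training(raw_records):
    """Return de-identified records from consenting patients only."""
    prepared, rejected = [], 0
    for record in raw_records:
        if not record.get("consent_given", False):  # explicit opt-in required
            rejected += 1
            continue
        clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        prepared.append(clean)
    print(f"accepted {len(prepared)}, rejected {rejected} without consent")
    return prepared

records = [
    {"name": "Zhang San", "diagnosis": "J45.0", "consent_given": True},
    {"name": "Li Si", "diagnosis": "E11.9", "consent_given": False},
]
training_set = prepare_for_training(records)
```

Stripping direct identifiers is only the first layer of de-identification; indirect identifiers and re-identification risk would also need to be managed under the applicable standards.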

Concurrently, the improvement of the regulatory coordination mechanism is vital. A joint regulatory mechanism involving multiple departments, such as health and wellness, pharmacovigilance, Internet information, and medical insurance, should be established, clarifying the division of responsibilities among each department and strengthening interdepartmental information sharing, communication, and collaboration. Cooperation between the medical industry and industries such as science and technology and the Internet should also be promoted. Regular joint meetings should be held to collaboratively address regulatory challenges in the medical application of generative AI, ultimately leading to the formulation of unified policies and standards.

Global Governance Mechanism

China must continue to actively engage in the global regulation of generative AI applications in the medical field. Currently, the more advanced generative AI technology is controlled by a limited number of large tech companies worldwide, and AI risks are not confined to individual countries. Global governance is a trend in regulating generative AI in healthcare. In October 2023, China proposed the Global AI Governance Initiative, which opposes the politicization of technology, advocates for open-source cooperation and capacity building in developing countries, and promotes technology inclusion.58 In November 2023, China, along with 28 other countries and the EU, signed the Bletchley Declaration (2023), participating in the UN resolution on AI capacity building and supporting the establishment of a multilateral governance framework.59 Fu Ying, former Vice Minister of China’s Ministry of Foreign Affairs, called for transcending geopolitical interference. She emphasized the complementary nature of China and the US in the area of AI security, suggesting a combination of technological research and development with application scenarios.60 In the future, as generative AI becomes more deeply embedded in the medical field, global dialogue and cooperation should be strengthened to ensure technological safety. While fully respecting the differences in policies and practices among countries, a consensus on the governance of generative AI in healthcare should be promoted. At the same time, an international ethics oversight committee should be established to facilitate an internationally harmonized ethical and legal governance mechanism for medical generative AI.

Specific issues and corresponding governance paths are shown in Table 3.

Table 3.

Issues and Governance Pathways for the Application of Generative Artificial Intelligence in the Medical Field

| | Question | Contents | Governance Pathway |
| --- | --- | --- | --- |
| Risk performance | The question of human autonomy | Impairing patient autonomy (eg, opaque algorithmic decision-making compromises transparency and reliability in medical generative AI systems). | Enhance the transparency of generative AI algorithms in the medical field; embed “people-centered” values into the design of AI technologies; develop ethical governance guidelines and establish an ethical review mechanism. |
| | | Control over personal health data. | Ensure patients’ informed consent for data collection. |
| | | Compromising physician autonomy. | Clarify that physicians must possess clear and independent decision-making authority. |
| | | Alienation of the physician-patient relationship (including instrumentalization and de-emotionalization). | Strengthen human agency. |
| | Issues of fairness and justice | Group bias, gender inequality, and severe infringement on the interests of digitally vulnerable groups such as the elderly. | Standardize the collection of medical data. |
| | Inadequate regulatory oversight | Uncertainty in technical accountability persists due to the absence of an explicit legal status for artificial intelligence. | Strengthen legislative frameworks to explicitly define the legal status of generative artificial intelligence. |
| | | The medical field’s data ecosystem has diverse and complex sources, which may result in incomplete or inaccurate data. | Standardize healthcare data governance and ensure patients’ informed consent for data collection. |
| | | Coordination among regulatory authorities for generative AI in the medical sector requires enhancement. | Improve regulatory coordination mechanisms. |

Conclusion

The application of generative AI in the medical field holds significant potential, yet the challenges that accompany it must be acknowledged. Firstly, it poses a threat to human subjectivity, potentially limiting the autonomy of patients and doctors and fostering alienation in the doctor-patient relationship. Secondly, issues of fairness and justice may arise: algorithmic biases and incomplete datasets could trigger group disparities, marginalize the elderly and other digitally disadvantaged groups, and exacerbate gender inequality. Thirdly, the problem of responsibility looms large. In medical activities involving generative AI, defining and assigning responsibility presents a formidable challenge fraught with controversy.

By examining the relevant laws, regulations, and normative documents formulated by the Chinese government concerning the application of generative AI in the medical field, this study concludes that there are regulatory deficiencies in this context in China. These deficiencies are primarily manifested in incomplete laws and regulations, irregular data governance, and nascent collaborative regulatory mechanisms. Without addressing these issues, hastily scaling up the application of generative AI in the medical field could lead to grave moral and legal risks.

Finally, we recommend the following: 1) The legal personality of generative artificial intelligence should be proactively clarified through legislative frameworks to address potential liability issues. 2) Promoting standardized management of medical data. Standardized data lifecycle management can effectively improve the performance of generative artificial intelligence, requiring strict control by data providers, medical institutions, and AI R&D entities while safeguarding patient data privacy. 3) Notably, the application of generative AI in the medical field constitutes a high-stakes context, necessitating the implementation of comprehensive ethical review mechanisms and legal regulatory frameworks. Public oversight must be encouraged to mitigate ethical and legal risks. 4) The establishment of a global AI ethics review committee should be promoted. It is crucial to recognize that the risks posed by medical generative AI are cross-regional, cross-cultural, and transnational, making unified global regulatory rules the future development trajectory. Given the unpredictability of future risks, exploring the establishment of a unified international ethical and legal regulatory mechanism and fostering international collaboration are imperative. We must adhere to a human-centric governance philosophy to determine a safe development pathway for medical generative AI.

However, this study still has research limitations. For example, the paper lacks interviews with patients, doctors, and other stakeholders who have accepted generative AI in healthcare.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Jackson BR, Ye Y, Crawford JM, et al. The ethics of artificial intelligence in pathology and laboratory medicine: principles and practice. Acad Pathol. 2021;8:2374289521990784. doi: 10.1177/2374289521990784
2. Fournier-Tombs E, McHardy J. A medical ethics framework for conversational artificial intelligence. J Med Internet Res. 2023;25:e43068. doi: 10.2196/43068
3. Chenais G, Lagarde E, Gil-Jardine C. Artificial intelligence in emergency medicine: viewpoint of current applications and foreseeable opportunities and challenges. J Med Internet Res. 2023;25:e40031. doi: 10.2196/40031
4. Savulescu J, Giubilini A, Vandersluis R, Mishra A. Ethics of artificial intelligence in medicine. Singapore Med J. 2024;65(3):150–158. doi: 10.4103/singaporemedj.SMJ-2023-279
5. Munch L, Bjerring JC. Can large language models help solve the cost problem for the right to explanation? J Med Ethics. 2024:jme-2023-109737. doi: 10.1136/jme-2023-109737
6. Vandemeulebroucke T. The ethics of artificial intelligence systems in healthcare and medicine: from a local to a global perspective, and back. Pflugers Arch Eur J Physiol. 2024;477:591–601. doi: 10.1007/s00424-024-02984-3
7. Elgin CY, Elgin C. Ethical implications of AI-driven clinical decision support systems on healthcare resource allocation: a qualitative study of healthcare professionals’ perspectives. BMC Med Ethics. 2024;25(1):148. doi: 10.1186/s12910-024-01151-8
8. Singh A, Sharma KK, Bajpai MK, Sarasa-Cabezuelo A. Patient centric trustworthy AI in medical analysis and disease prediction: a comprehensive survey and taxonomy. Appl Soft Comput. 2024;167:112374. doi: 10.1016/j.asoc.2024.112374
9. Omidian H. Synergizing blockchain and artificial intelligence to enhance healthcare. Drug Discov Today. 2024;29(9):104111. doi: 10.1016/j.drudis.2024.104111
10. Arbelaez Ossa L, Lorenzini G, Milford SR, Shaw D, Elger BS, Rost M. Integrating ethics in AI development: a qualitative study. BMC Med Ethics. 2024;25(1):10. doi: 10.1186/s12910-023-01000-0
11. Ossa LA, Milford SR, Rost M, Leist AK, Shaw DM, Elger BS. AI through ethical lenses: a discourse analysis of guidelines for AI in healthcare. Sci Eng Ethics. 2024;30(3). doi: 10.1007/s11948-024-00486-0
12. Wang WS, Wang YC, Chen L, Ma R, Zhang MH. Justice at the forefront: cultivating felt accountability towards artificial intelligence among healthcare professionals. Soc Sci Med. 2024;347:116717. doi: 10.1016/j.socscimed.2024.116717
13. Perrella A, Bernardi FF, Bisogno M, Trama U. Bridging the gap in AI integration: enhancing clinician education and establishing pharmaceutical-level regulation for ethical healthcare. Front Med. 2024;11. doi: 10.3389/fmed.2024.1514741
14. Gozum IEA, Flake CCD. Human dignity and artificial intelligence in healthcare: a basis for a Catholic ethics on AI. J Relig Health. 2024. doi: 10.1007/s10943-024-02206-1
15. Corfmat M, Martineau JT, Régis C. High-reward, high-risk technologies? An ethical and legal account of AI development in healthcare. BMC Med Ethics. 2025;26(1):4. doi: 10.1186/s12910-024-01158-1
16. Vandemeulebroucke T. The ethics of artificial intelligence systems in healthcare and medicine: from a local to a global perspective, and back. Pflugers Arch Eur J Physiol. 2025;477(4):591–601. doi: 10.1007/s00424-024-02984-3
17. Palaniappan K, Lin EYT, Vogel S. Global regulatory frameworks for the use of artificial intelligence (AI) in the healthcare services sector. Healthcare. 2024;12(5). doi: 10.3390/healthcare12050562
18. Palaniappan K, Lin EYT, Vogel S, Lim JCW. Gaps in the global regulatory frameworks for the use of artificial intelligence (AI) in the healthcare services sector and key recommendations. Healthcare. 2024;12(17). doi: 10.3390/healthcare12171730
19. Shidende N, Mwogosi A. Exploring the impact of generative AI tools on healthcare delivery in Tanzania. J Health Organ Manag. 2025. doi: 10.1108/jhom-01-2025-0007
20. Ngcobo M. The ethics and law of medical AI in South Africa: balancing innovation with responsibility. S Afr Med J. 2025;115(5B):75–79. doi: 10.7196/SAMJ.2025.v115i5b.3667
21. Molina OA, Bernal MJ, Wolf DL, Herreros B. What is Spanish regulation on the application of artificial intelligence to medicine like? Humanit Soc Sci Commun. 2024;11(1). doi: 10.1057/s41599-023-02565-2
22. Guangzhou Daily. Two Guangzhou hospitals introduce DeepSeek; hands-on experience; hospitals respond to medical data privacy and security concerns. Available from: https://mp.weixin.qq.com/s/aE6TeOranOpH0k6rLv0gig. Accessed February 20, 2025.
23. Southern Metropolis Daily. Pilot testing: pathology departments of five hospitals in South China deploy DeepSeek in practical applications. Available from: https://news.qq.com/rain/a/20250215A01F5500. Accessed February 20, 2025.
24. Notice on the issuance of rules for the regulation of Internet diagnosis and treatment (for trial implementation). Available from: http://www.nhc.gov.cn/yzygj/s3594q/202203/fa87807fa6e1411e9afeb82a4211f287.shtml. Accessed February 20, 2025.
25. Circular of the State Drug Administration on the release of guiding principles for defining the classification of artificial intelligence medical software products. Available from: https://m.cqn.com.cn/ms/content/2021-07/08/content_8710872.htm. Accessed February 20, 2025.
26. Code of ethics for the next generation of artificial intelligence. Available from: https://www.most.gov.cn/kjbgz/202109/t20210926_177063.html?ref=salesforce-research. Accessed February 20, 2025.
27. Interim measures for the management of generative artificial intelligence services. Available from: https://www.gov.cn/zhengce/zhengceku/202307/content_6891752.htm. Accessed February 20, 2025.
28. Circular of the General Office of the National Health Commission on the issuance of reference guidelines for artificial intelligence application scenarios in the health care industry. Available from: http://www.nhc.gov.cn/guihuaxxs/gongwen12/202411/647062ee76764323b29a1f0124b64400.shtml. Accessed February 20, 2025.
29. Law of the People’s Republic of China on the Protection of Personal Information. Available from: http://www.npc.gov.cn/npc/c2/c30834/202108/t20210820_313088.html. Accessed February 20, 2025.
30. Civil Code of the People’s Republic of China. Available from: http://legal.people.com.cn/n1/2020/0602/c42510-31731656.html. Accessed February 20, 2025.
31. A black box is a system whose internal workings cannot be opened or observed from the outside. An algorithmic black box refers to a stage in an algorithm’s operation that is so technically complex that its workings are opaque or unexplainable to observers.
32. Lopez M. Reevaluating human values for patient care in the age of artificial intelligence. J AI Law Regulation. 2024;1(1):50–63. doi: 10.21552/aire/2024/1/7
33. AI emergence refers to the phenomenon whereby, once an AI model’s parameters reach a certain threshold, the model exhibits new capabilities not present at smaller scales.
34. Wei J, et al. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682. Available from: https://arxiv.org/abs/2206.07682. Accessed February 20, 2025.
35. Kapp MB. A history and theory of informed consent. J Legal Med. 1986;7:397–402. doi: 10.1080/01947648609513478
36. Li X, Cong Y. Exploring barriers and ethical challenges to medical data sharing: perspectives from Chinese researchers. BMC Med Ethics. 2024;25(1):132. doi: 10.1186/s12910-024-01135-8
37. Wang C, Zhang J, Lassi N, Zhang X. Privacy protection in using artificial intelligence for healthcare: Chinese regulation in comparative perspective. Healthcare. 2022;10(10):1878. doi: 10.3390/healthcare10101878
38. Baldassarre A, Padovan M. Regulatory and ethical considerations on artificial intelligence for occupational medicine. Med Lav. 2024;115(2):e2024013. doi: 10.23749/mdl.v115i2.15881
39. Cai DS. Content innovation and compensation optimisation of intelligent medical civil liability. Jianghan Forum. 2023;(12):121–126.
40. Froomkin AM, Kerr I, Pineau J. When AIs outperform doctors: confronting the challenges of a tort-induced over-reliance on machine learning. Arizona Law Rev. 2019;61(1):66.
41. Prochaska M, Alfandre D. Artificial intelligence, ethics, and hospital medicine: addressing challenges to ethical norms and patient-centered care. J Hosp Med. 2024;19(12):1194–1196. doi: 10.1002/jhm.13364
42. Lee WT. Artificial intelligence in medicine: a caution about good intentions and where it may lead. Otolaryngol Head Neck Surg. 2024;170(6):1605–1606. doi: 10.1002/ohn.658
43. Ferrario A, Biller-Andorno N. Large language models in medical ethics: useful but not expert. J Med Ethics. 2024;50(9):653–654. doi: 10.1136/jme-2023-109770
44. Witkowski K, Okhai R, Neely SR. Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care. BMC Med Ethics. 2024;25(1):74. doi: 10.1186/s12910-024-01066-4
45. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447–453. doi: 10.1126/science.aax2342
46. Zou J, Schiebinger L. AI can be sexist and racist - it’s time to make it fair. Nature. 2018;559(7714):324–326. doi: 10.1038/d41586-018-05707-8
47. Wu H. Gender justice in healthcare and its governance in the age of artificial intelligence. Philosophy Sci Technol. 2024;41(2):115–121.
48. Lipsmeier F, Taylor KI, Kilchenmann T, et al. Evaluation of smartphone-based testing to generate exploratory outcome measures in a Phase 1 Parkinson’s disease clinical trial. Mov Disord. 2018;33(8):1287–1297. doi: 10.1002/mds.27376
49. Haaxma CA, Bloem BR, Borm GF, et al. Gender differences in Parkinson’s disease. J Neurol Neurosurg Psychiatry. 2007;78(8):819–824. doi: 10.1136/jnnp.2006.103788
50. Larrazabal AJ, Nieto N, Peterson V, Milone DH, Ferrante E. Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proc Natl Acad Sci. 2020;117(23):12592–12594. doi: 10.1073/pnas.1919012117
51. Algorithmic transparency refers to the comprehensibility and explainability of an algorithm’s decision-making process, input data, output results, and underlying logic and assumptions.
52. Galiana I, Gudino LC, Gonzalez PM. Ethics and artificial intelligence. Rev Clin Esp. 2024;224(3):178–186. doi: 10.1016/j.rce.2024.01.007
53. Khan AA, Khan AR, Munshi S, et al. Assessing the performance of ChatGPT in medical ethical decision-making: a comparative study with USMLE-based scenarios. J Med Ethics. 2025:jme-2024-110240. doi: 10.1136/jme-2024-110240
54. McDougall RJ. Computer knows best? The need for value-flexibility in medical AI. J Med Ethics. 2019;45(3):156–160. doi: 10.1136/medethics-2018-105118
55. World Health Organization. Ethics and governance of artificial intelligence for health. Available from: https://www.who.int/publications/i/item/9789240029200. Accessed February 20, 2025.
56. Cybersecurity Law of the People’s Republic of China. Available from: http://www.npc.gov.cn/zgrdw/npc/xinwen/2016-11/07/content_2001605.htm. Accessed February 20, 2025.
57. Data Security Law of the People’s Republic of China. Available from: http://www.npc.gov.cn/c2/c30834/202106/t20210610_311888.html. Accessed February 20, 2025.
58. Ministry of Foreign Affairs of the People’s Republic of China. Global artificial intelligence governance initiative. Available from: https://www.fmprc.gov.cn/web/ziliao_674904/1179_674909/202310/t20231020_11164831.shtml. Accessed February 20, 2025.
59. The Paper (Pengpai News). First global AI statement: 28 countries including China and the EU sign the Bletchley Declaration. Available from: https://www.thepaper.cn/newsDetail_forward_25153617. Accessed February 20, 2025.
60. Ying F. Global governance of artificial intelligence should go beyond geopolitics. Available from: https://news.qq.com/rain/a/20250213A07NNX00. Accessed February 20, 2025.
