2025 Sep 1;18:5405–5419. doi: 10.2147/JMDH.S541271

Table 3.

Issues and Governance Pathways for the Application of Generative Artificial Intelligence in the Medical Field

Question: The question of human autonomy
  Risk performance: Impairing patient autonomy (eg, opaque algorithmic decision-making compromises the transparency and reliability of medical generative AI systems).
  Governance pathway: Enhance the transparency of generative artificial intelligence algorithms in the medical field; (1) embed "people-centered" values into the design of artificial intelligence technologies; (2) develop ethical governance guidelines and establish an ethical review mechanism.
  Risk performance: Control over personal health data.
  Governance pathway: Ensure patients' informed consent for data collection.
  Risk performance: Compromising physician autonomy.
  Governance pathway: Clarify that physicians must retain clear and independent decision-making authority.
  Risk performance: Alienation of the physician-patient relationship (including instrumentalization and de-emotionalization).
  Governance pathway: Strengthen human agency.

Question: Issues of fairness and justice
  Risk performance: Group bias, gender inequality, and severe infringement on the interests of digitally vulnerable groups such as the elderly.
  Governance pathway: Standardize the collection of medical data.

Question: Inadequate regulatory oversight
  Risk performance: Uncertainty in technical accountability persists because the legal status of artificial intelligence has no explicit legal definition.
  Governance pathway: Strengthen legislative frameworks to explicitly define the legal status of generative artificial intelligence.
  Risk performance: The medical field's data ecosystem draws on diverse and complex sources, which can lead to incomplete or inaccurate data.
  Governance pathway: Standardize healthcare data governance and ensure patients' informed consent for data collection.
  Risk performance: Coordination among the authorities regulating generative artificial intelligence in the medical sector has not kept pace with evolving governance frameworks.
  Governance pathway: Improve regulatory coordination mechanisms.