Journal of Thoracic Disease
2026 Feb 25;18(2):173. doi: 10.21037/jtd-2025-1814

Narrative review of the ethics of artificial intelligence: are we ready for artificial intelligence in surgery?

Erin Yu 1, Graeme M Rosenberg 2, Brooks V Udelsman 2, Takashi Harano 2, Scott M Atay 2, Anthony W Kim 2, Baddr A Shakhsheer 3,*, Sean C Wightman 2,*
PMCID: PMC12972794  PMID: 41816458

Abstract

Background and Objective

Artificial intelligence (AI) is transforming surgical care by enhancing clinical decision-making and providing intraoperative guidance. As its applications expand, ethical challenges arise, including algorithmic bias, transparency in AI reasoning, informed consent regarding AI involvement, and accountability surrounding AI-guided decisions. This review explores the readiness of the surgical community to address these issues at both the institutional and individual levels.

Methods

A PubMed search identified literature on AI in surgery published between 2018 and 2025. Fourteen key studies were selected and reviewed to assess AI applications across the surgical continuum, with attention to ethical considerations and barriers to integration.

Key Content and Findings

AI now supports surgical care from preoperative diagnosis through postoperative recovery. AI can outperform or match clinician performance in tumor detection, disease diagnosis, and surgical risk stratification. Predictive models using deep learning can outperform traditional scoring systems for perioperative and postoperative complication risk. Intraoperatively, AI enables surgical phase recognition, augmented reality guidance, and detection of technical errors. Despite these benefits, ethical concerns remain. Algorithmic bias may underestimate the needs of marginalized populations. Furthermore, questions of legal liability arise when AI-guided decisions cause harm. Informed consent must now address AI’s role, limitations, and potential consequences if declined. Surgeons must guard against “automation bias” to preserve human judgment and patient trust. Institutional readiness remains unsatisfactory, as many healthcare systems lack infrastructure for real-time data integration and governance over data ownership. Surgeon skepticism and the “black box” nature of models also hinder adoption of the technology. Education on AI’s design, validation, and biases is essential for safe integration.

Conclusions

While AI holds immense potential to enhance surgical care, its use should be grounded in ethical principles to ensure non-maleficence and justice. Adoption should aim to maximize beneficence while preserving patient autonomy through transparent consent and promoting equity in access and implementation. At the same time, surgeons must remain vigilant against automation bias such that AI supports, not replaces, clinical intuition and trust, which lies at the core of the surgeon-patient relationship.

Keywords: Artificial intelligence (AI), surgical AI, ethics, algorithmic bias

Introduction

The incorporation of artificial intelligence (AI) into clinical care marks an era of profound transformation within medicine. AI refers to machine-driven intelligence that replicates and often exceeds aspects of human cognitive processes, including pattern recognition and predictive decision-making (1). AI systems are increasingly used to support clinical decision-making and procedural guidance in surgical settings, offering new ways of improving surgical efficiency (1-13).

Nevertheless, the adoption of AI in surgery raises pressing ethical and regulatory concerns. Who is accountable when an AI-guided decision leads to harm? Can AI systems trained on biased datasets equitably account for diverse patient populations?

This review explores the current landscape of AI in surgery, the associated ethical considerations, and the readiness of the surgical community for its widespread integration. Drawing on recent studies on AI support for diagnostic and intraoperative decisions, we assess the impact of AI systems. Furthermore, ethical considerations regarding bias in algorithms, accountability in decision-making, and patient consent are discussed. Altogether, this paper aims to provide a balanced perspective on the opportunities and challenges posed by AI in surgery. We present this article in accordance with the Narrative Review reporting checklist (available at https://jtd.amegroups.com/article/view/10.21037/jtd-2025-1814/rc).

Methods

Using PubMed, we searched for papers on AI and surgery using the medical subject headings “surgery” and “artificial intelligence”, limited to publications between 2018 and 2025 (Table 1). Studies published in languages other than English were excluded. Fourteen pertinent papers were selected to review the scope of AI within medicine and to examine whether we, as a surgical specialty, are ready to incorporate AI into our clinical workflow.

Table 1. Summary of search strategy.

Date of search: June 14, 2025
Database searched: PubMed
Search terms used: “surgery”, “artificial intelligence”
Timeframe: 2018–2025
Inclusion criteria: English language
Selection process: selected by authors (E.Y. and S.C.W.)

AI across the surgical continuum

AI systems now influence surgical care from the early stages of diagnosis to the later stages of postoperative recovery. Preoperatively, AI enhances imaging analysis, diagnosis, and risk stratification. For instance, Guo et al. [2025] describe numerous studies demonstrating the application of AI in detecting colorectal tumors more effectively than conventional endoscopy and radiological examinations (1). The authors further describe the multifaceted functionalities of AI, including its exceptional abilities in diagnosing ophthalmologic diseases, enabling precise surgical planning for orthopedic knee and hip replacements, and improving the accuracy of predicting the risk of intraoperative hypoxemia for anesthesiologists (1). In breast imaging, McKinney et al. [2020] demonstrated that AI outperformed radiologists in detecting breast cancer, with an absolute reduction of false positives by 5.7% and false negatives by 9.4% (2). Similarly, Liu et al. [2019] found that AI diagnostic performance across imaging modalities was at least equivalent to that of health professionals, with a sensitivity of 87.0% [95% confidence interval (CI): 83.0–90.2%] and specificity of 92.5% (95% CI: 85.1–96.4%) for deep learning models, compared to a sensitivity of 86.4% (95% CI: 79.9–91.0%) and specificity of 90.5% (95% CI: 80.6–95.7%) for healthcare professionals (3).

Beyond diagnosis, AI also supports preoperative risk stratification and can predict postoperative complications using electronic health records and population data, often with greater nuance than conventional scoring systems. One review found that deep learning, a subset of AI in which models learn weighted associations among input variables, can outperform traditional scoring models for predicting in-hospital mortality for intensive care unit (ICU) patients, and can predict risk for perioperative and postoperative complications (4). Another study found that AI was able to incorporate clinical variables including patient history, medications, blood pressure, and length of stay to predict in-hospital mortality after open abdominal aortic aneurysm repair with a sensitivity of 87%, a specificity of 96.1%, and an overall accuracy of 95.4% (4). Accurate prediction of perioperative and postoperative complication risk can, in turn, augment recommendations for operative management and the informed consent process (5).
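For readers unfamiliar with the metrics quoted above, sensitivity, specificity, and accuracy are simple functions of a binary classifier’s confusion matrix. The sketch below is illustrative only: the confusion counts are hypothetical values chosen to show the arithmetic, not data from the cited study.

```python
# Illustrative only: deriving sensitivity, specificity, and accuracy
# from the confusion matrix of a binary outcome-prediction model.

def binary_metrics(tp, fp, tn, fn):
    """Return (sensitivity, specificity, accuracy) from confusion counts."""
    sensitivity = tp / (tp + fn)                  # true positive rate
    specificity = tn / (tn + fp)                  # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # overall agreement
    return sensitivity, specificity, accuracy

# Hypothetical counts, invented purely to demonstrate the calculation.
sens, spec, acc = binary_metrics(tp=87, fp=39, tn=961, fn=13)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
```

Reporting all three values matters clinically: with rare outcomes such as in-hospital mortality, a model can post high accuracy while missing most true events, which only the sensitivity reveals.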

Within the intraoperative stage, AI can provide real-time information to support surgical decisions. Kitaguchi et al. [2020] reported over 90% accuracy in phase recognition during laparoscopic sigmoidectomy using convolutional neural network-based deep learning (6). This study applied a deep learning model to automatic surgical phase recognition in laparoscopic sigmoidectomy videos, which were divided into 11 surgical phases and annotated for each surgical action on every frame (6). Surgical phase recognition refers to the ability of AI systems to identify discrete steps of a procedure in real time using deep learning on operative videos. These phases generally include, for example, port placement, exposure and identification of anatomy, execution of key procedural steps, and closure. By recognizing where a surgeon is within this sequence, AI can contextualize recommendations, anticipate subsequent maneuvers, and detect deviations from expected workflow. Likewise, Liu [2025] and Dayan [2024] demonstrated successful phase labeling during endoscopic submucosal dissections and sleeve gastrectomies, respectively (7,8). Furthermore, augmented reality platforms within robotic surgeries can overlay 3D models onto the surgical field, providing anatomical guidance and real-time feedback (1). In endovascular abdominal aortic aneurysm repairs, a deep learning model trained to identify “No-Go” landing zones, defined by coverage of the lowest renal artery by the stent graft, was able to flag inappropriate landing zones for stents (9). This high accuracy of surgical workflow recognition suggests the potential of robotic systems to autonomously detect errors or suggest adjustments during surgery.
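The frame-by-frame labeling described above is typically followed by a temporal smoothing step, since per-frame classifiers can flicker between phases. The sketch below is illustrative only and is not the method of the cited studies: it assumes a hypothetical classifier has already emitted one phase label per frame, and applies a sliding-window majority vote to suppress single-frame jitter. The phase names and window size are invented for illustration.

```python
# Illustrative sketch: smoothing per-frame phase predictions with a
# centered sliding-window majority vote. Phase names are hypothetical.
from collections import Counter

def smooth_phases(frame_labels, window=5):
    """Replace each frame's label with the majority label within a
    centered window, suppressing transient misclassifications."""
    half = window // 2
    smoothed = []
    for i in range(len(frame_labels)):
        lo, hi = max(0, i - half), min(len(frame_labels), i + half + 1)
        smoothed.append(Counter(frame_labels[lo:hi]).most_common(1)[0][0])
    return smoothed

# A single spurious "closure" frame inside the dissection phase is
# outvoted by its neighbors and corrected.
raw = ["dissection"] * 4 + ["closure"] + ["dissection"] * 4
print(smooth_phases(raw))
```

The same idea of enforcing temporal coherence underlies the more sophisticated recurrent and transformer-based post-processing used in the surgical workflow literature.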

Recent studies highlight the expanding role of AI across multiple surgical subspecialties, including liver, thoracic, and cardiac surgery. As discussed above, AI has increasingly been shown to support imaging-based diagnoses, perioperative risk stratification, and intraoperative decision-making within these fields (10-12). These applications highlight AI’s growing influence in a broad range of surgical contexts. Because AI capabilities are actively expanding across the surgical continuum, it is imperative to evaluate the ethical implications of these evolving tools.

Key ethical and regulatory challenges

As AI tools gain influence in clinical decision-making and operative workflows, urgent questions surrounding bias, transparency, autonomy, and accountability arise. Without thoughtful stewardship, these technologies designed to improve care may instead exacerbate disparities and obscure responsibility (Table 2).

Table 2. Core ethical principles governing the use of AI in surgery.

Ethical principle: how the issue arises in AI-enabled surgical care

Autonomy
• AI involvement in diagnosis, risk stratification, and treatment planning complicates informed consent due to model opacity and uncertainty
• As AI becomes more embedded in standard clinical workflows, patients may have limited ability to decline AI-integrated care without affecting access to advanced diagnostic tools

Beneficence
• Predictive models may outperform traditional perioperative and postoperative risk stratification tools
• Intraoperative applications such as surgical phase recognition may enhance technical performance and situational awareness

Non-maleficence
• Automation bias may cause clinicians to over-rely on AI outputs, even when they conflict with clinical judgment
• Algorithmic bias arising from non-representative training data may lead to unsafe or inappropriate clinical recommendations

Justice
• Bias in training data may disproportionately disadvantage marginalized patient populations
• Unequal access to AI infrastructure across various resource settings risks widening disparities in surgical care

AI, artificial intelligence.

One of the most pressing concerns is the potential for algorithmic bias. AI systems trained on datasets that are skewed by race, gender, socioeconomic status, or geography may further exacerbate existing inequities in surgical care. For example, Obermeyer et al. [2019] showed that an AI model widely used in United States hospitals systematically underestimated the needs of select populations because it used health costs as a proxy for health status (13). In surgery, such bias could translate into disparities in preoperative triage or operative planning. According to the World Health Organization (WHO) guidance Ethics and Governance of Artificial Intelligence for Health, AI systems must be designed with an explicit commitment to equity, requiring multidisciplinary oversight and input from marginalized communities that are misrepresented in current datasets (14). Bias in AI is especially ironic, as bias is traditionally viewed as a characteristic of human, rather than computer, decision-making. Overreliance on AI as “unbiased” may prove a powerful multiplier of inequity.

Another core challenge lies in the question of legal and professional accountability, particularly when complications arise from AI-generated recommendations. In one hypothetical scenario, an AI-based sepsis prediction tool marks a patient as “low risk”, resulting in the de-escalation of monitoring and treatment. If the patient then develops severe sepsis requiring emergency surgery, who should be responsible for this adverse consequence? Though the physician may have acted within institutional protocols, an argument could be made against the healthcare professional for relying on AI’s recommendation without independent clinical verification. The hospital and AI vendor could also face scrutiny for lack of adequate oversight and validation of the tool. Who should be ultimately responsible for the patient’s poor outcome: the surgeon following the guidance, the developer of the algorithm, or the hospital that deployed the system? Existing legal frameworks offer little clarity, as most jurisdictions have yet to establish liability standards for AI-assisted care. Without clear regulations, responsibility may default to the surgeon, even if they acted in good faith and within institutional protocols.

AI also complicates the doctrine of informed consent. Traditionally, surgeons disclose the risks and benefits associated with procedures. But when AI contributes to diagnostic or therapeutic decisions, patients must also be informed about the limitations and potential biases of these systems. The WHO [2021] highlights the need to protect patient understanding and agency in AI-integrated care. Consent processes must evolve to include plain-language explanations of AI’s role, level of autonomy, and the recourse patients have when they disagree with a machine-generated recommendation. If, however, a patient declines AI-integrated care, the consequences will depend on the extent to which AI is embedded in the relevant clinical pathway. As AI becomes the default in certain services, declining AI could limit access to the most advanced diagnostic tools or treatment planning methods, potentially creating a two-tier system. When patients decline AI-integrated care, the ethical question of whether they are receiving a reduced standard of care must be considered. For instance, laparoscopic cholecystectomy is considered the standard of care for acute cholecystitis. If a patient with acute cholecystitis were to exercise autonomy by requesting an open laparotomy instead, they would be selecting a pathway associated with longer recovery and higher complication rates, a substandard level of care. Similarly, if a hypothetical patient refuses care in which AI is deeply integrated, they may be opting into an AI-naive pathway that, in the future, could be considered less effective or even below the evolving standard of care.

Lastly, the surgeon-AI relationship poses an ethical dilemma. Unlike most other tools, AI systems actively interpret and make suggestions for clinical decisions, raising fundamental questions about the surgeon’s role and the preservation of clinical intuition. When a surgeon operates, they engage in a trusting covenant with the patient that they will act in the patient’s best interests, especially in times of crisis. The WHO [2021] cautions against automation bias, noting that clinicians may over-rely on algorithmic outputs without applying their own judgment. For surgeons, this risk is magnified because surgical expertise is not merely technical execution, but a relational practice built on judgment, empathy, and the trust of the patient. For example, an AI system may recommend proceeding with a resection based on predicted survival probabilities. However, the surgeon who is aware of the patient’s frailty, values, and previously expressed wishes may judge that aggressive intervention would violate the patient’s goals of care. In such moments, it is the relational dimension of surgery grounded in dialogue and trust, not algorithmic output, that ultimately guides surgical decision-making. Surgeons must remain vigilant against automation bias that may develop when merely executing AI-derived recommendations, and prioritize clinical intuition built on decades of experience and mentorship.

From the standpoint of beneficence, one could argue that if AI models demonstrate superior predictive accuracy, it may be ethically required to use them to maximize patient outcomes. However, unlike other medical specialties, the art of surgery relies on real-time judgment that is often driven by the human bond forged outside of the operating room. The patient’s trust in the surgeon rests not only on sheer technical skill, but also on the surgeon’s ability to balance data-driven recommendations with the patient’s unique circumstances and goals. This bond, built through preoperative conversations, shared decision-making, and the surgeon’s commitment to act in the patient’s best interest, cannot be replicated by AI. If AI dictates surgical plans without the full integration and ownership of a human surgeon, we risk dismantling this sacred bond. Surgical intuition, judgment, and the patient relationship are actively cultivated through interactions between surgeons and patients. Dismantling surgical decision-making by inserting independent AI decisions will erode the surgeon-patient relationship cultivated by that active engagement. Surgical judgment, based not on algorithms but on lived experience, empathy, and moral reflection, should not be outsourced. Surgeons must remain active stewards of AI, preserving the integrity of surgical judgment while using AI to augment, not replace, the covenant of care.

Readiness for AI integration in surgery

The question facing the surgical community is no longer if or when AI will transform practice, but how we will guide its safe and ethical utilization. Many healthcare systems remain unprepared for the infrastructural demands of AI integration. One study of institutional readiness among healthcare entities supported by the Clinical and Translational Science Awards (CTSA) program, funded by the National Institutes of Health (NIH), found that 53% of participating programs identified data security as a primary concern for AI deployment, followed by clinician distrust (50%) and AI bias (44%) (15). Furthermore, resource-constrained institutions often lack the funding or infrastructure to adopt AI into their systems. This disparity risks widening existing inequities in surgical quality between high- and low-resource settings.

While the technologies behind AI are advancing rapidly, real-world implementation faces practical barriers. Integrating AI into electronic medical records poses a significant technical hurdle, which may be further exacerbated across diverse healthcare settings. In high-resource academic centers, the principal barriers often relate to data governance, interoperability across different electronic health records, and the workforce required for model validation and maintenance. AI must be able to synthesize multimodal data, including radiology, electronic health records, pathology, and laboratory results, in real time, yet these data are often siloed in separate systems at many hospitals. Governance over surgical data remains underdeveloped. To generate autonomous involvement in assisting surgical tasks, large volumes of detailed intraoperative data must be collected. Yet there is little consensus over who controls that data: the patient, the surgeon, the hospital, or the manufacturer of surgical robots. Patients often have the least access to data derived from their own procedures, raising concerns about transparency and ownership.

Resource-constrained settings face additional barriers to access of digitized records, basic computing infrastructures, and personnel trained in data governance and AI evaluation. As a result, institutions with the greatest potential to benefit from the efficiencies of AI may face challenges in implementing it safely. Ethical deployment of AI in these settings requires context-specific strategies that are tailored to limited computing infrastructures and are applicable to diverse patient subgroups. For example, large academic hospital systems may utilize cloud-based AI systems with real-time imaging, electronic health record data, and operative videos to support intraoperative decision-making, supported by dedicated data science and governance teams. In contrast, hospitals in resource-limited settings may lack the basic infrastructure, bandwidth, and personnel required for such systems. In these settings, ethical deployment may instead require locally deployable AI tools that perform narrowly defined functions and can operate offline, such as identifying patients who may benefit from early transfer to tertiary care, or prioritizing postoperative monitoring based on predicted complication risk. Without such context-specific deployment strategies, AI adoption is likely to remain concentrated in well-resourced centers, exacerbating existing disparities in surgical quality and outcomes.

Perhaps the most complex barrier to AI integration is human readiness. Surgeons are trained to trust their instincts, honed by experience. Historically, new surgical techniques like laparoscopic or robotic-assisted surgeries were met with considerable skepticism before gaining acceptance (16,17). However, early resistance to these techniques was eventually overcome through continued demonstration of their safety, efficacy, and improved outcomes. AI must similarly be subjected to iterative processes of careful scrutiny and outcome monitoring before being adopted into standard clinical workflows. Cobianchi et al. [2023] surveyed 650 surgeons and found significant skepticism about the value of AI for decision-making. Many respondents expressed discomfort with the perceived opaqueness of AI and emphasized a preference for personal clinical judgment (18). To incorporate AI safely and effectively, education will be key. Surgeons do not need to code algorithms, but they must be able to understand how models are trained, validated, and applied. AI is, by design, a black box of processes leading to a result. By understanding the data used to generate a recommendation, however, we may arrive at a form of “explainable AI” that can provide defensible reasons for its outputs, thus preserving human decision-making authority. The use of AI in clinical practice requires that surgeons not only use AI tools, but understand the underlying data, limitations, and hidden biases.
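One concrete form such “explainable AI” can take is surfacing per-feature contributions alongside a risk score, so the clinician can interrogate why the model reached its recommendation. The toy sketch below assumes a hypothetical linear-logistic risk model; the feature names and weights are invented for illustration and do not come from any cited tool.

```python
# Toy illustration of an explainable risk output: report not only a
# probability but which inputs drove it. All features/weights are
# hypothetical.
import math

WEIGHTS = {"age_over_75": 0.9, "emergency_case": 1.2,
           "albumin_low": 0.7, "intercept": -3.0}

def risk_with_explanation(patient):
    """Return (probability, per-feature logit contributions) for a
    toy logistic risk model over binary patient features."""
    contributions = {f: WEIGHTS[f] * patient.get(f, 0)
                     for f in WEIGHTS if f != "intercept"}
    logit = WEIGHTS["intercept"] + sum(contributions.values())
    prob = 1 / (1 + math.exp(-logit))
    return prob, contributions

prob, contrib = risk_with_explanation(
    {"age_over_75": 1, "emergency_case": 1, "albumin_low": 0})
print(f"predicted risk {prob:.1%}")
for feat, c in sorted(contrib.items(), key=lambda kv: -kv[1]):
    print(f"  {feat}: {c:+.1f} logit")
```

Even this trivial transparency changes the clinical conversation: a surgeon who can see that a recommendation hinges on a single input is better positioned to challenge it when it conflicts with bedside judgment.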

Strengths and limitations of the current evidence

One major strength of recent AI research in surgery is the demonstration of its technical abilities across diverse surgical procedures. Studies have shown that AI can exceed human performance in image analysis (2), surgical phase recognition (7-9), and complication prediction (1,3). These findings suggest a strong potential for AI to enhance clinical and intraoperative decision-making in surgery.

Despite these advances, several critical limitations remain. First, most of the studies are retrospective with limited sample sizes, so their findings may not generalize to real-world surgical practice. Second, outside of surgical phase identification, AI has yet to be integrated into surgery to the point of providing live intraoperative recommendations, even in straightforward operations.

Conclusions

AI is no longer a futuristic concept awaiting arrival but is actively shaping the trajectory of surgical care. From augmenting diagnostic accuracy to helping guide intraoperative decisions, AI will offer new ways of redefining the surgical field. However, the incorporation of such a powerful tool within surgical practice requires careful consideration of the ethical challenges that come with it.

Future prospective and multicenter validation studies are needed to evaluate model discrimination and calibration of AI performance across diverse patient subgroups and healthcare settings, addressing concerns related to justice and equity. Implementation science should also evaluate how AI tools are incorporated into surgical workflows, and how automation bias can be mitigated through interface design and education. Qualitative studies exploring patient understanding and perceptions of AI involvement in healthcare can also help elucidate how consent processes should be modified to preserve patient autonomy in AI-supported care.

Our review underscores that while the technical capabilities of AI in surgery are advancing rapidly, readiness across the individual, institutional, and technological dimensions remains uneven. Critical priorities include ensuring algorithmic transparency for surgeons to understand the limitations of AI, safeguarding patient autonomy with data, and establishing accountability frameworks for when AI may mislead clinical judgment. Ultimately, AI should not replace surgical judgment but should be cautiously used as a tool to enhance efficacy of care. By adopting a stance of critical stewardship, the surgical community can lead in shaping a responsible AI-integrated future.

Supplementary

The article’s supplementary files are as follows:

jtd-18-02-173-rc.pdf (93.6KB, pdf)
DOI: 10.21037/jtd-2025-1814
jtd-18-02-173-coif.pdf (1.1MB, pdf)
DOI: 10.21037/jtd-2025-1814

Acknowledgments

None.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Footnotes

Reporting Checklist: The authors have completed the Narrative Review reporting checklist. Available at https://jtd.amegroups.com/article/view/10.21037/jtd-2025-1814/rc

Funding: None.

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://jtd.amegroups.com/article/view/10.21037/jtd-2025-1814/coif). The authors have no conflicts of interest to declare.

References

1. Guo C, He Y, Shi Z, et al. Artificial intelligence in surgical medicine: a brief review. Ann Med Surg (Lond) 2025;87:2180-6. doi: 10.1097/MS9.0000000000003115
2. McKinney SM, Sieniek M, Godbole V, et al. International evaluation of an AI system for breast cancer screening. Nature 2020;577:89-94. doi: 10.1038/s41586-019-1799-6
3. Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health 2019;1:e271-97. doi: 10.1016/S2589-7500(19)30123-2
4. Hashimoto DA, Rosman G, Rus D, et al. Artificial Intelligence in Surgery: Promises and Perils. Ann Surg 2018;268:70-6. doi: 10.1097/SLA.0000000000002693
5. Loftus TJ, Tighe PJ, Filiberto AC, et al. Artificial Intelligence and Surgical Decision-making. JAMA Surg 2020;155:148-58. doi: 10.1001/jamasurg.2019.4917
6. Kitaguchi D, Takeshita N, Matsuzaki H, et al. Real-time automatic surgical phase recognition in laparoscopic sigmoidectomy using the convolutional neural network-based deep learning approach. Surg Endosc 2020;34:4924-31. doi: 10.1007/s00464-019-07281-0
7. Liu R, Yuan X, Huang K, et al. Artificial intelligence-based automated surgical workflow recognition in esophageal endoscopic submucosal dissection: an international multicenter study (with video). Surg Endosc 2025;39:2836-46. doi: 10.1007/s00464-025-11644-1
8. Dayan D. Implementation of Artificial Intelligence-Based Computer Vision Model for Sleeve Gastrectomy: Experience in One Tertiary Center. Obes Surg 2024;34:330-6. doi: 10.1007/s11695-023-07043-x
9. Li A, Javidan AP, Namazi B, et al. Development of an Artificial Intelligence Tool for Intraoperative Guidance During Endovascular Abdominal Aortic Aneurysm Repair. Ann Vasc Surg 2024;99:96-104. doi: 10.1016/j.avsg.2023.08.027
10. Dimopoulos P, Mulita A, Antzoulas A, et al. The role of artificial intelligence and image processing in the diagnosis, treatment, and prognosis of liver cancer: a narrative review. Prz Gastroenterol 2024;19:221-30. doi: 10.5114/pg.2024.143147
11. Leivaditis V, Maniatopoulos AA, Lausberg H, et al. Artificial Intelligence in Thoracic Surgery: A Review Bridging Innovation and Clinical Practice for the Next Generation of Surgical Care. J Clin Med 2025;14:2729. doi: 10.3390/jcm14082729
12. Leivaditis V, Beltsios E, Papatriantafyllou A, et al. Artificial Intelligence in Cardiac Surgery: Transforming Outcomes and Shaping the Future. Clin Pract 2025;15:17. doi: 10.3390/clinpract15010017
13. Obermeyer Z, Powers B, Vogeli C, et al. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019;366:447-53. doi: 10.1126/science.aax2342
14. World Health Organization. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva, Switzerland: World Health Organization; 2021.
15. Idnay B, Xu Z, Adams WG, et al. Environment scan of generative AI infrastructure for clinical and translational science. Npj Health Syst 2025;2:4. doi: 10.1038/s44401-024-00009-w
16. George EI, Brand TC, LaPorta A, et al. Origins of Robotic Surgery: From Skepticism to Standard of Care. JSLS 2018;22:e2018.00039.
17. Alkatout I, Mechler U, Mettler L, et al. The Development of Laparoscopy: A Historical Overview. Front Surg 2021;8:799442. doi: 10.3389/fsurg.2021.799442
18. Cobianchi L, Piccolo D, Dal Mas F, et al. Surgeons' perspectives on artificial intelligence to support clinical decision-making in trauma and emergency contexts: results from an international survey. World J Emerg Surg 2023;18:1. doi: 10.1186/s13017-022-00467-3
