Abstract
Clinical documentation demands are increasingly eroding clinician time and morale. Large language models (LLMs) are emerging as practical allies, drafting notes in real time and laying the groundwork for decision support. This narrative review examines both recent clinical applications of AI across healthcare domains and leadership strategies for implementing these technologies in hospitals and ambulatory networks, drawing on recent literature and high-quality practice reports. Evidence shows that when executives establish multidisciplinary AI committees, run rapidly iterated pilots, and embed continuous bias and safety audits, LLM deployments improve workflow efficiency and clinician satisfaction without compromising quality. Effective programs pair clear vendor scorecards with transparent communication to staff and patients and align metrics with broader equity goals. Recent regulatory frameworks in North America and Europe reinforce the need for life-cycle governance and performance monitoring. The review concludes with a leadership roadmap linking strategic vision to practical actions that sustain safe, equitable, and financially sound LLM integration.
Keywords: AI in healthcare, healthcare AI, LLMs in medicine, clinical decision support AI, healthcare leadership and artificial intelligence
Introduction
Current projections estimate the healthcare artificial intelligence (AI) market will expand from USD 26.6 billion in 2024 to USD 187.7 billion by 2030. This growth correlates with widespread integration, as approximately 80% of organizations now utilize AI applications.1 The driving force behind this growth is the massive volume of medical data generated from electronic health records and imaging scans to genomic tests and continuous monitoring. Modern research suggests that only about 30% of health outcomes are determined by genetics, while roughly 60% come from lifestyle and environmental factors, highlighting how much information resides outside the clinic that AI could analyze. By finding subtle patterns across these data sources, AI promises “unprecedented opportunities” to assist clinicians and reduce errors.2
This transformative potential has spurred the rapid adoption of AI in healthcare. Industry surveys demonstrate that 66% of US physicians now use AI tools in their practice (up from 38% just a year earlier).3 These tools range from image analysis software to predictive models that sift through patient records. During the COVID-19 pandemic, AI models were hastily developed to detect infections on chest scans or to predict patient deterioration,1 demonstrating how rapid innovation could yield clinically useful tools. Consistent with these trends, the American Medical Association emphasizes that AI must be employed as “augmented intelligence,” a support for clinicians, not a replacement for human judgment.3 However, a critical barrier to realizing these benefits at scale is the gap between AI’s technical capabilities and its successful real-world implementation, which includes clinical validation, workflow integration, equity assurance, and sustainable governance.
At the same time, deploying AI in healthcare faces significant challenges. Medical data is sensitive and often siloed across institutions, so building reliable models requires careful data engineering and privacy safeguards.2 Patient confidentiality must be protected: techniques like federated learning (training AI across multiple sites without sharing raw data) are being developed to allow large-scale AI learning while preserving privacy.2 Regulators and professional societies are also trying to keep pace. For example, the US FDA has issued draft guidance for AI-driven medical software (often called “Software as a Medical Device”), and medical societies are working on ethical frameworks and best practices for clinical AI. Overcoming these hurdles is an active process.4
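The federated-learning idea described above can be illustrated with a deliberately small sketch. The linear model, toy site data, and learning rate below are hypothetical; real deployments exchange neural-network weights over secure channels, but the aggregation principle, averaging site updates weighted by local sample counts without moving raw records, is the same (the FedAvg scheme).

```python
# Toy federated-averaging (FedAvg) round: each hospital fits a local
# model on its private data and shares only weights, never raw records.
# The linear model, data, and learning rate here are hypothetical.
def local_update(weights, site_data, lr=0.05):
    """One pass of per-sample gradient descent on a site's private (x, y) pairs."""
    w, b = weights
    for x, y in site_data:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err
    return (w, b)

def federated_round(global_weights, sites):
    """Average site updates weighted by local sample count (FedAvg)."""
    updates = [local_update(global_weights, data) for data in sites]
    counts = [len(data) for data in sites]
    total = sum(counts)
    w = sum(u[0] * n for u, n in zip(updates, counts)) / total
    b = sum(u[1] * n for u, n in zip(updates, counts)) / total
    return (w, b)

# Three hospitals whose private data all follow y = 2x + 1
sites = [
    [(1.0, 3.0), (2.0, 5.0)],
    [(0.5, 2.0), (1.5, 4.0), (3.0, 7.0)],
    [(2.5, 6.0)],
]
weights = (0.0, 0.0)
for _ in range(300):
    weights = federated_round(weights, sites)
print(weights)  # approaches (2.0, 1.0) with no raw data ever leaving a site
```

The convergence here is trivial by design; the point is structural: only the `(w, b)` tuple ever crosses institutional boundaries.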
We employed a narrative review methodology to synthesize both emerging clinical evidence and practical implementation frameworks across diverse AI applications. This approach allows for broader inclusion of recent case reports, expert guidance, and real-world deployment experiences that offer practical advice for healthcare leaders navigating rapid technological change.
To bridge the implementation gap, this review examines areas where AI is making a difference in healthcare today and provides evidence-based strategies for healthcare leaders to deploy these technologies safely and equitably. The sections below cover the major clinical domains: imaging diagnostics, patient monitoring, surgical care, and health system management, showing how AI can improve efficiency, accuracy, and patient outcomes. They also present illustrative case examples and remaining ethical and practical challenges to provide an in-depth look at AI’s role in healthcare. As depicted in Figure 1, AI integration in healthcare follows an S-shaped maturity trajectory spanning four key phases (Early Pilot, Initial Scale, Enterprise Roll-out, and Sustainable Integration) that collectively map the evolution of organizational readiness over time.
Figure 1.
Health System AI Adoption Curve.
Diagnostics and Imaging Applications
In radiology, pathology, and other diagnostic fields, AI is already yielding tangible benefits. Many hospitals employ AI tools to streamline the imaging workflow and highlight findings. For example, some radiology departments use AI-enabled cameras or software to automatically position patients in CT or MRI scanners, ensuring the correct orientation and reducing setup errors.5 Then AI-driven reconstruction algorithms convert lower-dose scans into high-resolution images, improving patient safety without sacrificing image quality (Philips Editorial Team, 2022). In cardiac ultrasound, deep-learning software automatically analyzes heart images to measure chamber volumes and blood flow, tasks that used to require manual tracing.5 These AI-driven measurements are fast and reproducible, freeing clinicians from repetitive work and allowing them to focus on complex interpretations.
AI is also enhancing cancer screening and detection. For instance, in population-scale breast cancer screening, adding AI to the workflow significantly improved early detection. A German trial involving 463,094 mammograms found that radiologists aided by an AI tool detected 17.6% more cancers than those without AI,6 with no increase in false positives. This gain (roughly 5.7 to 6.7 cancers per 1000 women) means many more cancers are caught early. Importantly, it came without extra work: the recall rate (the fraction of women called back for further testing) was slightly lower in the AI-assisted group. Traditional screening relies on two clinicians reading each image, and many programs face a shortage of radiologists.6 By pre-flagging suspicious cases, AI could help alleviate this workload while maintaining, or even improving, accuracy.6
Pooled results from the same German nationwide cohort (463,094 screens) show that adding an AI-based triage tool to double reading raised sensitivity from 0.82 to 0.96, equivalent to an increase from 5.7 to 6.7 cancers detected per 1000 women, while the recall burden fell slightly (from 38.3 to 37.4 per 1000 screens).6
Beyond mammography, AI advances are occurring in many other imaging specialties. In ophthalmology, autonomous AI systems now screen for diabetic eye disease. Three FDA-cleared algorithms can analyze retinal photographs and flag cases of moderate or worse diabetic retinopathy with about 87–90% sensitivity and specificity. These tools have been deployed in clinics and community health programs, helping catch retinal changes in diabetics who might not otherwise see an eye specialist.7 Researchers in dermatology are testing AI applications for melanoma screening purposes.8 Smartphone apps and clinical tools can now analyze photos of skin lesions and suggest which ones need biopsy. Early studies show some AI models can match dermatologists at identifying suspicious moles (though regulatory approval and robust validation are still needed before these are in routine clinical use). The common theme is that AI’s pattern recognition strengths apply to many kinds of medical images.
AI is also making strides in pathology. In digital pathology labs, machine learning models analyze scanned microscope slides to detect cancer cells and grade tumors. In many studies, these AI systems have matched or even surpassed human pathologists in identifying malignancies. This work is especially important given the global shortage of pathologists. In some countries, pathologists’ workloads are so high that slide reviews can take days; AI could pre-screen slides and flag those needing urgent review.9 For example, an AI program might highlight regions on a tissue slide where a tumor is most likely, allowing pathologists to concentrate on those areas first. Over time, artificial intelligence could speed up diagnosis and help standardize readings across different labs.
AI’s role extends beyond any single organ system. For example, algorithms have been developed to automatically measure key parameters, such as ejection fraction from echocardiogram videos and to segment brain lesions on MRI; these tasks previously required laborious manual work.10–12 Automating such measurements provides consistent, quantitative data to track disease over time. Other AI tools analyze laboratory data; for instance, some systems can count blood cells or identify abnormal cells in images of blood and tissue samples.13,14 In microbiology, AI is being tested to classify bacteria and predict antibiotic resistance from culture images.15 These applications show that as long as a task involves pattern recognition, AI can add value, whether on a doctor’s screen or in the lab.
Regulatory approvals are on the rise as well. Dozens of AI diagnostic tools have already received FDA clearance or equivalent international approval. These include software that flags possible strokes on head CT scans and screens for eye disease from fundus photos, as well as algorithms that analyze electrocardiogram (ECG) traces for arrhythmias. For instance, some AI ECG systems can identify atrial fibrillation or even left ventricular dysfunction from a single rhythm strip with accuracy close to a cardiologist.16 These approvals demonstrate AI’s maturing role in clinical practice and pave the way for broader adoption.
Another emerging area is AI triage for routine scans. In busy emergency departments, AI chest X-ray triage systems are being evaluated. One multicenter study of an AI tool (Lunit INSIGHT CXR) categorized chest X-rays as “normal,” “non-urgent,” or “urgent,” achieving about 89% sensitivity and 93% specificity for identifying normal studies.17 Crucially, this AI reduced radiologists’ report turnaround time by roughly half, since clear scans were automatically deprioritized.17 Similarly, AI models have demonstrated performance comparable to that of radiologists in identifying intracranial hemorrhage, fractures, and pneumothorax on emergency scans.18 By filtering out clearly normal images and highlighting acute findings, these systems promise to speed critical diagnoses in high-volume settings.
AI is even venturing into unconventional diagnostic signals. Shen et al’s voice analytics study showed that an explainable machine-learning model could identify early Parkinson’s disease from brief speech samples with more than 90% accuracy.19 As illustrated in Figure 2, the diagnostic pipeline begins with a short voice recording and progresses through AI-based acoustic analysis to predict Parkinson’s disease risk with over 90% accuracy. Smartphone and wearable usage data tell a similar story: a 23,000-participant observational trial demonstrated that passive keystroke, touch, and motion patterns detected mild cognitive impairment months before standard tests.20 In laboratory medicine, convolutional networks now segment and classify every leukocyte in a digital smear within seconds, matching expert technologists while processing hundreds of slides per hour.21 Proteomics adds a further frontier: sparse machine-learning models built on ~3000 plasma proteins improved ten-year risk prediction for dozens of rare and common diseases beyond conventional clinical scores.22 Together, these examples illustrate how pattern-recognition advances are widening AI’s medical footprint beyond imaging and EHR data.
Figure 2.
AI Voice Analysis Workflow for Early Parkinson’s Detection.
In summary, AI-assisted diagnostics are not limited to one specialty. From automated X-ray triage to mobile ultrasound guidance to smart lab microscopes, the technology is being applied wherever pattern recognition can add value. The key takeaway is that AI tools excel when they take over data-intensive, repetitive tasks and present findings to human experts rather than making autonomous decisions. As clinical trials and real-world studies accumulate evidence, these AI systems will likely become routine diagnostic aids in many settings, helping clinicians catch conditions earlier and more reliably.
Patient Monitoring and Management Applications
AI’s impact extends beyond imaging to include patient monitoring and everyday care. Modern wearable devices and home monitors increasingly use machine learning to predict health events before they become emergencies. For instance, advanced continuous-glucose monitors for diabetic patients now do more than track sugar levels: they learn each patient’s patterns and can forecast dangerous highs or lows hours in advance.23 If the system predicts a hazardous sugar swing, it instantly alerts the patient (and, if enabled, the care team) to take corrective action. Similarly, smart heart monitors (patches or watches) with AI can continuously analyze ECG signals or blood pressure trends. These devices can detect subtle arrhythmias or flag early signs of heart failure that might be missed during a brief doctor’s visit. AI helps move care from a reactive to a proactive model by turning passive data streams into useful predictions. Figure 3 outlines how wearable or remote devices collect real-time physiological data, which is transmitted to an AI system for continuous risk analysis. When abnormal trends are detected, such as impending hypoglycemia or cardiac instability, the system triggers an early alert, prompting timely clinician review and intervention.
Figure 3.
Continuous Monitoring Workflow.
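To make the forecasting step concrete, here is a deliberately simplified sketch (not a clinical algorithm): project the recent glucose trend forward and warn if it would cross a hypo- or hyperglycemic threshold within the next hour. The thresholds, horizon, and function names are illustrative assumptions, not any vendor’s method.

```python
# Deliberately simplified sketch (not a clinical algorithm): forecast a
# glucose threshold crossing by linearly extrapolating recent CGM readings.
def forecast_crossing(readings, low=70, high=180, horizon_min=60, step_min=5):
    """readings: (minute, mg_dL) samples, oldest first.
    Returns ('low'|'high', minutes_until) or (None, None)."""
    (t0, g0), (t1, g1) = readings[-2], readings[-1]
    slope = (g1 - g0) / (t1 - t0)        # mg/dL per minute
    for m in range(step_min, horizon_min + 1, step_min):
        projected = g1 + slope * m
        if projected <= low:
            return ("low", m)
        if projected >= high:
            return ("high", m)
    return (None, None)

# Falling trend: 110 -> 95 mg/dL over 10 minutes (-1.5 mg/dL/min)
alert, minutes = forecast_crossing([(0, 110), (10, 95)])
print(alert, minutes)  # a predicted low about 20 minutes out
```

Production systems replace the two-point slope with learned, patient-specific models, but the workflow in Figure 3 (stream in, risk out, alert forward) is the same.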
In hospitals and clinics, AI-driven monitoring tools are improving patient safety and efficiency. Researchers have developed AI vision systems, using cameras or smart glasses, to supervise routine clinical tasks in real time. For example, an AI-assisted medication checker can scan a nurse’s medication bag and the patient’s ID, cross-referencing both to ensure the right drug and dose are given. Early tests of such systems in simulated settings show they catch administration errors instantly,23 effectively providing a double-check during busy shifts. Similarly, AI cameras are being piloted in operating rooms to detect breaches in sterile techniques or missing instruments, alerting the team before such issues cause harm.24,25 In intensive care units, AI alarms are being tested to reduce “alarm fatigue”: instead of squawking for every small fluctuation, smart algorithms learn each patient’s normal patterns and only alert when truly abnormal or dangerous trends appear.26,27
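A minimal sketch of the adaptive-alarm idea, assuming a simple rolling baseline per patient: alert only when a reading deviates several standard deviations from that patient’s own recent history. Real ICU systems use far richer models; this class and its parameters are hypothetical.

```python
# A minimal per-patient adaptive alarm: learn a rolling baseline and
# alert only when a reading deviates k standard deviations from it.
# The class, window size, and k are illustrative assumptions.
import statistics
from collections import deque

class AdaptiveAlarm:
    def __init__(self, window=30, k=3.0):
        self.history = deque(maxlen=window)  # this patient's recent readings
        self.k = k                           # deviation multiplier

    def observe(self, value):
        """Return True only for a genuine deviation from the learned baseline."""
        alarm = False
        if len(self.history) >= 10:          # wait until a baseline exists
            mean = statistics.fmean(self.history)
            sd = statistics.pstdev(self.history) or 1.0
            alarm = abs(value - mean) > self.k * sd
        self.history.append(value)
        return alarm

hr = AdaptiveAlarm()
stable = [72, 74, 73, 75, 71, 74, 72, 73, 74, 72, 73, 74]
alarms = [hr.observe(v) for v in stable]   # ordinary heart-rate fluctuation
spike = hr.observe(120)                    # abrupt jump
print(any(alarms), spike)  # silent on noise, fires on the spike
```

Because the baseline is learned per patient, the same fluctuation that is noise for one patient can legitimately trigger an alert for another, which is the core of the alarm-fatigue argument above.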
Even in low-resource environments, simple AI-enhanced devices are making a difference. Investigators have paired AI algorithms with low-cost pulse oximeters and thermometers to extend critical care to rural clinics. For example, during dengue outbreaks in tropical countries, an AI-enhanced pulse oximeter was able to analyze a patient’s vital sign patterns and predict which dengue patients would deteriorate and need intensive care hours in advance.23 This early warning allows scarce hospital beds and treatments to be reserved for those most at risk. At the public health level, AI analysis of community data is being used to triage vaccine distribution and outbreak responses in real time, demonstrating that even modest tools can have a big impact when deployed wisely.
Beyond direct monitoring, AI helps manage patient populations and hospital operations. Electronic health records (EHRs) generate so much data that busy clinicians cannot easily spot every pattern. AI analytics now sifts through these records to aid decision-making. For instance, predictive models can flag patients at high risk of readmission or complications (like sepsis) before they leave the hospital, enabling targeted follow-up or preventive therapy.2 Hospitals also use AI to predict supply needs and manage staffing; on the administrative side, AI is applied to billing and fraud detection. One notable example is IBM’s DataProbe, an AI system that looked at billing records from hospitals in the US and found $41.5 million in false Medicare claims in just a few months.2 These tools reduce waste and free up resources for patient care.
AI is also reshaping telemedicine and virtual care. Many health systems now embed symptom-checker chatbots and self-scheduling assistants in their patient portals; real-world analyses show these tools cut unscheduled emergency visits while still offering safe triage advice.28–30 Prospective studies report that patients who use such apps feel more satisfied with their encounters and experience lower anxiety around care decisions.31–33 Modeling work further suggests that rerouting even 30% of low-acuity cases could meaningfully ease emergency department crowding.34 Beyond first contact, “virtual nurses” and AI health coaches are supporting chronic disease management.35,36 Randomized and pilot trials show that AI messaging apps can improve blood pressure control and medication adherence compared with usual care in hypertension and diabetes.35,36 A fully automated diabetes-prevention program recently matched human coaching for weight loss and activity over six months.37 Meanwhile, a parallel trial pairing continuous-glucose monitoring with AI coaching is underway to test long-term metabolic gains.38 The common thread is round-the-clock, personalized guidance: algorithms learn each user’s patterns and adjust reminders or advice in real time, extending clinical support far beyond the clinic visit.36,39
AI is also making inroads in mental health and rehabilitation. Smartphone apps now include AI chatbots that deliver cognitive-behavioral therapy exercises to users with anxiety or depression.40 Some studies find that people who use these digital “therapists” report measurable improvements in mood and coping skills.40,41 Similarly, in physical therapy, AI-based motion trackers can analyze a patient’s exercise form at home and give real-time feedback, helping prevent reinjury.42,43 Smart home systems can monitor seniors for falls by analyzing movement patterns; for instance, AI cameras in assisted-living facilities can detect if a resident has fallen out of bed and automatically summon help.44,45 These innovations extend the reach of healthcare professionals and provide support to patients between their visits.
Integrating AI into monitoring systems requires thoughtful design. Alerts and recommendations must fit into clinicians’ normal workflows. Several hospitals are embedding AI outputs directly into their EHR interfaces, so a risk score or alert pops up next to the patient’s chart, rather than forcing doctors to use a separate app. Training is also important: physicians and nurses need to understand what an AI alert means and how to respond. Some institutions have set up “AI committees” or liaison teams to guide this education. Over time, as these systems demonstrate value (fewer readmissions, smoother rounds, etc.), clinicians tend to adopt them more readily.
AI-powered monitoring is indeed moving us toward more proactive care. By constantly observing vital signs, symptoms, and behaviors, intelligent systems can catch warning signals that humans might miss. This continuous vigilance has the potential to improve patient outcomes, for example, by preventing hospitalizations, and to make healthcare more efficient. The ongoing challenge is to deploy these tools in ways that respect patient privacy and integrate smoothly into care, but the examples above show that meaningful progress is already being made.
Robotic Surgery and Therapeutic Applications
AI is also advancing into the operating room. Modern surgical robots (such as the da Vinci system) give surgeons highly precise control of instruments but traditionally still require a human hand for every movement. The next step is to make these systems “smart” with AI enhancements. One major area is visual assistance: researchers have trained AI to enhance the surgical view in real time. For example, an AI model can denoise and clarify the live endoscopic video feed, removing blur from instrument motion or smoke from cauterization.46 It can also adjust the colors or contrast to highlight blood vessels and nerves. This clear, augmented view helps surgeons see fine structures that might otherwise be obscured, reducing the chance of accidental damage.
Beyond improving visibility, AI is enabling new degrees of automation. While fully autonomous surgery remains a future goal, parts of procedures can now be partially automated under supervision. In lab demonstrations, robots have used AI to autonomously suture wounds or perform bone cuts along pre-planned lines. In one experiment, an AI-driven system performed a running suturing task as well as expert surgeons.47 (A human operator was standing by to supervise, but the success shows how reliable certain subtasks can become.) Similarly, AI algorithms can analyze the surgeon’s hand movements in real time.48–50 If the algorithm detects tremor or fatigue, it can stabilize the robot’s motion, smoothing out small unintentional movements.51 This kind of assistance could reduce fatigue-related errors in long procedures.
AI also aids in surgical planning and navigation. Machine learning models can analyze patient scans to create customized guides. For instance, in orthopedic surgery, AI algorithms generate a 3D model of a patient’s hip or knee from CT data and help design a perfectly fitting cutting guide or implant. AI has been utilized in neurosurgery to dynamically map brain anatomy: a study trained an AI using surgical ultrasound and electrical impedance data to distinguish tumor tissue from normal brain in real time.46 During an operation, this tool can highlight the tumor boundary, guiding the surgeon in removing the right amount of tissue. In spinal surgery and ENT procedures, similar AI tools align a patient’s preoperative images with live tracking to ensure each cut follows the planned safe path.
The utility of robotics extends to postoperative care and rehabilitation, where AI-powered exoskeletons are being refined to support patients recovering from stroke or spinal injury, adjusting assistance in real time to each individual’s gait patterns.52 Smart prosthetic limbs with embedded intelligence translate electromyographic or cortical signals into smoother, more intuitive movements; Capsi-Morales et al demonstrated a system that decodes muscle activity on the fly, enabling natural hand control with high precision.53 Mullin et al further showed that a myoelectric arm can learn from residual muscle signals to drive a robotic hand in real time, achieving gesture accuracy above 90%.54 In radiation oncology, AI-based auto-contouring software has been reported to cut target-definition time to minutes while preserving dosimetric quality.55 Collectively, these augmented-reality cameras, semi-autonomous robots, adaptive rehab devices, and AI-planned therapies illustrate how intelligent systems continue to widen the boundaries of surgical and restorative care.
Early indicators suggest these innovations can translate into better outcomes. Experts generally agree that the continued integration of AI into surgery and therapy will enhance patient safety and efficacy.46 Initial clinical reports are encouraging: teams using AI-assisted navigation have observed shorter operative times and fewer minor complications. For example, one center reported that robotic laparoscopic procedures guided by AI had less blood loss and shorter hospital stays on average.56 In gastrointestinal endoscopy, AI systems that analyze colonoscopy video in real time have already been shown in trials to increase the rate at which doctors detect precancerous polyps, reducing the chance that a patient goes home with an undetected lesion.57–59 Furthermore, AI models that predict patient outcomes have helped tailor interventions: forecasts of postoperative pain or infection risk can prompt preemptive measures, improving recovery.60,61
In the post-operative setting, AI algorithms are being deployed to stratify risk, utilizing continuous vital sign and lab data to predict potential complications.62,63 By flagging these patients early, care teams can intervene, for example, by adjusting medications or ordering more frequent checkups, before a small problem becomes a crisis.64 This type of AI is already available for general wards and ICUs, demonstrating how the technology extends from the operating room into the entire patient recovery pathway.
AI in surgery and therapy is about augmenting every step of the surgical journey. Rather than replacing surgeons, these tools serve as digital assistants, helping clinicians work more accurately and freeing them to focus on the human aspects of care. From real-time vision enhancement to automated suturing demonstrations, each AI innovation adds up to safer, more efficient procedures. The ultimate measure will be patient outcomes, and so far, early results and expert consensus are promising that AI-enhanced care will improve those outcomes over time.
Implementation Strategies, Challenges, and Considerations
Despite the promise of AI, translating research prototypes into routine healthcare practice faces hurdles. A primary issue is data integration. Medical records, imaging, and device outputs are often stored in incompatible formats across different hospital systems. Bringing all this data together for AI requires extensive data engineering: cleaning, standardizing, and annotating records so algorithms can learn from them. At the same time, every data-prep step should tag age, sex, ethnicity, and geography so that later performance dashboards can surface subgroup drift. Patient privacy is another top concern. Healthcare organizations must implement strong de-identification and security measures so that AI training does not risk exposing personal information.2 Techniques like federated learning (where an AI model is trained across multiple sites without sharing raw data) are being explored to address this, but they remain complex to deploy.
Leadership Roadmap for AI Adoption: A Four-Phase Pathway
The following roadmap (Figure 4) outlines the four stages health systems must navigate to scale AI safely: urgency, coalition, pilot, and scale.
Figure 4.
AI Leadership Roadmap: A 4-Phase Adoption Strategy.
Create Urgency: Quantify Local Pain
Leaders first surface a problem that frontline teams already feel but have not yet measured. At Kaiser Permanente, anonymous time-motion logs showed physicians devoting 43% of their workdays to documentation; introducing ambient LLM scribes reclaimed 15,791 hours in a single year, or roughly 1794 clinician days.65 Similar numbers emerged in German breast-screening hubs, where AI-supported double reading raised detection from 5.7 to 6.7 cancers per 1000 women without extra recalls, underscoring both patient safety and throughput dividends.6 Publicly sharing these local metrics at grand rounds or board retreats transforms diffuse frustration into a quantified performance gap that demands executive action.
Build A Coalition—Lock In Multidisciplinary Ownership
Momentum stalls if urgency rests on a single champion. Successful sites convene a steering group that pairs C-suite sponsors with mid-level clinical leaders, data scientists, equity officers, and patient advocates.66 In one mammography network, radiographers and community liaisons helped shape recall-letter language, while finance directors mapped time savings to budget cycles. Equity specialists pre-agreed on subgroup performance thresholds, preventing late-stage fairness surprises.67 Formalizing roles, meeting cadence, and KPIs at this stage converts enthusiasm into shared accountability.
Pilot-Test In High-Signal, Low-Risk Settings
Pilots mirror the target workflow yet remain small enough for weekly iteration. An ICU introduced a machine-learning alarm suppressor on two pods; within six weeks, false alarms fell by 60% with no missed critical events.68 A spinal cord injury unit trialed AI motion tracking during home rehab and documented significant gains in upper-limb strength compared with standard exercise videos.43 In virtual care, an AI symptom checker embedded in the patient portal cut non-urgent ED visits among chemotherapy patients, boosting satisfaction scores.69 Each pilot tracks operational (minutes saved, visits avoided), clinical (sensitivity, adverse events), and human-factor metrics (trust, satisfaction). Dashboards refresh monthly and disaggregate by age, sex, and ethnicity to surface bias early. To help CFOs and board committees translate technical promise into accountable value, Table 1 distills total-cost-of-ownership elements, an average ROI benchmark of USD 3.20 returned within 14 months for every dollar spent on healthcare AI, and the governance checkpoints required to track those savings over time.1
Table 1.
Financial Stewardship Checklist for AI Deployments: Key TCO Elements, Typical 3.2:1 ROI Within 12–18 months, and Governance Checkpoints From Business-Case Approval to Annual Reinvestment
| Financial Stewardship Element | Guidance & Benchmark Numbers |
|---|---|
| Total cost of ownership (TCO) | Budget ≈ license fee × (1 + 0.5 integration and security + 0.2 training). |
| Baseline & pay-back | Use a fully loaded clinical salary (≈ USD 120/hour). A 15,000-hour documentation pilot yields ~USD 1.8 M gross value. Median ROI across enterprise AI projects is 3.2:1 with a 12–18-month payback. |
| Variable costs | Plan 0.1 × license per annual model retraining and bias audit. |
| CapEx vs OpEx split | Capitalize integration/cyber spend; treat hosting, monitoring, and audits as operating expenses to smooth cash flow. |
| Governance gates | 1) The CFO/quality committee approves the pre-investment business case. 2) Quarterly variance review triggers re-tuning if savings drift ±10%. 3) Annually reinvest ≥15% of realized savings in workforce or equity initiatives. |
| Reporting | Finance and clinical leads co-publish ROI, quality, and equity metrics to the board dashboard each quarter. |
Notes: Key financial metrics: 3.2:1 = target ROI ratio; ±10% = acceptable variance range; ≥15% = minimum reinvestment requirement.
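The rule-of-thumb arithmetic in Table 1 can be made explicit. The sketch below applies the table’s multipliers (0.5 integration/security, 0.2 training, 0.1 annual retraining and bias audit) and the USD 120/hour loaded salary to the 15,000-hour documentation pilot; the USD 500,000 license fee is a hypothetical figure, and folding the first year’s retraining fee into the TCO is a simplifying assumption.

```python
# Worked example of Table 1's rule-of-thumb budgeting. Multipliers and
# the USD 120/hour loaded salary come from the table; the USD 500,000
# license fee is a hypothetical figure for illustration.
def first_year_tco(license_fee):
    """License + ~50% integration/security + ~20% training + ~10% retraining/audit."""
    return license_fee * (1 + 0.5 + 0.2 + 0.1)

def gross_value(hours_saved, loaded_rate=120.0):
    """Clinician hours returned to care, valued at a fully loaded salary."""
    return hours_saved * loaded_rate

def roi_ratio(hours_saved, license_fee):
    return gross_value(hours_saved) / first_year_tco(license_fee)

value = gross_value(15_000)           # the 15,000-hour documentation pilot
ratio = roi_ratio(15_000, 500_000)    # compare against the 3.2:1 benchmark
print(f"gross value USD {value:,.0f}; ROI {ratio:.1f}:1")
```

Under these assumptions the pilot returns 2.0:1, short of the 3.2:1 median benchmark, which is exactly the kind of gap the quarterly variance review in the governance gates is meant to surface.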
Scale—Institutionalize Governance, Infrastructure, And Education
After regulatory compliance is confirmed, governance becomes routine rather than ad hoc: quarterly model-performance reviews, annual bias audits, and real-time drift alerts feed the quality committee. Cloud or on-prem compute budgets move from innovation funds to the operating ledger, aligned with cybersecurity controls.70 Education widens beyond early adopters; mandatory micro-learning teaches every clinician to interpret and override AI outputs. Financial officers embed the 3.2:1 ROI expectation into budget forecasts, reallocating gains to sustain licensing and retraining fees. Progress dashboards (hours returned to care, alarm reductions, triage accuracy) are featured in staff newsletters and board packets to preserve urgency. Finally, an “AI product owner” coordinates updates so that new versions do not fracture across departments.
Threading Ethics And Equity
Equity checkpoints appear in every phase: diverse data selection during urgency framing, subgroup dashboards during pilots, and public reporting once at scale. Telestroke services in rural networks, for example, now require quarterly audits showing no performance drop compared with urban centers, linking adoption to access improvement.71
Outcome
By anchoring urgency in local metrics, broadening ownership, validating with rigorously measured pilots, including telehealth and rehabilitation exemplars, and scaling under disciplined governance and ROI targets, leaders turn AI from experimental gadgetry into a strategic, equitable asset that satisfies boards, regulators, and bedside teams alike.
Other Challenges and Considerations
Bias and fairness are also significant challenges. Because AI learns from historical data, it can inadvertently perpetuate disparities present in that data. For example, if an AI model for imaging diagnosis was trained mostly on data from urban hospitals, it might underperform when used on patients in rural areas or from underrepresented ethnic groups. Studies have documented cases where medical AI tools were less accurate for certain races or ages.72,73 Steering committees must set a “red-flag” threshold, eg, a ±5% drop in sensitivity for any demographic group, that triggers model retraining or workflow rollback. Quarterly equity reports should be presented at the same board meeting that reviews infection control or readmission metrics, signaling that fairness carries equal governance weight. Addressing this requires deliberate effort: developers must use diverse training datasets, and models should be continuously tested on different patient populations. Professional societies and regulators increasingly emphasize fairness checks. Any implemented AI tool should come with documentation of its performance across subgroups and with a plan for monitoring its real-world outcomes.
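The red-flag rule described above can be sketched as a simple subgroup check. The subgroup names and sensitivity figures below are hypothetical; only the 5-percentage-point threshold comes from the text.

```python
RED_FLAG_DROP = 0.05  # the +/-5% sensitivity-drop threshold from the text

def red_flags(overall_sensitivity: float, subgroup_sensitivity: dict) -> list:
    """Return subgroups whose sensitivity falls more than the threshold
    below overall model sensitivity, triggering retraining or rollback review."""
    return [
        group
        for group, sens in subgroup_sensitivity.items()
        if overall_sensitivity - sens > RED_FLAG_DROP
    ]

# Hypothetical quarterly audit: the rural subgroup exceeds the 5-point drop.
flagged = red_flags(
    overall_sensitivity=0.91,
    subgroup_sensitivity={"urban": 0.92, "rural": 0.84, "age_65_plus": 0.90},
)
print(flagged)  # ['rural']
```

In practice the same check would run per metric (sensitivity, specificity, calibration) and feed the quarterly equity report alongside infection-control and readmission metrics.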
Building clinicians’ trust in AI is another hurdle. Most AI tools have been validated retrospectively (on archived data), but clinicians rightly want to see them succeed in real-time patient care. Although prospective clinical trials of AI are still relatively rare, their numbers are increasing. For example, the breast cancer screening study mentioned earlier was effectively a real-world trial of AI in practice.6 Trust also rises when frontline staff see disaggregated accuracy on a shared dashboard and can escalate equity concerns via a rapid-response ticket. Meanwhile, many hospitals are gradually introducing AI by having it pre-screen X-rays or lab results and flag notable cases instead of allowing the AI to make final diagnoses. This “AI as second reader” approach lets doctors get used to the tool’s suggestions without relinquishing responsibility.74 Over time, evidence of benefit and transparency about limitations helps build confidence. Even now, survey data indicate that most doctors see AI as a helpful aid rather than a threat, provided it comes with clear guidelines and oversight.
Practical infrastructure is also a consideration. High-performance computing power and secure data storage are needed, especially for image-heavy and genomics AI. Some solutions rely on cloud computing, which means hospitals need fast, reliable internet and strong cybersecurity. The upfront costs for software licenses, hardware, and workforce training can be substantial, especially for smaller or rural hospitals. However, many institutions find that successful AI tools pay for themselves over time through efficiency gains. For instance, by reducing duplicate tests or catching billing errors, hospitals can recoup initial investments. One survey reported that top-performing AI healthcare deployments achieved an average return on investment of roughly USD 3.20 for every USD 1 invested.1 This demonstrates that financial incentives can align with innovation, although the path to those savings must be managed carefully.
Ethical and legal issues also require attention. Patients should ideally be informed when AI is used in their care and understand its role. Currently, most AI systems function as “black boxes”, providing recommendations without reasoning that is easily interpretable. Efforts in “explainable AI” are trying to address this by providing at least some understandable rationale (for example, highlighting which features led to a prediction).9 Another unresolved issue is liability: who bears responsibility if an AI-based recommendation results in an incorrect diagnosis: the doctor, the hospital, or the AI developer? In most current practice, the physician remains ultimately accountable, but legal standards will evolve. Some regions are already developing guidelines to clarify these issues. For now, the best practice is clear communication: doctors should not defer blindly to AI, and patients should know that a human is making the final judgment.
Another challenge is that AI models evolve. Once deployed, an AI system may drift in performance as new data or patient populations are seen. For example, if a hospital changes its imaging equipment or treats a different mix of diseases, the AI’s accuracy could degrade. This means hospitals must continuously monitor AI accuracy and be prepared to retrain models with new data. Regulatory agencies like the FDA have proposed frameworks for ongoing post-market surveillance of AI devices, treating them more like dynamic systems than static machines. In practice, this adds a new layer of quality assurance—akin to how hospitals regularly calibrate lab instruments, they will need to audit AI systems on an ongoing basis.
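The ongoing monitoring described above can be sketched as a rolling-window drift check. The 0.03 tolerance and 500-case window below are illustrative assumptions, not values from the text; a real deployment would set thresholds per use case with the quality committee.

```python
from collections import deque

class DriftMonitor:
    """Compare rolling accuracy against a validated baseline and flag drift."""

    def __init__(self, baseline_accuracy: float, tolerance: float = 0.03,
                 window: int = 500):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.results = deque(maxlen=window)  # rolling record of correct/incorrect calls

    def record(self, correct: bool) -> None:
        self.results.append(correct)

    def drifted(self) -> bool:
        if not self.results:
            return False
        rolling = sum(self.results) / len(self.results)
        return self.baseline - rolling > self.tolerance

# Hypothetical scenario: baseline 90% accuracy, recent rolling accuracy 80%.
monitor = DriftMonitor(baseline_accuracy=0.90)
for correct in [True] * 80 + [False] * 20:
    monitor.record(correct)
print(monitor.drifted())  # True: a 10-point drop exceeds the 3-point tolerance
```

A flagged result would feed the real-time drift alerts mentioned earlier and queue the model for audit or retraining, much as an out-of-range calibration check queues a lab instrument for service.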
Education and workforce training are also essential. Community-stakeholder panels, including patients, caregivers, and advocacy groups, should preview AI outputs and language to ensure cultural relevance before go-live. Many hospitals create multidisciplinary teams, including data scientists, IT experts, and clinicians, to oversee AI projects. Clinicians need training on how to use AI tools safely and interpret their outputs.75 Some medical schools and professional societies are now introducing AI literacy into their curricula.76 The World Health Organization and ITU have collaborated on AI in healthcare guidelines,77 and standards bodies are extending healthcare data standards (like HL7 FHIR) to support AI.78 These governance and training efforts are part of the broader infrastructure needed to make AI work in real-world healthcare.
Finally, broader policy and social context matter. Many countries have launched digital health strategies that include AI components. For example, Canada’s national AI strategy includes various healthcare projects, while the European Union’s Horizon programs provide funding for health AI research. Funding agencies, such as the US NIH, have established special programs for AI in medicine that promote the creation of large shared databases. The looming global shortage of health workers, projected to exceed 10 million by 2030,1 motivates AI adoption, especially in underserved areas. We are seeing pilot models where AI-assisted telemedicine extends specialists’ reach to rural clinics. All these technical, organizational, and policy efforts must come together. The organizations that succeed will be those that approach AI as part of a larger system: building sound data pipelines, training staff, establishing ethical oversight, and aligning incentives, rather than just dropping algorithms in isolation.
Conclusion
AI already improves detection, monitoring, and clinical workflows, proving its value alongside skilled clinicians. Successful adoption hinges on disciplined governance, multidisciplinary oversight, rigorous validation, and transparent performance tracking so that algorithms remain accurate, equitable, and trustworthy. Used this way, AI frees clinicians from repetitive data tasks and lets them focus on nuanced decision-making and patient connection.
As every past medical innovation required training and a cultural shift, so will AI. The goal is augmentation, not replacement: a radiologist with an AI reader, a nurse with an AI triage assistant, or a surgeon with AI-guided navigation. Future research must prioritize long-term prospective trials demonstrating clinical outcomes, real-world fairness audits across diverse populations, and standardized frameworks for AI lifecycle governance. With vigilant oversight and continuous learning, healthcare can harness AI to deliver safer, faster, and more personalized care, expanding access while preserving the human touch at the heart of medicine.
Ethical Statement
This narrative review analyzed only publicly available sources and involved no human participants, animals, or patient-identifiable data; therefore, institutional ethics approval and informed consent were not required.
Author Contributions
All authors contributed significantly to the reported work, including conception, study design, execution, data acquisition, analysis, and interpretation; participated in drafting, revising, or critically reviewing the article; provided final approval of the version to be published; agreed on the journal for submission; and accepted accountability for all aspects of the work.
Disclosure
The authors report no conflicts of interest in this work.
References
- 1.Grand View Research. Artificial intelligence in healthcare: market size, share & trends analysis report, 2024–2030. 2025. [Cited July 2, 2025]. Available from: https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-healthcare-market. Accessed December 6, 2025.
- 2.Johnson KB, Wei W, Weeraratne D, et al. Precision medicine, AI, and the future of personalized health care. Clin Transl Sci. 2021;14(1):86–93. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.American Medical Association. Augmented intelligence in medicine. 2025. [Cited July 2, 2025]. Available from: https://www.ama-assn.org/practice-management/digital-health/augmented-intelligence-medicine. Accessed December 6, 2025.
- 4.Commissioner O of the FDA. FDA Issues Comprehensive Draft Guidance for Developers of Artificial Intelligence-Enabled Medical Devices. 2025. [Cited July 2, 2025]. Available from: https://www.fda.gov/news-events/press-announcements/fda-issues-comprehensive-draft-guidance-developers-artificial-intelligence-enabled-medical-devices. Accessed December 6, 2025.
- 5.Philips Editorial Team. Philips. 10 real-world examples of AI in healthcare. 2022. [Cited July 2, 2025]. Available from: https://www.philips.com/a-w/about/news/archive/features/2022/20221124-10-real-world-examples-of-ai-in-healthcare.html. Accessed December 6, 2025.
- 6.Eisemann N, Bunk S, Mukama T, et al. Nationwide real-world implementation of AI for cancer detection in population-based mammography screening. Nat Med. 2025;31(3):917–924. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Anand ER, Aaron YL. AI for DR screening: where are we in 2025? Retina Specialist. 2025. [Cited July 2, 2025]. Available from: http://www.retina-specialist.com/article/ai-for-dr-screening-where-are-we-in-2025. Accessed December 6, 2025.
- 8.Crawford ME, Kamali K, Dorey RA, et al. Using artificial intelligence as a melanoma screening tool in self-referred patients. J Cutan Med Surg. 2024;28(1):37–43. doi: 10.1177/12034754231216967 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Wang J, Wang T, Han R, Shi D, Chen B. Artificial intelligence in cancer pathology: applications, challenges, and future directions. Cyto J. 2025;22:45. [Internet]. doi: 10.25259/Cytojournal_272_2024 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Hadjidimitriou S, Pagourelias E, Apostolidis G, et al. Clinical validation of an artificial intelligence-based tool for automatic estimation of left ventricular ejection fraction and strain in echocardiography: protocol for a two-phase prospective cohort study. JMIR Res Protoc. 2023;12:e44650. doi: 10.2196/44650 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Olaisen S, Smistad E, Espeland T, et al. Automatic measurements of left ventricular volumes and ejection fraction by artificial intelligence: clinical validation in real time and large databases. Eur Heart J Cardiovasc Imaging. 2024;25(3):383–395. doi: 10.1093/ehjci/jead280 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Ouyang D, He B, Ghorbani A, et al. Video-based AI for beat-to-beat assessment of cardiac function. Nature. 2020;580(7802):252–256. doi: 10.1038/s41586-020-2145-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Muhammad Hashim H. The role of artificial intelligence (AI) in enhancing the efficiency of automated blood cell counters. Stem Cell Res J. 2025;2025:1. [Google Scholar]
- 14.Naouali S, El Othmani O, Yahyaoui A, Bouatay A, Dhaouadi R. AI-driven automated blood cell anomaly detection: enhancing diagnostics and telehealth in hematology. J Imaging. 2025;11(5):157. doi: 10.3390/jimaging11050157 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Işıl Ç, Koydemir HC, Eryilmaz M, et al. Virtual Gram staining of label-free bacteria using dark-field microscopy and deep learning. Sci Adv. 2025;11(2):eads2757. doi: 10.1126/sciadv.ads2757 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Lueken M, Mettner J, Spicher N, et al. Towards artificial intelligence-based decision support for large-scale screening for atrial fibrillation. IEEE J Biomed Health Inform. 2025;29(10):7633–7642. doi: 10.1109/JBHI.2025.3579621 [DOI] [PubMed] [Google Scholar]
- 17.Sridharan S, Seah Xin Hui A, Venkataraman N, et al. Real-world evaluation of an AI triaging system for chest X-rays: a prospective clinical study. Eur J Radiol. 2024;181:111783. doi: 10.1016/j.ejrad.2024.111783 [DOI] [PubMed] [Google Scholar]
- 18.Huang J, Wittbrodt MT, Teague CN, et al. Efficiency and quality of generative AI–assisted radiograph reporting. JAMA Network Open. 2025;8(6):e2513921. doi: 10.1001/jamanetworkopen.2025.13921 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Shen M, Mortezaagha P, Rahgozar A. Explainable artificial intelligence to diagnose early Parkinson’s disease via voice analysis. Sci Rep. 2025;15(1):11687. doi: 10.1038/s41598-025-96575-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Butler PM, Yang J, Brown R, et al. Smartwatch- and smartphone-based remote assessment of brain health and detection of mild cognitive impairment. Nat Med. 2025;31(3):829–839. doi: 10.1038/s41591-024-03475-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Campos-Medina M, Blumer A, Kraus-Füreder P, Mayrhofer-Reinhartshuber M, Kainz P, Schmid JA. AI-enhanced blood cell recognition and analysis: advancing traditional microscopy with the web-based platform IKOSA. J Mol Pathol. 2024;5(1):28–44. doi: 10.3390/jmp5010003 [DOI] [Google Scholar]
- 22.Carrasco-Zanini J, Pietzner M, Davitte J, et al. Proteomic signatures improve risk prediction for common and rare diseases. Nat Med. 2024;30(9):2489–2498. doi: 10.1038/s41591-024-03142-z [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Mahajan A, Heydari K, Powell D. Wearable AI to enhance patient safety and clinical decision-making. Npj Digit Med. 2025;8(1):176. doi: 10.1038/s41746-025-01554-w [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Haider SA, Ho OA, Borna S, et al. Use of multimodal artificial intelligence in surgical instrument recognition. Bioengineering. 2025;12(1):72. doi: 10.3390/bioengineering12010072 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.ImageVision.ai. Surgical Instrument Tracking with Vision AI | imageVision.ai [Internet]. ImageVision.ai. [Cited July 2, 2025]. Available from: https://imagevision.ai/applications/surgical-instrument-tracking/. Accessed December 6, 2025.
- 26.Au-Yeung WTM, Sahani AK, Isselbacher EM, Armoundas AA. Reduction of false alarms in the intensive care unit using an optimized machine learning based approach. Npj Digit Med. 2019;2(1):86. doi: 10.1038/s41746-019-0160-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Flint AR, Schaller SJ, Balzer F. How AI can help in error detection and prevention in the ICU? Intensive Care Med. 2025;51(3):590–592. doi: 10.1007/s00134-024-07775-z [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Hammoud M, Douglas S, Darmach M, Alawneh S, Sanyal S, Kanbour Y. Evaluating the diagnostic performance of symptom checkers: clinical vignette study. JMIR AI. 2024;3(1):e46875. doi: 10.2196/46875 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Huang MY, Weng CS, Kuo HL, Su YC. Using a chatbot to reduce emergency department visits and unscheduled hospitalizations among patients with gynecologic malignancies during chemotherapy: a retrospective cohort study. Heliyon. 2023;9(5):e15798. doi: 10.1016/j.heliyon.2023.e15798 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Wallace W, Chan C, Chidambaram S, et al. The diagnostic and triage accuracy of digital and online symptom checker tools: a systematic review. Npj Digit Med. 2022;5(1):118. doi: 10.1038/s41746-022-00667-w [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Liu AW, Odisho AY, Brown IIIW, Gonzales R, Neinstein AB, Judson TJ. Patient experience and feedback after using an electronic health record–integrated COVID-19 symptom checker: survey study. JMIR Hum Factors. 2022;9(3):e40064. doi: 10.2196/40064 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Schmieding ML, Kopka M, Bolanaki M, et al. Impact of a symptom checker app on patient-physician interaction among self-referred walk-in patients in the emergency department: multicenter, parallel-group, randomized, controlled trial. J Med Internet Res. 2025;27(e64028):e64028. doi: 10.2196/64028 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.You Y, Ma R, Gui X. User experience of symptom checkers: a systematic review. AMIA Annu Symp Proc. 2023;2022:1198–1207. doi: 10.1016/j.socscimed.2015.04.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Simbo AI. The role of AI Symptom Checkers in Modern Healthcare: streamlining Patient Care and Reducing Emergency Room Strain | simbo AI - Blogs [Internet]. Simbo AI - Blogs. 2025. [Cited July 2, 2025]. Available from: https://www.simbo.ai/blog/the-role-of-ai-symptom-checkers-in-modern-healthcare-streamlining-patient-care-and-reducing-emergency-room-strain-2733241/. Accessed December 6, 2025.
- 35.Layton AT. A heart-to-heart with ChatGPT: AI applications in hypertension. Am J Hypertens. 2025;hpaf045. doi: 10.1093/ajh/hpaf045 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Reis ZSN, Pereira GMV, Dos Dias CS, Lage EM, de Oliveira IJR, Pagano AS. Artificial intelligence-based tools for patient support to enhance medication adherence: a focused review. Front Digit Health. 2025;7:1523070. doi: 10.3389/fdgth.2025.1523070 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Mathioudakis NN, Abusamaan MS, Alderfer ME, et al. 1956-LB: artificial Intelligence vs. human coaching for diabetes prevention—results from a 12-month, multicenter, pragmatic randomized controlled trial. Diabetes. 2025;74(Supplement_1):1956–LB. doi: 10.2337/db25-1956-LB [DOI] [Google Scholar]
- 38.Singapore General Hospital. Continuous Assessment of Risk and Efficacy (CARE) Using Health Coaching, Continuous Glucose Monitoring and AI Mobile App in Diabetes. clinicaltrials.gov; 2024. Apr [Cited July 2, 2025]. Report No.: NCT06028139. Available from: https://clinicaltrials.gov/study/NCT06028139. Accessed December 6, 2025.
- 39.Fraser HSF, Cohan G, Koehler C, et al. Evaluation of diagnostic and triage accuracy and usability of a symptom checker in an emergency department: observational study. JMIR MHealth UHealth. 2022;10(9):e38364. doi: 10.2196/38364 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40.Karkosz S, Szymański R, Sanna K, Michałowski J. Effectiveness of a web-based and mobile therapy chatbot on anxiety and depressive symptoms in subclinical young adults: randomized controlled trial. JMIR Form Res. 2024;8(1):e47960. doi: 10.2196/47960 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Zhong W, Luo J, Zhang H. The therapeutic effectiveness of artificial intelligence-based chatbots in alleviation of depressive and anxiety symptoms in short-course treatments: a systematic review and meta-analysis. J Affect Disord. 2024;356:459–469. doi: 10.1016/j.jad.2024.04.057 [DOI] [PubMed] [Google Scholar]
- 42.Ekambaram D, Ponnusamy V. Real-time monitoring and assessment of rehabilitation exercises for low back pain through interactive dashboard pose analysis using streamlit—a pilot study. Electronics. 2024;13(18):3782. doi: 10.3390/electronics13183782 [DOI] [Google Scholar]
- 43.Lee HJ, Jin SM, Kim SJ, et al. Development and validation of an artificial intelligence-based motion analysis system for upper extremity rehabilitation exercises in patients with spinal cord injury: a randomized controlled trial. Healthcare. 2023;12(1):7. doi: 10.3390/healthcare12010007 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Dino MJ, Thiamwong L, Xie R, et al. Mobile health (mHealth) technologies for fall prevention among older adults in low-middle income countries: bibliometrics, network analysis and integrative review. Front Digit Health. 2025;7:1559570. doi: 10.3389/fdgth.2025.1559570 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45.Johnson A. 2025. This home security camera can also monitor for falls and call for help. The Verge. [Cited 2025 July 2]. Available from: https://www.theverge.com/2025/1/6/24335351/kami-fall-detect-camera-ai-vision. Accessed December 6, 2025.
- 46.Knudsen JE, Ghaffar U, Ma R, Hung AJ. Clinical applications of artificial intelligence in robotic surgery. J Robot Surg. 2024;18(1):102. doi: 10.1007/s11701-024-01867-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.The Hub, Johns Hopkins University. Robot that watched surgery videos performs with skill of human doctor [Internet]. 2024. [Cited July 3, 2025]. Available from: https://hub.jhu.edu/2024/11/11/surgery-robots-trained-with-videos/. Accessed December 6, 2025.
- 48.Carciumaru TZ, Tang CM, Farsi M, et al. Systematic review of machine learning applications using nonoptical motion tracking in surgery. Npj Digit Med. 2025;8(1):28. doi: 10.1038/s41746-024-01412-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Han F, Huang X, Wang X, et al. Artificial intelligence in orthopedic surgery: current applications, challenges, and future directions. MedComm. 2025;6(7):e70260. doi: 10.1002/mco2.70260 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Rodriguez Y Baena F. Robotic assisted orthopedic surgery: the Acrobot story [Internet]. 2023. [Cited July 3, 2025]. Available from: https://www.youtube.com/watch?v=QLzUeTfgK28. Accessed December 6, 2025.
- 51.Kumar AK, Kaushik AK, S S. Real time estimation and suppression of hand tremor for surgical robotic applications. Microsyst Technol. 2020. doi: 10.1007/s00542-019-04736-1 [DOI] [Google Scholar]
- 52.Wen S, Huang R, Liu L, Zheng Y, Yu H. Robotic exoskeleton-assisted walking rehabilitation for stroke patients: a bibliometric and visual analysis. Front Bioeng Biotechnol. 2024;12:1391322. doi: 10.3389/fbioe.2024.1391322 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53.Capsi-Morales P, Barsakcioglu DY, Catalano MG, Grioli G, Bicchi A, Farina D. Merging motoneuron and postural synergies in prosthetic hand design for natural bionic interfacing. Sci Rob. 2025;10(98):eado9509. doi: 10.1126/scirobotics.ado9509 [DOI] [PubMed] [Google Scholar]
- 54.Mullin E. Muscle Implants Could Allow Mind-Controlled Prosthetics—No Brain Surgery Required. Wired. 2024. Dec 9, [Cited July 3, 2025]; Available from: https://www.wired.com/story/amputees-could-control-prosthetics-with-just-their-thoughts-no-brain-surgery-required-phantom-neuro/. Accessed December 6, 2025.
- 55.MVision AI Team. AI-powered dose planning: reducing delays and enhancing quality in radiation oncology [Internet]. MVision AI. 2025. [Cited July 3, 2025]. Available from: https://mvision.ai/ai-powered-dose-planning-reducing-delays-and-enhancing-quality-in-radiation-oncology/. Accessed December 6, 2025.
- 56.Buia A, Stockhausen F, Hanisch E. Laparoscopic surgery: a qualified systematic review. World J Methodol. 2015;5(4):238–254. doi: 10.5662/wjm.v5.i4.238 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57.Lagström RMB, Bräuner KB, Bielik J, et al. Improvement in adenoma detection rate by artificial intelligence-assisted colonoscopy: multicenter quasi-randomized controlled trial. Endosc Int Open. 2025;13:a25215169. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 58.Makar J, Abdelmalak J, Con D, Hafeez B, Garg M. Use of artificial intelligence improves colonoscopy performance in adenoma detection: a systematic review and meta-analysis. Gastrointest Endosc. 2025;101(1):68–81.e8. doi: 10.1016/j.gie.2024.08.033 [DOI] [PubMed] [Google Scholar]
- 59.Nayar KD, Yakout A, Nader B, et al. S419 the impact of real time artificial intelligence enhanced colonoscopy: a one year performance review. Off J Am Coll Gastroenterol ACG. 2024;119(10S):S297. doi: 10.14309/01.ajg.0001031044.32887.5b [DOI] [Google Scholar]
- 60.Alba C, Xue B, Abraham J, Kannampallil T, Lu C. The foundational capabilities of large language models in predicting postoperative risks using clinical notes. Npj Digit Med. 2025;8(1):95. doi: 10.1038/s41746-025-01489-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 61.Santacoloma MSD. Postoperative sepsis predictions made possible with AI. University of Florida Health, College of Medicine. 2024. [Cited July 3, 2025]. Available from: https://qpsi.med.ufl.edu/2024/09/06/postoperative-sepsis-predictions-made-possible-with-ai/. Accessed December 6, 2025. [Google Scholar]
- 62.Shickel B, Loftus TJ, Ruppert M, et al. Dynamic predictions of postoperative complications from explainable, uncertainty-aware, and multi-task deep neural networks. Sci Rep. 2023;13:1224. doi: 10.1038/s41598-023-27418-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 63.van den Eijnden MAC, van der Stam JA, Bouwman RA, et al. Machine learning for postoperative continuous recovery scores of oncology patients in perioperative care with data from wearables. Sensors. 2023;23(9):4455. doi: 10.3390/s23094455 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 64.Ren Y, Loftus TJ, Datta S, et al. Performance of a machine learning algorithm using electronic health record data to predict postoperative complications and report on a mobile platform. JAMA Network Open. 2022;5(5):e2211973. doi: 10.1001/jamanetworkopen.2022.11973 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 65.Tierney AA, Gayre G, Hoberman B, et al. Ambient artificial intelligence scribes: learnings after 1 year and over 2.5 million uses. NEJM Catal. 2025;6(5):CAT.25.0040. [Google Scholar]
- 66.Bedard A. The 8-Step Process for Leading Change | Dr. John Kotter [Internet]. Kotter International Inc. [Cited July 4, 2025]. Available from: https://www.kotterinc.com/methodology/8-steps/. Accessed December 6, 2025. [Google Scholar]
- 67.Hospodková P, Berežná J, Barták M, Rogalewicz V, Severová L, Svoboda R. Change management and digital innovations in hospitals of five european countries. Healthcare. 2021;9(11):1508. doi: 10.3390/healthcare9111508 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 68.Li B, Yue L, Nie H, et al. The effect of intelligent management interventions in intensive care units to reduce false alarms: an integrative review. Int J Nurs Sci. 2023;11(1):133–142. doi: 10.1016/j.ijnss.2023.12.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 69.Docus Research Team. How AI Symptom Checkers Helped Users in 2024 [Internet]. Docus. 2025. [Cited July 4, 2025]. Available from: https://docus.ai/blog/how-ai-symptom-checkers-help. Accessed December 6, 2025.
- 70.Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: a perspective for healthcare organizations. Artif Intell Med. 2024;151:102861. doi: 10.1016/j.artmed.2024.102861 [DOI] [PubMed] [Google Scholar]
- 71.Liu CF, Huang CC, Wang JJ, Kuo KM, Chen CJ. The critical factors affecting the deployment and scaling of healthcare AI: viewpoint from an experienced medical center. Healthcare. 2021;9(6):685. doi: 10.3390/healthcare9060685 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 72.Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447–453. doi: 10.1126/science.aax2342 [DOI] [PubMed] [Google Scholar]
- 73.Yang Y, Zhang H, Gichoya JW, Katabi D, Ghassemi M. The limits of fair medical imaging AI in real-world generalization. Nat Med. 2024;30(10):2838–2848. doi: 10.1038/s41591-024-03113-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 74.Schalekamp S, Van leeuwen K, Calli E, et al. Performance of AI to exclude normal chest radiographs to reduce radiologists’ workload. Eur Radiol. 2024;34(11):7255–7263. doi: 10.1007/s00330-024-10794-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 75.Misra R, Keane PA, Hogg HDJ. How should we train clinicians for artificial intelligence in healthcare? Future Healthc J. 2024;11(3):100162. doi: 10.1016/j.fhj.2024.100162 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 76.Patrick B. Association of American Medical Colleges. Medical schools move from worrying about AI to teaching it. 2025. [Cited July 3, 2025]. Available from: https://www.aamc.org/news/medical-schools-move-worrying-about-ai-teaching-it. Accessed December 6, 2025.
- 77.WHO. World Health Organization. Global Initiative on AI for Health. 2023. [Cited July 3, 2025]. Available from: https://www.who.int/initiatives/global-initiative-on-ai-for-health. Accessed December 6, 2025.
- 78.Gary D, Mark JMM. HL7 AI Standards – laying the Foundation. Health Level Seven Int. 2023;2023:1. [Google Scholar]