Mayo Clinic Proceedings: Digital Health
. 2026 Jan 29;4(1):100341. doi: 10.1016/j.mcpdig.2026.100341

Balancing Innovation and Ethics: Ambient Listening Artificial Intelligence in Health Care

Emma O’Neil,a Adam Rodman,b Lisa Soleymani Lehmannc,d
PMCID: PMC12945620  PMID: 41767129

The long hours physicians spend documenting in the electronic health record (EHR) are contributing to an epidemic of physician burnout.1 Efforts to efficiently type notes during clinical encounters erode physicians’ ability to be truly present with patients. Ambient listening artificial intelligence (AI) is increasingly adopted by health care systems and integrated into some EHR software systems, offering a promising solution to documentation burdens. This technology records audio from clinical encounters, uses automatic speech recognition to convert spoken language into text, distinguishes speakers, extracts medical information from the transcribed dialogue, and generates a structured clinical note. Clinicians review, edit, and approve the final version for inclusion in the patient’s EHR.
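The pipeline described above can be sketched in miniature. Everything in this sketch is an illustrative assumption: real systems use speech recognition and diarization models plus a large language model to draft the note, none of which appear here, so the encounter is represented as already-transcribed (speaker, utterance) pairs and the "extraction" is a toy rule.

```python
# Hypothetical sketch of an ambient listening AI documentation pipeline.
# Stubs stand in for the speech-recognition, diarization, and note-
# generation models a real system would use.

def transcribe_and_diarize(audio):
    """Stand-in for automatic speech recognition plus speaker labeling."""
    # A real system would derive (speaker, utterance) pairs from audio;
    # here the input is assumed to already be in that form.
    return audio

def extract_medical_facts(turns):
    """Toy extraction: collect patient-reported history and stated plans."""
    facts = {"subjective": [], "plan": []}
    for speaker, text in turns:
        if speaker == "patient":
            facts["subjective"].append(text)
        elif speaker == "clinician" and "plan" in text.lower():
            facts["plan"].append(text)
    return facts

def generate_note(facts):
    """Assemble a minimal structured draft note for clinician review."""
    return "\n".join([
        "DRAFT NOTE (requires clinician review and sign-off)",
        "Subjective: " + " ".join(facts["subjective"]),
        "Plan: " + " ".join(facts["plan"]),
    ])

encounter = [
    ("patient", "I've had a cough for two weeks."),
    ("clinician", "The plan is a chest x-ray and a follow-up in one week."),
]
note = generate_note(extract_medical_facts(transcribe_and_diarize(encounter)))
print(note)
```

The draft is deliberately labeled as requiring review, mirroring the workflow in which the clinician, not the system, approves the final record.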

Although the technology has significant benefits, it is critical to handle its integration into clinical care responsibly, safely, and in a way that is respectful of patients. We analyze the ethical concerns, risks, and benefits of ambient listening AI and offer recommendations for how health care systems can ethically integrate this innovative technology, minimizing the risks while realizing the benefits for clinicians, patients, and society.

Risks

Privacy and security

Ambient listening AI in the examination room captures sensitive patient data, raising concerns about privacy and security. Patients and providers may have concerns because the process generates a full transcript of a clinical encounter, which is not part of the medical record but remains legally discoverable. Passively capturing all information shared during a clinical encounter may cause unintended harm to patients and physicians. For example, discussions about abortion counseling in restrictive states or discussions of a patient’s undocumented immigration status could harm patients and physicians, especially if this information were subpoenaed. Clinicians typing a note may think twice before including potentially damaging information, whereas they may overlook such information when it is automatically included by ambient listening AI. Transparency around data storage and retention, as well as how patient data are used for research and development, is critical.

The Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule sets standards for the use and disclosure of protected health information (PHI), allowing patients to access and control their health data.2 Although AI technology must adhere to HIPAA standards, there are no restrictions on the use of de-identified patient information.3 This necessitates scrutiny of how technology companies de-identify data, the security measures in place, and the potential risk of harm if there is a data breach. Powerful modern AI systems are capable of re-identifying data, even when PHI is removed.4

Moreover, the HIPAA Security Rule requires safeguards for electronic PHI against cybersecurity threats. AI technology must be implemented with security measures to ensure the confidentiality, integrity, and availability of processed PHI. With rising cybersecurity threats, governmental action is necessary to enhance security standards.2

Differential privacy, which protects against re-identification by adding noise to data, and federated learning, which reduces the risk of data breaches and misuse by keeping raw patient data local, are technical methods for preserving privacy. Additionally, audit-trail-driven safety analytics can support proactive identification of risks and enhance system-wide monitoring.
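As an illustration of the first of these techniques, the sketch below releases a count under the Laplace mechanism, the standard construction for differential privacy. The count, the epsilon value, and the fixed seed are arbitrary choices for the example, not recommendations.

```python
import math
import random

def dp_count(true_count, epsilon, rng):
    """Release a count under epsilon-differential privacy using the
    Laplace mechanism. A counting query has sensitivity 1 (one person
    changes the count by at most 1), so the noise scale is 1/epsilon;
    smaller epsilon means stronger privacy and more noise."""
    scale = 1.0 / epsilon
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    # Inverse-CDF sampling of a Laplace(0, scale) random variable.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

rng = random.Random(42)  # fixed seed so the sketch is reproducible
noisy = dp_count(true_count=1200, epsilon=0.5, rng=rng)
print(noisy)
```

The released value is close to, but deliberately not equal to, the true count; an analyst querying many such statistics cannot pin down any single patient's contribution.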

Accuracy

Although AI accuracy is a common metric of evaluation and likely to improve, inaccuracies in AI-generated clinical notes pose risks.5 AI-generated notes may omit key information, include wrong information, or add inappropriate or irrelevant information.6,7 If physicians automatically accept errors in AI-generated notes, incorrect information may enter patient records, potentially affecting future care.

A study by The Permanente Medical Group identified instances of hallucinations, where the AI generates content that appears plausible but is false.1 In one example, the physician mentioned scheduling a prostate examination, and the AI erroneously generated that a prostate examination had been performed.1 Additionally, there are limited data on the accuracy of ambient listening AI when used for patients with limited English proficiency or nonstandard accents. The performance of the technology should be analyzed across different physician and patient demographic groups to identify potential bias.

Physicians will need to vigilantly review AI-generated notes for quality and accuracy, but as reliance on the technology increases, there is a risk that thorough reviews will diminish. Clinicians are the most important guardrail for this new technology. Although the de facto standard is manual review of every note by a human in the loop, we worry that automation bias may lead to broader acceptance of errors. Deeper auditing, including mitigation strategies for automation bias, is important for full deployment. Emerging technologies such as real-time hallucination detection may promote accuracy. Features that map generated summaries to source data may support auditing and aid clinicians in verifying accuracy during the review process.8
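A crude sketch of that source-mapping idea follows: each sentence of the draft note is checked against the raw transcript, and sentences whose content words are mostly absent are flagged for the reviewer. The word-overlap test and its threshold are illustrative assumptions; production systems would use semantic matching.

```python
def flag_unsupported(note_sentences, transcript, threshold=0.6):
    """Flag note sentences poorly supported by the encounter transcript.
    Support is the fraction of a sentence's content words (here, words
    longer than 3 characters) that appear in the transcript; sentences
    below the threshold are surfaced as possible hallucinations."""
    transcript_words = set(transcript.lower().split())
    flagged = []
    for sentence in note_sentences:
        words = [w.strip(".,").lower() for w in sentence.split()]
        content = [w for w in words if len(w) > 3]
        if not content:
            continue
        support = sum(w in transcript_words for w in content) / len(content)
        if support < threshold:
            flagged.append(sentence)
    return flagged

# Echoes the prostate-examination hallucination described earlier.
transcript = "we should schedule a prostate examination at your next visit"
draft_sentences = [
    "Prostate examination was performed today.",
    "Prostate examination scheduled for next visit.",
]
flagged = flag_unsupported(draft_sentences, transcript)
print(flagged)
```

The fabricated "was performed" sentence is flagged while the faithful one passes, directing the clinician's attention to the claim that lacks a source in the conversation.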

Consent

Given the current risks, transparency about ambient listening AI and patient consent is ethically required. Most patients do not expect their conversations to be shared outside their trusted health care system and may not realize that the technology generates a transcript of their conversation. In addition to potentially harmful privacy and security breaches, there is a possibility of patient harm as a result of propagation of errors in notes or the inclusion of potentially damaging patient information. Recording patients without consent breaches trust and infringes on privacy. Transparency about the use of the technology can foster trust and allows patients to make informed decisions regarding their personal information. Information regarding data use, storage, and access is important to patients in their decision to consent.9

Questions remain about the consent process: Should consent be obtained before each visit? Who should obtain it? Is verbal consent sufficient? Should patients be given advance notice if recording is the default? Should patients and clinicians be allowed to opt out? Lawrence et al9 found that many patients want a clear opt-out option. What if the technology becomes so commonplace and helpful that clinicians refuse to see patients who do not agree to its use? As the technology becomes commonplace and patients’ expectations change, the necessity of consent discussions may evolve.

A physician’s authority may inadvertently pressure patients to consent to use of the technology. If consent is requested by physicians at the point of care, patients may feel compelled to agree. A more respectful approach would involve disclosing the technology use in advance, having nonclinical staff members affirm patients’ preferences, and allowing patients to opt in or out without physician pressure.

Benefits

Ambient listening AI may reduce documentation burden and physicians’ cognitive load, enhance efficiency, and improve the physician-patient relationship by allowing physicians to focus on patients as opposed to the computer.9 Physicians can maintain greater eye contact and more natural interactions, fostering better communication and rapport. Physicians may also feel they have more time to address patient concerns. Furthermore, the immediacy of having visit summaries enables patients to access their care plans quickly, supporting the quality and timeliness of clinical decision making.

The potential to reduce physician burnout itself may impact the patient experience because clinicians may feel less rushed or distracted, promoting trust between patients and physicians. Such trust is critical for patient outcomes because the breakdown of this relationship may otherwise reduce patient engagement with care, reduce adherence to care plans, and increase the likelihood of negative health outcomes. In a study by The Permanente Medical Group on ambient AI scribes following the initial pilot phase, a majority of physicians reported a positive experience, and all patients rated the technology’s impact on the quality of their visit as neutral or positive.10

Although ambient listening AI may enhance patients’ and physicians’ experience, it is unclear how reduced note-taking affects physicians’ cognitive engagement. Note-taking may be distracting, reducing focus on the patient, but it can also support physicians’ metacognition. Automated documentation does not guarantee improved care; although it may save time and mental energy, it could inadvertently degrade clinical reasoning, especially as ambient listening AI incorporates more decision support.

Clinical notes often contain information that has been propagated forward from older notes, sometimes reflecting outdated information not discussed during the most recent encounter. This trend creates the risk of misrepresenting a patient’s current state and makes notes cumbersome to read. Ambient listening AI has the potential to produce notes that better reflect the current encounter, reducing clinicians’ temptation to copy information from previous notes.

Access to visit recordings or transcripts of the recordings may directly benefit patients. Providing patients with access to visit transcripts through a patient portal may empower them similarly to how OpenNotes has fostered transparent communication.

Recommendations

To mitigate the risks of ambient listening AI while maximizing the benefits, health care systems can implement the following strategies:

1. Exceed HIPAA standards: Although HIPAA establishes a baseline for privacy and security, health care systems can adopt stricter measures. This includes setting clear expectations for third parties regarding data sharing and privacy protections. Essential safeguards such as encryption, authentication, authorization, auditing, and monitoring must be in place to protect patient data from unauthorized access. On-device processing to avoid external transmission of audio files can support data privacy.

2. Transparency and meaningful disclosure: Health care systems should communicate transparently with patients about the technology. Patients should receive clear, concise documentation outlining how it is used, associated risks and benefits, and their option to opt out. Ideally, this information should be provided electronically before their appointment, allowing patients time to review it. Patients should have the opportunity to express concerns with nonclinical staff. To increase awareness, clinics could display signs similar to those used for TSA facial recognition technology (Figure).

3. Ensure accuracy: Clinicians must recognize the importance of carefully reviewing AI-generated notes. Guidance on best practices for using the technology should be provided, and clinics can implement a quality review by monitoring a random sample of notes for accuracy.

4. Regulatory standards: Regulation can elevate cybersecurity standards and enforce scalable oversight. It could standardize implementation requirements for technology developers and operational processes for medical centers. Standards should mandate validation, testing, and post-deployment monitoring, and vendors should be required to report standardized accuracy measures.

5. Stakeholder engagement: Involving a diverse group of stakeholders, including patients, caregivers, clinicians, and data handlers, in the ethical implementation of the technology is crucial for building trust.4 These stakeholders could help assess risks and develop solutions that will maintain responsible use of the technology.
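The quality review in recommendation 3 could, for example, draw a seeded random sample of signed notes each month so that the audit is both unbiased and reproducible. In this minimal sketch the 5% rate and the note identifiers are hypothetical choices for illustration.

```python
import random

def sample_notes_for_audit(note_ids, rate=0.05, seed=None):
    """Select a reproducible random sample of signed notes for manual
    accuracy review. A fixed seed lets a second reviewer regenerate
    exactly the same audit set."""
    rng = random.Random(seed)
    k = max(1, round(len(note_ids) * rate))  # always audit at least one note
    return sorted(rng.sample(note_ids, k))

all_notes = [f"note-{i}" for i in range(200)]  # hypothetical identifiers
audit = sample_notes_for_audit(all_notes, rate=0.05, seed=7)
print(len(audit))  # 10 notes drawn for review
```

Because sampling is driven by a recorded seed, the selection itself can be audited, which matters when the review results feed regulatory reporting.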

Figure.


Example sign indicating use of ambient listening artificial intelligence in the examination room. The QR code icon can be replaced with a QR code that directs patients to additional information, empowering them to learn more about the technology and its application in their health care setting.

As we navigate the complexities of integrating innovative technology into clinical care, a commitment to ethical standards will allow us to realize respectful patient-centered care that is also efficient. The future of health care depends on our ability to innovate responsibly.

Potential Competing Interests

Dr Rodman reports grants from ARPA-H, NIH, Google, Macy Foundation, and Gordon and Betty Moore Foundation; reports consulting fees from Google; and is a board member of AO Foundation. The other authors report no competing interests.

References

