AMIA Annual Symposium Proceedings. 2026 Feb 14;2025:899–908.

Leveraging Provocative Design Methods to Address Implicit Bias in Clinical Interactions through Technology

Deepthi Mohanraj 1, Raina Langevin 1, Libby Shah 1, Janice Sabin 1, Brian R Wood 1, Wanda Pratt 1, Nadir Weibel 2, Andrea L Hartzler 1
PMCID: PMC12919562  PMID: 41726505

Abstract

Implicit bias impacts the quality of patient-clinician interactions, influencing patient outcomes and trust in healthcare. Most interventions to mitigate bias rely solely on expensive human assessments, rather than leveraging AI technology with clinician input. To explore clinician-envisioned interventions, we conducted interviews with 16 primary care clinicians using provocative design methods to facilitate innovative ideation on using technology to address implicit bias. Themes from interviews included: patient communication monitoring, clinician self-awareness, systemic solutions, optimizing workflow, clinician education, and patient feedback. These envisioned interventions provide design considerations for technology-based implicit bias feedback tools. The broad range of innovative solutions generated by clinicians at various career stages reflects the utility of provocative design methods in unlocking creative thinking among a population that is not often encouraged to think beyond structured real-world constraints.

Introduction

Implicit biases are automatic, unconscious attitudes based on learned associations between social groups and characteristics, such as race, sexual orientation, and gender1. Such attitudes can lead to discriminatory behaviors and unequal treatment of individuals seeking healthcare, where biases impact the quality of patient-clinician interactions and influence patient outcomes2,3. As a result, implicit bias education is a vital component of clinician training at all career stages4. Although the effects of implicit bias are well-documented5, a gap remains in how to leverage technology to mitigate the impact of these biases through educational interventions that promote patient-centered communication skills among clinicians6.

User participation in the development of technology-based interventions is essential, particularly for tools designed to address bias. Ensuring that these tools align with clinician needs, experiences, and preferences increases the potential for adoption and impact. The ability of clinicians to think creatively can help address challenges in healthcare since the healthcare system often fosters an environment with rigid protocols, evidence-based practice, and standardized healthcare delivery7,8. Despite this tradition, recent explorations into the future of healthcare highlight the growing importance of creativity in addressing complex problems, especially with the emergence of AI and digital technologies allowing for innovative solutions9.

To foster innovative thinking in guiding the design of implicit bias interventions, we applied provocative design methods10, which aim to provoke participant reactions, challenge existing attitudes, and encourage reflection on hypothetical scenarios. A key element of provocative design is the use of “provotypes”: deliberate tensions or barriers within the field of interest that are normally avoided but, when surfaced, can support creative problem-solving10,11. One such method, the invisible design film technique11, uses ambiguous film-based scenarios to encourage participants to reflect critically on a challenging issue. Rather than focusing on technology, this technique uses short film-based scenarios as design provocations that embody tensions and provoke reflection on difficult-to-discuss issues, such as implicit bias. Invisible design films focus on dialogue between characters, leaving participants to unpack the film and challenging them to imagine and speculate on the design of technology11.

This study investigates the invisible film technique11 as a provocative design method to engage clinicians in ideation, generating innovative technological solutions to address implicit bias in clinical interactions between patients and clinicians. We aim to identify the types of solutions that emerge from this approach and to gather clinician-generated design considerations for future communication feedback interventions focused on implicit bias.

Materials and Methods

We conducted 1-hour interviews over Zoom12 with primary care clinicians who practice in academic and community-based clinical settings to explore their ideas for technology-based interventions to address implicit bias in their clinical interactions with patients. We used these envisioned technologies to formulate design considerations for future implicit bias interventions. Study procedures were reviewed by the University of Washington IRB and determined to be exempt (#STUDY00018136). Participants were recruited from academic and community-based primary care practices in the WWAMI region (i.e., Washington, Wyoming, Alaska, Montana, and Idaho). Participants completed a survey on demographics and clinical experience that we used to characterize the sample. Interviews were recorded and transcribed for qualitative analysis.

Data collection

We employed the invisible design film technique11 to prompt participants to brainstorm future technology for addressing implicit bias in patient-provider communication. To align participants’ thinking before they envisioned future directions for intervention, we showed participants two publicly available 2-minute videos in which actors illustrate a more and a less patient-centered interaction during a fictitious primary care visit (Figure 1)13.

Figure 1.


Short video prompts: The less patient-centered interaction (left) depicts the clinician standing over the patient, not making eye contact, and focusing on the chart, while the patient looks away out of discomfort; the more patient-centered interaction (right) depicts the clinician sitting at eye level and in close proximity to the patient, with the patient engaged in the conversation and mutual eye contact.

The videos served as prompts for participants to brainstorm future technology that monitors communication in the “exam room of the future” to improve patient-provider experiences. The interview guide (Table 1) first asked participants to unpack the film to situate the expression of implicit bias within the context of patient-provider communication. Participants were then asked what technologies came to mind when thinking about the scenes from the videos and how future technology could help mitigate implicit bias by improving patient-provider interactions.

Table 1.

Interview guide

Unpacking the film
- How well does this scene resonate with your clinical experience?
- What did you see happening in these scenes?
- What did you see in the communication between the patients and providers in these scenes?

Technology ideation
- With these scenes in mind, how might you envision an “exam room of the future”?
- What future technology that monitors communication in the room could help improve patient-provider interactions? (e.g., technology that ‘reads the room’)

Data analysis

We identified themes from interviews through inductive content analysis14, which entailed establishing a codebook among five researchers (DM, RL, LS, AH, JS) who reviewed transcripts to identify and refine codes through weekly discussion and consensus. Then, three researchers (DM, RL, LS) independently coded transcripts by applying the codebook in Atlas.ti15 and met weekly to identify themes. DM compared themes by participants’ level of clinical experience, with “early-stage clinician” describing participants who are residents or have fewer than 10 years of experience and “experienced clinician” describing participants with 10 or more years of clinical experience.

Results

Participants

We interviewed 16 participants (P1-P16) (Table 2). The majority of participants were physicians who practice in community-based clinics with over 10 years of clinical experience. Four of the physicians were residents. Most participants identified as White women not of Hispanic, Latino, or Spanish origin. When asked whether they had attended any continuing education workshops in the last 12 months, the majority reported having attended workshops on working with patients who are LGBTQ+ or racially diverse, with average training times of 4 and 8 hours, respectively.

Table 2.

Participant Characteristics (n=16)

Age - mean (SD), range: 47 (14), 29–75
Gender - n (%)
  Man: 6 (37.5%)
  Woman: 10 (62.5%)
Race - n (%)
  White: 13 (81.3%)
  American Indian/Alaska Native: 1 (6.3%)
  Asian: 3 (18.8%)
Ethnicity - n (%)
  Not Hispanic, Latino, or Spanish origin: 14 (87.5%)
Clinical role - n (%)
  Physician: 12 (75.0%)
  Behavioral Health Provider: 3 (18.8%)
  Physician Assistant: 1 (6.3%)
Years in role - mean (SD), range: 14 (11), 0.2–33
Area of practice - n (%)
  Family medicine: 12 (75.0%)
  Behavioral health: 2 (12.5%)
  Other (“Primary Care at Residency Clinic”, “Internal Medicine Primary Care”): 2 (12.5%)
Clinical practice setting - n (%)
  Community-based clinic: 11 (68.8%)
  Academic medical center: 3 (18.8%)
  Other (“Non-profit hospital”, “Non-profit”): 2 (12.5%)

Unpacking the films

When asked questions to unpack the film, participants’ responses fell into three themes: 1) relating to their own clinical experiences, 2) recognizing potential bias in the scenes, and 3) focusing on body language in the communication between the patient and provider. First, when relating what they saw in the video to their own clinical experiences, participants resonated with the time constraints the clinicians may have experienced, which caused the rushed nature of the less patient-centered interaction. Second, participants identified potential bias when the clinician assumed the patient did not understand her own experience. Participants also identified a power dynamic between the clinician and the patient, as well as assumptions that the patient was displaying drug-seeking behavior when speaking to the clinician about her back pain. Third, participants described nonverbal and verbal cues in the body language of the clinician and patient, such as eye contact, distance from the patient, and tone of voice. These themes from the video prompts helped ensure participants recognized the potential for implicit bias in clinical communication and were ready to brainstorm solutions during the technology ideation.

Technology ideation

We identified six themes in the technical solutions participants generated: Patient communication monitoring, clinician self-awareness, systemic solutions, optimizing workflow, clinician education, and patient feedback. We describe each theme with representative participant quotes.

1. Patient Communication Monitoring

Patient communication monitoring refers to technology that can objectively track and analyze patient communication behaviors during clinical visits. This includes envisioned technologies focused on assessing and providing clinicians with feedback on verbal and nonverbal communication cues expressed by patients during encounters.

Visual and Physical Cues: Participants described the potential of technology to monitor nonverbal patient communication behaviors through visual cues, such as eye contact and body language, and physical cues, such as cortisol levels, vitals, and sweat gland metrics. Participants earlier in their careers emphasized real-time feedback, while participants later in their careers envisioned technologies that present data in post-visit reports. For example, P14, a behavioral health care provider, described a real-time technology in the form of a clinician-worn contact lens that monitors patients’ nonverbal cues and notifies clinicians about potential communication problems:

“[The] Biggest thing I think is body language. So having a little contact lens and having a little notification pop up like hey, they [the patient] made this micro expression or they have a grimace on their face or they’re shifting away from you. That way you can know if something I said or I’m doing didn’t land well or something like that I think would be really helpful.” -P14

P2, an experienced clinician, described how vital signs could be monitored as a metric of anxiety during the interaction:

“If we had a camera, then the machine in the future could read those cues. They could measure the temperature. So if the patient’s temperature was going higher or if the heart rate was going higher, maybe it’s because the patient’s getting anxious because I am doing something wrong. So I could have a little thing telling me in my mind, Hey, you are being too pushy.” -P2

Vocal Cues: Participants also envisioned technology, such as glasses, a monocle, contact lenses, and smartwatches, to monitor vocal cues and voice patterns, such as verbal hesitancies expressed by patients. Some early-career participants described hesitancies as a sign of patient confusion, whereas participants with more clinical experience tended to interpret them as a sign of patient danger or unspoken issues. The most common technology envisioned for monitoring vocal cues was ambient listening. For example, P12, an early-stage clinician, envisioned a technology that captured hesitations in the patient’s voice and used AI to prompt the clinician to correct behaviors:

“Little moments like that with teach back or a moment where the patient might pause a little too long before responding, where an AI system could say maybe confusion, ask more questions, things that would force us to just slow down for a second and focus on the communication.” -P12

2. Clinician Self-awareness

Clinician self-awareness refers to technology that objectively tracks and analyzes clinician communication behaviors during patient visits. This includes technologies focused on assessing and providing clinicians with feedback on visual and physical cues, spatial positioning in the room, and vocal cues to raise clinician awareness of their tendencies when communicating with patients.

Visual and Physical Cues: Participants envisioned technology ranging from cameras and glasses to space suits that monitor their own reactions, body language, eye contact, and other nonverbal communication cues. The most common intervention included cameras, such as a body camera, a camera in the room, or a camera behind a one-way mirror. For example, P7, an experienced clinician, drew a parallel to police officers and the use of body cameras, and how similar technology could be used in healthcare visits:

“I think if doctors could see themselves sometimes it would be tremendously helpful. ‘cause they don’t often, sometimes they don’t know how they’re being perceived because there’s not a mirror in the room for them to see themselves mm-hmm. I just think about cops and body cams. They’re a really good idea, but we need that to be reversed for providers.” -P7

Similarly, P8, an experienced clinician, envisioned a multi-camera system in the clinic room that could monitor clinicians’ behaviors:

“I think if we had multiple cameras in the room instead of just one video camera that actually could measure head distance, could focus on eye movements and give a percentage of eye contact to the resident and then analyze facial expressions for empathic concern, frowning, rejection, distaste if the software was available to actually recognize those common expressions and feedback. Every time you have a patient who has [certain medical condition], you get a look of distaste on your face. That would be a useful piece of information for the resident who’s doing exams.” -P8

Other interventions that provide real-time feedback about implicit bias include signals delivered through “electric shocks”, changes in room lighting, and alarms. Some participants described how technology could benefit clinicians by providing feedback that highlights discrepancies in care based on patient characteristics, as in an example shared by P3, an experienced clinician:

“A metric like the amount of time I spend in my, in red button and green button, depending on the ethnicity of the patient. Okay. Well, that’s gonna be a couple of visits before it starts giving me aggregate data. And then I’ll start seeing, ooh, I’ve got red button problems with my Hispanic women [patients]. Uh oh. Uh, I, I need to really start thinking about this a little bit more.” -P3

Spatial Positioning: A few participants envisioned technologies that assess spatial positioning in the room, proposing technologies that monitor where the clinician stands in relation to the patient, calculate standing versus sitting time, and prompt the clinician to sit at eye level to facilitate patient engagement. For example, P2, an experienced clinician, envisioned the use of sensors to monitor positioning in the room:

“If the chair is not feeling my weight, it can remind me to sit. So we could put a sensor on the chair saying, the chair is expecting me to sit after two minutes of being in the room. Or it could say like my Fitbit tells me every hour that I need to move. So maybe we could create a room that has all of these sensors for different issues and the feedback will be on the iWatch or the Fitbit or Garmin or device you’re using.” -P2

Vocal Cues: Participants also envisioned monitoring clinicians’ vocal communication cues, mentioning ambient listening technology, AI, holograms, and smartphones to track tone, interruptions, and cadence. Some suggested glasses and contact lenses that provide clinicians with real-time feedback and suggestions for expressing empathetic statements to improve verbal communication with patients. For example, P12, an early-stage clinician, described how a hologram could detect vocal patterns:

“I think the best way that I could see it would be a third party in the room, whether that would be the science fiction hologram in the corner or something that was listening to the conversations going on and was able to identify these patterns in communication that have high risk of bias. Where technology could assist us to keep that interaction a little bit more honest I guess. And give us feedback that was not hurtful because it’s coming from a machine, but also very honest.” -P12

P8, an experienced clinician, also mentioned the importance of empathy when speaking to patients and how empathetic statements can be monitored through vocal communication cues:

“I don’t think we use enough empathic statements and empathic facial expressions when patients divulge important pieces of information and talk about their pain and discomfort. Suppose you had a pair of glasses that had electronic messages that flashed across them that said, ‘Empathic statements and expression required’.” -P8

P3, an experienced clinician, described how speed dials could be integrated into the EHR so clinicians could monitor how patient-centered their questions and statements to the patient are:

“You could have like three little speed dials at the top of your E M R and you walk in there and you start asking questions and your, and your patient-centeredness speed dial starts in the red and it goes up into the green zone as you get there. And then you’re talking along and you get tired and it drifts back into the yellow. And then you ask a very insightful question, you get back in the green.” -P3

3. Systemic Solutions

Systemic solutions refer to interventions that enhance existing services supporting patient-provider communication in healthcare systems. These solutions include optimizing interpreter services, making electronic health record (EHR) enhancements, and improving clinic room layout.

Optimizing Interpreter Services: Participants discussed the benefits and limitations of current interventions used during visits and how they impact patient communication. One concern was with the use of interpreters. While necessary, including interpreters can interrupt the flow of conversation and decrease the number of concerns that can be addressed during the visit. For example, P11, an early-stage clinician, describes how current interpretation services negatively impact communication with patients:

“It’s hard to make the gestures I’m making and the expressions on my face make any sense to them. It’s hard for me to interpret their expressions when there’s just this lag of something interpreting in the middle. And it’s really frustrating when it seems like they know enough English that we’re getting it and then we have to pause and wait for the interpreter to do their thing. It takes twice as long when there’s a translator in the middle.” -P11

EHR Enhancements: More experienced participants described how EHRs can be leveraged to create more personal connections with patients by surfacing patient information, such as preferred name, demographic information, support system information, and language preference. Some participants also mentioned including phrases from a patient’s native language that the healthcare worker could say to create a more welcoming atmosphere, as described by P9, an early-stage clinician:

“Like a simple like ‘How to introduce yourself when you walk in the room.’ Um, or how to say common words in those languages like ‘doctor,’ ‘resident,’ um, ‘hello,’ you know, simple phrases. It doesn’t have to be advanced with like, you know, pronunciation guides on phonetics. Um, I think that would feel really welcoming to the patient that you were thinking about that.” -P9

Improving Clinic Room Setup: Participants also emphasized the role of room layout in facilitating patient communication, with some suggesting that even small changes, such as having enough chairs in the room for the patient, clinician, and family members, could improve their interactions with patients. Some participants mentioned integrating technologies that determine a room’s ideal setup for the best communication to aid room design. For example, P11, an early-stage clinician, detailed how this type of technology would have been useful in their own experience:

“I wonder if there’s a way for technology to notice the number of people and the number of chairs needed. If there could be a system to make ‘em easily more easily accessible. ‘cause I know like on my palliative care rotations, whenever we had to make sure that we had the right number of chairs for the right number of people, sometimes it took minutes out of our day to track down like, and poking into other patient rooms.” - P11

4. Optimizing Workflow

Optimizing Workflow refers to technologies that use AI to assist healthcare workers in completing their workload and enhancing decision-making to mitigate the role of bias in providing accurate and efficient patient care. This includes technologies that ease time constraints on clinicians so that they have more time to communicate with patients, as well as solutions that clinicians can consult when making medical decisions to reduce bias.

Improving Time Constraints: Participants reported that one of the biggest barriers they face in communicating with patients is time constraints; thus, technologies that decrease workload could allow clinicians to spend more time with their patients. Examples of these interventions include AI-assisted note-taking, robots managing routine appointments such as medication refills, and reminders of what imaging and labs to order. For example, P10, an experienced clinician, expressed how this type of technology would benefit their own practice:

“My sole interest right now is actually in helping to alleviate some of those pressures on the physician and on the provider. Whether that’s the fact that I now don’t have to sit down for another 15 minutes and write this note up or it’ll be faster the orders, I don’t have to separately type, I don’t have to remember I meant to get an X-ray and an MRI…” -P10

Another example is described by P5, an early-stage clinician, in which technology could help address medication checks to alleviate the appointment load on clinicians, allowing more time for communication with patients who have complex issues:

“Maybe there’s like a robot that can check that everything’s in line. So like say I have several patients that are on like oxycodone or some sort of controlled substance. And every month I have to like, you know, check to make sure they prescribed it on time and they didn’t take too many and they didn’t get it from too many providers…and I think it would reduce bias. ‘cause I think a lot of people are coming in to get their pain meds refilled. They sometimes feel like criminals because they have to go through this process to get the medication they want.” -P5

Clinical Decision-Making Support: Participants also saw opportunities for AI to support clinical decision-making in ways that reduce bias. For example, one envisioned technology was an AI tool that generated a list of differentials based on the patient’s chief complaint and demographic information to decrease the potential for bias clinicians may carry from their personal experiences when diagnosing patients. Participants also suggested that AI could develop guiding questions that clinicians should ask patients to make sure that clinical decisions were not guided by implicit bias and that no information was missed. One participant mentioned a USB drive with all past patient data that could be given to the clinician so that they have all the relevant data needed to provide a care plan without having to make assumptions about the patient’s past medical history. For example, P13, a behavioral health care provider, described how AI could assist clinical decision-making during a patient visit for back pain:

“So I think with each, let’s say you open up an encounter [in the EHR] or you start to pre-chart on a patient and AI kind of comes in with the presenting issue. So it’s like back pain and so then it can pull up relevant encounters with something generated with the word pain and then it could highlight the things that the provider should be aware of before going into the visit…some technology with Epic [EHR] where there was something that was listening along that it could prompt questions to ask and it kind of takes the art out of it and the human element, but it could also reduce bias if it’s more objective and a formula to follow, at least as covering the basis.” -P13

5. Clinician Education

Clinician education refers to technologies that teach clinicians how to improve their communication with patients. This includes reviewing videos of their patient interactions with experts, receiving training to improve nonverbal communication, and opportunities to practice communication skills through simulated patient visits.

Video Review: The most common educational tool suggested by participants was video review guided by a professional with expertise in communication, such as a psychologist, psychiatrist, or interpreter. Some participants stated this is already a frequently used training method in residency programs, and while it can feel uncomfortable, it can help clinicians become aware of their communication behaviors during patient visits. For example, P8, an experienced clinician, described how video review is often used in residency:

“I think it would be an extension of what a lot of training programs currently do, which is just video review. They video a consultation, and then a psychologist and a faculty member frequently together as a team will ask the resident to analyze what’s happened and assist them with both verbal and nonverbal components of the interaction.” -P8

Nonverbal Communication Training: Participants mentioned the use of AI to provide education on how to improve nonverbal communication, such as facial expressions and body language. This type of education can benefit clinicians by guiding them to improve their nonverbal communication rather than just becoming aware of it. For example, P6, an experienced clinician, envisioned a software tool that could analyze facial expressions during a patient encounter and provide education on how to change nonverbal expressions that might convey bias:

“I think training people so they need to know what sort of expressions they’re making that they’re not aware of, and then how those are interpreted. I know there’s like a whole body of science on facial expressions, physical, nonverbal cues and, what they convey, or at least what people think they convey. So sort of matching the observed cues and what the likely meaning is or how they’re interpreted would probably be really useful.” -P6

Simulated Patient Visits: Participants also suggested mock patient interactions as a useful training tool to increase clinician exposure to patients with different ethnic backgrounds and facilitate practicing uncomfortable conversations. Some of the more experienced participants mentioned the importance of training experiences on cultural norms for patients from different backgrounds, such as conversational distance and eye contact. For example, P8, an experienced clinician, recounted how comfort with conversational distance can differ with culture:

“There was some work done in the ‘70s on comfortable conversational distance in different cultures and different races. Maybe we should be sensitive to that. Maybe one should adjust one’s conversational distance according to the race and gender of your patients…” -P8

6. Patient Feedback

Patient feedback refers to feedback that patients provide to clinicians about how they felt their visit went. Examples of these interventions include providing an automatic survey, getting anonymous feedback, and having AI color-code feedback from patients. Participants commented on the utility of feedback from patients, stating it would be most helpful if presented to clinicians immediately after the appointment in an actionable format. Some participants thought that patient feedback should be anonymous to create a safe space for patients to provide their honest thoughts. Participants with more clinical experience wanted to understand patterns in this feedback, such as whether patients from certain ethnic backgrounds routinely do not feel heard. For example, P13, a behavioral health care provider, described how a tablet could be used to solicit patient feedback following a clinic visit:

“As part of the rooming process and as part of the checking out process is that the MA comes in and gives them the tablet and says, just leave it when you’re done. Hopefully the patient can feel safe enough to do that where then they’re not going to have a negative interaction following that or be treated differently because it’s already at the end of the encounter.” -P13

Participants also discussed the utility of AI in making patient feedback more digestible to clinicians, such as through color coding, as described by P8, an experienced clinician:

“Well, there are patient feedback forms that we use, so you could hope that the patient would say, ‘I was uncomfortable because Dr. So-and-So stood over me the whole time and wasn’t very sympathetic.’ So that’s reasonably powerful. If we’re trying to think of artificial intelligence, you could color code the feedback. So you have red, green, and yellow. The red would be, ‘Stop doing this.’ So those are a couple of examples for you.” -P8

Discussion

Implicit bias among healthcare professionals can significantly impact patient care and contribute to healthcare inequities14,15. Through provocative design methods, we engaged 16 primary care clinicians in generating innovative technological solutions to address the impact of implicit bias in patient-clinician communication. Using the invisible film technique, we identified six themes in the ideas generated: patient communication monitoring, clinician self-awareness, systemic solutions, optimizing workflow, clinician education, and patient feedback. Across themes, the envisioned interventions included technology-based assessment and feedback for real-time and post-visit use. While several of the envisioned interventions already exist, such as smartwatches that sense vital signs16, AI-monitored police body cameras17, ambient scribes that facilitate note-taking18, standardized patient simulations19, and studies on optimization of clinic layout20, participants identified technological enhancements and several new directions, including interventions that address implicit bias in clinical decision-making and provide actionable feedback.

Findings from this study demonstrate that provocative design methods like the invisible design technique11 can help clinicians approach sensitive topics such as implicit bias with creativity. By presenting fictitious scenarios through short films, this method allowed participants to explore challenging themes such as power dynamics, race, ethnicity, culture, and assumptions about communication behavior, while encouraging reflection on how to improve clinical interactions. The method also fostered creativity as participants began to think outside traditional frameworks and experiment with new types of patient interactions, such as contact lenses that monitor patient vital signs, holograms that monitor clinicians’ verbal communication patterns, and AI-guided clinical decision-making.

By engaging clinicians at varying experience levels, we explored whether openness to innovative thinking and implicit bias monitoring strategies varied by career stage. In this study, participants’ experience level did not produce vastly different technical ideas, although some differences emerged: more experienced participants expressed greater interest in using retrospective data to understand trends in how their care might differ between patients of different ethnicities, whereas early-career participants focused on immediate feedback through real-time interventions.

This work highlights key design considerations for technologies that aim to address implicit bias in patient-clinician interaction, including a focus on objective metrics, actionable feedback, and preventative measures against implicit bias. Prior research on communication monitoring tools focuses on cues from the clinician, such as a smartwatch that measures vital signs and speech behaviors16 and audio and language processing pipelines that detect opportunities for empathetic statements21. While participants recognized the utility of clinician monitoring technologies through interventions in the “Clinician Self-awareness” theme, the “Patient Communication Monitoring” theme also suggested the importance of collecting objective measurements of patient communication to uncover patients’ true feelings during the interaction. Technologies for patient monitoring, such as cameras and sensors for facial and postural detection, already exist but are used primarily to monitor patient recovery22. The focus on both patient and clinician monitoring suggests that future technologies should combine both rather than focusing on only one individual in the interaction. Alongside objective measures, participants commented on the importance of subjective measures such as direct feedback from patients. The “Patient Feedback” theme underscored the importance of hearing directly from the patient and identified ways to improve the authenticity and actionability of this feedback on patient-provider communication.

While participants stated that the first step in addressing implicit bias is recognizing it, they also described the importance of actionable feedback. This is consistent with prior research on medical education, which indicates that feedback is most beneficial when the recipient has a thorough understanding of their performance and detailed guidance on enhancing their skills23. The “Clinician Education” theme suggests how future technologies could provide such guidance, for example by having professionals review a video of their patient interaction and receive coaching on improving their facial expressions and body positioning. Participants also suggested working with standardized patient actors during simulated visits to practice engaging with patients of different ethnicities and rehearsing difficult scenarios. These ideas are already integrated into many residency programs and have been well accepted by trainees24. Another form of actionability that participants desired was understanding potential trends in their implicit bias behaviors by having their communication patterns mapped to specific patient characteristics.

Participants also stated the importance of preventative measures against bias before entering the room, such as the ideas that surfaced in the “Systemic Solutions” and “Optimizing Workflow” themes. For example, improving interpretation services or using technology to determine the ideal room setup can foster better communication. Additionally, participants envisioned technologies to aid note-taking and reduce appointment load, allowing more time to communicate with patients and mitigating bias when prescribing routine medications. Such ideas suggest the need to further explore outside factors that may create communication barriers. Another preventative measure participants described was integrating AI into the diagnostic process to mitigate implicit biases that can interfere with diagnostic accuracy25. This highlights the need for future interventions to explore how to support clinicians during patient interactions.

Further, some of the participants’ ideas may be feasible in the near term, while others are more speculative. Higher-feasibility ideas include interventions already in use, such as mock patient interactions, video review, and AI-assisted note-taking, as well as simple integrations such as EHR enhancements, patient feedback via tablets or surveys, and color-coded feedback. These ideas are already integrated into some healthcare systems or pose minimal implementation and privacy challenges. Lower-feasibility technologies include contact lenses that sense microexpressions, stress-sensing technology, clinician body cameras, and hologram-based conversation monitoring. Such technologies may require breakthroughs in biosensing integration, raise privacy concerns, and could face clinician or patient resistance.

The broader use of ambient intelligence in healthcare settings introduces additional ethical considerations. For example, listening technology that captures clinical conversations could risk public exposure of sensitive information, and patients might not be fully aware of how that data is used. Incorporating listening technology as a feedback mechanism introduces a new dynamic to the traditionally dyadic patient-clinician relationship, potentially impacting patient-provider trust and altering clinician autonomy. Moreover, clinician monitoring and reliance on AI tools could increase professional liability and raise questions about how responsibility is shared between clinicians and the technology being used26. Despite these concerns, ambient technologies are being rapidly adopted—particularly for automating clinical documentation—due to their potential to reduce clinician burden27. However, patient and clinician acceptance of conversational monitoring and AI integration will likely depend on addressing multiple factors. For example, patients acknowledge that AI models have access to diverse datasets but express concerns about whether models can provide personalized care that addresses unique patient needs and about the patient’s role in shared decision making. Further, patients express increased openness when they are clearly informed about how the technology will be used in their clinical encounter and how it will directly benefit them28. To increase clinician acceptance, it is important for the technology to be easy to use, seamlessly integrate into the current patient care workflow, and provide feedback in a nonjudgmental manner26.

This study has several limitations as well as strengths. While participants were at varying career stages, future research could engage medical students and more residents to provide additional perspectives on technology innovations. Additionally, conducting interviews over Zoom could have reduced emotional engagement compared with in-person interviews, which matters during reflection and creative ideation29. In future work, storyboarding or collaboration among providers in design workshops might enhance creative thinking. A strength of the study was engaging participants from different clinical settings across multiple states, including community-based clinics and clinics associated with academic health centers, which provided diverse perspectives.

Conclusion

Through the use of provocative design methods with primary care clinicians, we elicited clinicians’ perspectives on technology to address implicit bias in patient-provider communication. Participants proposed a wide range of ideas across six themes of envisioned technologies: patient communication monitoring, clinician self-awareness, systemic solutions, optimizing workflow, clinician education, and patient feedback. This study highlights the utility of provocative design methods in overcoming barriers to creativity during ideation with clinicians, while also contributing design considerations for future technology-based implicit bias feedback tools and the broader effort to improve patient-centered interactions.

Acknowledgments

This work was supported by National Institutes of Health Award Number #1R01LM013301. Recruitment for this study was supported by the National Center For Advancing Translational Sciences of the National Institutes of Health under Award Number UL1 TR002319. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

1. American Psychological Association. Implicit bias. Available from: https://www.apa.org/topics/implicit-bias.
2. Gopal DP, Chetty U, O’Donnell P, et al. Implicit bias in healthcare: clinical practice, research and decision making. Future Healthc J. 2021;8(1):40–48. doi: 10.7861/fhj.2020-0233.
3. Hall WJ, Chapman MV, Lee KM, et al. Implicit racial/ethnic bias among health care professionals and its influence on health care outcomes: a systematic review. Am J Public Health. 2015;105(12):e60–76.
4. Crump A, Al-Jorani MS, Ahmed S, et al. Implicit bias assessment by career stage in medical education training: a narrative review. BMC Med Educ. 2025;25(1):137. doi: 10.1186/s12909-024-06319-9.
5. FitzGerald C, Hurst S. Implicit bias in healthcare professionals: a systematic review. BMC Med Ethics. 2017;18:19. doi: 10.1186/s12910-017-0179-8.
6. Maina IW, Belton TD, Ginzberg S, et al. A decade of studying implicit racial/ethnic bias in healthcare providers using the implicit association test. Soc Sci Med. 2018;199:219–229. doi: 10.1016/j.socscimed.2017.05.009.
7. Gilson LL, Mathieu JE, Shalley CE, Ruddy TM. Creativity and standardization: complementary or conflicting drivers of team effectiveness? Acad Manag J. 2005;48(3):521–31.
8. Kim SH, Weaver SJ, Yang T, Rosen MA. Managing creativity and compliance in the pursuit of patient safety. BMC Health Serv Res. 2019;19:116. doi: 10.1186/s12913-019-3935-2.
9. Thornhill-Miller B, Camarda A, Mercier M, et al. Creativity, critical thinking, communication, and collaboration: assessment, certification, and promotion of 21st century skills for the future of work and education. J Intell. 2023;11(3):54. doi: 10.3390/jintelligence11030054.
10. Ozkaramanli D, Desmet PMA. Provocative design for unprovocative designers: strategies for triggering personal dilemmas. Proc. Design Research Society Conference. 2016;1:1–12.
11. Briggs P, Blythe M, Vines J, et al. Invisible design: exploring insights and ideas through ambiguous film scenarios. Proc. ACM Conference on Designing Interactive Systems. 2012:534–43.
12. Zoom Video Communications. Available from: https://www.zoom.com.
13. Van Schaik E, Howson A, Sabin JA. Healthcare Disparities. MedEdPORTAL. 2014. Available from: www.mededportal.org/publication/9675.
14. Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88. doi: 10.1177/1049732305276687.
15. ATLAS.ti. Available from: https://atlasti.com/.
16. LeBaron V, Boukhechba M, Edwards J, et al. Exploring the use of wearable sensors and natural language processing technology to improve patient-clinician communication: protocol for a feasibility study. JMIR Res Protoc. 2022;11(5):e37975.
17. Kaste M. Human reviewers can’t keep up with police bodycam videos. AI now gets the job. NPR. 2024.
18. Tierney A, Gayre G, Hoberman B, et al. Ambient artificial intelligence scribes to alleviate the burden of clinical documentation. NEJM Catal Innov Care Deliv. 2024;5(3).
19. Flanagan O, Cummings K. Standardized patients in medical education: a review of literature. Cureus. 2023;15(7):e42027. doi: 10.7759/cureus.42027.
20. Unruh K, Skeels M, Hartzler A, et al. Transforming clinic environments into information workspaces for patients. Proc SIGCHI Hum Factor Comput Syst. 2010:183–192.
21. Chen Z, Gibson J, Chiu M, et al. Automated empathy detection for oncology encounters. arXiv preprint. 2022.
22. Davoudi A, Malhotra K, Shickel B, et al. The intelligent ICU pilot study: using artificial intelligence technology for autonomous patient monitoring. arXiv preprint. 2018.
23. Alsahafi A, Newell M, Kropmans T. A retrospective feedback analysis of objective structured clinical examination performance of undergraduate medical students. MedEdPublish. 2024;14:251. doi: 10.12688/mep.20456.1.
24. Moller JE, Doherty E, Brogger MN. “Bring your worst”: residents’ perspectives on video review of challenging patient communication as a learning tool. PEC Innov. 2024;5:100322. doi: 10.1016/j.pecinn.2024.100322.
25. Hahn A, Gawronski B. Facing one’s implicit biases: from awareness to acknowledgment. J Pers Soc Psychol. 2019;116(5):769–794. doi: 10.1037/pspi0000155.
26. Martinez-Martin N, Luo Z, Kaushal A, et al. Ethical issues in using ambient intelligence in health-care settings. Lancet Digit Health. 2021;3(2):e115–e123. doi: 10.1016/S2589-7500(20)30275-2.
27. Stults C, Deng S, Martinez M, et al. Evaluation of an ambient artificial intelligence documentation platform for clinicians. JAMA Netw Open. 2025;8(5):e258614.
28. Hmido S, Rahim H, Ploem C, et al. Patient perspectives on AI based decision support in surgery. BMJ. 2025;7(1):e000365.
29. Shoshan H. Understanding “zoom fatigue”: a mixed-method approach. Appl Psychol. 2022;71(3):827–852.
